leroy_hoffenfeffer t1_jcd3tff wrote
Reply to comment by SgathTriallair in On the future growth and the Redditification of our subreddit. by Desi___Gigachad
My point is that there isn't anything we can do outside of that.
All of the innovation in this space has been, and will continue to be, captured by entities that don't have the everyday person's best interest at heart.
Given that corporate capture has happened and will continue to happen, it's hopelessly naive to suggest that these tools will be used for anything other than profit motives.
To suggest that an Artificial Super Intelligence, built on the tools that have been captured by corporations, will somehow end up benefiting the masses of society flies in the face of how those corporations act on a daily basis.
I invite anyone to look at any corporation's response to having its taxes increased. You can expect a similar, if not worse, response when it comes to getting corporations to use these tools benevolently.
As of right now, that will *never* happen. The government would need to step in and actually regulate in a meaningful way. The only way that happens is through politics, much to the dismay of everyone.
leroy_hoffenfeffer t1_jcd0nqe wrote
Reply to comment by SgathTriallair in On the future growth and the Redditification of our subreddit. by Desi___Gigachad
It will require a holistic exodus of establishment politicians who are bought and paid for by the corporations that run our society.
We'll need to axe Citizens United.
We'll need to increase support for Unions.
We'll need to double down on funding support systems, like Medicare, like Social Security, etc.
And, most importantly, we'll need to actually elect people who will fight for these types of things.
Without any of that happening, we're going to keep living in the same crony capitalist society we have now. And the people at the top of our society will use AI however they see fit. Full stop.
Thinking that benevolent usage of these tools will "just happen" tells me you're ignoring the objective reality we all currently live in.
leroy_hoffenfeffer t1_jccsbwo wrote
Reply to comment by Tall-Junket5151 in On the future growth and the Redditification of our subreddit. by Desi___Gigachad
> The singularity wouldn’t be the narrow scope you envision, where you have the rich or elites controlling AI to suppress the rest of the population. It’s not going to be some modern version of 1984, it’s going to be a world completely unpredictable and unimaginable, out of the control of any human, be they “elites”, “rich”, or whatever. ***It would be at the complete mercy of ASI.***
ASI is going to inherently be built upon the work in deep learning that predates ASI's creation. ASI is thus going to be inherently owned by those who control the models, data, and methods that enable ASI to exist. The people who own those models, data and methods are the ruling class of the world, as exemplified by Microsoft's wholesale purchase of OpenAI and its assets.
> Optimists of the singularity believe there’s the potential for the singularity to create a post scarcity utopia, where life is essentially heaven on earth.
What world do you live in exactly? The only way a post-scarcity world exists is if everyday people don't have to worry about how to put food on the table, in conjunction with most everyday jobs being automated away. We're approaching the latter half of that statement and are nowhere near the former. If the elites have a way to make a little extra off the top, they're going to do it, and if you think they'll magically become altruistic overnight, that's hopelessly naïve.
> Relating it to modern politics is irrational, which is where subs like Futurology have gone wrong. Mostly every post there gets flooded with “things are bad in this very narrow timeframe that I live in so they will bad in the future because the world apparently never changes”.
The world has yet to change in any meaningful way, so opinions such as those are totally sound and valid. Keeping politics in mind with respect to this subject is thus of utmost concern: if the people creating laws and legislation are bought and paid for by the ruling elite, we shouldn't expect those new laws and legislation to be beneficial for the everyday person. Very few things in the past twenty years have been aimed at helping everyday people.
That will not change any time soon, and these new tools are only going to be used to displace large portions of the workforce in order to save money. Money which will be used for stock buybacks and raises and bonuses for upper management.
leroy_hoffenfeffer t1_jccfpl2 wrote
> We can and need to actively work towards minimizing doomerist attitudes.
> Doomerism does not lead to anywhere, it only makes one give up all hope on living, it makes one irrationally pessimistic all while paralysing the ability to see reason, paralysing the ability to work towards a better future, a better life.
So you want to censor those of us who are advising heavy caution when adopting these tools, inherently made by those who control the levers of power?
Sounds like you don't really know much about the current state of politics, or the people who drive it. News flash: the people in power are bought and paid for by corporations that don't give two shits about the bottom 90% of the world.
Censoring opinions like these is literally 1984 shit. "Do not let your lying eyes deceive you."
leroy_hoffenfeffer t1_j5sad3g wrote
Reply to Future-Proof Jobs by [deleted]
No one knows.
Previous Industrial Revolutions brought with them tons of automation... and out of that came new jobs and industries.
We're still scratching the surface of Machine Learning's potential. With what we have now, we can imagine Art and other creative works being done by machine, something that was previously thought to be impossible.
Most people (including me) might say Blue Collar work like plumbing or electrician-based work. While stuff like that is a safe bet, it's also worth pointing out that that kind of work assumes humans are still doing the design work for plumbing systems and construction. If A.I replaces design work for those jobs, it could also be within reason to develop other machines to automate the hard labor away. Think of those 3D House Printing robots: no humans required to build those houses. Why wouldn't someone take the next step and automate the maintenance jobs away too? After all, humans are fallible; machines, if programmed correctly, are not. A machine would be the best entity to repair a machine-designed house.
I could also say something like IT work or programming. Despite what people would say here, we're still at least fifteen years away from replacing those kinds of jobs. Most work done in that space comes from other humans needing engineers to design solutions for difficult, human-based problems. The old joke is "Good luck getting human beings to describe their problems well enough for a machine to comprehend and solve them." The punch line being that humans are awful at telling other humans what they want; a machine would need a perfectly understandable problem in order to solve it.
But even this is hard to predict. ChatGPT is pretty good at what it does, and it can write basic programs for basic problems. Will more advanced versions be able to suss out the nuance from a human enough to program a solution? That's yet to be seen.
I can tell you first hand that Animation as a tool is an area of active automation research. Hand animating is hard, time consuming, and expensive. It's in everyone's best interest to automate that stuff away. But automating that is also very difficult: you have to inherently rely on Graphics Engines that operate in specialized, usually memory-light, environments. You can't employ usual Machine Learning methods here because those tools are runtime intensive. From a Video Game perspective, where frame rate is potentially the most important thing, that's a nonstarter: if an ML tool that automates animation causes significant lag, it will be tossed by the wayside.
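To make the frame-budget point concrete, here's a rough sketch. The numbers (a 60 fps target, ~12 ms assumed for rendering/physics/game logic, the inference times) are made-up assumptions for illustration, not real engine figures:

```python
# Rough illustration of the frame-budget constraint on runtime ML, with
# made-up numbers: at 60 fps the *entire* frame has ~16.7 ms to finish.

TARGET_FPS = 60
FRAME_BUDGET_MS = 1000.0 / TARGET_FPS  # ~16.7 ms per frame

def animation_model_fits(inference_ms: float, other_work_ms: float = 12.0) -> bool:
    """True if ML-driven animation still fits inside the frame budget.

    `other_work_ms` is an assumed cost for rendering, physics, and game logic.
    """
    return inference_ms + other_work_ms <= FRAME_BUDGET_MS

print(animation_model_fits(8.0))  # False: 8 ms of inference blows the budget
print(animation_model_fits(2.0))  # True: small enough to maybe ship
```

Even a few milliseconds of per-frame inference is a big ask once everything else the engine has to do each frame is accounted for.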
This topic is all nuance. You would really need to deeply dive into each particular career path in a novel way in order to get grounded answers to your general question.
leroy_hoffenfeffer t1_j57k7rk wrote
Reply to When you imagine the future of technology, is it grim or is it hopeful? by ForesightInstitute
Technology is inherently controlled by entities that don't have the average citizen's best interest at heart.
So, mostly grim, unless some serious reform / revolution occurs that pries technology away from said entities.
leroy_hoffenfeffer t1_j4wvkoo wrote
Reply to comment by Arcosim in Getty Images is suing the creators of AI art tool Stable Diffusion for scraping its content by nick7566
A few issues with this thought process:
- Even if folks were to retroactively add or edit robots.txt files to disallow scraping, that does nothing to address the content already scraped and downloaded (see the sketch at the bottom of this comment). So the aspect of LAION downloading potentially copyrighted works is still in play.
- I think it's an extremely flaky argument to say "Well, those artists should have edited their robots.txt files to disallow the thing they didn't know was happening." It's a very real possibility that the artists in question didn't even know this kind of scraping was happening, let alone that there was something they could do about it. I'm not sure a court would view that argument as sound.
- I think LAION is a European company, which is relevant because of their FAQ:
> If you found your name only on the ALT text data, and the corresponding picture does NOT contain your image, this is not considered personal data under GDPR terms. Your name associated with other identifiable data is. If the URL or the picture has your image, you may request a takedown of the dataset entry in the GDPR page. As per GDPR, we provide a takedown form you can use.
So, LAION is beholden to GDPR terms. I think the potential exists for someone to ask, "Well... if my picture and my data are considered personal data, why isn't the content I produce also considered personal data?" Current GDPR guidelines work the way that FAQ describes, but I think we may end up seeing edits or rewrites of them given cases like this.
It's neither reasonable nor sound to say "Artists should have taken this very technical detail into account in order to protect their work."
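To illustrate why the robots.txt point matters: a well-behaved crawler checks the file *before* fetching anything, so editing it after the fact does nothing for content that has already been downloaded. A minimal sketch, with example.com standing in for a hypothetical artist's site and "MyCrawler" as a hypothetical user agent:

```python
# Minimal sketch of how a *well-behaved* scraper consults robots.txt before
# fetching. The domain and user agent below are placeholders, not real ones.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser("https://example.com/robots.txt")
rp.read()  # fetch and parse the site's robots.txt

url = "https://example.com/gallery/painting.jpg"
if rp.can_fetch("MyCrawler", url):
    print("robots.txt allows fetching:", url)
else:
    print("robots.txt disallows fetching:", url)
```

Nothing in that flow helps with images that were downloaded before a Disallow rule was added.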
leroy_hoffenfeffer t1_j4wmjso wrote
Reply to comment by Arcosim in Getty Images is suing the creators of AI art tool Stable Diffusion for scraping its content by nick7566
I know how they obtained URLs using CommonCrawl. CommonCrawl isn't the issue.
CommonCrawl only returns URLs. LAION had to take those URLs and download the content hosted at each one.
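Roughly, the two steps look like this (a hypothetical sketch, not LAION's actual pipeline; "urls.txt" is an assumed input file):

```python
# Hypothetical two-step sketch: a crawl index only yields URLs (step 1);
# downloading the content behind them is a separate, deliberate act (step 2).
import os
import urllib.request
from urllib.error import URLError

with open("urls.txt") as f:                 # step 1: URLs from a crawl index
    urls = [line.strip() for line in f if line.strip()]

for url in urls:                            # step 2: actually fetch the content
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            data = resp.read()
    except (URLError, ValueError):
        continue                            # skip dead or malformed links
    name = os.path.basename(url) or "index.bin"
    with open(name, "wb") as out:
        out.write(data)
```

Step 1 on its own is just collecting pointers; step 2 is where potentially copyrighted content actually gets copied.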
leroy_hoffenfeffer t1_j4w7e4v wrote
Reply to comment by [deleted] in Getty Images is suing the creators of AI art tool Stable Diffusion for scraping its content by nick7566
That's definitely something that will be brought up in court.
From the layman's perspective, it would seem that Midjourney, etc. are no longer operating as research outlets and are instead offering a commercial product. Corporations are surely treating it like a commercial product at least.
leroy_hoffenfeffer t1_j4w0ixj wrote
Reply to comment by broadenandbuild in Getty Images is suing the creators of AI art tool Stable Diffusion for scraping its content by nick7566
We'll see how that plays out legally with respect to artwork.
Just because that's how things function currently doesn't mean it's going to end up being legal to do so in the future.
leroy_hoffenfeffer t1_j4vv3of wrote
Reply to comment by broadenandbuild in Getty Images is suing the creators of AI art tool Stable Diffusion for scraping its content by nick7566
Scraping URLs is not illegal, no.
Taking those URLs and downloading the images on/in those URLs is a different story though. Collecting URLs is benign. Extracting information from those URLs may not be though.
leroy_hoffenfeffer t1_j4vusek wrote
Reply to Getty Images is suing the creators of AI art tool Stable Diffusion for scraping its content by nick7566
I'm not sure why the art tools themselves are being targeted and not the dataset developers like LAION.
Stable Diffusion, Midjourney, etc. are just using datasets from LAION. I would think the buck stops with LAION in terms of getting permission / rights to use image URLs in their datasets.
I don't think it's fair to target the A.I developers themselves in this case - to some extent, they're more like users of those datasets.
leroy_hoffenfeffer t1_j2fd5a7 wrote
Full steam ahead, I guess.
It seems that the human race is going to quickly end up in a Cyberpunk or Blade Runner future.
If we don't consider ethical adoption of technology, the elite will control everything, and everyone else will just have to suck it.
leroy_hoffenfeffer t1_j1zs373 wrote
>“The things that agencies should be doing is beyond experimenting with this; they should be calculating now what it means for their business,” Curtis said. “It is a tool humans will use to kickstart creative thinking or to create the base level of something, which they then adapt continuously, or to move more quickly. … It is not an answer to everything, but it does radically shift the economics of a lot of what we do in creativity.”
Translated: "We're going to replace our creative workers with AI tools."
> Zach Kula, group strategy director at BBDO, said the industry should be thinking about this tool less in terms of how it could replace humans and more about the various ways it could revolutionize how creatives do their jobs.
Lol, this guy is either hopelessly naive, or just has no actual idea what businesses are going to do with these tools. Business owners don't really care about how creatives do their job: the end result is all that matters. If owners think they can save money and just use these tools instead of using creative experts, they'll absolutely just give using the tools a try. Worst comes to worst, you can just hire someone else if solely using the tools doesn't pan out.
> “In my mind, it doesn’t appear that many of the people commenting on this have even used the tool,” Kula said. “If they did, it would be obvious it’s not even close to replacing creative thinking. In fact, I’d say it exposes how valuable true creative thinking actually is. It puts the difference between original creative thought and eloquently constructed database information in plain sight.”
I mean again, business owners don't really care about "true creative thought" for the most part. Nowadays, it's all about catchy headlines, clicks and who publishes first.
This article downplays or ignores the actual concerns people have, makes businesses seem more practically ethical than they actually will be, and undersells how disruptive the tech will be once fully matured.
leroy_hoffenfeffer t1_j1wriu5 wrote
Reply to comment by Frumpagumpus in Concerns about the near future and the current gatekeepers of AI by dracount
>I am just pointing out it's all been re hashed over and over already
Has it though? Capitol Hill seems totally and completely incapable of talking about technology in any meaningful way.
Joe Schmoe on Reddit can't do anything to affect national discourse: if the people who matter aren't discussing this, then the conversation has not been had yet.
leroy_hoffenfeffer t1_j1wd4g0 wrote
Reply to comment by Frumpagumpus in Concerns about the near future and the current gatekeepers of AI by dracount
I'm not sure what point you're trying to make here.
leroy_hoffenfeffer t1_j1w3crl wrote
Reply to comment by Relative_Purple3952 in Concerns about the near future and the current gatekeepers of AI by dracount
>I have com to terms with the fact that you are either an accelerationist and just want to get "it" done or a neo-luddite.
I want to consider the societal application of AI safely and ethically, and thus I'm a neo-Luddite who's totally against adopting any technology at all?
leroy_hoffenfeffer t1_j1vz5ls wrote
>If the job losses reach such high numbers this will cause massive social disruptions, likely ushering in the fall of capitalism to be replaced by something like a Cyberocracy (a government run by AI) or socialist or communist ideologies, with the potential of AI to accommodate the basic needs of the population (food, water, electricity, etc).
If you were to take the vast majority of comments made in this subreddit at face value, you'd walk away thinking most people here would be completely fine with that. I'm glad there are people taking the consequences of the Fourth Industrial Revolution seriously.
>Can and should we give over the autonomy of our governments to the AI? Governed by pure logic and calculations? Unable to understand emotions or empathy? On the other hand it may be able to make many better decisions then our politicians can. Without bias, prejudice, corruption and self interested motivations? China already have AI "advising" every court ruling.
All great questions. Unfortunately, the global political class (outside of Europe in very specific circumstances) is wholly unequipped to provide any answers. Most US congressmen barely understand how the internet works, let alone anything more complicated like Machine Learning. Those questions also assume people in power care about Ethics and Philosophy as they relate to technology, which... is very questionable at best, and totally incorrect at worst.
>This technology is going to change everything and I hope there are people out there thinking about these sorts of things. And more than that, it is moving forward far faster than we have the capacity to think about.
Again, if you take the vast majority of comments made in this subreddit at face value... then we're in for a world of hurt, seeing as most people don't seem to care at all and actively want to usher in massive social upheaval by adopting AI across the board, willy-nilly.
>don't know the answer, but the ones currently creating the AI make me very concerned about the future.
I don't know the answer at the moment either, and your fears are completely sound: most people don't think about this stuff at all.
leroy_hoffenfeffer t1_j1ipur1 wrote
Reply to comment by AndromedaAnimated in If your opinion is "it's good because it's AI," you're not really thinking very far ahead. by OldWorldRevival
>So you think that to know anything about AI you have to be in STEM? Is it because you are in STEM? Or why do you think that?
I wouldn't put it like that. What I would say is "If you want to speak about AI in any serious way, you should spend a good chunk of time understanding how it actually works, not how you think it works."
Based on my conversations yesterday, it doesn't seem like many people in this sub actually know how AI works.
For instance, with respect to the art tools, the AI is not 'inspired' by the artists it's been trained on. That's not how it works, that's not what's happening, so opinions espousing this viewpoint are ones I immediately dismiss.
leroy_hoffenfeffer t1_jcd8bp8 wrote
Reply to comment by Frumpagumpus in On the future growth and the Redditification of our subreddit. by Desi___Gigachad
Mmk. Have fun with whatever future Libertarian, profit motivated AI results from not taking politics into account.