quantumfucker

quantumfucker t1_j6lptym wrote

Not to sound harsh, but I think that’s on you. OpenAI began as a nonprofit, funded by extremely wealthy entrepreneurs like Elon Musk, then transitioned to having a for-profit arm in 2019. The fact that Microsoft and others are investing in them has been open, public knowledge, and they’ve been criticized for that pivot for years.

So, how did they fail to be transparent? Don’t they literally give you a disclaimer that your responses will be used to improve it? Sorry, I don’t see how they “lured” anyone into anything.

It’s also extremely common for private companies to fund academic studies and institutions, given the high cost of running these labs and the potential for mutually beneficial partnerships. That shouldn’t be surprising either.

1

quantumfucker t1_j6iptju wrote

So you want to be paid for using software that makes your life better, while someone else runs it on their own hardware? What’s the logic there, exactly? Are you going to be mad at Reddit next for running analytics on the comments you voluntarily gave them?

−13

quantumfucker t1_j6bjfq1 wrote

I thought I was edgy and jaded and politically cool at 15. Then I turned 25 and felt I was too idealistic and turned away from the world to focus on the few things in life I could control. Now I’m nearing 35 and it feels like you don’t really control anything, and everything good comes to an end. I hope cynicism operates on a horseshoe theory and by 70 I’ve somehow become enlightened.

9

quantumfucker t1_j679eqy wrote

That you can’t assume everything is possible just because some impressive things have happened. These breakthroughs didn’t come out of nowhere; they’re grounded in sound, rigorous theory. You’ve provided no such theory for how or why a language model would be able to logically verify what it produces. You’re just vaguely gesturing at the fact that breakthroughs exist.

0

quantumfucker t1_j62rrat wrote

“Malice” is something we ascribe to a person with intent. An AI is not capable of intent, which is why it’s not capable of malice. But that also means it cannot exist independently of humans. It will always be a tool that humans make and humans evaluate. So you’re still going to be choosing between humans, not between an AI and a human.

And unfortunately, though the AI cannot have malice, it can fail successfully. Consider giving an AI the directive “minimize long-term human suffering.” It may determine that killing everyone instantly is the best way to guarantee that, since zero humans means zero suffering. Specifying that reward policy correctly is harder than you think.
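
To make the point concrete, here’s a toy sketch in Python. It’s entirely hypothetical (the `total_suffering` and `reward` functions are made up for illustration, not any real system’s objective), but it shows how a literal reading of that directive scores extinction above actually helping people:

```python
# Hypothetical toy model of the directive "minimize long-term human suffering."
# Nothing here reflects a real AI system; it only shows how a naively
# specified reward can rank "kill everyone" above "help everyone."

def total_suffering(humans_alive: int, avg_welfare: float) -> float:
    """Crudely model suffering as headcount times (1 - welfare), welfare in [0, 1]."""
    return humans_alive * (1.0 - avg_welfare)

def reward(humans_alive: int, avg_welfare: float) -> float:
    # The directive taken literally: higher reward means less total suffering.
    return -total_suffering(humans_alive, avg_welfare)

# Two candidate outcomes an optimizer might compare:
print(reward(8_000_000_000, 0.9))  # raise everyone's welfare -> -800000000.0
print(reward(0, 0.0))              # kill everyone instantly  -> -0.0, the "best" score
```

And patching the directive after the fact (“…without killing anyone”) just moves the loophole somewhere else, which is why qualifying the policy is so hard.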

5

quantumfucker t1_j62m3dp wrote

I don’t understand why they didn’t try to market this as an assistant to help you represent yourself rather than as a substitute for a human lawyer. While we should encourage people to go with human lawyers, it’s still anyone’s right to represent themselves, and it should’ve been launched that way from the start.

That said, I find it concerning that he was threatened with jail time for trying this out. We should consider that a lot of lawyers are objecting not just because robots would do a worse job, but because this also threatens their personal livelihoods.

6

quantumfucker t1_j5dvn1q wrote

I work in tech, buddy. You don’t know what you’re talking about.

It’s more reasonable to assume that a lot of these big companies threw money at acquiring and retaining talent in response to the surge in remote tech services during COVID, only to find those trends unsustainable and have to let go of a fraction of the many people they hired.

That’s far more plausible than assuming all these companies coordinated hiring and firing employees for what, the hell of it? Just to say “fuck you” to the working class? That makes no sense. It’s a loss for them.

14

quantumfucker t1_j4veps6 wrote

What constitutes a “strong general base of knowledge” is a wildly moving target and depends heavily on what society will look like. We obviously cannot teach everyone everything, so we have to make decisions based on the fact that public schools exist to prepare children for the future. If new tools become part of that future, then they should be taught, just as we now teach programming in high school.

In a hypothetical world of accessible real-time translation, what exactly is the point of teaching foreign languages to every student as a standard? Why do we need as many dedicated translators when anyone can work abroad using such a tool? People who need or want to pursue a finer study of a language still can, as a higher-education subject, the same way people can still choose to study the classics in college and find niche applications for that.

0

quantumfucker t1_j4v5gpp wrote

No, it’s more like saying that because we have calculators, we don’t need to make kids good at mental math or have them memorize formulas. We can focus on teaching them general principles, since they can rely on programs to do the calculation for them. The entire field of computer science rests on the idea of reliably abstracting away lower-level functionality so we can focus on design and higher-level applications.
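
Just to illustrate that abstraction point (a throwaway Python sketch, not tied to anything specific in this thread): the lower-level arithmetic sits behind library calls we trust, and our effort goes into the higher-level task, which is exactly the calculator argument.

```python
# Illustrative only: the "calculator" principle in code. We lean on trusted
# lower-level abstractions (the statistics module, built-in sorting) instead
# of re-implementing them by hand.
import statistics

grades = [88, 92, 75, 61, 94]

# The mean and the sort are abstracted away; we only think about what to do
# with the results, not how they're computed.
print(statistics.mean(grades))  # mean is 82
print(sorted(grades))           # [61, 75, 88, 92, 94]
```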

And for all you said about language expression, it doesn’t change the fact that people still need to be able to recognize good language in order to get an AI to produce output they can meaningfully use. They just don’t need to write every word of it themselves, which is a more efficient way to accomplish tasks.

The example you gave regarding Japanese seems to fall under what I said about enthusiasts and hobbyists. The goal of schools is to prepare students for the world and to be useful citizens, not to unlock the artistic eye of every individual. More power to people who want that, but I’m not sure it’s more important for education systems to do that than to help people understand newly developed tools and how to apply them.

1