
JenMacAllister t1_jea4t4n wrote

I see this as making things more productive rather than replacing jobs.

Consider: it took 40 people 8 years (or so) to make and release Cyberpunk 2077, bugs and all, because it shipped before proper testing. With current AI's help, I would bet it would have taken those same 40 people far less time, with far fewer bugs.

Yes, some people will no longer have jobs, due to the way you can do more with less with these AI systems. But the productivity improvements mean those people will be doing other things, and we would have had Cyberpunk 2078, released in full VR, the very next year.

1

JenMacAllister t1_je9whgk wrote

Even if China and Russia sign this and don't continue past a GPT-4 level, that will only mean they catch up to where the West is now. Also, their AIs will be trained on their respective countries' internets, which means they will carry their countries' biases, just like the ones we will be training in the West.

China's AIs will never know Tiananmen Square happened, will think a surveillance state is OK and that Taiwan is part of China, among other things. We can only guess at what the AIs in Russia will think of the people in Ukraine, etc.

Yes, the West's AIs will also have the bias issues we are seeing now, the very ones these guys are telling us to watch out for.

However, the answer is not to stop research but to get these things into the open as soon as possible. The sooner they are beta tested by real people, the better chance we will have of controlling them; and the sooner we test, the less connected these things will be to our world.

We currently have the lead in this research and can shape these things before China or Russia can, because you know they will not. Not that I'm confident the West will do it right, but I do know more people will have a chance to say when something is wrong and to weigh in on how these things should be connected to our world.

5

JenMacAllister t1_jdx1l55 wrote

Simple: have an AI create an app where people can send short messages to spread false or misleading information and to bully or harass other people. Then allow any number of bots, controlled by a small group of people with an agenda, to manipulate the positive or negative feedback. Then have the AI step out of the way and let humanity destroy itself through conspiracy theories and really bad memes.

1

JenMacAllister t1_j743q7c wrote

I agree. It's the same way doctors would use AI to diagnose patients: the AI can draw on the entirety of human medical knowledge to make its suggestions. There's no reason lawyers and judges couldn't do the same right now.

Over time the AI could earn more and more trust, to the point where we might give up on those people and just listen to the AI.

3

JenMacAllister t1_j741tz5 wrote

Yes it did. Anything created by humans will contain the biases of those humans. However, others will recognize this and point it out so it can be removed in future versions.

I don't expect this to be 100% unbiased on the first or even the 100th version. I don't think all the humans on this planet could even agree on what that would mean.

But over time I'm sure we could program an AI to be far less biased than any human, and most humans would agree that it was.

−1

JenMacAllister t1_iy40rka wrote

Anyone predicting anything more than 3 months out is simply going to be wrong. We can't see the future any more than we can change the past.

It's been 7 years since I was supposed to get my hoverboard! Still waiting.

0