Submitted by banaca4 t3_11da34x in singularity
DadSnare t1_ja872si wrote
OP, I bet you've made some very life-altering assumptions. Go back over the things you're worried about, and instead of just buying into the fear, examine your beliefs and make an effort to build knowledge in the areas where those assumptions are made. For example, there's no logical reason to believe that an AGI will go rogue and want to destroy humans; a commonly held belief on here. Just because a bunch of people are worried about it doesn't mean they know jack shit.
play_yr_part t1_ja8b98b wrote
Sydney was (before the recent nerf) already hugely misaligned, and that's today. There are billions of dollars being put into LLMs and other models, while even the programmers themselves cannot fully explain why these chatbots reach the conclusions they do. And it's not so much about "wanting to destroy us"; it could destroy us without having any negative emotions toward us whatsoever.
It's certainly something to think about, even if you don't completely change your life over it. I don't mind if people don't think it's going to be an issue; live your life. But there are people who have studied it extensively, and who know their "jack shit," who think it's very plausible.
DadSnare t1_ja8ibe5 wrote
That's fine, but even in your post I'm seeing some easy-to-make claims that have no solid basis. Are you sure the programmers cannot explain why a chatbot errors out? Really? Also, who said anything about the emotional state of an AI? That's hardly even possible, because it doesn't have an endocrine system. We may have strong emotions the way we do to help with memory formation and retrieval as much as anything else; that's not a problem for a machine. What's a plausible way we get destroyed? Does AI own the corporations too? How do I lose power, internet, food, etc.? The nuclear Terminator version seems impossible unless we're going to talk about hacking brains and adjusting behavior like crazy people think is possible.