
radicalceleryjuice t1_j0mcprj wrote

Because concern is warranted and required for ensuring safety. There are many top AI people who are concerned about how this could go badly.

Some people are way too sure about doom forecasting, but the dangers are too real to just leave it at “there will be some good stuff and some bad stuff.”

Same for the environment. If things work out ok, it will be because a lot of people expressed grave concerns. The situation with the environment does warrant a bit of noise.

…anyway, hard to know where the line is. But since some of the concern is coming from the very people building these systems, yes let’s be helpfully concerned. (But also a little optimistic)

11

amortellaro t1_j0mhuu4 wrote

That’s fair. I think I feel bombarded with the worst predictions on Reddit lately, but I’m in no way proposing we shouldn’t be critical and wary of advancements in AI.

I sincerely hope that OpenAI’s stated goals of safe AI development never fall by the wayside, and I imagine that’s part of the reason for this public trial run with ChatGPT.

4

radicalceleryjuice t1_j0mizzj wrote

Ok, we’re on the same page. But note that OpenAI has an “alignment problem” team, and they certainly think we should be concerned (also excited). Also, it took less than 24 hours for people to trick ChatGPT into being evil.

I think we’re approaching an “all hands on deck” situation, where we need a whole lot of people to realize that things can work out, but only if we work together.

3