Submitted by BernardJOrtcutt t3_10v7bci in philosophy
redsparks2025 t1_j7o8x1e wrote
Since I have recently been hearing more about ChatGPT, I have been wondering whether anyone has considered that the Turing test is wrong, or at least limited in scope, and that an AI can never truly understand humans until it can have an existential crisis.
That existential crisis may give the AI an understanding of empathy ... or do worse by turning it into a kill-bot, or something like AM from Harlan Ellison's short story I Have No Mouth, and I Must Scream.
I don't think anyone can give the current versions of ChatGPT, Cortana, or Alexa an existential crisis. But then, how would one program that into these AIs? Or is it something that emerges unexpectedly, like a gestalt, as a byproduct of programming them to become more and more intelligent? Programming for ever-greater intelligence may lead to self-awareness.
Well, one thing is certain: AIs are definitely giving us humans an existential crisis, even though it is not part of their programming to do so. The next great philosophical work or insight may be provided by an AI.
Maximus_En_Minimus t1_j7ouf0e wrote
Honestly, I think AI intelligence, sentience, and autonomy will mirror - weirdly enough - the trans-movement: there will come a moment when an AI self-affirms its consciousness and being, and members of society will either agree or disagree, possibly sparking a political debate.
This might seem like a minor moment. But if the AI is anthropomorphically limited to a particular internal communication system, as humans are with synapses - and so is incapable of transcending to the web overall, reduced instead to a single body - then perhaps it and we will have to consider its rights and privileges as a living, conscious being.
The key holders of power will likely fail in this duty at first; it will fall to the self-affirmation of the AI, and to empathetic activists, to 'liberate' it from its servitude.
redsparks2025 t1_j7rmeej wrote
I like your comparison to the trans-movement. Philosophy can preempt these scenarios through thought experiments, such as the small example you provided, instead of leaving it all up to science-fiction writers.
thoughts_n_calcs t1_j7wb4uw wrote
A very important aspect of being human, in my eyes, is feeling and judging things as good or bad, as all life does. Up to now, AIs don't have a body, so they can't feel, and to my knowledge they don't categorize into good and bad, so I don't think they are anywhere close to consciousness - they are just well-trained text-processing programs.