Submitted by Destiny_Knight t3_115vc9t in singularity
RiotNrrd2001 t1_j9bmlb2 wrote
I personally couldn't care less if it's "intelligent" or not. My own concern is mainly whether what comes out of it is useful or not. Whether a conscious mind produced that output or whether it was the result of a complicated dart game is, as far as I'm concerned, an interesting question. But a more important question, at least for me, is: is what it produces useful? It's less academic, and somewhat more objective. I can't tell if it's conscious. I CAN tell whether it's properly summarized a paragraph I wrote into a particular format, or whether the list of ideas I asked it for is worth delving into. I can't evaluate its conscious state, or even its level of intelligence, but that doesn't mean I can't evaluate its behavior, and I have to say that in those areas where factual knowledge isn't as necessary (summarizing text, creating outlines, producing lists of ideas, etc.) it behaves in a usefully intelligent way. Does that mean it IS intelligent? To an extent, to me at least, that may not even matter except as an academic thought.
I almost want to look at these systems from a Behavioral Psychology point of view, where internal states are simply discounted as irrelevant and external behavior is all that matters. I don't like applying that to people, but it does seem tempting to apply it to AIs.
ChatGPT is not a calculator; it's more like a young, well-educated but inexperienced intern who wants to do a good job, but who still makes mistakes. I understand that I have to check ChatGPT's work. I can work with that.