
ground__contro1 t1_jd17jet wrote

Btw it’s a terrible source. It can easily be wrong about established facts. Last week it tried to tell me Thomas Digges posited the existence of alien life. Digges was a pretty early astronomer, working back when the church was still dominant, so that claim really surprised me. When I questioned it, it “corrected” itself and apologized… which, great, but if I hadn’t already known enough about Digges to be suspicious, I would have accepted that claim along with all the other (correct) information it gave me.

ChatGPT is awesome, but it’s no more a source than Wikipedia. In fact, it’s potentially worse, because nobody is fact-checking what ChatGPT tells you in real time, whereas there’s a chance others will have corrected a wiki page by the time you read it.

1

User1539 t1_jd2la65 wrote

Oh, yeah. I've played with it for coding, and it told me it did things it didn't actually do, and it couldn't read back the code it had produced afterward, so there was no good way to 'correct' it.

It spits out a lot of 'work', but it's not always accurate, and people who are used to computers always being correct are going to have to adjust to the fact that this is really more like having a personal assistant.

Sure, they're reasonably bright and eager, but sometimes wrong.

I don't think GPT is leading directly to AGI or anything, but a tool like this, even when it's sometimes wrong, is still going to be extremely powerful.

When you see GPT passing law exams and things like that, it's not getting perfect scores, but it's still probably more likely to pull up the right example of case law than a first-year paralegal, and it does it instantly.

Also, in 4 months it's improved on things like the bar exam the way you'd expect a human to improve over 4 years of study.

It's a different kind of computing platform, and people don't quite know how to take it yet, especially people used to the idea that computers never make mistakes.

2