was_der_Fall_ist t1_ir6py3g wrote
Reply to comment by [deleted] in "The number of AI papers on arXiv per month grows exponentially with doubling rate of 24 months." by Smoke-away
This is a measure of papers about AI, not papers written by AI. The chart goes back to the 1990s, when certainly no papers were being written by AIs. Even today, language models are not reliable enough to write scientific papers.
Cryptizard t1_ir76587 wrote
There were lots of AI articles in the '90s, just not on arXiv. You could plot papers in general on arXiv and the curve would look exponential.
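To put a number on "exponential": if the doubling time really is 24 months, the monthly count follows count(t) = count(0) * 2^(t/24), so you can recover the doubling time by fitting a line to log2 of the counts. A minimal sketch in Python (the counts here are synthetic, purely to illustrate the fit, not real arXiv data):

```python
import numpy as np

# Synthetic per-month paper counts (illustrative only), generated
# with a built-in doubling time of 24 months.
months = np.arange(0, 72, 6)           # months since start of window
counts = 100 * 2 ** (months / 24.0)

# Fit a line to log2(counts): the slope is doublings per month.
slope, intercept = np.polyfit(months, np.log2(counts), 1)
doubling_time = 1.0 / slope            # months per doubling

print(f"Estimated doubling time: {doubling_time:.1f} months")  # ~24.0
```

On real data the points would scatter around the line, but the same log-linear fit gives the doubling rate quoted in the title.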
was_der_Fall_ist t1_ir7d9tn wrote
I’m saying there were no papers written by AIs in the 1990s. There were, of course, papers about AI.
Cryptizard t1_ir7do9o wrote
Oh, sorry, I gotcha.
Artanthos t1_ir6s62n wrote
was_der_Fall_ist t1_ir6t2wi wrote
This is unrelated to the chart in the OP’s post. Anyway, despite one person writing a paper with GPT-3, language models really aren’t reliable enough right now to write scientific papers, and they certainly weren’t from 1994 to 2020. Maybe GPT-4.
Artanthos t1_irbdxxm wrote
“This cannot be done.”
Example provided showing it has already been done.
“That doesn’t count, it still cannot be done.”
was_der_Fall_ist t1_irbjdoj wrote
There are a few points to make here. First, I’d like to make clear that I’m extremely optimistic about the development of AI, and that I think language models like GPT-3 are incredibly impressive and important. I use GPT-3 regularly, in fact. So I’m not just naysaying the technology in general.
Second, as far as I can tell, the paper by Thunström and GPT-3 has not been peer-reviewed and published in a journal. It has only been released as a preprint and “awaits review.”
Third, even if GPT-3 were perfectly capable of writing scientific papers, that wouldn’t bear on the overall point of my comments, which was that the chart in the OP’s picture measures the number of papers written about AI, not written by AI.
Fourth, the paper, entitled “Can GPT-3 write an academic paper on itself, with minimal human input?”, is… strange. Even setting aside its “meta” nature, in which the subject matter is the paper itself, it exhibits exactly the kinds of flaws that make GPT-3 unreliable. For example, the introduction opens by claiming that “GPT-3 is a machine learning platform that enables developers to train and deploy AI models. It is also said to be scalable and efficient with the ability to handle large amounts of data.” This is a terrible description of GPT-3, which is, of course, a language model that predicts text, not a machine learning platform for training and deploying AI models. Classic GPT-3: great style, pathological disregard for reality. With factual inaccuracies like this, I doubt the paper would be published in a respected journal the way, say, DeepMind’s research is published in Nature.
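For concreteness about that distinction, here is a minimal sketch of using GPT-3 as what it actually is, a text predictor, via the legacy OpenAI completions API (the engine name, prompt, and parameters are illustrative, not anything from the paper):

```python
import openai  # legacy (pre-1.0) openai-python client

openai.api_key = "sk-..."  # your API key here

# GPT-3 is a language model: given a prompt, it predicts a plausible
# continuation token by token. It is not a platform for training and
# deploying models.
response = openai.Completion.create(
    engine="text-davinci-002",  # a GPT-3 model available at the time
    prompt="GPT-3 is",
    max_tokens=40,
    temperature=0.7,
)

print(response.choices[0].text)
```

Nothing in that loop of predict-next-text involves training or deploying anyone’s models, which is exactly why the paper’s self-description is wrong.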
I’m hopeful that future models will correct this reliability problem (many people are already working on it), but right now GPT-3 expresses falsehoods too often to serve as a scientific writer, or to be relied upon for any other purpose that depends on factual accuracy. That is why the only example of a GPT-3-written research paper so far is one that, to my understanding, does not qualify as human-level work.