Comments
Gaudrix t1_ir6otcr wrote
It honestly feels like we are living on that steep edge.
In just the last few years there have been like 3+ revolutionary cancer treatments, advancements in fusion/solar/battery tech, AI creation of art/video/physics sims/voices/faces etc.
There are so many new breakthroughs that there isn't even enough time to profit off of anything and make a product because by the time you hit market, a free solution is made available and it's better. The next 50 years will look like the last 50 x 100.
We are living it.
TheCynicsCynic t1_ir6vojx wrote
Out of curiosity, what are the 3+ revolutionary cancer treatments from the last few years? I wonder if I've heard of them or they're new to me. Thanks.
LordOfDorkness42 t1_ir6zzt3 wrote
Going to presume at least one of them is that vaccine against... cervical cancer, I think it was?
HUGE deal when it was new. Still not nearly widespread enough yet, but it's slowly getting there.
explicitlyimplied t1_ir84p80 wrote
Gardasil?
[deleted] t1_ir911sa wrote
It was so huge you can't remember what someone else has imagined.
There is, as of today, only human intelligence in software. AI does not exist yet.
LowAwareness7603 t1_ir7it5w wrote
I said that we were living it....behold!, just yesterday. I got downvoted at least once.
Gaudrix t1_ir7uc04 wrote
Yeah, people are weird in this subreddit. Everyone is working off a different definition of what the singularity is and what it entails. The singularity is a point, but it's not possible to experience a point, just what comes before and what comes after. It's very hard to determine where that point is specifically; that's why it's easier to quantify the closeness or speed of approach than the singularity itself. I'd say we are firmly locked in and have an obviously accelerating trajectory. We are in the endgame.
LowAwareness7603 t1_ir7uyqz wrote
I'm going to become a cyber assassin if I'm not God.
Gaudrix t1_ir7yhiu wrote
🤣 I want a full cybernetic body and then a brain once we solve the brain copy problem.
[deleted] t1_ir91jxz wrote
Quantum mechanics states that a quantum-level copy is impossible. Quantum mechanics is science; AI is a cult. AI does not exist yet. You are speaking about glorified curve fitting.
But as with AI, you can simply imagine it exists, of course. You can even select the date of this new technology, as imagination has no limits.
Quealdlor t1_iriulsh wrote
How are we in "the endgame" if we've just started as a civilization? We still don't have AGI, FDVR, worker androids, FSD, commercial fusion reactors, anti-aging, cure for cancer, human augmentation, space elevators or arcologies. Let alone orbital rings, sombrero planets, Jenkin Swarms, Dyson Spheres or giant future things like that.
Eleganos t1_itdr6ww wrote
Same way that the space stage is the endgame of Spore despite, in reality, being the vast majority of the game for anyone who still actually bothers to keep playing once they hit it.
[deleted] t1_ir91bs0 wrote
As AI does not exist yet, and 70 years of AI research has led to zero AI and glorified curve fitting, at which rate the singularity will never happen, I think it is wise to define the singularity in such a way that it can't be seen.
Imagined evidence, failing prophecies and Armageddon.
Hmm. That does not sound like science; that is a cult.
kaityl3 t1_irbopkg wrote
> As AI does not exist yet
Bro what?
Wizard0fLonliness t1_ir8hzv9 wrote
Elaborate on the fusion/solar/battery tech, ootl
[deleted] t1_ir90yi2 wrote
As AI does not exist yet, it is not very surprising that there is nothing to be found about 3 cancer treatments invented by software.
Kinexity t1_ir6rcq3 wrote
Past performance does not predict future performance. Many processes initially look exponential when they aren't. This is not to say that the singularity will not happen, but this may not be the early indication some people think it is.
End3rWi99in t1_ir8n0r3 wrote
I am absolutely convinced the world is just a couple of years removed from a very rapid change in nearly every facet of life. I liken it to the internet boom in the 1990s but its impact is far wider and faster. Way faster.
[deleted] t1_ir91qkn wrote
But the internet existed in the '90s, whilst imagined AI is really just glorified curve fitting, or more generally software, which is 100% human intellect. So the projected date for the singularity is, at present, never.
TheAnonFeels t1_ir6mhw5 wrote
You could make a paper! edit: Wait, these are AI papers, not papers on AI
Smoke-away OP t1_ir6t7bn wrote
The chart is papers about AI+ML.
Not papers written by AI.
was_der_Fall_ist t1_ir6ta3o wrote
No, they're definitely papers on AI; scientific papers in the fields of AI and machine learning. AI was not writing papers in 1994.
TheAnonFeels t1_ir74qy1 wrote
Yeah, that definitely makes sense at a closer look, immediately had to edit my response lol.
[deleted] t1_ir8zze7 wrote
Sure, after imagining AI exists, imagining the singularity has started is a logical next step.
lovesdogsguy t1_ir6qnhy wrote
So many advances pouring in every week / day now. I wonder what we'll have by the end of 2022?
2023 is going to produce the equivalent of years of progress at the current rate, maybe more.
Quealdlor t1_ir6vdp0 wrote
Doubling rate is 24 months, so in 2024 there will be 2x as many new papers as in 2022.
LordOfDorkness42 t1_ir7374y wrote
I'd buy that increase rate, given how quickly art-AI is moving right now.
Less than a year ago, you got pretty and well colored but abstract blobs.
This is 23 hours ago as of posting.
Do pardon the MLP focus, but the first images were my own 'holy frick, AI art has come that far?!' moment, so I wanted to keep things fair so the difference is highly visible.
But... yeah. We certainly live in interesting times, and I'm very curious what the coming years will hold for us.
TheAnonFeels t1_ir756qp wrote
You've been pardoned.
BUT, have you seen this? https://cdn.discordapp.com/attachments/407355414229811200/1026593684441022594/AI1.png
I don't know much about where they came from, but the AI is still training
LordOfDorkness42 t1_ir76a5w wrote
Hadn't seen that one in particular, but I'd believe it.
Charlie, AKA penguinz0, did a video two weeks ago where he was basically playing around with Stable Diffusion 1.5, and he made some really cool stuff.
A lot of it looked wonky, of course... but some of it I'd definitely have stood and stared at for a few minutes if I'd seen it up on somebody's wall.
TheAnonFeels t1_ir7891t wrote
Yeah, I've seen a number of outputs from this guy and he's posted a few odd ones: bodies turned halfway through, sitting the wrong way on a bench that also kinda disappears... It has issues, but it can output quality more often than not.
It's just remarkable. I'm sure in a few months we'll see a whole lot more come out!
LowAwareness7603 t1_ir7jw0a wrote
Check out Nexpo's video about Loab. https://youtu.be/i9InAbpM7mU
AI generated anomaly.
Quealdlor t1_ir9cz4o wrote
AI works are getting better and better, I can see that. Still, the vast majority are bad. The one you linked is good. I often see arms and hands being painted the wrong way. I still think that it will take multiple years before AI is as good as the best artists. Stable Diffusion should be called Unreliable Diffusion or Unsteady Diffusion for now, judging by all the works I've seen and done.
TheAnonFeels t1_ira0ks4 wrote
Even the fact that it can produce quality outweighs all the bad works it produces. Rejecting the bad ones is simple enough, even if humans have to do it...
I don't see how its having an error rate is a problem?
SowingKnowing t1_ir904dv wrote
That third image, holy fucking shit!!!
Thanks for pointing that out with such great examples!
LordOfDorkness42 t1_ir99vqn wrote
You're welcome.
And yeah, cherry picked examples, of course, but I really think the Art-AI stuff is sliding under the radar of the public right now due to how much else is going on.
I've even seen faked signatures, speech bubbles and Patreon links. For now, those are just blobby swirls that look right only from a distance, but still.
I'm not sure if this is where we'll see the birth of some of the first true AI... but if nothing else, this seems like the next smartphone to me.
Just... poof, everywhere overnight for those that weren't paying attention, and THAT'S when the public freaks out for a bit.
[deleted] t1_irdys5o wrote
If you assert true AI, there is false AI. This is correct: current "AI" is false. It does not exist yet. The science is not here yet.
Powerful_Range_4270 t1_ir7bud5 wrote
In 2023 there will be 1.5x the amount of papers as in 2022.
Quealdlor t1_ir99qcp wrote
More like 1.4x.
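The 1.4x follows directly from the 24-month doubling time; a quick sanity check (trivial Python, just the arithmetic):

```python
# A 24-month doubling time means papers multiply by 2 every 2 years,
# so the implied per-year growth factor is 2 ** (12 / 24) = sqrt(2).
doubling_time_months = 24
yearly_factor = 2 ** (12 / doubling_time_months)
print(round(yearly_factor, 2))  # 1.41 -- closer to 1.4x than 1.5x
```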
lovesdogsguy t1_ir71yni wrote
Oh yes, that's correct according to this. I think I was actually thinking more about the new advances in text-to-video generation, which combined with all the other news this year, I find pretty astonishing.
cascoxua t1_ir9kto5 wrote
Doubling the number of papers does not double the knowledge. Most of them aren't worth shit and make it more difficult to find the relevant ones. A big number of papers does not mean progress in science. It means lots of people found a field with lots of interest, jumped into it, and are publishing lots of irrelevant papers, because a key KPI for a researcher's relevance is the number of papers published.
LowAwareness7603 t1_ir7j60s wrote
Holy mackerel...
[deleted] t1_ir8zqxo wrote
What progress? AI does not exist yet. What is referred to is most often glorified curve fitting.
Smoke-away OP t1_ir5uz8y wrote
> The number of AI papers on arXiv per month grows exponentially with doubling rate of 24 months.
> How can we cope with this? AI itself can help, by predicting & suggesting new research directions.
> Predicting the Future of AI with AI: https://arxiv.org/abs/2210.00881
> I have about ~100 open tabs across 4 tab groups of papers/posts/github repos I am supposed to look at, but new & more relevant ones come out before I can do so. Just a little bit out of control.
prototyperspective t1_ir75k3w wrote
>How can we cope with this
I think society needs to start caring more about knowledge integration. At least papers that are published by journals (not preprints) should more often be put into context and made useful by integrating them into existing knowledge systems at the right places.
That's what I'm trying to do when editing science-related Wikipedia articles (along with my monthly Science Summaries that I post to /r/sciences), updating them with major papers of the year (that also includes the much-expanded article applications of AI). I would have thought somebody took care of at least the most significant papers.
It probably needs more comprehensive overview- & context-providing integrative living documents that help people make sense of, properly discover, and make use of the gigantic loads of new science/R&D output, beyond Wikipedia.
>AI itself can help, by predicting & suggesting new research directions
I think many draw the false conclusion that AI is the solution to such problems, rather than a help with a (small) subset of them. Suggesting new research directions seems like an interesting application.
Many ways that could be useful would only be software, not AI. For example, it would be great to somehow better "visualize" (literally or similar) ongoing progress / research topics/fields, or categorize papers by their research topics so you kind of get notified when new subtopics emerge or when new research questions related to your watched topics/fields get heatedly debated/investigated, or auto-highlight text to make things easier to skim, etc. I've put some of my ideas (related: 1 2) for such features to the FOSS Wikimedia project Scholia, which could integrate AIs.
Here are some more similar stats about papers (more CC BY images welcome). Example: ArXiv's yearly submission rate plot
>I have about ~100 open tabs across 4 tab groups of papers/posts/github repos I am supposed to look at, but new & more relevant ones come out before I can do so. Just a little bit out of control.
See some ways/tools to deal with this in this thread at r/DataHoarder here
More R&D (studies, addons, ideas, ...) about such could be very useful as it could accelerate & improve progress on a meta-level.
Nebukire t1_ir647st wrote
theghostecho t1_ir8kx1c wrote
He got me into ai papers
Evil_Patriarch t1_ir64309 wrote
Any comparisons available for how an increase in paper publications translates to an increase in new tech actually reaching the market?
whenhaveiever t1_ir6ndm2 wrote
That's what I'm wondering. How much of this is actual useful research that advances the field and how much is sociologists plugging things into Dall-E and having opinions about the results?
But also, there's no possible way for any human to keep up with 4000 new papers per month. We almost need AI to read the AI papers and tell us what the good ones are.
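Even without AI, a crude triage script can do a first pass over the flood; a toy sketch (the keywords, weights, and titles here are made up for illustration):

```python
# Rank incoming paper titles by hand-picked keyword weights so a human
# only has to skim the top of the pile. A real tool would use embeddings,
# but even naive scoring beats reading 4000 abstracts in order.
WEIGHTS = {"diffusion": 3, "transformer": 3, "language model": 2, "survey": 1}

def score(title: str) -> int:
    """Sum the weights of every keyword that appears in the title."""
    t = title.lower()
    return sum(w for kw, w in WEIGHTS.items() if kw in t)

titles = [
    "A Survey of Prompting Methods",
    "Scaling Laws for Language Models",
    "Yet Another Blockchain Whitepaper",
    "Faster Sampling for Diffusion Transformers",
]
ranked = sorted(titles, key=score, reverse=True)
print(ranked[0])  # "Faster Sampling for Diffusion Transformers" (score 6)
```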
MercuriusExMachina t1_ir6wnx0 wrote
Exactly. The bitter lesson would seem to indicate that compute is the determining factor, not algorithmic innovation.
But it's good to see that research is keeping up with compute.
zero_for_effort t1_ir7rs4z wrote
Can someone point me to these sociology articles about Dall-e or is this a facetious comment?
whenhaveiever t1_ir83saf wrote
Yeah, it's partly facetious. The thing I had in mind was the apoploe vesrreaitais guy who isn't a sociologist now that I look it up.
Kaarssteun t1_ir6a3ru wrote
Logic tells us more people working on something = faster progress
Xstream3 t1_ir6qc74 wrote
Since it's software, it's extremely easy to bring to market (relative to physical products).
Kaarssteun t1_ir69zge wrote
Even the log scale looks ever-so-slightly exponential. Insane!
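For reference, a pure exponential plots as a straight line on a log scale, so any upward bend there implies super-exponential growth. A toy numerical check with synthetic series (not the real arXiv counts):

```python
import math

t = range(5)
exponential = [math.exp(0.5 * x) for x in t]      # e^(0.5t)
super_exp   = [math.exp(0.1 * x * x) for x in t]  # e^(0.1t^2)

log_exp = [math.log(y) for y in exponential]  # linear in t
log_sup = [math.log(y) for y in super_exp]    # bends upward

# Second differences: ~0 for a straight line, positive when curving up.
def dd(s):
    return [s[i + 1] - 2 * s[i] + s[i - 1] for i in range(1, len(s) - 1)]

print(dd(log_exp))  # all ~0: plain exponential
print(dd(log_sup))  # all positive: super-exponential
```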
[deleted] t1_ir68hes wrote
[deleted]
Nerdler17 t1_ir6d1qi wrote
Amazing.
Smoke-away OP t1_ir6totg wrote
No. The chart is papers about AI+ML.
Not papers written by AI.
User1539 t1_ir6vt3t wrote
It was just an off-the-cuff, half-joking thing. I think I read that someone published a paper written by an AI, and did a quick search for AI-written research papers.
Basically a joke.
was_der_Fall_ist t1_ir6pgq7 wrote
No, I'm quite sure this is a measure of how many papers are written about AI and machine learning. Sure, someone used GPT-3 to write a paper (as the other commenter linked), but that's not very effective yet. Scientific papers are still written by humans.
[deleted] t1_ir6s71g wrote
[deleted]
was_der_Fall_ist t1_ir6sugi wrote
They're simply wrong. Do you think AI was writing papers in 1994, as this chart shows? No, this is just a measure of papers about AI, in the field of AI, but written by humans. A couple of commenters here have linked an article about how a researcher used GPT-3 to write a paper, but that is unrelated to this measure of scientific papers in the fields of AI and machine learning. GPT-3 is, in general, not reliable enough to write scientific papers, and, anyway, it was only created in 2020, so it wouldn't explain how this chart tracks AI papers in the period from 1994-2020.
Shelfrock77 t1_ir6701e wrote
In the future, you'll be able to code/hack in AR just by thinking about it. I got this realization through watching Cyberpunk: Edgerunners.
DellySys t1_ir67pkj wrote
lmao what
SWATSgradyBABY t1_ir6mjk2 wrote
He's right but I'm still lol at your response
Nerdler17 t1_ir68m6e wrote
Thought to text, thought to code
SWATSgradyBABY t1_ir6mr96 wrote
That's what's happening in Westworld with Maeve.
insectula t1_ir6zxbf wrote
If you are tuned in and looking you can feel this happening. I didn't need this metric to know this, but it reinforces what I have been thinking.
Kaarssteun t1_ir76nxs wrote
If anything, this assures me that things truly are moving exponentially. It's easy to feel that way with the recent advances, but maybe it's just me becoming increasingly immersed in this AI fiasco. This tells me otherwise though; I'm not crazy yet.
Kujo17 t1_ir6lbd9 wrote
Thank you! I tried to post this this morning and for some reason reddit wasn't working for me lol. Came back to post it now/try again and saw this.
Cryptizard t1_ir76bjt wrote
Not to be a buzzkill, but if you plot the number of papers on arxiv per month in general, it also looks exponential.
Nice-Information3626 t1_ir7o876 wrote
Compare the doubling rate though
Zermelane t1_ir7xuxy wrote
I've never had anyone kill my buzz as little as by pointing out that no, it's not just AI, actually the rest of science is making exponential progress as well. If anything, it seems to be making my buzz even more alive.
(well, arxiv paper count anyway; there are different views on how that relates to the amount of progress in general)
Cryptizard t1_ir87dng wrote
Yeah I think it just tells you that arxiv is becoming more popular.
SWATSgradyBABY t1_ir6lkvs wrote
We are in the knee of the curve.
nebson10 t1_ir7el2k wrote
There is no such point that can be said to be a knee
davesp1 t1_ir6b0dh wrote
The precursor
saccharineboi t1_ir6n249 wrote
Many in academia and industry face the same question: Should I spend time trying to find a solution S to some problem P, or should I work on an AI system that can find a solution S' to any problem P' from a set of problems that P belongs to? Add to that the fact that computers are getting super fast ... Hence the explosion of AI papers.
Bakoro t1_ir98owh wrote
For real dealing with that right now. One way or another, I'm going to have to make some software to do this thing. Do I want a fairly okay solution right now, which I can iterate on and easily explain/justify why my solution is roughly correct, OR dump some resources into machine learning, have nothing to show for it up until I do, but very likely get something on the other end which is almost magically good but I don't know why...
Drifter64 t1_ir7klgc wrote
Most of them are garbage but once in a while you get a gem.
[deleted] t1_ir6b9rm wrote
[deleted]
was_der_Fall_ist t1_ir6py3g wrote
This is a measure of papers about AI, not papers written by AI. The chart goes back to the 1990s, when certainly no papers were being written by AIs. Even today, language models are not reliable enough to write scientific papers.
Cryptizard t1_ir76587 wrote
There were lots of AI articles in the 90s, just not on Arxiv. You could plot papers in general on arxiv and it would look exponential.
was_der_Fall_ist t1_ir7d9tn wrote
Iām saying there were no papers written by AIs in the 1990s. There were, of course, papers about AI.
Cryptizard t1_ir7do9o wrote
Oh, sorry, I gotcha.
Artanthos t1_ir6s62n wrote
was_der_Fall_ist t1_ir6t2wi wrote
This is unrelated to the chart in the OP's post. Anyway, despite one person writing a paper with GPT-3, language models really aren't reliable enough at the present moment to be writing scientific papers, and they certainly weren't in the period from 1994-2020. Maybe GPT-4.
Artanthos t1_irbdxxm wrote
"This cannot be done."
Example provided showing it has already been done.
"That doesn't count, it still cannot be done."
was_der_Fall_ist t1_irbjdoj wrote
There are a few points to make here. First, I'd like to make it clear that I'm extremely optimistic about the development of AI, and that I think language models like GPT-3 are incredibly impressive and important. I use GPT-3 regularly, in fact. So I'm not just nay-saying the technology in general.
Second, as far as I can tell, the paper by Thunström and GPT-3 has not been peer-reviewed and published in a journal. It has only been released as a preprint and "awaits review."
Third, even if GPT-3 is perfectly capable of writing scientific papers, that does not relate to the overall purpose of my commenting, which was to explain that the chart in the OP's picture measures the number of papers written about AI, rather than written by AI.
Fourth, the paper, entitled "Can GPT-3 write an academic paper on itself, with minimal human input?" is... strange. Even disregarding the "meta" nature of the paper, in which the subject matter is the paper itself, it exhibits problems that are typical of the flaws of GPT-3 which make it unreliable. For example, it starts the introduction to the paper by saying that "GPT-3 is a machine learning platform that enables developers to train and deploy AI models. It is also said to be scalable and efficient with the ability to handle large amounts of data." This is a terrible description of GPT-3. GPT-3 is, of course, a language model that predicts text, not a machine learning platform that enables developers to train and deploy AI models. Classic GPT-3, writing in great style but with a pathological disregard for reality. With factual inaccuracies like this, I doubt the paper would be published in a respected journal as, say, DeepMind's research is published in Nature.
I'm hopeful that future models will correct this reliability problem (many have already been working on it), but right now, GPT-3 too often expresses falsehoods to be a scientific writer, or to be relied upon for other purposes that depend on factual accuracy. This is why the only example of a GPT-3-written research paper so far is one that, to my understanding, does not qualify as human-level work.
JJP77 t1_ir8trms wrote
most of them are bullshit though
Poemy_Puzzlehead t1_ir6ej32 wrote
What's the blip around the year 2000? Would that be Y2K, or maybe Spielberg/Kubrick's A.I. movie?
Lone-Pine t1_ir7b9x9 wrote
Schmidhuber's lab uploaded all their work that year.
DukkyDrake t1_ir7q5qb wrote
>Most NLP research is crap: 67% agreed that "A majority of the research being published in NLP is of dubious scientific value."
What percent is NLP related? "The exponential growth of crap"?
[deleted] t1_ir91zmk wrote
AI does not exist yet, and calling glorified curve fitting "AI" is beyond dubious scientific value; it's outright quackery.
meatfred t1_ir96fum wrote
🤣
azazelreloaded t1_iracqew wrote
Is the number of papers really the rate of progress? I can think of any number of architectures just by varying the layers, neurons, and a handful of hyperparameters.
[deleted] t1_ir71g9i wrote
[deleted]
[deleted] t1_ir649vo wrote
[removed]
Sophus__ t1_ir5zill wrote
You could make an argument for early stages of technological singularity based on metrics like this.