Comments


JanneJM t1_iqt4922 wrote

One of these is not like the others.

"Prove something known empirically" is actually useful and important.

305

impossiblefork t1_iqv6u0x wrote

All of them are useful.

The 0.1% improvements sort of add up, then you get the 'baseline is all you need' paper, then people start adding 0.1% improvements again, and then people prove something about it, or something else of that sort.

28

donotfire t1_iquppcj wrote

That’s what I was thinking

…empiricism-ception

19

Even_Information4853 t1_iqsk1i0 wrote

Andrew Ng - Geoffrey Hinton - Yann LeCun

Yoshua Bengio - ??? - ???

Jeremy Howard? - ??? - ???

??? - Demis Hassabis - Lex Fridman


Can anyone help me fill in the rest?

187

trendymoniker t1_iqsl2og wrote

Bengio - Daphne Koller - Fei Fei Li?

Howard - Jeff Dean - ??

58

panzerex t1_iqtr0vc wrote

Most definitely Karpathy after Jeff Dean.

7

Hydreigon92 t1_iqslnxj wrote

Pretty sure third row, third col is Andrej Karpathy

20

Pd_jungle t1_iqt3bup wrote

2nd row, 2nd column: Daphne Koller, a professor who works on probabilistic graphical models and causal inference

4

seba07 t1_iqshjec wrote

And the "results are 0.x% better" papers are often about challenges that haven't been interesting for many years.

139

Hamoodzstyle t1_iqsmn0k wrote

Also, don't forget: no ablation study, so it's impossible to know which of the tiny changes actually helped.
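(For anyone newer to the field: an ablation study just means re-running the experiment with each proposed change switched off in turn, so you can see which ones actually matter. A toy sketch, where `train_and_evaluate` and the component names are placeholders, not anything from a real paper:)

```python
# Toy ablation-study loop: retrain with each proposed component disabled
# in turn and compare against the full model.
def train_and_evaluate(config: dict) -> float:
    """Placeholder: train with `config` and return a validation metric."""
    raise NotImplementedError

full_config = {
    "fancy_attention_block": True,
    "custom_lr_schedule": True,
    "extra_augmentation": True,
}

full_score = train_and_evaluate(full_config)

for component in full_config:
    ablated = {**full_config, component: False}
    score = train_and_evaluate(ablated)
    print(f"without {component}: {score:.3f} (full model: {full_score:.3f})")
```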

61

maxToTheJ t1_iqt2dov wrote

This. Using a single "lego block" would be an improvement.

22

crrrr30 t1_iqta3tc wrote

using a single lego block WITH different optimizer, lr schedule, and augmentations…

8

jturp-sc t1_iqvcptz wrote

Most of them are really just CV padding for some 1st- or 2nd-year grad student. If you look into them, it's usually something as trivial as being the first to publish a paper applying a model that came out 12 months ago to a less common dataset.

It's really more about the grad student's advisor doing them a solid in terms of building their CV than actually adding useful literature to the world.

15

sk_2013 t1_iqwen2y wrote

Honestly I wish my advisor had done that.

My CS program was alright overall, but the ML professor used the same undergrad material for all his classes, and I've kind of been left to put together working knowledge and a career on my own.

4

throwawaythepanda99 t1_iqsrohn wrote

Did they use machine learning to turn people into children?

130

GullibleEngineer4 t1_iqugyhm wrote

Is there a web demo somewhere? I would like to try it out.

6

blendorgat t1_iqun070 wrote

Stable Diffusion with img2img can do this with a bit of tuning of the noise strength, though from the way it looks I wouldn't bet that's what was used here.
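For anyone who wants to try it, here's a minimal sketch of that img2img approach with the Hugging Face diffusers library; the model ID, prompt, and strength value are just illustrative guesses, not necessarily what was used for the meme:

```python
# Minimal img2img sketch with diffusers (illustrative values only).
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("researcher.jpg").convert("RGB").resize((512, 512))

# `strength` is the noise level added to the source image: lower keeps the
# original face, higher lets the prompt take over. This is the knob you tune.
result = pipe(
    prompt="portrait photo of the same person as a young child",
    image=init_image,
    strength=0.45,
    guidance_scale=7.5,
).images[0]

result.save("researcher_as_child.jpg")
```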

2

i_speak_penguin t1_iqzfkwz wrote

Seriously I swear these are photos of famous AI researchers who have been de-aged lol.

2

RageA333 t1_iqst9zc wrote

Proving is still an advancement

90

jackmusclescarier t1_iqv2pkg wrote

If ML theory researchers could prove something that was relevant to the practice of deep learning even 5 years ago, they'd be ecstatic.

8

OptimalOptimizer t1_iqtmkgr wrote

You’re missing “Schmidhuber did it 30 years ago”

64

NalNezumi t1_iqvw8wk wrote

I think the joke is also that Schmidhuber is not there

9

sstlaws t1_iqx70rg wrote

It was all him in all 12 slots 30 years ago

3

Delta-tau t1_iqtnde1 wrote

All funny and right on the spot, except the one about "proving what had already been known empirically for 5 years". That would actually be a big deal.

47

Parzival_007 t1_iqsfyoy wrote

LeCun and Lex would loose their minds if they saw this.

25

LearnDifferenceBot t1_iqsg6ts wrote

> would loose their

*lose

Learn the difference here.


^(Greetings, I am a language corrector bot. To make me ignore further mistakes from you in the future, reply !optout to this comment.)

55

Parzival_007 t1_iqsj82z wrote

Oh come on

9

Deto t1_iqsyqz2 wrote

<comments in ML thread>

<gets wrecked by a bot>

Maybe the robotic overlords are here already?

63

Thie97 t1_iqt2w3t wrote

Our reddit commentary experiment proved AGI is already here

13

insanelylogical t1_iqskava wrote

Is it weird that I recognized Lex because of that hairline that doesn't know where to stop (which I am also jealous of)?

6

shepik t1_iqv3zbk wrote

>would loose their

*loss

−1

Magneon t1_iquektg wrote

Other common ones:

> We fiddled with the hyperparameters without mentioning it, and didn't create a new validation set

and

> What prompted the layer configuration we selected? I dunno, it seemed to work best.
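
That first one is basically tuning on the test set. A hedged sketch of the split such papers skip (toy data and split sizes, purely for illustration):

```python
# Illustrative three-way split: hyperparameters are tuned on the validation
# set only; the held-out test set is touched exactly once at the end.
import numpy as np
from sklearn.model_selection import train_test_split

X, y = np.random.rand(1000, 16), np.random.randint(0, 2, size=1000)  # toy data

X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.3, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)
# Tune hyperparameters against (X_val, y_val); report once on (X_test, y_test).
```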

15

BrotherAmazing t1_iqtiyud wrote

This looks more like a “meme”-tag worthy post than “discussion”.

12

superawesomepandacat t1_iqtcxtl wrote

Top right is usually how it works outside of academia, data-iterative modelling.

7

forensics409 t1_iqu8xlc wrote

I'd say these are 99% of papers. 0.99% are review papers and 0.01% are actually cool papers.

6

MangoGuyyy t1_iquwrit wrote

Andrew Ng (Coursera co-founder, AI educator, Stanford) - Geoffrey Hinton (deep learning godfather, Canada) - Yann LeCun (Chief AI Scientist at Meta, deep learning godfather, CNN pioneer)

Yoshua Bengio (deep learning godfather) - Daphne Koller (Coursera co-founder, computational biology, Stanford prof) - Fei-Fei Li (Stanford Vision Lab)

Jeremy Howard (fast.ai co-founder, AI educator) - Jeff Dean (Google engineer) - Andrej Karpathy (Tesla AI head)

?? - Demis Hassabis (DeepMind head) - Lex Fridman (MIT AI prof, YouTuber/podcaster)

6

rm-rf_ t1_iqw3q4u wrote

> Lex Fridman (MIT AI prof, YouTuber/podcaster)

Lex is not a professor at MIT.

4

MangoGuyyy t1_iqypdyb wrote

Oh my bad, I assumed he was. Did he just give lectures there?

1

moschles t1_iqtccm9 wrote

Some feelings were hurt by this meme.

5

KeikakuAccelerator t1_iquntc2 wrote

Bruh, Demis Hassabis and his team literally solved protein folding.

4

show-up t1_iqut6en wrote

Demis Hassabis: model provably surpasses human-level performance on this handful of tasks.

Media: Congrats!

Researcher spending more time on social media than the PI would like:

Results are 0.1% better than that other paper. Kek.

4

jcoffi t1_iqu9vqw wrote

I feel attacked

3

emerging-tech-reader t1_iqubs5s wrote

I saw an NLP-ML one a few years back that had a conclusion of "This would never work" and they really tried. (forgot what they were trying to do)

3

ScottTacitus t1_iqvht66 wrote

More of these need to be in Mandarin to represent

3

anyspeed t1_iqt54zp wrote

I find this paper disturbing

2

TheReal_Slim-Shady t1_iqurhqq wrote

When papers are produced to land jobs or advance careers, that is exactly what happens.

2

Fourstrokeperro t1_iqw4vuv wrote

"We plugged one Lego block into another" is too real omg

2

Frizzoux t1_iqx0p6q wrote

Lego block gang!

2

gonomon t1_iquweo8 wrote

This is perfect.

1

sephiap t1_iqvw3pk wrote

Man that last one is amazing, what a way to get your citation count up - goad the entire community. Totally going to use this.

1

supermopman t1_iqwdxmz wrote

What's the source for the images?

1

jonas__m t1_ircbeek wrote

Missing from the list: present 10 sophisticated innovations when one simple trick would suffice, to ensure reviewers find the paper "novel".

1

Quetzacoatl85 t1_irzr5vy wrote

It's fun and I chuckled, but I'd also say this covers the majority of papers in any scientific field, and I'd also say that that's ok. This is how science works; it can't all be groundbreaking, status-upsetting and axiom-refuting.

1