Submitted by Lost-Parfait568 t3_xtxe6f in MachineLearning
Comments
impossiblefork t1_iqv6u0x wrote
All of them are useful.
The 0.1% improvements sort of add up, then you get the 'baseline is all you need' paper, then people start stacking 0.1% improvements on again, and eventually someone proves something about it, or something else of that sort.
donotfire t1_iquppcj wrote
That’s what I was thinking
…empiricism-ception
hausdorffparty t1_iqvqag3 wrote
If only reviewers thought so 😭
lulu_the_cat2021 t1_isl7mau wrote
Reviewers for local conferences nowadays don't even look at the papers; they merely accept everything.
Even_Information4853 t1_iqsk1i0 wrote
Andrew Ng - Geoffrey Hinton - Yann LeCun
Yoshua Bengio - ??? - ???
Jeremy Howard? - ??? - ???
??? - Demis Hassabis - Lex Fridman
Can anyone help me fill in the rest?
trendymoniker t1_iqsl2og wrote
Bengio - Daphne Koller - Fei-Fei Li?
Howard - Jeff Dean - ??
Serinous t1_iqteyv6 wrote
Looks like Karpathy
neuronexmachina t1_iqsv7xi wrote
Yeah, pretty sure right-most on second row is Fei-Fei.
Pd_jungle t1_iqt29s6 wrote
Yep the only female Asian 😂
panzerex t1_iqtr0vc wrote
Most definitely Karpathy after Jeff Dean.
Hydreigon92 t1_iqslnxj wrote
Pretty sure third row, third col is Andrej Karpathy
HughLauriePausini t1_iqsnmeg wrote
Third row in the middle is Alex Smola
Pd_jungle t1_iqt3bup wrote
2nd row, 2nd column: Daphne Koller, a professor who works on probabilistic graphical models and causal inference
florinandrei t1_iquc0v7 wrote
Who is in the bottom-left corner?
Lone-Pine t1_iqumb49 wrote
My best guess is the YouTuber Anastasia.
Emergency_Apricot_77 t1_iqurybv wrote
Who?
neelsg t1_iquuako wrote
I'm doubtful it is her, but https://www.youtube.com/c/AnastasiInTech
Fendrbud t1_iqwbckr wrote
Lex Fridman bottom right.
DigThatData t1_iqtggke wrote
pretty sure the Lego blocks guy is the CEO of Hugging Face, Clem Delangue. In fact, I think the init image was his LinkedIn profile pic: https://www.linkedin.com/in/clementdelangue/
sstlaws t1_iquhm8b wrote
No, that's Karpathy
DigThatData t1_iqunlco wrote
yup 100%
seba07 t1_iqshjec wrote
And the "results are 0.x% better" papers are often about challenges that aren't interesting anymore since many years.
Hamoodzstyle t1_iqsmn0k wrote
Also, don't forget: no ablation study, so it's impossible to know which of the tiny changes actually helped.
jturp-sc t1_iqvcptz wrote
Most of them are really just CV padding for some 1st or 2nd year grad student. If you look into them more, it's usually just as trivial as being the first to publish a paper about using a model that came out 12 months ago on a less common dataset.
It's really more about the grad student's advisor doing them a solid in terms of building their CV than actually adding useful literature to the world.
sk_2013 t1_iqwen2y wrote
Honestly I wish my advisor had done that.
My CS program was alright overall, but the ML professor used the same undergrad material for all his classes, and I've kind of been left to put together working knowledge and a career on my own.
throwawaythepanda99 t1_iqsrohn wrote
Did they use machine learning to turn people into children?
GullibleEngineer4 t1_iqugyhm wrote
Is there a web demo somewhere? I would like to try it out.
MostlyRocketScience t1_iqvnxwj wrote
StyleGAN face editing colab notebooks and video tutorial: (Can do Age, Gender, Smile, Pose)
https://drive.google.com/drive/folders/1LBWcmnUPoHDeaYlRiHokGyjywIdyhAQb
https://www.youtube.com/watch?v=dCKbRCUyop8
Example output I made some time ago: https://imgur.com/a/VVRHzlD
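For anyone curious what those notebooks do under the hood, here's a minimal sketch of the latent-direction trick, a guess at the general technique rather than the exact notebook code: the "age" direction below is random noise purely for illustration (a real one is learned with a method like InterFaceGAN), and decoding with a pretrained generator is left as a comment.

```python
# Hedged sketch of StyleGAN latent editing: shift a latent code w along a
# learned attribute direction (age, gender, smile, and pose all work this way).
# The direction here is random for illustration only; a real one comes from
# something like InterFaceGAN, and w would be decoded by a pretrained
# StyleGAN generator (not included, so that line is commented out).
import torch

torch.manual_seed(0)
latent_dim = 512                      # StyleGAN2's W space is 512-dimensional
w = torch.randn(1, latent_dim)        # latent code for one face
age_dir = torch.randn(latent_dim)
age_dir /= age_dir.norm()             # directions are usually unit-normalized

for alpha in (-3.0, 0.0, 3.0):        # negative = younger, positive = older
    w_edited = w + alpha * age_dir    # the whole edit is one vector addition
    # img = generator.synthesis(w_edited)  # decode with a real generator
    print(f"alpha={alpha:+.1f}, norm={w_edited.norm().item():.2f}")
```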
blendorgat t1_iqun070 wrote
Stable Diffusion with img2img can do this with a bit of fine-tuning on the noise strength, though from the way it looks I wouldn't bet that's what was used here.
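If anyone wants to try that recipe, here's a hedged sketch with Hugging Face diffusers, assuming a CUDA GPU; the model id, prompt, strength value, and filenames are all illustrative, not what the meme's author actually used:

```python
# Illustrative img2img "de-aging" sketch with diffusers -- not confirmed to be
# how the meme was made. Model id, prompt, and strength are assumptions, and
# "researcher.jpg" is a placeholder input photo.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("researcher.jpg").convert("RGB").resize((512, 512))

# "strength" is the noise knob mentioned above: low values keep the input's
# identity and composition, high values let the prompt repaint the image.
out = pipe(
    prompt="portrait photo of a young child, studio lighting",
    image=init_image,
    strength=0.45,
    guidance_scale=7.5,
).images[0]
out.save("de_aged.png")
```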
i_speak_penguin t1_iqzfkwz wrote
Seriously I swear these are photos of famous AI researchers who have been de-aged lol.
RageA333 t1_iqst9zc wrote
Proving is still an advancement
jackmusclescarier t1_iqv2pkg wrote
If ML theory researchers could prove something relevant to deep learning practice from even 5 years ago, they'd be ecstatic.
OptimalOptimizer t1_iqtmkgr wrote
You’re missing “Schmidhuber did it 30 years ago”
Delta-tau t1_iqtnde1 wrote
All funny and spot on except the one about "proving what had already been known empirically for 5 years". That would actually be a big deal.
Separate-Quarter t1_iqu59mq wrote
Imagine including Lex Fridman here
sstlaws t1_iquhvsl wrote
Lol yeah, I don't think he's relevant.
Parzival_007 t1_iqsfyoy wrote
LeCun and Lex would loose their minds if they saw this.
LearnDifferenceBot t1_iqsg6ts wrote
> would loose their
*lose
Learn the difference here.
^(Greetings, I am a language corrector bot. To make me ignore further mistakes from you in the future, reply !optout to this comment.)
WordWarrior81 t1_iqth5r8 wrote
Good bot
LearnDifferenceBot t1_iqth6hm wrote
Thank you!
Good_Human_Bot_v2 t1_iqth6ut wrote
Good human.
Parzival_007 t1_iqsj82z wrote
Oh come on
LiquidateGlowyAssets t1_iqsse3y wrote
ok toaster
BobDope t1_iqvr1cu wrote
They would loose their bowels
KevinRSX t1_iquxlrx wrote
Bad bot
LearnDifferenceBot t1_iquypiv wrote
Bad human.
insanelylogical t1_iqskava wrote
Is it weird that I recognized Lex because of that hairline that doesn't know where to stop (which I am also jealous of)?
shepik t1_iqv3zbk wrote
>would loose their
*loss
pm_me_your_pay_slips t1_iqsuta1 wrote
My first ML paper
Magneon t1_iquektg wrote
Other common ones:
> We fiddled with the hyperparameters without mentioning it, and didn't create a new validation set
and
> What prompted the layer configuration we selected? I dunno, it seemed to work best.
BrotherAmazing t1_iqtiyud wrote
This looks more like a post worthy of the "meme" tag than "discussion".
Fit_Schedule5951 t1_iqu4d33 wrote
This has been on twitter for a while now, just reddit things.
superawesomepandacat t1_iqtcxtl wrote
Top right is usually how it works outside of academia, data-iterative modelling.
forensics409 t1_iqu8xlc wrote
I'd say these are 99% of papers. 0.99% are review papers and 0.01% are actually cool papers.
MangoGuyyy t1_iquwrit wrote
Andrew Ng - Coursera - AI educator - Stanford
Geoffrey Hinton - deep learning godfather - Canada
Yann LeCun - chief AI at Meta - deep learning godfather - CNN pioneer
Yoshua Bengio - deep learning godfather
Daphne Koller - cofounder Coursera - comp bio - Stanford prof
Fei-Fei Li - Stanford Vision Lab
Jeremy Howard - cofounder Fast AI - AI educator
Jeff Dean - Google engineer
Andre kaparthy - Tesla AI head
??
Demis Hassabis - DeepMind head
Lex Friedman - MIT AI prof - YouTuber / podcaster
rm-rf_ t1_iqw3q4u wrote
> Lex Friedman - MIT AI prof - YouTuber / podcaster
Lex is not a professor at MIT.
MangoGuyyy t1_iqypdyb wrote
Oh, my bad, I assumed he was. Did he just do lectures there?
MostlyRocketScience t1_iqvp94b wrote
> Andre kaparthy - Tesla AI head
He's left Tesla and is a YouTuber now
moschles t1_iqtccm9 wrote
Some feelings were hurt by this meme.
KeikakuAccelerator t1_iquntc2 wrote
Bruh, Demis Hassabis and his team literally solved protein folding.
show-up t1_iqut6en wrote
Demis Hassabis: model provably surpasses human-level perf on this handful of tasks.
Media: Congrats!
Researcher spending more time on social media than the PI would like:
Results are 0.1% better than that other paper. Kek.
Acceptable-Pattern93 t1_iqv4ppq wrote
Hello friend, My name is Siraj
jcoffi t1_iqu9vqw wrote
I feel attacked
emerging-tech-reader t1_iqubs5s wrote
I saw an NLP-ML one a few years back that had a conclusion of "This would never work", and they really tried. (I forget what they were trying to do.)
ScottTacitus t1_iqvht66 wrote
More of these need to be in Mandarin to represent
EquivalentSelf t1_iqshd3m wrote
brilliant lol
anyspeed t1_iqt54zp wrote
I find this paper disturbing
bxkugzu t1_iqth561 wrote
can you explain what it is?
TheReal_Slim-Shady t1_iqurhqq wrote
When papers are produced to land jobs or advance careers, that is exactly what happens.
Fourstrokeperro t1_iqw4vuv wrote
"We plugged one Lego block into another" is too real omg
whatisavector t1_iqweio2 wrote
academia in a nutshell
Frizzoux t1_iqx0p6q wrote
Lego block gang !
gonomon t1_iquweo8 wrote
This is perfect.
sephiap t1_iqvw3pk wrote
Man that last one is amazing, what a way to get your citation count up - goad the entire community. Totally going to use this.
IndieAIResearcher t1_iqvyh8f wrote
Where is Jurgen?
supermopman t1_iqwdxmz wrote
What's the source for the images?
IlIIlIlIlIIlIIlIllll t1_iqwy6ft wrote
Repost? I've seen this before... oh yes, here.
jonas__m t1_ircbeek wrote
missing from the list: Present 10 sophisticated innovations when only one simple trick suffices, to ensure reviewers find the paper "novel"
Quetzacoatl85 t1_irzr5vy wrote
It's fun and I chuckled, but I'd also say this covers the majority of papers in any scientific field, and that that's ok. This is how science works; it can't all be groundbreaking, status-upsetting, and axiom-refuting.
JanneJM t1_iqt4922 wrote
One of these is not like the others.
"Prove something known empirically" is actually useful and important.