tripple13 t1_jeeaumr wrote
Reply to comment by 314kabinet in [D][N] LAION Launches Petition to Establish an International Publicly Funded Supercomputing Facility for Open Source Large-scale AI Research and its Safety by stringShuffle
the AI Diversity, Equity and Inclusion community (AI Ethics)
tripple13 t1_jedt3cb wrote
Reply to [D][N] LAION Launches Petition to Establish an International Publicly Funded Supercomputing Facility for Open Source Large-scale AI Research and its Safety by stringShuffle
Now that's a petition I can stand for.
Democratization of LLMs and their derivatives is, in fact, the AI-safe way - counterintuitive as that may sound to the AI DEI folks.
tripple13 t1_je5seed wrote
Reply to comment by mike94025 in [D] PyTorch 2.0 Native Flash Attention 32k Context Window by super_deap
tripple13 t1_jdz4bch wrote
Reply to comment by ZestyData in [P] 🎉 Announcing Auto-Analyst: An open-source AI tool for data analytics! 🎉 by aadityaubhat
+1
tripple13 t1_jcoq61v wrote
Reply to [Discussion] Future of ML after chatGPT. by [deleted]
Did you create an account, just to ask this question?
I don't think either CV or NLP is going away. CV has yet to be solved to the same extent as NLP, but I agree it might just be a matter of time.
Research-wise, there are still tons of problems around uncertainty, complexity, causality, 'real-world' problem solving (domain adaptation), and so forth.
Just don't compete on having the largest cluster of GPUs.
tripple13 t1_jck9593 wrote
Does anyone know why they didn't add FlashAttention directly into the `MultiheadAttention` modules?
Edit: Seems to be integrated, awesome!
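For context, PyTorch 2.0 exposes the fused kernel as `torch.nn.functional.scaled_dot_product_attention`, which dispatches to FlashAttention on supported GPUs and falls back to other backends elsewhere. A minimal sketch (shapes and tolerance are my own illustrative choices) checking it against the plain softmax-attention formula:

```python
import torch
import torch.nn.functional as F

# Toy tensors with shape (batch, heads, seq_len, head_dim)
q = torch.randn(2, 4, 16, 8)
k = torch.randn(2, 4, 16, 8)
v = torch.randn(2, 4, 16, 8)

# Fused attention: dispatches to FlashAttention on supported hardware,
# otherwise falls back to a memory-efficient or plain math backend.
out = F.scaled_dot_product_attention(q, k, v)

# Reference: the textbook softmax(QK^T / sqrt(d)) V formulation.
scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
ref = scores.softmax(dim=-1) @ v

assert torch.allclose(out, ref, atol=1e-4)
```

All backends compute the same attention; the fused path just avoids materialising the full attention matrix.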
tripple13 t1_jb0ksx6 wrote
Reply to To RL or Not to RL? [D] by vidul7498
I find it quite ridiculous to discount RL. Optimal control problems have existed since the beginning of time, and for the situations in which you cannot formulate a set of differential equations, optimizing obtuse functions with value or policy optimization could be a way forward.
It reminds me of the people who discount GANs due to their lack of a likelihood. Sure, but can it be useful regardless? Yes, actually, it can.
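To illustrate the "no differential equations, optimize anyway" point: a score-function (REINFORCE-style) policy-gradient sketch on a black-box reward. The reward function, Gaussian policy, and hyperparameters here are all made-up illustrative assumptions, not from any particular method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Black-box reward: we can evaluate it, but we have no analytic gradient
# and no system of differential equations describing the dynamics.
def reward(a):
    return -(a - 3.0) ** 2

mu, sigma, lr = 0.0, 1.0, 0.05  # Gaussian policy N(mu, sigma^2)

for _ in range(500):
    a = rng.normal(mu, sigma, size=64)  # sample a batch of actions
    r = reward(a)
    b = r.mean()                        # baseline to reduce variance
    # Score-function estimator: grad_mu log N(a; mu, sigma) = (a - mu) / sigma^2
    mu += lr * np.mean((r - b) * (a - mu) / sigma**2)

# mu drifts toward the reward-maximizing action, 3.0
```

The point being: only reward evaluations are needed, no closed-form model of the system.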
tripple13 t1_ja6xhe0 wrote
If all you do is follow trends and what's in the "spotlight", you probably don't care about your research, only about the accolades.
tripple13 t1_j97cxap wrote
Reply to comment by IDefendWaffles in [D] Is Google a language transformer like ChatGPT except without the G (Generative) part? by Lets_Gooo_123
Yes, indeed. While the lightbulb may contain properties which may or may not exhibit the Quantum Tunnel Effect (QTE), one must take great care not to confuse this with Superposition Lightspeed Diffraction (SLD), as it is of paramount importance that we do not make light of such phenomena - essentially making all of humanity into sub-particle atoms in the progress towards enlightenment.
tripple13 t1_j8r8f7i wrote
This reads like some of those posts criticising open-source frameworks that don't always behave intuitively.
While I don't disagree that there are bugs, Hugging Face is doing more for Open ML than many large tech companies are doing.
HuggingFace, fastai, and similar frameworks are designed to lower the barrier to entry for ML, so that anyone with programming skills can harness the power of SoTA ML progress.
I think that's a great mission tbh, even if there are some inevitable bumps on the road.
tripple13 t1_j7dv4j7 wrote
Reply to comment by AdFew4357 in Are PhDs in statistics useful for ML research? [D] by AdFew4357
You're hired!
tripple13 t1_j7cpqqm wrote
Sure, could very well be.
Just have to leave all your p-values at the door.
tripple13 t1_j783ca4 wrote
Reply to [N] GitHub CEO on why open source developers should be exempt from the EU’s AI Act by EmbarrassedHelp
This would be inherently bad, and would create great opportunities for China, the US, the UK, and elsewhere.
I'd like to believe they are smarter than this, but then again, I don't.
tripple13 t1_j723bf0 wrote
Reply to comment by new_name_who_dis_ in [D] Understanding Vision Transformer (ViT) - What are the prerequisites? by SAbdusSamad
I strongly disagree. Having an understanding of seq2seq models prior to Transformers goes a long way.
tripple13 t1_j71b0xh wrote
Reply to comment by visarga in [D] Is computer science one of the most threatened jobs due to AI? by Suspicious-Spend-415
Certainly, one hundred per cent agree, if I understand you correctly.
Don't know about human entitlement, but from a simple time/energy-limitation perspective:
- The more time and energy you have in surplus, the more you're able to achieve. What is stopping humankind from populating the universe?
I'm sure time and energy are among the reasons.
tripple13 t1_j70wvid wrote
History has shown what happens at technological breaking points. Yes, you may not want to earn a living as a horse-carriage chauffeur; however, there are opportunities to become a car chauffeur.
I think your premise is wrong, it’s not about replacement, it’s about evolution.
It’s not about ‘threatening’ jobs, but improving certain aspects of them.
tripple13 t1_j6ihcum wrote
GPUs my friend. GPUs. I pray everyday, one day, an H100 may come my way. And yet, everyday, I pray, no H100 is yet here to stay.
tripple13 t1_j67xya1 wrote
Reply to [D] Laptop recommendations for ML by PleasantBase6967
Can we get a bot to autodelete these kinds of posts?
tripple13 t1_j4m7ykq wrote
Well, somehow I expected TD's conclusion to be "Skip the current gen, wait for newer gen"
And yet, here we are.
tripple13 t1_j2ohf7r wrote
I continue working on that long backlog of things I'd like to implement:
- Additional models (change of encoder/decoder for future runs)
- Additional loss parameterisations (because you can never get enough)
- Additional dataloaders for the inclusion of more datasets (because without killing penguins, no paper)
- Additional bug-squashing/refactoring which I've put off, leaving TODO comments in odd places in my code
tripple13 t1_j1lh7ce wrote
Reply to [R][P] I made an app for Instant Image/Text to 3D using PointE from OpenAI by perception-eng
Wow, super cool.
Now why do I feel like all my ideas are being gobbled up by OpenAI hahaha.
Bravo, nevertheless.
tripple13 t1_j1cu0yj wrote
God I love this post.
More genuine passion in this sub, please!
Keep us updated on your progress, would be great to follow.
tripple13 t1_j0zv1a5 wrote
Reply to [D] Why are we stuck with Python for something that require so much speed and parallelism (neural networks)? by vprokopev
Speed, quite literally.
Not computation, but ease of implementing a new idea and making a proof of concept.
Researchers try to maximize time spent iterating through failures, rather than spending a lot of time perfecting a technique. (Generally speaking.)
tripple13 t1_j0ysjpi wrote
Reply to comment by andreichiffa in [D] Will there be a replacement for Machine Learning Twitter? by MrAcurite
Yeah, maybe. I think people will find there is no better alternative - For now.
tripple13 t1_jeetdjn wrote
Reply to comment by zbyte64 in [D][N] LAION Launches Petition to Establish an International Publicly Funded Supercomputing Facility for Open Source Large-scale AI Research and its Safety by stringShuffle
What? How do you read that from my text?
I think most of them probably care, just as much as I'd assume you and I do, about how the next few years play out for the benefit of mankind.