bitemenow999

bitemenow999 t1_jaa5b9n wrote

What are you saying, mate? You can't sue Google or Microsoft just because they gave you wrong information... all software services come with limited or no warranty...

As for Tesla, the FMVSS and other regulatory authorities already take care of that... AI ethics is BS, a buzzword for people to make themselves feel important...

AI/ML is a software tool, just like Python or C++... do you want to regulate Python too, on the off chance someone might use it to hack you or commit some crime?

>This is not about academic labs, but about industry, governments, and startups.

Most of these startups are offshoots of academic labs.

0

bitemenow999 t1_ja9dl6k wrote

The problem is that the AI ethics debate is dominated by people who don't directly develop or work with ML models (like Gary Marcus) and who have a very broad view of the subject, often dragging the debate into science fiction.

Anyone who says ChatGPT or DALL-E models are dangerous needs to take an ML 101 class.

AI ethics at this point is nothing but a balloon of hot gas... The only AI ethics topic that has any substance is data bias.

Making laws to limit AI/ML use or to keep it closed-source is going to kill the field. Not to mention that the resources required to train a decent model are already prohibitive for many academic labs.

EDIT: The idea of a "license" for AI models is stupid, unless they plan to enforce license requirements on people buying graphics cards too.

31

bitemenow999 t1_j7m7ctl wrote

My boss during my internship at FB (now Meta) came from academia and had been a professor at a well-known university. He literally didn't write a single line of code during my 3 months there; all I/we (most of the team) got were scribbled notes from our weekly meetings on what to implement...

2

bitemenow999 t1_j7lqudl wrote

If you want to be an ML scientist and build actual models, you mainly need a lot of math and just enough programming skill for prototyping. Go with any language; if you can code what you want, that is great. One thing to note: in my experience, people in this field all have a grad education and research experience, and some of them don't code at all; they just write down the algorithms and let developers implement them, so you might want to consider that.

If you want to be an MLOps or data engineer, that doesn't require much math or an advanced degree; start with books specific to those fields, since those roles have a slightly different stack.

One rule of thumb, if you are just dipping your toes in, is to start with a language that has great free resources available; for ML (learning and prototyping) that happens to be Python, but you will need C++ if you actually want to deploy your model for a decent-sized industrial project.
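For example, a minimal sketch of that prototype-in-Python, deploy-from-C++ workflow, assuming PyTorch (the toy network and file name are just placeholders, not a specific recommendation):

```python
import torch
import torch.nn as nn

# Prototype: a toy two-layer network, quick to write and iterate on in Python.
model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 2),
)
model.eval()

# Export: trace the model into TorchScript so a C++ program using libtorch
# can load the exact same model without any Python dependency.
example_input = torch.randn(1, 16)
scripted = torch.jit.trace(model, example_input)
scripted.save("prototype_model.pt")  # load from C++ with torch::jit::load(...)
```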

0

bitemenow999 t1_j3e5jms wrote

I would suggest changing the publications to the standard citation format; nobody needs to know which journal/platform, all of them are bad anyway... Also, drop "PhD" from the professor's name; they are assumed to have a PhD by default. And drop "selected" from "selected projects".

3

bitemenow999 t1_j39qzwa wrote

>but that doesn't mean you can abdicate moral responsibility altogether.

If you design a car model, will you take responsibility for each and every accident involving that car, irrespective of whether it was human or machine error?

The way I see it, I am an engineer/researcher; my job is to provide the next generation of researchers with the best possible tools, and what they do with those tools is up to them...

Many will disagree with my opinion here, but if researchers in any field had stopped to think about every potential bad-apple case, we would not have many of the tools/devices we take for granted every day. Just because Redmond quit ML doesn't mean everyone should follow in his footsteps. Restricting research in ML (if something like that is even possible) would be akin to proverbial book burning...

2

bitemenow999 t1_j3784qe wrote

Dude, we are not building Skynet, we are just predicting whether an image is of a cat or a dog...

Also, like it or not, AI is being all but monopolized by big tech, given the huge models and the resources required to train them. It is almost impossible for an academic research lab to have the resources to train one of the GPTs, diffusion models, or any of the other SOTA models (without sponsorships). Regulating it will kill the field.

10

bitemenow999 t1_j2bzpsx wrote

I just finished a tool/ML model (a shallow one, but technically ML) that suggests keywords and ideas for your next paper/project to maximize citations...

A bit of background for motivation: a couple of weeks ago I was with a friend, getting drunk and talking about big tech's monopoly on deep learning and the publication patterns at major conferences, e.g. this year it was diffusion, last year it was transformers, etc. So I came home a bit drunk and wrote a script to scrape data from papers, nothing fancy, just the keywords and citation counts. I woke up to a decent-sized dataset, so I trained a quick decision tree (don't know why that made sense to my half-drunk brain). I sent it to some friends in my lab to play with, got some funny results and suggestions, and it looks like I am going to work on it a little more to add features as a side project.

So it takes a couple of keywords from you as input, e.g. your area, like x-ray/cancer detection or inverse PDE/system identification, and gives back the next couple of keywords to use, like diffusion, transformers, CLIP-guided, etc., as well as a predicted number of citations.
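For anyone curious, here is a rough sketch of what the citation-prediction half of such a pipeline could look like, assuming scikit-learn; the keyword-suggestion part isn't shown, and the toy keyword sets and citation counts are made up for illustration, not the actual scraped data:

```python
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.tree import DecisionTreeRegressor

# Toy "scraped" data: each paper is a set of keywords plus its citation count.
papers = [
    ({"transformers", "nlp"}, 120),
    ({"diffusion", "image-generation"}, 95),
    ({"x-ray", "cancer-detection", "cnn"}, 40),
    ({"inverse-pde", "system-identification"}, 15),
]
keyword_sets = [kw for kw, _ in papers]
citations = [count for _, count in papers]

# One-hot encode the keyword sets so the tree can split on individual keywords.
mlb = MultiLabelBinarizer()
X = mlb.fit_transform(keyword_sets)

tree = DecisionTreeRegressor(max_depth=3).fit(X, citations)

# Query: given a couple of seed keywords, predict an expected citation count.
query = mlb.transform([{"x-ray", "cancer-detection"}])
print(tree.predict(query))
```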

3

bitemenow999 t1_j1yxhfy wrote

You do realize the same results can be achieved irrespective of the "model"; by changing the number of neurons in one layer you are essentially creating a new model...

"Protecting" your model doesn't make sense unless it has some new type of math involved in which case you can patent the method.

What you can do is not disclose the training method (if it is somewhat unique) or not share the training data. Or you can wrap it up as software and copyright it.

2