OpeningVariable t1_jaa3ldd wrote
Reply to comment by bitemenow999 in [D] What do you think of this AI ethics professor suggestion to force into law the requirement of a license to use AI like chatGPT since it's "potentially dangerous"? by [deleted]
This is not about academic labs, but about industry, governments, and startups. It is one thing that Microsoft doesn't mind rolling out a half-assed BingChat that can end up telling you ANYTHING at all - but should they be allowed to? What about Tesla? Should they be allowed to launch, and call "autopilot", an unreliable piece of software that they know cannot be trusted and that they do not fully understand? I think not.
OpeningVariable t1_ja9xo2h wrote
Reply to [D] What do you think of this AI ethics professor suggestion to force into law the requirement of a license to use AI like chatGPT since it's "potentially dangerous"? by [deleted]
I think requiring an audit of models and data before a model can be used commercially is not such a bad thing. E.g., auditing ChatGPT and granting permission for specific kinds of commercial use - once we figure out what those uses are, and what tools we can use for auditing the models.
OpeningVariable t1_j9g5llt wrote
Reply to [D] Maybe a new prompt injection method against newBing or ChatGPT? Is this kind of research worth writing a paper? by KakaTraining
I don't think it can make a "real" research paper, but it surely is interesting to know. Writing it up as a short workshop paper could work. And if you continue working on this and collect multiple instances of observations and injections over time, it could maybe become an overview article, something that could go in a journal.
OpeningVariable t1_jaa8zp8 wrote
Reply to comment by bitemenow999 in [D] What do you think of this AI ethics professor suggestion to force into law the requirement of a license to use AI like chatGPT since it's "potentially dangerous"? by [deleted]
BingChat is generating information, not retrieving it, and I'm quite sure that we will see lawsuits as soon as this feature becomes public and some teenager commits suicide over BS that it spat out or something like that.
Re the tool part - yes, exactly, and we should understand what that tool is good for, or more specifically, what it is NOT good for. No one writes airplanes' mission-critical software in Python; they use formally verifiable languages and algorithms, because that is the right tool for the amount of risk involved. AI is being thrown at everything, but it isn't a good tool for everything. Depending on the amount of risk and exposure in each application, there should be different regulations and requirements.
>Most of the startups are off shoots of academic labs.
This was a really bad joke. First of all, why would anyone care about offshoots of academic labs? They are no longer academic - they are in business and can fend for themselves. Second of all, there is no way most startups are offshoots of academic labs; most startups are looking for easy money and throw in AI just to sound cooler and attract more investors.