Submitted by dryuhyr t3_125ltpj in Futurology
The recent post about the call for halting GPT-4+ development has got me thinking. Of course, I don’t think any of us trust our beloved lawmakers to grasp the intricacies of AI any further than they could throw a microchip, but what about others in the field?
I know that in philosophy there are many areas where people basically solved issues ages ago that are still plaguing us, just because the experts in the field aren’t the ones making the rules. It seems like guiding the development of AI is a topic that was just about as easy to theorize about in the 1990s as it is today. Is there any sort of consensus among those in the field about rules we should really be following going forward, which are of course being ignored by everyone with money and investments in this tech?
Dacadey t1_je4zamp wrote
There is no way to regulate AI development. It was possible with nuclear weapons because the massive scale of those projects, requiring hundreds of thousands of people and huge injections of money, made it possible for the leading superpowers to stop other countries from developing them.
In contrast, AI development can be done literally anywhere on a far lower budget. It's simply not possible to control the advances, which can also spread through the internet like the plague.