Submitted by Neo-Geo1839 in singularity
I believe that this issue, while not as pressing today, will become quite important in the near future as artificial intelligence evolves and its image generation, text generation, etc. become more human-like and less distinguishable from typical human-made images or text.
How will we go about regulating AI art (which people will use for their own gain), deepfakes, AI-generated text, etc., and how would we enforce those rules? Can it really be as simple as stamping the name of the software used to create the images/videos? And what about text? Or deepfakes? How will we be able to fact-check whether a political figure actually said or did something, rather than it being AI-generated?
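For what it's worth, the "stamp the software name" idea is trivial to implement but also trivial to defeat, which is where the enforcement problem lives. Here's a minimal sketch in Python using Pillow; the file names and the "generator" tag are made up for illustration:

```python
from PIL import Image, PngImagePlugin

# Hypothetical provenance stamp: record the generating software
# in the PNG's text metadata chunks.
meta = PngImagePlugin.PngInfo()
meta.add_text("generator", "ExampleDiffusionModel-v1")

img = Image.open("generated.png")
img.save("stamped.png", pnginfo=meta)

# The enforcement problem: a plain re-save strips the stamp.
Image.open("stamped.png").save("laundered.png")

print(Image.open("stamped.png").text)    # {'generator': 'ExampleDiffusionModel-v1'}
print(Image.open("laundered.png").text)  # {}
```

This is why more serious proposals (e.g. cryptographically signed provenance like C2PA, or watermarks embedded in the pixels themselves) exist, though those can be attacked too.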
How would we (in the future) be able to tell a human-written essay from an AI-written one? For instance, how would we know whether a student with low grades deliberately used AI to produce an average-tier essay or wrote it himself, beyond pure subjective judgment? I would really like to hear your thoughts on this, as it could have profound consequences for society. Not just with images, but also with deepfakes, which can be used to sway public opinion and hurt or boost the popularity of a political figure, or any public figure.
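One known heuristic (not something the thread mentions, just a common approach in existing detectors) is to score text by how predictable it is under a language model: machine-generated text tends to have lower perplexity. A rough sketch with Hugging Face transformers and GPT-2; the cutoff is arbitrary, and this class of detector is famously unreliable, which is rather the point:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Mean perplexity of the text under GPT-2 (lower = more predictable)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels == inputs makes the model return the
        # mean cross-entropy loss over the sequence.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

essay = "The causes of the French Revolution were numerous and complex."
score = perplexity(essay)
print(f"perplexity = {score:.1f}")
print("flagged as possibly AI-generated:", score < 40)  # threshold is a guess
```

Even state-of-the-art detectors built on ideas like this produce false positives and are easily fooled by light paraphrasing, so "pure subjectivism" may not be far off.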
sideways wrote
AI-made content will be faster, cheaper, and higher quality.