graham_fyffe
graham_fyffe t1_j4ne8yf wrote
Reply to comment by SoylentRox in How long until an AI is able to write a book? by Educational_Grab_473
Oh, and by the way, using human ratings of the model's output is exactly how ChatGPT is trained: reinforcement learning from human feedback (RLHF), i.e. human-in-the-loop reinforcement learning.
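To make that concrete, here's a toy sketch of the preference-learning step behind RLHF: fit a reward model so that, for each pair of outputs a human compared, the preferred one scores higher (a Bradley-Terry pairwise logistic loss). The features, data, and linear model here are all made-up stand-ins; real systems train a neural reward model over text and then fine-tune the policy against it.

```python
import math
import random

def score(w, x):
    """Reward model: a linear score over hand-made output features."""
    return sum(wi * xi for wi, xi in zip(w, x))

def train_reward_model(pairs, dim, lr=0.1, steps=2000):
    """Fit weights so the human-preferred output of each pair scores
    higher, by gradient ascent on the Bradley-Terry log-likelihood."""
    w = [0.0] * dim
    for _ in range(steps):
        preferred, rejected = random.choice(pairs)
        # P(preferred beats rejected) under the current model
        p = 1.0 / (1.0 + math.exp(score(w, rejected) - score(w, preferred)))
        # Push the preferred output's score up, the rejected one's down
        for i in range(dim):
            w[i] += lr * (1.0 - p) * (preferred[i] - rejected[i])
    return w

random.seed(0)
# Each "output" is a 2-d feature vector; pretend the human raters
# consistently prefer outputs with a higher first feature.
pairs = [((1.0, 0.2), (0.1, 0.9)),
         ((0.8, 0.5), (0.2, 0.4)),
         ((0.9, 0.1), (0.3, 0.8))]
w = train_reward_model(pairs, dim=2)
assert score(w, (1.0, 0.0)) > score(w, (0.0, 1.0))
```

The learned reward model then stands in for the human: the language model is further trained to produce outputs the reward model scores highly.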
graham_fyffe t1_j4ndc3j wrote
Reply to comment by SoylentRox in How long until an AI is able to write a book? by Educational_Grab_473
You can ask ChatGPT to write a summary of the story first, then the chapter names and chapter summaries, then each chapter one at a time. Try it! This hierarchical method can already achieve some of what you're talking about.
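The drafting loop described above looks roughly like this. Note that `generate` is a placeholder for a call to a real chat model (its name and canned output are mine, not any actual API); the point is the summary → outline → chapters structure, where each stage's output is fed into the next stage's prompt.

```python
def generate(prompt):
    """Placeholder for an LLM call; returns canned text for the demo."""
    return f"[model output for: {prompt[:40]}...]"

def write_book(premise, n_chapters=3):
    # Stage 1: one-paragraph story summary
    summary = generate(f"Write a one-paragraph summary of a story about {premise}.")
    # Stage 2: chapter names and summaries, conditioned on the summary
    outline = [generate(f"Given this summary: {summary}\n"
                        f"Name and summarize chapter {i}.")
               for i in range(1, n_chapters + 1)]
    # Stage 3: full chapters, conditioned on summary + chapter plan
    chapters = [generate(f"Given this summary: {summary}\n"
                         f"and this chapter plan: {plan}\n"
                         f"Write chapter {i} in full.")
                for i, plan in enumerate(outline, start=1)]
    return summary, outline, chapters

summary, outline, chapters = write_book("a lighthouse keeper who finds a map")
print(len(outline), len(chapters))  # 3 3
```

Because each chapter prompt carries the global summary, the model keeps long-range consistency it couldn't get from one flat "write me a book" prompt.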
graham_fyffe t1_j4gfpzf wrote
Reply to comment by thegreenwookie in This is something that we should keep in mind. by [deleted]
Most entertainment is derivative, yes. But there’s still a tiny little bit of new creativity infused in these derivative works, except for the really bad ones. This all adds up and contributes to the Zeitgeist. And they also incorporate elements of the Zeitgeist that didn’t originate in the entertainment industry.
But this doesn’t invalidate my point. Without that small portion of truly creative new stuff being made, the whole thing will freeze. The more human artists we replace with AI, the fewer new ideas will be added to the culture each year.
graham_fyffe t1_j4f8k02 wrote
Reply to comment by LittleTimmyTheFifth5 in This is something that we should keep in mind. by [deleted]
I guess we’ll see soon enough.
graham_fyffe t1_j4f7q8y wrote
Reply to comment by LittleTimmyTheFifth5 in This is something that we should keep in mind. by [deleted]
This is going to get stale real quick, isn't it? The AI is compelling right now because it taps into the Zeitgeist and we feel tickled by this. But if there are no more humans producing new content, or very few, then the Zeitgeist will essentially be frozen and it will only be reruns, reboots, and mashups from then on.
graham_fyffe t1_ixguzyl wrote
Bohmian mechanics invalidates all their main points. You only get all the simulation-supporting weirdness if you reject nonlocality, and nonlocality doesn't necessitate a simulation. So flip the argument on its head: "rejecting nonlocality leads us to conclude we are in a simulation, so hey, maybe let's not reject nonlocality so quickly, eh?"
graham_fyffe t1_j87i6tw wrote
Reply to comment by ElbowWavingOversight in Scientists Made a Mind-Bending Discovery About How AI Actually Works | "The concept is easier to understand if you imagine it as a Matryoshka-esque computer-inside-a-computer scenario." by Tao_Dragon
Look up "meta-learning" or the paper "Learning to Learn by Gradient Descent by Gradient Descent" (Andrychowicz et al., 2016) for a few examples.