Submitted by ryusan8989 t3_yzgwz5 in singularity
blueSGL t1_ix1jt1d wrote
Reply to comment by onyxengine in 2023 predictions by ryusan8989
How many times this year did we see an image-gen model released, only to be swiftly followed by a different company showing off theirs?
What makes people think GPT-4 won't be like that, with one of the others being better in some aspect or another?
UniversalMomentum t1_ix26hsu wrote
I just see real AI as not having a ton of uses compared to simple things like finding new drug and novel material designs using good ol' fashioned machine learning.
Yeah, it's neat that you can draw using fake AI/machine learning, but things like that aren't as important as cancer vaccines and graphene nanostructures, for instance.
AsuhoChinami t1_ix2s26c wrote
While I agree, I think improved AI will likewise be better at advancing medical tech.
botfiddler t1_ix4keuj wrote
Image generators aren't that much more human-like than some protein-folding simulation AI. They still don't know what anything in the picture means. Both are important, though. Imagine crushing big corporations' oligopoly on content creation. Someone who could make a comic on his own could make five with his characters, much faster. Or he could at some point make an anime based on his characters.
AsuhoChinami t1_ix5r14w wrote
I think AI has to have some kind of understanding in order to perform so well. AI in the past performed poorly because it obviously had poor understanding. I think "AI has no understanding" is kind of an unfalsifiable argument - it's suspect that something with no form of understanding whatsoever could produce such accurate and well-formed results, but it's also impossible to argue for or against.
botfiddler t1_ix6fi2m wrote
Yeah, well, I'd say it understands how the words in the prompt relate to certain image elements, and how those elements relate to each other. Nothing outside of that - not physics, not the human meaning of such pictures, and so on.
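For a concrete picture of that kind of word-to-image association, here's a minimal sketch using the openly available CLIP model through the Hugging Face transformers library (the model checkpoint, image URL, and captions are just illustrative, not anything from this thread):

```python
# Minimal sketch: score how well each caption "matches" an image.
# This is the text-image association CLIP learns from data - nothing more.
from PIL import Image
import requests
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Illustrative test image (two cats on a couch, from the COCO dataset)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

captions = ["a photo of two cats", "a photo of a dog", "a diagram of physics"]
inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)

outputs = model(**inputs)
# logits_per_image holds the similarity between the image and each caption
probs = outputs.logits_per_image.softmax(dim=1)
for caption, p in zip(captions, probs[0]):
    print(f"{p:.2f}  {caption}")
```

The cat caption scores highest because the model has learned which words co-occur with which pixels - but, as said above, it knows nothing about what a cat is outside of pictures and captions.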
CriminalizeGolf t1_ix9p3k7 wrote
https://plato.stanford.edu/entries/chinese-room/
There's a philosophical thought experiment (the Chinese Room) about the difference between functional understanding and "true" understanding.
blueSGL t1_ix4n90v wrote
I see image generation as an easy 'foot in the door': something that can be played with locally and that will pull people into the space who were never interested in any sort of AI/ML before. That's the true boon of this sort of tech getting out there - it will be helping with other advancements, just not directly.
overlordpotatoe t1_ix1pgto wrote
I guess we can't account for things other people are working on behind the scenes. GPT-4 should be pretty big, but of course there's no guarantee another company hasn't been quietly working on something even better.
SupportstheOP t1_ix30gcg wrote
I'm wondering if that played into GPT-4's development. There's immense competition to be cutting-edge and first in the AI race, and releasing an AI system that lags behind the competition is basically not releasing an AI at all. It's possible they delayed the release to make sure GPT-4 can go toe-to-toe with any would-be competitors.
overlordpotatoe t1_ix34y9o wrote
It's definitely become a fast-paced, competitive space. Nobody can afford to stagnate.
KidKilobyte t1_ix66ts8 wrote
Likely Meta thought they had a "good enough" AI to gain first-mover advantage with Galactica, but they had to withdraw it after only 3 days due to poor reception. I doubt this will happen to GPT-4. It will be interesting to see how quickly others can catch up. I suspect Google/Alphabet has stuff even more powerful than GPT-4 but isn't releasing it yet due to alignment issues or public reaction to jobs going away.
https://www.reddit.com/r/Futurology/comments/yytfpr/meta_has_withdrawn_its_galactica_ai_only_3_days/