Submitted by nick7566 t3_z0um9t in singularity
Comments
botfiddler t1_ix8h6c6 wrote
Funny how "Faster than expected!" is almost a meme, here and in r/collapse at the same time.
182YZIB t1_ix9nlaf wrote
We're the same demographic, you and me.
nick7566 OP t1_ix7h278 wrote
Abstract:
>DreamFusion has recently demonstrated the utility of a pre-trained text-to-image diffusion model to optimize Neural Radiance Fields (NeRF), achieving remarkable text-to-3D synthesis results. However, the method has two inherent limitations: (a) extremely slow optimization of NeRF and (b) low-resolution image space supervision on NeRF, leading to low-quality 3D models with a long processing time. In this paper, we address these limitations by utilizing a two-stage optimization framework. First, we obtain a coarse model using a low-resolution diffusion prior and accelerate with a sparse 3D hash grid structure. Using the coarse representation as the initialization, we further optimize a textured 3D mesh model with an efficient differentiable renderer interacting with a high-resolution latent diffusion model. Our method, dubbed Magic3D, can create high quality 3D mesh models in 40 minutes, which is 2x faster than DreamFusion (reportedly taking 1.5 hours on average), while also achieving higher resolution. User studies show 61.7% raters to prefer our approach over DreamFusion. Together with the image-conditioned generation capabilities, we provide users with new ways to control 3D synthesis, opening up new avenues to various creative applications.
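For anyone curious how the two stages fit together, here's a rough toy sketch of the recipe as I read the abstract. Everything in it is a stand-in I made up (the grid, the renderer, and the "guidance" are dummies), not the authors' code or API:

```python
import torch

torch.manual_seed(0)

def sds_style_update(params, render_fn, guidance_fn, steps, lr=1e-2):
    """Optimize 3D parameters so their 2D renders please a (stand-in) diffusion prior."""
    opt = torch.optim.Adam([params], lr=lr)
    for _ in range(steps):
        image = render_fn(params)            # differentiable render (a real system samples a random camera)
        grad = guidance_fn(image).detach()   # direction the prior pushes the image; no backprop into the prior
        loss = (grad * image).sum()          # score-distillation-style surrogate: d(loss)/d(image) == grad
        opt.zero_grad(); loss.backward(); opt.step()
    return params

# Dummy stand-ins; the paper uses a sparse hash-grid NeRF, a ray marcher /
# differentiable rasterizer, and frozen low-res / latent diffusion models.
def render_lowres(field):  return field.mean(dim=0)   # fake coarse view
def render_highres(mesh):  return mesh.mean(dim=0)    # fake refined view
def lowres_guidance(img):  return img - 0.5           # pretend prompt guidance
def latent_guidance(img):  return img - 0.5

# Stage 1: optimize a coarse scene under low-resolution supervision.
coarse_field = torch.randn(32, 64, 64, requires_grad=True)
coarse_field = sds_style_update(coarse_field, render_lowres, lowres_guidance, steps=200)

# Stage 2: "extract" a textured mesh from the coarse result, then refine it
# at higher resolution against a latent diffusion prior.
mesh_params = coarse_field.detach().clone().requires_grad_(True)  # stand-in for mesh extraction
mesh_params = sds_style_update(mesh_params, render_highres, latent_guidance, steps=200)
```

The real pipeline differs in every detail (hash-grid encoding, mesh extraction, camera sampling, guidance weighting); the point is just the coarse-then-refine structure described in the abstract.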
idranh t1_ix89xpd wrote
Can I please get my head around text-to-image advancing so quickly first? This is a lot.
GeneralZain t1_ix8cz6j wrote
no, and the whole field of AI will only get faster and faster from here on... that's the whole point of the singularity.
Advancements happening too fast to predict what's next, too fast to keep up.
idranh t1_ix8elha wrote
You're right. Once these advancements get on the radar of the public as a whole... things will get crazy. Future Shock, anyone? We're really not built to understand exponential growth, even people on this sub. I remember you saying text-to-video would follow quickly after DALL-E 2 dropped, and people here were saying 5-10 years! The next couple of years are going to be WILD.
GeneralZain t1_ix8kmxx wrote
haha yeah man I wasn't joking around when I said that :P

it's only gonna get faster. 2023 may be the tipping point imo.
idranh t1_ix8mjkl wrote
Timelines are getting shorter and shorter, at least it feels that way. I've recently come around to AGI in 2029, but this year it feels like it might happen even sooner. I used to think 2025 would be the year things got weird, but that could be next year! I'm on this sub and r/Futurology all the time and I'm having a hard time keeping up! I fear the rest of the decade is going to be destabilizing. The 20s are going to be a trip; the decade started with a once-in-a-century global pandemic! How will it end?
GeneralZain t1_ix8osxj wrote
we will find out soon enough :)
dasnihil t1_ix8mqig wrote
it's just a complexity/dimensionality issue. with 3d, the training and diffusion principles are the same, but your matrix gets one more dimension and the dataset has to be of a different nature. since we don't have such datasets for training, i think these people used the 2d-trained model to create output in a dummy 3d space. i've done 3d modeling/rendering before and the challenge is huge. it's still early, but it's gonna mature soon like everything else we've seen.
just wait for AI to publish more computer science research papers and just outdo itself; we just sit back and enjoy the show. DeepMind's AI already improved on matrix multiplication a few weeks ago, something humans couldn't do in 50+ years.
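to make the matrix multiplication bit concrete (that result was DeepMind's AlphaTensor, as I understand it): the kind of thing being improved is a multiplication *scheme* that uses fewer scalar multiplies than the obvious method. here's the classic Strassen trick it's measured against, doing a 2x2 product with 7 multiplications instead of 8. just an illustration of the kind of saving being searched for, not AlphaTensor's own scheme:

```python
import numpy as np

def strassen_2x2(A, B):
    # Strassen (1969): 2x2 matrix product with 7 scalar multiplications instead of 8.
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4,           m1 - m2 + m3 + m6]])

A = np.random.randint(0, 10, (2, 2))
B = np.random.randint(0, 10, (2, 2))
assert np.array_equal(strassen_2x2(A, B), A @ B)  # matches the schoolbook result
```

applied recursively (treating the entries as sub-blocks), saving one multiply per level is what drops the exponent below 3. AlphaTensor's search found schemes like 47 multiplications for 4x4 matrices over GF(2), beating the 49 you get from Strassen, which is where the "50+ years" comes from.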
idranh t1_ix8q2p6 wrote
>AI to publish more computer science research papers and just outdo itself
Wait WHAT?!
dasnihil t1_ix8th1x wrote
sorry, i meant wait for AI to start publishing research papers and peer reviewing them with other AI models.
idranh t1_ix8tmtt wrote
Just the thought of AI publishing its own research papers is INSANE.
[deleted] t1_ixb0j94 wrote
Read the prelude to Life 3.0 by Max Tegmark.
It's amazing.
idranh t1_ixb30lu wrote
Thx for the recommendation!
My_reddit_strawman t1_ixbzehs wrote
I just did. Wow if only
KIFF_82 t1_ix7hvzb wrote
That was impressive… haha, can I use it?
NomzStorM t1_ixamvev wrote
This is a lot like what the early 2D models looked like; hyped to see this so early.
PrivateLudo t1_ix9e03w wrote
Hey, but guys… telling r/futurology that AGI is coming in 2028 is crazy!
Particular_Leader_16 t1_ix8t94f wrote
At this point, AGI might come in the next few years.
WashiBurr t1_ix9bud1 wrote
That's incredible. NVIDIA has been absolutely nailing it recently.
Hopeful-Treacle9045 t1_ixd6sz9 wrote
They'd do a lot better if they trained on real 3D, as explained here: https://medium.com/@pauljoeypowers/creating-equitable-3d-generative-ai-c7b9947cba69
expelten t1_ixgpzk5 wrote
Great, it's exactly what I thought and what I was looking for. I wish this type of model were open source so I could train it on my own data and improve it.
ninjasaid13 t1_ix8qqti wrote
> Our method, dubbed Magic3D, can create high quality 3D mesh models in 40 minutes
Deformero t1_ix90w9m wrote
Is it possible to use this in Google Colab?
Em0tionisdead t1_ixbuna5 wrote
Is this really that big of a deal tho?
SaudiPhilippines t1_ixclpl7 wrote
This artificial intelligence programme will positively impact the lives and careers of aspiring video game developers. The ability to create high-quality meshes without spending a tonne of time or resources will finally be available to everyone!
sumane12 t1_ix88d53 wrote
Well this happened quicker than I expected