Comments
userbrn1 t1_iwni7hk wrote
I would caution against thinking this brings us close to FDVR; there is a fundamental difference between encoding and decoding neural patterns.
Progress like this is in decoding, meaning we take the neural patterns and try and determine what the person was trying to envision. This is analogous to neuralink monkeys playing pong with their minds or people controlling robotic limbs with their thoughts. We are making lots of progress in this area, and we can afford to be imprecise. For example, with a robotic limb, it's ok if your elbow bends 50 degrees instead of 49 degrees for the vast majority of tasks. Even fine motor tasks like writing have a point at which further precision is no longer useful. We also have the benefit of being able to measure the end result easily; we can measure the movement of limbs or the accuracy of cursor movement controlled by the mind.
Encoding would involve figuring out what signals we can put in to recreate a specific conscious experience; it is the opposite of the process above. Full dive VR would require us to master sending a signal that gets interpreted by our brain in a specific way. For example, if you're on the beach on a windy day, you'd need to find a way to send a signal so precise that your brain truly interprets it as its own vision, which is incredibly complex. You'd need to find a way to simulate the very complex sensation of wind blowing across your arms, moving your clothes in certain ways, deflecting specific hair cells. You have likely never felt the same gust of wind twice, because of how rich our conscious experiences are.
In contrast to decoding, encoding has orders of magnitude less room for error; if the sensation on your skin is even slightly off, you'll realize it's fake and weird. We also cannot easily measure the end result of encoding at all, since the end result is a conscious experience; imagine trying to describe in words to a researcher that your proprioceptive sense of where your limbs are in space relative to each other feels kind of off.
The only way to actually empirically iterate would be to first master decoding, build up a massive database of decoded human experiences, and then simulate trillions of fake walks on the beach into a human brain, hoping to get neural signals that, when decoded, are close to perfectly in line with the empirically derived decoded data from real human experiences. This is of course impossible to test on real humans and would likely require server farms lined with millions of human brains in jars, which, by definition, would have to be sentient and conscious for the results to be relevant to our own conscious experience.
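As a toy sketch of that iterate-and-compare loop (everything here is hypothetical; `brain_response` and `decode` are made-up stand-ins, not real neuroscience APIs):

```python
import math
import random

random.seed(0)

# Hypothetical stand-ins: a "brain" that turns an encoded stimulus into
# neural activity, and a trained decoder that turns activity back into
# an experience vector. Neither resembles real physiology.
def brain_response(stimulus):
    return [math.tanh(1.7 * s + 0.3) for s in stimulus]

def decode(activity):
    return [2.0 * a - 0.1 for a in activity]

def error(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Decoded data from a "real" experience: the empirical reference.
target = decode(brain_response([0.2, -0.5, 0.9]))

# Encode -> stimulate -> record -> decode -> compare, then adjust:
# a blind random search, since the experience itself is unobservable.
best = [random.gauss(0, 1) for _ in range(3)]
best_err = error(decode(brain_response(best)), target)
for _ in range(5000):
    cand = [b + random.gauss(0, 0.1) for b in best]
    err = error(decode(brain_response(cand)), target)
    if err < best_err:
        best, best_err = cand, err
```

The catch the sketch makes visible: the only feedback signal available is the decoder's output, so encoding quality can never be judged better than the decoder itself.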
tl;dr it's good that we're getting better at decoding neural data, but it is an entirely different problem from the encoding that FDVR requires. In my opinion we do not have a viable pathway to FDVR, due to our inability to empirically test neural encoding at the scale and precision needed to make FDVR worth doing.
Kaarssteun t1_iwnkbl5 wrote
Of course this is not magically enabling FDVR. The first step to encoding neural patterns is understanding how to decode them; that's what I'd like to stress here. I haven't seen any work this coherent before, and I'm excited!
Shelfrock77 t1_iwo0czy wrote
The person who replied to you is overcomplicating things. Encoding and decoding share such a monotonous relationship that it can sometimes be overlooked and taken for granted. Our consciousness is like VR, and it's proven that synthetic data can provide far more data for AI and humans to use in future artificial neural networks. First we get text-to-image for the brain, then we compile "time screenshots" to make text-to-video; then once we get text-to-3D-image and text-to-3D-video, reality will basically feel 100% blended. The singularity will unlock our lucid dreams, something our ancestors would drool over. To live in the dream realm again.
To make it simple, we are plugging our biological instruments into the same frequency "wirelessly" (still wired, just invisible to our eyes) into our computers for us to interpret back. We give the computer a command and it streamlines to another computer (our consciousness) to interface with it. That's why in the "old" days, when they said someone cast a "spell", it referred to spelling words out on the keyboard, or whatever you're using to remote-control someone. Imagine a cute little sim falling under a spell when you pop up in their world through a portal: they'll be so brainwashed with religion, they'll think you cast a spell on them or possessed them, because they disregard science and give more meaning to magic. To program, to brainwash, to be under a spell: all mean the same thing. This was my epiphany when I was on DMT.
We are always programmed, even when we think we aren't. Free will is an illusion; why I say this is because of multiverse "theory". Natural and synthetic are illusions. It doesn't matter if you are in a simulation; it only matters that you exist. When you get killed in a video game, you just respawn, or what we would call reincarnate. Ik I sound like Alan Watts right now lol, anyways back to playing MW2 Warzone.
AI_Enjoyer87 t1_iwo3dhc wrote
Magical rambling Shelfrock! What's your timeline predictions? 😈
Shelfrock77 t1_iwo8foc wrote
FDVR will probably be on the market anywhere between 2025 and 2028 for the first generation (I put the deadline at 2030 nonetheless). As for when we get new bodies, that happens when we biologically die. Once mind-uploaded, your decision decides your fate: you can choose to stay in the computer as a "virtual being with a body" and not have a "real" body, or you can choose a "real" body just like you choose a car at a dealership. I mean, I don't think it's far-fetched to say that we could print out sex bots of all kinds with their own synthetic genes, just like how we customize our characters in a video game. ASI may be able to help us with that; it'll be like in Cyberpunk 2077, where you can customize your hardware/mechanical biology. It's like Los Santos Customs but for your vessel, haha. Once we sync synthetic and natural data, reinventions will occur quickly in this solar system. We reinvent god/universes/existence/consciousness/soul.
BinyaminDelta t1_iwofjoh wrote
2025 is two years away. We're at "monkey playing pong" currently.
Can it accelerate? Yes, but there are many, many tech and bio problems that need to be solved and then perfected before FDVR.
I admire your optimism but wouldn't be surprised if Neuralink (or equivalent) takes five more years to be usable, and then another five to ten more to reach FDVR level.
AGI could accelerate that timeline if it is able to show us a biotech path we're not seeing. AGI also needs to exist first.
Shelfrock77 t1_iwog6jn wrote
That’s why I said 2025-2028, with 2030 as the deadline. My flair is a quote from a club full of billionaires addressing their plans: a privatized United Nations known as the World Economic Forum. I personally think it will happen between 2025 and 2028, but it could happen in 2029 or 2030. I honestly don’t think we have to wait much longer from the way things are headed, and it’s only 2022. We are going through the 4th industrial revolution era right now.
ihateshadylandlords t1_iwnnbqd wrote
Thank you so much for that breakdown
Redvolition t1_iwq2rwi wrote
I don't think encoding is going to be all that difficult. Once we figure out how to record signals traveling through nerves non-invasively, all we would have to do is install the tech on a few dozen people, then run them through stimulus sets, logging the correlations. Machine learning would do the rest.
userbrn1 t1_iwqb63y wrote
How would we log those correlations? The end result we are trying to achieve is a conscious experience; we cannot directly measure that, so I'm not sure what data we would put into the machine learning model lol
Redvolition t1_iwr6f4o wrote
Take the vestibular sense, for example:
Step 1: Intercept nerve signals to projection pathways via implant.
Step 2: Put human on motion capture suit.
Step 3: Run human through a variety of motions and positions.
Step 4: Correlate motion capture data with nerve signals.
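A minimal sketch of step 4 with made-up numbers (one motion-capture channel, one nerve channel, and an assumed roughly linear response; none of this is real physiology):

```python
import random

random.seed(1)

# Hypothetical toy data: a joint angle from the motion-capture suit and
# a vestibular nerve channel assumed to respond ~linearly to it.
angles = [random.uniform(-1.0, 1.0) for _ in range(200)]
nerve = [0.8 * a + 0.05 + random.gauss(0, 0.01) for a in angles]

# Step 4 as ordinary least squares: nerve ≈ slope * angle + intercept.
n = len(angles)
mean_a = sum(angles) / n
mean_s = sum(nerve) / n
slope = (sum((a - mean_a) * (s - mean_s) for a, s in zip(angles, nerve))
         / sum((a - mean_a) ** 2 for a in angles))
intercept = mean_s - slope * mean_a
```

Real nerve recordings would be high-dimensional spike trains rather than one clean channel, so the regression would be far messier, but the logging-correlations idea is the same.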
petermobeter t1_iwncpul wrote
finally we can get an accurate computer thingy of what someone’s lookin at
next stop: recordin dreams usin this technology
AI_Enjoyer87 t1_iwo2vsf wrote
This is awesome (I know it's been done before, but it produced images of very poor quality). Another step towards full dive VR is exciting no matter how many more steps there are to go. Hopefully we get extremely capable AI in the next year or two that can solve these problems lightning fast.
colonel_bob t1_iwopu3z wrote
What's really striking to me is that the reconstructed images don't have a direct correspondence to the prompts in terms of layouts or other basic visual arrangements but are undoubtedly "about the same thing" (at least in the examples provided)
vernes1978 t1_iwphtbo wrote
So, even tho it's not reconstructing the image directly from your fMRI data,
it is comparing your fMRI data with other people's fMRI data and the images associated with that data.
Does that mean we all have the same brain areas associated with abstract concepts?
-ZeroRelevance- t1_iwpjvxf wrote
Yeah, it’s been experimentally proven a few times. The example I remember is that even for speakers of different languages, the word for ‘apple’ in their language lights up the same part of the brain.
It makes sense to be honest. If our brains weren’t almost entirely determined by our genetics, there’s no way we’d all be as smart as we are.
vernes1978 t1_iwpni0x wrote
> be as smart as we are.
For a certain definition of "smart", of course.
*Takes a big bite of rainforest-killing, soybean-fed cow meat filled with microplastics*
But that means a person is a dataset applied to a "generally" identical neural net.
Ok, that statement might be generally a lie but this is my question:
What would happen if we could measure all the synaptic weights/values of brain model A, belonging to ZeroRelevance,
and just use those values to adjust the neurons in brain model B (belonging to vernes1978)?
How differently would brain model B react from ZeroRelevance?
How big would the difference be?
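In artificial-neural-net terms, a toy version of that transfer (purely an analogy; real brains are not tiny two-layer perceptrons):

```python
import math
import random

random.seed(2)

# Two tiny networks with identical architecture but different weights,
# as a loose stand-in for "brain model A" and "brain model B".
def make_net():
    return {
        "w1": [[random.gauss(0, 1) for _ in range(3)] for _ in range(4)],
        "w2": [random.gauss(0, 1) for _ in range(4)],
    }

def forward(net, x):
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)))
              for row in net["w1"]]
    return sum(w * h for w, h in zip(net["w2"], hidden))

net_a, net_b = make_net(), make_net()
x = [0.3, -0.7, 0.5]
before = abs(forward(net_a, x) - forward(net_b, x))  # generally nonzero

# "Transfer" A's weights into B: same architecture, same values.
net_b["w1"] = [row[:] for row in net_a["w1"]]
net_b["w2"] = net_a["w2"][:]
after = abs(forward(net_a, x) - forward(net_b, x))   # exactly zero
```

With identical architecture, copying the values makes the two models indistinguishable; the open question is whether two brains share enough "architecture" for the analogy to carry over at all.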
-ZeroRelevance- t1_iwpo1pn wrote
You’re asking what would happen if all the neurons in your brain were rewired to be the same as mine? In a purely theoretical case, you would react exactly the same as I would, but in practice, the differences in the rest of our bodies would:
- mean that there may be some issues in sensing the world and controlling the body
- have that variation in stimulus lead to differences in the responses
On the other hand, though, if you had a brain in a vat that was wired to be identical to mine, and you also put my own brain in a vat, any given stimulus to either brain should give identical responses, since there should be no fundamental difference between them.
vernes1978 t1_iwqlhw5 wrote
I wasn't aware that the wiring (connectome) was the data.
I kinda assumed there was an electro-chemical factor involved, where each neuron had different trigger conditions as the result of a learning process.
I was imagining that these factors could be transferred to a brain with a different connectome.
Since this image prediction was possible using fMRI data, I was wondering if our connectomes could be similar enough that transferring this (assumed) electro-chemical state of neurons would result in a personality similar enough to represent the person whose electro-chemical state you transferred to a different brain (connectome-wise).
Although this is science fiction stuff, it would be an interesting question whether or not you could clone yourself into a standardized artificial brain by copying these electro-chemical variations.
-ZeroRelevance- t1_iws8y1g wrote
I’ll admit I didn’t really consider the neurons themselves as separate from the wiring in my answer. Since neurons are created based on genetic code, every person’s neurons would likely react slightly differently, leading to a different end result. If you also consider the activation conditions to be different from the wiring, that would also obviously lead to pretty big differences, because the activation conditions are just as important as the wiring.
I just kind of combined both of those into my previous answer, which is why I concluded that there would be no differences. If it were solely the wiring, though, then there would likely still be big differences.
Keep in mind though that I’m far from an expert in anything to do with brains, just an enthusiast, and all of this is just my speculation based on what I know about brains and AI.
Comfortable-Ad4655 t1_iwnbf7d wrote
wut
FontaineFuturistix t1_iwrnee5 wrote
Whether the idea has merit or not, this is something that should never be accomplished. A person's thoughts are their own, and no science should be trying to pluck them out of their head and onto a screen.
SkaldCrypto t1_iwpke89 wrote
This is actually bullshit.
Sorry to say, but all the fMRI papers got debunked in mid-2021. If I remember correctly, it was the University of Pennsylvania medical school that gave them a thorough dressing-down.
Kaarssteun t1_iwncqco wrote
If you're into FDVR, this is huge. The first step to streaming artificial stimuli directly to your brain is understanding how we interpret them in the first place. While the nature of neural networks may not bring us, as humans, close to intellectually understanding the brain, this obviously shows an insane degree of "comprehension". Perhaps the tools we need to decode our brains are simply artificial ones.