Viewing a single comment thread.

The_One_Who_Slays t1_jdl8se6 wrote

That's actually amazing. Imagine an ability to record the dreams THAT YOU ALWAYS FORGET ABOUT AFTER WAKING UP, GODDAMMIT!

57

Throwaway-tan t1_jdlc6qs wrote

Yeah because it won't almost exclusively be used to violate the integrity of one's mind for the purposes of legal persecution and maximising workforce compliance through thought monitoring.

94

ginja_ninja t1_jdlm0ri wrote

The attempted implementation of mind jannies will be the breaking point for society where heads start rolling

23

ThisZoMBie t1_jdnal25 wrote

“Eh, I don’t care, I have nothing to hide.”

> The attempted implementation of mind jannies will be the breaking point for society where heads start rolling

I highly doubt it

4

WildGrem7 t1_jdpq72r wrote

The fucking worst. I had a co-worker who would say this when we were talking about Snowden like 10 years ago. I couldn't believe people actually thought like this.

2

Chard069 t1_jdorrcj wrote

Thinking unsanctioned stuff is a severe offense. Think nice, now. Or else. 8-(

1

chocolatehippogryph t1_jdmiuev wrote

Yeah man. We are on the precipice of horror and greatness...

Related anecdote: I met a German tech CEO, around 60-65, on an airplane once, and we were talking about potential near-future tech. I think we started talking about Neuralink; my mind went to the possibilities for increasing accessibility for disabled people, etc. He immediately started talking about how, if you could read people's minds, you could make sure they were paying attention at meetings and generally keep them focused and productive during work.

It was pretty horrifying, but I think this will happen. Wealthier people will see the benefits of technology-mind integration. For the poorest, it will just be another implement of control.

16

Throwaway-tan t1_jdmktew wrote

External "read only" brain-wave monitoring is one thing. Internal direct-interface chips are a whole other can of worms.

Computers are inherently insecure, and now you want to intrinsically tie your existence to one. OK, when someone ransomwares your free will, the government fires off a kill switch, or a rogue brain worm sends everyone into a bath-salts-style murder rage, I don't want to hear a peep from the optimists.

9

TheReverend5 t1_jdn5bxk wrote

There are already people receiving very beneficial therapies from secure, implanted brain-computer interfaces. The devices are built to make it impossible to deliver dangerous amounts of current.

The “optimists” in this case just have a better understanding of the current reality than you do.

1

avatarname t1_jdo15v3 wrote

They could already monitor computer screens, either with security cameras or with software, if you work in an office, yet at least where I am from they do not do it, even though it is possible. They do not even monitor the time you log in or log off, at least for the white-collar work I do.

Of course, it is different in manufacturing and warehouses etc., where people are treated as slaves... It has sadly always been the case that white-collar workers (especially those not in entry-level jobs) can slack off more than people who actually do hard physical work. I know part of that is that they think the freer we are, the better our brains will work and come up with million-dollar ideas, but still...

2

FeatheryBallOfFluff t1_jdm8y0u wrote

How is this in the futurology sub? On every thread with a new technology, everyone is hating on it because it will be used for oppression.

13

PLAAND t1_jdmeyim wrote

Because tools can be picked up by anyone, even shiny new ones, and we see very clearly who in the world has the power to pick up these tools and the kinds of things they tend to do with them.

11

Defiyance t1_jdmmsja wrote

Because if it can be used for that, it will be used for that by the current pricks in charge. Maybe we should restructure our society before we come up with a bunch of tech out of a dystopian wet dream.

5

Hiseworns t1_jdmc54a wrote

Well, I mean, look around at how all current and even old technology is and has been used.

4

Chard069 t1_jdoslth wrote

Electricity: Zap people and animals to death.
Mechanics: Crush people and critters to death.
Chemistry: Poison people and critters to death.
Mind-control: Scare people and animals to death.
Media: Bore people to death. Beware animals.
Politics: Bludgeon people to death. Run faster.

1

BaboonHorrorshow t1_jdn3z85 wrote

Because most Redditors are American and America is an inverted totalitarianism/oligarch-ruled dystopia.

4

Dentrius t1_jdntoo7 wrote

It's just some loud minority of people who think they're smarter than and above all the rest because they read or watched too much dystopian fiction and can now foresee the dark future!

2

BaboonHorrorshow t1_jdn3pg6 wrote

Yep, to say nothing of the volunteer thought-crime police that would spring up.

They'll try to destroy people for saying the wrong thing on social media, even if that person apologizes.

Imagine if Twitter could see your humor brainwaves spike at some off-color joke - you could lose your job over a bad THOUGHT

3

Alekillo10 t1_jdmbrg1 wrote

Ugh… It would be like a crappier version of Total Recall… “You dreamt of killing your wife! You’re going to jail!”

2

Philosipho t1_jdmkb8z wrote

People decided it was a good idea to let citizens control the economy and government, because they wanted the opportunity to have that wealth and power themselves.

Society is just one big episode of r/LeopardsAteMyFace

1

Throwaway-tan t1_jdmlriq wrote

What? I'm not sure what your criticism is targeting... Is it that society is run by people?

Society has generally been a net positive for everyone. We went from subsistence and survivalism to plenitude and philosophy.

Even a feudal society is preferable to no society in my opinion.

It's not perfect, but I much prefer the fucked up society we have now when compared to "return to monke".

5

sqwuakler t1_jdmwxtf wrote

"Democracy is the worst form of government (except for all the others that have been tried)."

4

MistyDev t1_jdmo3o5 wrote

Even if this were possible, the 5th Amendment would absolutely protect against this kind of thing in the US.

I feel like you have to be unreasonably pessimistic to think that those would be the 1st areas where such a technology is used.

1

Throwaway-tan t1_jdmow4m wrote

5th amendment only protects you from incriminating yourself in potential criminal proceedings.

It does not prevent your employer from mandating you use it at work and then any data gathered being subpoenaed.

Or let's say it becomes something more ubiquitous like a smartphone, everyone uses it daily and all that data is gathered - your 5th amendment isn't going to do shit.

3

-zero-below- t1_jdon37p wrote

Additionally, the 5th would only protect what you say. It doesn't, for example, prohibit search or manipulation of your body: fingerprints are not protected by the 5th, and I don't see why brain fingerprints would be.

1

[deleted] t1_jdlne2j wrote

[removed]

−11

urmomaisjabbathehutt t1_jdmkncx wrote

Will it be able to pull images of possible suspects from its memory and recognize that the subject is familiar with those individuals?

That could be used for crime solving, but an authoritarian government would also love to know which people a dissenter meets and relates with.

1

Inevitable_Syrup777 t1_jdormsh wrote

No, currently it would be using images from its own database. That would mean Harold Smith would simply be drawn as John Doe from the image database; John Doe is just training data and doesn't exist in real life in this instance. I saw the image results: from looking at a skyscraper, yes, it drew a skyscraper, but the skyscraper looked like the training image, not the real-life image seen by the person.

1

urmomaisjabbathehutt t1_jdpl13m wrote

Right, so at this point it's able to resolve the subject's mental image as a generic skyscraper based on comparisons to its own database.

The question would be whether the resolution would become good enough for it to assess that the subject's mental image corresponds to one of the samples rather than something generic.

I guess that if the subject's mental image was something easily recognizable, it may be easy even if the resolution is sketchy, but in any case this is a question of making improvements.

1

elehman839 t1_jdmt4om wrote

The claims are interesting, but far more modest than people here seem to realize. This is what they say about their evaluation process:

> we conducted two-way identification experiments: examined whether the image reconstructed from fMRI was more similar to the corresponding original image than randomly picked reconstructed image. See Appendix B for details and additional results.

So, if I understand correctly, they claim that if you compare the image their system reconstructed from an fMRI scan against a randomly picked reconstruction from another trial, the correct reconstruction is more similar to what the subject actually saw only 80% of the time.
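For anyone curious what that metric means mechanically, here's a minimal sketch of a two-way identification score. This is my own toy version, not the paper's code: I'm assuming images are represented as feature vectors and using cosine similarity, whereas the actual feature space and similarity measure in Appendix B may differ.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def two_way_identification(originals, reconstructions, rng):
    """Fraction of trials where the correct reconstruction is more similar
    to its original than a randomly picked reconstruction from another trial."""
    n = len(originals)
    wins = 0
    for i in range(n):
        # distractor: a reconstruction belonging to a different trial
        j = rng.choice([k for k in range(n) if k != i])
        if cosine_sim(originals[i], reconstructions[i]) > cosine_sim(originals[i], reconstructions[j]):
            wins += 1
    return wins / n

# toy data: each "reconstruction" is its original plus noise, so the
# score should land well above the 50% chance level
rng = np.random.default_rng(0)
originals = rng.normal(size=(200, 64))
reconstructions = originals + 0.5 * rng.normal(size=(200, 64))
acc = two_way_identification(originals, reconstructions, rng)
print(f"two-way identification accuracy: {acc:.2f}")
```

The point of the toy data is the hinge the comment identifies: if the distractors are easy to tell apart from real reconstructions (as here), the score is trivially high; if they're near-duplicates, even 80% would be impressive.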

This is statistically significant (random guessing would give only 50%), but the practical significance seems pretty low. In particular, that's waaaay far from a pixel-perfect image of what you're dreaming. The paper has only cherry-picked examples. The full evaluation results are apparently in Appendix B, which I cannot locate. (I'm wondering whether the randomly picked images had some telling defect, for example.) Also, the paper seems measured, but this institution seems to seek press coverage very aggressively.

4

The_One_Who_Slays t1_jdn2ac5 wrote

I mean, Rome wasn't built in a day. Just the fact that it's possible at all speaks volumes. As for seeking press coverage, it's understandable: could be them trying to secure more funding by getting more publicity, could be them being genuinely passionate about their tech, could be both. Time will tell.

Still, it's an interesting application for image-gen technology; to my surprise, it had never even crossed my mind.

2

elehman839 t1_jdollr0 wrote

If anyone cares: I found Appendix B, but there wasn't much more helpful information. In particular, I don't understand how the randomly-generated images in their evaluation process were produced. And, as far as I can tell, the significance of the paper comes down to that detail.

  • If the randomly picked images were systematically defective in any way, then the 80% result is meaningless.
  • On the other hand, if those images are fairly close to the image shown to the person in the fMRI, differing only in some subtle ways, then 80% would be absolutely amazing.

Sooo... I think there's something moderately cool here, but I don't see a way to conclude more (or less) than that from their paper. Frustrating. :-/

2

The_One_Who_Slays t1_jdoodjv wrote

Yeah, some public trials would come in handy there. Show, don't tell, and all that.

1

nuclearbananana t1_jdlhqtw wrote

I'd rather not. The fleeting nature of dreams is part of what makes them special and surreal.

3

sanburg t1_jdn6g7o wrote

I can just see IKEA selling fMRI beds

1

m1cr05t4t3 t1_jdp34x3 wrote

Keep a small journal and a pen next to your bed. As soon as you wake up write down two words summing up what you were dreaming about. It's enough to allow you to remember the whole thing. You never lose your memories, just your ability to recall them. A little prompt hacking is often enough.

1

The_One_Who_Slays t1_jdp91rv wrote

I did that before, but I stopped because I always go into excruciating detail and it takes a huge chunk out of my time. I just can't do the "two-word summary" thing to save my life.

Plus, the idea of being able to watch a dream in a movie format is pretty amazing.

1

m1cr05t4t3 t1_jdp9gjj wrote

Oh, I would totally buy a VR headset that replays my dreams. I would still write down two words, though, to remind me which 'movie' to pick.

(I would not want it to connect to the Internet, though; I'll do the updates with a USB or an SD card or something if a new version comes out.)

1