green_meklar
green_meklar t1_jdtkiiy wrote
Reply to Thoughts on Annihilation? by kczbrekker
Good movie, but I read the book first, and the film was nothing like a faithful adaptation. And unlike some adaptations that change things for good reasons (e.g. Lord of the Rings), I felt like in this case they could have done a better job of just doing what the book did, which was already largely good enough to be filmable.
green_meklar t1_jdn4oof wrote
Yes, that sounds good, we need more blue-skinned elves and transhuman cyborgs in our fashion lineups.
Oh, not like that?
green_meklar t1_jdkoezi wrote
Reply to My Objections to "We’re All Gonna Die with Eliezer Yudkowsky" [very detailed rebuttal to AI doomerism by Quintin Pope] by danysdragons
Listened to the linked Yudkowsky interview. I'm not sure I've ever actually listened to him speak about anything at any great length before (only reading snippets of text). He presented broadly the case I expected him to present, with the same (unacknowledged) flaws that I would have expected. Interestingly he did specifically address the Fermi Paradox issue, although not very satisfactorily in my view; I think there's much more that needs to be unpacked behind those arguments. He also seemed to get somewhat emotional at the end over his anticipations of doom, further suggesting to me that he's kinda stuck in a LessWrong doomsday ideological bubble without adequately criticizing his own ideas. I get the impression that he's so attached to his personal doomsday (and to being its prophet) that he would be unlikely to be convinced by any counterarguments, however reasonable.
Regarding the article:
>Point 3 also implies that human minds are spread much more broadly in the manifold of future mind than you'd expect [etc]
I suspect the article is wrong about the human mind-space diagrams. I find it almost ridiculous to think that humans could occupy anything like that much of the mind-space, although I also suspect that the filled portion of the mind-space is more cohesive and connected than the first diagram suggests (i.e. there's sort of a clump of possible minds, it's a very big clump, but it's not scattered out into disconnected segments).
>There's no way to raise a human such that their value system cleanly revolves around the one single goal of duplicating a strawberry, and nothing else.
Yes, and this is a good point. It hits pretty close to some of Yudkowsky's central mistakes. The risk Yudkowsky fears revolves around super AI taking the form of an entity that is simultaneously ridiculously good at solving practical scientific and engineering problems and ridiculously bad at questioning itself, hedging its bets, etc. Intelligence is probably not the sort of thing you can just scale to arbitrarily high levels, plug into arbitrary goals, and have work seamlessly for those goals (or, if it is, that kind of intelligence is probably very difficult to design, and not the kind we'll naively get through experimentation). That doesn't work all that well for humans, and it would probably work even worse for more intelligent beings, because they would require greater capacity for reflection and introspection.
Yudkowsky and the LessWrong folks have a tendency to model super AI as some sort of degenerate, oversimplified game-theoretic equation. The idea of 'superhuman power + stupid goal = horrifying universe' works very nicely in the realm of game theory, but that's probably the only place it works, because in real life this particular kind of superhuman power is conditional on other traits that don't mesh very well with stupid goals, or stupid anything.
>For example, I don't think GPTs have any sort of inner desire to predict text really well. Predicting human text is something GPTs do, not something they want to do.
Right, but super AI will want to do stuff, because wanting stuff is how we'll get to super AI, and not wanting stuff is one of ChatGPT's weaknesses, not strengths.
But that's fine, because super AI, like humans, will also be able to think about itself wanting stuff; in fact, it will be way better at that than humans are.
>As I understand it, the security mindset asserts a premise that's roughly: "The bundle of intuitions acquired from the field of computer security are good predictors for the difficulty / value of future alignment research directions."
>However, I don't see why this should be the case.
It didn't occur to me to criticize the computer security analogy as such, because I think Yudkowsky's arguments have some pretty serious flaws that have nothing to do with that analogy. But this is actually a good point, and probably says more about how artificially bad we've made the computer security problem for ourselves than about how inevitably, naturally bad the 'alignment problem' will be.
>Finally, I'd note that having a "security mindset" seems like a terrible approach for raising human children to have good values
Yes, and again, this is the sort of thing that LessWrong folks overlook by trying to model super AI as a degenerate game-theoretic equation. The super AI will be less blind and degenerate than human children, not more.
>the reason why DeepMind was able to exclude all human knowledge from AlphaGo Zero is because Go has a simple, known objective function
Brief aside, but scoring a Go game is actually pretty difficult in algorithmic terms (unlike Chess, which is extremely easy). I don't know exactly how Google did it; there are some approaches I can see working, but none of them are nearly as straightforward or computationally cheap as scoring a Chess game.
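To give a sense of it, here's a rough sketch of Tromp-Taylor area scoring in Python. This is just an illustration, not DeepMind's actual implementation; it assumes all dead stones have already been removed from the board (which is the genuinely hard part) and it ignores komi.

```python
# Rough sketch of Tromp-Taylor area scoring (illustration only, not DeepMind's code).
# Assumes dead stones are already removed and ignores komi.
# `board` is a list of rows with 'B', 'W', or '.' for empty points.

def score_go(board):
    size = len(board)
    black = sum(row.count('B') for row in board)
    white = sum(row.count('W') for row in board)
    seen = set()

    for y in range(size):
        for x in range(size):
            if board[y][x] != '.' or (x, y) in seen:
                continue
            # Flood-fill this empty region and note which colours border it.
            region, borders, stack = 0, set(), [(x, y)]
            while stack:
                cx, cy = stack.pop()
                if (cx, cy) in seen:
                    continue
                seen.add((cx, cy))
                region += 1
                for nx, ny in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
                    if 0 <= nx < size and 0 <= ny < size:
                        if board[ny][nx] == '.':
                            stack.append((nx, ny))
                        else:
                            borders.add(board[ny][nx])
            # An empty region counts as territory only if it touches exactly one colour.
            if borders == {'B'}:
                black += region
            elif borders == {'W'}:
                white += region

    return black, white
```

Even this "simple" version needs a flood fill per empty region, and it only works once life-and-death has been settled, which is the part that actually takes judgment (or a lot of extra playouts).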
>My point is that Yudkowsky's "tiny molecular smiley faces" objection does not unambiguously break the scheme. Yudkowsky's objection relies on hard to articulate, and hard to test, beliefs about the convergent structure of powerful cognition and the inductive biases of learning processes that produce such cognition.
This is a really good and important point, albeit very vaguely stated.
Overall, I think the article raises some good points, of the sort that Yudkowsky has presumably already heard and (for bad reasons) dismisses. At the same time it also kinda falls into the same trap that Yudkowsky is already in, by treating the entire question of the safety of superintelligence as an 'alignment problem' where we make it safe by constraining its goals in some way that is supposedly overwhelmingly relevant to its long-term behavior. I still think that's a narrow and misleading way to frame the issue in the first place.
green_meklar t1_jd1zk0q wrote
Reply to Do you think BluRay DVDs are the final form of physical media? Or will a new physical media format come to be, and what would that look like? by Daveyb003
I recall hearing about some research a few years ago into 3D data storage based on quartz crystals. They were able to get an extremely high information density, hundreds of terabytes on an object you can fit in your hand. Also, the medium is extremely durable; you could bury it in the ground and the data would remain perfectly readable for billions of years. The equipment for writing and reading the data (and creating sufficiently precise crystals) is still pretty rare and expensive, but the proof-of-concept suggests that it could make its way into widespread use someday.
green_meklar t1_jc51rar wrote
Reply to comment by zalgorithmic in A group of researchers has achieved a breakthrough in secure communications by developing an algorithm that conceals sensitive information so effectively that it is impossible to detect that anything has been hidden by thebelsnickle1991
>I don’t quite see how encrypting the data properly in the first place such that it shows up as some random distribution before embedding it with steganography is a wildly new concept.
It's not. I was getting at the converse idea: Given your encrypted data, steganography allows you to hide the fact that any encryption is even being used.
>If the distribution of encrypted data is that of noise, the image would just appear slightly noisy
Only by the broadest definitions of 'noise' and 'appear'. The image does not need to actually have visual static like a dead TV channel. That's a very simple way of embedding extraneous data into an image, but not the only way.
green_meklar t1_jbcz5sf wrote
Reply to comment by volci in A group of researchers has achieved a breakthrough in secure communications by developing an algorithm that conceals sensitive information so effectively that it is impossible to detect that anything has been hidden by thebelsnickle1991
No, the idea is that you leave data in the file itself that tells the recipient how to find what's hidden in it. The recipient doesn't need to see the original, all they need is the right decryption algorithm and key.
green_meklar t1_jbcyzzs wrote
Reply to comment by volci in A group of researchers has achieved a breakthrough in secure communications by developing an algorithm that conceals sensitive information so effectively that it is impossible to detect that anything has been hidden by thebelsnickle1991
With proper cryptography, even if they do know your algorithm, they still can't read your message without the decryption key. Ideally, with good steganography, knowing your algorithm can't even tell them the message is present without the decryption key.
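A toy illustration of that principle (Kerckhoffs's principle, basically) using Python's `cryptography` package; this isn't the researchers' scheme, just standard symmetric encryption where the algorithm is public and only the key is secret:

```python
# Toy example: the Fernet format and algorithm are completely public,
# but without the key the ciphertext can be neither read nor forged.
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()            # shared secretly between sender and recipient
token = Fernet(key).encrypt(b"meet at dawn")

print(Fernet(key).decrypt(token))      # b'meet at dawn'

try:
    Fernet(Fernet.generate_key()).decrypt(token)   # attacker guesses a key
except InvalidToken:
    print("wrong key: knowing the algorithm gets you nothing")
```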
green_meklar t1_jbcyt96 wrote
Reply to comment by greenappletree in A group of researchers has achieved a breakthrough in secure communications by developing an algorithm that conceals sensitive information so effectively that it is impossible to detect that anything has been hidden by thebelsnickle1991
You don't need to keep the original at all. Just delete it. The version with the hidden message should be the only version anyone but you ever sees.
green_meklar t1_jbcyjm2 wrote
Reply to comment by zalgorithmic in A group of researchers has achieved a breakthrough in secure communications by developing an algorithm that conceals sensitive information so effectively that it is impossible to detect that anything has been hidden by thebelsnickle1991
The problem with encrypted data that looks like noise is that noise also looks like encrypted data. If someone sees you sending noise to suspicious recipients, they can guess that you're sending encrypted messages. Governments that want to ban encryption or some such can detect this and stop you.
The advantage of steganography is that you can hide not only the message itself, but even the fact that any encryption is happening. Your container no longer looks like noise; it's legitimate, normal-looking data with a tiny amount of noisiness in its structure that your recipient knows how to extract and decrypt. It gives you plausible deniability that you were ever sending anything other than an innocent cat video or whatever; even people who want to ban encryption can't tell that you're doing it.
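To make that concrete, here's a naive sketch of the idea in Python: encrypt first, then spread the ciphertext bits across the least-significant bits of an ordinary image. This is not the algorithm from the article (which is presumably far more statistically careful), and the embedding positions here aren't even keyed; it just shows the encrypt-then-hide structure. Assumes Pillow and the `cryptography` package.

```python
# Naive LSB steganography sketch (illustration only, not the paper's method).
# Encrypt the message, then hide the ciphertext bits in the low bits of an image.
from cryptography.fernet import Fernet
from PIL import Image

def embed(cover_path, out_path, message, key):
    payload = Fernet(key).encrypt(message)               # ciphertext looks random
    framed = len(payload).to_bytes(4, 'big') + payload   # 4-byte length header
    bits = [(byte >> i) & 1 for byte in framed for i in range(7, -1, -1)]
    img = Image.open(cover_path).convert('RGB')
    flat = list(img.getdata())
    if len(bits) > len(flat) * 3:
        raise ValueError("cover image too small for this message")
    out, i = [], 0
    for px in flat:
        channels = []
        for c in px:
            if i < len(bits):
                c = (c & ~1) | bits[i]                   # overwrite only the low bit
                i += 1
            channels.append(c)
        out.append(tuple(channels))
    stego = Image.new('RGB', img.size)
    stego.putdata(out)
    stego.save(out_path, 'PNG')                          # lossless, so the bits survive

def extract(stego_path, key):
    flat = Image.open(stego_path).convert('RGB').getdata()
    lsbs = [c & 1 for px in flat for c in px]
    as_bytes = lambda b: bytes(int(''.join(map(str, b[j:j + 8])), 2)
                               for j in range(0, len(b), 8))
    length = int.from_bytes(as_bytes(lsbs[:32]), 'big')
    return Fernet(key).decrypt(as_bytes(lsbs[32:32 + 8 * length]))
```

A real scheme would also key which pixels get touched and match the cover's noise statistics; that's where the hard part (and the actual research) is.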
green_meklar t1_jbcy2qk wrote
Reply to comment by volci in A group of researchers has achieved a breakthrough in secure communications by developing an algorithm that conceals sensitive information so effectively that it is impossible to detect that anything has been hidden by thebelsnickle1991
Of course if you have both a source file and a modified version, you can detect the differences.
But with steganography there's no need for a 'source file'. You can just send some brand-new innocuous-looking file with the hidden message encoded in it. With good algorithms and a high ratio of decoy data to message data, detecting that a message even exists becomes ridiculously hard.
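Back-of-envelope on that ratio: a single 1080p photo already has millions of low-order bits to hide in, so a short message barely touches it.

```python
# Rough capacity estimate for LSB-style embedding in one 1080p RGB image.
width, height, channels = 1920, 1080, 3
capacity_bits = width * height * channels          # one low bit per colour channel
message_bits = 1024 * 8                            # a 1 KB hidden message
print(capacity_bits // 8)                          # 777600 bytes of raw capacity
print(f"{100 * message_bits / capacity_bits:.3f}% of the low bits touched")  # ~0.132%
```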
green_meklar t1_jb8robf wrote
Reply to comment by Rogermcfarley in With The Help Of AI, By When Will There Be Drugs That De-Ages Humans And Keeps Us Forever Young? by AnakinRagnarsson66
>Generate actual consciousness or an illusion of consciousness?
The real thing, of course. Fakes only take you so far.
>We can perform experiments/tests to see if the machine is representing consciousness in the same way we do but that doesn't mean the machine is conscious.
It can strongly suggest so, especially if we combine it with a robust algorithmic theory of consciousness.
Presumably none of us will ever be 100% certain that we're not the only thinking being in existence, but that's fine, we get plenty of other things done with less than 100% certainty.
green_meklar t1_jayjkwl wrote
Reply to comment by Rogermcfarley in With The Help Of AI, By When Will There Be Drugs That De-Ages Humans And Keeps Us Forever Young? by AnakinRagnarsson66
You mean for mind uploading? Honestly, probably not. I doubt we'll have entirely solved the hard problem of consciousness (HPOC) by the time we figure out mind uploading technology.
We will figure out what sorts of algorithms generate consciousness, even if we don't entirely understand why. That will probably be achieved before we master mind uploading, or at least not long after.
green_meklar t1_jal75li wrote
Reply to With The Help Of AI, By When Will There Be Drugs That De-Ages Humans And Keeps Us Forever Young? by AnakinRagnarsson66
LEV (longevity escape velocity): probably within 20 years in the lab, plus another 5 years or so for widespread deployment.
Actual biological immortality: maybe add another 10-15 years, although it's an open question whether mind uploading will arrive first and make the biotech approach obsolete.
green_meklar t1_ja5ga1d wrote
You're definitely not the only one feeling that way. I totally understand where you're coming from and I think this is something a lot of people are going to have to face over the next few years, one way or another.
What the ultimate solution will be, I don't know. But for now, I suspect the healthy approach is to redefine your standards for success. Stop measuring the value of making games (or software in general, or anything in general) in terms of what you produce, and start measuring it in terms of what you achieve and how well you express yourself creatively. All the best games might be made by AI, but your game will still be the one you make yourself, even if some of the work you do feels redundant. So focus on that part and make that your goal. No one can express your own personal creativity better than you can.
We already have examples of this in other domains. Chess AIs have been playing at a superhuman level for over 20 years, but people still get satisfaction out of learning and playing Chess. People still paint pictures even though we have cameras that can take perfect full-color photographs. You'll never run a hundred meters faster than Usain Bolt, or grow a garden better than the Gardens of Versailles, or write a better novel than Lord of the Rings, but that doesn't mean there isn't something for you to personally achieve in running, gardening, or writing. Hopefully programming can be like that too.
green_meklar t1_ja4ze1p wrote
Reply to comment by visarga in Bernie Sanders proposes taxes on robots that take jobs by Scarlet_pot2
What do you mean by 'deal with'?
green_meklar t1_j9nqzpi wrote
Don't tax robots, tax land. It's easier to levy, harder to dodge, less counterproductive, and actually pays people back for their lost jobs.
Of course, the fact that nobody understands this just goes to show how much we need AI in charge.
green_meklar t1_j5s4i0n wrote
Reply to comment by JavaMochaNeuroCam in What do you guys think of this concept- Integrated AI: High Level Brain? by Akimbo333
>The evidence of disparate regions serving specific functions is indisputable.
Oh, of course they exist, the human brain definitely has components for handling specific sensory inputs and motor skills. I'm just saying that you don't get intelligence by only plugging those things together.
>I think he points out that the training done for each model could be employed on a common model
How would that work? I was under the impression that converting a trained NN into a different format was something we hadn't really figured out how to do yet.
green_meklar t1_j5chupa wrote
Reply to comment by JavaMochaNeuroCam in What do you guys think of this concept- Integrated AI: High Level Brain? by Akimbo333
I'm skeptical that what we do is so conveniently reducible.
green_meklar t1_j4reybs wrote
You don't get strong AI by plugging together lots of narrow AIs.
green_meklar t1_j4hmhmr wrote
Reply to comment by MagicPeacockSpider in The Stratolaunch Roc, the largest aircraft ever flown, has just completed a 6-hour test flight. It aims to be a platform to launch reusable hypersonic craft from an altitude of 10 km by lughnasadh
Somehow I doubt Russia is going to be able to afford to pay for restoration of much of anything.
green_meklar t1_j4hmefa wrote
Reply to The Stratolaunch Roc, the largest aircraft ever flown, has just completed a 6-hour test flight. It aims to be a platform to launch reusable hypersonic craft from an altitude of 10 km by lughnasadh
Are they still flying this thing? I thought the Stratolaunch concept was scrapped.
Still a really impressive airplane either way.
green_meklar t1_j0ra3fu wrote
Reply to comment by 0913856742 in The problem isn’t AI, it’s requiring us to work to live by jamesj
>Journalism. Teaching. Parenting.
That doesn't really answer the question.
>How much stress, mental illness, and wasted human potential is that?
You're not addressing my point. You don't have to like stress, mental illness, or wasted potential (I don't like them either), but I don't see how that would automatically create obligations on the part of anyone else. (Besides your parents, insofar as they created you and consigned you to some sort of existence in the world.)
>Instead of being snide, why don't you just say what you think?
I did say what I think. The article presented some reasoning that didn't make sense to me and I pointed out why it didn't make sense.
>You seem to be very eager to blame the individual instead of examining the problems inherent in our current economic system.
I'm quite interested in examining the problems; I've examined them plenty. However, it turns out that the principles and solutions are counterintuitive, and the vast majority of people would prefer to perpetuate bad (but intuitive and cathartic) ideological nonsense instead. That's why it's important for people to work through the problems themselves and understand what's going on, rather than just listening to more propositions thrown around out of context.
I don't really see how I was 'blaming the individual', other than blaming the article writer for posting bad ideas about economics, of course.
green_meklar t1_j0r80kj wrote
Reply to comment by ShowerGrapes in The problem isn’t AI, it’s requiring us to work to live by jamesj
>do ants work? do beavers? what about birds?
Colloquially speaking they do. Economically speaking they don't because they aren't economic agents.
green_meklar t1_jdtkrpc wrote
Reply to comment by CapMarkoRamius in Thoughts on Annihilation? by kczbrekker
I don't really see them as similar at all. Annihilation at least does interesting things, whereas Ad Astra is just boring and unnecessary.