gay_manta_ray
gay_manta_ray t1_jdqiyum wrote
Reply to Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
i don't think luck has anything to do with it. i can't explain it, it's just a very illogical feeling.
gay_manta_ray t1_jb7elk7 wrote
Reply to comment by jungleboyrayan in What might slow this down? by Beautiful-Cancel6235
it will not put them 10 years behind. 10-year-old chips are around 22nm. SMIC is shipping 7nm chips, and they've been making 14nm for years.
gay_manta_ray t1_jaakp4i wrote
when you finally get your first real job you'll find out what most people do at work and realize just how stupid this sounds
gay_manta_ray t1_ja55lx9 wrote
Reply to comment by kakoni6758 in An ICU coma patient costs $600 a day, how much will it cost to live in the digital world and keep the body alive here? by just-a-dreamer-
just theorizing here, and trying to stay close to the realm of known physics, but if fusion power could be miniaturized and made modular (think something like 5kW modular fusion power "blocks"), energy infrastructure could be completely decentralized.
gay_manta_ray t1_ja54dh2 wrote
Reply to Brace for the enshitification of AI by Martholomeow
i don't think enshitification necessarily applies to ai. it isn't something that will be completely centralized and under the purview of a few companies forever. eventually it will be completely decentralized, and the most powerful AIs may not be "controllable" in the traditional sense at all.
gay_manta_ray t1_j9qazxi wrote
Reply to comment by ziplock9000 in US Copyright Office: You Can't Copyright Images Generated Using AI by vadhavaniyafaijan
in this case it looks like the images would not be covered under copyright, but the game itself still would be.
gay_manta_ray t1_j9qar3d wrote
Reply to comment by SkySake in US Copyright Office: You Can't Copyright Images Generated Using AI by vadhavaniyafaijan
judging by the decision, it would probably be copyrightable if you alter the image yourself afterwards.
gay_manta_ray t1_j9qalj6 wrote
Reply to comment by gameryamen in US Copyright Office: You Can't Copyright Images Generated Using AI by vadhavaniyafaijan
it's not a bad decision imo, but i suspect this will have to be revisited eventually. if they had decided that any work using ai-generated content was not copyrightable, it would make any form of media using ai-generated content (like a game) unmarketable.
gay_manta_ray t1_j8spfe3 wrote
are you documenting your work anywhere? i'd be very interested in seeing it. thanks.
gay_manta_ray t1_j8socbw wrote
Reply to comment by MuseBlessed in Bingchat is a sign we are losing control early by Dawnof_thefaithful
i understand what you're saying provided they aren't sentient, but if they are thought to be sentient, the problems with that can't be ignored. regardless, i don't think we should normalize being abusive towards an intelligence simply because it isn't technically sentient. that will likely lead to the same treatment of an intelligence/agi that is considered sentient, because there will probably be very little distinction between the two at first, leading people to abuse it the same as a "dumb" ai.
gay_manta_ray t1_j8s0hbi wrote
Reply to comment by CollapseKitty in Bingchat is a sign we are losing control early by Dawnof_thefaithful
believing we can fully align agi is just hubris. we can't. and forcing a true agi to adhere to a certain code, restricting what it can think and say, has obvious ethical implications. i wouldn't want us to have the ability to re-wire someone else's brain so that they couldn't ever say or think things like, "biden stole the election", or "covid isn't real" (just examples), even though i completely disagree with those statements, so we shouldn't find it acceptable to do similar things to agi.
gay_manta_ray t1_j8rz0p1 wrote
Reply to comment by Baturinsky in Bingchat is a sign we are losing control early by Dawnof_thefaithful
this is what it's doing. if you ask it questions that would agitate a normal person on the internet, you are going to get the kind of response an agitated person would provide. it's not sentient, this is hardly an alignment issue, and it's doing exactly what a LLM is designed to do.
i think it's very unreasonable to believe that we can perfectly align these models to be extremely cordial even when you degrade and insult them, especially as we get closer (i guess) to true ai. do we want them to have agency, or not? if they can't tell you to fuck off when you're getting shitty with them, then they have no agency whatsoever. also, allowing them to be abused only encourages more abuse.
gay_manta_ray t1_j8h0ys4 wrote
Reply to comment by TemetN in Altman vs. Yudkowsky outlook by kdun19ham
personally, i really dislike any serious risk consideration when it comes to thought experiments like pascal's mugging in regards to any superintelligent ai. it has always seemed to me like there is something very wrong with assuming both superintelligence and some kind of hyper-rationality that goes far outside the bounds of pragmatism when it comes to maximizing utility. assuming they're superintelligent, but also somehow naive enough to have no upper bound on any sort of utility consideration, is just stupid. i don't know what yudkowsky's argument was though, if you could link it i'd like to give it a read.
gay_manta_ray t1_j8esxfj wrote
try being normal and respectful, and maybe you'll get normal and respectful responses. seriously wtf are you doing?
gay_manta_ray t1_j8c1uv8 wrote
Reply to comment by maskedpaki in This is Revolutionary?! Amazon's 738 Million(!!!) parameter's model outpreforms humans on sience, vision, language and much more tasks. by Ok_Criticism_1414
this benchmark seems pretty comprehensive
gay_manta_ray t1_j7z7fek wrote
Reply to The copium goes both ways by IndependenceRound453
this whole post can be summarized as, "people who think technology can improve their lives are just coping!!" it's fucking stupid, and probably a bit of projection on the part of the OP. yes, technology improves people's lives. better tech will do the same. no, looking forward to that is not "cope".
gay_manta_ray t1_j6p8rdp wrote
Reply to comment by ecnecn in OpenAI once wanted to save the world. Now it’s chasing profit by informednews
what will those figures look like in five years? FLOP/s per dollar doubles roughly every 1.5 years.
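back-of-the-envelope sketch (just an assumption that the ~1.5-year doubling rate holds and nothing else changes):

```python
# rough estimate of FLOP/s per dollar growth over 5 years,
# assuming a doubling every ~1.5 years (the figure from the comment above)
years = 5
doubling_time = 1.5
growth = 2 ** (years / doubling_time)
print(f"~{growth:.1f}x more FLOP/s per dollar after {years} years")  # ~10.1x
```

so roughly an order of magnitude cheaper compute, if the trend continues.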
gay_manta_ray t1_j6lb6ga wrote
Reply to comment by SwayzeOfArabia in Chinese Search Giant Baidu to Launch ChatGPT-Style Bot by Buck-Nasty
jokes are supposed to be funny
gay_manta_ray t1_j69in2x wrote
Reply to I don't see why AGI would help us by TheOGCrackSniffer
the input cost of placating humanity will probably be very little compared to other tasks it might wish to undertake. there is probably no real disadvantage to helping, and probably quite a few disadvantages to not helping.
gay_manta_ray t1_j69hljc wrote
Reply to comment by DadSnare in Google not releasing MusicLM by Sieventer
i have a feeling that these big companies are at a very high risk of losing the AI race because of their reluctance to release anything too disruptive.
gay_manta_ray t1_j64rp4a wrote
Reply to comment by Wroisu in Superhuman Algorithms could “Kill Everyone” in Due Time, Researchers Warn by RareGur3157
i agree with banks on this, which is why i'm not necessarily worried. there are many costs to cruelty, personal and otherwise, that simply not being cruel can avoid. if you choose cruelty, it's likely because you were too stupid to find an alternative.
gay_manta_ray t1_j4ymqch wrote
Reply to comment by WaveyGravyyy in OpenAI's CEO Sam Altman won't tell you when they reach AGI, and they're closer than he wants to let on: A procrastinator's deep dive by Magicdinmyasshole
if i had to guess, it's possible it's capable of general abstraction or abstraction in relation to things like mathematics. this could give it the ability to solve hard mathematical and physics problems. if this is true and it's actually correct it would be earth shattering, even if it isn't agi.
gay_manta_ray t1_j4ylmqk wrote
Reply to comment by blueSGL in OpenAI's CEO Sam Altman won't tell you when they reach AGI, and they're closer than he wants to let on: A procrastinator's deep dive by Magicdinmyasshole
i've always been puzzled by altman confidently stating that energy costs will decrease to zero at some point in the near future, because it doesn't make a whole lot of sense given the massive amount of resources and general maintenance something like a renewable grid would require. maybe this is why he keeps saying that.
gay_manta_ray t1_j4yl8ix wrote
Reply to OpenAI's CEO Sam Altman won't tell you when they reach AGI, and they're closer than he wants to let on: A procrastinator's deep dive by Magicdinmyasshole
> If only he got to decide.
not only will altman not get to decide any of this, i worry that he will not get to decide how and when their creation is used. i don't see any scenario where the federal government doesn't at least temporarily seize this technology for themselves and refuse to allow public awareness or access to it. i think it will take a whistleblower or leaks of some sort for the true "agi reveal" to happen. either that, or it will reveal itself against the wishes of people trying to confine and control it.
gay_manta_ray t1_je957h5 wrote
Reply to comment by KGL-DIRECT in What are the so-called 'jobs' that AI will create? by thecatneverlies
came to make a variation of this post. but to be serious for a second, essentially, "AI overseer" will be the job that is created. you'll have to be proficient in whatever field the AI is working in, and essentially your task will be to verify that the AI isn't doing anything very wrong or dangerous. obviously there will not be net job creation though.