manubfr
manubfr t1_jdzqpdq wrote
I did go to university but I dropped out... checkmate, AI!
manubfr t1_jdu95jl wrote
Reply to comment by jentravelstheworld in Story time: Chat GPT fixed me psychologically by matiu2
Your experience is quite interesting. Would you say you found the AI less biased than the average human?
manubfr t1_ja6rvp8 wrote
So you see, I believe ChatGPT has nearly 0% chance of being conscious.
Now this...
manubfr t1_ja2u74f wrote
Reply to comment by jeweliegb in Meta unveils a new large language model that can run on a single GPU by AylaDoesntLikeYou
Ok but where's my coffee? Wait I'm dea...
manubfr t1_ja28jbt wrote
Reply to comment by Tigerowski in Meta unveils a new large language model that can run on a single GPU by AylaDoesntLikeYou
and world peace while you're at it. And bring me my goddamn coffee.
manubfr t1_j9kehjl wrote
Reply to comment by [deleted] in Transhumanism - Why I believe it is the solution by the_alex197
Pretty sure that was sarcasm mate :)
manubfr t1_j8o23uo wrote
Me: enters life into bot
Bot: reads, shakes head
manubfr t1_j8hf8la wrote
Reply to The new Bing AI hallucinated during the Microsoft demo. A reminder these tools are not reliable yet by giuven95
I've had access for a few days and I feel quite underwhelmed. Bing chat is VERY inaccurate. I'd say more than half the time, when researching topics I am very familiar with, it correctly identifies information sources and then botches the output, making very plain mistakes (e.g. it pulls the correct statement from a webpage but gets the year wrong, replacing 2022 with 2021 within the same statement). It also struggles with disambiguation, e.g. two homonyms will be mixed up.
I honestly thought web connectivity would massively improve accuracy, but so far I've been very disappointed. However, the short-term creative potential of LLMs and image models is insane.
manubfr t1_j8haq4o wrote
Reply to comment by inglandation in Anthropic's Jack Clark on AI progress by Impressive-Injury-91
Yeah it's not like, say, a game developer with a chess background could become the CEO of one of the most exciting AI companies out there.
manubfr t1_j7kitro wrote
Reply to comment by JackFisherBooks in 200k!!!!!! by Key_Asparagus_919
400k? That's barely exponential! Let's aim for 4 million
manubfr t1_j67ybjg wrote
Reply to comment by ElvinRath in MULTI·ON: an AI Web Co-Pilot powered by ChatGPT that browses the web and automates the tasks by Schneller-als-Licht
With enough data and a smarter model you could probably ask it first to break down all tasks and then execute them sequentially without human intervention. That’s what Adept ACT-1 is trying to do.
I fully expect that a lot of complex digital tasks will one day be fully automated: you will enter a high-level description of what you want, the model will propose options for you to pick, then calculate the compute budget for your selected options and give you a few quotes.
So for example, "order a burger, fries and a coke now" will essentially be free, while "write and design a 40-page comic book about the story of Theseus in the style of Frank Miller, then publish it on Amazon" will come back with options (maybe that task costs $20 or something, likely cheaper).
Automating entire workflows is, to me, the most exciting and realistic outcome of LLMs in the next few years.
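To make that concrete, here's a minimal sketch of the decompose-then-execute loop I have in mind. The llm() helper is a made-up placeholder for whatever completion API you have access to; this is not Adept's actual pipeline, just an illustration:

    def llm(prompt: str) -> str:
        """Placeholder: call your LLM of choice and return its text output."""
        raise NotImplementedError

    def automate(goal: str) -> list[str]:
        # 1. Ask the model to break the high-level goal into numbered atomic steps.
        plan = llm("Break this task into numbered, atomic steps:\n" + goal)
        steps = [line.split(". ", 1)[1] for line in plan.splitlines() if ". " in line]

        # 2. Execute each step in order, feeding earlier results back as context.
        results = []
        for step in steps:
            context = "\n".join(results)
            results.append(llm("Context so far:\n" + context + "\n\nDo this step:\n" + step))
        return results

The interesting engineering problem is step 2: each sub-task needs the outputs of earlier ones, which is where a smarter model (and more context) really pays off.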
manubfr t1_j5y8mo0 wrote
Reply to comment by CKtalon in Few questions about scalability of chatGPT [D] by besabestin
You're right, it could be that 3.5 is already using that approach. I guess the emergent cognition tests haven't yet been published for GPT-3.5 (or have they?), so it's hard for us to measure performance as individuals. Someone could test text-davinci-003 on a bunch of cognitive tasks in the Playground, but I'm far too lazy to do that :)
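(If someone does feel like it, a quick sketch using the legacy pre-1.0 openai Python library. The two task prompts below are made up; swap in real benchmark items:)

    import openai  # legacy pre-1.0 interface

    openai.api_key = "sk-..."  # your API key

    # Made-up cognitive-task prompts; substitute real benchmark items.
    tasks = [
        "Sally has 3 brothers. Each brother has 2 sisters. How many sisters does Sally have?",
        "If the day after tomorrow is Wednesday, what day is it today?",
    ]

    for prompt in tasks:
        resp = openai.Completion.create(
            model="text-davinci-003",
            prompt=prompt,
            max_tokens=64,
            temperature=0,  # deterministic answers are easier to score
        )
        print(prompt, "->", resp["choices"][0]["text"].strip())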
manubfr t1_j5y6wko wrote
Google (and DeepMind) actually have better LLM tech and models than OpenAI (if you believe their published research, anyway). They had a significant breakthrough last year in terms of scalability: https://arxiv.org/abs/2203.15556
Existing LLMs turn out to be undertrained, and with some tweaks you can create a smaller model that outperforms larger ones. Chinchilla is arguably the most performant model we've heard of to date ( https://www.jasonwei.net/blog/emergence ), but it hasn't been pushed to any consumer-facing application AFAIK.
This should be powering their ChatGPT competitor Sparrow, which might be released this year. I am pretty sure that OpenAI will also implement those ideas for GPT-4.
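For a back-of-envelope feel for what that paper claims: its heuristics work out to roughly 20 training tokens per parameter, with training compute approximated as C ≈ 6 · N · D FLOPs. A quick sketch of compute-optimal sizing under those assumptions (the script is mine, not from the paper):

    # Back-of-envelope Chinchilla-style sizing, using the paper's rough
    # heuristics: ~20 tokens per parameter, compute C ≈ 6 * N * D FLOPs.

    def chinchilla_optimal(params: float) -> tuple[float, float]:
        tokens = 20 * params          # compute-optimal training tokens
        flops = 6 * params * tokens   # approximate training compute
        return tokens, flops

    # A GPT-3-sized model (175B params) vs Chinchilla itself (70B params)
    for n in (175e9, 70e9):
        d, c = chinchilla_optimal(n)
        print(f"{n/1e9:.0f}B params -> {d/1e12:.1f}T tokens, {c:.2e} FLOPs")

Under those numbers a 70B model wants ~1.4T training tokens, which matches what Chinchilla was actually trained on.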
manubfr t1_jeequ3n wrote
Reply to What if language IS the only model needed for intelligence? by wowimsupergay
“The world is made of language” - Terence McKenna