TFenrir t1_jefd3vh wrote
Reply to comment by monsieurpooh in When will AI actually start taking jobs? by Weeb_Geek_7779
I'll give you a very common job - someone whose job is to take meeting notes, summarize them, turn them into minutes or documents, and create slide shows - sometimes for a single person, for execs, or for a group of people. We have a couple of people with this job at my company.
TFenrir t1_jef6zp6 wrote
Reply to comment by SalimSaadi in When will AI actually start taking jobs? by Weeb_Geek_7779
It's not in people's hands yet - these are press releases - but the difference will come when it's folded into everyone's Microsoft/Google experience, which will take months. Maybe by the summer everyone will have access.
TFenrir t1_jeenq5p wrote
Reply to comment by Arowx in What if it's just chat bot infatuation and were overhyping what is just a super big chat bot? by Arowx
> The thing is it's like a DJ mixing records it could generate some amazing new mixes but if the pattern is not already out there it's very unlikely to find new patterns.
What does this mean in practice?
Hypothetically, let's say I ask a future (1-2 years out) model to write me a brand new fantasy book series, telling it what all my favourite books are - and it writes me something that is stellar, 5/5. If someone then comes to me and says "yes, but is this TRULY original?", what does that even mean?
I think some people are very confident that LLMs cannot find new ideas, but I don't know where they get that confidence from - LLMs have continuously exceeded the thresholds proposed by their critics, and now it feels like we're getting into the esoteric. It's a bit of a... God of the gaps situation to me.
Hypothetically, let's say a language model solves a math problem that has never been solved before - would that change your mind? Do you think that's even possible?
TFenrir t1_jeeku3t wrote
Reply to Google CEO Sundar Pichai promises Bard AI chatbot upgrades soon: ‘We clearly have more capable models’ - The Verge by Wavesignal
Looks like the switch to PaLM-based Bard will be happening soon, but even if that improves base capabilities (it will), there needs to be more than just a base model improvement. Integration into the ecosystems that are cropping up - things like plugins and more - is also essential.
TFenrir t1_jee9pmm wrote
Reply to What if it's just chat bot infatuation and were overhyping what is just a super big chat bot? by Arowx
What does it even mean to overhype this?
Let me ask this way - do you think that eventually this technology will be able to write coherent novels?
What impact on the entire world would something like that have?
Do you think it will only be able to write novels?
TFenrir t1_jec7prk wrote
Reply to comment by Weeb_Geek_7779 in When will AI actually start taking jobs? by Weeb_Geek_7779
No official dates yet as far as I know
Edit: lol just saw this -
TFenrir t1_jebuahq wrote
I think it's a hard question to answer, because many factors can go into layoffs - and after layoffs it's very common for companies not to rehire similar roles, but to replace those tasks with software instead. That doesn't even get into the culture of layoffs - some companies just don't like doing them, and you'll hear stories about people who go into work all day and play Minesweeper or whatever.
That being said, I think we'll see the first potentially significant disruption when Google and Microsoft release their office AI suite.
I know people whose entire job is to make PowerPoint/Slides. When someone can say "turn this email chain into a proposal doc" -> "turn this proposal doc into a really nice looking set of slides, with animations and a cool dark theme" - that's going to be very disruptive.
TFenrir t1_jebq9go wrote
Reply to comment by TheDividendReport in Ray Kurzweil Predicted Simulated Biology is a Path to Longevity Escape Velocity by Dr_Singularity
If Kurzweil's forecasting skills are good enough, maybe he knew that selling the book this summer would make him the most money lol
TFenrir t1_je16wdr wrote
Enjoy life, you don't know what the world is going to look like in 10 years, so pursue fulfillment by following those dreams you've been putting off, as soon as you can.
TFenrir t1_jdt9bng wrote
Reply to comment by [deleted] in Story Compass of AI in Pop Culture by roomjosh
I remember a completely different movie. In what I remember, the AI tries its best not to hurt anyone, and in the end it really doesn't, even when defending itself. And humanity ends up becoming its own worst enemy, as its fear of the AI sets it down a path of global pain and suffering.
Whose lives did the AI end up sacrificing? Didn't it save a lot of people's lives?
TFenrir t1_jdt5sir wrote
Reply to Story Compass of AI in Pop Culture by roomjosh
In what way was Transcendence about an evil AI?
TFenrir t1_jdim3vv wrote
Reply to comment by light24bulbs in [N] ChatGPT plugins by Singularian2501
That is a really good tip.
I'm using langchainjs (I can do Python, but my JS background is 10x my Python) - one of the things I want to play with more is getting consistent JSON output from a response. There's a helper tool I tried with a bud a while back when we were pairing... a TypeScript validator or something or other, that seemed to help.
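For reference, the rough pattern I've been trying looks something like this - using zod as the validator (which might even be the tool I'm half-remembering). The schema here is made up, so treat it as a sketch, not a recommendation:

```typescript
import { z } from "zod";

// Made-up shape for what I want the model to return.
const AnswerSchema = z.object({
  summary: z.string(),
  confidence: z.number().min(0).max(1),
  followUps: z.array(z.string()),
});

type Answer = z.infer<typeof AnswerSchema>;

// Pull the first JSON-looking blob out of a raw completion and
// validate it, returning null instead of throwing on garbage so
// the caller can re-prompt with the validation error.
function parseModelJson(raw: string): Answer | null {
  const match = raw.match(/\{[\s\S]*\}/); // models love wrapping JSON in prose
  if (!match) return null;
  try {
    return AnswerSchema.parse(JSON.parse(match[0]));
  } catch {
    return null;
  }
}
```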
Any tips with that?
TFenrir t1_jdicafu wrote
Reply to comment by light24bulbs in [N] ChatGPT plugins by Singularian2501
Awesome! Good to know it will work
TFenrir t1_jdhqnb3 wrote
Reply to comment by light24bulbs in [N] ChatGPT plugins by Singularian2501
Are you working with the GPT-4 API yet? I'm still working with 3.5-turbo so it isn't toooo crazy during dev, but I'm about to write a new custom agent that will be my first attempt at a few different improvements over my previous implementations. One of them is trying to use different models for different parts of the chain, conditionally. E.g. I want to experiment with using 3.5 for some mundane internal scratchpad work, but switching to 4 if the agent's confidence in success is low - that sort of thing.
I'm hoping I can have some success, but at the very least the pain will be educational.
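Very roughly, the routing I have in mind looks like this - every name here is hypothetical, and callModel is just a stand-in for whatever API client ends up in the chain:

```typescript
type ModelName = "gpt-3.5-turbo" | "gpt-4";

// Hypothetical stand-in for the real chat completion client.
declare function callModel(model: ModelName, prompt: string): Promise<string>;

// Run a chain step on the cheap model first, and escalate to the
// expensive model only when the self-reported confidence is low.
async function runStep(prompt: string): Promise<string> {
  const draft = await callModel(
    "gpt-3.5-turbo",
    `${prompt}\n\nEnd your answer with a line "CONFIDENCE: <0 to 1>".`
  );
  const match = draft.match(/CONFIDENCE:\s*([\d.]+)/);
  const confidence = match ? parseFloat(match[1]) : 0; // no score -> assume low
  return confidence >= 0.7 ? draft : callModel("gpt-4", prompt);
}
```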
TFenrir t1_jchmcei wrote
>So I’ve recently joined this subreddit, around the time chat gpt was released and first came into the public eye. Since then I’ve been lurking and trying to stay up to date but honestly get lost in the sauce.
That's fair, there's actually just so much that is happening, and has been happening for years, keeping up with it all is overwhelming.
> I don’t really understand the scope of this AI and techno stuff going on. I’m not saying these advancements are not a big deal because it is. However, I can’t help but scoff in disbelief when I see people talk about things like, immortality achieved, true equality within society, capitalism replaced, labour reduced, climate change reversed and the worlds problems are fixed. I see a lot of utopian “possibilities” get thrown around.
No one can predict this. Anyone who says they are confident is just being idealistic and optimistic. No one knows. There are people in the world who want this to be true, who want us to move to a more utopian society. Plenty of those people are in this sub, because they believe that the upheaval and change from something like a true general intelligence can release us from all worldly burdens, to varying degrees of craziness.
> Is change of this scale really coming? It seems kinda sci-fi to me. More fantasy than reality.
Honestly, no one knows. One common thought process is:
- AI will get smarter than humans
- AI will be able to self improve
- AI will then be benevolent
- AI will solve all health/scarcity problems
- AI will keep us alive forever
- AI will help us connect our brains to machines
- From this point, basically anything
This is a common dream for people in the sub, and it's influenced more and more by popular media (Sword Art Online, Upload (Amazon), etc). What's complicated about this is that while it's all basically fantasy or idealism after point 2, the first two points feel increasingly likely. And that opens up all kinds of doors. So I think you'll increasingly see these and other, more fantastical dreams.
I just like to focus on the tech.
> I can’t really wrap my head around all the information and terms. Like those weekly AI news posts with all the things that happen in a week make no sense to me. I have no clue whats going on really. I’m inclined to believe we are really on the precipice of huge change since so many people talk as if we are. Although I don’t get the same enthusiasm outside of this subreddit. Its not really talked about in the news or governmentally.
It's starting to happen outside of this group, but yeah, there are years and years of terminology and concepts that can be overwhelming for someone who is new to it all. Feel free to ask questions, though - many people here are willing to help and answer.
> These are just my personal thoughts and to add some discussion aspect to this posts I’ll end of with a question. When do you think these advancements in AI/technology really start to seep into the inner workings of our society and make noticeable change for the layman?
I think it will start now and grow bigger soon. I think Google Docs/Microsoft Office are going to be the big ones, but we are now getting these tools inside apps like Slack as well.
This means that people will start using them every day for work, which will usher in the broader public discourse.
TFenrir t1_jadrb3u wrote
Reply to When will AI develop faster than white collar workers can reskill through education? by just-a-dreamer-
I think if we can get a really good, probably sparsely activated, multimodal model that can do continual learning with transfer - à la Pathways - many white collar jobs are done.
Any system that has continual learning would, I think, also have short/medium/whatever-term memory, and a context window that can handle enough at once to rival what we can hold in mind at any given time.
But the thing is, unlike with biological systems, I think there are many inefficient ways to get us there as well. A very dense model that is big enough, with a better fine-tuning process, might be all we need. Or maybe the bottleneck right now is really context - in-context learning is quite powerful, so what if we suddenly have an efficiency breakthrough, a Transformer 2.0 that allows for context windows of 1 million tokens?
Also, maybe we don't need multimodal per se - maybe a system trained on pixels would cover all bases.
TFenrir t1_jacuy0q wrote
Reply to Context-window of how many token necessary for LLM to build a new Google Chrome from scratch ? by IluvBsissa
I think this is really hard to predict, because there are many different paths forward. What if LLMs get good at writing minified code directly? What if they make their own software language? What happens with new architectures that have something like... RETRO or similar memory stores built in? Heck, even current vector stores allow for some really impressive things. There are tons of architectures that could potentially come into play that would make a maximum context window of 32k tokens more than enough - or maybe 100k is needed. There was a paper I read a while back that was experimenting with context windows that large.
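As a toy sketch of what I mean by vector stores stretching a fixed window - chunk the codebase, embed the chunks, and only pull the most relevant ones into the prompt. The embed function here is a hypothetical stand-in for any embedding API:

```typescript
// Hypothetical stand-in for any embedding API call.
declare function embed(text: string): Promise<number[]>;

// Plain cosine similarity between two embedding vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Pick the k chunks most relevant to the query, so the prompt can
// stay inside a 32k (or whatever) token window instead of holding
// the entire codebase at once.
async function topChunks(query: string, chunks: string[], k = 5): Promise<string[]> {
  const qVec = await embed(query);
  const scored = await Promise.all(
    chunks.map(async (c) => ({ c, score: cosine(qVec, await embed(c)) }))
  );
  return scored.sort((x, y) => y.score - x.score).slice(0, k).map((s) => s.c);
}
```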
Also, you should look into Google's Pitchfork, the code name for a project Google is working on that is essentially an LLM tied to a codebase, which can iteratively improve it through natural language requests.
My gut says that by this summer we will start to see very interesting small apps built with unique architectures, where LLMs iteratively improve a codebase. I don't know where it will go from there.
TFenrir t1_ja9dwyk wrote
Is there something particularly romantic or praiseworthy in menial work to you? Do you think a single mother who would rather spend time with her family than stock shelves is acting immorally?
TFenrir t1_ja5m755 wrote
Reply to comment by Baetallo in Can we discuss idiocy of Deepmind’s decision to develop an AI to play a board game with limited degrees of freedom when compared to OpenAi’s decision to develop an ai to play a video game with nigh infinite degrees of freedom? by [deleted]
Okay let me try a different response. What efforts do you think DeepMind has had in creating models/agents that can play video games?
TFenrir t1_ja55hrf wrote
Reply to Can we discuss idiocy of Deepmind’s decision to develop an AI to play a board game with limited degrees of freedom when compared to OpenAi’s decision to develop an ai to play a video game with nigh infinite degrees of freedom? by [deleted]
? Idiocy? You think anyone at DeepMind is an idiot?
Look, we can talk about the value of solving something like Go/Baduk, even if just in getting people to understand the power of RL (it was predicted it would be another decade before any system could beat a master), or we can talk about the fact that DeepMind has dozens of concurrent projects running at any given time, with models trained on everything from Atari games to StarCraft, models embodied in robots, and models trained to stabilize plasma... But what I think is more important is that you try to remember that none of these organizations is composed of idiots.
They are made up of international teams, filled to the brim with people who have excelled in their careers and usually in everything they have attempted in their lives. I don't think you need to show respect or anything dramatic like that, but if you are approaching your critique of their actions with the assumption that they are, like... dumb - don't. We can put aside the fact that even your critique here shows you don't have a good understanding of what they are working on, because before all that, you are doing yourself a disservice if you don't appreciate the depth of consideration that goes into basically every one of their decisions. Even if there are missteps and mistakes, none of them come from idiocy.
TFenrir t1_j9v3joj wrote
A lot of it has to do with computational intensity and latency. Text-to-audio and vice versa take a bit of time, and there are different challenges for local and cloud-based solutions.
Let's say you want a chatbot to reply to you in real time with audio, using a cloud-based solution.
First you speak to it in audio, and that audio is sent to a cloud server - this part is relatively fast, and is what already happens with things like Google Home/Alexa. Then the server needs to convert it to text and run that text through an LLM. Then the LLM creates a response, and that response needs to be converted back to audio.
Let's say that for a solution like we see with ElevenLabs, it takes 2 seconds of compute for every second of audio you want to generate. That means a 10-second reply takes 20 seconds to generate. That would be too slow.
You might have the opportunity to stream that audio by converting the text in pieces and playing the first piece while the rest generates, but these solutions work better when they are given more text to generate all at once... Generating a word at a time would be like talking with A. Period. In. Between. Every. Word.
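To make that arithmetic concrete, here's a toy sketch of the trade-off - synthesize is a hypothetical stand-in for any text-to-speech call, and the 2:1 ratio is the made-up number from above:

```typescript
// Hypothetical stand-in for a TTS call: text in, playable audio out.
declare function synthesize(text: string): Promise<ArrayBuffer>;

const GEN_RATIO = 2; // 2 seconds of compute per 1 second of output audio

// Expected wait before playback if you synthesize everything up front:
// a 10-second reply costs 10 * GEN_RATIO = 20 seconds of silence.
function naiveWaitSeconds(replyAudioSeconds: number): number {
  return replyAudioSeconds * GEN_RATIO;
}

// Streaming compromise: synthesize sentence by sentence, so playback
// starts as soon as the first sentence is ready - somewhere between
// word-at-a-time choppiness and waiting for the whole reply.
async function* streamReply(reply: string) {
  const sentences = reply.match(/[^.!?]+[.!?]+/g) ?? [reply];
  for (const sentence of sentences) {
    yield await synthesize(sentence);
  }
}
```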
TFenrir t1_j9pff6n wrote
Reply to Been reading Ray Kurzweil’s book “The Singularity is Near”. What should I read as a prerequisite to comprehend it? by Golfer345
Ironically, I think the best way to deal with terms in the book you don't understand is to just ask ChatGPT to explain them to you - either by giving it an excerpt, asking it to define a word, or asking it about the book, e.g. "In The Singularity is Near, what does Kurzweil mean by algorithmic improvements outpacing hardware?"
Btw, this is ChatGPT's answer:
> In "The Singularity is Near," Ray Kurzweil argues that we are approaching a point in the future where machine intelligence will surpass human intelligence, an event he refers to as the "Singularity." Kurzweil suggests that this rapid advance of technology will be driven by two major factors: the exponential growth of computing power and the development of more advanced algorithms.
> Regarding the concept of "algorithmic improvements outpacing hardware," Kurzweil is referring to the fact that the speed and power of computers are not the only factors that determine their capabilities. The software, or the algorithms that run on the hardware, also play a crucial role in what computers can accomplish.
> Kurzweil argues that as we continue to develop better algorithms, we will be able to do more with the same amount of computing power, or even less. This means that even if we reach a point where we cannot continue to improve the hardware, the development of more advanced algorithms will still enable us to create machines that are more intelligent and capable.
> Therefore, Kurzweil's idea of "algorithmic improvements outpacing hardware" suggests that the development of better algorithms will be a key factor in driving the progress of machine intelligence, even if we reach limits in terms of hardware capabilities.
TFenrir t1_j9p35nk wrote
Reply to Seriously people, please stop by Bakagami-
I generally agree, but sometimes I feel like there are very interesting conversations around improved functionality - I never share any of these myself, but if I saw one I would be interested. Here are a few examples from one particular person on Twitter who likes to put Bing through the wringer:
https://twitter.com/emollick/status/1628605530963845123?t=V63nQ-OLGRhUbaeNTA5hlA&s=19
https://twitter.com/emollick/status/1626084142239649792?t=RLI3NAv6CqjahpbZ3ZMy9g&s=19
There are more from that user that are interesting, and one of the interesting things is just how much more sophisticated Bing is at lying/hallucinating.
In general, I don't think these tweets are thread-worthy, but there are Bing updates expected today, for example, that might improve the quality of its responses or add more options - and maybe in the near future, updates that give Bing access to more tools (à la Toolformer) - so I wouldn't want a hard and fast "don't share any chatbot outputs" rule.
TFenrir t1_j9iaxdg wrote
Reply to comment by turnip_burrito in OpenAI has privately announced a new developer product called Foundry by flowday
I really want to see how coherent and sensible it can be at 32k tokens, with a fundamentally better model. Could it write a whole short story off a prompt?
TFenrir t1_jefhazp wrote
Reply to comment by monsieurpooh in When will AI actually start taking jobs? by Weeb_Geek_7779
That's fair. I think there might be other jobs that are similar here or there, but I use it as a good example of a job that I feel could be replaced almost whole hog by just one new AI app.