WarAndGeese t1_jdyi94w wrote
Reply to [D] FOMO on the rapid pace of LLMs by 00001746
You are thinking about it backwards. This stuff is happening now and you are a part of it. You are among the last people who are "missing out"; you are right in the centre of it as it is happening.
WarAndGeese t1_jdy5z29 wrote
Reply to comment by tt54l32v in [D]GPT-4 might be able to tell you if it hallucinated by Cool_Abbreviations_9
I'll call them applications rather than neural networks or LLMs for simplicity.
The first application is just what OP is doing and what people are talking about in this thread, that is, asking for sources.
The second application has access to research paper databases, presumably through some API. For each answer that the first application outputs, the second application queries it against the databases. If it gets a match, it returns a success. If it does not find the paper (either because it doesn't exist or because the title was too different from that of a real paper; either case is plausible), it outputs that it was not found. For each paper that was not found, it outputs "This paper does not exist, please correct your citation". That output is then fed back into the first application.
Now, this second application could be a sort of database query or it could just consist of a second neural network being asked "Does this paper exist?". The former might work better but the latter would also work.
The separation is for simplicity's sake; I suppose you could have one neural network doing both things. As long as each call to the neural network is well defined, it doesn't really matter: the network wouldn't have memory between calls, so functionally it should be the same. Nevertheless, I say two in the same way that you can have two microservices running behind a web application: it can be easier to maintain and easier to think about. A rough sketch of the loop follows below.
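A minimal sketch of that loop, assuming a hypothetical `ask_llm` wrapper for the first application and Crossref's public search API as one example of a database the second could query (any paper database with a search endpoint would do):

```python
# A rough sketch of the loop described above. `ask_llm` is a hypothetical
# stand-in for whatever model plays the first application; Crossref's public
# REST API stands in for the paper database behind the second. Both are
# illustrative assumptions, not the only way to wire this up.
import requests


def ask_llm(prompt: str) -> list[str]:
    """First application: answer the prompt with a list of citation titles."""
    raise NotImplementedError  # call your LLM of choice here


def paper_exists(title: str) -> bool:
    """Second application: look the title up in a research-paper database."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 1},
        timeout=10,
    )
    items = resp.json()["message"]["items"]
    if not items:
        return False
    found = (items[0].get("title") or [""])[0]
    # Exact comparison treats a title "too different from that of a real
    # paper" as not found; a fuzzy match could loosen this.
    return found.lower() == title.lower()


def cite_with_verification(question: str, max_rounds: int = 3) -> list[str]:
    citations = ask_llm(question)
    for _ in range(max_rounds):
        missing = [t for t in citations if not paper_exists(t)]
        if not missing:
            return citations  # every citation matched the database
        # Feed the failures back into the first application.
        feedback = "\n".join(
            f'"{t}" does not exist, please correct your citation.' for t in missing
        )
        citations = ask_llm(question + "\n" + feedback)
    return citations  # best effort after max_rounds
```

Here `paper_exists` is the "database query" variant; replacing its body with a second LLM call asking "Does this paper exist?" gives the other variant.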
WarAndGeese t1_jdubx7q wrote
Reply to comment by BullockHouse in [D]GPT-4 might be able to tell you if it hallucinated by Cool_Abbreviations_9
Also, if the second neural network runs as a separate internet-connected application, it can go out and verify the output of the first, send back its results, and tell the first to either change or remove each paper that it cannot find and verify. The second neural network can make errors as well, but through these interconnected checks the overall error rate can be reduced substantially.
WarAndGeese t1_jdt8f3u wrote
Reply to comment by bjj_starter in [D] GPT4 and coding problems by enryu42
Arguments against solipsism are strong enough that it is reasonable to assume other humans, and by extension other animals, are conscious. One knows that one is conscious. One, even without completely understanding how it works, understands that consciousness historically and materially developed somehow. One knows that other humans both act as one does and went through the same developmental process, evolutionarily, biologically, and so on. It's reasonable to assume that whatever inner workings produced consciousness in one's own mind also developed in others' minds, through the same biological processes. Hence it's reasonable to assume that other humans are conscious; indeed, that their being conscious is the most likely situation. This thinking can be expanded to include animals, even if they have higher or lower levels of consciousness and understanding than we do.
With machines you have a fundamentally different 'brain structure', and one that was pretty fundamentally designed to mimic. Whereas consciousness in humans arose independently and spontaneously, a machine's behaviour was built to imitate ours, so the argument that any given AI isn't conscious is much stronger than the argument that any given human isn't.
WarAndGeese t1_jdl5t0z wrote
Reply to comment by mxby7e in [R] Hello Dolly: Democratizing the magic of ChatGPT with open models by austintackaberry
Boo hoo to OpenAI; people should do it anyway. Are the terms of service the only reason not to, or are there actual material barriers? If it's a problem of money, then as long as people know how much is needed, it can be crowdfunded. If it's a matter of people power, then there are already large volunteer networks. Or is it just something that isn't practical or feasible?
WarAndGeese t1_jdl5aq6 wrote
Reply to comment by kromem in [R] Hello Dolly: Democratizing the magic of ChatGPT with open models by austintackaberry
That would be pretty nuts and pretty cool. It's still a weird concept, but if it becomes like an operating system that you update, that would be a thing.
WarAndGeese t1_jb0rsum wrote
Reply to comment by currentscurrents in [N] EleutherAI has formed a non-profit by StellaAthena
My mistake, it is a funny and good joke I just overreacted. I see too many non-ironic statements like that and it clouded my vision.
WarAndGeese t1_jax2k9d wrote
Reply to comment by WarAndGeese in [P] LazyShell - GPT based autocomplete for zsh by rumovoice
I think I would prefer that this not end up being the case, but I can see the trajectory by which it would.
WarAndGeese t1_jax2je6 wrote
I imagine that things like this will be the future of interacting with computers, at least to a large extent, but it's frustrating how people sacrifice certainty for "the probability of it being right is good enough".
WarAndGeese t1_jaqz7b1 wrote
Reply to comment by WarAndGeese in [N] EleutherAI has formed a non-profit by StellaAthena
These things need to be free and open source, not have some profit motive behind them. The day that changes, interest in the project is lost and people will look for some other 'free' or 'eleuther' project.
WarAndGeese t1_jaqz31a wrote
Reply to comment by currentscurrents in [N] EleutherAI has formed a non-profit by StellaAthena
Why? That would be the end of it. If your comment was sarcastic then pardon my overreaction.
WarAndGeese t1_jalq339 wrote
Reply to comment by Educational-Net303 in [D] OpenAI introduces ChatGPT and Whisper APIs (ChatGPT API is 1/10th the cost of GPT-3 API) by minimaxir
Don't let it demotivate competitors. OpenAI is making money somehow, and planning to make massive amounts more. Hence the space is ripe for tons of competition, and other companies would also be on track to make tons of money. So jump in, competitors; the market is waiting for you.
WarAndGeese t1_ja7z1kn wrote
Reply to comment by YobaiYamete in New SOTA LLM called LLaMA releases today by Meta AI 🫡 by Pro_RazE
Are you sure that YouChat is running on a version of GPT? (Presumably you mean OpenAI's software.) I was speaking to a founder of a company that had a partnership with You.com, and he said that they roll their own machine learning stack, that they (You.com) were already machine learning experts.
WarAndGeese t1_ja2wp7i wrote
Reply to comment by Scarlet_pot2 in Bernie Sanders proposes taxes on robots that take jobs by Scarlet_pot2
Hopefully just the threat.
WarAndGeese t1_ja2duq1 wrote
Great stuff
WarAndGeese t1_ja28t3v wrote
Reply to comment by RavenWolf1 in Bernie Sanders proposes taxes on robots that take jobs by Scarlet_pot2
Yes. The problem isn't that we shouldn't be doing it; the problem is that we haven't been doing it up until now.
Of course, it's not as though we would come up with specific taxes on spreadsheet software or calculators. The financial gains from those are supposed to funnel down into profit that we tax. However, there are such flaws in the tax structure that the gains aren't funneling down, so we aren't effectively collecting a share of the benefits we get from things like spreadsheet software and calculators.
WarAndGeese t1_j9zqih4 wrote
People should really focus on ideas. He is just a dude, and evidently a cult formed around him. I have stayed away in part from certain movements like effective altruism, despite independently coming to the same logical conclusions long before hearing about them, because of my suspicion that a lot of those in the movement were pursuing social status. That further seemed to develop into a cult. That's not to say the effective altruist community is uniquely cult-y; it's probably less so than most other human communities. But for a community that also calls itself rationalist, you would think it would have disposed of that sort of behaviour long ago.
In short, he's just a guy. People should stop focussing so much on individuals like that and focus instead on the ideas: the potential impending threats of artificial intelligence, as well as other progress for humanity.
WarAndGeese t1_j9zp404 wrote
Reply to comment by diviludicrum in Stephen Wolfram on Chat GPT by cancolak
With humans we can safely assume that solipsism is not the case. With artificial intelligence, though, we don't really know one way or the other. Hence we need to understand consciousness, to understand sentience; then, if we want to build it, we can build it. If we don't understand what sentience is, though, then yes, like you say, we wouldn't actually know whether an artificial intelligence is aware. I guess part of the idea for some people is that this discovery will emerge along the way as we try to build artificial intelligence, but for now we don't seem to know.
WarAndGeese t1_j9zohuk wrote
Reply to comment by RiotNrrd2001 in Stephen Wolfram on Chat GPT by cancolak
I think that's the natural order of the world. Thoughts and inventions get re-thought and re-invented many times over, and the first several times usually don't get written down, or they only get repeated in local conversations. Hence I agree that it still counts.
WarAndGeese t1_j9zmvgx wrote
Reply to comment by YaAbsolyutnoNikto in Bernie Sanders proposes taxes on robots that take jobs by Scarlet_pot2
Can't tax profit if corporations don't post profits. There's a reason that companies focus on growth and artificially inflate expenses where they can, rearranging their accounting to minimize reported profits.
Now, the growth that happens as a result is good, because it translates to more production for society; however, if the tax system were better, we could already have short work schedules and UBI.
WarAndGeese t1_j9zm18l wrote
> Then when unemployment is up and people are desperate, the socialists can purpose a UBI.
The UBI has to happen now, not as a response. Otherwise a war could well break out, and UBI would instead come about through a revolutionary struggle in which a lot of people would die.
WarAndGeese t1_j9zllpi wrote
Reply to comment by WarAndGeese in Microsoft Has Crazy Plans For The Future - Crushing Google Is Only An Afterthought For Them by LesleyFair
Again, the article is well researched and links its sources and everything; I guess my comment belongs elsewhere, in the cases where other people keep framing things in terms of companies and corporations.
WarAndGeese t1_j9zlf9t wrote
Reply to Microsoft Has Crazy Plans For The Future - Crushing Google Is Only An Afterthought For Them by LesleyFair
Why is this framed in the context of corporate worship? If things go as they should, then Microsoft and Google would cease to exist.
It's a well-written article, so sorry for the contrarian comment. My comment isn't a response to your article but to the regular framing people have of "this technology means X company will beat Y company". Who cares about these companies, or about the people in them? Again, if things go as they should, the companies would functionally cease to exist.
WarAndGeese t1_j9zi7ft wrote
Reply to comment by Johnny_WakeUp in New SOTA LLM called LLaMA releases today by Meta AI 🫡 by Pro_RazE
Yes there are, but I don't know all of them. Note though that Stable Diffusion blew DALL-E 2 and Imagen out of the water: because it was free and open source, it was much more widely used. DALL-E is probably still going to be used heavily in industry, but closed and expensive tools tend to lose out to free and open source ones. That's what has happened so far with image generation models, and it's likely to happen with large language models and other models as well.
WarAndGeese t1_jdyi9nm wrote
Reply to [D] FOMO on the rapid pace of LLMs by 00001746
I think a lot of people have bought into the false notion that their identity is their job, because there is such a strong material incentive for that to be the case.
Also note that people seem to like drama, so they egg on and encourage posts about people being upset or emotional, when in fact those cases aren't that representative, and the cases themselves are exaggerated for the sake of the drama.