WarAndGeese

WarAndGeese t1_jdyi9nm wrote

I think a lot of people have falsely bought the concept that their identity is their job, because there is such material incentive for that to be the case.

Also note that people seem to like drama, so they egg on and encourage posts about people being upset or emotional, even though those cases aren't that representative, and are themselves exaggerated for the sake of the drama.

14

WarAndGeese t1_jdyi94w wrote

You are thinking about it backwards. This stuff is happening now and you are a part of it. You are among the last people who are "missing out"; you are at the centre of it as it is happening.

26

WarAndGeese t1_jdy5z29 wrote

I'll call them applications rather than neural networks or LLMs for simplicity.

The first application is just what OP is doing and what people are talking about in this thread, that is, asking for sources.

The second application has access to research paper databases, through some API presumably. For each answer that the first application outputs, the second application queries it against the databases. If it gets a match, it returns a success. If it does not find the paper (this could be because it doesn't exist or because the title was too different from that of a real paper, either case is reasonable), it outputs that it was not found. For each paper that was not found, it outputs "This paper does not exist, please correct your citation". That output is then fed back into the first application.

Now, this second application could be a sort of database query or it could just consist of a second neural network being asked "Does this paper exist?". The former might work better but the latter would also work.

The separation is for simplicity's sake, I guess you can have one neural network doing both things. As long as each call to the neural network is well defined it doesn't really matter. The neural network wouldn't have memory between calls so functionally it should be the same. Nevertheless I say two in the same way that you can have two microservices running on a web application. It can be easier to maintain and just easier to think about.
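A minimal sketch of what the second application could look like. The "database" here is just an in-memory set standing in for a real paper-index API (Crossref or Semantic Scholar would be candidates); the titles, the fuzzy-matching approach, and the threshold are all illustrative assumptions, not a real implementation.

```python
from difflib import SequenceMatcher

# Stand-in for a research paper database reached over an API.
KNOWN_PAPERS = {
    "attention is all you need",
    "deep residual learning for image recognition",
}

def paper_exists(title, threshold=0.9):
    """Second application: fuzzy-match a cited title against the database,
    so that small wording differences still count as a match."""
    t = title.lower().strip()
    return any(SequenceMatcher(None, t, known).ratio() >= threshold
               for known in KNOWN_PAPERS)

def verify_citations(citations):
    """Build the feedback lines that get sent back to the first application."""
    feedback = []
    for title in citations:
        if not paper_exists(title):
            feedback.append(
                f'This paper does not exist, please correct your citation: "{title}"'
            )
    return feedback

feedback = verify_citations([
    "Attention Is All You Need",          # matches the database
    "A Totally Fabricated Survey of Everything",  # does not
])
```

Swapping the fuzzy matcher for a second "Does this paper exist?" model call, as suggested above, would only change the body of `paper_exists`.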

2

WarAndGeese t1_jdubx7q wrote

Also, if the second neural network is running as a separate internet-connected application, it can go out and verify the output of the first, send back its results, and tell the first to either change or remove each paper that it cannot find and verify. The second neural network can make errors as well, but through these interconnected systems errors can be substantially reduced.
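The back-and-forth described here can be sketched as a simple loop: the verifier's findings are fed back to the generator, which retries until everything checks out or a retry budget runs out. `generate` and `verify` are hypothetical stand-ins for calls to the two applications.

```python
def correction_loop(generate, verify, max_rounds=3):
    """Feed the verifier's complaints back to the generator until it
    produces an answer with no unverifiable citations, or give up."""
    feedback = None
    for _ in range(max_rounds):
        answer = generate(feedback)
        feedback = verify(answer)
        if not feedback:              # nothing left to complain about
            return answer, True
    return answer, False              # unverified; caller is told so

# Toy stand-ins: the generator "fixes" its citation once it gets feedback.
attempts = iter(["Fake Paper 2023", "Real Paper 2017"])
answer, ok = correction_loop(
    generate=lambda fb: next(attempts),
    verify=lambda a: [] if a == "Real Paper 2017" else [f"not found: {a}"],
)
```

Returning the "gave up" flag matters: since the verifier can itself be wrong, the caller should know whether the final answer ever passed the check.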

6

WarAndGeese t1_jdt8f3u wrote

Arguments against solipsism are reasonable enough to assume that other humans, and therefore other animals, are conscious. One knows that one is conscious. One, even if not completely understanding how it works, understands that it historically and materially developed somehow. One knows that other humans act like one does, and also that other humans have gone through the same developmental process, evolutionarily, biologically, and so on. It's reasonable to assume that whatever inner workings developed consciousness in one's mind would have also developed in others' minds, through the same biological processes. Hence it's reasonable to assume that other humans are conscious, even that this is the most likely situation. This thinking can be expanded to include animals, even if they have higher or lower levels of consciousness and understanding than we do.

With machines you have a fundamentally different 'brain structure', and one that was pretty much designed from the start to mimic. Whereas consciousness in humans can be assumed to have arisen independently and spontaneously, that assumption doesn't transfer: the case for doubting that any given AI is conscious is much stronger than the case for doubting that any given human is.

1

WarAndGeese t1_jdl5t0z wrote

Boo hoo to openai, people should do it anyway. Is the terms of service the only reason not to do it or are there actual material barriers? If it's a problem of money then as long as people know how much money it can be crowdfunded. If it's a matter of people power then there are already large volunteer networks. Or is it just something that isn't practical or feasible?

7

WarAndGeese t1_jalq339 wrote

Don't let it demotivate competitors. They are making money somehow, and planning to make massive amounts more. Hence the space is ripe for tons of competition, and those other companies would also be on track to make tons of money. Hence, jump in competitors, the market is waiting for you.

−1

WarAndGeese t1_ja28t3v wrote

Yes, the problem isn't that we shouldn't be doing it; the problem is that we haven't been doing it up until now.

Of course, it's not like we would come up with specific taxes on spreadsheet software and calculators. The financial gains from those are supposed to funnel their way down into profit that we tax, however there are such flaws in the tax structure that they aren't funneling their way down, so we aren't effectively taxing to collect some of the benefits that we get out of things like spreadsheet software and calculators.

1

WarAndGeese t1_j9zqih4 wrote

People should really focus on ideas. He is just a dude, and evidently a cult formed around him. I have stayed away in part from certain movements like effective altruism, despite independently coming to the same logical conclusions long before hearing about them, because of my suspicion that a lot of those in the movement were pursuing social status. That further seemed to develop into a cult. That's not to say that the effective altruist community is uniquely cult-y, it's probably less so than most other human communities, but for a community that also calls itself rationalist you would think they would have disposed of that sort of behaviour long ago.

In short he's just a guy, people should stop focussing so much on people like that, and people should focus on the ideas like the potential impending threats of artificial intelligence, as well as other progress for humanity.

2

WarAndGeese t1_j9zp404 wrote

With humans we can safely assume that solipsism is not the case. With artificial intelligence, though, we don't really know one way or the other. Hence we need to understand consciousness, to understand sentience, and then if we want to build it we can build it. If we don't understand what sentience is, then yes, like you say, we wouldn't actually know if an artificial intelligence is aware. I guess part of the idea for some people is that this discovery will come about in the course of trying to build an artificial intelligence, but for now we don't seem to know.

1

WarAndGeese t1_j9zohuk wrote

I think that's the natural order of the world. Thoughts and inventions get re-thought and re-invented so many times, and the first many times usually don't get written down. Or they get repeated multiple times in local conversations. Hence I agree that it still counts.

1

WarAndGeese t1_j9zmvgx wrote

Can't tax profit if corporations don't post profits. There's a reason that companies focus on growth and artificially inflate expenses: they rearrange their accounting to minimize reported profits.

Now, the growth that happens as a result is good because it translates to more production for society, but if the tax system were better then we could already have shorter work schedules and UBI.

1

WarAndGeese t1_j9zlf9t wrote

Why is this framed in the context of corporate worship? If things go as they should, then Microsoft and Google would cease to exist.

It's a well-written article, so sorry for the contrarian comment. My comment isn't a response to your article, but to the regular framing people have of "this technology means X company will beat Y company". Who cares about these companies or about the people in them? Again, if things go as they should, the companies would functionally cease to exist.

2

WarAndGeese t1_j9zi7ft wrote

Yes there are, but I don't know all of them. Note though that Stable Diffusion blew DALL-E 2 and Imagen out of the water. Because it was free and open source, it was much more widely used. Now DALL-E is probably still going to be used heavily in industry, but the closed and expensive tools tend to lose out to the free and open source ones. That's one thing that has happened so far with generative adversarial networks, and that's one thing that would likely happen with large language models and other models as well.

1