Submitted by mossadnik t3_10sw51o in technology
henningknows t1_j73ugi1 wrote
I haven’t seen a single thing about this that makes me think its widespread use is a benefit to society. Mostly it looks like it will cause problems
ex_sanguination t1_j73y4cz wrote
Meh, this happened during the industrial age as well. It's just new technology making certain jobs/roles obsolete. It's heartbreaking for those who are affected, but it's a step in the right direction as a society.
I_ONLY_PLAY_4C_LOAM t1_j74f5si wrote
I don't think it's made very much of anything obsolete. It's still pretty shit. If anything, it's degrading the quality of content on the internet and making it less useful.
lycheedorito t1_j7556hw wrote
You can already catch ChatGPT responses on Reddit, and ArtStation has recently been flooded with AI art... They now have a filter, but it doesn't catch people being fraudulent about authenticity. Both of these things make me less inclined to engage or care. I suppose if you are completely unaware of it you might not notice, but people who are aware do. Is the idea that we'll all just tell AI to respond to everything for us, so we're just proxies for artificial conversation?
I_ONLY_PLAY_4C_LOAM t1_j758lje wrote
It makes the internet shittier
ex_sanguination t1_j74gk2s wrote
Oh for sure, it hasn't caused any major upheavals yet. But once it's refined it'll start to make a more noticeable impact. This is all in the future, though. Give it 10 years? But who knows, maybe this is the same hoopla as self-driving cars were back in 2015.
I_ONLY_PLAY_4C_LOAM t1_j74hkx2 wrote
The difference being that self driving cars work pretty well and are already being offered to the public.
ex_sanguination t1_j74i8h3 wrote
Right, but the fear that taxi services, truck drivers, delivery drivers, etc. would be replaced was blown out of proportion. Can it still happen? Sure. But people were saying that by the early '20s there would be massive change in the workforce.
Trotskyist t1_j74wmh1 wrote
I mean, self-driving taxis are a thing now in several cities/states and are actively expanding into new markets. Obviously, it hasn't taken over yet and become the norm (if it does at all) but it's absolutely a growing industry.
henningknows t1_j73ycc6 wrote
Why is it a step in the right direction?
joanmave t1_j74ffqx wrote
Because it provides value. It generates answers to questions that would otherwise require more extensive due diligence. Instead of a human scouring the internet for answers, it can directly and comprehensively answer the question in the context it was asked, with explanations. For instance, software developers are using it to get recommendations of actual code implementations that actually work, solving problems much faster and being more productive.
Edit: I want to add that the answers are very specific to the problem stated by the user. ChatGPT does not provide a general answer but a very specific answer for the problem at hand.
henningknows t1_j74fzs4 wrote
Fair enough. I can see that being useful once all the kinks are worked out.
ex_sanguination t1_j73zgq2 wrote
Customer service roles across the board. It frees up time for workers to handle more important/critical-thinking tasks vs. simple customer-service-based ones. It's a fantastic tool to bounce ideas off of, cultivating a person's or staff's creativity. It's also brilliant at taking information and writing articles/inquiries.
Regulation will be needed, but overall it's a net gain.
henningknows t1_j73zwqa wrote
Yeah, well when I start reading that schools are considering not having written assignments anymore, that worries me. People need to learn how to do things like that and think for themselves.
ex_sanguination t1_j740u29 wrote
Understandable, and like I said, regulation is going to be needed. But ask yourself this: kids' curricula nowadays (USA) are test-based and involve little critical thinking. The fact that a fledgling AI can pass as a high school student is an issue, but is it an AI problem or an issue with how our schools operate/teach?
Also, software to recognize AI-generated content is already being made, and I'm sure schools will implement a submission system that verifies their students' work.
henningknows t1_j7414m9 wrote
Yeah, I hear that. They have my kid memorizing spellings and definitions of words. He gets an A on every test, then forgets all of it a few weeks later.
ex_sanguination t1_j741dxy wrote
It's all above my pay grade, and I don't envy you as a parent in today's climate, but I'm sure your little one's gonna be alright :)
Hell, remember cursive? 🤣
Art-Zuron t1_j742rzs wrote
I still write in pseudo-cursive to this day, and, while people say it's pretty, it's also a bitch to read.
demonicneon t1_j74pfi3 wrote
You should try to learn architectural print. I also wrote cursive but switched in uni, and it's more legible and I write just as fast.
Jaysnewphone t1_j747nik wrote
I remember it but I don't remember why I had to learn it.
demonicneon t1_j74pcgn wrote
Memorisation and spelling are good. It wasn't long ago people were saying autocorrect meant you didn't need to learn spelling, which is basically the same argument as this atm, but for longer-form writing.
Memorisation and spelling without putting them into practice, i.e. writing essays and reports and fiction, are bad because, as you say, people just forget it if they don't use it.
Trotskyist t1_j74x4um wrote
>Also, software to recognize AI generated content is already being made and I'm sure schools will implement a submit system that verifies their students work.
I wouldn't be so sure. As soon as an algorithm is created to detect AI content, that exact same model can and will be used to further train the neural network to avoid detection. This is the basic premise behind generative adversarial networks (GANs), one of the bigger ML techniques.
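The cat-and-mouse loop described above can be sketched in a few lines of toy Python. To be clear, this is a deliberately simplified stand-in, not a real GAN or detector; both functions below are hypothetical and the "score" is just a placeholder for a model's parameters:

```python
def detector(style_score: float) -> bool:
    # Toy "AI-content detector": flags anything whose style score
    # falls below a fixed threshold as machine-generated.
    return style_score < 0.8

def train_generator(steps: int = 100) -> float:
    # Toy "generator": starts out producing low-scoring output and
    # nudges its parameter every time the detector catches it.
    # This feedback loop is the adversarial premise: the detector's
    # own signal is what trains the generator to evade it.
    score = 0.1
    for _ in range(steps):
        if detector(score):   # caught -> use the signal to improve
            score += 0.05
        else:                 # no longer detected -> stop training
            break
    return score

final = train_generator()
print(detector(final))  # the trained generator now evades the detector
```

The point is that any published detector hands its adversary a training signal, which is why detection is unlikely to stay ahead for long.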
lycheedorito t1_j755lf9 wrote
And it will catch false positives and people will be punished for having done nothing.
lycheedorito t1_j755hy5 wrote
Regulation isn't going to do shit.
AccomplishedBerry625 t1_j74bdtp wrote
It happened with Google and Wikipedia as well.
Personally I think it’s just Google search on steroids, like a natural language API for Google Dorks
I_ONLY_PLAY_4C_LOAM t1_j74fcwl wrote
> it’s just Google search on steroids
This is incredibly ignorant lol
KeepTangoAndFoxtrot t1_j740xp5 wrote
Do you mean just in general? I'm in edutech and we're working on the development of a tool that will build personalized lesson plans for teachers using something similar to ChatGPT. So there's one thing for you!
henningknows t1_j741a49 wrote
Well for my job it’s not there yet. Everything I tried to use it for, it gave me nothing useful
Fake_William_Shatner t1_j748na3 wrote
When you work at a law firm and the AI is doing the work of artists and writers, you might be able to tell them: "Be flexible, find another career."
When you hear about an AI creating legal documents and helping people in court. "Everybody sue this guy!!!!" Hey, and you could probably use an AI Lawyer to write that lawsuit -- make sure to send a LOT of them. Bankrupt the business before they can test it out!
I_ONLY_PLAY_4C_LOAM t1_j74ft6v wrote
I'm not convinced that this technology, in its current form, will replace lawyers. It lacks the precision required by legal reasoning and still gets shit wrong all the time. Furthermore, as a software engineer, I have doubts about whether this tech is capable of solving these problems without radical new ideas. I foresee a lot of people giving themselves a lot of headaches by thinking they can rely on this technology, but not much more than that.
likethatwhenigothere t1_j74qmyo wrote
I asked it something today and it came back with an answer that seemed correct. I then asked for it to give me examples. It gave two examples and the way it was written seemed absolutely plausible. However I knew the examples and knew that they were wrong. It gave other examples that I couldn't verify anywhere, yet as I asked more questions it kept doubling down on the previous examples.
I won't go into detail about what I was asking, but it basically said the Nintendo logo was made up of three rings to represent three core values of the business. I went through Nintendo's logo history to see if it ever had three rings and as far I can tell it didn't. So fuck knows where it got the info from.
I_ONLY_PLAY_4C_LOAM t1_j74rwgf wrote
It's just giving you a plausible and probabilistically likely answer. It has absolutely no model of what is and isn't true.
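That "probabilistically likely, not true" distinction can be made concrete with a toy sampler. The table and its numbers below are invented purely for illustration; a real LLM has billions of parameters, but the principle is the same: the model's table has a likelihood column and no truth column.

```python
import random

# Hypothetical continuation probabilities, as if learned from text
# statistics alone. Note that nothing marks any entry as factual.
NEXT_PHRASE_PROBS = {
    "the Nintendo logo represents": [
        ("three core values", 0.5),    # plausible-sounding, but false
        ("the company's name", 0.3),
        ("quality and fun", 0.2),
    ],
}

def continue_text(prompt: str) -> str:
    # Sample a continuation weighted by likelihood, as in
    # temperature-based decoding: the most probable phrase wins most
    # often, regardless of whether it is true.
    phrases, weights = zip(*NEXT_PHRASE_PROBS[prompt])
    return random.choices(phrases, weights=weights, k=1)[0]

print(continue_text("the Nintendo logo represents"))
```

A confident-sounding wrong answer like the three-rings story is exactly what you'd expect when the false continuation happens to be the statistically likely one.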
likethatwhenigothere t1_j76c7nb wrote
But aren't people using it as a factual tool and not just getting it to write content that could be 'plausible'? There's been talk about this changing the world, how it passed medical and law exams, which obviously need to be factual. Surely if there's a lack of trust in the information it's providing, people are going to be uncertain about using it. If you have to fact-check everything it provides, you might as well just do the research/work yourself, because you're effectively doubling up the work: you're checking all the work ChatGPT does and then having to fix any errors it's made.
Here's what I actually asked ChatGPT in regard to my previous comment.
I asked if the Borromean symbol (three interlinked rings) was popular in Japanese history. It stated it was, and gave me a little bit of history about how it became popular. I asked it to provide examples of where it can be seen. It came back saying temple gates, family crests, etc. But it also said it was still widely used today and could be seen in Japanese advertising, branding, and product packaging. I asked for an example of branding where it's used. It responded...
"One example of modern usage of the Borromean rings is in the logo of the Japanese video game company, Nintendo. The three interlocking rings symbolize the company's commitment to producing quality video games that bring people together".
Now that is something that can be easily checked and confirmed or refuted. But what if it's providing a response that can't be?
Fake_William_Shatner t1_j77obea wrote
These people don't seem to know the distinctions you are bringing up. Basically, it's like expecting someone in the middle ages to tell you how a rocket works.
The comments are "evil" or "good" and don't get that "evil and good" are results based on the data and the algorithm employed and how they were introduced to each other.
Chat GPT isn't just one thing. And if it's giving accurate or creative results, that's influenced by prompts, the dataset it is drawing from, and the vagaries of what set of algorithms they are using that day -- I'm sure it's constantly being tweaked.
And based on the tweaks, people have gotten wildly different results over time. It can be used to give accurate and useful code -- because they sourced that data from working code and set it to "not be creative" -- and its understanding of human language helps it do a much better job of searching for the right code to cut and paste. There's a difference between term papers, a legal document, and a fictional story.
The current AI systems have shown they can "seem to comprehend" what people are saying and give them a creative and/or useful response. So that I think, proves it can do something easier like legal advice. A procedural body of rules with specific results and no fiction is ridiculously simple compared to creative writing or carrying on a conversation with people.
We THINK walking and talking are easy because almost everybody does it. However, for most people -- it's the most complicated thing they've ever learned how to do. The hardest things have already been done quite well with AI -- so it's only a matter of time that they can do simpler things.
Getting a law degree does require SOME logic and creativity -- but it's mostly memorizing a lot of statutes, procedures, case law and rules. It's beyond ridiculous if we think THIS is going to be that hard for AI if they can converse and make good art.
ritchie70 t1_j75anat wrote
I played with it today. It wrote two charming children’s stories, a very simple program in C, a blog post about the benefits of children learning ballet, a 500 word essay about cat claws, answered a “how do I” question about Excel, and composed a very typical corporate email.
Of the fact based items, they were correct.
I may use it in future if I need an especially ass-kissy email.
Fake_William_Shatner t1_j77myki wrote
>I went through Nintendo's logo history to see if it ever had three rings and as far I can tell it didn't.
You are working with a "creative AI" that is designed to give you a result you "like." Not one that is accurate.
AI can definitely be developed and trained on case law and give you valid answers. Whether or not they've done it with this tool is a very geeky question that requires people to look at the data and code.
Most of these discussions are off track because they base "can it be done" by current experience -- when the people don't even really know what tool was used.
lycheedorito t1_j754icl wrote
It won't replace artists either. Like the chat model, it gets shit wrong and doesn't understand what it's making; you still need artists who understand art to curate and fix things at the very least. Every time I explain this it feels like I'm talking to a wall, which is not surprising. Probably the same for writing, or music, or whatever.
I_ONLY_PLAY_4C_LOAM t1_j755070 wrote
Artists can also respond much more accurately to feedback
lycheedorito t1_j75er4t wrote
And they can give feedback
KSRandom195 t1_j74njna wrote
Everyone will say this about their pet industry.
“Clearly my industry is harder than all the others because <reason>.”
No, your pet industry isn’t special, it will either be replaced or not like all the others.
Being a technical person, I don’t think AI is where it needs to be yet to replace practically any industries. If I’m wrong, it’s not really a problem I was going to be able to deal with anyway.
demonicneon t1_j74p1a9 wrote
It’s still very much a tool.
KSRandom195 t1_j74p4uz wrote
As a tool I see great potential. As a replacement I do not.
I_ONLY_PLAY_4C_LOAM t1_j74rg37 wrote
Having actually worked in legal technology, I'm honestly not sure what this does for existing lawyers. As I said before, legal documents require extremely specific and precise language. Lawyers are likely to have templates for common documents their firms create, and anything beyond that requires actually knowing about law, which LLMs like ChatGPT are not capable of. The actual money to be made in legal technology is not in generative AI, but in document processing and search. Lawyers are increasingly having to deal with hundreds of gigabytes or even terabytes of documents in a given case. OCR, which is also AI and is seeing use in the industry, makes handwriting searchable. Advanced search techniques make legal review, the real driver of cost in the legal industry, faster and cheaper. Making legal arguments in court is not the reason why interaction with the legal system can be so expensive.
Fake_William_Shatner t1_j77p3ko wrote
>legal documents require extremely specific and precise language.
Which computer software is really good at -- even before the improvements of AI.
>and anything beyond that requires actually knowing about law, which LLMs like ChatGPT are not capable of.
Yeah, lawyers memorize a lot of stuff and go to expensive schools. That doesn't mean it's actually all that complicated relative to programming, creating art or designing a mechanical arm.
I agree that document processing and search are going to see a lot of growth with AI. But being able to type in a few details about a case and have a legal document created, a discovery, and a bulk of all the bread and butter that is using the same templates over and over again with a few sentences changing -- that's going to be AI.
Most of what paralegals and lawyers do is repetitive and not all that creative.
I_ONLY_PLAY_4C_LOAM t1_j74pox9 wrote
This attitude that tech bros have about disrupting industries they don't actually understand or know anything about is pretty funny sometimes.
Fake_William_Shatner t1_j77pz70 wrote
"Tech bros"? These are AI developers. If they team up with some lawyers to double-check and they get good case law data -- I can guarantee you it isn't a huge jump to create a disruptive AI based on that.
Revisit these comments in about a year. The main thing that will hinder AI in the legal world is humans suing to keep it from being allowed. Of course, all those attorneys will use it and then proof the output. And sign their names. And appear in court with nice suits and make deals. And they won't let AI be used in court because it is not allowed. For reasons.
The excuse that it can give an inaccurate result does put people at risk, so more effort is required for accuracy. But AI will be able to pass the bar exam more easily than it can beat a human at chess.
It's not funny, but sad, that people are trying to convince themselves this is more complicated than writing a novel or creating art.
I_ONLY_PLAY_4C_LOAM t1_j77zlt0 wrote
RemindMe! one year
Has the machine consciousness supplanted the fleshy meat bags in the legal industry.
Fake_William_Shatner t1_j78fcnq wrote
No -- I didn't say it would replace them. The legal system won't allow it.
I'm saying it will be used to create legal documents and win cases -- albeit with the pages printed out before they go in the courthouse.
This isn't about the acceptance, but the capabilities. If there is one group that can protect their industry it's the justice system.
henningknows t1_j7494j0 wrote
It will be interesting to see how some of those ideas pan out. AI lawyers will definitely be put to the test quickly, with the first lawsuits deciding whether it will be legal or smart to have an AI that can create legal docs. As for writing, we have already seen you can't patent work created by AI, and my assumption is search engines will learn to identify and de-rank AI-written articles.
Fake_William_Shatner t1_j74cr07 wrote
I have full confidence in America's legal system to protect itself from innovation, efficiency and fairness.
No longer will an expensive lawyer make it possible for some people to stay out of jail, or to bury anyone who challenges a corporation in a two-tiered justice system -- well, that's just not going to happen on their watch.
henningknows t1_j74d0ii wrote
Not totally sold that ai could bring that change. I can agree on the legal system sucking
RollingTater t1_j74h8q5 wrote
Eventually AI will be able to build a better legal defense than a human can, and in that case it would be unethical to give people human defense teams.
However, that day is not today. ChatGPT has no hard internal logic. You can trick it into doing bad math for example, or sometimes it writes code that is just wrong.
I'm no lawyer but I'm assuming legal defense requires some sort of presentation of factual evidence, logic, and verification of that evidence. Right now you can't guarantee the AI hasn't just spat out a huge document of gibberish that looks right but has a hidden logical flaw.
henningknows t1_j74hll3 wrote
What makes you think an ai can make a better legal defense? You understand winning a court case is about persuading a jury just as much as having the law on your side.
RollingTater t1_j74hz4u wrote
Persuasion is the one thing chatgpt can do really well. That's something that doesn't require any hard logic. And it's also why this tech is dangerously deceptive, it will be so persuasively correct until it's not.
VectorB t1_j74o49s wrote
Wonderful, our system is not based on rules or fairness, but on the quality of the charisma rolls your lawyer makes.
henningknows t1_j74p6ec wrote
No, it’s based on money
lycheedorito t1_j755rt6 wrote
Also AI is trained on existing things by humans. It's not going to do better than what it is trained on.
LionTigerWings t1_j74xlgy wrote
Could be really beneficial for education. It could essentially act as a tutor that's available 24/7. Schools will need to adapt to avoid cheating, but it should be possible to do that: make testing and writing live with pen/paper.
Imagine asking it for clarification on a topic you're having trouble understanding, or asking it how to solve a math problem you're struggling with. Currently it's wrong from time to time, but it can be improved upon, especially by feeding it textbooks.
FlackRacket t1_j74viym wrote
There will be some benefits... Even with humans making all the decisions, a legal assistant AI bot might be able to tell you what the "proper" course of action is, according to the law, and let humans leverage it for context.
An AI bot might also be able to tell you where new laws are in conflict, making it easier to keep the legal system clean(er)
[deleted] t1_j77cxjg wrote
[deleted]
BidnessBoy t1_j74lm7b wrote
You’re also reading about this through sources that are inherently biased, simply because this AI is a threat to the jobs of the writers who create these articles and the editors who review them.
PedroEglasias t1_j74wrdh wrote
It's not that different from just Googling a question and repeating the first result
Druggedhippo t1_j75jxy1 wrote
If you ask ChatGPT the lifespan of an arctic fox, it'll give you the same result as Google would in the first result.
> Arctic foxes live for 3 or 4 years in the wild and up to 14 years in captivity.
But the real power of ChatGPT isn't that it can output a result; it's that it has a primarily conversational aspect and the ability to merge multiple things into a coherent discussion whilst remembering what you said earlier.
Let's say you want to expand on that, so you ask ChatGPT "what about its diet?" It gives you a good result. To formulate that in Google, you'd have to write "arctic fox diet", and you'd have to open a new tab or perform a new search.
This is important, as the new search has no context; each search is independent of the others. With ChatGPT, it remembers that you are searching about Arctic foxes.
And it can link your context with entirely different subject matters.
> write a short owners manual
Suddenly it spits out multiple levels of items about how to look after an Arctic fox. Point 4 is enrichment. So you ask
> write more detail about point 4
And it spits out a paragraph about how the arctic fox needs toys and enclosures and such. Take note that I didn't ask it about enrichment; I typed "point 4", and it KNEW I was talking about point 4 in the result it had given me. And now you want to be able to have your 10-year-old help you look after it.
> write that for a 10 year old
Bam, now your owners manual is using words and phrases appropriate for someone 10 years old. Now you want that in German?
> Translate to german
And oh no, your new Arctic fox you bought is lost. You have to do a lost poster. How would you even do one? Well, ask ChatGPT
> write a lost poster for an arctic fox
>Here's a sample lost poster for an Arctic fox:
> LOST: Arctic Fox > Description: Our beloved Arctic fox has gone missing and we are very worried. He is a small, white-furred fox with pointy ears and a bushy tail. He is friendly but shy, so please approach him slowly.
> Last seen: [Insert location and date]
> If found, please contact: [Insert phone number and/or email address]
> Reward: [Insert amount or item for reward if desired]
> We miss our furry friend and are eager to bring him back home. Any information or sightings of the Arctic fox would be greatly appreciated. Thank you for your help!
Not only did it write the poster text, it added, on its own, additional descriptive text like the color, tail, and behaviour (shy!). That is linking things I didn't ask about or even know at that point. If I had told it that my Arctic fox only had 3 legs, you would have gotten something more like this
> my fox only has 3 legs, rewrite the poster
> Sure, here's a revised version of the lost poster for an Arctic fox with 3 legs:
> LOST: 3-Legged Arctic Fox > Description: Our dear Arctic fox, who has only three legs, has gone missing and we are heartbroken. He is a small, white-furred fox with pointy ears and a bushy tail. He moves with a unique gait and may be timid, so please approach him slowly.
--
Now I'm not saying you can't get many of these facts from Google, but the ability to speak in "plain language" and have it give back "plain language" is the game changer here.
Also note that above, where I mentioned point 4: Google Nest can't understand what I'm talking about when I say "turn on the bedroom lights and set them to 100%". It doesn't understand that "them"; it chokes because it has no context. Now imagine something like ChatGPT powering the text parsing. Its understanding of context makes it light years ahead of existing public implementations. I fully expect personal assistants like Siri, Alexa, Bixby, and Cortana to be revolutionized.
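The "point 4" trick boils down to one design decision: the client keeps the whole conversation and feeds it back to the model on every turn, so earlier answers are part of the input. Here's a rough sketch of that idea; the message format is hypothetical (loosely modeled on how chat APIs structure history), and `toy_model` is a trivial stand-in, not a real language model:

```python
# Running transcript: every user turn and assistant reply is appended,
# and the *entire* list is handed to the model on each new question.
history = []

def ask(model, user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    reply = model(history)  # the model sees all prior turns, not just this one
    history.append({"role": "assistant", "content": reply})
    return reply

def toy_model(messages):
    # Stand-in "model": when asked about "point 4", it scans its own
    # earlier reply in the transcript to resolve what point 4 was.
    last_user = messages[-1]["content"]
    if "point 4" in last_user:
        for m in reversed(messages[:-1]):
            if m["role"] == "assistant" and "4." in m["content"]:
                item = m["content"].split("4.")[1].strip()
                return f"More detail on {item}..."
    return "1. housing 2. diet 3. vet care 4. enrichment"

ask(toy_model, "write a short owners manual")
print(ask(toy_model, "write more detail about point 4"))
```

A stateless search box is the same function called with a one-item history every time, which is exactly why "them" or "point 4" means nothing to it.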
PedroEglasias t1_j75kqzw wrote
Oh, I 100% agree it's more powerful than a Google search. I'm a dev and I use it every day to save me sifting through StackExchange results. I'm just pointing out to the haters that it's not that different from Googling information; it just saves you converting the results into a coherent/salient argument.
EasterBunnyArt t1_j752n2u wrote
henningknows t1_j75379t wrote
Sorry, couldn’t make it past his pitch for the vpn he was selling
pwalkz t1_j75hjgr wrote
We haven't seen anything yet really. Just getting started