Submitted by Nightmarewasta t3_z8noi0 in explainlikeimfive
Comments
VanillaWaffle_ t1_iychikl wrote
that works with humans too
steelbreado t1_iycltbz wrote
Excellent
shompyblah t1_iyd6pzc wrote
….but a computer doesn’t care if it’s rewarded or punished.
Liese1otte t1_iydcsvk wrote
Correct!
However, this is just an analogy. In reality, programmers simulate "reward" and "punishment" using different techniques that effectively result in the same thing. In neural networks, for example, those are represented as weights and biases.
just_a_pyro t1_iyd8i1z wrote
It does, or maybe it simulates caring; that's the whole idea of machine learning. If it didn't care, there would be no reason to change the original random playing into better play.
beingsubmitted t1_iychax7 wrote
Machine learning, as people have pointed out, is broad. However, I think that understanding gradient descent in general really gets to the heart of most new applications (especially neural networks).
Gradient Descent is kind of like a game of hotter/colder. You start by walking in a completely random direction, and then someone tells you you're either getting warmer or getting colder.
A neural network starts similarly, taking its input, doing a bunch of random multiplications, and getting random output. Then you tell it what the answers should have been, and it knows how far off it was. Then it goes back to all those random variables (parameters), calculates how much each one contributed to it being wrong, and adjusts them ever so slightly so that they would have produced a better result.
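That adjust-the-parameters loop can be sketched in a few lines of Python. This is a deliberately tiny, made-up example with a single parameter (one weight, no full network), just to show the mechanics:

```python
# A minimal sketch of the loop above: predict y = w * x, measure how far
# off we were, and nudge w in the direction that would have been better.
def train(pairs, lr=0.01, steps=200):
    w = 0.5  # start from an arbitrary (random-ish) guess
    for _ in range(steps):
        for x, target in pairs:
            pred = w * x
            error = pred - target   # how far off we were
            grad = 2 * error * x    # this weight's share of the blame
            w -= lr * grad          # adjust it ever so slightly
    return w

# The data secretly follows y = 3x, so w should drift toward 3.
w = train([(1, 3), (2, 6), (3, 9)])
```

A real network just does this for millions of weights at once, with calculus (backpropagation) computing each weight's share of the blame.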
c00750ny3h t1_iycdozr wrote
Pretty broad question, but here's one application.
ML is about performing a brute-force search within a large data set and analyzing trends that converge on an answer, then building on that to further improve the model.
An example is chess. You can program in the chess rules very easily, i.e. knights move in an L shape, a king in check must move to safety, etc.
Creating an AI to play chess is the Machine Learning part.
The dumbest possible chess-playing strategy is to move pieces (within their constraints) at random. So you can run simulated chess games where two AIs move randomly, then analyze the games where black won and the games where white won to see if there was any common pattern to victory. It may be that games where either side started with a knight opening tended to end in victory, indicating that it is a strategic move. Then you can update the AI to incorporate that strategy for future games, repeat the simulations, continue to find trends resulting in victory, and continue to incorporate new strategies into the AI.
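That random self-play loop is easy to try on a smaller game. Here is a sketch in Python using tic-tac-toe instead of chess (self-contained, whereas real chess would need a move-generation library): play thousands of fully random games, then tally X's win rate by opening square to see which openings correlate with victory.

```python
import random

# All eight winning lines on a 3x3 board, by square index 0..8.
WINS = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def random_game(rng):
    # Both "AIs" move completely at random; record X's opening square.
    board = [None] * 9
    player, first_move = "X", None
    while winner(board) is None and None in board:
        move = rng.choice([i for i, v in enumerate(board) if v is None])
        if first_move is None:
            first_move = move
        board[move] = player
        player = "O" if player == "X" else "X"
    return first_move, winner(board)

rng = random.Random(0)
x_wins = {sq: 0 for sq in range(9)}
games = {sq: 0 for sq in range(9)}
for _ in range(20000):
    first, result = random_game(rng)
    games[first] += 1
    if result == "X":
        x_wins[first] += 1

# X's win rate by opening square; the centre (square 4) tends to come out
# on top, which a learning system could then adopt as a strategy.
rates = {sq: x_wins[sq] / games[sq] for sq in range(9)}
best = max(rates, key=rates.get)
```

"Update the AI to prefer `best`, then repeat" is exactly the iterate-and-improve loop described above, just at toy scale.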
IonizingKoala t1_iydp33x wrote
I think there's a distinction between AI and ML that needs to be made here. For example, Deep Blue, the chess machine that beat Kasparov, was certainly AI but had relatively little ML involved. Earlier chess computers were AI but had no ML.
regular-jackoff t1_iycf2ln wrote
Let’s play a game called “guess my age.”
You don’t know me. We are seated on either side of a wall in a room, you can’t see me, but we can hear each other.
The game goes like this: I tell you certain facts about myself, and you then use the information to guess my age.
I say “I play the banjo, I love reading and browsing Facebook. I graduated from high school several years ago. I have a pet dog.”
You say, “you are 42 years old”.
I say, “Not quite, you are off by 12. I’m actually 30.”
“Oh,” you say. “I should probably reevaluate my beliefs about people who browse Facebook in 2022. They are likely not as old as I previously thought.”
I now leave the room, only to be replaced with another individual who continues this very peculiar game.
“I play the flute, I hate reading and Twitter is my preferred social media fix,” they say. “I graduated only recently and I don’t have any pets.”
“You are most certainly 21,” you proclaim.
“Close but not quite,” comes the reply, “I’m actually 24.”
This goes on for several hours; you keep going through people and guesses, updating your beliefs along the way, until you have a very good idea of which facts about people are useful in predicting their age. E.g., you might conclude that a person's ability (or lack thereof) to play a musical instrument has no bearing on their age.
This is basically how machine learning algorithms work.
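The guessing game above can be written as code almost word for word: keep one weight per "fact", guess, hear the error, and adjust the weights. All the facts and numbers below are made up, and this is a least-mean-squares style sketch, not any particular library's algorithm:

```python
def guess_age(weights, facts):
    # A guess is just a weighted sum of the facts we were told.
    return sum(w * f for w, f in zip(weights, facts))

def update(weights, facts, true_age, lr=0.05):
    error = guess_age(weights, facts) - true_age
    # Every fact that was "on" takes a share of the blame for the miss.
    return [w - lr * error * f for w, f in zip(weights, facts)]

# Facts: [plays instrument, uses Facebook, uses Twitter, recent grad, always-on bias]
people = [
    ([1, 1, 0, 0, 1], 30),
    ([1, 0, 1, 1, 1], 24),
    ([0, 1, 0, 0, 1], 35),
    ([0, 0, 1, 0, 1], 22),
]
weights = [0.0] * 5
for _ in range(2000):          # keep playing the game, person after person
    for facts, age in people:
        weights = update(weights, facts, age)
```

For this toy data the guesses end up within a year of the true ages, and the final weights show which facts mattered, just like the conclusions you'd draw after hours of the game.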
Verence17 t1_iycens8 wrote
So, imagine playing a game: you are told a number, you add some X to that number, and you say the result. You will be told whether your result differs from the one the other person expected, so you have to guess the correct X.
"1. What do we want as a result?"
"Well, maybe X = 0? 1+0=1, my answer is 1."
"No, for 1 we need something bigger. Let's try again, what do we want to get for 2?"
"Then maybe X = 2? 2+2=4, my answer is 4."
"No, we need less than that. Another try: what do we want for 3?"
"So, X is bigger than 0 but smaller than 2... Maybe X = 1? 3+1=4, my answer is 4."
"Yes, that's what we needed, you guessed the correct X!"
In this scenario, "take a number and add X to it" is your algorithm and X is a parameter of that algorithm. You don't know the parameter beforehand; you guess it iteratively, using only the required answers.
It turns out we can construct an algorithm with quite a lot of parameters (possibly millions) in such a way that there will be possible values for those parameters which, in theory, give us good results for the task at hand. Not perfect, but good. We don't know exactly what these values are, only that they can exist. The task can even be as complex as showing the algorithm an image of a bird and expecting the answer "bird"; it can still work with some parameters unknown to us.
Learning methods allow the program, in a similar way to the example above, to start with a completely random guess and then tweak all these parameters in a more or less sensible way, based only on what the expected answer is. The math works out so that it will likely find better and better combinations, slowly, until it hits something that actually works to an extent. This process is what's called machine learning, and the set of values found for the parameters is called a model for this specific algorithm.
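The whole dialogue above fits in a few lines of Python: start with a wrong X, and after each question nudge X toward whatever would have made the last answer correct (the `0.5` step size is an arbitrary choice for this sketch):

```python
# Guess the unknown parameter X in "take a number and add X to it",
# using only (input, expected answer) pairs, never X itself.
def learn_x(examples, steps=100):
    x = 0.0  # initial guess, as good as any
    for _ in range(steps):
        for number, expected in examples:
            answer = number + x
            x += 0.5 * (expected - answer)  # move X partway toward the fix
    return x

# The hidden rule is "add 1", as in the dialogue: 1 -> 2, 2 -> 3, 3 -> 4.
x = learn_x([(1, 2), (2, 3), (3, 4)])
```

With millions of parameters instead of one, and calculus deciding the size of each nudge, this is the learning process the comment describes.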
bwibbler t1_iycj9fv wrote
You might be more or less asking how a neural network works. That's what a lot of people will be thinking about when they think of the phrase machine learning.
A neural network is only a category of machine learning. It's not the whole picture.
Machine learning can be something complex like a neural network. But it can also be something very simple, like MENACE: a simple-to-understand process, popularly known for learning tic-tac-toe.
Machine learning is all based on a goal, score, and reward/punishment system. It's a program that has a goal, gets scored somehow based on the results it gives, and receives a change relative to the difference.
The difference between the results and the goal is often called error. And the error is used to create the change. This change can be seen as a punishment or reward.
A* pathfinding isn't exactly machine learning, but I like to include it here too, because it also uses goal, score, and punishment/reward techniques. It can help give you the right idea of how to structure a computation to solve a problem.
A neural network is extremely difficult to wrap your head around. Particularly for obscure tasks like driving a car or creating images. They can be extraordinarily complex. It's a line formula (oftentimes multidimensional) that approximates a line you want given a set of point values as a goal. There's a lot of calculus and angry math involved.
Imagine trying to figure out a line formula that draws the path of a roller coaster. Then imagine a formula with variables that can be adjusted to draw the path of any roller coaster.
The Taylor series is, again, not machine learning, but it can give you a little taste of what it's like behind the scenes of a neural network. Some of the math is kinda similar.
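The roller-coaster idea can be made concrete with a formula whose variables are adjusted to match sampled track points. This sketch fits the parameters a, b, c of y = a*x² + b*x + c by gradient descent; the "track" is just a made-up parabola, and real networks do the same thing with vastly more parameters:

```python
# Sample points along a hidden track: y = 0.5*x^2 - x + 2 for x in [-2, 2].
points = [(x / 10, 0.5 * (x / 10) ** 2 - (x / 10) + 2) for x in range(-20, 21)]

a, b, c = 0.0, 0.0, 0.0  # start with a flat, wrong track
lr = 0.01
for _ in range(5000):
    for x, y in points:
        err = (a * x * x + b * x + c) - y
        a -= lr * err * x * x  # each parameter is nudged in proportion
        b -= lr * err * x      # to how much it contributed to the miss
        c -= lr * err
# a, b, c should end up near the hidden 0.5, -1, 2.
```

That is the "angry math" in miniature: a differentiable formula, an error, and calculus telling each adjustable variable which way to move.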
niloysh t1_iye08ng wrote
No one really knows. It's kind of a black box.
People think they know how it works but then a Tesla goes and hits someone or a security bot kicks a kid.
There's a whole subfield about AI explainability that's gained traction in recent years.
just_a_pyro t1_iycedq0 wrote
Imagine giving a monkey a piano to play randomly. Every time you like what it plays you give it a banana, and every time you don't you slap it with a rolled-up newspaper. Do it for a year or two and you get a composer monkey to tour the world with and make money. That's machine learning, just with a monkey instead of a computer simulating a brain.
You can’t define good music and can’t write a computer program to do it. Monkey doesn’t even know what music is, and would be totally lost if given a guitar instead. But the result works out.