
he_who_floats_amogus t1_j44k1rd wrote

I’m going to say no. Machine learning is a field largely dedicated to methodology that improves an agent’s performance at some task, whereas a genetic algorithm is a heuristic approach that can be used to find optimal (enough) solutions to a specific problem, which is how it is used in this case.

It’s not a good or bad thing; these are just categorical descriptions of types of things, meant to help us delineate. Any algorithm that produces an output has, in some sense, “learned” something (the output!), but to say that any machine that could be interpreted as having learned something is doing machine learning is too broad to be linguistically useful, and isn’t what the categorical description denotes.


lavaboosted OP t1_j44t5xi wrote

That makes sense. After all, it doesn't make sense to call something Artificial Intelligence if it doesn't act intelligently. I feel like it boils down to this: once the machine/neural net/function has learned to do a task with a certain level of general applicability or usefulness, it can be considered AI / machine learning. And it makes sense that people draw that line at different points and disagree about where it should be drawn.

For example, if you train a car to drive around a track of fixed width, it's possible that you effectively trained only a single parameter: how much the car should turn based on the difference between its distances to the left and right walls. Once that number is dialed in, the car can handle any track of that width and will look pretty smart, but the same behavior could have been achieved with a simple function instead of a neural network.
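Something like this, roughly (a toy sketch; the names and the constant `k` are made up, not from any real codebase):

```python
# Hypothetical single-parameter "driver": k is the only value training
# would have tuned; everything else is fixed geometry.
def steering_angle(left_dist: float, right_dist: float, k: float) -> float:
    # Steer toward the farther wall, proportionally to the imbalance.
    return k * (left_dist - right_dist)

# Once k is dialed in, this handles any track of that fixed width.
print(steering_angle(2.0, 1.0, k=0.8))  # 0.8, i.e. steer left
```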

I've heard similar concerns raised about AI in radiology for cancer screening: since there's no way to actually know what factors the neural network is considering, it's possible it was making the judgment based on something completely unrelated to the cancer. I tried to find a source for that, but hopefully you get what I mean; it's basically the black box problem.


he_who_floats_amogus t1_j44x3vm wrote

I don't think it's quite as arbitrary as you're making it out to be. I haven't perfectly defined the concept of a task here, but a core concept of ML is that it's focused on the learning itself rather than on producing a solution to some problem statement. The idea of learning implies an element of generalization, but that's different from general applicability or usefulness. The agent working on the task is our abstraction layer: our algorithm should work on the agent rather than produce the solution to the agent's task. Through some defined process, you create a generalized form of knowledge in that agent, without solutions for specific cases being explicitly programmed.

If you train a NN to generate a representative knowledge model that solves a "simple" problem that could have been solved with an explicit solution, you're still doing ML. It's not about how complicated the problem is to tackle, or how generally applicable or useful the result is, but whether what you have explicitly programmed is the direct solution to some problem, or is itself a modeled form of learning that can be applied to some agent that can then go on to solve the problem.
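To make that concrete, here's a toy contrast (everything in it is made up for illustration): the explicit program encodes the answer directly, while the ML version encodes only a learning rule and recovers the same answer from examples.

```python
import random

# Explicit solution: the mapping y = 2x is programmed directly.
def double(x: float) -> float:
    return 2.0 * x

# ML treatment of the same "simple" problem: learn the weight w in
# y = w*x from examples, by gradient descent on squared error.
def train_weight(samples, lr=0.01, epochs=200):
    w = random.random()
    for _ in range(epochs):
        for x, y in samples:
            err = w * x - y
            w -= lr * err * x  # gradient of (w*x - y)^2, up to a constant
    return w

data = [(float(x), 2.0 * x) for x in range(1, 10)]
w = train_weight(data)
print(double(3.0), w * 3.0)  # both ~6.0; only one of them "learned" it
```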

In the Strandbeest example, the program that is running is not modeling any learning. There is no agent. The output of the program is a direct solution to the problem rather than some embodied form of knowledge an agent might use to solve a generalized form of the problem. It's not ML, and it's not a fuzzy question, at least in this case, imho. There could perhaps be cases where there's more fuzziness, but this isn't one of them.
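For contrast, a Strandbeest-style run has roughly this shape (the fitness function below is a made-up stand-in for "how well these proportions trace a good walking path"): the best genome that falls out at the end is the answer itself, not a model an agent could reuse.

```python
import random

TARGET = [1.2, 0.5, 3.1]  # hypothetical ideal proportions

def fitness(genome):
    # Made-up objective: closeness to TARGET stands in for gait quality.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(genome, sigma=0.1):
    return [g + random.gauss(0.0, sigma) for g in genome]

# Evolve a population of candidate parameter vectors directly.
pop = [[random.uniform(0.0, 4.0) for _ in range(3)] for _ in range(50)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    children = [mutate(crossover(*random.sample(parents, 2)))
                for _ in range(40)]
    pop = parents + children

print(max(pop, key=fitness))  # the output IS the solution; no agent, no model
```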

Optimization techniques, including heuristic ones like genetic algorithms, can find applied use in ML, but they are not themselves ML, and using a genetic algorithm to solve a problem directly is not ML.


lavaboosted OP t1_j479g4e wrote

>If you train a NN to generate a representative knowledge model that solves a "simple" problem that could have been solved with an explicit solution, you're still doing ML.

I guess my question would be: when do you know that what you have is a representative knowledge model rather than just a simple function? Another question that might help clear it up for me: what would have to change for the strandbeest program to be considered machine learning?
