Submitted by lavaboosted t3_10a0zgy in MachineLearning

Theo Jansen, inventor of the Strandbeest, explains in one of his videos that he used the principle of evolution to figure out the thirteen holy numbers using a computer program which he wrote in 1990. Would this be considered machine learning, or is an evolutionary/selective-breeding algorithm on its own not considered ML?

The Strandbeest leg has 13 dimensions, and he wanted to find the ideal length of each so that the foot would trace a stepping motion: "a curve which was flat on the bottom". His program generated batches of 1500 legs with randomized dimensions and chose the best from each batch as the basis for the next batch.
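In case it helps the discussion, here's roughly how I picture that loop working. This is just a sketch I put together, not his actual program; `simulate_foot_path`, the scoring function, and the mutation size are all placeholders I made up:

```python
import random

NUM_DIMENSIONS = 13   # the leg's link lengths
BATCH_SIZE = 1500     # legs per generation, as described in the video
MUTATION = 2.0        # made-up limit on how far a child length may drift

def simulate_foot_path(lengths):
    """Placeholder: run the linkage through one crank rotation and
    return the (x, y) points traced by the foot."""
    raise NotImplementedError

def score(path):
    """Placeholder: higher is better (see the scoring question below)."""
    raise NotImplementedError

def evolve(generations=100):
    # start from one random leg
    best = [random.uniform(10, 80) for _ in range(NUM_DIMENSIONS)]
    for _ in range(generations):
        # breed a batch of slightly mutated copies of the current best leg
        batch = [
            [max(1.0, length + random.uniform(-MUTATION, MUTATION)) for length in best]
            for _ in range(BATCH_SIZE)
        ]
        # keep the best leg of the batch as the basis for the next batch
        best = max(batch, key=lambda leg: score(simulate_foot_path(leg)))
    return best
```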

I wonder how he scored the curves. I know he wanted a flat bottom, but I'd think he also wanted some way to score the stride length and height to avoid getting curves that just move back and forth along a tiny straight line. I can imagine maybe using the average difference of the y-coordinates of points sampled over the curve, or maybe some calculus? If you have any ideas about how to score a good step curve, or if you know how he did it, I'd love to know.
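To make my guess concrete, here's one way the "flat bottom plus decent stride" idea could be scored, assuming the foot path comes in as a list of (x, y) points; the 25% ground-contact cutoff and the weights are numbers I made up:

```python
def score_step_curve(path, ground_fraction=0.25, w_flat=1.0, w_stride=1.0):
    """Score a foot path: reward a long stride whose lowest section is flat.

    path: list of (x, y) points sampled over one full crank rotation.
    """
    ys = sorted(y for _, y in path)

    # treat the lowest ~25% of points as the ground-contact part of the step
    cutoff = ys[int(len(ys) * ground_fraction)]
    ground = [(x, y) for x, y in path if y <= cutoff]

    # flatness: average deviation of ground-contact heights from their mean
    mean_y = sum(y for _, y in ground) / len(ground)
    flatness_error = sum(abs(y - mean_y) for _, y in ground) / len(ground)

    # stride: horizontal extent covered while "on the ground"
    stride = max(x for x, _ in ground) - min(x for x, _ in ground)

    # long, flat contact scores high; wobbly or tiny contact scores low
    return w_stride * stride - w_flat * flatness_error
```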

Finally, I wonder if he has revisited this problem with modern computing power to see if he can find even more optimized dimensions. I'd be shocked if others haven't already done this. If you know where to find more info on Theo's process, the computer program, or modern advancements of the Strandbeest using machine learning, please let me know; I'd love to discuss more.

2

Comments


piffcty t1_j42iqxl wrote

Given that we consider self-play in reinforcement learning to be machine learning, I think it's appropriate to think of genetic algorithms as an early form of machine learning.

5

lavaboosted OP t1_j432q0k wrote

Cool, thanks. I hadn't thought of the connection to board games like chess, where engines sometimes learn through self-play.

2

CurrentMaleficent714 t1_j42g03f wrote

Genetic algorithms are optimisation heuristics. They're not machine learning per se.

2

lavaboosted OP t1_j431okr wrote

Interesting, thanks. It seems a lot of people do lump it in with machine learning, such as this video, which uses a neural network and an evolutionary algorithm to teach a car to drive around a track. Does the use of a feed-forward neural network make it qualify as machine learning, or still no? Or is it just a gray area?

1

he_who_floats_amogus t1_j44kquo wrote

You can use all kinds of algorithms in machine learning. This is a “uses a” relationship rather than an equivalence relationship, in this case. If I’m building a piece of furniture, I am a carpenter. I could employ the use of a hammer to help me build the furniture. The hammer is not a carpenter.

I think you can imagine that the machine learning approach in that video may also rely on various data structures including graphs, trees, etc, and perhaps many other things which are also not machine learning.

2

CurrentMaleficent714 t1_j47pgku wrote

Machine learning is about learning from data. How you do that is wide open, but usually there is an optimisation algorithm involved somewhere or other. The optimisation algorithm itself does not learn from data; it is a tool that is applied in some scheme to learn from data.

1

the_scign t1_j43jx5z wrote

I tend to think of "machine learning" as the use of some automated algorithm to learn a ruleset as opposed to manually programming that ruleset. More often than not this algorithm requires some external dataset from which to learn the rules but in this case the algorithm is using another ruleset configured by Jansen to learn the rules. In that sense, since there was an automated algorithm that generated a "model" that abided by a set of externally provided rules, I would class this as machine learning.

That said, some people consider only scenarios where external data points were provided, rather than a set of rules, as machine learning. They may be right and I may be wrong - I'm open to debate on that.

2

lavaboosted OP t1_j43xkbz wrote

Yeah, it seems that this is an old question without an agreed-upon answer. I've seen a lot of YouTube videos that use this method and claim to be AI, but maybe there's a more agreed-upon definition in academia or industry. It's not a big deal either way, really; I was just curious, but I think I'll just go with "it depends who you ask".

2

he_who_floats_amogus t1_j44k1rd wrote

I'm going to say no. Machine learning is a field largely dedicated to methodology that improves the performance of an agent at fulfilling some task, whereas a genetic algorithm is a heuristic approach that can be used to find optimal (enough) solutions to some specific problem, which is how it's used in this case.

It’s not a good or bad thing, these are just categorical descriptions of types of things, which are meant to help us delineate. All algorithms that produce an output have now “learned” something (the output!), but to say that any machine that could be interpreted as having learned something is tantamount to machine learning is too broad to be linguistically useful, and isn’t what is being denoted by the categorical description.

1

lavaboosted OP t1_j44t5xi wrote

That makes sense. After all, it doesn't make sense to call something artificial intelligence if it doesn't act intelligently. I feel like it boils down to this: once the machine/neural net/function has learned to do a task that reaches a certain level of general applicability/usefulness, it can be considered AI / machine learning. And it makes sense that people draw this line at different points and disagree about where it should be drawn.

For example, if you train a car to drive around a track but the track is a fixed width, it's possible that you trained only a single parameter: the amount the car should turn based on the difference between the distances to the left and right walls. Once that number is dialed in, the car will be able to handle any track of that fixed width and will look pretty smart, but it could have been achieved with a simple function instead of a neural network.
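To show what I mean by a single parameter, something as small as this would do it; the sensor setup and the gain value here are hypothetical:

```python
def steering_angle(dist_left, dist_right, gain=0.8):
    """Toy wall-following controller: turn toward whichever side has more room.

    dist_left / dist_right: readings from two range sensors pointed at the
    track walls; gain is the one number that gets "trained".
    """
    return gain * (dist_left - dist_right)
```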

I've heard similar concerns raised about AI in radiology for cancer screening, since there is no way to actually know what factors the neural network is considering, so it's possible that it was making the judgement based on something completely unrelated to the cancer. I tried to find a source for that, but hopefully you get what I mean; basically just the black-box problem.

1

he_who_floats_amogus t1_j44x3vm wrote

I don't think it's quite as arbitrary as you're making it out to be. I haven't perfectly defined the concept of a task here, but a core concept of ML is that it's focused on the learning itself rather than producing a solution to some problem statement. The idea of learning implies an element of generalization, but that's different than general applicability or usefulness. The agent working on the task is our abstraction layer; our algorithm should work on the agent rather than producing the solution to the agent's task. Through some defined process, you're to create a generalized form of knowledge in that agent, without solutions for specific cases being explicitly programmed.

If you train a NN to generate a representative knowledge model that solves a "simple" problem that could have been solved with an explicit solution, you're still doing ML. It's not about how complicated the problem is to tackle, or how generally applicable or useful the result is, but whether what you have explicitly programmed is the direct solution to some problem, or is itself a modeled form of learning that can be applied to some agent that can then go on to solve the problem.

In the Strandbeest example, the program that is running is not modeling any learning. There is no agent. The output of the program is a direct solution to the problem rather than some embodied form of knowledge an agent might use to solve a generalized form of the problem. It's not ML and it's not a fuzzy question, at least in this case, imho. There could perhaps be cases or situations where there is more fuzziness, but this isn't it.

Optimization, including heuristic optimization as in genetic algorithms, could find applied use in ML, but they are not themselves ML, and the use of a genetic algorithm to solve a problem explicitly is not ML.

3

lavaboosted OP t1_j479g4e wrote

>If you train a NN to generate a representative knowledge model that solves a "simple" problem that could have been solved with an explicit solution, you're still doing ML.

I guess my question would be when do you know that what you have is a representative knowledge model rather than just a simple function? Another question that might help clear it up for me is - what would have to change in order for the strandbeest program to be considered machine learning?

1

TheGreatHomer t1_j41i6hm wrote

I'm pretty sure it's not ML by definition. Oxford definition:

>the use and development of computer systems that are able to learn and adapt without following explicit instructions, by using algorithms and statistical models to analyse and draw inferences from patterns in data.

There is no data(set) involved in evolutionary algorithms, so it's not ML. Genetic algorithms are usually seen as (a part of) AI, though.

0

pucklermuskau t1_j41nsa8 wrote

What do you mean when you say there's no dataset? There is data, in the orientation of the structure.

1

TheGreatHomer t1_j41rida wrote

There is no dataset from which you learn patterns. You usually evaluate candidate objects, which are then used again for mutation based on their performance.

Of course it's not happening in a vacuum, but that's not what "data" usually means.

1

lavaboosted OP t1_j41w1fi wrote

The data is the curve generated by each leg orientation. Each curve in the batch is then scored based on some criteria. If that isn't machine learning then neither is using a neural network and evolutionary algorithm and I think most people would say that it is.

2

TheGreatHomer t1_j42zoqf wrote

It generates data. It doesn't take data and learn patterns from that data.

If you have a very specific opinion and get defensive when someone disagrees, why pose it as a question instead of just stating your opinion?

−1

lavaboosted OP t1_j430t1u wrote

I'm not trying to be defensive, just wanted to have the discussion and see what other people's takes on this were. What do you think of the car example?

2

TheGreatHomer t1_j433r8m wrote

>What do you think of the car example

I haven't read the paper, but only watched the brief video. I wouldn't say that's Machine Learning either.

Maybe a bad analogy, but one I can come up with on the spot: a hinge isn't carpentry but metalwork, and pretty much everyone agrees on that. Now if you build a wooden cabinet, you are probably using hinges; nevertheless, you'd still call the cabinet itself carpentry, not metalwork.

Anyway, the definitions aren't clear and consistent enough to make super good and objectively true distinctions. In the end it often boils down to personal subjective interpretations.

Edit: Especially the classification of evolutionary algorithms has been an ongoing discussion for, like, decades. Which goes to show that there probably isn't an objectively right clear classification - if only because people don't agree on a single definition of Machine Learning as is. However, by the most common definitions that I know, evolutionary computation is its own subfield next to ML.

0

pucklermuskau t1_j44znly wrote

It takes data, evaluates that data against a performance metric, and then adapts the structure in response, creating new data.

2