1. Symbolic (1956)
2. Neural networks (1954, 1960, 1969, 1986, 2006, …)
3. Traditional robotics (1968)
4. Behavior-based robotics (1985)

Neural networks, as you see, have a spotty history. The basic idea is relatively old (as work in AI goes). 1986 marks the advent of back-propagation along with multilayered networks, while 2006 marks some new techniques ("deep learning"), much more computing power, and huge sets of training data. I found this discussion particularly useful. He shows us the following photo:
A Google program was able to generate the caption, "A group of young people playing a game of Frisbee". Brooks goes on to note:
I think this is when people really started to take notice of Deep Learning. It seemed miraculous, even to AI researchers, and perhaps especially to researchers in symbolic AI, that a program could do this well. But I also think that people confused performance with competence (referring again to my seven deadly sins post). If a person had this level of performance, and could say this about that photo, then one would naturally expect that the person had enough competence in understanding the world that they could probably answer each of the following questions:

- what is the shape of a Frisbee?
- roughly how far can a person throw a Frisbee?
- can a person eat a Frisbee?
- roughly how many people play Frisbee at once?
- can a 3 month old person play Frisbee?
- is today's weather suitable for playing Frisbee?

But the Deep Learning neural network that produced the caption above can not answer these questions. It certainly has no idea what a question is, and can only output words, not take them in, but it doesn't even have any of the knowledge that would be needed to answer these questions buried anywhere inside what it has learned.

Brooks' own work has been in the fourth approach, behavior-based robotics, where he is a pioneer. He remarks:

...I started to reflect on how well insects were able to navigate in the real world, and how they were doing so with very few neurons (certainly less than the number of artificial neurons in modern Deep Learning networks). In thinking about how this could be, I realized that the evolutionary path that had led to simple creatures probably had not started out by building a symbolic or three dimensional modeling system for the world. Rather it must have begun by very simple connections between perceptions and actions.

In the behavior-based approach that this thinking has led to, there are many parallel behaviors running all at once, trying to make sense of little slices of perception, and using them to drive simple actions in the world. Often behaviors propose conflicting commands for the robot's actuators and there has to be some sort of conflict resolution. But not wanting to get stuck going back to the need for a full model of the world, the conflict resolution mechanism is necessarily heuristic in nature. It is, one might guess, just the sort of thing that evolution would produce.

Behavior-based systems work because the demands of physics on a body embedded in the world force the ultimate conflict resolution between behaviors and their interactions. Furthermore, by being embedded in a physical world, as a system moves about it detects new physical constraints, or constraints from other agents in the world.

Finally, Brooks has created a predictions scorecard in three areas: self-driving cars, AI and machine learning, and the space industry. He first posted it on January 1, 2018 and has updated it on January 1 of 2019 and again on January 1, 2020. The list contains (I would guess) over 50 specific items distributed over those categories, with specific dates attached. It makes for very interesting reading.
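Brooks' behavior-based scheme, with many parallel behaviors each proposing actions and a heuristic mechanism resolving their conflicts, can be sketched in a few lines of code. The following is a minimal, hypothetical Python sketch in the spirit of his subsumption architecture; the behavior names, sensor fields, and fixed-priority scheme are my own illustrative inventions, not Brooks' actual design.

```python
# Hypothetical sketch of behavior-based arbitration.
# All names (behaviors, sensor fields, priorities) are illustrative.

from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Command:
    """A proposed actuator command: forward speed and turn rate."""
    speed: float
    turn: float

# Each behavior maps a small slice of perception (a sensor dict)
# to a command, or None when it has nothing to propose.
Behavior = Callable[[dict], Optional[Command]]

def avoid_obstacle(sensors: dict) -> Optional[Command]:
    # Fires only when something is close ahead; stop and turn away.
    if sensors.get("range_ahead", float("inf")) < 0.5:
        return Command(speed=0.0, turn=1.0)
    return None

def wander(sensors: dict) -> Optional[Command]:
    # Default behavior: drift forward.
    return Command(speed=0.3, turn=0.0)

def arbitrate(behaviors: List[Behavior], sensors: dict) -> Command:
    """Heuristic conflict resolution via a fixed priority ordering:
    the first (highest-priority) behavior that proposes a command
    suppresses all lower ones."""
    for behavior in behaviors:
        command = behavior(sensors)
        if command is not None:
            return command
    return Command(speed=0.0, turn=0.0)  # nothing fired: stand still

# Priority order: collision avoidance suppresses wandering.
stack = [avoid_obstacle, wander]

print(arbitrate(stack, {"range_ahead": 0.2}))  # obstacle close: turn away
print(arbitrate(stack, {"range_ahead": 3.0}))  # path clear: wander forward
```

Fixed-priority suppression is only one of many possible heuristics (voting and weighted blending are others); the point the sketch illustrates is that no behavior consults a full world model, only its own slice of perception.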