Chapter 7 is definitely where he gets into more detail about how he would actually create a mind. He explains that Siri and other similar speech recognition systems are based on hierarchical hidden Markov models (HHMMs): a set of states with transition probabilities between them, where each state acts like a pattern recognizer and the transitions play the role of effective synaptic strengths.
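Just to make the idea concrete for myself, here's a toy sketch of the Markov-chain part (this is not Kurzweil's actual system; the states and numbers are made up):

```python
import numpy as np

# Toy Markov model: a handful of hidden states (the "pattern recognizers")
# connected by transition probabilities (the "synaptic strengths").
# States and numbers are purely illustrative.

states = ["silence", "s", "ih", "r", "iy"]   # hypothetical phone-like states
A = np.array([                                # A[i][j] = P(next state j | current state i)
    [0.7, 0.3, 0.0, 0.0, 0.0],
    [0.0, 0.5, 0.5, 0.0, 0.0],
    [0.0, 0.0, 0.5, 0.5, 0.0],
    [0.0, 0.0, 0.0, 0.5, 0.5],
    [0.2, 0.0, 0.0, 0.0, 0.8],
])
pi = np.array([1.0, 0.0, 0.0, 0.0, 0.0])      # always start in "silence"

def sample_path(length, rng=np.random.default_rng(0)):
    """Sample a sequence of hidden states by walking the transition matrix."""
    path = [rng.choice(len(states), p=pi)]
    for _ in range(length - 1):
        path.append(rng.choice(len(states), p=A[path[-1]]))
    return [states[i] for i in path]

print(sample_path(10))
```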
Every learning algorithm has what Kurzweil calls "God parameters". In his HHMM model there were many parameters that had to be set at the initialization of the system. In order to choose those parameters optimally he used a genetic algorithm (GA), and this would lead to unexpected optimizations.
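A minimal sketch of what a GA over such parameters might look like (the fitness function below is just a stand-in for "train the recognizer with these initialization parameters and score it"; all the names and numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(params):
    """Placeholder for 'train the system with these parameters and
    measure recognition accuracy'. Here it just rewards closeness to a
    hypothetical sweet spot."""
    target = np.array([0.3, 1.5, -0.7, 2.0])
    return -np.sum((params - target) ** 2)

def genetic_algorithm(n_params=4, pop_size=50, generations=100, mutation_scale=0.1):
    pop = rng.normal(size=(pop_size, n_params))       # random initial population
    for _ in range(generations):
        scores = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]   # keep the fittest half
        # crossover: each child mixes two random parents, gene by gene
        idx = rng.integers(len(parents), size=(pop_size, 2))
        mask = rng.random((pop_size, n_params)) < 0.5
        children = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
        # mutation: small random perturbations
        pop = children + rng.normal(scale=mutation_scale, size=children.shape)
    return pop[np.argmax([fitness(p) for p in pop])]

print(genetic_algorithm())
```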
What was really fascinating was that they then altered the model in subtle ways - like adding leakage across Markov states. They would then repeat the GA and get comparable (maybe even better) prediction quality, but the GA-optimized parameters were totally different. If they reused the GA parameters from the original configuration, performance would go down.
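I'm only guessing at what "leakage" means here, but one simple reading is blending each row of the transition matrix with a uniform distribution, so a little probability bleeds from every state to every other state:

```python
import numpy as np

def add_leakage(A, eps=0.05):
    """One possible interpretation of 'leakage': mix the transition matrix
    with a uniform distribution so every state has a small chance of
    jumping to any other state. eps is illustrative; rows still sum to 1."""
    n = A.shape[0]
    return (1.0 - eps) * A + eps * np.ones((n, n)) / n
```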
This has some important implications for the biology of intelligence. If our brain has some leak problems, or some unintentional side effects of an implementation built out of proteins, then the genetic algorithm (in biology, evolution) will pick out parameters that offset those consequences, and potentially even turn them to its advantage. So when looking at the brain, there's the mathematically beautiful thing it is trying to do (some sort of hierarchical learning) and then there's what it actually does (hierarchical learning with some tweaks). The tweaks could in many ways help the system, but they would be reflected in a potentially counterintuitive selection of parameters.
Another thing he mentioned was the overfitting problem. He said that adding noise to the inputs actually aided learning, because it prevented the model from overfitting to the specific examples it was given.
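A minimal sketch of that trick, assuming a generic training loop that consumes batches (the noise level is made up):

```python
import numpy as np

def noisy_batches(X, y, noise_std=0.1, rng=np.random.default_rng(0)):
    """Yield shuffled copies of the training data with small Gaussian noise
    added to the inputs, so the model never sees exactly the same example
    twice -- one common way input noise acts as a regularizer."""
    while True:
        i = rng.permutation(len(X))
        yield X[i] + rng.normal(scale=noise_std, size=X.shape), y[i]
```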
The ultimate conclusion of the chapter is to build hierarchical pattern recognizers. He says there are multiple types of learning algorithms that could do this, but he prefers HHMMs because he is most familiar with them and they are well characterized. There are other options, though. Regardless of the choice, there will always be "God parameters" that need to be optimized via a GA.
He briefly mentions some other components that would go into the brain - a system that checks for inconsistencies, a system that looks for new problems, and a goal system (e.g. pleasure and pain signals from the old brain). He also describes some limitations of the biological cortex that would not apply to a digital cortex - like how many things you can keep in memory, or the number of active lists you can operate on.
So HHMMs seem like an interesting idea. I don't think it's the full picture of the neocortex, though. What the make-up of each pattern recognizer is will be important. HHMMs may be worth studying just to understand how they work, and they may give us some insight into how to handle the temporal side of cortex. And he still doesn't really say anything about the top of the hierarchy. He mentions that we would want the cortex to build as many levels as it wants/needs, but how to make an arbitrary hierarchy that can change is a problem in itself. It seems like there must be some point where the hierarchy goes back down (like the top is PFC, and this feeds back down to a language area, which allows you to think and build arbitrary hierarchies).