Dileep is now starting his talk on Understanding the Neocortex to Accelerate our Understanding of Intelligence. He is one of the founders of Numenta, along with Jeff Hawkins. He opened by exploring the traditional
thinking behind AI. Ignoring biology was commonplace, even in
neural network research. In the 1990s things began to change.
Now there is a groundswell of research into biologically accurate
systems. Hierarchical Temporal Memory is one of these research
areas, with a focus on the neocortex. He explained that:
if I opened the top of your skull, I would see your
neocortex. If I pulled it out, it would really be a crumpled
sheet, about 1 mm thick, and you could spread it on the table. It
would look like a big thin tortilla. All of your memories from childhood on would be stored in that tortilla.
Ok … so I have a tortilla in my head! 🙂
Supporting the talk from yesterday, there is a mainstream belief that
the entire sheet of the neocortex is based on the same replicated base
pattern: a basic neural module. So what does it do?
- the neocortex is a memory system (hierarchical, stores sequences)
- through exposure, it creates a model of the world (discovers the causes of sensory data and how they behave)
- it recognizes inputs and predicts the future (by analogy to past events)
- behavior is a by-product of prediction (behavior and prediction are the same)
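The "stores sequences, recognizes inputs, predicts the future" idea can be sketched as a toy sequence memory. This is purely illustrative (the class and method names are mine, not Numenta's): it records transitions between observed patterns and predicts the most frequently seen successor of the current input.

```python
# Toy sequence memory: stores sequences of hashable patterns and
# predicts the next element by analogy to past observations.
# Names here are illustrative, not Numenta's actual API.

class SequenceMemory:
    def __init__(self):
        self.transitions = {}  # pattern -> {successor: count}

    def learn(self, sequence):
        """Store a sequence by counting each observed transition."""
        for current, nxt in zip(sequence, sequence[1:]):
            counts = self.transitions.setdefault(current, {})
            counts[nxt] = counts.get(nxt, 0) + 1

    def predict(self, pattern):
        """Predict the most frequently seen successor of `pattern`."""
        counts = self.transitions.get(pattern)
        if not counts:
            return None
        return max(counts, key=counts.get)

memory = SequenceMemory()
memory.learn(["A", "B", "C"])
memory.learn(["A", "B", "D"])
memory.learn(["A", "B", "C"])
print(memory.predict("B"))  # "C": seen twice after "B", versus "D" once
```

Obviously the real cortical model involves far more (sparse distributed representations, noise tolerance, hierarchy), but the recognize-then-predict loop is the core idea.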
Reptilian brains do not have a neocortex; it was mammalian brains
that gained one. Initially it sat only on the sensory side …
in humans it went even further and took control of motor
skills. In addition, it is hierarchically organized. The
hierarchy implements a series of feedback loops … each level stores
sequences of patterns. It passes a recognized pattern "up" by
name, and also predicts the next element. This prediction is then
passed "down" towards the senses to provide a reinforcing feedback loop.
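The up-by-name / predict-downward structure can be sketched with two toy levels, where the upper level treats the lower level's sequence names as its own patterns (a "sequence of sequences"). Everything here is an illustrative assumption, not how Numenta's software is built:

```python
# Minimal sketch of a two-level hierarchy: each level names a stored
# sequence it recognizes, passes the name up, and reports its
# prediction of the next element (which would flow back down).

class Level:
    def __init__(self, sequences):
        # sequences: name -> list of patterns this level has stored
        self.sequences = sequences

    def recognize(self, window):
        """Return (name, prediction) for the first stored sequence
        that begins with `window`, or (None, None) if no match."""
        n = len(window)
        for name, seq in self.sequences.items():
            if seq[:n] == window and len(seq) > n:
                return name, seq[n]
        return None, None

# Lower level names letter sequences; the upper level names
# sequences of those names.
lower = Level({"greeting": ["h", "i"], "farewell": ["b", "y", "e"]})
upper = Level({"conversation": ["greeting", "farewell"]})

name, next_letter = lower.recognize(["h"])       # recognizes "greeting"
upper_name, next_name = upper.recognize([name])  # recognizes "conversation"
print(name, next_letter, upper_name, next_name)
```

The upper level's prediction ("farewell" comes next) is what would be passed back down to bias the lower level's recognition — the reinforcing feedback loop described above.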
Numenta is well on the way to creating artificial systems that
provide the same sort of trainable memory … amazing.
His demonstration showed a series of trained images – very low
resolution for now – and then he would draw on another screen and allow
the software to predict/select which image he had drawn. He
showed how the recognition was very resistant to noise, and able to
easily distinguish between similar images. It was crude … but
very impressive. He expects to see commercial solutions within 3 to 4 years.