Ye Olde Natural Philosophy Discussion Group

Reviews and comments on
Jeff Hawkins:
On Intelligence [2004]




      Everybody in our group really liked this book, and though we had some reservations about some of Hawkins’ positions, we all gave it high ratings. Our group average (on a scale of 0 to 10) was an impressive 8.2!

      Thomas appreciated the fact that Hawkins was willing to go out on a limb, and present an eye-opening theory about human intelligence—even if it is of a speculative nature. He liked the “logical and engineering-oriented” nature of the theory Hawkins presented. He also liked Hawkins’ criticism of traditional AI and even neural network approaches for pretty much ignoring biology and actual brain neurophysiology. However, he felt that Hawkins didn’t really have sound enough reasons for completely rejecting the AI and neural network approaches. Still, he felt inclined to believe Hawkins more than not believe him. Overall, he found the book so interesting that he “couldn’t put it down”!

      Ron, like everyone, also really liked the book. But he felt its biggest flaw was that Hawkins didn’t really come up with an adequate definition of what intelligence is—which given the title he certainly should have done!

      Scott, while he is impressed with the book, emphasizes that at this point Hawkins’ theory has to be viewed as a set of so-far unproven hypotheses. While Hawkins condemns traditional AI and neural network approaches to artificial intelligence as having produced little in the way of actual results, the fact is that his own memory-prediction theory of intelligence hasn’t actually produced any concrete results yet either!

      John also liked the book a lot, despite the fact that he also viewed it as total speculation. “Virtually none of it is actually proven yet”, he commented. But John is convinced that Hawkins’ approach is an important avenue of research. And he really enjoyed the whole thing.

      Rich commented that “for our group this is a great book!” and a “fascinating book”. But he added that Hawkins seems to be in large part a salesman here, pushing a theory but providing no real scientific proof. “He’s thinking about it, he’s inching forward, but I’m not buying it all myself yet.”

      Rosie agreed with a lot of the earlier comments, and felt that Hawkins “too quickly dissed all of AI and neural network theory”. She added that the “big jump”, the big breakthrough, in this area of science has not come yet. She felt that a lot of what Hawkins said seemed rather self-evident. Rosie had read the book quite a long while ago and was therefore somewhat hazy about it, but she remembered especially that the book was very readable.

      Kirby liked the book most of all, and gave it our highest rating of 10. He said that since reading the book he has been constantly reminded of Hawkins’ emphasis on brain patterns while he (Kirby) went about his own daily pursuits. He also was impressed with Hawkins’ inclusion of testable predictions at the end of the book. However, Kirby agreed with Ron that Hawkins doesn’t adequately explain what intelligence actually is. While there are some flaws in the book, Kirby said he thinks it is especially good and important because of its emphasis on patterns in the brain and the concept of feedback.

      Barbara found the book fascinating, and really a good book. She said it reinforced her view that computers are not intelligent, and that machines cannot have any emotions.

      Scott, however, claimed that programming emotions into machines (computers) is actually a pretty simple thing (at least compared with programming general intelligence). Emotions are simply different internal states which lead the animal (or machine!) to behave systematically differently than it would in a different state. Setting an emotion in a computer program is therefore simply a matter of setting a “flag” (or set of flags, probably including a numerical value to show the intensity of the particular emotion at that moment). Moreover, as Antonio Damasio has long argued, real intelligence (of a human sort at least) requires the incorporation of emotions (emotional states). If Commander Data on Star Trek really had no emotions, then why would he care whether the starship “Enterprise” survived a Borg attack or not?! (I.e., Data could not act in what we would consider to be an intelligent fashion in that, or any other, circumstance if he really did not have any emotions.)
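      Scott's flag-based notion of emotion can be sketched in a few lines of code. This is only a toy illustration of the idea as he stated it; all class and method names here are invented for the example and come from nowhere in the book.

```python
# Minimal sketch of "emotion as internal state flag" (hypothetical names).
# Each emotion is a named flag carrying an intensity value, and the
# agent's chosen behavior differs systematically depending on that state.

class Agent:
    def __init__(self):
        # Intensities range from 0.0 (absent) to 1.0 (maximal).
        self.emotions = {"fear": 0.0, "curiosity": 0.0}

    def set_emotion(self, name, intensity):
        # Clamp the intensity into the valid range.
        self.emotions[name] = max(0.0, min(1.0, intensity))

    def choose_action(self):
        # Same inputs, different internal state -> different behavior.
        if self.emotions["fear"] > 0.5:
            return "flee"
        if self.emotions["curiosity"] > 0.5:
            return "explore"
        return "idle"

agent = Agent()
agent.set_emotion("fear", 0.9)
print(agent.choose_action())  # prints "flee"
```

      Whether such a flag deserves the name "emotion" is of course exactly the point Barbara and Scott disagreed about; the sketch only shows that the mechanical part is easy.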

      Scott also said that Hawkins’ dismissal of “computers” as being totally incapable of real intelligence actually only showed that he was using the word ‘computer’ in a bizarrely narrow way (to encompass only present-day computers). After all, Hawkins himself is hoping to build some sort of artificial intelligence in some sort of computer based on the theory of intelligence he puts forth in this book.

      Early in the book (p. 65ff.) Hawkins attacks the popular analogy among AI people of brain/mind to hardware/software. Since Scott wrote a whole essay based on this analogy, he feels he needs to also respond specifically to that point. Of course the analogy only works if you have a broad enough conception of computers and software, one that goes well beyond our current computing systems. Moreover, the argument is not that each transistor in a computer corresponds to a neuron in a brain (as Hawkins seems to suggest on p. 66). It is actually more correct to view each neuron itself as a very limited digital computer, and the brain as a system of many billions of such computers, organized in a complex way.

      Kevin thought that Hawkins presented a reasonable critique of neural network and AI techniques as probable dead ends. The issue came up of whether existing AI/neural techniques have developed means to learn things. Scott mentioned genetic algorithms in this connection (which inherently include a goal-seeking learning mechanism), and also pointed to a recently developed AI program capable of learning from its human trainers how to rate the facial beauty of women. However, Kevin (and Hawkins) may be correct if they are saying only that no general purpose learning algorithm seems to have yet been developed (of the sort that a human baby has built in). Kevin also thought that chapter 8, on the future of intelligence, was the weakest in the book. But he still rated the book as a whole at “9.5” (which by rule got lowered to 9!).
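      The "goal-seeking learning mechanism" Scott attributed to genetic algorithms can be shown with a standard toy problem (often called "OneMax"): evolve a random bit string toward all 1s purely by selecting for fitness. This example is a generic illustration, not anything from the book or the discussion.

```python
import random

# Toy genetic algorithm for the standard "OneMax" problem: evolve a
# population of bit strings toward all 1s. The fitness function is the
# "goal" the algorithm seeks without being told how to reach it.

random.seed(0)
LENGTH, POP, GENERATIONS = 20, 30, 60

def fitness(bits):
    return sum(bits)  # goal: maximize the number of 1-bits

population = [[random.randint(0, 1) for _ in range(LENGTH)]
              for _ in range(POP)]

for _ in range(GENERATIONS):
    # Selection: keep the fitter half as parents (simple elitism).
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]
    # Crossover + mutation to refill the population.
    children = []
    while len(parents) + len(children) < POP:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, LENGTH)
        child = a[:cut] + b[cut:]       # one-point crossover
        child[random.randrange(LENGTH)] ^= 1  # flip one bit (mutation)
        children.append(child)
    population = parents + children

best = max(population, key=fitness)
print(fitness(best))  # close to LENGTH after selection pressure
```

      Nothing in the loop "knows" what a good bit string looks like; improvement emerges from selection alone, which is the sense in which the learning is built in.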

      While some people, especially Scott, had a great many specific criticisms of this book, we all felt that overall it is an enormously stimulating and very important and worthwhile book to read!




Return to our complete list of books.

Return to our Science Group home page.