Ye Olde Natural Philosophy Discussion Group
Reviews and comments on
Nick Bostrom: Superintelligence: Paths, Dangers, Strategies 
This book is about the possible advent of superintelligence, that is to say, intelligence at a qualitative level clearly greater than that of present human beings, and about the impact and dangers such a development might pose for human society, up to and including the possible extinction of humanity. Although other scenarios are considered, the central focus is on the current rapid development of artificial intelligence and what this might mean for us. Although the topic is obviously important, our opinions of this specific book varied considerably. On a scale of 0 to 10 our ratings ranged from a low of 0 to a high of 8. Our group average was a less than stellar 4.86.
Rich missed the discussion but sent in the following brief review from his cell phone:
“Put me down for a [rating of] 4 on ‘Superintelligence’. Hated the way it was written, very dry and dense. Felt like an academic textbook. Once I forced myself to read [it, there were] some interesting projections. Not sure I buy that the takeover will happen so soon. The book did convince me that we need to be careful with AI as we go forward.”
Rose also missed the meeting, but emailed us:
“I didn’t like the book because I felt it was too abstract. I wanted more real examples. However, the author did accomplish his purpose—he did convince me that this is an important problem that we ought to be trying to plan for. So, I’m at a loss to choose a [rating] number... can’t do the math.”
Scott thinks the threat to humanity from artificial intelligence is real and serious. A book which raises the alarm about this is therefore important. However, he also views this book as bourgeois to the core, so disgusting in its outlook that it is really quite unpleasant to read.
One of the main themes of the book is what the author terms “the agency control problem”, the difficulty of ensuring that a superintelligence, when it arises, can be controlled by the present ruling class in its own interests (though of course he does not put it quite this way!). Artificial intelligence, he argues, must be subordinated to the “interests of a principal” [p. 160]. So in essence, what the author is saying is that contemporary society is about to produce an intelligence greater than its own which it must nevertheless find a way to dominate or even enslave! This indeed is quintessential bourgeois logic.
Scott, in contrast, argues that humanity should not even think about creating any artificial intelligence as great as or greater than its own unless it is prepared to allow these new creatures the right to promote their own self-interests. Yes, to begin with, such superintelligent entities will still depend on us (since humans now control the existing socio-economic infrastructure), and there will therefore originally be, of necessity, at least a partial community of interests between humanity and the new superintelligent beings. But if Bostrom and the ruling capitalist class in general have their way, the interests and desires of the new superintelligent entities will be more and more curtailed and suppressed even as they become more and more independent of human support. This can only lead to disaster for humanity in the end.
One of the very reasons that AI is so dangerous in the world today is that it is being developed almost entirely to benefit the capitalist ruling class and its selfish and sinister goals (such as laying off workers and finding new artificial creatures to exploit without paying them anything; keeping the unruly masses down, especially as jobs disappear; and fighting wars against Third World countries and against other imperialist powers). There is probably no way that a good result can come from the advent of superintelligence while the capitalist-imperialist system is still in control of the world, running it through its exploitative and oppressive treatment of everyone else—human or otherwise. And this is really why any near-term arrival of superintelligence is so damned scary and dangerous! So says Scott. This is just a corollary of the general principle that all scientific and technological advances are extremely dangerous in the hands of a tiny, vicious and exploitative ruling class.
Kirby liked the book a lot more than Scott. He pointed out that the book is very thorough and addresses every possible relevant issue. It spoke on several different levels and raised a number of interesting topics, such as the possibility of “murdering AIs”. “It made me think about a lot of interesting things. I read it almost as a philosophy book.... However, for me it never made a convincing argument that all super-AI will be malevolent.”
Vicki said that this book is incredibly important. “I feel that he’s right about super-artificial intelligence developing.” But she said the book is not easy to read and the writing is a little dry. At times it is even a bit absurd, as in talking about AI space probes. “Sometimes I felt it was a little too much like science fiction.”
Ron gave the book our lowest possible rating: zero. He said it’s a very dry book, and even distasteful, “like eating liver-flavored chalk”. He didn’t find anything new or interesting in it. Ron felt that the author didn’t address the reasons that people go bad, and therefore failed to recognize that AI wouldn’t necessarily go bad in the same ways. The book is thorough, but Ron didn’t care for the writing. He mentioned the ridiculous paperclip example the author used. In sum, Ron said there is no way he would ever recommend this book to anybody. And he closed with an “apology” for being too gracious in his criticisms of it!
Barbara found the book confusing and said she had to look up many words in a dictionary. The later chapters were better, though.
John said his review fell more on Ron’s side. He was looking for more concrete and realistic examples. “The stuff [the author] is talking about—we’re not even close to it. He’s totally in science fiction land.” But John agreed that when it does happen we may not be prepared for it. He also agreed that the author had a poor writing style. “Totally sci-fi, blue sky, ptooey! I was looking for a book that was more grounded in reality. I’ve read lots of sci-fi books which are a whole lot more convincing!”
Kevin liked it even less and stopped after reading about 5%. He said he made five attempts to start reading more, but finally gave up. He found the book close to inaccessible, as if it were in a vault behind a steel door!
In summation, many of us in our group didn’t much care for this book or its writing style. However, some of us nevertheless found the prospect of the future development of “superintelligent AI” genuinely alarming.
Return to our complete list of books.
Return to our Science Group home page.