I have decided to read a book each week in 2016 as part of my New Year's resolution. Since it is winter break, I have already started on the first book: “Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots” by John Markoff. I am about halfway through, but I want to share my initial impressions.
There is a lot to like in this book. Perhaps the most interesting aspect thus far has been the breadth and depth with which Markoff covers the history of artificial intelligence (AI). The book covers everything from the dawn of the computer age to the use of deep learning to caption images. As a history of AI, it is interesting and accessible.
Where the book seems to falter is in its advocacy. Time and again, Markoff stresses a dichotomy between AI and intelligence augmentation (IA). His intent is clearly that scientists should interest themselves in IA rather than AI. He states that engineers can design humans into or out of the systems they create, and that the correct answer is clear, since IA has been more successful than AI in the past. As evidence, he cites the fact that a machine working in conjunction with a human can defeat either a human or a machine alone.
What Markoff seems to miss, and perhaps this is covered later in the book, is that IA has been more successful than AI in the past because it is inherently easier. An engineer working in IA can simply require a human to do the work they are unable to code into their system. This could be seen in early search engines: unable to understand natural language, they required humans to list keywords and use logical operators to find the correct information. Years later, Google Now provides information without the need for human input. Google reports the following about its Self-Driving Car Project:
“In the six years of our project, we’ve been involved in 17 minor accidents during more than 2 million miles of autonomous and manual driving combined. Not once was the self-driving car the cause of the accident.”
While there are social ramifications to replacing human labor with machines, that is not a reason to include humans unnecessarily, perhaps to the detriment of all involved. As the field of artificial intelligence advances, it will become more plausible to build AI systems that do not need humans at all.
One additional area of the book I take issue with is how Markoff treats the idea of a technological singularity. He is dismissive of the idea and misrepresents it, claiming it is a religion and that anyone who believes in the singularity simply wants to live forever. In this section he barely mentions Ray Kurzweil, perhaps the greatest proponent of the singularity through his book “The Singularity Is Near: When Humans Transcend Biology,” instead focusing on Kurzweil's contributions to the Google Brain project, which came many years later. It boggles the mind.