Why Tech Giants Are Not Pursuing "Intelligent" AI – interview with Dr. Roger Schank
Daniel Faggella
2015-09-08 00:00:00

The first boat navigates by attempting to pre-program the thousands (if not exponentially more) of possible scenarios and related actions that a machine might encounter when playing a game of chess. The second boat treads a very different path, attempting to find its way by building a machine that can actually learn and imitate a master in real time. "I always saw AI as being a field that could tell us more about people than anything else by getting us to figure out how to imitate people by doing the kinds of things that people do," says Schank.

There are more lines of thought in the development of AI than these two, but historically, much of the scientific community has hopped into the first boat. Unfortunately, the outlook for pre-programming a complete and intelligent entity does not look promising – there are simply too many variables to account for. Investing in and studying the human mind will certainly help provide an essential framework, but even then, a truly intelligent entity seems to need room and space to interact and to learn from trial and error.

Schank thinks this type of build-and-let-learn AI architecture is possible. If so, what is keeping tech giants like Google from investing in and developing a more authentic model of artificial intelligence? The answer, according to Schank, is simple – those companies have not found the second approach to be economically viable; they are more interested in better advertising methods or self-driving vehicles that produce a profit.

Schank is quick to point out that he is not undermining these important contributions to the field of AI – it is just that these successes are a different breed of advanced technology, one that does not reflect genuine intelligence as defined by the capacity to learn.

Computers do not have their memory altered by experience. Facebook recently released "M", its virtual assistant, but the AI relies on humans to fill in the learning gaps; this may be a primitive glimpse of a form of AI that can learn through experience, though how much distance remains before complete AI autonomy is anyone's guess.

The machines Schank envisions would go far beyond today's now-common virtual assistants. At the end of the day, Watson and Siri only fool people into thinking that they are "smart"; they are certainly not capable of holding a conversation (admit it, some of us have tried), and they are not going to adapt or get smarter over time.

"AI research in the 1950s and 1960s was always sponsored by the Defense Department", says Schank, "and they wanted us to recognize targets." Big data is an offshoot of this investment in AI, but it is of the variety that goes against the second approach of imitation. Again, he makes the argument that this is not a computer being intelligent. "When Google Search finds something for you, it does not know what it found; it may be very useful…but there's nothing intelligent about it; it's the use of algorithms that work well."

An argument exists that recent ventures into the world of deep machine learning are steps in the right direction. Could we build a more intelligent search engine? "Yes – it's something I worked on for years," remarks Schank. "But Google doesn't work here, because it's very expensive and there's not as much 'bang for the buck'." Again, Schank comes back to the root of the definition: "Intelligence is about getting smarter with every interaction…computers don't do that."

"If you had asked me this question (if we would have a truly artificial intelligent entity) back in the 1980s, I would have said yes, it will exist in my lifetime, but what I didn't know about was the AI winter – AI was so overhyped, that it didn't work, and it lost funding. I can't say it will happen in my lifetime anymore, the funding hasn't been around for the last 30 years."

There is no doubt that oceans of money are being poured into AI, but Schank insists that's not the relevant question. What we should be asking about is the end product: is there a machine that society should invest in that would ultimately be useful on a higher level? "Does it have a point of view that's new? Can it engage in an interesting argument with you? Does it have something that it wants to teach you and you can teach it, and after you talk to it a while it will now be smarter?" These are the types of questions that Schank believes we should be asking if we are to develop a more authentic form of AI.

Image #1: Dr. Roger Schank