Making Dogs Smarter Than Humans
Mike Treder
2009-08-11



When something smarter than a human is finally created, whether an enhanced human brain or an artificial intelligence, that smarter thing, being so much smarter than us, will proceed to become even smarter still, since it presumably will have the ability to improve its own thinking and increase its intelligence. This is especially assumed to be so if the first better-than-human brain is contained in an artificial intelligence, because that entity could rewrite its own programming code, making it work faster, more efficiently, and more creatively than anything we puny humans could ever design.

This concept is the underpinning of the Singularity. When the smarter thing is able to make itself way smarter, it soon -- perhaps almost instantly -- will proceed to take control of systems around it, upgrading them as well, using its rapidly and recursively improving brain to solve problems in ways that humans never could. Before long, global warming will be a thing of the past, poverty and disease will be conquered, abundant energy will be sustainable and free, space will be opened, and wars will end forever. ’Tis a consummation devoutly to be wished.

The rub, of course, is that this brainy new intelligence might not necessarily be inclined to work in favor of and in service to humanity. What if it turns out to be selfish, apathetic, despotic, or even psychotic? Our only hope, according to “friendly AI” enthusiasts, is to program the first artificial general intelligence with built-in goals that constrain it toward ends that we would find desirable. As the Singularity Institute puts it, the aim is “to ensure the AI absorbs our virtues, corrects any inadvertently absorbed faults, and goes on to develop along much the same path as a recursively self-improving human altruist.”

So, what we want is a very, very smart friend who will always be trustworthy, loyal, and obedient.

(Could obedience be too much to hope for, though, since the thing will be not only more intelligent but also much more powerful than us? When this question is put to the friendly singularitarian, the answer is usually something like this: because we’ve seeded the AI with our virtues, we’ll have to trust that whatever it does will be to our benefit -- or at least will be the right thing to do -- even if we can’t comprehend it. It is along the same lines as being told that God works in mysterious ways, and that His ways are not for us to understand.)

We know, of course, that not all humans are good. In fact, none of us are all good. It might be a risky proposition, therefore, if the first superintelligence turns out to be a vastly improved human brain. What certainty could we have that this particular human would not have a screw loose somewhere and would not use his or her newly acquired powers for nefarious purposes?

Trusting artificial intelligence might be dangerous, unless we know beyond all doubt that we can program the AI to be friendly, and to stay friendly, toward us. Trusting any given human with superintelligence might also be fraught with danger. What, then, is the best course?



Using tests adapted from those designed for human children, psychologists have learned that the average dog can count, reason and recognize words and gestures on par with a human 2-year-old.

"They may not be Einsteins, but are sure closer to humans than we thought," said Stanley Coren, a professor emeritus at the University of British Columbia and leading researcher on dog behavior.


Dogs may be much smarter than we thought.



The average dog has a vocabulary of about 165 words. The smartest canines understand up to about 250 words and are able to figure out new ones on their own.

"That kind of fast language learning we thought was only possible among humans and some of the higher apes."

But more than that, tests suggest that dogs and apes both have some of the same basic emotions -- fear, anger, disgust and pleasure -- that toddlers experience, said Coren, while both animal groups lack some of the more complex, learned emotions such as guilt.


Dogs are smart not only in reasoning and problem-solving; they also have emotional intelligence remarkably similar to that of humans.

Dogs may even understand fairness.

In one experiment, a researcher trained two dogs to shake a paw. After both had learned the trick, the researcher started giving a treat to one dog every time he got it right, but gave nothing to the other.

Not only did the unpaid dog stop performing, he wouldn't even look the researcher in the eye. "He doesn't want any part of you. He doesn't think this is fair."


Man’s best friend already is a lot smarter than we previously recognized. We also know, through long and rewarding experience, that dogs are unfailingly -- inhumanly -- loyal, trustworthy, and obedient. There is a reason for that, of course: we bred them to be this way, over many thousands of years. We have patiently selected them for friendliness toward humans, eugenically guided them to be emotionally compatible with us and supremely dedicated to pleasing us.

It might just be, then, that this is the ideal repository for the first greater-than-human intelligence. If we’re going to instantiate superintelligence in some substrate, why not make it one that we already know is devoted to us? Why worry about designing a “friendly AI” when the friendliest friend we could ever imagine is sitting right at our feet, just waiting to serve?



That’s the answer. Forget about making a computer, a robot, or an enhanced human into the Singularity savant. Just use Fido, and all our problems will be solved.