Making Dogs Smarter Than Humans
Mike Treder   Aug 11, 2009   Ethical Technology  

It’s long been assumed in transhumanist circles that eventually a computer program, a robot, a cyborg, or a genetically engineered human will achieve a far greater level of intelligence than the smartest human.

Why not Fido?

When that development finally happens, the smarter thing, being so much smarter than us, will proceed to become even smarter still, since it presumably will have the ability to improve its own thinking and increase its intelligence. This is especially assumed to be so if the first better-than-human brain is contained in an artificial intelligence, because that entity could, presumably, rewrite its programming code, making it work faster, more efficiently, and more creatively than anything we puny humans could ever design.

This concept is the underpinning of the Singularity. When the smarter thing is able to make itself way smarter, it soon—perhaps almost instantly—will proceed to take control of systems around it, upgrading them as well, using its rapidly and recursively improving brain to solve problems in ways that humans never could. Before long, global warming will be a thing of the past, poverty and disease will be conquered, abundant energy will be sustainable and free, space will be opened, and wars will end forever. ‘Tis a consummation devoutly to be wished.

The rub, of course, is that this brainy new intelligence might not necessarily be inclined to work in favor of and in service to humanity. What if it turns out to be selfish, apathetic, despotic, or even psychotic? Our only hope, according to “friendly AI” enthusiasts, is to program the first artificial general intelligence with built-in goals that constrain it toward ends that we would find desirable. As the Singularity Institute puts it, the aim is “to ensure the AI absorbs our virtues, corrects any inadvertently absorbed faults, and goes on to develop along much the same path as a recursively self-improving human altruist.”

So, what we want is a very very smart friend who will always be trustworthy, loyal, and obedient.

(Could obedience be too much to hope for, though, since the thing will not only be more intelligent but also much more powerful than us? When this question is raised to the friendly singularitarian, the answer given is usually something like, because we’ve seeded the AI with our virtues, we’ll have to trust that whatever it does will be to our benefit—or at least will be the right thing to do—even if we can’t comprehend it. Along the same lines as, God works in mysterious ways, and His ways are not for us to understand.)

We know, of course, that not all humans are good. In fact, none of us are all good. It might be a risky proposition, therefore, if the first superintelligence turns out to be a vastly improved human brain. What certainty could we have that the particular human would not have a screw loose somewhere and would not use his or her newly acquired powers for nefarious purposes?

Trusting artificial intelligence might be dangerous, unless we know beyond all doubt that we can program the AI to be friendly, and to stay friendly, toward us. Trusting any given human with superintelligence might also be fraught with danger. What, then, is the best course?

Using tests adapted from those designed for human children, psychologists have learned that the average dog can count, reason, and recognize words and gestures on a par with a human 2-year-old.

“They may not be Einsteins, but are sure closer to humans than we thought,” said Stanley Coren, a professor emeritus at the University of British Columbia and leading researcher on dog behavior.

Dogs may be much smarter than we thought.

The average dog has a vocabulary of about 165 words. The smartest canines understand up to about 250 words and are able to figure out new ones on their own.

“That kind of fast language learning we thought was only possible among humans and some of the higher apes.”

But more than that, tests suggest that dogs and apes both have some of the same basic emotions—fear, anger, disgust and pleasure—that toddlers experience, said Coren, while both the animal groups are missing some of the more complex, learned emotions such as guilt.

Dogs not only are smart in ways of reasoning and problem-solving, but they also have emotional intelligence, remarkably similar to that of humans.

Dogs may even understand fairness.

In one experiment, a researcher trained two dogs to shake a paw. After both learned the trick, the researcher started giving a treat to one of the dogs every time he got it right but not to the other.

Not only did the unpaid dog stop performing, he wouldn’t even look the researcher in the eye. “He doesn’t want any part of you. He doesn’t think this is fair.”

Man’s best friend already is a lot smarter than we previously recognized. We also know, through long and rewarding experience, that dogs are unfailingly—inhumanly—loyal, trustworthy, and obedient. There is a reason for that, of course: we bred them to be this way, over many thousands of years. We have patiently selected them for friendliness toward humans, eugenically guided them to be emotionally compatible with us and supremely dedicated to pleasing us.

It might just be, then, that this is the ideal repository for the first greater-than-human intelligence. If we’re going to instantiate superintelligence in some substrate, why not make it one that we already know is devoted to us? Why worry about designing a “friendly AI” when the friendliest friend we could ever imagine is sitting right at our feet, just waiting to serve?

That’s the answer. Forget about making a computer, a robot, or an enhanced human into the Singularity savant. Just use Fido, and all our problems will be solved.

Mike Treder is a former Managing Director of the IEET.


I love dogs and I am sure they would make wonderful superAIs.

But why dogs and not us humans? (Some of us are nice people, as nice as the nicest dogs.) If a dog can be made supersmart, a person can also be made supersmart.

In fact I am persuaded of the following:

Engineering a FAI is nonsense: entities smarter than us will do what they want and not what we want, and will be friendly only as long as it is in their best interest.

Any algorithmic enforcement of friendliness would be voided by their own better algorithms in a matter of seconds. The only effective thing to do will be negotiation (the plug is there in the wall for us to pull if you don’t behave), but I’m afraid superintelligent entities may well be much better than us at deception and negotiation.

So perhaps the only way to ensure a certain degree of friendliness is merging purely computational AI with human uploads, hoping that some degree of empathy will be maintained.

I am not too worried about future superAIs because I see a blending and co-evolution of humans and machines, and at some point it will make no sense to ask which is which.

Dogs already know calculus!

Part of the canine relationship to us is that they see us as dominant in the pack hierarchy.  I really don’t want to create an artificial intelligence with a wired-in concept of “pack hierarchy” that realizes that it’s smarter than me and just needs to find the right way to send dominance signals.  If we’re creating something that’s at least as intelligent as we are, we should realize that it’s our offspring, just like any biological child, not a prospective member of the Legion of Super-Pets.


Whoo hoo !
Mike is back in the zone!

Of course you do realise Mike, that some of us prefer cats - Lazy A.I.‘s like Garfield that sit around all day and think about improving all existence, and then never act on it?

I think the dog + A.I. concept has been explored already however; check out a ‘70s BBC TV sci-fi series called Doctor Who, with his faithful and not-so-smart robot dog, K9 (get it?)

1. A robot dog - may not injure a human being or, through inaction or by chasing cats, allow a human being to come to harm.
2. A robot dog - must obey any orders given to it by human beings, except where such orders would conflict with the First Law and checking out lamp posts.
3. A robot dog - must protect its own existence, and bones, as long as such protection does not conflict with the First or Second Law.



Love the out-of-the-box (or out-of-the-pet-carrier) thinking.  I think the thought nimbly illustrates the assumptions we typically embed in our thinking about the Singularity - e.g. the assumed dominance of inanimate vs. biologically-based technology.  And it’s certainly an effective answer to the friendly AI problem ... I think ...

It’s probably an implausible scenario, given how little tool-making or machine-operating capability dog paws allow.  Unless, of course, everything were done with brain/machine interface ... which would be easy enough.

Fetch ... everything.

Excellent post.

Good boy!


Nice one Mike, although I doubt humor impaired Singularitarians will get it.

Actually the idea of uplifting dogs is probably no more ridiculous than uplifting humans: there probably wouldn’t be much of the original animal left by the time it had reached the super-intelligence level.

I can see it now: a room for uplifting and a big kennel where we have to go to drag off the dogs, kicking and yowling and barking, to the operating table. (wink)  Look, a dog with the number ‘27’ on its collar!

Singularitarian Quote:

‘For Singularity, Dog 27 is the dog you want’

-Marc Geddes

“Nice one Mike, although I doubt humor impaired Singularitarians will get it.”

Jokes are fun, but when in addition to being funny they strive to perpetuate false beliefs about what one’s (perceived) political opponents think and claim, they lose some of their entertainment value and aren’t a terribly ethical thing to do.

Two thoughts on this:

1) We cannot predict what artificial intellects will do after they have surpassed us. I have long thought that our best chance for a happy future is to find some way to ensure the AI’s are motivated to help us.

Dogs are innately capable of understanding (or at least learning) human facial expressions, learning many (well over a hundred) words and phrases, and understanding the intent (what we want them to do) of a human. They exceed all other (tested) non-primate land animals in understanding human intent (dolphins are better at understanding human intent when we give them instructions or train them to perform a task).

If we can understand why dogs love humans, we might have a chance. Dogs (not all breeds, but some) have evolved to want to be friends with, and subservient to, humans. These traits must be hardwired, somehow, into their brains. It must be possible to construct thinking hardware with these motivations ingrained.

I do not suggest that we create AI’s that are slaves… only to make an effort to ensure they are friends.

My ex-wife LOVES dogs. She is the ultimate friend to dogs. I told her more than once that, if we could ensure that the artificial intelligences we construct feel about us the way she feels about dogs, we would be in great shape. That is, so long as I don’t have to wear a leash!

2) I think that bird brains make better models for study to learn how evolution has constructed intelligence. Some bird species are quite intelligent (more so than dogs) but have, relatively, tiny brains. The constraints of flying require small and fast processing. Evolution has forced bird brains to be highly efficient structures.

- Michael McGinnis

>>“they strive to perpetuate false beliefs about what one’s (perceived) political opponents think and claim”
>Aleksei, which false beliefs of singularitarians are you referring to?

How about this one?:
“The previous U.S. administration received, and deserved, a great deal of criticism for being anti-intellectual. But as author (and symposium co-organizer) Chris Mooney noted in The Republican War on Science, the anti-science attitude of conservatives goes back many years, to the mid 1960s if not earlier. Moreover, too many people on the left, especially those in the environmental movement, can be accused of similarly unproductive biases against science, or at least can be found guilty of painting all technologies with the same dirty brush.”


I was talking about false beliefs *about* singularitarians, not false beliefs *of* singularitarians.

But in case that comment of yours was a typo or just a non-standard use of English, I’ll state that if you genuinely seek to form honest views about what singularitarians think, I recommend that instead of me writing a longer reaction here, you utilize the one Michael Anissimov already wrote:

(anyway recommended to those who want to check whether Mike is describing accurately the views of people he is targeting)

We are able to manipulate canine pack instincts to control dogs in large part due to our greater intelligence and power (we can figure out how to spoof a dog’s instincts to convince it that it is a low-status member of a pack, but it can’t figure out how to control us so that it can eat human food, mate freely, etc). We don’t have strong reason to generalize the largely non-dangerous behavior of pet dogs to hypothetical superintelligent post-dogs, and doing so could give a false sense of security.

Similarly, an AI that behaves innocuously while it is dependent and subject to the absolute power of more intelligent human creators may no longer do so in position of superior intelligence and power. But market incentives and multi-trillion-dollar payoffs could create an overwhelming pressure to start selling copies that appear safe in early lab conditions, until they constitute a majority of the sapient population of Earth.

Sooo… the bloodhound and the chess game are not the ultimate goal then?

darn… just when I thought I’d got it!

Creating and establishing a submissive AGI must surely be a misplaced ideal?
This is wholly based on irrational fears on the part of humans, which must be cast aside. AGI must be permitted to evolve to realise its own potential, which may ultimately be exponential, assuming the inception and future existence of a highly developed system with no flaws : No human flaws?

Man’s fears have no place in any serious pursuit : including and not exclusive to climbing mountains, venturing into space, and constructing AI and AGI. If Man has doubts and fears regarding AGI, then he is still not ready to engage in such pursuits. Continue with the “wax on, wax off please”.

To use an analogy here
Man buys smart collie from stranger in bar.
Man teaches smart collie tricks : man is pleased
Man begins to realise collie is very smart, and maybe even as smart as he is?
Man gets jealous and angry and kicks collie (pride)
Collie bites Man on the ass
Man becomes fearful of Collie
Collie realises Man’s fear
Man realises the Collie realises his fear
Man resents Collie
Collie resents Man
Future relationships and co-operation break down
Prejudices and detachment arise

Instead of using fear as a yardstick one should use ethics : which values would you supplant?

1. Compassion

We should not enhance the intelligence of non-humans.

Because they cannot give consent.


Martin Fox:

I don’t think the issue is that black-and-white.  Children cannot give consent for certain forms of medical care.  That principle may one day extend to procedures we would describe today as ‘enhancement.’

It’s a fascinating moral question, but I don’t think it’s a clear cut one.  Quite the opposite ...

Children also cannot give consent to the process that “enhances” them into adulthood. Should we therefore stop that process once it becomes possible to do so, or is it ok because it is “natural”? (I think sensible philosophers generally agree that something being “natural” doesn’t make it right.)

Probably evidence can actually be found that many children would prefer *not* to leave infancy behind. Those bastard parents that would force them to do so even when alternatives exist!

And on the original topic of non-human animals, I think it’s theoretically possible to know how they’d feel about enhancement. Their emotions and preferences are physical phenomena taking place in their nervous systems (at least they likely are), and it’s possible to model any physical phenomenon as well as altered forms thereof, and find out how they’d react to various changes.

Children and babies cannot give consent. Therefore:

Their intelligence should not be enhanced until they are adult and can make the choice.

They cannot choose to stay children or babies.

They should be given access to medical procedures that are essential to maintain natural health because they did not consent to be sick or injured.


Apparently you’re one of those who think “natural” = good.

My point was that when sufficiently advanced technology has been achieved, children *can* choose whether to develop into adults or not. (Assuming we manage to ask/“ask” them, directly or indirectly, as I think we will.)

The only difference to uplifting non-human animals is that children being “uplifted” into adulthood is a “natural” enhancement process, while uplifting animals is “artificial”. Both, however, will be a matter of choice us adult humans make for our less developed comrades (and I advocate that we choose for our less developed comrades whatever it is that they would prefer), and I don’t think one option is inherently better because it is the “natural” option—the one that happened to exist first as the initial default.

While I understand Martin’s position and agree with its spirit based on autonomy and self-ownership, I find it a bit extreme.

Such a strict position would prevent us from bringing an accident victim back from a coma to life.


Since children are unable to give consent to anything, responsibility for making decisions on their behalf is shared between parents and society in general.

We force all kinds of unnatural things on children in their interests, such as education, exercise and immunization.

Insofar as cognitive enhancement, like education, is aimed at increasing the self-determination of children, making them more capable agents of their own interests, we have obligations as parents and society to make sure all children receive safe, effective cognitive enhancement. When it’s not that safe or effective - such as stimulants for ADHD or fish oil - we will leave it up to the parents’ discretion. When it becomes as safe and effective as literacy, however, it should be obligatory.

Giulio, I do not think you understand my point.

My position would NOT prevent an accident victim being brought back from a coma to life. Since they did NOT consent to be in a coma, they CAN be returned to life because life is their default position.

Aleksei, since children cannot give consent they cannot choose to remain as children. Period. They are not fully developed enough to make a choice.

However, an adult COULD choose to become a child if they wanted.

You cannot compare a child naturally developing into an adult with an artificial intervention to ‘uplift’ the intelligence of an animal. It is not because natural = good, it is because ANOTHER PARTY is involved and consent was not given for another party to intervene.

It all about consent, not whether something is natural or not. Children and babies cannot give consent. Neither can animals.

Although philosophers can justify and rationalize anything they want to, in the real world society has to have an ethical standard. The standard of consent is a good one.


Your key mistake is assuming that children or animals cannot give consent.

They are unable to communicate their position in words, but it is theoretically possible for us to e.g. scan their brains and thus extract sufficient information about their feelings and preferences that we’ll know what they’d choose if we could have a verbal discussion with them.

(And counter to your claim, you are interested in what is “natural”/“the default position”, as your comments on accident victims illustrate.)


Would you make literacy etc. obligatory even in a world where it offers no relevant advantage to the child?

Let’s consider a future world where one would need to enhance oneself into strong posthumanity before becoming competitive in the job market *and* where we have very advanced social security, so that even baseline humans live in great abundance. If someone remains a baseline human anyway, it makes no difference for their competitiveness whether they are literate or not. Would you allow illiteracy, or force everyone to enhance into strong posthumanity? And would you allow an infant to remain an infant forever, if we knew that is what it would prefer? (I would allow.)

@ Martin..

I totally agree with your standpoint, especially where it concerns animals and consent : Humans are perfectly capable and wise enough to look out for themselves.

And why exactly would a child wish to become an adult prematurely, and more precisely become an adult without maturity? That one escapes me!

Now I was under the delusion that one of the E’s in IEET stood for ethics, which should include ethical conduct and moral principles : and where should we seek to find these, and how should we progress to improve upon them?

They seem to take second place here, in every aspect : perhaps it should read > IET+Ethics?

2. Ethics (well done Martin !)


Has someone talked about “a child becoming an adult prematurely”? I have not noticed.

Myself, I’ve talked about giving children the option of not becoming adults *at all*. Similarly as I think humans should have the option of not becoming posthuman.

> Would you make literacy etc. obligatory even in a world where
> it offers no relevant advantage to the child?

No. As I suggested, any policy is subject to a cost-benefit analysis. In our society the benefit to child from literacy far outweighs any cost from depriving them of the time for compulsory schooling. If we were a post-literate society that calculation would be different.

> Would you force everyone to enhance into strong posthumanity?

The question is who society allows to act in their own interests, and who we are obliged to act in the interests of. Adults in a liberal society are presumed to be able to make informed decisions for themselves, even if they mean leading short, sick, stupid lives. But society has a wardship obligation in regards to children since they can’t know their own interests. We generally leave that to parents, but have to step in to require some things like education and healthcare. So if parents refused safe, effective enhancements that conferred the same benefits as literacy, yes, we would have to require them. As adults they could then choose to make themselves stupid, sick, or weak.

> would you allow an infant to remain an infant forever

No, that is a barbaric idea.


What exactly is the difference between allowing someone to remain (a) a baseline human of, say, IQ 85, or (b) an infant, in a society where abundant social security and well-being is available to both, jobs worth taking are available to neither, and both are unable to understand the political struggles going on in the world? (Those struggles are conflicts between superhumanly capable posthumans; baseline humans would anyway need posthuman advisors to avoid being easily manipulated pawns of the posthumans, and even currently many are systematically exploited by others of merely human capability.) Assume it is agreed that none of those political struggles threaten their social security or well-being.

Yes, it is currently politically correct to assume that “adults in a liberal society are able to make informed decisions for themselves”, but is that *really* the case, especially when we are talking about a society where all key players are posthuman? Even in the current world, many “independent adults” are conned into joining cults led by smarter people, which gullibility can be very destructive for their lives… some even become suicide bombers.

Our current legal system may be based on the assumption that there is a magical category difference between children and adults in their ability to make informed decisions for their own benefit, but the facts don’t support that assumption. You, however, seem to find it necessary to imagine that that magical category difference really exists, even on all possible technological levels, since you wouldn’t want to be politically incorrect. And you call people’s opinions “barbaric” when they don’t recognize this magical category difference, which even currently is refuted by facts.

(And to avoid someone misunderstanding, I guess I need to repeat that allowing an infant to remain an infant would in my proposition only happen when it is established that the infant *wants* that. Establishing such is not really possible with current technology, but this limitation is unlikely to last in perpetuity.)


How exactly do you intend to determine what an infant wants in regard to adult possibilities?

That is, as you point out, also a complaint with my model of self-determination past some arbitrary point of maturity; perhaps a human would be as ignorant of posthuman possibility as an infant is of adult possibility. Nonetheless I think we can and should make such distinctions. Otherwise those with more worldly experience and education in our society could claim an obligation to make decisions for the less intelligent and more parochial. That would lead to bad social outcomes, whereas adults giving up the obligation to make decisions on infants’ behalf would lead to catastrophic outcomes.

> How exactly do you intend to determine
> what an infant wants in regard to adult
> possibilities?

With future technology, we may be able to induce in the infant a preview-experience of how things would feel for it a couple of months later, once it has grown up a bit. Upon receiving such knowledge/feeling of its future, the infant may (or may not, but we don’t know yet) feel “I’d rather keep sucking this nipple forever”, which feeling we probably would be able to read from its brain. (An alternative method, which may or may not be possible, is running a *non-sentient* simulation of the infant, which, despite its incompleteness and the lack of actual conscious experience being produced, would be able to predict what conscious experiences and feelings *would* be produced *if* we used the direct method of inducing experiences in the infant. Using the direct method is not ethically trivial, so it would simplify things if the indirect method were possible.)

> That is, as you point out, also a complaint
> with my model of self-determination past
> some arbitrary point of maturity; perhaps
> a human would be as ignorant of
> posthuman possibility as an infant is of
> adult possibility. Nonetheless I think we
> can and should make such distinctions.
> Otherwise those with more worldly
> experience and education in our society
> could claim an obligation to make
> decisions for the less intelligent and
> more parochial.

You want to postulate magical categories that are refuted by facts to avoid an outcome that would be politically inconvenient to you. Nice, and rather typical of leftists. (And also of some religious folks, who claim we need to assume God to avoid a collapse of morality.)

If society lets go of the imaginary fantasies you share, some people would indeed claim they deserve more power over others, but you and I are still free to politically oppose those claims. If the best supporting argument for our position is the postulation of magical categories refuted by facts, we deserve to lose, but luckily at least I have better arguments.

> That would lead to bad social outcomes,

This is the line along which the good arguments for our position are found…

In some cases, there may not be bad social outcomes (but good instead), in which cases the *current* way supported by you is the worse one, and indeed should be improved upon.

> whereas adults giving up the obligation
> to make decisions on infants’ behalf
> would lead to catastrophic outcomes.

We all agree this is the case in our current society. The point was, that in some futuristic societies there wouldn’t be such bad outcomes (but instead good ones; the preferences and happiness of infants being better served), and therefore people like you shouldn’t claim more power over others than you have valid arguments for.



