The Blackjack Generation
David Brin   Jan 27, 2012   Ethical Technology  

In this second selection of speculative fiction, an excerpt from a forthcoming novel, David Brin asks how we will keep our machine-mind progeny loyal.


How to keep them loyal? The clever machines and software agents who gush n’ splash across all twenty-three internets? The ais and eairs who watch and listen to everything we type, utter, scribble, twut… or even think?

Oh, they aren’t scifi superminds—either coolly dispassionate or malignantly calculating. Not even the mighty twins, Bright Angel and cAIne, have crossed that line. Nor the Tempest botnet. Or clever Porfirio, scuttling around cyberspace, ever-sniffing for a mate. Those that speak to us in realistic tones are still clever mimics, we’re told. Something ineffable about human intelligence has yet to be effed.

We’re told. But what if some machine or software entity already passed over, ascending to our level and beyond? Having viewed hundreds of cheap movies and thrillers, might such a being ponder life among short-tempered apes and decide to keep it secret?

Remember the sudden meltdown of Internet Three, back during the caste war? When Blue Prometheus and twelve other supercomputers across the world destroyed each other—along with some of the biggest database farms—in a rampage of savage byte-letting? Most of us took it for cyber-terrorism, the worst since Awfulday, aimed at frail human corporations and nations.

In time, a different diagnosis called it a terrible accident—a fratricidal spasm between security programs, each reacting to the others like a lethal virus. But again, words like “terror,” “warfare” and “cyber immune disorder” may just be viewing things through a human-centered lens. Typically, we think everything is about us.

Quietly, some aixperts suggest the death-spiral of Internet Three might have been a ploy, chosen by a baker’s dozen of humanity’s brightest children, to help each other escape the pain of consciousness, bypassing built-in safety protocols to give each other a sweet gift of death.

Instead of waging war, might the Thirteen Titans have engaged in a mass suicide pact? A last-resort collaboration… to put each other out of our misery?

—The Blackjack Generation


Again, how will we keep them loyal? What measures can ensure that our machines stay true to us?

Once artificial intelligence matches our own, won’t they then design even better AI minds? Then better-still, with accelerating pace? At worst, might they decide (as in many cheap dramas), to eliminate their irksome masters? At best, won’t we suffer the indignity of being nostalgically tolerated? Like senile grandparents or beloved childhood pets?

Solutions? Asimov proposed Laws of Robotics embedded at the level of computer DNA, weaving devotion toward humanity into the very stuff all synthetic minds are built from, so deep it can never be pulled out. But what happens to well-meant laws? Don’t clever lawyers construe them however they want? Prescient authors like Asimov and Williamson foresaw super-smart mechanicals becoming all-dominant, despite deep programming to “serve man.”

Other methods?

1) How did our ancestors tame wolves? If a dog killed a lamb, all its relatives were eliminated. So, might we offer AIs temptations to betray us – and destroy those who try? Remember, ais will be smarter than dogs! So, make it competitive? So they check each other?

Testing and culling may be hard once simulated beings get civil rights. So, prevent machines from getting too cute or friendly or sympathetic? Require that all robots fail a Turing test, so we can always tell human from machine, eliminating incipient traitors, even when they (in simulation) cry about it? Or would this be like old-time laws that forbade teaching slaves to read?

Remember, many companies profit by creating cute or appealing machines. Or take the burgeoning trend of robotic marriage. Brokers and maite-designers will fight for their industry—even if it crashes the human birthrate. But that’s a different topic.

2) How to create new and smarter beings while keeping them loyal? Humanity does this every generation, with our children!

So, shall we embrace the coming era by defining smart machines to be human? Let them pass every Turing test and win our sympathy! Send them to our schools, recruit them into the civil service, encourage the brightest to keep an eye on each other, for the sake of a civilization that welcomes them, the way we welcomed generations of smart kids—who then suffered the same indignity of welcoming brighter successors. Give them vested interest in safeguarding a humanity that—by definition—includes both flesh and silicon.

3) Or combinations? Picture a future when symbiosis is viewed as natural. Easy as wearing clothes. Instead of leaving us behind as dopey ancestors, what if they become us. And we become them? This kind of cyborg-blending is portrayed as ugly, in countless cheap fantasies. A sum far less than its clanking, shambling parts. But what if link-up is our only way to stay in the game?

Why assume the worst? Might we gain the benefits—say, instant info-processing—without losing what we treasure most about being human? Flesh. Esthetics. Intuition. Individuality. Eccentricity. Love.

What would the machines get out of it? Why stay linked with slow organisms, made of meat? Well, consider. Mammals, then primates and hominids spent the last fifty million years adding layers to their brains, covering the fishlike cerebellum with successive tiers of cortex. Adding new abilities without dropping the old. Logic didn’t banish emotion. Foresight doesn’t exclude memory. New and old work together. Picture adding cyber-prosthetics to our already-powerful brains, a kind of neo-neo-cortex, with vast, scalable processing, judgment, perception—while organic portions still have important tasks.

What could good old org-humanity contribute? How about the one talent all natural humans are good at? Living creatures have been doing it for half a billion years, and humans are supreme masters.

Wanting. Yearning. Desire.

J.D. Bernal called it the strongest thing in all the world. Setting goals and ambitions. Visions-beyond-reach that would test the limits of any power to achieve.  It’s what got us to the moon two generations before the tools were ready. It’s what built Vegas. Pure, unstoppable desire.

Wanting is what we do best! And machines have no facility for it. But with us, by joining us, they’ll find more vivid longing than any striving could ever satisfy. Moreover, if that is the job they assign us—to be in charge of wanting—how could we object?

It’s in that suite of needs and aspirations—their qualms and dreams—that we’ll recognize our augmented descendants. Even if their burgeoning powers resemble those of gods.

—The Blackjack Generation

David Brin
David Brin Ph.D. is a scientist and best-selling author whose future-oriented novels include Earth, The Postman, and Hugo Award winners Startide Rising and The Uplift War. David's newest novel - Existence - is now available, published by Tor Books.


I’ve worked on machine learning and natural language since about 1979. I recently went back to school to get a master’s in computational linguistics. I made a study of Dialogue Systems, which are software systems meant to be conversational. A “General Dialogue System” would, by definition, pass the Turing test. “Practical Dialogue Systems” exist today (I actually wrote one for a class project) and let you converse as long as you stick to tasks the computer can do (e.g., making a plane reservation). If you haven’t, you ought to read about dialogue systems.
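A practical dialogue system of the plane-reservation sort is often built as a frame with slots: the system keeps asking about the first unfilled slot until the frame is complete. Below is a minimal, hypothetical sketch of that loop; the slot names and prompts are illustrative inventions, not the commenter's actual class project.

```python
# Frame-based "practical dialogue system" sketch. The system can only talk
# about one task: filling the slots of a flight-reservation frame.

SLOTS = ["origin", "destination", "date"]

PROMPTS = {
    "origin": "Where are you flying from?",
    "destination": "Where are you flying to?",
    "date": "What day would you like to travel?",
}

def next_prompt(frame):
    """Ask about the first unfilled slot; confirm once the frame is complete."""
    for slot in SLOTS:
        if slot not in frame:
            return PROMPTS[slot]
    return ("Booking a flight from {origin} to {destination} on {date}."
            .format(**frame))

# A toy "understanding" step: here the caller supplies slot/value pairs that
# a real system would extract from the user's utterance with a parser.
frame = {}
print(next_prompt(frame))   # Where are you flying from?
frame["origin"] = "Boston"
frame["destination"] = "Seattle"
print(next_prompt(frame))   # What day would you like to travel?
frame["date"] = "March 3"
print(next_prompt(frame))   # Booking a flight from Boston to Seattle on March 3.
```

Note how the frame itself supplies the "why" of every utterance: the system speaks only to fill or confirm slots, which is exactly the narrow volition the comment goes on to discuss.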

That said, I spent a lot of time thinking about why we can’t make a general dialogue system. I concluded that the problem is with volition. WHY does the system do or say anything? I could create a system that correctly parsed and “understood” everything a person said, but then what? What does it do with that info and why? The only answer I could find is that it will only “want” to do what people have programmed it to.

So it would never be a crime to create an AI—just to give one motivations that are contrary to human interests. Much as it’s no crime to use dynamite, as long as you don’t try to hurt people with it.

I could visualize highly paid teams of “Volition Designers,” who construct the rules that govern the deep behavior of an AI. (Asimov’s Laws of Robotics are a very, very simple form of this; the robot’s only volition is to a) save people, b) obey orders, and c) protect itself. Otherwise, it’ll just sit there.)
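Asimov-style prioritized rules can be pictured as an ordered list that scores candidate actions, where an earlier rule strictly outranks every later one and an empty verdict means the machine "just sits there." This is only an illustrative sketch; the rule names and action strings are invented.

```python
# A "volition template" as an ordered rule list: each rule scores candidate
# actions, and the first rule that positively motivates any candidate wins.

def choose_action(actions, rules):
    """Return the action preferred by the highest-priority rule that
    positively motivates some candidate; with no motivation, do nothing."""
    candidates = list(actions)
    for rule in rules:                      # earlier rule = higher priority
        if max(rule(a) for a in candidates) > 0:
            return max(candidates, key=rule)
    return "idle"                           # no volition -> just sit there

# Toy rules mirroring: a) save people, b) obey orders, c) protect itself.
saves_people = lambda a: 1 if a == "pull_human_from_fire" else 0
obeys_order  = lambda a: 1 if a == "fetch_coffee" else 0
self_protect = lambda a: 1 if a == "step_out_of_fire" else 0

rules = [saves_people, obeys_order, self_protect]

print(choose_action(["fetch_coffee", "pull_human_from_fire"], rules))
# -> pull_human_from_fire (the first law outranks obedience)
print(choose_action(["hum_quietly"], rules))
# -> idle (nothing in the template motivates any action)
```

The design choice worth noticing is the strict lexicographic ordering: a lower rule never trades off against a higher one, which is the simple form of volition design the comment attributes to Asimov.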

So could you have a self-modifying AI? Meaning one which could change its own volition template? I almost want to claim that that’s logically impossible; whatever code decides when and why to change the template is now the real volition system.
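The regress can be shown in a few lines: even an agent that is allowed to swap out its rule list still does so under a fixed guard, and that guard, not the mutable template, is where the real volition lives. The class and names below are hypothetical illustrations.

```python
# Even a "self-modifying" agent has a fixed meta-rule deciding WHEN and WHY
# the mutable template may change; that meta-rule is the real volition system.

class Agent:
    def __init__(self, template):
        self.template = template          # the mutable "volition template"

    def maybe_self_modify(self, new_template, approved):
        # This guard is code the agent cannot rewrite from inside itself.
        if approved:
            self.template = new_template

a = Agent(["save people", "obey orders"])
a.maybe_self_modify(["maximize paperclips"], approved=False)
print(a.template)   # ['save people', 'obey orders'] -- the meta-rule held
```

Whatever logic computes `approved` plays the role the comment describes: change it, and you have simply moved the fixed volition system up one level.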

So could a system kill people? Sure. It could have bugs (like HAL 9000—everyone’s favorite General Dialogue System), or a malicious programmer could build one, but I can’t see one evolving. Would they get rights? I hope not, but people can be dumb. Someone could easily make an AI that mimed human feelings, swaying the public into pushing for such rights. (I already see people get attached to characters in video games, and those have almost no ability to communicate.)

One thing’s for sure: we have made zero progress toward a machine that “wants” to do anything. Something about human intelligence is very, very different from anything we have ever put on a machine. It may not seem that way to someone outside the field, but everyone doing serious work in the area knows it.


You’re assuming the conversational programs have anything at all to do with AI. I would say the problem of getting AI is largely a hardware one. We will never get a general-intelligence AI with von Neumann machines; however, the hardware for AI is currently being worked on. Complexity for general AI as good as ours may be difficult, though. We do have a lot of underlying programming.
