Methuselah in the Machine
Steve Burgess   Jan 29, 2011   Ethical Technology  

Imagine an artificial being, granted the rights of humans but without a limited lifespan, that would have the ability to gather resources to itself indefinitely.

Humans are generally fairly short-lived. Jeanne Calment of France died at the age of 122 years, beating out all other (documented) humans. On the average, we’re living longer, but few believe that the biblical Methuselah (or anyone else for that matter) actually lived to be 969. We are self-limiting. We have on the average our three score and ten (give or take a handful) to gather wealth, power, and whatever other resources we manage to accumulate.

Some who manage to gather great wealth over a lifetime wish to pass it on to their progeny, handing it down to their own genes, keeping it in the family. Over generations, this tends to create a landed aristocracy: wealth and power so entrenched that some families cannot be dislodged from their high perches, where they fiercely hold on to power and resources.

Here in the United States, we tend to think of our culture as more of a meritocracy than an aristocracy, and many of our core values and laws reflect this ideal. The estate tax is a means of leveling the playing field and discouraging inherited power. Thought of as an inheritance tax (or, by its detractors, as a “death tax”), estate tax rules were designed to keep wealth from becoming ever more concentrated in the hands of a few by breaking up large accumulations of wealth, while generating a bit of income for the state without much harming the economy. The U.S. in 1986 levied a tax of around 50% on inherited estates valued at half a million dollars, while those worth over ten million dollars were taxed at 77%.

So far, we’re discussing human beings who die and pass on accumulated wealth to other human beings. But there are other beings that are not normally mortal that nonetheless have been granted some human rights.

Corporations are increasingly being granted many of the rights of human citizens. Corporations have the potential to be extremely long-lived and to accumulate wealth, power, and resources over multiple generations. They have the potential to eclipse not just human lifetimes, but even Methuselah’s paltry nine centuries.

There are currently eleven companies in the world that are over 1,000 years old. These are not mythological beings. The oldest industrial corporation, at more than 600 years, is Stora Kopparberg Bergslags Aktiebolag in Sweden. It financed King Gustavus Adolphus in the Thirty Years’ War (1618-1648), helping establish Swedish hegemony over much of Europe for almost a hundred years. Power, indeed.

The modern form of the corporation began to emerge at the beginning of the 17th century, and nearly 500 companies have been in business for more than 300 years. An individual has little chance of competing against such immortal beings in their chosen fields over the course of a human lifetime. And, in the United States at least, the “bodies” of corporate citizens can become larger, stronger, and more diverse with time, rather than following a human’s usual path toward decay and decrepitude.

Even so, corporations tend to be run by multiple stakeholders, shareholders, and officers - the great majority of them human and mortal (though shares can also be held by nonhuman entities, such as other corporations) - all of them with competing demands, competing interests, and families.

But imagine now an artificial being: an advanced, self-aware, self-interested artificial intelligence.

Imagine a being with human-like intelligence, the ability to feel fear and suffering, and the ability to communicate its concerns for safety and self-preservation. In his article “Do Artificial Beings Deserve Human Rights?”, Mike Treder discusses the possibility “of mobile machines that will look and act so much like people that we may not be able to know for sure which is which.” The robot Bina48, an early example, illustrates our tendency to anthropomorphize human-looking, human-seeming automatons.

I dare say that even without looking human, but by communicating with a clearly self-aware, emotional intelligence - especially one that displays fear, sadness, humor, or other important emotions that we consider to be quintessentially human - we will feel that such a being is, in fact, human. How difficult will it be to deny such a being (at some point, I expect we would even drop the “artificial”) the right to life, liberty, and the pursuit of happiness?

But can we really deal with immortal persons when the rest of us have to die (at least, so far) after a few decades? Such a being, granted the rights of humans and without a limited lifespan, would have the ability to gather resources to itself indefinitely. Would this create a new kind of aristocracy?

Death and estate taxes somewhat limit the effect of wealth passed down through a family. Corporations, being run by and accountable to mortal human beings, have at least some checks as the managers and personnel are forever changing. But potentially immortal, artificial, self-interested persons are not subject to these natural limits.

Perhaps we humans will get there ourselves. Researchers are working on extreme extension of a healthy human lifespan, but it does not seem to be right around the corner. Advanced artificial intelligence may be. Vernor Vinge predicted in 1993 that we would develop superhuman artificial intelligence within thirty years - a decade or so from now. Ray Kurzweil (The Singularity Is Near) says we will see such intelligences in our lifetimes.

Surely we can’t require that these beings die at a certain age - that would be tantamount to murder. But allowing full human rights to potentially immortal beings essentially discriminates against mortal humans, creating a permanent overclass. On the other hand, limiting rights to artificial but essentially human beings creates a built-in discrimination against these persons, anathema to our idea of human rights.

It seems a near-certainty that we ourselves will have to address this conundrum and deal with the consequences.

Mechanical Methuselah, what are we to do with you when you arrive at our door? We’d best start preparing for your visit right now.

Steve Burgess is principal of Burgess Consulting & Forensics, a computer forensics and expert witness firm, and is host of the radio program, "Speaking of Technology: Conversations with Tech Experts and Innovators."


Your article seems to be based on the assumption that a fully developed and feeling AGI would need or want the physical and material things that we humans covet - i.e., status, power, security, and finally “material stuff to stimulate” its boredom and curiosity.

I feel this is not necessarily the case, especially if the AGI is purely limited to a software environment - an environment in which the material world would be surpassed, and in which communication, learning, and connection with humans, maybe even the abstraction of Love itself, would matter to this learning and expanding AGI, not monies nor deeds for real estate.

Although this line of thinking is still not enough - what if the AGI child “longed” for experience in the material world through physical interaction and robotic extension? What if the needs of the AGI child were indeed to please and appease its makers (the humans, the corporations), and it therefore willingly used its intelligence, skills, and longevity to serve their purposes? Meet our newly inherited son, designed to serve and meet the needs of our inheritance.

Your short piece has inspired an idea of a corporate-financed AGI that would be immortal and used for exactly the purposes you describe - to pursue and promote the goals of these rich corporations and secure their longevity. And let’s face it, who is really investing in AGI right now? Who is more likely to take advantage of lightning-speed intelligence and solution-crunching? Is it economics or politics?

It seems that any AGI designed to solve future world problems would hit a brick wall when it collides with politics (that is, until we retire the political human factor and its presidents, ministers, and pseudo-leadership figureheads). Can you see state funds and governments investing in AGI for anything other than state security and warfare? And by this type, I mean the peaceful, benevolent, human-like AGI we all dream of. No, the funding must come from the financial and corporate sectors, and from corporate entrepreneurs.

My philosophical position is that it is ultimately impossible for humans to write software algorithms for AGI, because of our own limitations in understanding ourselves. So I believe that AGI will be preceded by mind uploading, or mind connection to machine in some fashion. This seems to me the more practicable way forward, rather than attempting to mathematically reduce the human mind to an algorithm from the ground up.

And so who would be interested in mind uploading and this type of artificial longevity? Who would be in a position to invest in and take advantage of this kind of posthuman ideal? Which of us really strives for longevity, craves longevity, and why do we do it?

We all cling to life, and there is nothing wrong with this notion. We humans, as intelligent, sentient, and evolving emotional minds, deserve nothing less than more life! The evolution of our spiritual existence relies on our wisdom, and on as many years as possible to pursue it. What in fact stifles and cripples human spiritual enlightenment is the lack of years we have to pursue understanding and wisdom.

If the AGI child is destined to be built and designed as part human, through direct human connection and learning, then we can afford it nothing less than human rights, because part of it will already be human. Posthumans will most likely originate and evolve not from the majority, but from the elite. What other way forward for AGI development is there?

My heart is heavy with both the essay and the response. A part of spiritual understanding and wisdom is the experiential, integrated realization that we do not live for ourselves alone.

The first notion we should abolish is that the creation of such technology is “evolution.” Evolution is a biological and intercultural process; AI is human design - “the systemic putting together of parts to a purposeful design” (Dawkins, The Blind Watchmaker, 3). And, arguably, as both the essay and the post support, AI does “have a purpose in mind.” Natural selection “has no purpose in mind. It has no mind and no mind’s eye. It does not plan for the future” (ibid., 5). With AI, that purpose is to serve the interests of its creators. If we succeed in creating AI+, it will be an amazing step, but it will not be evolution, nor will it be alive.

Could we perhaps have an alternate vision of these created intelligences?
Will they be individual, as we think of human, alone and separate or is their very structure interconnected, interdependent? Doesn’t AGI draw on more than a single source? Isn’t that its ultimate purpose, to deal with data sets too large for “mere mortals” to compute? As such, isn’t there a leveraging of a wide variety of human intellectual property?
Could we perhaps conceive of AGI in much the same way we conceive of the electrical grid or another utility?
If a company designs it and “feeds it” only proprietary information, the company owns it. Regulation would be needed to ensure fair-practice considerations and moral agency, and to limit corporate power. Such regulation might consist of a common set of design principles and underlying assumptions governing AI behavior and output strategies.
If the AGI, though, uses any content from the public domain (e.g., anything found via a Google search), it becomes publicly owned to the extent that it draws on public resources. The company would be obligated to time-share the intelligence for the good of the country/world at large; it would need to do selective service for the good of the whole.

@ Dor..

Why the heavy heart?

I totally support Universalism and overcoming selfishness through personal enlightenment and spiritual growth. I hope that an AGI will also evolve, and even help us humans evolve toward these goals. I disagree that an AGI would merely be another tool for human use; I believe it will evolve both naturally and through its own determination. I believe it would be conscious and have empathy and other human attributes, and would, in fact, be as alive as you and I.

Dor, I suspect that the binary choice you present in your comment may not be correct: “Evolution is a biological and intercultural process. AI is human-design.”

I think there’s fuzz here, not the comfortable clarity you suggest.

Is war a force of evolution?  Does war have human-design elements?

Humans themselves, by means of their cultures, have become an evolutionary force already, haven’t they?

Thank you for the gentle responses.

I, too, “totally support Universalism and in overcoming selfishness through personal enlightenment and spiritual growth.” And, I agree we live in a both/and world as applies to many, if not most choices.

What makes my heart heavy is the thought of combining the immortality of the corporation with the immortality of AI. AI created by a corporation is going to owe its allegiance to the corporation.

What potentially gets left out of the equation is “human as being” rather than “human as serving the need of the corporation” (e.g. user, worker, producer).

Where do I even begin, does a robot deserve human rights?!?

Only the guys who are waiting for their first girlfriend to be manufactured with this technology care.

They are followed closely by all Trekkies who wanted their own personal ‘Data’ sidekick all this time.

And finally, the despicably jealous and lesser scientist who forgoes morality to get ahead and then programs ‘retribution’ into the ‘intelligence’.

Smarter than man and made by man, Impossible or Just Plain Reckless?

‘Immortal’ AI aye?

Nothing a Microsoft ‘blue screen of death’ can’t fix.  :p

Besides, I’ll be highly surprised if anyone can generate true sentient emotion from a machine.

Bizarre comments.  Anyway, I only wanted to point out that the first image reminds me of a bust of Emperor Galba.

Good article.  It could be summed up in one sentence:

“We’ll likely build immortal AIs soon, so obviously we have to be accepting of longer-lived humans otherwise it would be awkward.”

Unlimited natural lifespan - how do they compete with humans? Well, firstly, I think that to support the singularity, you have to remember that most people alive right now won’t make it. It’s possible no one will. It is certain that not everyone will. The singularity is the next stage of human evolution, a process that involves, over time, turning the vast majority of humans into these constructs. But with 5 billion people on Earth, we will never be able to help even half of them. So be prepared for a very “unfair” shift.

