Ordinarily when I write articles I have some point that I’m trying to make. Not this time – this article is all questions. The broad questions are these: How should people who have a sincere, deeply held belief about a radically different future behave in the present?
How much weight ought one to give one's own convictions, and how much ought one to hedge one's bets? What level of risk is acceptable? What weight should other people's concerns be given, particularly those who might not have the same sincere beliefs about the future?
It is a truism that the future will not come about without action in the present. Indeed, we see people in the present staking their careers on the prospect of some future advance. But I want to sidestep the question of whether making a career of building the future is a wise idea for one very simple reason. Let’s assume that many of the things we think will happen in the near future in fact never occur. Aubrey de Grey, for all his work, is unsuccessful at ending aging. The Machine Intelligence Research Institute (MIRI, formerly the Singularity Institute for Artificial Intelligence) spends a bunch of time working on A.I., or warning about the dangers of A.I., when, in fact, it turns out A.I. isn’t possible for some reason or another. Even Ray Kurzweil ends up with a bad streak of wrong predictions towards the end of his career. While these things would be, from many of our perspectives, tragic, it seems that the individuals involved in the activities will do fine in their respective careers. Aubrey will still be a scientist who did some interesting work, the folks at MIRI will have pondered interesting questions, and Ray will have been a genius who, after all, was as fallible as most prognosticators.
In short, they will have been wrong, but still have had interesting careers.
Alternately, we might consider people who gamble high stakes on the future – perhaps foolish risks. I think most of us would think it a bad idea to abuse our bodies recklessly on the assurances that regenerative medicine is coming. We wouldn't necessarily want to quit our day jobs because the machines will be automating most everything soon anyway. Most of us likely think it’s unacceptable to waste water and other natural resources now on the assurances that desalination or some other technological fix will come along to clean up after us. Better to play it safe and take care of our bodies, keep our jobs, and conserve our resources until we can be sure, right?
The first scenario seems like a low risk / high reward situation. One can have an interesting, though perhaps ultimately unfruitful, career trying to build the future without risking all too much. And the second scenario seems like a high risk / low reward situation. If we’re wrong when we abuse our bodies, quit our jobs, or waste our resources there is much to be lost, whereas if we play the situation more conservatively any gain will still be realized. I think we can safely ignore decisions that are low risk / low reward as essentially trivial – by definition, not much is at stake.
That leaves a high risk / high reward situation. The kind of situation where not taking the risk precludes, or at least makes much more complicated, the reward. And the reward must be significant – something highly desirable – to warrant such a risk. Otherwise it’s a foolish choice like those discussed above. I’ve had one very particular question that I think falls into this category on my mind lately, though I’m sure there are many more.
Let’s talk about love.
At first blush, love seems like an odd topic to be talking about among transhumanists. After all, humans have been finding each other and falling in love since they first existed. That’s not to say we have the whole thing figured out, and maybe we could use some kind of technology to make the whole process a little easier (internet dating, anyone?) but at base love is a very non-technical subject for which two humans, generally speaking, are required to figure things out on their own.
Ah, but we are transhumanists, and as transhumanists we talk about all kinds of fantastic things. We *make* all kinds of fantastic things, or at least champion their making. We talk about machines that are able to read emotion. We talk about artificial intelligence that is able to pass as human. We talk about virtual reality becoming indistinguishable from reality. We talk about printed skin that can feel, intelligence that can adapt to user specifications, on-the-fly fabrication of objects through 3-D printing and the like … we even talk about machines that can be conscious. A good many of us go further than that – envisioning (not to be unduly grandiose) virtual worlds where we are our own private gods. A few of us even talk about becoming gods in the real world – at least those of us who don’t think the real world is actually a simulation.
After all that, let me make a claim I hope will be uncontroversial. It’s at least consistent with many of our beliefs that we will create artificially intelligent machines that are either actually conscious or close enough as to make little difference. It’s consistent with our beliefs to suggest that those A.I.s could be placed in very human-seeming bodies, and might exhibit very human-seeming emotions. Because of our ability to manufacture goods quickly and cheaply through 3-D printing and the like, it’s plausible we might have many bodies for that intelligence, or for many intelligences, to swap between. Or, if bodies are undesirable or too difficult, we ought at *least* to be able to make hyper-realistic virtual avatars. Probably even avatars that we could touch, or at least seem to touch, without too much trouble. And it all might happen rather quickly. Even if The Singularity doesn’t show up by 2045, wouldn’t we all be pretty surprised if the majority of this kind of scenario hadn’t become reality?
So, a thousand-some words of prelude later, let me get to the point. Using all this technology that we believe will arrive, it seems entirely possible to create whatever sort of love interest we might want, either digitally or physically. Such a creation could be tailored precisely to our specifications, or at least it would be smart enough to adapt over a short period of time. This creation could take any form we desire, or even several forms at once, with only those attributes, both physical and mental, that we want. It’s entirely plausible that within thirty years one will be able to create essentially anything one's heart desires.
Now, humans are great and all, but how can they compete? Some people will say that machines could never feel emotion, or even simulate it well, and so love is the one thing machines will never be able to do. I say add it to the list, along with chess and Jeopardy!. Some people might say that even if it is possible, they would only ever want to love a human. That’s fine, but it’s not the high risk / high reward scenario I’m describing – for those people, the reward is not so great. No, I’m talking to those people who think “If that sort of thing were available today, I’d take it and never look back.”
What should those people do?
Let’s face it, the technology *isn’t* here yet. Yes, we think it will be, and yes, we have good reasons for those thoughts, but it’s still vaporware. And it’s not as if the technology will be here in time for Christmas, or even in the next five years. Thirty years seems about right for that level of sophistication (though some argue it will be much, much longer). Fifteen years seems wildly optimistic. Ten years seems downright crazy. So, if you’re somewhere between born and, say, 40 years old, what are you to do?
Obviously you could just wait out the time and see how it goes. That’s really facing the risk head on – and it’s no insignificant risk. We’re still biological and, as a rule, wired for companionship. Putting that part of one’s life on hold for thirty years or so – particularly that early-to-mid portion of one’s life – is no small gamble. I think it’s safe to say something significant would be lost. And, even if one turns out to be right, there’s still a significant chunk of years sacrificed in the meantime. Even the best-case scenario comes at a significant cost. Perhaps only if radical life extension and all the tech necessary to build these artificial companions both come to fruition is it possible for someone to look back at thirty years and think ‘no big deal.’
Or, one could decide that the reward is too far in the future, find a nice human mate, and settle down. It’s probably at least a little unethical to both settle and *want* that future to occur, even if it never actually happens. It would be a little like being in a relationship the whole time while secretly wanting someone else. Still, all that aside, what are the risks? Those thirty years or so would be as good as possible, and if the technology doesn’t come to fruition then no significant loss was incurred. But human relationships are messy things, and it seems unlikely that one can have their cake and eat it too, so to speak.
If one does settle, and if one builds a life with another human being for the next thirty years, how likely is that spouse to just accept being replaced by so many bytes or bits of silicon? Robots might displace humans in the workplace, but in the home? It seems like it’s asking for trouble – at least as traumatic as being caught cheating, and perhaps even more so since the third party isn’t human. Never mind that many people entering relationships will have children. How could one explain to them that daddy or mommy left for a robot? Is it worth the risk of tearing asunder a family just to avoid personal risk? Is that an ethical way to behave?
But suppose one settles and makes the opposite choice when the technology comes out. No human being is perfect, romantic notions aside. How might it feel to be married to someone who is good, even great for a human being, but so obviously deficient given the competition? Many spouses, of both genders, can’t remain with their partner when another human being who seems better comes along. How much more difficult must that decision be when your dream partner can manifest simply for the desiring? How much sadder to realize that the only thing holding you back from the perfect partner you always envisioned is your own choice and lack of faith? How long might that go on if radical life extension becomes reality? We might be the last couple of generations to believe, if somewhat optimistically, in ‘till death do us part’ – and death may be a very long way away indeed.
There’s no grand conclusion to this, though. No words of wisdom I’ve kept secreted away for the ending. Just puzzlement and wonder. What is a person to do?
John Niman is an Affiliate Scholar and a J.D. Candidate at the William S. Boyd School of Law at the University of Nevada, Las Vegas. His primary legal interests include bioethics and personhood. He blogs about emerging technology and transhumanism at http://boydfuturist.wordpress.com.