Meanwhile, People Are Dying
Mike Treder   Aug 18, 2009   Ethical Technology  

Fantasists ponder a future of superlongevity, superintelligence, and superabundance—as if wishing will make it happen. Meanwhile, people are dying.

In assessing the possibilities of a world more greatly enabled and impacted by emerging technologies, it’s tempting to consider all the various visionary dreams as equally likely. Reading a lot of science fiction (which I do, and which I heartily encourage) can lead a person to think that if something has been imagined, then it must be possible. This is one of the risks of enjoying speculative fiction, and it’s made more acute by engaging uncritically in a community of like-minded believers.

We live in an age of miracles when compared with the nasty, brutish, and short existence endured by our progenitors over the millennia, and today’s heady pace of scientific and technological advance makes almost anything seem plausible. Yet, if we are to make good use of our limited resources—including the most precious commodities of all: time and attention—we must learn to draw careful distinctions between the probable, the possible, and the fantastic. Some things will simply never be achieved, like perpetual motion machines or true immortality. That doesn’t mean, however, that amazing developments won’t occur that would have stunned our grandparents and will leave many in our own time astonished.

It’s useful, then, to think about emerging technologies on two axes: feasibility and impact. “Feasibility” means how likely something is to be achieved, including whether there are major scientific or technological hurdles to be overcome and whether enough effort will be made to accomplish whatever is envisioned. “Impact” describes how much a particular emerging technology (or result of a technology) is likely to change the world. Note that this does not necessarily imply for better or for worse, just different.

These are subjective assessments, of course, but we have to start somewhere. Your opinion about a particular development might differ from mine, and that’s fine. The main point is that we try to determine where our collective efforts can best be applied. If something has high potential impact but almost no chance of being achieved, it should be assigned a lower priority than other work of possibly lesser impact but greater likelihood of success.

With that as an introduction, take a look at this initial assessment:

Moving roughly in order from most feasible to least, and from least impact to most, let’s briefly consider each one:

Robotics already is making inroads in manufacturing and also now on the battlefield, so its feasibility is basically unquestioned. Its impact, however, seems incremental to me; significant but probably not revolutionary. (Marshall Brain, an IEET Fellow, might disagree.)

Genetic engineering includes modifying not only the human genome but other animals, insects, plants, and even bacteria and viruses. Again, work in this area is well underway, so even if some of the possibilities sound exotic, it’s likely only a matter of time in most cases. Impacts will be large, but since they will occur somewhat gradually and not all at once, I’ve placed this just past the middle point on the vertical axis.

Geoengineering is the deliberate application of technical knowledge to manipulate Earth’s climate on a large scale. Many of the techniques being considered today are certainly practical, although their advisability is hotly contested. If they are attempted, which seems more and more likely, and especially if various interest groups—both public and private—end up competing with each other, the results could be catastrophic, one of those cases where the cure is worse than the disease.

Space colonization is a romantic idea from the golden age of science fiction: the 1940s and 50s. Writers then knew a lot less than we do now about the hazards of space, and they also made assumptions that far too often minimized the financial and political challenges involved. Still, it seems probable that we’ll eventually do this, although since it’s taking so much longer than once imagined, the impacts will be comparatively small.

AGI, or artificial general intelligence, has long been predicted but appears almost as far out of reach now as it did several decades ago. We know surprisingly little about how consciousness actually works. Someday someone probably will crack the nut, but it’s a hugely complex problem. Because it’s so difficult to determine when—or even if—AGI can be developed, it’s also hard to gauge the potential impacts.

Desktop nanofactories are the most significant near-term goal for advanced nanotechnology (also known as molecular manufacturing). It doesn’t appear that any major technical obstacle prevents their development, although the task certainly will not be easy. Assuming that atomically precise, general-purpose, exponential manufacturing can be achieved, the effect on society could be enormous, especially if it comes sooner rather than later.

Post-senescence refers to the defeat of aging, a time when death would be limited to accidents, homicide, and suicide. Transhumanists are sometimes accused of minimizing the incredible challenges involved, but on the other hand, skeptics can be criticized for failing to take seriously the possible impacts. Given how difficult it seems to overcome all the hurdles and how little is actually being spent on the effort, I’ve pushed this out past the midpoint for feasibility. And even if it is achieved someday, the impacts would appear to be minimal, at least during the next century or so.

Cyborgs—let’s call this the projected merging of humans and machines. Of course it’s already happening to a small degree, with artificial limbs and cochlear implants, but the ultimate vision of a half-man, half-robot seems distant, partly because of the technical challenges and also due to the slowing effects of religious resistance, economics, and politics. So, although it may happen and the effects could be large, it’s not around the corner.

Post-scarcity is the dream of many a futurist: a time when superabundance, unlimited energy, and enlightened (if not obsolete) governance bring an end to poverty and launch an unending era of peace and prosperity. Suffice it to say that although I admire the sentiment, the potential for its ever coming to pass seems marginal at best.

A technological singularity might not be the most probable of events, but by definition it would have greater impact than anything else on this list. I don’t fault anyone for thinking about it or for working in some small way to try to influence a beneficial outcome, but I do have a serious problem with those who talk and act as if it is a foregone conclusion.

Uploading of human consciousness or personality into the substrate of a supercomputer is another popular science fictional concept. We can’t rule out the possibility altogether that this might be achieved someday—and its impact if it ever comes to pass could be significant—but the whole idea is still entirely in the realm of the imaginary.

And now to my second point. Beyond assessing the feasibility and the effects of emerging technologies, it’s imperative that we also stay firmly footed in the real world if we hope to play a role in bringing about positive change.

In the real world, people are dying. Right now, today, children are wasting away from starvation and suffering horribly from preventable diseases. Women and girls (and some men and boys too) are in slavery at this very minute, being used for the profit or pleasure of others. Species are disappearing all around the world, blinking out of existence before we know a thing about them as biological diversity decreases and the potential for severe ecological collapse looms larger. Glaciers are melting, methane is bubbling up out of the permafrost, and coal-fired power plants keep belching more and more CO2 into the atmosphere. The struggle for energy, for water, for arable soil, and for political advantage brings the devastation of major wars closer every day.

We need to have an overlay on our thinking, a recognition that while it can be fun and valuable to spend time thinking about or working on futuristic possibilities, in the real world life goes on. Or not, because it stops, it ends, for many thousands of people every day. That is a certainty, and its impact is unquestionable.

Technoprogressives are in a unique position to bridge the gap between understanding the potential power of emerging technologies—modulated by a sober and realistic assessment of feasibility—and finding workable solutions to the real problems we face today and will face tomorrow.

Mike Treder is a former Managing Director of the IEET.



COMMENTS

>Right now, today, children are wasting away from starvation and suffering horribly from preventable diseases… water shortages… coal plants

So the most assured near term technologies for impacting air pollution:

A lot more nuclear energy. Nuclear energy now generates 2600 billion kWh worldwide, displacing the roughly 2 billion tons of CO2 that coal plants would have emitted in its place over the last few decades. I have an article on deaths per TWh; nuclear energy looks great relative to coal (50% of all current electricity), oil (oil wars, air pollution), and natural gas. Nuclear thermal heat can also be used on a larger scale for desalination and for biofuel production. Aggressively developing annular/dual-cooled fuel would allow existing reactors to be uprated by 50%: 3900 billion kWh without building new reactors. Converting all 7000 container ships into nuclear-powered vessels (like the existing 250 military nuclear ships) would reduce air pollution by an amount equivalent to half the cars in the world. The ships would also be 50% faster (better economics) and cheaper to operate than current ships, and the savings could pay for security crews and insurance to fight off pirates—more security, not less. Deep-burn nuclear would handle the waste.
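The arithmetic behind those figures can be checked with a back-of-envelope sketch. The coal emissions intensity of ~1 kg CO2 per kWh is an assumed round number, not from the comment itself:

```python
# Back-of-envelope check of the nuclear figures above.
# coal_kg_co2_per_kwh is a rough assumed value (~1 kg CO2/kWh for coal).
nuclear_twh = 2600               # worldwide nuclear output, billion kWh (= TWh)
coal_kg_co2_per_kwh = 1.0        # assumed emissions intensity of displaced coal

# Tons of CO2 displaced per year if that output had come from coal instead
displaced_tons = nuclear_twh * 1e9 * coal_kg_co2_per_kwh / 1000

# A 50% uprate of existing reactors via annular/dual-cooled fuel
uprated_twh = nuclear_twh * 1.5

print(displaced_tons / 1e9)  # ~2.6 billion tons CO2/year, same order as "2 billion"
print(uprated_twh)           # 3900.0 TWh, matching the comment's figure
```

Under this assumption the displacement comes out slightly above the comment's "2 billion tons," which is consistent given the rounding involved.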

Starvation: we need more genetically modified crops. The International Rice Research Institute’s new rice project is trying to change photosynthesis in rice; there are also projects for drought and salinity resistance.

Preventable disease: we would need to be willing to depose, or otherwise do something about, the corrupt governments whose conflicts allow such diseases to spread. Malaria vaccines and other treatments are being developed and deployed. Cleaning up energy sources helps a lot in this area too.

Vehicles: nuclear-powered commercial shipping, as mentioned. Aerodynamic retrofits of all existing cars and mixed diesel/gasoline fuel injection (20% more efficiency) could be programs that make a big impact on overall oil usage and pollution.

Follow China’s lead with a massive electric bike program, especially foldable electric bikes, though there are problems with market acceptance.

Climate change: I have an article with ten ways to rapidly achieve an offset of 1 billion tons of CO2—better cement that absorbs CO2, etc.

Robotics: more impact if you use it right. Several thousand UAVs exist; use UAVs to make flying commuter vehicles (they can be all-electric planes). Robotic cars can also do more if applied right. Certain things will work technologically, but because people cannot understand that they can work, there is no acceptance or deployment. A semi-robotic example: variable cruise control with GPS awareness, used to enforce hypermiling acceleration, as the UK’s Sentience software has demonstrated.

I believe you underestimate the impact of robotics. Automated resource collection, such as farming and mining and such, would enable a much larger amount of resources to be collected and distributed. Automation would also be a key proponent in geo-engineering, which you have very high on the list.

Ah, when I was just emerging from teenagehood, I too had a rather typical leftist identity. I thought proselytizing about climate change was the most important thing I could do (too bad it wasn’t quite yet as fashionable as currently), and I had a habit of donating a significant portion of my income to Amnesty International, since the torture/etc going on in the world felt like the worst thing.

Some years later, I became acquainted with the concept of “existential risk”, and accepted the argument that they are in a category of their own in importance. At that time, I was working in animal rights activism, since that’s the field where there currently exists the most massive amounts of (systematic) torture of sentient beings, and potential for semi-soonishly getting rid of a substantial portion of it. But since I’ve never been as good as most in rationalizing conclusions that best serve my own e.g. social needs (or at least I more commonly notice when my mind tries to generate excuses), I had to admit that animal rights activism—and also human rights activism etc.—really get nowhere close to existential risk mitigation in terms of their importance. This was a matter of doing the math: There currently is a large amount of suffering going on, but if all life gets destroyed, a vastly larger—almost infinite, though probably technically not—amount of possibility for worthwhile life would be lost, which makes even small chances of existential risk outcomes vastly more worthy of attention than traditional do-gooder projects. For more details, see e.g. http://www.nickbostrom.com/astronomical/waste.html

So I moved away from working on typical leftist projects (thereby essentially stopping being a leftist, in the view of typical leftists, and later also in my own), which held a significant social cost for me, since I knew most of my comrades would be incapable of anything like that. People don’t really have a habit of choosing their do-gooder projects through rational deliberation. If they do think about it, it’s usually “what activity/group would offer best social benefits for me” (and usually an unconscious process). Then, after people have determined what for them is the most advantageous form of being a do-gooder, they when necessary will come up with ad-hoc arguments for why their choice is also ethical.

For people like Mike Treder and James Hughes, I’m not sure if their activity stems more from a genuine need to get something ethically positive done, or from the need to seek political power, position and appreciation, but in any case, they find it necessary to operate within the leftist orthodoxy, and strive to come up with ways to disparage those who don’t. This is useful for the goals of trying to attain political power, position and appreciation within the leftist orthodoxy, but otherwise it is harmful.

They will probably never manage to admit that e.g. singularitarians don’t tend to hold any of the views they most like to accuse singularitarians of holding. Singularitarians are not part of the leftist orthodoxy (instead, people such as me classify most of what the orthodox left does as “doesn’t affect existential risks, thereby not sufficiently ethical to spend effort on”), therefore the orthodox left finds it necessary to attack us. Our talk of the superlative importance of existential risks is a threat to the orthodox left—it might pry away some of their more thoughtful members and potential members.

Rest assured, however, that you will always (or at least for a long time) have many potential members who want the appreciation of the sizable orthodox leftist establishment (though IEET folks themselves don’t yet have high status within it), and won’t dare to affiliate with unorthodox folks. You may even continue to lie to them about what it is that those unorthodox folks think (some of Mike Treder’s recent writings are among the best examples of this, even though this particular piece has its main focus elsewhere), and they won’t question you.

Oh well, I might as well comment on some of the more specific sillinesses in this article:

“If something has high potential impact but almost no chance of being achieved, it should be assigned a lower priority than other work of possibly lesser impact but greater likelihood of success.”

Sounds good if you’re trying to look good and ethical in the eyes of the unthinking mainstream, but how responsible is this really? If there’s a 0.1% chance of an existential risk (i.e. everyone dying, if not something worse), and a separate 98% chance of a million people dying, which is the problem more deserving of attention?

For those wanting to achieve political power and appreciation within the leftist activist orthodoxy, the latter is the fashionable choice. Meanwhile, those who are serious about ethics do the math and focus on the first, as exemplified by the thinking laid out in the Bostrom paper I linked to in my earlier comment.
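The expected-value comparison implicit in the 0.1%-versus-98% question can be made explicit. All numbers below are hypothetical, taken from the rhetorical question itself, except the world population figure, which is a rough 2009-era estimate:

```python
# Expected deaths under the two scenarios posed in the comment above.
world_population = 6.8e9   # rough world population circa 2009 (assumed)
p_existential = 0.001      # "0.1% chance of an existential risk"
p_local = 0.98             # "98% chance of a million people dying"
deaths_local = 1e6

ev_existential = p_existential * world_population  # 6,800,000 expected deaths
ev_local = p_local * deaths_local                  #   980,000 expected deaths

print(ev_existential > ev_local)  # True: the tiny-probability risk dominates
```

Even before counting Bostrom's point about the lost potential of all future generations, which is the crux of the "astronomical waste" argument, the low-probability existential scenario carries the larger expected death toll in this illustrative setup.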

These people then get demonized by the leftist orthodoxy, who will probably never admit that e.g. singularitarians like myself aren’t acting as if “the technological singularity is a foregone conclusion”. What is a foregone conclusion is that even somewhat small chances of existential risks are more important than your everyday genocides and cute starving children, but populists will never dare to say something like this (at least not consistently, and not act accordingly).

@ Aleksei…

“What is a foregone conclusion is that even somewhat small chances of existential risks are more important than your everyday genocides and cute starving children, but populists will never dare to say something like this (at least not consistently, and not act accordingly). “

Wow… Are you sure about this?

Fear is the key, not the practicality. How is Yellowstone Park these days, and can you do anything about it anyway?
I guess at the end of the day, it’s every man or cyborg for himself!

Since it is man that poses the greatest risk to himself, instead of your focus on existential risk, perhaps you should take a peek at existentialism?

However you correct with one point, there is rather a lot of political focus and debate at the IEET.

“Wow… Are you sure about this?”

I urge you to take a look at the article by Nick Bostrom I linked to in my comment. You’ll see that he, the Chair of this organisation (even though this organisation is practically run by populists instead of serious people like him), fully agrees with me here.


“Since it is man that poses the greatest risk to himself, Instead of your focus on existential risk, perhaps you should take a peek at existentialism?”

You sound very confused. Existential risks are not included in “risks man poses to himself”? (Well, a few of them aren’t, but all the most important ones are, including the ones that I’ve in any way alluded to.)

Perhaps I should have presented a definition of existential risk: http://en.wikipedia.org/wiki/Existential_risk

Aleksei: I assume you mean “Existential risks are  included in ‘risks man poses to himself’” instead of “Existential risks are not included.”

There was no typo, since the sentence of mine you are referring to actually ends in a question mark.

It is a (rhetorical) question I am asking CygnusX1:

> Existential risks are not included in “risks man poses to himself”?

...as in does he really think they are not included?

“Post-scarcity is the dream of many a futurist: a time when superabundance, unlimited energy, and enlightened (if not obsolete) governance bring an end to poverty and launch an unending era of peace and prosperity. Suffice it to say that although I admire the sentiment, the potential for its ever coming to pass seems marginal at best. “

I have to agree with some of Aleksei’s observations about the need some people have to be perceived by their peers as focusing on the pressing problems of today. But looked at mathematically, this might not be the optimum course of action.

Treder’s piece reminds me of people who are in love with “poverty.”  Sure, they like to speak about its evils and work with the unfortunate in their free time.  But what are they really doing to end poverty?  We only need to look at the level of military expenditure in the USA, to realize that poverty doesn’t have to exist today.

The people who are most in love with poverty and the unfortunate are most vehemently against the idea of superabundance.  The entire idea of superabundance threatens the very basis of their existence.

Many of the problems facing humanity at this time are political, and can be solved with current technology. The possibility of an end to poverty is very real, not marginal. But it will require political action. Who is willing to do what is required, and who knows what is required?

Folks, remember that history is not linear, we are either moving exponentially to post-scarcity or destruction.

I think using the term “feasibility” is ambiguous. It is not clear whether Mike refers to feasibility with current technologies and financial resources, feasibility in the short term, or feasibility in principle. For example I certainly agree with Mike that AGI and mind uploading are hardly feasible with today’s resources, or in a few years, and perhaps not within our lifetimes, but I am persuaded they are feasible in principle: this is the only assumption compatible with the scientific worldview. To claim otherwise, is to fall into vitalist and mystical positions. No, there is nothing “sacred” or “forbidden” in biology and cognition. Our bodies, brains and minds are machines, and it is within the capabilities of our species to engineer better ones.

Having said this, I mostly agree with the letter of Mike’s article, and I mostly disagree with (my interpretation of) its spirit.

Yes, beyond assessing the feasibility and the effects of emerging technologies, it’s imperative that we also stay firmly footed in the real world if we hope to play a role in bringing about positive change. Yes, if something has high potential impact but almost no chance of being achieved in the short term, it should be assigned a lower priority (as far as the allocation of public resources is concerned) than other work of possibly lesser impact but greater likelihood of success in the short term. Yes, I think when we allocate public resources we should give top priority to finding workable solutions to the real problems we face today and will face tomorrow. Yes, in today’s world, people are dying. I agree with Mike on these points, and this is why I call myself a technoprogressive.

But I think those who choose to spend time thinking about or working on futuristic possibilities also play a very important role. The world is big and complex, and different people with different skills, interests, inclinations, sensibilities and personalities can make useful contributions to making the world a better place. Don’t demonize those who choose to focus on far-future speculations and cosmic visions: these are not incompatible with finding workable solutions to the real problems we face today and will face tomorrow, and these two different attitudes can co-exist in the same person and mutually reinforce each other. This is why, besides calling myself a technoprogressive, I also call myself a transhumanist. If the intended spirit of Mike’s article is to demonize transhumanist dreamers, I most certainly disagree. Focus on what is more important to you, let others focus on what is more important to them, and let’s try to work together for what is important to all.

I have written and studied plenty on what is being done now to prevent deaths and what is possible. It starts with an analysis of what is killing people now in detail.

http://nextbigfuture.com/2009/08/avoidable-deaths-worldwide-scope-of.html

The part about politics and other factors—like local corruption, like not enough money being spent—will be unlikely and slow to change. That is why more technology, or new plans that can circumvent those factors, is important for making more progress. We could have prevented polio deaths, and did prevent some with iron lungs, but it was not until vaccines were developed that big progress against polio was made.

Spend hundreds of billions on sanitation and clean water, or create diarrhea vaccines so that people can tolerate dirty water. Still work on sanitation, but vaccinations could be 20 times cheaper and save 80-90% of the lives.
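The cost-effectiveness claim can be sketched numerically. All figures below are illustrative, taken from the comment's own rough numbers ("20 times cheaper", "80-90% of the lives"):

```python
# Lives saved per dollar: vaccines vs. sanitation, per the comment's figures.
cost_ratio = 20            # vaccines assumed "20 times cheaper" than sanitation
lives_fraction = 0.85      # midpoint of "80-90% of the lives" saved by vaccines

# Relative to sanitation spending that saves 100% of the lives at 20x the cost,
# vaccines save lives_fraction of the lives at 1/20 of the cost:
relative_lives_per_dollar = lives_fraction / (1 / cost_ratio)

print(relative_lives_per_dollar)  # ~17: roughly 17x more lives saved per dollar
```

Under these assumed numbers, a dollar spent on vaccination saves on the order of seventeen times as many lives as a dollar spent on sanitation infrastructure, which is the intuition behind prioritizing the cheaper intervention first.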

@ Aleksei…

Existential risks, are these not included in the risks man poses to himself?
Why yes, yet I do not believe that man will ever, ever be so stupid as to cause his own demise. And I do not believe, for example, that global warming is a player in the demise of mankind either: it will be problematic, maybe, yet even in a worst-case scenario we will adapt, and we will overcome.

There is still a far greater risk from Yellowstone, so have a think about what to do after that one? (Much poverty and many more cute starving kids, and many ethical problems to overcome).

What does hold the evolution of mankind back is selfish attitudes and political views, and selfish political attitudes, (of nations). And since man is a political animal by nature, the only way to resolve this is by attempting to align these differences that divide us, with the common motivations such as freedom and equality and survival that bind us, and hopefully by using values such as wisdom and ethics. We can all take a personal responsibility for our collective cultural and spiritual evolution, and try to be more compassionate and mindful : thus my point regarding existentialism.

Just imagine how much easier it would be to achieve any of our technological goals if the world’s nations were party to an agreement about their benefits.

I will check out your Nick Bostrom link.

The easiest things we can do immediately to cease generating negative utility are to become vegan and stop driving cars.

Also, to echo Aleksei: why worry about a lack of focus or commitment to traditional leftist projects? Their current support is immense. Why do people need you to tell them to support them?

It is, by the way, quite telling that folks like Mike Treder and James Hughes *don’t* find it necessary to promote veganism, even though they start on this “meanwhile, there is suffering” line.

It’s probably because they like the sort of “ethics” that mostly just means writing on the internet how other people are evil, but they don’t like the sort of ethics that would have actual implications for their own real day-to-day choices.

Humans were not evolved to be vegans and there are disadvantages to such a diet.  Also, if we can grow meat in labs, why become vegan?

http://www.nytimes.com/2009/05/27/books/27garn.html

http://www.freepatentsonline.com/y2006/0029922.html

Humans also are not evolved to sit in front of computer screens, and there are disadvantages to such a habit. It can lead to poor posture, make you fat because of lack of exercise etc. Should you therefore not use computers? Or with a bit of effort, is it possible to use computers so that the benefits outweigh the costs?

I’m not gonna get into a long discussion about the challenges of veganism here; suffice it to say that it should be easy for anyone who’s interested to find lots of vegans who pull their diet off very well.

And I’m very much for growing meat in labs, but you’re not eating lab meat yet. You’re still eating the old-fashioned poorly treated dead sentient beings, while having veganism as an immediately feasible option that you choose to ignore (as I myself also ignore, part of the time).

Are you sure you really understand how singularity “works”?
Why are Uploading, Singularity and AGI bubbles so far apart?
As soon as we have AGI there won’t be any problem speeding it up thus leading to disruptive progress.

@Mark

I agree that there are some disadvantages to being vegan, but for most people who do it with a bit of research and common sense, health is not one of them.

As Aleksei points out, whether we “evolved” to be vegans, or whether it’s natural, is pretty much irrelevant to whether we ought to go vegan, unless that’s the only information we have.

There is good evidence it can be healthy:

American Dietetic Association recent position paper on Vegetarian Diets http://bit.ly/2EERvk

Also, like Aleksei, I’m very much in favor of cultured meat; see:

Why Cultured Meat: http://bit.ly/kTtW9

