What If Your Robot Is the Devil?
Patrick Lin   Jun 15, 2011   Ethical Technology  

Should we regulate the creation of autonomous robots? If yes, then why not also regulate the creation of autonomous humans?


In robotics, one source of ethical hand-wringing is the increasing autonomy we are giving to machines. Literature and film warn us that it’s a bad idea to let robots make their own choices, since we may be harmed by some of those choices. So, the argument goes, we need to restrict or regulate efforts to endow robots or artificial intelligence (AI) with autonomy.

This position seems reasonable at first glance, but I suggest that it may be inconsistent with how we act in the real world.

Never mind other harmful technologies we’ve developed, from anthrax to the atomic bomb: there’s something about machine autonomy that gives us extra pause. For instance, in one possible future, the helpful robot servants we employ in our homes may decide that humans are a pox on the world and want to murder us. Or the military robots we create to defend our society could turn against us. This is a kind of treachery we don’t see with other inventions.

Yet, we create autonomous things all the time without this moral anxiety, don’t we? Consider the organic machines we create: children. The worst evils in history have been caused by humans, all of whom start out as children to the parents who created them.

Except for the parents in The Omen and Rosemary’s Baby, we usually don’t struggle with the risk that our kids might turn bad when we decide to bring them into the world. This risk, though very real, is simply ignored.

So how do we explain this ethical schizophrenia? We worry about creating autonomous robots, but not autonomous humans. Robots have very little history of harming innocents or the ecosystem, but we know with certainty that humans—though impossibly cute as babies—have a limitless capacity for destruction. As Ralph Waldo Emerson put it, “A child is a curly, dimpled lunatic.” And some adults are too.

Is there a difference that makes a difference?

My suggestion is this: If creating children is morally unproblematic, then so is creating autonomous robots, unless we can identify morally relevant differences between the two acts.

Of course, we instinctively want to defend our right to have children and show that kids are different from autonomous robots. But what exactly is the moral issue with creating robots that is avoided when we create human beings? Or, in other words, when we’re talking about autonomous beings, why is the responsibility of the parent seemingly less than the responsibility of an inventor?

We could perhaps reply that humans offer greater benefits than robots do, and that these benefits outweigh the risks, so having kids doesn’t elicit the same moral panic. But this position is difficult to maintain, as robots are and will be used for equally valuable roles, such as difficult surgery, hunting terrorists, devising scientific theories, caring for our elderly and our children, and even as surrogate partners or friends. Further, the odds that any individual child will generate benefits to society seem to be less certain than the odds of a robot generating benefits, since the robot would be designed to do exactly that.

Again, many things can (and do) go wrong in raising children, so the risks in creating autonomous humans are significant and well known. Some individuals—for example, Adolf Hitler, Pol Pot, Saddam Hussein—are directly responsible for millions of deaths and untold suffering. Meanwhile, a single robot might be responsible for tens of deaths, or hundreds or thousands at most. So a risk-versus-reward analysis likely will not do the job in explaining why we ought to be precautionary about robots but not children.
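To put the comparison in expected-harm terms, here is a back-of-the-envelope sketch; every number in it is invented purely for illustration, not an estimate:

    # Expected harm = P(a creation turns out destructive) x magnitude of the harm.
    # All figures below are illustrative assumptions, not data.
    p_bad_child, harm_child = 1e-6, 1e7   # one-in-a-million child; millions of deaths
    p_bad_robot, harm_robot = 1e-3, 1e3   # one-in-a-thousand robot; thousands of deaths

    print(p_bad_child * harm_child)   # 10.0 expected deaths per child created
    print(p_bad_robot * harm_robot)   # 1.0 expected death per robot created

On those made-up numbers, the expected harm of creating a child is no smaller than that of creating a robot, which is the point: the raw arithmetic alone does not single out robots for precaution.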

OK, then perhaps we could point out that a robot is artificially created and children are naturally created. But how exactly is this difference ethically relevant? Moreover, not all children are “naturally” created, when we consider fertility treatments, and there is presumably no concern related to the autonomy of those “artificial” beings we bring into existence. Likewise, we could point to an inorganic/organic difference to explain our moral schism, but while that difference is real, it doesn’t seem to make a moral difference. If robots someday could be biologically grown, then this position commits us to treating bio-robots and humans the same, at least on the issue under consideration.

Another defense could be to assert that we have a natural right to have children, perhaps given a divine command to “be fruitful and multiply.” But if we take that command seriously now, with everyone having a dozen or so offspring, it would cause an unsustainable population boom; and this is unethical to the extent that it would lead to suffering and a sharp drop in quality of life, given severe pressure on family budgets, natural resources, jobs, and so on.

In any case, such a divine command might be a prescription for human procreation (as if we were waiting for permission), but it is not a proscription on creating autonomous machines. The reasons in support of a natural human right to procreate also seem to support a right to create technology and tools; for instance, like having kids, making tools is fulfilling, it helps us better survive, and so on.

Still another defense could be that we cannot help but reproduce, given a biological compulsion to have sex and, in some, a deep-seated need to become a parent; in contrast, we have a choice to make robots or not. But this reply makes a virtue out of necessity, or the well-known mistake of deriving “ought” from “is”: Just because we are biologically driven to do something—whether it’s to reproduce, fight wars, cheat on spouses, or overeat—is not a good reason to declare that act as ethical. Anyway, we also seem to be hardwired to develop tools and technologies, in which case the same defense would imply that it is ethical to develop autonomous robots without restriction.

If it appears that we are continually running into dead ends here, we could “bite the bullet” in at least a couple of ways: We could deny any obligation to tread carefully in developing autonomous robots or AI. But if this implies we are never obligated to avoid manufacturing dangerous or risky products, then we’d want to resist this highly counterintuitive position.

Or we could concede that our unrestricted practice of human reproduction is hypocritical and must be changed—that is, we should start to seriously consider the risks posed by creating any autonomous beings, whether children or robots. And this may imply an obligation for greater education and parental supervision to ensure children don’t cause harm to individuals or humanity, including possibly the state’s regulation of procreation. It could also imply the need for a social insurance market against the contingency of wretched offspring. But most of us would want to resist those implications as well.

Ultimately, it could be that there is a defensible moral difference between creating children and autonomous robots. But it is not obvious what that difference is, despite our taking it for granted. Our search for that answer can illuminate our ethical responsibility in developing autonomous robots, especially as some fears about robots seem to be a projection of fears about ourselves—we know what kind of devils we can be.

Dr. Patrick Lin is a former IEET fellow, an associate philosophy professor at California Polytechnic State University, San Luis Obispo, and director of its Ethics + Emerging Sciences Group. He was previously an ethics fellow at the US Naval Academy and a post-doctoral associate at Dartmouth College.



COMMENTS

There is a slight difference, however:

Machines are potentially far more powerful than humans.

While it’s true that autonomous humans can cause great tragedies, autonomous machines potentially could cause extinction. They will simply be better than us in any way that matters to our survival (smarter, faster, stronger).

There’s also the danger of turning our planet into a collection of perfect paper clips.

That said, I think that the safest option is to become one with the machines as soon as possible.

At the same time, I highly doubt that any kind of regulation would be effective.

Humans spread ideas slowly and imperfectly. Robots may eventually have the ability to either transfer their personalities wholesale to each other, or at least the ability to overwrite portions of personality. Much more so than with humans, one bad apple can spoil the bunch.

Also, children are largely ineffectual compared to adults.  Death-dealing super-computing robots less so.  It’s the same reason we keep cats as pets but not tigers.  The overarching personality is much the same, but cats can’t kill us accidentally.

Put the two together and we have fundamental extra worries that we don’t have today.

I agree with both iPan and Aaron about the difference between humans and machines: it’s a difference of scale and power. The stakes are just higher.

iPan will not be surprised to know that I don’t entirely share his doubts that “any kind of regulation would be effective”, but let’s not get back into that debate just yet. :) I’m more interested in exploring the idea that the safest option is to become one with the machines as soon as possible. I guess my questions are: what does that mean exactly, and how do we do it?

Essentially, cybernetics, Peter.

The idea is that there won’t be any competition between us and machines (whether this is intentional on the part of the machines or accidental - terminator or paperclip factory), when we are machines. When we become the SAI (superintelligent AI) through BCI (brain-computer interface), we simply remove the us vs. them conflict, because we are them.

I think part of the fear that derives from autonomous machines, and that is not there with other potentially dangerous inventions, is that machines have their own intelligence (super-intelligence) and therefore we feel out of control.

I’d agree that the possibility of superintelligence is worrisome.  Not only could we not control it, we likely couldn’t stop it (as Nick Bostrom has pointed out before), since we’re less intelligent by definition.

But I think the worry about scale and power doesn’t come into play until later.  Robots are certainly force-multipliers—that’s a big part of why we want to develop them.  In small numbers, it seems that a human with a gun can do more damage than a robot with a gun: the human is more agile/elusive, can recruit/coerce other humans in causing mayhem, and so on.

It’s only when there’s a critical mass of robots that the issue of scale and power is relevant, no?  If that’s so, then the issue might speak to the need to control the population of autonomous robots, but it doesn’t seem to be much of an argument for why we shouldn’t develop a single fully-autonomous robot in the first place.  (It’s also too big a leap to say that the first such robot will lead to self-replication of a robot army, since they’d need to win control over production lines as well as raw materials, energy, etc.)

Also, note that human offspring have their own intelligence too—even if not superintelligence—and this also leads to an out-of-control feeling.  (If you have kids, you’ll know what I mean.)  Yet this feeling doesn’t prompt a serious discussion about regulating procreation, like it does for regulating autonomous robotics.  So this is another example of what I mean by “ethical schizophrenia” or an inconsistency in our beliefs.

@Patrick Lin:

You said:

“the human is more agile/elusive”

Not for long.

High Speed Robotic Hands
http://www.youtube.com/watch?v=bfdHY26E2jc

“can recruit/coerce other humans in causing mayhem”

It’s not the robotic chassis (at least, not merely one of them), but the AI that is potentially dangerous. Recruiting = copying.

“It’s only when there’s a critical mass of robots that the issue of scale and power is relevant, no? If that’s so, then the issue might speak to the need to control the population of autonomous robots, but it doesn’t seem to be much of an argument for why we shouldn’t develop a single fully-autonomous robot in the first place. (It’s also too big a leap to say that the first such robot will lead to self-replication of a robot army, since they’d need to win control over production lines as well as raw materials, energy, etc.)”

Except that those automation lines themselves are already automated/electronic (driven by computers). I imagine it wouldn’t be that difficult for an autonomous AI to take control of an assembly line. It might be more difficult to defend that assembly line in the early stages, and to collect the raw resources to build a sufficient army (all of this also takes mining and refining, and most factories don’t produce all of the parts for a machine in a single factory - they all rely on infrastructure, so there would be time to bomb a factory that had been taken over). UNLESS it used easy-to-find resources, like carbon, and had a large enough stockpile of the rarer elements it needed (gold, germanium, CNTs, etc.).

I guess it would come down to how fast it could get this factory running at full production, pumping out new copies of itself, so there are a lot of unknown factors. But as for taking command of such a factory (hacking all of its electronics), I see no difficulty in that.

“Also, note that human offspring have their own intelligence too—even if not superintelligence—and this also leads to an out-of-control feeling. (If you have kids, you’ll know what I mean.) Yet this feeling doesn’t prompt a serious discussion about regulating procreation, like it does for regulating autonomous robotics.”

I seem to recall a heavily debated article by Hank Pellissier recently on Parent Licensing :)

Actually, regulating procreation is as old as…..um….agriculture? religion? older? Yeah, definitely really, really old.

I think the fear is that a robot might be fundamentally opposed to humanity in a way that a human child, through biology and socialisation, would not be. It is true that one robot on its own would have little chance of accomplishing any goal of destroying all humans, but it’s unlikely such a robot would be developed except with a long-term goal of creating numerous copies.

We basically fear that the genocide humans visit on humans would be visited on all humanity. And given that the robots would have little or no interest in common with us, I’m not sure it’s that outrageous a possibility.

“All technology should be assumed guilty until proven innocent”
~David Brower

“If we continue to develop our technology without wisdom or prudence, our servant may prove to be our executioner.”
~Omar Bradley

“Technological progress is like an axe in the hands of a pathological criminal.”
~Albert Einstein

Wiser folk than me have said it before and far better…

Good point, Cathyby.  It’s reasonable to think that the average human baby will be a social and political animal who has some stake in humanity, whereas the robot doesn’t seem to have any natural allegiances to our world.

But even if the odds of giving birth to a psycho/sociopath are low, given that there are so many of us, this still translates into a significant number of evil people. Again, think about the damage that Hitler or another individual has brought upon the world—this is a real worry, because it has happened many times before. Worse, I’m sure there are some individuals today who’d love to see the end of the world.

The same risk with robots is only theoretical at this point, and the creation of autonomous robots isn’t and wouldn’t be as widespread as human reproduction.  So the odds of any particular robot going berserk are comparatively much lower. 

But you still make a fair point: We have some reason to believe that humans will continue to be trustworthy (at least on average or to some degree), while we have no reason to think that of autonomous robots.

Peter Wicks writes

“iPan will not be surprised to know that I don’t entirely share his doubts that ‘any kind of regulation would be effective’”

And I’d like to point out the increasing proliferation of DIY and/or open source, and/or cheaper and cheaper robot kits

(I was reminded of this by the following articles)

High-precision robots available in kit form
http://www.physorg.com/news/2011-06-high-precision-robots-kit.html

(this link is to a thread I started at kurzweilai.net about open source)
http://www.kurzweilai.net/forums/topic/ten-fold-increase-in-open-access-scientific-publishing

There are also several projects to create universal and open sourced robotic operating systems.

In other words, how far can regulation go? Maybe we can regulate some big corporations, maybe we can even throw a handful of hobbyists in jail when they’re caught.

I made the following argument at Michael Anissimov’s site, Accelerating Future (which is basically an iteration of the ‘if you criminalize it, it will merely go underground’ argument applied to sentient machines):

http://www.acceleratingfuture.com/michael/blog/2011/06/steve-wozniak-a-singularitarian/#comments

comment by iPan
There is some probability that some AI hobbyist somewhere in the world is capable of making a self-improving AI in their garage.
With each passing year, hardware performance gets better, and general knowledge about AI increases (especially due to open source projects).
We can only control those AI projects that are public (government or university funded) – in plain sight.
How do you plan to stop everyone who is secretly trying to make their own super-AI? There must be thousands of them now, and I can only imagine that as interest increases, available computing power increases, and general knowledge about AI increases, that at some point, someone, somewhere, is bound to create one.
I guess we could try and ban it, but does anyone seriously believe that would stop people?
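The “bound to happen” intuition is just compounding probability. A toy calculation, with the per-project success rate invented purely for illustration:

    # P(at least one of n independent garage projects succeeds) = 1 - (1 - p)^n
    p = 1e-4  # assumed yearly success chance of any single hobbyist project (made up)
    for n in (1000, 10000, 100000):
        print(n, 1 - (1 - p) ** n)
    # 1000   -> ~0.095   (unlikely)
    # 10000  -> ~0.632   (more likely than not)
    # 100000 -> ~0.99995 (near certainty)

Even a tiny per-project success rate, multiplied across enough independent attempts, pushes the odds toward certainty.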

The “if you criminalize it, it will merely go underground” argument (as well as the “regulation won’t eradicate the problem” variant) is a reasonable consideration.  But it can’t be a debate stopper: 

For instance, no amount of regulation will ever prevent murders or rapes—but that’s not a reason to not make them illegal.  So if regulating AI research (or procreation) is the right thing to do given serious risks, then it should be considered, even if it only postpones the inevitable…

Also, criminalisation is not the only kind of “regulation”. In fact we could maybe do with clarifying exactly what we mean by “regulation”. Does it include basically any kind of policy intervention (such as awareness raising, for example)?

Ultimately this comes down to the extent to which we think we can consciously influence events. Policy intervention - whether using law, financial incentives, awareness raising/communication, or something else I haven’t thought of - is basically a collective (and hopefully democratic) form of conscious decision-making. As long as we still believe in the power and/or desirability of nation states or similar political entities (such as the EU) - and I know there are different views on this - then we have a form of collective decision-making that we call “policy” and in some cases “regulation”.

One view would be that technology wants superintelligent AI, and there’s nothing we can do to stop it. Personally I find this disempowering. I prefer to believe that we can if we want to. Equally, I prefer to believe that we can create benign AGI and avoid pathological AGI.

On merging with machines, there is an issue of identity. To what extent do we/should we identify with future selves that are significantly different from our current selves, e.g. because they are cyborgs? Or with a future world in which “natural” humans have been replaced by cyborgs? I care about the here and now, and the world I currently inhabit, and I care about where we are going, but there comes a point where it just becomes all too weird to really care, let alone want it to happen. Maybe we really want to preserve the world more-or-less as it is, without all these cyborgs? Or if we want them to be around, we at least don’t want to BE them. I know that’s not very technoprogressive…

great article Patrick! 
As iPan noted, there are ethical similarities
in it to the article I wrote last month—
“Ban Baby-Making Unless Parents Are Licensed”

I agree with you, that if we’re so concerned about our “creations” -
it makes sense to regulate human reproduction as well

I also agree with your note in comments that regulating AI is a wise thing to do—thanks for pointing out the illogic of the opposition.

Agree with Peter.

As for the rate of psychopathy I mentioned earlier, apparently it’s about 1 in 100, which is a lot.  In a nation (US) of 300M, that’s 3M psychos out there.

http://www.nytimes.com/2011/06/19/books/review/im-ok-youre-a-psychopath.html?_r=2&partner=rssnyt&emc=rss

Thanks, Hank.  Somehow, I missed your article the first time around (I must have been traveling), which is here: http://ieet.org/index.php/IEET/more/pellissier20110420

I’ve mentioned in previous posts that there are certain “sacred cows” in a liberal democracy that are not to be questioned, and an unrestricted right to have kids seems to be one of them.  But if we want an intellectually honest discussion, we can’t be afraid to at least ask *why* sacred cows have that status, i.e., what justifies them, given their overall risk profile?  So thanks for helping to move that conversation forward.

@Peter
I agree there are ‘softer’ forms of regulation that do not require enforcement, and that leads to my solution that I’ll get to in a moment.
However, in general, ‘regulation’ sort of implies enforcement. What’s the use of saying “Stop! Or I’ll say Stop! again!” Without enforcement, it’s just a suggestion or advice.
I don’t think anyone can argue with that.

I am convinced that if we take an authoritarian path with emerging technology, we are certain to enter an arms race that will result in exactly the technopocalypse that people fear.

In fact, it’s already started. Stuxnet. Anonymous. LulzSec. The NSA and its proposed exaflop machine/datacenter (wonder what that’s for…hmmmmm?)

If we continue this escalation, the consequences will likely be worse than the nuclear arms races.

I’ve determined that the only solution is to transition to an egalitarian society, yesterday.

We’re already late to the party, so now we have to double time, which is where Anonymous comes in. Nam-shub the hierarchy. Infect the dominant memeplex.

Welcome to the center of the information war.

@Peter
On your point about identity and cyborgization. In my view, biology is already cybernetic, so it looks to me like the next natural step in evolution. This is really no different than the Cambrian Explosion, for example.

@Hank

“I also agree with your note in comments that regulating AI is a wise thing to do—thanks for pointing out the illogic of the opposition.”

Tsk tsk ;)

I’ve observed a really fascinating, and at the same time horrifying behavior in humans.

If I were to give a human two choices, let’s label them A and B…

Choice A is doom, but it is certain. There is no probability in it, it is set in stone. Not immediate doom, but certain doom.

Choice B is unknown. It contains a probability that you might be doomed, and a probability that you might not be, but those probabilities are unknown, and impossible to figure out before the choice actually occurs.

Guess what the majority of people on this planet will choose?

Choice A.

Most people gravitate towards certainty more often than hope.

Here are the choices when it comes to trying to enforce any kind of regulation on AI (or IA):

Choice A. Authoritarianism. Enforcement of laws and policies designed to punish and/or imprison people who pursue these technologies without a license/supervision.

Result: Certain AI arms race between the power players, with simultaneous underground proliferation in illegal basement ‘labs’.

Choice B. Transition to a global egalitarianism. Probability of survival, but also probability of death.

@Patrick Lin
While there are large numbers of psychopaths, there is a vanishingly small number of people who want to destroy all of humanity. Most psychopaths content themselves with being as successful as their native talents and disregard for others can allow.

Neither do all psychopaths have an end in common. If one decides to destroy humanity, he’s one in billions. It’s plausible that many robots built on the same design, developed in the same environment, would think more alike.

But what would they think? I’m not convinced that robots would want to destroy us all, as an idea they’d come up with themselves. Really depends on what an autonomous robot might want. Could you make an autonomous entity that didn’t have its own desires?

And what those are affects how hazardous it would be. A robot intelligently fixing underground pipes, which has positive feelings doing the work and positive feelings when all the pipes are in good order - hard to imagine it wanting anything else. All depends on the robot’s programming. Is that type of robot autonomous though?

Re regulation, wouldn’t you want to restrict the building or owning of a robot programmable with the desire to kill people, purely for the same reasons government controls weaponry? Even if it’s not autonomous it could be very dangerous :)

@ iPan
You really think people prefer certain doom to a chance of survival? I think that’s absolutely not the case. The problem arises in trying to convince them they are doomed.

The difference is that children have hundreds of thousands of years of genetic programming to ensure a reasonable level of confidence that they will be compatible with society.  Robots are an unknown in their programming.  Creating a robot is much more like creating a new species, and the innate revulsion is similar.

@Cathyby

What I’m getting at is that people would choose a Pyrrhic victory over a choice where there is an equal chance they could win or lose.

Between two choices, one where failure is certain and one where failure and success have equal chances, yes, I believe people would choose the first, because it appears to me that the certainty of knowing (even if the answer is not good) is chosen more often than uncertainty.

I think people fear uncertainty more than failure.

You need look no further than Mutually Assured Destruction, but this is also backed up by game theory, in which players will punish ‘cheaters’ even when it costs them more to do so.

@iPan

If people preferred certain loss over uncertainty, should the vast majority not be committing suicide? Certain death over possibly (not certainly) happy lives?

I disagree with your interpretation of your examples.

Mutually Assured Destruction came about because neither side wanted to lose. If the USSR had only a few nukes and the US many, the US could come out on top from a nuclear exchange, and vice versa. That led to an arms race and the paradoxical peace caused by neither being able to use nukes for fear it would lead to their own destruction (massive retaliation). If people craved certainty, surely this knife-edge would have been unbearable - yet neither side preferred certain destruction over uncertainty.

In the game-theory scenario you outline, the person is offered an inadequate amount, coupled with a large amount for his/her partner, or nothing for both of them. Both outcomes are certain. What the scenario shows is that punishing others who make deals seen as unfair trumps the benefit of getting something for nothing. This makes evolutionary sense: in early man, encounters with those outside the immediate group were rare. Most encounters would be one of a stream of interactions. Punishing a cheater meant giving him/her a reason not to cheat in your next (inevitable) encounter.
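For reference, the scenario described is the ultimatum game. A minimal sketch of its payoffs (the stake of 10 is arbitrary):

    # Ultimatum game: a proposer offers a split of a stake of 10; the responder
    # either accepts (each gets their share) or rejects (both get nothing).
    def ultimatum(offer_to_responder, accept):
        return (10 - offer_to_responder, offer_to_responder) if accept else (0, 0)

    print(ultimatum(1, True))    # (9, 1): accepting even an unfair split beats nothing
    print(ultimatum(1, False))   # (0, 0): rejecting punishes the proposer, at a cost

Rejecting the (9, 1) split is “irrational” in a one-shot game, but as a standing disposition it deters cheaters across repeated encounters.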

I think iPan has a point about people choosing certain (but long-term) doom to uncertain hope. Think of the frogs in the saucepan. We’re just wired to respond to immediate threats, and uncertain hope feels more like a threat at this instinctual (stone-age) level than certain but long-term doom.

The questions I would have for iPan are (i) is the choice really between authoritarianism and global egalitarianism? (ii) what does global egalitarianism mean, what would it look like? (iii) why shouldn’t regulation be a means to get there?

Hi Peter

(i) Yes, I think so. To me, authoritarianism contains a fundamental contradiction, which multiplies through complex society. I’ll try to illustrate with a simplification.

The emergence of authority, in ancient times, goes something like this:

Predatory humans kill and rob each other. Today, we would call this a “crime”. The essential nature of these acts is a theft of autonomy. For example, if someone stabs you, they are doing something that you certainly wouldn’t choose to do to yourself. If they rob you, they are doing something to your “property” (an extension of your “self”) that you would not choose to do. In either case, when stripped of circumstances, we can see that everything we label as “crime” boils down to someone changing reality in a way that is opposed to your own will. Exploitation.

So, after much time, humans learned to invest ‘authority’ in others to guard against those few who acted this way. We created ‘protectors’ - ancient police and other guardians - and it came at a price. Those who were good with the sword spent their time doing that, and not learning the plough. So those who knew the plough traded the products of their work for the skill of those who knew the sword. And everything else is a complex structure built upon this relationship.

In the exchange of energy between those who trade the products of sword-skill for plough-skill, there is an imbalance. To over-simplify again, all ‘protection’ is bought at a premium. It’s as if we were being robbed of $10 every month on average, but we pay our ‘protectors’ $100 every month to prevent the thieves from taking the $10. In modern society, this cost is spread so wide and amongst so many people that we don’t notice. We’ve been anesthetized by the leeches.

It’s become systemic. Many people enter authoritarian professions (whether this be police, military, or government) thinking idealistically. They are charmed by the idea of being a noble protector.

However, due to the imbalance in the energy exchange, all authoritarian constructs must systematically become corrupt. Some will do this faster or slower than others.

The American system, for example, is a pretty good holdout for personal freedom. In this world, there are still places where feudalism holds. There are places that are a living hell on earth.

And yet, even in places like America, power corrupts. It takes a lot more brains to exploit people in America (the methods of exploitation are more complex - you can’t just point a gun at someone - you have to control the media, control the polls, etc.)

(ii) Look at social media. Check out some of Clay Shirky’s videos at TED.
It’s still in development right now, so I’m just going to throw a laundry list of proto-ideas at you.

Whuffie
Reputation/Social Capital
Manfred Macx of Accelerando
Open Source
Transparency
Real Time Web Based Politics
Voluntary Taxation (Donation Taxation)
Pay What You Want
Abundance based economy as opposed to scarcity based economy
Sustainability vs. Exploitation

to answer (iii)

Here’s a visceral example:

The difference between a ‘Stop’ sign, and a ‘Yield’ sign.

The difference between Stop Lights, and roundabouts.

The difference between Tai Chi, and Boxing.

The difference between ‘do as I say’ and ‘do as I do’.

The difference between the carrot and the stick.

The difference between harm reduction and the war on some drugs.

Anarchy, it’s not the Law, it’s just a good idea.

It has been said that democracy is the worst form of government except all the others that have been tried.
Sir Winston Churchill
British politician (1874 - 1965)

I apply the same thought process.

“Capitalism is the worst form of economy, except all others….”

and so on.

My main point is that it’s time to try anarchy, because the paradigm is stale.

We can argue over whether our current paradigm has also brought benefits, but I think it’s fairly clear that outdated models are simply outdated.

What I find most encouraging, is that I don’t have to convince a lot of people:

Technology is forcing it. Social media is disrupting the status quo. It’s no longer a matter of convincing enough people to go for it, it’s more a matter of helping people adjust.

Transparency and open source are coming whether we like it or not, it’s driven by accelerating technology. The choice lies within how we traverse the transition. How many people will we jail, how many will we kill, how many will we impoverish, as we make the transition?

Evolution is demanding a new paradigm, and we can surf these changes, or we can fight the current.

My goal is to try and illuminate it, so people will put up less resistance to it, make the transition smoother, and hopefully avoid some suffering. But regardless, we will make the transition.

Not to take this thread too far off topic:

Death/entropy is also very stable, but that’s not a reason to rush toward it, right?

I’ve heard you (iPan) push for anarchy before, and I confess I’m not clear on your vision.  But isn’t anarchy simply libertarianism taken to its logical conclusion or extreme? If so, then the same criticisms of libertarianism (anarchy-lite) would apply to your vision too, right?

@Patrick

Yeah, more or less.

Libertarians, in my not so humble opinion, are basically pussy Anarchists.

They just aren’t quite ready to go all the way.

