System safeguards for A.I. are unlikely to make any difference
Khannea Suntzu   Feb 25, 2016

A decade ago AI research wasn’t as hot as it is now, but in 2016 AI is very much a profitable endeavor. Many now argue that AI carries the risk of (a) mass unemployment, (b) mass political destabilization (for instance, mass abuse of intelligent drones by terrorists), or even (c) a hard take-off of self-improving AI triggering a so-called “singularity”, which, put very briefly, we might describe as “a point beyond which we don’t have a clue what happens next”.

Specialists in the field agree: (a) is already happening and will get much worse; (b) is already here or less than a decade away; whereas we shouldn’t be overly concerned (yet) that (c) will happen in the next ten years.

Many people in the AI field are nonetheless ringing alarm bells that we should do something. The most commonly quoted argument (in particular by MIRI and the Future of Humanity Institute) is that we should invest substantial resources in so-called “friendly AI”. I tend to agree, but looking at the world as it exists right now there is ample evidence that even safety mechanisms designed to protect the very most vulnerable fail completely and publicly.

Here’s a very sad example:

– a situation where everyone agrees there is a serious problem

– a situation where the bad actor clearly and openly conspires against the legal system or framework

– a situation where all parties conspire not to deal with the problem

– a situation where the problem is widely exposed in the media

– a situation where expressing anger at the problem can get you jailed

– … and even then nothing happens, and we all proudly declare “the system works” while everyone else sees that the system completely failed.


The problem with AI systems is that they are extremely profitable in the short run, and their profits tend to accrue to people who are already obscenely powerful and affluent. That essentially means we enter a Robocop scenario, where corporate control will almost certainly implement protections against loss of revenue. Take for instance the TPP and item (a) above: it is conceivable that an automation corporation could offer other corporations vast benefits from robotization (and consequently lay off most workers as redundant), and under the Trans-Pacific Partnership legal framework, if a country restricts such automation in favor of employing more of its citizens, the corporation providing the automation service could sue the country for lost revenues.

I conclude there are next to no reliable ways to protect against major calamities with AI. All existing systems are already openly conspiring against any such mechanism or infrastructure.

I suppose we’ll know before 2030 how things go, but looking at just how corrupt academia, legal systems, governments, and NGOs have become worldwide in the last few decades, I am not holding my breath.

And in case you were wondering about the example interjected above, look here (lousy grammar warning). Let them know what you think – and in doing so there’s a clear indication of how to address the hard take-off scenario: more people should know, start giving a damn, and speak out against it. Apathy can get us all into major trouble.

Khannea Suntzu is a politically left leaning futurist and activist in the Netherlands.


We certainly don’t want to be apathetic. We also don’t want to be in a state of learned helplessness, where the double-edged sword each new technology brings threatens to mock that atomic apocalyptic clock. Still, as a cosmist I choose hope, if only for the increased likelihood that I live in a portion of the multiverse where guys like me do too.

Anyway, I think the biggest thing we need to be concerned with moving forward is agency. Power corrupts, certainly, as the saying goes, but what are the ends of that power? If we’re talking about building gods, I’d rather build ourselves into gods than build some artilect.

I talk to young people who are getting into machine learning. Most of them know nothing of transhumanism generally, let alone the Friendliness problem. “So have you seen I, Robot with Will Smith?” Uuaaaghh.

The ones who are trying to put AGI in a lockbox are truly lost. Your pet project might be safe, but what about the existential risk of another project? What about some form of AGI you hadn’t planned for?

Anyway, eventually power is going to centralize. But there are enabling technologies that don’t. We should celebrate those technologies. I for one am going to start a hydroponic garden this spring and get some more healthy food into my diet. Sure, if we eventually allow everyone in the world to reach the unsustainable quality of life of the first world without some serious innovation, we will go extinct. Maybe one day there’ll be an insurmountable problem, but I don’t think it’s Friendliness.


