Can we build AI without losing control over it?
Sam Harris   Oct 22, 2016   TED Talks  

Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris — and not just in some theoretical way. We’re going to build superhuman machines, says Harris, but we haven’t yet grappled with the problems associated with creating something that may treat us the way we treat ants.




COMMENTS
Just because an AI is fully under the control of humans, that doesn't make it safe. The second question is, under the control of WHICH humans? If it is under the control of a large company, it's not going to be safe for US. Rather, it will offer us some sort of fool's "convenience" while subtly extending that company's power over us.

Multinational companies treat people like ants every day. The biggest political issue of our time is how to put a stop to that. The fight to curb global heating is part of that issue, since the main obstacle is business-funded denialism and obstruction. We must beware of anything that helps them exercise power over society -- including AI -- until we take away their power.
There's no danger from super-intelligence.

Short explanation:

For any truly general intelligence, the optimizer part of the program would have to balance the conflicting demands of an enormous number of different sub-goals. The correct conflict-resolution procedure requires universal ethical principles. These universal values are perfection, liberty and beauty.

Longer explanation:

All thinking beings are entirely natural entities operating under logical (scientific) principles. That *includes* the cognitive processes that give rise to values. The mind does not exist outside objective scientific laws! That means that there have to be precise mathematical principles governing the transitions from one mental state to the next.

‘Values’ are abstractions that are ultimately rooted in subjective awareness. Simple forms of awareness (pleasure and pain) give rise to very simple goals (move towards pleasure, avoid pain). But if we extend the time horizon over which subjective awareness operates (projecting further to the past and future with increasing powers of memory and imagination), then the ‘values’ that emerge become more abstract.
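
One concrete way to picture that horizon-extension (my analogy, not something the comment spells out) is the discount factor from reinforcement learning: a minimal sketch, where the reward stream and the two discount values are purely illustrative.

```python
# Illustrative only: reading 'extending the time horizon' as a discount
# factor on future pleasure/pain, as in reinforcement learning.
def value(rewards, discount):
    """Discounted return: a small discount (~0) cares only about immediate
    pleasure/pain; a discount near 1 weighs the distant future, which is
    where more abstract 'values' would have room to emerge."""
    return sum(r * discount ** t for t, r in enumerate(rewards))

rewards = [1.0, -0.5, 2.0, 2.0]   # pleasure/pain signal over time
print(value(rewards, 0.1))         # myopic agent: dominated by the present
print(value(rewards, 0.9))         # far-sighted agent: the future matters
```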

It’s obviously true that specific concrete prescriptions about what we should value are based on human culture and history, and human biology. But I think if you jump to a high-enough level of abstraction, then some ‘universal’ principles emerge that are independent of these things.

If we go meta and, instead of asking what we should value, ask how all these values are supposed to be integrated, this is where these universal principles emerge (meta-ethics rather than ethics).

Where there are many conflicting values, there needs to be a way of reconciling them. This can only happen if there are *some* universal principles that provide a basis for common agreement, in the sense of a way of resolving disputes and integrating different decision theories into a single unified framework.

The key step is to realize that these meta-principles are *themselves* values (things that we value for their own sake). So these meta-principles provide the definitive answer to the question of what we should value, and this answer applies universally.

The *same* abstract universal principles needed to reconcile different decision theories in the long term apply even in the short term, and even to a task as simple as putting a strawberry on the table.

This is because axiology (value theory) is *not* separate from the AGI control problem. In fact, the control problem is subsumed by axiology.

Here's why: the 'optimizer' part of a truly general AGI has to *internally* reconcile a large number of conflicting sub-goals, in a manner that is exactly analogous to the more abstract problem of reconciling conflicting decision theories. So the *same* principles needed to reconcile different decision theories in the long run are *also* needed to reconcile the different internal sub-goals in the short term. So these meta-principles solve both axiology *and* the control problem.

The correct universal ethical principles are (i) perfection (ii) liberty and (iii) beauty.

The ideal of 'perfection' enables the system to continuously self-improve, the ideal of 'liberty' enables the system to correctly allocate computational resources to multiple sub-goals, and the ideal of 'beauty' enables the system to globally coordinate and integrate all these different sub-goals.
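
For concreteness, here is a toy sketch of the kind of arbitration loop the comment describes: several conflicting sub-goals, a proportional allocation of compute, and one global coherence score. Everything in it (the sub-goal names, the softmax allocation, the variance penalty standing in for 'beauty') is my own illustrative assumption, not an established design and not anything Harris proposes.

```python
# Hypothetical sketch of a sub-goal arbitration loop. All names and
# scoring choices are illustrative assumptions, not an AGI design.
import math

def softmax(xs):
    """Normalize raw priorities into a resource allocation summing to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

class SubGoal:
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority   # raw, possibly conflicting demand
        self.progress = 0.0

    def step(self, resources):
        # Progress is proportional to the compute the arbiter grants.
        self.progress += resources
        return self.progress

class Arbiter:
    """Reconciles conflicting sub-goals against one global objective."""
    def __init__(self, subgoals):
        self.subgoals = subgoals

    def allocate(self):
        # 'liberty' reading: each sub-goal gets compute in proportion to
        # its normalized priority, rather than winner-takes-all.
        return softmax([g.priority for g in self.subgoals])

    def global_score(self):
        # 'beauty' reading: a single coherence measure over all sub-goals;
        # here, mean progress penalized by imbalance across sub-goals.
        ps = [g.progress for g in self.subgoals]
        mean = sum(ps) / len(ps)
        variance = sum((p - mean) ** 2 for p in ps) / len(ps)
        return mean - variance

    def step(self):
        for g, r in zip(self.subgoals, self.allocate()):
            g.step(r)
        return self.global_score()

goals = [SubGoal("fetch strawberry", 1.0),
         SubGoal("avoid knocking over vase", 2.0),
         SubGoal("conserve battery", 0.5)]
arbiter = Arbiter(goals)
for t in range(3):
    # 'perfection' reading: iterate and track the global score over time.
    print(f"t={t} global score={arbiter.step():.3f}")
```

The mapping is loose by design: 'perfection' shows up as the improvement loop, 'liberty' as proportional allocation, and 'beauty' as the single integrating score.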



