Are Prediction and Reward Relevant to Superintelligences?
Ben Goertzel
2012-03-24

Obviously, in everyday human and animal life, there's a fairly close relationship between prediction, reward and intelligence.  Many intelligent acts boil down to predicting the future; and smarter people tend to be better at prediction.  And much of life is about seeking rewards of one kind or another.  To the extent that intelligence is about choosing actions that are likely to achieve one's goals given one's current context, prediction and reward are extremely useful for intelligence.



But some mathematics-based interpretations of "intelligence" extend the relation between intelligence and prediction/reward far beyond human and animal life.  This is something that I question.

Solomonoff induction is a mathematical theory of agents that predict the future of a computational system at least as well as any other possible computational agent.  Hutter's "Universal AI" theory is a mathematical theory of agents that achieve (computably predictable) reward at least as well as any other possible computational agent acting in a computable environment.  Shane Legg and Marcus Hutter have defined intelligence in these terms, essentially positing intelligence as generality of predictive power, or as degree of approximation to the optimally predictive, reward-seeking computational agent AIXI.  I have done some work in this direction as well, modifying Legg and Hutter's definition into something more realistic -- conceiving intelligence as (roughly speaking) the degree to which a system can be modeled as efficiently using its resources to achieve computably predictable rewards across some relevant probability distribution of computable environments.  Indeed, way back in 1993, before I knew of Marcus Hutter's work, I posited something similar to his approach to intelligence in my first book, The Structure of Intelligence (though with much less mathematical rigor).
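To give a concrete feel for the flavor of this kind of definition, here is a minimal toy sketch in Python -- my own illustration, not Legg and Hutter's actual formalism.  It scores each agent by its average reward in a couple of hand-picked toy environments, weighted by a crude stand-in for each environment's description length.  The real measure ranges over all computable environments with Kolmogorov-complexity weights 2^(-K), and so cannot actually be computed; everything named below (the environments, the complexity numbers, the agents) is an invented toy.

```python
# Toy sketch of a Legg-Hutter-style intelligence measure (illustrative only).
# The actual definition sums expected reward over ALL computable environments,
# weighted by 2^(-K(mu)) with K the Kolmogorov complexity -- and is incomputable.
# The environments, the complexity stand-ins, and the agents here are invented toys.

import random

def env_constant(action, step):
    """Toy environment: always rewards action 1."""
    return 1.0 if action == 1 else 0.0

def env_parity(action, step):
    """Toy environment: rewards matching the parity of the time step."""
    return 1.0 if action == step % 2 else 0.0

# Each entry: (environment, crude "description length" standing in for K(mu)).
ENVIRONMENTS = [(env_constant, 2), (env_parity, 3)]

def average_reward(policy, env, steps=1000):
    """Mean per-step reward of a policy (a function of the step number) in one environment."""
    return sum(env(policy(t), t) for t in range(steps)) / steps

def toy_universal_intelligence(policy):
    """Complexity-weighted expected reward: sum over environments of 2^-K * average reward."""
    return sum(2 ** -k * average_reward(policy, env) for env, k in ENVIRONMENTS)

if __name__ == "__main__":
    policies = {
        "always_one": lambda t: 1,
        "follow_parity": lambda t: t % 2,
        "coin_flip": lambda t: random.randint(0, 1),
    }
    for name, pi in policies.items():
        print(name, round(toy_universal_intelligence(pi), 3))
```

Even in this toy setting, the structure of the definition is visible: an agent is "more intelligent" simply to the extent that it racks up more reward across many environments, with simpler environments counting for more.  That is exactly the framing whose relevance to superintelligence I want to question below.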

I think this general line of thinking about intelligence is useful, to an extent.  But I shrink back a bit from taking it as a general foundational understanding of intelligence.

It is becoming more and more common, in parts of the AGI community, to interpret these mathematical theories as showing that general intelligence, far above the human level, is well characterized in terms of prediction capability and reward maximization.  But it isn't at all clear to me that this interpretation is justified (which is the main point of this blog post).  To me it seems rather presumptuous regarding the nature of massively superhuman minds!

It may well be that, once one gets into domains of vastly greater than human intelligence, other concepts besides prediction and reward start to seem more relevant to intelligence, and prediction and reward start to seem less relevant.

Why might this be the case?



Regarding prediction: consider the possibility that superintelligent minds might perceive time very differently than we do.  If their experience goes beyond the sense of a linear flow of time, then maybe prediction becomes only semi-relevant to them, and other concepts we don't yet know become more relevant -- in which case thinking about superintelligent minds in terms of prediction may be a non sequitur.

It's similarly quite unclear that it makes sense to model superintelligences in terms of reward.  One thinks of the "intelligent" ocean in Lem's Solaris.  Maybe a fixation on maximizing reward is an artifact of early-stage minds living in a primitive condition of material scarcity.

Matt Mahoney made the following relevant comment, regarding an earlier version of this post: "I can think of 3 existing examples of systems that already exceed the human brain in both knowledge and computing power: evolution, humanity, and the internet.  It does not seem to me that any of these can be modeled as reinforcement learners (except maybe evolution), or that their intelligence is related to prediction in any of them."

All these are speculative thoughts, of course... but please bear in mind that the relation of Solomonoff induction and "Universal AI" to real-world general intelligence of any kind is itself rather wildly speculative...  This stuff is beautiful math, but does it really have anything to do with real-world intelligence?  These theories have little to say about human intelligence, and they're not directly useful as foundations for building AGI systems (though, admittedly, a handful of scientists are working on "scaling them down" to make them realistic; so far this only works for very simple toy problems, and it's hard to see how to extend the approach broadly to yield anything near human-level AGI).  And it's not clear they will be applicable to future superintelligent minds either, as these minds may be best conceived using radically different concepts.

So by all means enjoy the nice math, but please take it with the appropriate (fuzzy) number of grains of salt...

It's fun to think about various kinds of highly powerful hypothetical computational systems, and fun to speculate about the nature of incredibly smart superintelligences.  But fortunately it's not necessary to resolve these matters -- or even think about them much -- to design and build human-level AGI systems.