Aspects of Artificial General Intelligence - AGI 13 Interview
Ben Goertzel   Sep 16, 2013   Adam Ford  

In this video, Ben Goertzel talks with Adam Ford about his paper “Probability Theory Ensues from Assumptions of Approximate Consistency: A Simple Derivation and its Implications for AGI,” presented at the AGI-13@PKU conference in Beijing, held from July 31 to August 3, 2013.

* Probability theory as a key tool in building advanced, rational AGI systems
* Mind-body integration as key to human-like intelligence
* The importance of integrative design and dynamics in AGI systems
* The systems theory of mind, reflecting the integration of mind, body and society
* Deep learning algorithms as a particular reflection of the system-theoretic principle of hierarchical pattern composition
* Lojban as a potential tool for easing human-AGI communication in the early stages
* OpenCog AGI architecture as an exemplification of systems based AI

First, it is noteworthy that much emphasis was given to the human body, which is somewhat unintuitive in a discussion of AGI, a form of AI.

Second, since the difference between an AGI's model of reality and reality itself is indeterminate, probability theory (or some approximation of it) is strictly necessary, which makes the observation trivial.

Third, the idea of combining the work of multiple AGIs to achieve better results seems rational, but abstractly combining them just yields a single AGI, not multiple ones (much as a human looking at something from different angles simply arrives at a [theoretically] better single view of it). I might add that combining different "views" is non-trivial.

For instance, there are visual intelligence, natural language intelligence, and situational awareness programs that are quite advanced, but joining them together into a gestalt is non-trivial.

Fourth, I doubt that the best AGI will emulate humans; instead it will mimic them, just as the best candidates to pass the Turing test are programmed to fool the testers, not to wear a human skin (which would be inefficient, laborious, and in some ways redundant).

Finally, I would like to add that reality is not necessarily hierarchical, but humans' perception of it is. SAI may very well not use the hierarchical pattern recognition of our neocortex as its blueprint, but AGI likely will. In other words, AGI appears to be a stunted form of AI. As I am in favor of SAI, I wish the focus could be on proprietary AI algorithms whose optimization is directed at function, not mimicry. In other words, mimicry ought to be an afterthought, not the main focus of AI development.

This whole posting is very abstract - am I making any sense?
