OpenAI - My Quick Thoughts
Ben Goertzel   Jan 18, 2016  

Obviously, OpenAI is a super-impressive initiative.  I mean, a BILLION freakin’ dollars for open-source AI. Wow!!

So now we have an organization with a pile of money available and a mandate to support open-source AI, and a medium-term goal of AGI … and they seem fairly open-minded and flexible/adaptive about how to pursue their mandate, from what I can tell…

It seems their initial push is toward “typical 2015-style deep learning,” and that their board of advisors is initially strongly biased toward this particular flavor of AI.  So, at first, they are largely thinking in terms of “big data / deep NN” AI.

This should have some useful short-term consequences, such as, quite probably, the emergence of open-source computer vision tools that are truly competitive with commercial systems.

However, it is worth noting that they are planning on spending their billion $$ over a period of 10 yrs or more.

So: right now the OpenAI leadership is pumped about deep learning NNs, in part because of recent successes with such algorithms at big companies.  But their perspective on AI is obviously broader than that.  If some other project, say OpenCog, shows some exciting successes, they will surely notice, and I would guess they will be open to turning their staff in the direction of those successes, and potentially to funding external OSS teams that look exciting enough.

So, overall, from a general view obviously OpenAI is a Very Good Thing.

Open source and AI Safety

Also, I do find it heartening that the tech-industry gurus behind OpenAI have come to the realization that open-sourcing advanced AI is the best approach to maximizing practical “AI Safety.”    I haven’t always agreed with Elon Musk’s pronouncements on AI safety in the past, but I can respect that he has been seriously thinking through the issues, and this time I think he has come to the right conclusion…

I note that Joel Pitt and I wrote an article a few years ago, articulating the argument for open-source as the best practical path to AI safety.   Also, I recently wrote an essay pointing out the weaknesses in Nick Bostrom’s arguments for a secretive, closed, heavily-regulated approach to AGI development.   It seems the OpenAI founders basically agree and are putting their money where their mouth is.

OpenAI and OpenCog and other small OSS AI initiatives

Now, what about OpenAI and OpenCog, the open-source AGI project I co-founded in 2008 and have been helping nurse along ever since?

Well, these are very different animals.  First, OpenCog is aimed specifically and squarely at Artificial General Intelligence, so its mandate is narrower than that of OpenAI.  Second, and most critically, as well as aiming to offer a platform to assist broadly with AGI development, OpenCog is centered on a specific cognitive architecture (which has been called CogPrime), created based on decades of thinking and prototyping regarding advanced AGI.

That is, OpenCog is focused on a particular design for a thinking machine, whereas OpenAI is something broader — an initiative aimed at doing all sorts of awesome AI R&D in the open source.

From a purely OpenCog-centric point of view, the value of OpenAI would appear to lie mainly in its significant potential to smooth later phases of OpenCog development.

Right now OpenCog is in-my-biased-opinion-very-very-promising but still early-stage — it’s not very easy to use and (while there are some interesting back-end AI functionalities) we don’t have any great demos.   But let’s suppose we get beyond this point — as we’re pushing hard to do during the next year — and turn OpenCog into a system that’s a pleasure to work with, and does piles of transparently cool stuff.   If we get OpenCog to this stage — THEN at that point, it seems OpenAI would be a very plausible source to pile resources of multiple sorts into developing and applying and scaling-up OpenCog…

And of course, what holds for OpenCog would also hold for other early-stage non-commercial AI projects.  OpenAI, with a financial war-chest that is huge from an R&D perspective (though not so huge compared to, say, a military budget or the cost of building a computer chip factory), holds out a potential path for any academic or OSS AI project to transition from the stage of “exciting demonstrated results” to the stage of “slick, scalable and big-time.”

Just as commercial AI startups currently get acquired by Google or Facebook or IBM, so, in the future, non-commercial AI projects may get boosted by involvement from OpenAI or other similar big-time OSS AI organizations.  The beauty of this avenue is, of course, that, unlike the acquisition of a startup by a megacorporation, OpenAI jumping on board some OSS project won’t destroy the ability of the project founders to continue to work on the project and communicate their work freely.

Looking back 20 years from now, the greatest value of the Linux OS may be seen to be its value as an EXEMPLAR for open-source development — showing the world that OSS can get real stuff done, and thus opening the door for AI and other advanced software, hardware and wetware technologies to develop in an OSS manner.

Anyway, those are my first thoughts on OpenAI; I’ll be curious to see how things develop, and may write something more once more happens … interesting times!!

Ben Goertzel Ph.D. is a fellow of the IEET, and founder and CEO of two computer science firms, Novamente and Biomind, and of the non-profit Artificial General Intelligence Research Institute.


As soon as I heard about OpenAI, I went to their website to see how I could get involved.  There was an email form, but nothing more.  In general, their site, and the concept, seem very hollow.  It really feels like so many of our H+ ventures that have a pretty Drupal/WordPress page and a great idea, but no actual content.

There aren’t a lot of transhumanists out there, and most of us are already working on projects, even if only part time.  With $1 billion, I’d think there’d be more movement toward coordination, even at launch. 

We’ve all more or less moved away from a closed-door approach.  I remember talking to an SIAI fellow and the conversation being completely ineffectual due to Friendliness fears, and we weren’t even discussing algorithms, just the Friendliness problem itself.  In hindsight, it seems entirely irrational, as I’m still on the outside.

Here’s hoping OpenAI will eventually get people like me (well, all of us, really) into the fold.

I don’t knock the practical success of deep learning in certain areas of machine learning, but deep learning has some pretty severe limitations.  Deep learning is based on neural networks, which were originally a method of pattern recognition for sensory data. 

Firstly, there’s more to intelligence than just pattern recognition, which is the detection of correlations.  Detecting correlations, although very useful and powerful, cannot fully capture intelligence, since it cannot by itself account for causal relationships.  Just how smart can a so-called ‘deep learning’ method possibly be when it adds 20 years to your age just for wearing glasses (the Microsoft app guessing ages from photos), or when it thinks dumb-bells all come with arms attached (Google’s nets)?
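As a toy illustration of that point (with made-up data, not the actual Microsoft or Google systems): a learner that only counts correlations will happily latch onto a spurious feature, like glasses, as a proxy for age.

```python
from collections import Counter

# Hypothetical toy data: (wears_glasses, age_group) pairs in which
# glasses-wearing happens to correlate strongly with being older.
train = ([(1, "old")] * 80 + [(0, "young")] * 80 +
         [(1, "young")] * 5 + [(0, "old")] * 5)

# A minimal "pattern detector": for each feature value, remember the
# majority label seen alongside it. This is pure correlation-counting,
# with no notion of why glasses and age co-occur.
counts = {0: Counter(), 1: Counter()}
for glasses, label in train:
    counts[glasses][label] += 1

def predict(glasses):
    return counts[glasses].most_common(1)[0][0]

# A young test subject who happens to wear glasses gets labelled "old",
# because the learner has captured the correlation, not the cause.
print(predict(1))  # prints: old
```

This is of course a caricature of deep learning, but the failure mode it sketches, confusing a correlated feature with a causally relevant one, is exactly the glasses-add-20-years behavior described above.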

Neural networks can only be really effective for sensory data.  It is inevitable that ‘deep learning’ will fail miserably when the attempt is made to apply neural networks to abstract world-models.  For one thing, the more abstract the concept, the more complex and ambiguous the training examples will be.  Worse, deep learning simply cannot grasp the semantic *meaning* of abstract concepts, because this requires an understanding of what algorithms actually *do* (causality relationships), which cannot possibly be obtained merely by identifying ‘patterns’.

Deep learning, of course, is entirely rooted in the severely flawed probabilistic philosophy of epistemology, the prime modern example of which is Bayesianism.  It’s just a fancy, souped-up version of the old Humean/logical-positivist idea that all of science can somehow be reduced to correlations between sensory data.  There’s a reason logical positivism was abandoned in the 1960s.

Sorry Ben, but your own OpenCog architecture is also deeply flawed, since it too is based on the same fallacies I described above: the philosophy that everything is ‘patterns’ (which is false) and the philosophy that ‘probabilistic reasoning’ is powerful enough to fully handle all forms of reasoning under uncertainty (which it isn’t).  The very notion of ‘probability’ cannot possibly be adequate by itself to handle reasoning under uncertainty in general, because it fails to account for priors and the problem of logical omniscience (the uncertainty in our own logical reasoning).
