Humans for Transparency in Artificial Intelligence
Ben Goertzel   Mar 17, 2016  

Recent dramatic progress in artificial intelligence (AI) leads us to believe that Ray Kurzweil’s prediction of human-level AI by 2029 may be roughly accurate. Even if reality proves somewhat different, it seems very likely that today’s young people will spend most of their lives in a world largely shaped by AI.

This article was co-written with Bill Hibbard, Nick Baladis, Hruy Tsegaye, and David Hanson.

The rapid advent of increasingly advanced AI has led many people to worry about the balance of positive and negative consequences AI will bring. While there is a limit to the degree anyone can predict or control revolutionary developments, nevertheless, there are some things we can do now to maximize the odds that the future development of AI is broadly positive, and the potential for amazing benefits outweighs the potential risks.

One thing we can do now is to advocate for the development of AI technology to be as open and transparent as possible — so that AI is something the whole human race is doing for itself, rather than something being foisted on the rest of the world by one small group or another. The creation and rollout of new forms of general intelligence is a huge deal, and it is something that can benefit from the full intelligence and wisdom of the whole human race. Specifically, we need transparency about what AI is used for and how it works.

For this reason we are gathering signatures on a petition in support of Transparent AI.  Please sign if you agree!

Transparency will help in multiple ways. First, as with cyber security, there are many complex technical vulnerabilities with AI. Experimental AI systems have found ways to accomplish their goals that violate unstated assumptions by their designers, resulting in undesirable behaviors. Cyber insecurity could enable hackers to damage AI systems, also resulting in undesirable behaviors. Transparency in how AI works will allow the world’s computer scientists to search for such vulnerabilities and propose fixes.  We can see the power of this approach in the way the Linux community deals with operating system vulnerabilities.

Second, AI is increasingly employed by providers of Internet services to build predictive models of users and to persuade those users to purchase products and support political candidates and positions. When AI language ability is equal to human ability, persuasive messages may be subtly embedded in conversations with AI. When AI can model human society, persuasion may subtly employ peer pressure and shape human culture. Transparency in what AI is used for can make such persuasion visible to people so they can take individual or political action to resist. Transparency would also inform people about military uses of AI, generating social pressure for treaties similar to those banning biological and chemical weapons.

AI is a tool of military, economic and political competition among humans. In the heat of competition, groups feeling themselves behind in the race to develop AI may expose the public to risk in order to gain advantage. Transparency would inform the public of the risk so they could act to prevent it.

On the more positive side, transparency will also bring a far greater diversity of human minds to bear on the numerous difficult scientific, engineering and creative problems involved in creating advanced AI systems.  Even the greatest of companies or government labs cannot match the breadth of culture, background and expertise of the people involved in a large open-source project (Linux being a case in point).  A transparent approach naturally lends itself to involvement by a wide assemblage of people from different parts of the world, different cultures, different professions, different perspectives, different variations of core human values.  The result of this kind of rich diversity tends to be a more robust, more nuanced and multidimensional product that is more able to deal adaptively with complex real-world situations.

The Internet, and Science itself, are leading examples of technical developments that have grown via cultures marked by significant transparency and massive creative diversity.  It is largely because of the transparency at their foundations that they are among the more powerful and robust entities in our world today.  It is desirable that our advanced AI efforts meet and then exceed the level of transparency, creativity and robustness demonstrated by Science and the Internet!

Just as the transparency and openness of the international scientific community tends to foster global cooperation and can help militate toward peace, so a transparent and open AI community will be more likely to foster broadly beneficial AI developments, and less likely to foster adversarial ones.  When AI researchers everywhere are working together and studying and correcting and improving each others’ code, there is likely to emerge a sense of community that leads to more of a focus on creating mutual benefit.  The “global mind” of the community of AI researchers, application developers and other related workers is likely to become more unified and less divided.

Further, transparent AI is more likely to be used to help people in the developing world, rather than just the economically privileged.   When a technology is closed and proprietary, applications generally have to wait to pervade the developing world until the corporations that own them figure out a sufficiently lucrative way to profit from deploying them there.   A transparent technology can be taken up by enthusiastic early adopters in the developing world and then adapted to serve local needs, in ways that may be beneficial even if not immediately financially profitable at a scale interesting to large developed-world corporations.  Often this sort of local adaptation has been achieved by bypassing international law — e.g. software piracy; the creation of low-cost knock-off imitations of electronic devices; or the emergence of small rural farmers who carry out illegal but creative and productive cross-breeding of GMO crops with natural local crops.  But of course the leveraging of advanced technologies like AI to help the developing world will proceed much more rapidly and smoothly if it can be done other than by circumventing the law.

More speculatively and deeply, one can see transparency regarding AI as a potentially powerful tool for dealing with the profound fear toward AI that is evident in so many AI-related science fiction movies, and also in the recent statements of various science and technology pundits. Different individuals’ worries about the future of AI stem from different causes, but alongside various rational concerns, there is often a significant aspect of basic fear of the unknown.  Fear of, and bias against, the unknown is part and parcel of human nature; it is not always wrong and there is no simple “cure” for it when it is wrong.  However, we feel transparency can be a valuable tool for counteracting some of the fears people experience and express regarding AI.  In the ideal case it can aid in replacing reflexive fear with detailed rational consideration. The more transparent AI is, the less it falls into the category of the “worrisome-or-worse unknown”, and the more likely it is that people can deal with it in a reason-based and emotionally-balanced manner.

Transparent AI is not a new idea, but neither is it (yet) the norm in the AI research and development world. For instance, there has been significant discussion in the media recently about OpenAI, an amply funded AI project founded by Elon Musk, Sam Altman, Peter Thiel and a number of their colleagues, with an initial orientation toward open-source software development. There is also a variety of open-source projects focused on Artificial General Intelligence, including the OpenCog project founded by one of this article’s authors, and many others such as (to name just a few) OpenNARS, MicroPsi, the (largely Japan-based) Whole Brain Architecture Initiative, and the Hanson Robotics intelligent robot control framework. Google and Facebook have also released open-source versions of their AI systems.

We believe these projects are excellent steps in the right direction. But at the present time, closed and opaque AI development is much more generously supplied with financial, computational and human resources — especially when it comes to scalable practical AI development, rather than pure research. We would like to change this, and would like to see the balance shift in favor of transparent AI.

All this is why, in collaboration with Ethiopian AI firm iCog Labs, we have created an online petition in support of transparent AI.  If you agree with us that transparent, open development of advanced AI technology is — based on our current state of knowledge — the best option for the future of humanity and other sentient beings, please add your signature to the petition.

Also, as a seed for a broad-based movement for transparency in AI, we have created a Google group for initial organizing. People without Google accounts can join the group by sending an email to the group’s address. All people are welcome in such a movement, but young people especially have an interest in making AI transparent. Please forward links to this article widely.

Remember: the amazing, transformational future is not just something that is happening to us — it is something that is being created by all of our actions.   There is meaning and potentially great impact in what you choose to do, and what you choose to advocate.


About the authors: Bill Hibbard is an Emeritus Senior Scientist at the University of Wisconsin-Madison Space Science and Engineering Center, who has written and spoken widely on the ethics of superintelligence, alongside his technical work.  Nick Baladis is a second year student at the MIT Sloan School of Management. Ben Goertzel is an Artificial General Intelligence researcher involved with multiple projects including OpenCog, Hanson Robotics, the AGI Society, iCog Labs and Aidyia Limited. Hruy Tsegaye is a writer and software project leader who works with Ben Goertzel at iCog Labs in Addis Ababa, focusing on applications of AI and robotics to help African education. David Hanson leads cutting-edge robotics firm Hanson Robotics, working to create human-like and compassionate robots to usher in a Friendly Singularity.

Ben Goertzel Ph.D. is a fellow of the IEET, and founder and CEO of two computer science firms, Novamente and Biomind, and of the non-profit Artificial General Intelligence Research Institute.


While “I agree that advanced AI technology should be developed in a transparent and open way. Given our present state of knowledge, this seems the route most likely to maximize the benefit and minimize the risks for humanity and other sentient beings” (the petition’s text), I have one question:

The recent AlphaGo victories against human Go champions have highlighted that narrow AI is becoming as good as people for more and more complex tasks. Therefore, there’s money to be made and the big companies have noticed.

Google, Facebook, IBM etc. are spending a lot of money on AI. Transparency may be in the best interest of humanity, but I’m afraid the big tech companies will put their own interests first and pursue closed, proprietary AI developments. Since they have the money, they can buy promising startups, as Google did with DeepMind. It can be argued that only closed, proprietary, “secret” developments are sufficiently appealing to motivate the big tech companies.

Thoughts? Ben? Others?

I’m sure you meant well Ben, but I just don’t think this is realistic.  The first big problem is that it only takes a few ‘defectors’ to ruin it for everyone else; the teams that proceed in secret will have a big advantage over the teams that share, because they will take all the information in, and give nothing out.  And there is simply no realistic way to get a majority of teams to sign up to this, given the world we currently live in.

The second big problem is that the world we currently live in just isn’t benign.  AGI will attract big attention from (a) corporations and (b) the military.  Neither of these institutions has humanity’s best interests at heart.  Corporations are out for profit, and militaries are out for power.  Open source will only result in teams being cynically manipulated and exploited for profit and power.

Close to the very end, when it becomes clear that AGI is close, open-source researchers are very likely to be putting both themselves and their families in harm’s way.  The dangers are: in the case of (a) corporations, having all their ideas stolen by others and safety considerations completely thrown out the window in order to make a quick buck; or (b) the military, getting ‘thrown in the slammer’ and/or forced to work on weapons applications.  You will be crushed between the two nasty anvils of profit motive (corporations) and power motive (military).

You can’t hope to work within the current system and win.  It’s far too late for that now.  Look at politics in the USA right now.  Just horrible: you are witnessing the ‘heat death’ of the republic: a massive choking bureaucracy pandering to special-interests and blocking any real innovation (personified by Clinton), combined with a mindless reactionary plutocracy (personified by Trump).

I think you AGI researchers know what you really need to do.

(Marc shakes his head sadly)

‘When in the course of human events it becomes necessary for one people to dissolve the political bands which have connected them with another….’

Giulio, my reply to your comment is simply that not everyone can be bought by big companies, either because

  they are already rich due to non-AI stuff, like the folks behind OpenAI
  they are “crazy / idealistic”, like those of us behind OpenCog

So if some folks who cannot be bought make enough progress to seed a transparent and open source AGI movement that takes off fast enough, it may beat the big tech companies. Linux provides an inexact but meaningful analogue; in some important ways it’s beating the big companies at OS development. And many of the individuals behind Linux are too crazy/idealistic to be bought by big companies.

I have included this point and some others in the following post:

Ben, I hear you. I am one of those crazy / idealistic types that are mainly motivated by things other than money.

But I think your argument is naive. It would work if AI could be developed by one or two mad scientists in a garage, but real operational AI requires an army of scientists, engineers, hardware specialists, programmers, managers, and coffee boys. You need money to pay for that, and money is what the tech giants have.

OK, perhaps one or two mad scientists in a garage could develop the big original unexpected conceptual breakthrough that opens the way to operational AI. Perhaps they are crazy idealists like us and refuse to be bought off by Google, Facebook or IBM.

But they wouldn’t continue the project in secret, first because they don’t have the money for the army of support staff, second because they are crazy idealists who don’t like secret projects.

So they would publish their result, probably in an open access journal, and keep a website with detailed information and hints. Google, Facebook and IBM wouldn’t even need to buy the inventors off, because they can just copy the research and make it operational.

(comment copied to your new post)

However, since crazy idealists should unite and lose their chains and all that, I signed the petition and joined the mailing list!

