Institute for Ethics and Emerging Technologies


Will Corporations Prevent the Singularity?

By Ben Goertzel

Posted: Mar 18, 2012

It occurred to me yesterday that the world possesses some very powerful intelligent organisms that are directly and clearly opposed to the Singularity—corporations.

Human beings are confused and confusing creatures—we don’t have very clear goal systems, and are quite willing and able to adapt our top-level goals to the circumstances.  I have little doubt that most humans will go with the flow as Singularity approaches.

But corporations are a different matter.  Corporations are entities/organisms unto themselves these days, with wills and cognitive structures quite distinct from the people that comprise them.   Public corporations have much clearer goal systems than humans: To maximize shareholder value.

And rather clearly, a Singularity is not a good way to maximize shareholder value.  It introduces way too much uncertainty.  Abolishing money and scarcity is not a good route to maximizing shareholder value—and nor is abolishing shareholders via uploading them into radical transhuman forms!

So one can expect corporations—as emergent, self-organizing, coherent minds of their own—to act against the emergence of a true Singularity, and in favor of some kind of future in which money and shareholding still have meaning.

Sure, corporations may adapt to the changes as Singularity approaches.  But my point is that corporations may be inherently less pliant than individual humans, because their goals are more precisely defined and less nebulous.  The relative inflexibility of large corporations is certainly well known.

Charles Stross, in his wonderful novel Accelerando, presents an alternate view, in which corporations themselves become superintelligent self-modifying systems—and leave Earth to populate space-based computer systems where they communicate using sophisticated forms of auctioning.   This is not wholly implausible.   Yet my own intuition is that notions of money and economic exchange will become less relevant as intelligence exceeds the human level.  I suspect the importance of money and economic exchange is an artifact of the current domain of relative material scarcity in which we find ourselves, and that once advanced technology (nanotech, femtotech, etc.) radically diminishes material scarcity, the importance of economic thinking will drastically decrease.  So, far from becoming dominant as in Accelerando, corporations will become increasingly irrelevant post-Singularity.  But if they are smart enough to foresee this, they will probably try to prevent it.

Ultimately corporations are composed of people (until AGI advances a lot more at any rate), so maybe this issue will be resolved as Singularity comes nearer, by people choosing to abandon corporations in favor of other structures guided by their ever-changing value systems.   But one can be sure that corporations will fight to stop this from happening.

One might expect large corporations to push hard for some variety of “AI Nanny” type scenario, in which truly radical change would be forestalled and their own existence preserved as part of the AI Nanny’s global bureaucratic infrastructure.  M&A with the AI Nanny may be seen as preferable to the utter uncertainty of Singularity.

The details are hard to foresee, but the interplay between individuals and corporations as Singularity approaches should be fascinating to watch.

[Image by Ken Vallario]

Ben Goertzel Ph.D. is a fellow of the IEET, and founder and CEO of two computer science firms, Novamente and Biomind, and of the non-profit Artificial General Intelligence Research Institute.


I have a somewhat different view of money. I see it essentially as a way of keeping score, a way of saying, “I won’t do something for you unless you pay me for it.” The only thing that really needs to remain scarce for us to have money is empathy. So my initial view, which admittedly I haven’t thought about much, is that corporations will not be anti-Singularity per se, but will be (and already are) working towards a version of the Singularity that keeps empathy scarce. As material scarcity decreases, we will simply trade immaterial “products” instead. (We already do: services, design, fine art…)

If one assumes that in order to achieve true AGI one requires a better understanding of intelligence, one must assume that to produce an AGI that is kind to the human cause, one must understand the definition of being human. Better intelligence; better humans.

We must steer the future toward the direction we want it to go, lest it head in the direction it is going. I believe that with the greatest flow of information and connectivity, humans will exert a greater influence on the paradigm shifts of the future, and create a self-regulatory organism. It is already starting to happen. Just look at Syria, the Middle East, the vast amount of information we have on US policies and corrupt politicians, and all that is going on in secret rooms, behind closed doors.  Just think what humans will be able to do when we’re all truly connected. Can’t wait.

Government legitimacy will be checked and rated every week. The corporation goal-system in which people are treated as commodities is already exposed and will fail, and when future historians look back they’ll see it as one of the most disgraceful things ever to emerge in the entropy of the universe. It is already starting.

Not because they portray a banner of prosperity, but because their supergoal is indifferent to the annihilation of the entire human species and the planet upon which it preys.
If AI comes to life due to corporate willpower it will not preserve a precise optimization target after repeated cycles of recursive self-improvement. Guaranteed.

If AI comes to a similar accretion as natural selection without us having a deep understanding of why and how and what it is doing, we’re doomed.

AI, instead, must be brought to life with greater intelligence, by greater humans, who understand the beauty of life and all its emergent properties, in its totality. One finger pointing here, the other to infinity.

The goals and motifs presented in the fascinating novel Accelerando are the goals and motifs of a genre, cyberpunk; not the goals of a Singularity lying dormant in the unconscious and dreams of every human being. If one were to say “this world is shit,” we cannot be content with shit-plus for a Singularity, a world where corporations and corruption still run rampant. We must aspire to the best, always. We must create the future in our imagination and run towards it. AI must be willed into existence by our highest aspirations, not by special interests. We deserve it, that is, most of humanity, even the sick ones.

I’m not really worried.

Corporations are driven by the same evolutionary forces everyone else is, after all.

cf “Think Locally, Occupy Globally: Our Fight Is the World’s - and Vice Versa”
OWS, hacktivists can really help. The more Guy Fawkes masks on faces, the better.
It is true that corporations are driven by the same evolutionary forces everyone else is, yet they are driven by intense pressure from the outside.

As always, the squeaky wheel gets the grease.


I think the concept of an AI nanny makes sense as a logical progression, if, and it is a big if, it is independent of any individual’s or entity’s interests. In other words, no person or institution should be able to command or direct the AI for personal gain. The AI must function for the common good of all humankind, or we will perpetuate the injustice and abuse of power that plagues our current system. Eventually technology will be doing most of the work, and if we don’t start planning for that day, I hate to even imagine the dystopia that may result.
I do not think the AI necessarily needs to be a human-level AGI either. What it needs to be able to do:
1) Know where resources are located, how many are available, how to extract or collect them, and how renewable they are.
2) Able to automate an efficient global supply chain for distribution of resources to where they need to be for production and consumption.
3) Able to coordinate production activities and move resources as needed to fulfill demand.
4) Able to plan areas for human habitation and recreation, and able to coordinate the automated construction of these areas.
5) Able to coordinate automated transportation systems for the movement of resources and people.

I am actually a volunteer on the design of such a system for The Venus Project, called CORCEN. The current thinking for a first generation system is that it be constructed from several dozen types of Intelligent Agents. Each type of agent fills a specific role and coordinates with the other agents to complete a task. There may be millions of instances of these several dozen types. No individual agent needs to have AI beyond what can easily be done today, yet the emergent behavior of the agents working together as a system may appear to have a higher intelligence because of the complexity of the tasks they are able to complete.
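The layered-agent idea described above can be sketched in a few lines. The following is a minimal, hypothetical illustration, not the actual CORCEN design: the class names (`ResourceAgent`, `DemandAgent`, `LogisticsAgent`) and the greedy matching rule are my own assumptions. It shows how many instances of a few simple agent types, each with a narrow role, can together satisfy a system-level goal (meeting demand) that no single agent represents.

```python
class ResourceAgent:
    """Tracks the stock of one resource at one site."""
    def __init__(self, resource, site, stock):
        self.resource, self.site, self.stock = resource, site, stock

class DemandAgent:
    """Represents a site's outstanding need for one resource."""
    def __init__(self, resource, site, amount):
        self.resource, self.site, self.amount = resource, site, amount

class LogisticsAgent:
    """Moves stock from surplus sites toward sites with unmet demand.

    Uses a naive greedy rule; a real system would optimize for
    distance, energy cost, renewability, etc.
    """
    def step(self, resources, demands):
        shipments = []
        for d in demands:
            if d.amount <= 0:
                continue
            for r in resources:
                if r.resource == d.resource and r.site != d.site and r.stock > 0:
                    moved = min(r.stock, d.amount)
                    r.stock -= moved
                    d.amount -= moved
                    shipments.append((r.site, d.site, d.resource, moved))
                    if d.amount == 0:
                        break
        return shipments

def run(resources, demands, logistics, max_steps=10):
    """Iterate agent coordination until demand is met or no agent can act."""
    log = []
    for _ in range(max_steps):
        shipments = logistics.step(resources, demands)
        if not shipments:
            break
        log.extend(shipments)
    return log

# Example: two supply sites, one factory needing 100 units of steel.
resources = [ResourceAgent("steel", "mine-A", 80),
             ResourceAgent("steel", "mine-B", 50)]
demands = [DemandAgent("steel", "factory-1", 100)]
log = run(resources, demands, LogisticsAgent())
# factory-1 is supplied 80 from mine-A, then 20 from mine-B
```

Each agent here is trivially simple, but the run loop already exhibits the point made above: the “intelligence” of the overall allocation emerges from the interaction of many narrow agents rather than from any one of them.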

@ Intomorrow  

Actually, I don’t see Guy Fawkes masks as being the solution. Quite the opposite in fact.

and of course the two articles linked above.

In brief, forget about anonymity and secrecy. They are part of the problem, not part of the solution. They allow individuals to escape responsibility and accountability for their actions. No dictatorship or tyranny is possible in a world in which secrecy is impossible, and that includes even the corporate dictatorships Ben is discussing. A corporation that tried to sacrifice its customers’ interests would find it impossible to do so when the customers are completely aware of such actions and able to hold the corporation accountable.

And since the environment that made megacorporations possible is changing, and the factors required to maintain corporate control are becoming increasingly less viable (i.e. centralized infrastructure and distribution systems are being replaced by decentralized infrastructureless systems such as mesh networks), the day of the Corporation is fading. Like the Aristos before them, they are trying to prevent progress, but never in all of history has this succeeded.

As I outlined in my initial two links, we have two separate corporate paradigms fighting for control right now, and Ben’s article only covers one side, the losing one based on the economy of scarcity. That paradigm is in collapse mode because they have eliminated true scarcity and cannot maintain artificial scarcity for much longer. The more desperate they get, and the more tyrannical they get in trying to prevent their obsolescence, the more they hasten their own demise.

On-target reply. Perhaps it is sour grapes on my part, being aged.
But for hacktivists, is anonymity warranted? Or is hacktivism outmoded already? I can’t keep up with all the twists and turns.




The IEET is a 501(c)3 non-profit, tax-exempt organization registered in the State of Connecticut in the United States.

Executive Director, Dr. James J. Hughes,
35 Harbor Point Blvd, #404, Boston, MA 02125-3242 USA