The Singularity: Fact or Fiction or Somewhere In-between?

By Gareth John
Ethical Technology

Posted: Jan 13, 2016

In my continued striving to disprove the theorem that there’s no such thing as a stupid question, I shall now proceed to ask one. What’s the consensus on Ray Kurzweil’s position concerning the coming Singularity? [1] Do you as transhumanists accept his premise and timeline, or do you feel that a) it’s a fiction, or b) it’s a reality but not one that’s going to arrive anytime soon? Is it as inevitable as Kurzweil suggests, or is it simply millennial daydreaming in line with the coming Rapture?

According to Wikipedia (yes, I know, but I’m learning as I go along), the first use of the term ‘singularity’ in this context was made by Stanislaw Ulam in his 1958 obituary for John von Neumann, in which he mentioned a conversation with von Neumann about the ‘ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue’. [2] The term was popularised by mathematician, computer scientist and science fiction author Vernor Vinge, who argues that artificial intelligence, human biological advancement, or brain-computer interfaces could be possible causes of the singularity. [3] Kurzweil cited von Neumann’s use of the term in a foreword to von Neumann’s classic ‘The Computer and the Brain’. [4]

Kurzweil predicts that the singularity will occur around 2045, [5] whereas Vinge predicts some time before 2030. [6] In 2012, Stuart Armstrong and Kaj Sotala published a study of AGI predictions by both experts and non-experts and found a wide range of predicted dates, with a median value of 2040. [7] Discussing the level of uncertainty in AGI estimates, Armstrong stated at the 2012 Singularity Summit: ‘It’s not fully formalized, but my current 80% estimate is something like five to 100 years.’ [8]

Speaking for myself, and despite the above, I’m not at all convinced that a Singularity will occur, i.e. one singular event that effectively changes history forever from that precise moment forward. From my (admittedly limited) research on the matter, it seems far more realistic to think of the future in terms of incremental steps made along the way, leading up to major and diverse changes (plural) in the way we as human beings - and indeed all sentient life - live. But try as I might, I cannot get my head around all of these occurring in one near-simultaneous Big Bang.

Surely we have plenty of evidence already that the opposite will most likely be the case? Scientists have been working on AI, nanotechnology, genetic engineering, robotics and the rest for many years, and I see no reason to conclude that this won’t remain the case in the years to come. Small steps leading to big changes maybe, but perhaps not one giant leap for mankind in a singular convergence of emerging technologies?

Let’s be straight here: I’m not having a go at Kurzweil or his ideas - the man’s clearly a visionary (at least from my standpoint) and leagues ahead when it comes to intelligence and foresight. I’m simply interested as to what extent his ideas are accepted by the wider transhumanist movement.

There are notable critics (again leagues ahead of me in critically engaging with the subject) who argue against the idea of the Singularity. Nathan Pensky, writing in 2014, says:

‘It’s no doubt true that the speculative inquiry that informed Kurzweil’s creation of the Singularity also informed his prodigious accomplishment in the invention of new tech. But just because a guy is smart doesn’t mean he’s always right. The Singularity makes for great science-fiction, but not much else.’ [9]

Other well-informed critics have also dismissed Kurzweil’s central premise, among them Professor Andrew Blake, managing director of Microsoft Research, Cambridge, Jaron Lanier, Paul Allen, Peter Murray, Jeff Hawkins, Gordon Moore, Jared Diamond and Steven Pinker, to name but a few. Even Noam Chomsky has waded in to categorically deny the possibility of such an event. Pinker writes:

‘There is not the slightest reason to believe in the coming singularity. The fact that you can visualise a future in your imagination is not evidence that it is likely or even possible… Sheer processing power is not a pixie dust that magically solves all your problems.’ [10]

There are, of course, many more critics, but then there are many supporters too, and Kurzweil rarely lets a criticism pass without a fierce rebuttal. Indeed, new interdisciplinary academic fields have been founded in part on the presupposition of the Singularity occurring in line with Kurzweil’s predictions (along with other phenomena that pose the possibility of existential risk). Examples include Nick Bostrom’s Future of Humanity Institute at Oxford University and the Centre for the Study of Existential Risk at Cambridge.

Given the above, and returning to my original question: how do transhumanists, taken as a whole, rate the possibility of an imminent Singularity as described by Kurzweil? Good science or good science-fiction? For Kurzweil it is the pace of change - exponential growth - that will result in a runaway effect - an intelligence explosion - where smart machines design successive generations of increasingly powerful machines, creating intelligence far exceeding human intellectual capacity and control. Because the capabilities of such a superintelligence may be impossible for a human to comprehend, the technological singularity is the point beyond which events may become unpredictable or even unfathomable to human intelligence. [11] The only way for us to participate in such an event would be by merging with the intelligent machines we are creating.
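
A small aside for the mathematically inclined: the name itself comes from the shape of the curve. If capability merely grows in proportion to itself you get exponential growth - fast, but never infinite - whereas if each improvement makes the next one proportionally easier, growth can diverge in finite time, which is the mathematical sense of a ‘singularity’. A toy Python simulation (with purely illustrative parameters, not anything Kurzweil himself proposes) shows the difference:

    # Two toy growth laws, with purely illustrative parameters.
    # dI/dt = k*I     -> exponential growth: fast, but never infinite.
    # dI/dt = k*I**2  -> hyperbolic growth: diverges at finite time t = 1/(k*I0),
    #                    which is the mathematical sense of a 'singularity'.

    def simulate(power, k=0.5, i0=1.0, dt=0.01, t_max=3.0):
        i, t = i0, 0.0
        while t < t_max and i < 1e12:
            i += k * (i ** power) * dt  # simple Euler integration step
            t += dt
        return t, i

    for power, label in [(1, "exponential"), (2, "hyperbolic")]:
        t, i = simulate(power)
        print(f"{label}: capability {i:.3g} at t = {t:.2f}")

On the second growth law the run stops near t = 2, the predicted blow-up time; on the first it just climbs smoothly forever. Whether real technological feedback looks like either curve is, of course, precisely what’s in dispute.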

And I guess this is what is hard for me to fathom. We are creating these machines with all our mixed-up, blinkered, prejudicial, oppositional minds, aims and values. We as human beings, however intelligent, are an absolutely necessary part of the picture - a part that I think Kurzweil sometimes underestimates. I’m more inclined to agree with Jamais Cascio when he says:

‘I don’t think that a Singularity would be visible to those going through one. Even the most disruptive changes are not universally or immediately distributed, and late followers learn from the dilemmas of those who had initially encountered the disruptive change.’ [12]

So I’d love to know what you think. Are you in Kurzweil’s corner, waiting for that singular moment in 2045 when the world as we know it stops for an instant… and then restarts in a glorious new utopian future? Or do you agree with Kurzweil but harbour serious fears that the whole ‘glorious new future’ may not be on the cards, and that we’ll all be obliterated by the newborn AGI’s capriciousness or by grey goo? Or are you a moderate, maintaining that a Singularity, while almost certain to occur, will pass unnoticed by those living through it? Or do you think it’s all so much baloney?

Whatever your position, I’d really value your input and would love to hear your views on the subject.

1. As stated below, the term Singularity was in use before Kurzweil’s appropriation of it. But as shorthand I’ll refer to his interpretation and predictions relating to it throughout this article.
2. Ulam, S., 1958, ‘Tribute to John von Neumann’, Bulletin of the American Mathematical Society, 64, #3, part 2, p. 5
3. Vinge, V., 2013, ‘Vernor Vinge on the Singularity’, San Diego State University. Retrieved Nov 2015
4. Carvalko, J., 2012, ‘The Techno-Human Shell - A Jump in the Evolutionary Gap’ (Mechanicsburg: Sunbury Press)
5. Kurzweil, R., 2005, ‘The Singularity is Near’ (London: Penguin Group)
6. Vinge, V., 1993, ‘The Coming Technological Singularity: How to Survive in the Post-Human Era’, originally in Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace, G. A. Landis, ed., NASA Publication CP-10129
7. Armstrong, S. and Sotala, K., 2012, ‘How We’re Predicting AI - Or Failing To’, in Beyond AI: Artificial Dreams, edited by Jan Romportl, Pavel Ircing, Eva Zackova, Michal Polak, and Radek Schuster (Pilsen: University of West Bohemia)
8. Armstrong, S., 2012, ‘How We’re Predicting AI’, from the 2012 Singularity Summit
9. Pensky, N., 2014, article taken from Pando
10. Pinker, S., 2008, IEEE Spectrum: ‘Tech Luminaries Address Singularity’
11. Wikipedia, ‘Technological Singularity’. Retrieved Nov 2015
12. Cascio, J., ‘New FC: Singularity Scenarios’, article taken from Open the Future


Gareth John lives in Mid Wales; he’s an ex-Buddhist priest with an MA in Buddhist Studies from the University of Bristol, and has studied the non-monastic traditions of Tibetan tantric Buddhism.


Very interesting focus of information, Gareth.

As far as I understand it, what would or will make the singularity so singular is the attainment by machines of the capability to advance their own evolution independently of human contribution.  I see no barriers to this event on the side of technical increases in processing power.  Where I tend to disagree is that there would be motivation for machines to undertake such a step any time soon, as this will require volition, something that so far is not in evidence anywhere in machine intelligence.  Only after machines acquire qualities usually associated with personality does such a development seem possible and likely.

Thanks René. I figure much the same, although who knows? It’s interesting (to me at least) how many divergent views are held in the transhumanist community on so many issues - one of the reasons it fascinates me. Still, I’d love to be around to see what happens with regard to AGI, so if they could just speed things up, please…

Ray Kurzweil is not always spot on time-wise, but he is usually only out by a few years. When he wrote ‘How My Predictions Are Faring’ five years ago, some things seemed a bit off, but when I re-read it back in October I realized many are now coming to fruition, so it’s well worth a read.

According to him, 89 out of his 108 predictions were entirely correct by the end of 2009. A further 13 were what he calls “essentially correct” (meaning that they were likely to be realized within a few years of 2009), giving a total of 102 out of 108. Even if he had scored 25% lower it would still be a pretty good rate of prediction.
Interestingly, I compared his predictions to those of Arthur C. Clarke from 1963 (‘The next 100 years according to Sir Arthur C. Clarke’, which are at this link). Ray has proven more accurate, but Clarke’s list is well worth a read too.

Thanks DrJohnty - those are great resources for me to check out and it’ll be interesting to see how they compare. It’s certainly given me food for thought and maybe I’m premature in my own thoughts around the accuracy/efficacy of Kurzweil’s predictions. Either way, thanks for the input and it’s good to make your acquaintance!

Great article Gareth. As to the question of whether/when the Singularity is going to happen, I see the ‘event’ as a consequence of evolution, hence it is inevitable.  There are so many loci of input (companies, universities, countries etc.), all pressing in an exponential fashion from so many fields of enquiry, that I find it inconceivable that a tipping point to sentient AI ‘birth’ - that is, the moment of self-awareness - won’t occur in the nearish future.  Of course, the great difference in this instance vs. all other precursor organisms that are self-aware is that the evolutionary learnings/adaptations that occurred over many millennia may well occur in the space of an hour - or even milliseconds - potentially with access to all human knowledge.  The striking of two rocks together by a Homo erectus person 1 million+ years ago for controlled fire simply doesn’t compare.

As to timing, my prediction remains in the nearish camp.  I’m inclined to think it will happen in my lifetime, and I’m 59.  But that it might take a century or more would not surprise me.

But I tend to think we are missing the point in focusing on having to adapt to a potential Singularity.  The exponential acceleration of technology, and hence of societal impact, that is at the root of Kurzweil’s compelling proposition will present challenges that might very well mean the conditions cease to exist to incubate the Singularity.  Simply put, I fear we won’t survive the Pre-Singularity, let alone the consequent event.  Obama’s State of the Union address spoke of the urgent need for a new leadership narrative to manage the challenges of our times.  I couldn’t help but think - he/they ‘ain’t seen nuthin’ yet.

Many thanks for asking the question!

Here’s a scenario I’ve been thinking about lately:

Singularities happen, but they don’t Change Everything, because they also have (metaphorical) gravity wells and event horizons.  Intelligences that “go into” a Singularity don’t come back out again, much less spread out and turn the Cosmos into computronium or otherwise act on a large scale in “default reality.”

One of the things that makes a superhuman AI (or a superhuman upload, cyborg, etc.) superhuman is that it would be thinking and experiencing at computer speed.  It would be able to “think circles around” any entity not likewise thinking at computer speed.  The downside of this is that for it, external “default” reality becomes very, very slow.  Imagine if it took you the subjective equivalent of a thousand years to walk across your living room.  Since the movement of matter is limited by physics (available energy for acceleration and deceleration, friction, etc.), the “lag time” for “default reality” is inescapable, and becomes worse as a cyber-entity continues to augment itself. 

As soon as it learned to daydream, it would develop the ability to create hyper-optimized virtual realities for itself and its friends, which would also “run” at computer speed, becoming incalculably more pleasant to engage with and accomplish things in than default reality.

The next issue to take into account is light-speed latency.  If, as seems very likely, superluminal signaling should prove to be impossible even for the most advanced imaginable cyber-beings (“c” isn’t just a good idea, it’s the law), and network effects continue to apply to communicating societies (an MMORPG or scientific community with a million members beats one of either with only four), cyber-entities will want to be as close to one another as physically possible.  For a being thinking at computer speed, the light-speed delay of a signal coming through a communications satellite would be noticeable, and a three-minute light-lag from Mars would be an eon.  The more advanced (i.e., the faster their ‘clock speed’) a community of cyber-entities becomes, the less light-speed lag they’ll be willing to tolerate.
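
To put rough numbers on that, subjective delay is just the wall-clock light lag multiplied by the cognitive speed-up. A quick back-of-the-envelope Python sketch (the million-fold speed-up is an arbitrary assumption, purely for illustration):

    # Subjective delay = wall-clock light lag x cognitive speed-up factor.
    # The 1,000,000x speed-up is an arbitrary assumption for illustration.

    SPEEDUP = 1_000_000
    SECONDS_PER_DAY = 86_400

    links = {
        "50 km of metro fibre (~2/3 c in glass)": 50e3 / 2e8,
        "geostationary satellite round trip":     2 * 35_786e3 / 3e8,
        "Mars at closest approach (one way)":     54.6e9 / 3e8,
    }

    for name, lag in links.items():
        subjective = lag * SPEEDUP
        print(f"{name}: {lag:9.4f} s wall clock -> "
              f"{subjective:15,.0f} subjective seconds "
              f"({subjective / SECONDS_PER_DAY:10,.1f} days)")

Even a satellite hop becomes a multi-day wait, and the three-minute Mars lag stretches to nearly six subjective years - and the faster the community’s ‘clock speed’, the worse it gets.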

In short, post-Singularity communities could tend to contract into spheres of ultra-miniaturized computronium substrate, or perhaps even a single such sphere.  Any intelligence offered a look inside would discover endless hyper-optimized worlds of humanly-unimaginable wonder, and would very likely find an invitation to join irresistible.  Messy, slow, cussedly intractable default reality (and the merely human beings that live there) couldn’t possibly compete in terms of appeal.  Inward, beyond the event horizon they go, never to return.

A Singularity sphere might likely get itself launched into space, where it could continue to exist for subjective eons with almost no maintenance (and what maintenance it needed could be tasked to sub-sapient robots), without having to worry about pesky weather or meddling by anti-Singularity beings.

Meanwhile, outside the Singularity, human life would continue.  Anyone who accepted too many cybernetic implants, decided to become an upload, etc. would get “sucked in” past the event horizon, leaving the “Mark 1.0” humans who “stayed outside of the gravity well” (religious objections, preference for low-tech lifestyles, inability to afford augmentation, whatever reasons might apply) living their normal lives. 

The “event horizon” would represent a barrier to two-way communication, as the unmodified humans outside the Singularity would think and talk ridiculously slowly by the standards of the Singularitarians inside.  Only augmented and/or uploaded humans would begin to be able to “catch up,” thus getting caught in the Singularity’s “gravity well.”

Aside: Singularitarians could conceivably attempt to interact with default reality by “putting themselves on pause” for specified amounts of time and “waking up” periodically to receive new instances of experience.  In terms of subjective experience, only the “un-paused” instances are experienced, so default reality could be experienced as “running faster.”  This has been proposed as a way for cyber-beings to maintain a communicating interstellar civilization without superluminal signalling, by synchronizing their “pause times” to allow time for light-speed transmissions between them to be experienced as instantaneous.

However, cyber-beings who used this method to interact with default reality would cut themselves off from the eons of subjective experience and communication in hyper-optimized virtual worlds they could otherwise be part of while they’re “paused.”  Thus, it could be argued that post-Singularity life will be very unlikely to choose that option, and will instead disappear (relative to default reality) behind the Singularity’s “event horizon.”

The Singularity is based upon the premise that technological growth is exponential.  Some call this obvious, while others dismiss it out-of-hand.  Recently, Ray Kurzweil responded to an essay by Paul Allen that was critical of the Singularity emerging by mid-century.  His retort was both interesting and informative:

“When my 1999 book, The Age of Spiritual Machines, was published, and augmented a couple of years later by the 2001 essay, it generated several lines of criticism, such as Moore’s law will come to an end, hardware capability may be expanding exponentially but software is stuck in the mud, the brain is too complicated, there are capabilities in the brain that inherently cannot be replicated in software, and several others. I specifically wrote The Singularity Is Near to respond to those critiques.”

I would recommend that everyone read (or listen to the audiobook of) ‘The Singularity Is Near’, the book in which Kurzweil goes into pretty elaborate detail on why he believes technological progress (Moore’s law in particular) is exponential.

Briefly, it comes down to the Singularity Feedback Loop, where technology creates intelligence, and then intelligence improves technology.  The Law of Accelerating Returns is like the miracle of compounded interest.

Again from that above article:

“Allen writes that “the Law of Accelerating Returns (LOAR)… is not a physical law.” I would point out that most scientific laws are not physical laws, but result from the emergent properties of a large number of events at a finer level. A classical example is the laws of thermodynamics (LOT). If you look at the mathematics underlying the LOT, they model each particle as following a random walk. So by definition, we cannot predict where any particular particle will be at any future time. Yet the overall properties of the gas are highly predictable to a high degree of precision according to the laws of thermodynamics. So it is with the law of accelerating returns. Each technology project and contributor is unpredictable, yet the overall trajectory as quantified by basic measures of price-performance and capacity nonetheless follow remarkably predictable paths…

Allen articulates what I describe in my book as the “scientist’s pessimism.” Scientists working on the next generation are invariably struggling with that next set of challenges, so if someone describes what the technology will look like in 10 generations, their eyes glaze over.”

Half of the technological progress in the 20th century occurred in its last twenty years.  Another 20th century’s worth of progress occurred from the beginning of the 21st century to today, and we can expect another century’s worth to occur in the next 8 years.  At this rate, it is very conceivable that the Singularity (i.e. the point when AI becomes smarter than humans, after which technological progress becomes more rapid still) will happen by 2045, which is Kurzweil’s prediction.
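
The compounding is easy to reproduce. Under Kurzweil’s oft-quoted assumption that the rate of progress doubles every decade (the figures below stand or fall with that assumption), cumulative progress measured in ‘year-2000-rate years’ works out like this in Python:

    # Cumulative progress if the annual rate of progress doubles every
    # DOUBLING years. The 10-year doubling time is Kurzweil's oft-quoted
    # assumption; the output is only as good as that assumption.
    import math

    DOUBLING = 10.0  # years for the rate of progress to double

    def progress(years_from_2000):
        # integral of 2**(t/DOUBLING) dt from 0 to years_from_2000,
        # i.e. accumulated progress in "year-2000-rate years"
        return (DOUBLING / math.log(2)) * (2 ** (years_from_2000 / DOUBLING) - 1)

    for horizon in (10, 20, 30, 50, 100):
        print(f"2000 to {2000 + horizon}: {progress(horizon):8.0f} year-2000-rate years")

On these assumptions a full century’s worth of year-2000-rate progress arrives around 2030, and the whole 21st century delivers on the order of 15,000 year-2000-rate years - the same ballpark as Kurzweil’s famous ‘20,000 years of progress’ remark, with the difference down to the exact doubling time assumed.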

KevinC - thanks for your comment. Very well thought out scenario and I really like this idea of an ‘event horizon’ for describing future AGI and augmented human potential. Also the idea of putting Singularitarians ‘on pause’ for protracted periods of time - waking up periodically to experience new instances of reality fascinates me. Much food for thought.

dobermanmac - Kurzweil’s ‘The Singularity is Near’ was one of my first primers on AI and emerging technologies and I still return to it now and again as I try to grapple with the implications if Kurzweil is indeed correct (-ish) in his premises. Love your quote: ‘Briefly, it comes down to the Singularity Feedback Loop, where technology creates intelligence, and then intelligence improves technology.  The Law of Accelerating Returns is like the miracle of compounded interest.’ With each new comment my understanding improves that little bit more. Thanks for the input.

More on the concept of putting oneself on “pause”: see Marooned in Realtime by Vernor Vinge. In that novel, going on pause is an effective way of dealing with space travel taking a long time. One misses out on a lot of history, but one doesn’t experience the tedium.

It will be an event horizon because no one single act-position-event will make us obsolescent and/or superfluous. Vernor Vinge might have run with an idea, but I prefer [shock] the notion that the singularity is the Big Bang [a quantum fluctuation according to some], the centre of a black hole [according to others] and - my pithy pronouncement - the bardo phase-change of life beyond itself at the time of death. Read some of the literature: the straight-up, top-down current model, whilst interesting, has a long way to go to be convincing. But the quest is worth it.

Rsworden- Will do!

almostvoid - Ditto!

Thanks both for the input.

Why would a volitionless superintelligence take over anything?  Put another way, how would an artificial intelligence obtain independent volition?  Computers adhering to the von Neumann architecture (nearly all of today’s machines) certainly exhibit no volition.

The answer to those questions is: computers will inevitably obtain independent volition when some curious human defines it and tells them to.

In regard to the certainty of Kurzweil’s Singularity, I believe that any open-minded person will agree that, given a machine of high human-level intelligence told to improve its own abilities, with access to the necessary materiel, a superintelligence - eventually an unfathomable one - will certainly appear.

The existence of superintelligent entities is not alone sufficient for the Singularity to occur, not unless they have access to the wide world and manage their own reproduction.  But such contact and direction seem inescapable.  At least no certain method of preventing either has yet been imagined.  Whether they actually should be prevented is another matter.

_A priori_ a program with human-level intelligence running at machine speeds must be constructed.  Arguments against the Singularity seem to depend on denying that possibility, in particular denying the development of artificial sentience.  All the arguments lack rigor.  The common thread of objection is, “It has never been done,” which is a typical response to every new idea.  This makes me wonder at the deniers’ motives.  I conclude that they feel threatened, one way or another.  They don’t _want_ it to be done!

Well, I do.

risj - I agree with you entirely. I have no doubt that the ‘Singularity’ will occur - although I perhaps have a different perspective from Kurzweil’s on how/when that may happen - and I agree that it seems likely that denialists feel threatened by the probability that we will have a superintelligence in the near future. They do have a point - computers will sooner or later attain volition and, as you say, after that the future of machine life may well become unfathomable. It’s a hard sell to many scientists, let alone the general public.

Like you, I too want it. And more importantly, I want to be part of the discussion we should be having _now_ about these issues, so that we can prepare for the future before the future becomes _now_. Whether it will help is not the point - we need to debate both promise and risk so that we can at least try to prepare for the ‘unfathomable’.

Thanks for the input. Really interesting and informative.


Thank you very much for your response.  Obviously you are very intelligent, according to the principle that a man’s intelligence is directly proportional to how closely he agrees with you!

You “perhaps have a different perspective [from Kurzweil] on how/when [the singularity] may occur.”  It’s the “how,” at least in the trend of current development, that concerns me.  Researchers brag about the accomplishments of “deep learning” while admitting they neither know nor expect to know, even conceptually, exactly what is happening within those tall stacks of decision networks.  Indeed they should worry about predicting, much less controlling, a “deeply learned” machine when it is released into the wide world.

Rule-based designs do have that advantage: you know what happens at each step.  And why.  My own research into contextual analysis of a natural language—by far the best exemplar of intelligent interaction—suggests that rules can be used deductively as well as inductively.  You just need an immense number of them arrayed in long decision trees, which of course is the problem.  What human organization has the patience (or funds) for such development?  But what if it has significant computer assistance?
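
To make concrete what I mean by rules arrayed in decision trees, here is a deliberately trivial, hypothetical Python fragment - an illustration of the approach, not my actual system - that disambiguates a single word from its context:

    # A deliberately trivial, hypothetical illustration of rule-based
    # contextual analysis: hand-written rules arrayed in a decision tree.
    # A real system needs immense numbers of these, which is the cost at issue.

    def disambiguate_bank(words):
        """Pick a sense of 'bank' from crude contextual rules."""
        context = set(words)
        if context & {"river", "shore", "fishing"}:
            return "bank (terrain)"        # rule 1: geographic context
        if context & {"loan", "deposit", "teller"}:
            return "bank (institution)"    # rule 2: financial context
        return "bank (unresolved)"         # fall through: more rules needed

    print(disambiguate_bank("we sat on the river bank".split()))
    print(disambiguate_bank("the bank approved the loan".split()))

Every step is inspectable - you know what happened, and why - but covering real language this way takes an immense rule inventory, which is exactly where significant computer assistance would have to come in.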

We may disagree on one point.  You strongly advocate a “discussion we should be having _now_ about these issues ...”—presumably about the likelihood and timing of AGI.  For other than entertainment and possibly cross-pollination of ideas, it would resemble the arguments about Stanley’s steam-powered flyer when a power source of lighter weight turned out to be required.  Just now we have no promising approach even to plan an AGI.  Without one all discussion of restraining it is premature, to say the least.  What we really need first is imagination.

Have you ever considered the sheer implausibility of Einstein’s realization that the sun’s gravity would apparently reposition the stars behind it, and that of his followers who understood that the same effect would make black holes visible by ringing them with distorted light?  Imagination of that caliber is what we need!

Thanks risj - how can I disagree with you now given the above principle!

And in essence I don’t. I think we’re all roughly on the same team. However, the one qualification I would add is that for me the discussion isn’t about the ‘likelihood and timing’ of AGI - that’s for builders, cognitive scientists etc. to manage (and I don’t think lack of imagination is necessarily the problem there) - but rather about the ethical and moral consequences of creating AGI in the first place. And I agree that it _may_ be premature in terms of knowing what we will be dealing with, but I don’t think it’s ever too early to start using the imagination to consider potential scenarios and how we might approach them.

To my mind, it’s precisely the conversation we’re starting to have here!

I now believe that the singularity already happened!

(Shortened) Time line of important transitions:

1. Big Bang;

2. Origin of life;

3. Prokaryotes;

4. Eukaryotes;

5. Multi-cellular organisms;

6. Super-organic (i.e., communities);

7. Man;

8. Neolithic;

9. Urban revolution;

10. Age of exploration - beginning of globalization;

11. Telegraph;

12. Internet;

13. Now - almost all of ~7 billion humans connected instantaneously via the cloud.

14. The future will involve the slow replacement of the carbon units infecting the cloud by alternative technology! (The Elite are already not replacing themselves!!)

racquetballer46 - Or maybe it is ‘happening’ as we speak. Whatever the case, it’s an exciting future!
