Speculating Around Speculative Roadblocks to the Singularity
Lincoln Cannon   Jul 31, 2012   Lincoln Metacannon  

In his book “Physics of the Future”, Michio Kaku outlines six roadblocks to the Singularity. The roadblocks are at least as speculative as the technological singularity, and we can reasonably speculate our way around them. Below are Michio’s proposed roadblocks, followed by my thoughts.

Michio: “First, the dazzling advances in computer technology have been due to Moore’s law. These advances will begin to slow down and might even stop around 2020-2025, so it is not clear if we can reliably calculate the speed of computers beyond that . . .”

Lincoln: The idea that advances in computing tech will slow down (or stop) in 2020-2025 is no better warranted than other skeptical and Luddite predictions. The facts of the matter are: (1) we have observed exponential advances in many aspects of computing tech for many years; and (2) different persons point to different reasons to predict the trend will continue (or not) in the future. I suspect the trend will continue for a long time, but even if it continues only for a few more decades, we’re in for an unprecedented wild ride.

Michio: “Second, even if a computer can calculate at fantastic speeds like 10^16 calculations per second, this does not necessarily mean that it is smarter than us . . . Even if computers begin to match the computing speed of the brain, they will still lack the necessary software and programming to make everything work. Matching the computing speed of the brain is just the humble beginning.”

Lincoln: Even if humans calculate slower than computers, this does not mean humans are smarter than computers. Speed of calculation is an aspect of intelligence, as are quantity of calculation and algorithms. Computers already are beating humans in each of these areas in various ways and to various extents. Did you watch Watson win Jeopardy?

Michio: “Third, even if intelligent robots are possible, it is not clear if a robot can make a copy of itself that is smarter than the original . . . John von Neumann . . . pioneered the question of determining the minimum number of assumptions before a machine could create a copy of itself. However, he never addressed the question of whether a robot can make a copy of itself that is smarter than it . . . Certainly, a robot might be able to create a copy of itself with more memory and processing ability by simply upgrading and adding more chips. But does this mean the copy is smarter, or just faster . . .”

Lincoln: A robot doesn’t need to make a copy of itself that is more intelligent. It only needs to become more intelligent and make copies of itself. Robots can already make copies of themselves. Robots can already learn. We should expect their abilities in both areas to continue to improve. At the least, if we (and robots) continue to improve algorithms, robots will make themselves smarter the same way that humans became smarter: evolution. The principles and creative power of natural selection apply to technological evolution, and its inheritance-variation-selection cycle runs orders of magnitude faster than biological evolution’s.
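The inheritance-variation-selection cycle Lincoln invokes can be sketched as a toy genetic algorithm. Everything here is illustrative (the target string, mutation rate, and population size are arbitrary choices, not anyone’s actual system); it only shows how copying, random variation, and selection compound into directed improvement:

```python
import random

TARGET = "more intelligent robot"   # arbitrary stand-in for a design goal
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(genome):
    # Selection criterion: count of characters matching the target.
    return sum(a == b for a, b in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Variation: each character has a small chance of changing.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in genome)

def evolve(pop_size=100, generations=500):
    # Inheritance: offspring are mutated copies of the fittest parents.
    population = ["".join(random.choice(ALPHABET) for _ in TARGET)
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if population[0] == TARGET:
            break
        parents = population[:pop_size // 5]
        population = [mutate(random.choice(parents)) for _ in range(pop_size)]
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))
```

Each generation inherits from the fittest fifth of the previous one; mutation supplies variation; sorting by fitness supplies selection. The same three ingredients, run in silicon, cycle millions of times faster than their biological counterparts.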

Michio: “Fourth, although hardware may progress exponentially, software may not . . . Engineering progress often grows exponentially . . . [but] if we look at the history of basic research, from Newton to Einstein to the present day, we see that punctuated equilibrium more accurately describes the way in which progress is made.”

Lincoln: Software is already progressing at faster than exponential rates. Despite bureaucracy, even the government reports algorithms are beating Moore’s Law.

Michio: “Fifth, . . . the research for reverse engineering the brain, the staggering cost and sheer size of the project will probably delay it into the middle of this century. And then making sense of all this data may take many more decades, pushing the final reverse engineering of the brain to late in this century.”

Lincoln: Progress in reverse engineering the human brain will probably match our experience with mapping the human genome: slow beginning to fast ending, as tools continue to improve exponentially. Because of the nature of exponentials, even if estimates of the complexity of the human brain are off by a few orders of magnitude, the additional amount of time required will be measured in decades - not centuries or millennia.
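As a back-of-envelope check on the “off by a few orders of magnitude” point: under steady doubling, each extra factor of ten in required capacity costs a fixed number of additional doubling periods (log2(10) ≈ 3.32), so the delay is additive, not multiplicative. A minimal sketch, assuming a hypothetical two-year doubling period:

```python
import math

DOUBLING_TIME_YEARS = 2  # assumed Moore's-law-style doubling period

def extra_years(orders_of_magnitude):
    # Each extra 10x of required capacity costs log2(10) ~= 3.32 doublings.
    return math.log2(10) * orders_of_magnitude * DOUBLING_TIME_YEARS

# Underestimating brain complexity by three orders of magnitude
# adds roughly two decades, not centuries.
print(round(extra_years(3)))  # prints 20
```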

Michio: “Sixth, there probably won’t be a ‘big bang’, when machines suddenly become conscious . . . there is a spectrum of consciousness. Machines will slowly climb up this scale.”

Lincoln: Machines may already be conscious to some extent. How do you know I’m conscious? Prove it.

Lincoln Cannon is a technologist and philosopher, and a leading advocate of technological evolution and postsecular religion. He is a founder, board member, and former president of the Mormon Transhumanist Association. He is a founder and advisor of the Christian Transhumanist Association. And he formulated the New God Argument, a logical argument for faith in God that is popular among religious Transhumanists.


Lincoln, I agree with all of your rebuttals to Michio. While I am pro-technology, and make my living from it, I have one concern.

As automation and AI improve, they will increasingly replace the bulk of the population’s jobs. I don’t think it’s feasible to turn every person on the planet into an entrepreneur. At some point, the negative economic pressure may lead to a slowing of investment, or a public rebellion against technology.

Do you think, as Peter Diamandis does, that technology will solve its own problems, and we just need more innovation, or do you believe change is needed to our core institutions to permit an advanced technology to coexist with humanity in a beneficial way?

Michio Kaku may have opted for a populist stance in his continual scepticism about timescales. I must admit he used to frequent UK BBC documentaries, which were more optimistic in the past than they are presently - to every man (and scientist) his own opinions? His career path as a theoretical physicist was spent seeking a unified theory, and he appears to be more of a science protagonist and presenter these days, which explains a lot to me.

Moore’s Law may not be so important, nor specifically applicable to computer science and technology, in the very near future, given the forthcoming advent of quantum computing, which will not rely upon customary computer processing/processors and manufacturing.

Processing speed is everything? For a system which employs high data-crunching speeds and negative feedback, creativity and intelligence may be merely an emulation. Yet as you highlight, what more are we humans anyhow but biological machines with a high subconscious speed of processing (and slow formal ideas of free will and idea creation)? What more is creativity but choosing among the best and worst of trial and error, mistakes, and experience, “swiftly” learning from these and committing them to memory storage?

What then is mind, even? Merely that aggregate of functionality, processing, and memory? Intelligence is not “a thing in itself” and refers to nothing substantial or real without mind.

The success of any CEV would seem to rely upon sufficiently high processing speed, and any global crowd-sourcing online network which would aid and source biological “human creativity” on a global basis would also need to process at very high speeds - and I do believe that any global problem may be solved in this manner and with this methodology?

A machine of the near future should certainly be able, and with absolutely no free will be “willing”, to excel as the saviour and servant of all our technological woes - yet only if we humans so choose?

“Do you think, as Peter Diamandis does, that technology will solve its own problems, and we just need more innovation, or do you believe change is needed to our core institutions to permit an advanced technology to coexist with humanity in a beneficial way?”


Moral Enhancement
“Julian Savulescu and Ingmar Persson argue that artificial moral enhancement is now essential if humanity is to avoid catastrophe.”



I was thinking more about the problem that as automation increases productivity, the amount of labor required will decline until there is a disruption in the demand for goods.

Additionally, there is an inherent pressure in capitalism to resist abundance because it has a negative impact on profits. Supply must be kept scarce to maintain a price.

Kelly, I’m sure technology will continue to change how we work, but it won’t take away the need for work. Technology itself is work, and it’s a prosthetic, so we’re working through our technology. Of course, a particular prosthetic tends to decrease in cost over time, so persons will pay and be paid less for it (less for a particular kind of work) over time. However, we will develop new prosthetics that persons will pay for and be paid for, at least so long as the present notion of payment persists.

That said, I reject the idea that technology solves its own problems. Technology is simply power, to be used for good or evil, and expanding the possibilities for both. As you suggest, we also need to improve our political and cultural organizations (technology can help, but only in a feedback loop) to define and mitigate risks and to define and pursue opportunities.

CygnusX1, I agree that moral enhancement is essential to positive futures (as outlined in the Benevolence Argument of the New God Argument), but moral enhancement can only ever be done in a feedback loop stemming from “unenhanced” morality. Technology alone is not the solution. Technology will only empower our esthetics and ethics, whatever they may be.

Seems to me there is only one real “roadblock” regarding the singularity: the assumption that brains and minds are inherently computational. Computation is a dynamic arrangement of matter and/or energy whose highly constrained set of states encodes representational content. Thinking the mind/brain is representational is problematic because it seems to imply two possibilities: (a) there is a homunculus that interprets these representations, but the homunculus can’t be a representation itself; or (b) brain states (if there are such things) are causally linked to the world and susceptible to determinism. In short, the mind/brain is simply a mechanical system whose every state was determined at the moment of the big bang. If you do subscribe to determinism, then the representation of a mind == a mind; neither has any free will or causal power that is not reducible to inputs.

If you do believe that our minds have the ability to make choices (and not just select actions based on random variation or determinism), then a representation and reality can never be confused. As Searle noted, just because a weather model says it’s raining does not mean that it is actually raining; just because a computer models a mind does not mean that it thinks, let alone is conscious.

B, the determinism issue may be a red herring, since it may be that nothing (not even the stereotyped computer) is deterministic in our universe, even if some things approximate determinism for most practical purposes. Beyond that, have you looked at Hofstadter’s books, such as “I Am a Strange Loop”? He has some interesting things to say about feedback loops in minds representing minds.

@b.: I’m not sure why you think that mental representation entails homuncular infinite regress. If my brain state reliably correlates to a given stimulus, then it can be reasonably said that that brain state “represents” that stimulus in some sense. Maybe we mean different things by representation.

Thanks for your comments Lincoln and SHaGGGz,

1. Determinism: Personally, I’m not a determinist. As far as I understand the alternative is quantum randomness, which has already been argued is no better than determinism in terms of “free-will”. Now while I don’t think that free-will is really ‘free’ (unconstrained), I do think that one cannot predict the behaviour of an animal even if you control all the input variables. In short, I think there is something special about agency/consciousness/will that cannot be reduced to the causal factors that impact an organism over its life.

2. Feedback: Thanks for the book reference, I’ll put it on my (long) list. If I understand the premise, it seems to be about consciousness as a special kind of feedback? Seems to me, even if consciousness is feedback (or reentrant according to the Tononi and Edelman proposal), there is still no causal power there (all behaviour can still be predicted on the basis of input variables over the organisms life). If you do think that consciousness is just an “illusion” and all the real work is done by unconscious cognitive processes, then the thought that consciousness would have any causal power on its own does not make any sense. I don’t think that consciousness is nothing more than an attentional buffer with little functional value beyond constraint.

3. Representation: You are totally right, SHaGGGz; all the brain-reading stuff is just that: a correlation between an input stimulus and the state of some region of the brain. Perhaps we agree if we think of brain states as just states, not necessarily as having “meaning” but rather only a causal role in the brain. I think the problem lies in assuming that these correlations have meaning for some part of the brain itself (which seems to imply a homunculus), rather than simply a causal state. This only works for the sensory and motor parts of the brain, though, because no correlation is possible for thinking states or mental images: there is no objective measure of those phenomenological experiences to correlate with brain states.

There are causal conceptions of meaning, that X out there means X because X impacted the brain in such a way. This assumes a deterministic and mechanistic mind whose behaviour can be reduced to its inputs, because all meaning comes from the world and causally impacts the system: the mind/brain simply mechanically responds. Randomness and feedback do not change that behaviour would be reducible to inputs (if randomness is considered one of the input variables). As stated above, even if variation arose from randomness, I don’t think that is any improvement from determinism, as the agent/consciousness/mind/brain would have no causal power beyond the sum of its inputs.

I prefer a Merleau-Ponty conception of embodiment, where meaning is not a result of the world causally impacting the agent, but where both the agent and the world engage in a mutual action that results in meaning. (We construct the world while the world constructs us).

So I put to you, is consciousness simply an illusion? Is the mind/brain a (however re-entrant and/or random) mechanical system whose behaviour is reducible to its inputs (including randomness)? Does consciousness have no causal power and is it just a spot light on mechanical goings-on?

I’m trying to think through whether it’s possible to believe in the singularity while answering “no” to any of these questions, because it seems illogical to think a mind could be computational unless a mind is considered a mechanical system that is predictable as long as all the variables are controlled. If you think it’s idiotic to think that consciousness could be anything else (i.e. have its own causal powers beyond the response to inputs), then my points here will make no sense at all.

@b.: Yes, I do think that consciousness is an epiphenomenon. However, I fail to see how your preferred Merleau-Ponty conception differs from the description of consciousness as determined (either by non- or random antecedents), save the flowery prose of mutual construction.

I’m not sure I follow your logic regarding the possibility of the singularity as it pertains to the truth of eliminative materialism.

SHaGGGz: Since you’re on the “consciousness is an epiphenomenon” side, I don’t see how any argument I could make would be seen as logical in the context of that position. Perhaps I am reaching a bit with making a link between Merleau-Ponty phenomenology and determinism. My central aim is thinking through an escape from the dichotomy between materialism and solipsism.

Regarding the singularity, I admit this is a new area for me. I think of it in two ways, and I’m open to correction:

1. The transhumanist notion of uploading our consciousness into machines.
2. The total merging of body and cognition into technology systems.

It is 1 that I take issue with (from where my comments above arise). As for 2, I think that discussion is totally moot. Simply because of our implicit learning of technologies, and our ability to be conscious of the content over the interface, technologies are already merged with us. It has even been argued that we owe even our evolution to our use of technology (see “The Artificial Ape”). Under this view, the singularity has already passed, or even more interesting, we have always been both natural and technological, and therefore the singularity has always been here (as long as we as technology creators/users have been, however you measure that).

The view that the human brain is the ultimate computing device is absurd. It was inadvertently engineered by trial and error over tens of thousands of years. As has been demonstrated repeatedly, mankind can engineer better replacement parts for the human body than evolution designed. Why not a better mechanical “brain”?

It is the same blinkered view that led virtually all of my chess club, a few decades ago, to predict that no computer would ever beat the best human at chess. I mean, there were already computers out that could beat most amateurs, so what psychological barrier stopped them from imagining what would soon happen?

The Singularity is coming, and it will be a surprise to most people.

dobermanmac, do you have any specific points of argumentation regarding simulation and modelling vs reality?

1. Ten thousand years? How different do you really think the macaque brain is from ours? How many years of evolution has the brain been through? The bottom line is that we are here only because of 100,000(+?) years of beta-testing. The longevity of our technology is literally a drop in the bucket (and that is being generous). None of us knows what will happen to us in 100,000 years, and you and I will not be around to care. Bioengineered stuff may last longer, but that kind of technology is a different issue.

2. Define “better”. Sure we can engineer parts for the human body, and they do perform better when measured against very specific criteria. There is no doubt that our constructions can perform better than evolved ones, but only if you consider a very precise sense of “better”. Do they require as little maintenance? Do they last as long? Do they require as much energy? Are they as flexible? How well do they function after failure? Do they self-repair? To what degree? What are their carbon footprints? How much water is used in their production? How many people were required for their production? How much do they cost? Who can afford them? (This is just off the top of my head)

I will be surprised, and very impressed, if in my lifetime we have prosthetic limbs that can be implanted in utero, use no more calories nor add any more mass, grow and gain flexibility and dexterity over the person’s childhood, and remain functional for the person’s lifetime.

The whole modelling discussion was about criteria for validity (how you know you succeed), and all technical constructions are validated the same way. Without reflecting on what we mean by “better” (or “intelligent”, “conscious” or “alive”) we are at the mercy of those who stand to gain from our ignorance.

b., simulation and modelling vs reality, huh? I am a functionalist, and as such I believe a thing ought to be judged by how well it works. Why ought simulation and modelling be judged by how well they mimic reality? Frankly, reality isn’t that great - it is boring and filled with compromises.

1. The “longevity of our technology” is not an issue - evolution is imperfect trial and error.  Given the very large range of different options available to mindful engineering, a better design is not only predictable, it ought to be fairly easy.  Furthermore, evolutionary trial and error took place in an uncontrolled environment, whereas any artificial design ought to be judged based upon a more controlled and limited environment, and therefore can be idealized for those more limited parameters.

2. I define “better” like the Supreme Court defines pornography: I know it when I see it. You can be argumentative about what makes something “better,” but it is much more cut and dried than the hair-splitting of a philosophical discussion. By the way, have you heard of the artificial red blood cells (as an example) that are in animal testing now? It is said that if you replace just 10% of your natural red blood cells with these artificial ones, you can take a deep breath and stay at the bottom of a pool for over an hour! That is “better,” my friend.

To summarize, I wouldn’t idealize “natural,” nor would I obfuscate “better,” because that is just being argumentative. Furthermore, I don’t agree that the whole modelling discussion was about the criteria for validity, because it isn’t really that hard to judge success, all things being equal. Don’t over think it b.

dobermanmac, you are right that we have to judge things by how well they work in a constrained environment, because that is all we have to work with, and it’s better than no judgment at all.

1. My point is simply that thinking that success in these constrained environments is the same as success on an evolutionary timescale is folly. While it could be, there is no way of knowing without doing the testing, and I would expect the next 100,000-year test to fail. Our use of resources and overpopulation alone indicate this.

Mindful engineering is the application of our concepts and ideas (culture) to the construction of useful artifacts. It is therefore limited by our culture. The very fact that (biological) evolution is uncontrolled is what makes it powerful. Rather than testing in a controlled and limited environment, which is dependent on cultural norms, money, pragmatics, etc., testing is done over an extremely long time-scale in an environment where almost anything can happen. This is the real test, anything else is a poor substitute, because of cultural/conceptual limitations.

Now there are computational evolutionary systems that build electronic components like circuits and antennas. These can perform better (of course, within those limited constraints) than anything thought up by a person (in some cases engineers can’t even understand how they work). This is because, rather than being dependent on cultural conceptualizations of electrons and current flow, etc., they are random but well simulated. The evolutionary algorithm does not care about how we think about electronic engineering; it just does what appears to work. It is not limited in terms of conceptualization, but it is limited in terms of the fidelity of the simulation. You don’t know if you threw away something important without real-world testing; the longer the testing, the more sure we can be that the design is appropriate. The problem is that the function of a circuit or antenna is extremely limited, while the functions of living things are not. What is the function of a living thing other than survival?

2. “Don’t over think it b.”, I thought thinking was the whole point! (other than survival perhaps). Ok, so my biases may already be slightly obvious from the above writing. I’m trained enough in cultural theory to know about the flaws and biases in ideas and concepts about the world. We are born in a certain context, are socialized to think a certain way, and we internalize ways of thinking and seeing the world. It has been very well established that how we think about the world (expectations) changes how we perceive it. How we see the world is highly flawed, objective methods do help, but our interpretations of measurements are always done in the context of how we think about the world. How we think about the world is not the same as how the world is, we are intensely fallible.

You may call it “over thinking”, or being “philosophical”, or “obfuscat[ion]”, but what I’m attempting to do is unpack assumptions and perform a critical analysis. Sure, it’s inconvenient for the ray of technological progress pointing out into infinity, but it could certainly be argued that there can be no innovation without argumentation (Hegel). I’m not saying that nature is always better, but I am saying that the very concept of “progress” is extremely complex and problematic, as it (a) requires a limited and artificial environment for validation, and (b) has no single definition (you can tell who the stakeholders are by how “progress” is constructed). Today progress seems to mean little more than stratification, money, and constructed objects - not more ethics, happiness, fairness, stability, art, or knowledge. I can see why it may appear that I idealize nature, but it also appears that you idealize technological progress and human engineering ability. Both are problematic. (Actually, if you look closely enough, you realize that everything is problematic! This is because we always see through our own biases, and our own biases are always fallible. Perhaps the only progress we can make results from unpacking and reflecting on our individual and cultural biases.)

Great and well-thought-out posting, b. I had to re-read it multiple times to totally absorb it - something that rarely happens.

This is the problem (and I don’t mean to be insulting, since you are obviously very smart and sincere): people’s fetishism of evolution (i.e. Darwin’s natural philosophy).

Thank you very much for bringing up the concept of “computational evolutionary systems.”  I have had LONG discussions with a master computer programmer friend of mine who doesn’t believe that Darwin was correct about how life (majorly) evolved on Earth. Certainly evolution (i.e. ‘Darwin’) is correct to some extent - just a quick look at the micro level will convince.

OTH, looking at end products on Earth (i.e. what species are alive today), it isn’t clear there wasn’t more than just the “invisible hand” of evolution at work.  You are probably familiar with the “finding the wristwatch on the beach” example (i.e. you wouldn’t think that artifact “evolved,” but instead would naturally think it was designed and built with intelligence).

My friend instead uses “computational evolutionary systems” to prove the point.  Since they go through a tremendous number of cycles, they can (somewhat) mimic natural evolution.  Through viewing and analyzing the dynamics of (for instance) neural-nets he has come to the conclusion that natural evolution’s (i.e. Darwin) end product would be impossible without a visible intelligent hand.

I personally attribute the helping hand to “quantum entanglement” (i.e. our microcosm mirroring the macrocosm). He wants to bring God (or extraterrestrial life) into the equation. OTH, it could simply be that life is canny enough to self-evolve (which brings me to the point I wanted to address in the first place).

Design/engineering is almost always a compromise. Better on water - worse on land. Can fly - can’t run well, and is too light and vulnerable for heavy battle. Dumber - more psychologically hardy in horrible environments. You get the general idea.

For instance, sharks kick @ss, but when the ocean water turns bad they have nowhere to go. Or earthworms have found a pretty good environment, but when it rains they can’t breathe, and they crawl onto the sidewalk and rot.

What I am trying to say is that you can’t engineer for every possibility, unless the organism is self-adapting.  If you posit a self-adapting organism, then you will have trial and error with a visible hand guiding/helping evolutionary improvement/adaptation.  This can surmount the main cause of extinction: a changed setting that the organism hadn’t evolved to deal with.

Yes, cultural bias and other factors come into play when conceptualizing designs and alterations. OTH, as you know, miles and miles of trials tend to get the bugs out. Furthermore, any design can get lucky or unlucky, and any result is only as relevant as the setting you are testing it in.

To summarize, evolution (i.e. Darwin) is an imperfect, chancy, and ultimately flawed way to produce an “ideal” design (whatever that means). Better would be a computational evolutionary system, where the number of trials and the settings could be greatly increased and varied/refined. But, in my opinion, the best would be a design that adds a visible hand to adaptation/refinement.

What I am ultimately saying is that evaluation of function is a much more objective standard than whether an organism survives a finite number of trials, and the best and most efficient way to design a highly functional “algorithm” would be both to expose it to more trials and to intelligently adapt it. In other words, natural selection isn’t the best way to come up with the best design, because it is random and encompasses limited trials under limited circumstances.

dobermanmac, thanks for your comments.

I think we’ve long since left the topic of this article, so I’ll attempt to keep this short.

Due to the limitations of the simulated environments, and the simplicity of designs that are evolved, I’m not surprised people find evolutionary algorithms so weak as to count against Darwinian evolution. I think this counts more for my point regarding simulation as a weak stand-in for reality.

Indeed, with multiple criteria for success (which all designs require in practice), compromise (or equilibrium) is central, both for evolution and for human-made design.

I have to ask what you consider the atom of life. You seem to be concerned with the success or failure of a specific organism, but what is an organism but a test-bed for the testing of genes? Yes, the organism may become extinct, but many, many life-forms contain the same genes, and thus those genes will survive in the most successful organisms. It does not matter whether the organisms survive, as long as the genes do. Perhaps the reason we have so much genetic overlap with other creatures is that those genes have been deemed the most successful (at this moment in time), not in one organism/environment, but in many.

1. Everything is imperfect, natural or artificial. Perfection implies a lack of conflict, and therefore a lack of equilibrium. Evolution would not work without conflict.

2. Chancy: I’m not sure what you mean here, fallible, risky? Seems to apply to both artificial and natural “design” to me.

3. “Ideal”: If an ideal design is a design that is highly successful for particular criteria in a particular context, then only artificial constructions can be ideal. There is no such thing as “ideal” in evolution, only very well tested.

4. “Natural isn’t the best way to come up with the best design, because it is random and encompasses limited trials under limited circumstances.” If you take a computational perspective, randomness is just highly distributed breadth-first search. Rather than refining the tiny details of one thing, you refine the details of many things at once. This is really just parallelism.

I don’t see how natural evolution can be “limited trials under limited circumstances”. That is exactly what artificial design is, and what evolution is not. There are no “trials” in evolution, only *continuous* testing. Circumstances are limited in the natural environment only from the perspective of a single organism; from the perspective of genes and multiple organisms, the circumstances range across all of those possible on earth (and even beyond).
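That computational framing can be made concrete. Below is a toy genetic algorithm in Python (the function names, parameters, and fitness criterion are all my own invented illustration, not anything from the thread): many candidate designs are tested at once, and individual genes outlive the organisms that carry them via crossover.

```python
import random

random.seed(0)  # for reproducibility of this toy run

def evolve(fitness, pop_size=50, genome_len=10, generations=100):
    """Minimal genetic algorithm: a whole population is tested in
    parallel, and genes (bits) persist across organisms via crossover."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]           # selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, genome_len)
            child = a[:cut] + b[cut:]             # crossover: genes outlive organisms
            if random.random() < 0.05:            # occasional mutation
                i = random.randrange(genome_len)
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy single-criterion fitness: count of 1-bits ("one-max").
best = evolve(fitness=sum)
print(best)
```

With a multi-criteria fitness function, the same loop finds compromises (equilibria) between conflicting criteria rather than a single optimum, which is the point about conflict above.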

In summary:
(a) “Intelligent” design is necessarily short-sighted. A design needs to satisfy the needs of a particular context for a particular function. It is culturally specific and limited to the ways in which “intelligence” is structured and conceptualized (as object) and structures and conceptualizes (as subject). Only this mode can construct ideal designs, because “ideal” requires a contrived context of use.

(b) “Natural” evolution is necessarily conflicted and limited to equilibrium, never ideal. There is no context for a particular function (other than (gene) survival). There is no culture or conceptualizing, only biology. This mode allows the survival of genes, through massive parallel testing in multiple contexts beyond the life-time of any one culture. These 1000 years don’t matter, we’re aiming for 100,000 years.

This is not to say that our intelligent design cannot create something that could survive (like a new gene); this is certainly possible, but knowing whether it would survive requires the kind of autonomous long-term testing that evolution does best (best because of its lack of (preconceived) constraint, unlimited trials, and massive parallelism). As in, the only constructions we could build that could survive would have to be alive (reproducing, autonomous, self-repairing, adaptable, diverse, and in equilibrium with the environment).

Yes we will absolutely change our lives in very significant ways due to our intelligent designs. These will change both us and the world around us. This is obvious, and may be the way things have always been. Our intelligent design has most likely contributed to our “natural” evolution. (See

My central thesis (beyond representations vs reality) is that the singularity is moot because it has already passed. We are already both artificial and biological/natural. We already drive our own evolution through our constructions; we already think, communicate and live through and in our constructions. Maybe we have always been technological (changing the world) and natural (changed by the world). I think (a) and (b) are two perspectives on the same thing, both inseparable and integral to one another. I’m not trying to elevate nature and drop technology (it appears that way only in my attempt to find equilibrium); I’m attempting to find a space in between this artificial dichotomy. By choosing one over the other, they both lose.

An example of “artificial” being “better.”

“My central thesis (beyond representations vs reality) is that the singularity is moot because it has already passed.”

“Maybe we have always been technological (changing the world) and natural (changed by the world). I think (a) and (b) are two perspectives on the same thing, both inseparable and integral to one another. I’m not trying to elevate nature and drop technology (it appears that way only in my attempt to find equilibrium); I’m attempting to find a space in between this artificial dichotomy. By choosing one over the other, they both lose.”

Yes, here is our central disagreement.  I AM elevating technology over natural.  The Singularity hasn’t already passed.  This isn’t an artificial dichotomy.

Again, I judge things based upon function.  The Singularity is creating an artificial mind that goes way beyond the natural in the dichotomy.  Even if you abstract the “dichotomy” to ‘changing the world vs. changed by the world’ (which I believe misrepresents natural vs artificial evolution), the Singularity is the ultimate changing the world, and bears no relationship with changed by the world (except in the sense of yin/yang, where changing the world is negated).

Obviously the artificial is by definition better.  Look at the above article.  To come full circle and bring it back to the above article: Michio is simply raising (sophist) objections to something that hasn’t happened yet.  Obviously, successful artificial designs have hurdles, but not roadblocks.  There is no middle ground.

BTW, I read and re-read your postings.  Thanks.  Sorry to be disagreeable, but the Singularity is coming much faster than many people think.

I’ve been away working for some time, sorry for my delayed response. I just wrote a draft post and deleted the whole thing, realizing we will not find a middle ground. I’ll attempt to take a different approach:

For me, the difference between simulation and reality is rooted in the fact that simulations/models are collections of variables thought to be essential because, under specific circumstances, the model matches reality. I already talked about this being somewhat arbitrary (the degree of fidelity), and inherently reductionist.

“Function”, “betterness”, simulations and models are culturally constructed and limited by our conceptions. Reality is not limited by our conceptions (unless you’re a solipsist), even if our perception of it is.

So that being said, what is the argument in support of the singularity?

The argument in support of the singularity is Moore’s Law, applied across the board to software, hardware, all sorts of technologies.  Take the rate at which computers and technology have “grown,” then plot an exponential curve - Kurzweil has done it with great success so far.
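As a rough sketch of what that extrapolation amounts to (the numbers below are invented for illustration, not real chip or benchmark data): exponential growth is a straight line in log space, so a least-squares fit of log(value) against time recovers the doubling time, which can then be projected forward.

```python
import math

# Hypothetical performance figures sampled every two years,
# roughly doubling each period (illustrative only).
years  = [2000, 2002, 2004, 2006, 2008, 2010]
values = [1.0, 2.1, 3.9, 8.2, 15.8, 33.0]

# Fit log(value) = a + b * year by ordinary least squares:
# exponential growth is linear in log space.
logs = [math.log(v) for v in values]
n = len(years)
mean_x = sum(years) / n
mean_y = sum(logs) / n
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, logs)) / \
    sum((x - mean_x) ** 2 for x in years)
a = mean_y - b * mean_x

# A doubling time of log(2) / b follows from value = exp(a + b * year).
doubling_time = math.log(2) / b
print(f"doubling time: {doubling_time:.1f} years")
```

The fit itself is the easy part; the whole disagreement here is over whether the straight line in log space keeps going.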

I understand that the virtual is not as “rich” as reality, but I am an 80% guy - if you can put in half the effort and get 80% of the job done, then why put in twice the time to finish it?  In other words, if the virtual captures all the important elements, then it is good enough for me.

I swear some guys will get into an argumentative mood and fail to comprehend that what they are disqualifying is virtually the same as what they endorse.  I mean, yeah, culture plays a role in evaluating and generating renderings of reality, but most functions are simply obvious, not nuanced.

I really appreciate this engagement; it has helped a lot in refining my own thought. You have clearly described your position, and being an “80%” guy makes it quite clear: functional conceptions and simulations are close enough to reality for you. I accept that this is your position.

I have one final question: 80% of what?

To know you have 80% assumes you have measured 100%, and you’ve ruled out that extra 20% as irrelevant. Of course you can’t measure 100% (every variable of any phenomenon). So you have to guess at what 100% is. It’s a culturally limited concept of the phenomenon, and therefore somewhat arbitrary in its fidelity. You may think it’s 80%, but that may only be 5%, because your concept of what 100% of the phenomenon is turns out to be incorrect.

I believe the world is infinite, but our representations and conceptions are not. They are limited in scope and scale. 5% of infinity is still infinity, and therefore 5% of reality is still beyond the means of our conceptual capacities.

My final point is that any conception of the singularity is problematic because its construction is bound up in our cultural conceptions and our choices as to what is important to measure and what is not. All of this is objective only given particular assumptions about what is important and what is not, which are determined a priori and are, to some degree, arbitrary. (You can’t measure what you can’t conceive of.)

If our measurements were actually objective (independent of our cultural biases), then no positive result (with proper methodology) could be disproved. Scientific knowledge is proved and disproved constantly as we develop and refine our culture, which determines what we think is important and measurable and what is not.

The only thing we can be sure of is that we will always be wrong about something. A prediction may work in the short term, like weather models or Ray’s tech predictions, but once the time-scale is large enough, we’re just stabbing in the dark, poking at things where we expect them to be.

Both of your points (i.e. 80% of what? and conception is bound up in our cultural) are wrapped up in the maxim: Don’t let perfect be the enemy of good.  Otherwise known as splitting hairs.

I could rephrase, and call myself a mostly guy.  I mostly complete the task with half the effort.

With the “concept” of singularity, we are talking about a positive-feedback-loop dynamic, where technology spurs further, more rapid technological advancement.

I once wrote a paper on the concept of water.  When the ancients referred to “water” were they referring to the same thing as we do when we describe it as H2O?  Can you ever totally accurately describe (measure or enunciate features) something?

Again, the exponential nature of technological advancement implies the coming Singularity.  My opinion is that the Singularity is a physical rather than conceptual outcome, and therefore cultural conceptions of it are irrelevant to it, just as they would be to a hurricane or tornado.

I need to jump on the new post in response to Chalmers (computation vs physical process). We’ll see where this one goes!

“Can you ever totally accurately describe (measure or enunciate features) something?” Depends. If you’re a solipsist then the description is the reality. If you’re a materialist, then you believe there is actually something out there. I’m attempting to find a space in between. I believe there is a real world out there, but I also believe that our perception of it is clouded by our conceptions. I don’t think you could accurately (perfectly) describe anything, it would always be beyond the description. This is why I can’t accept simulations as reality.

Yes, I understand that the singularity is bound up in a “positive feedback loop”. The problem is that all feedback loops that I know of (except perhaps the whole universe, and the jury is out on that one) only exist on a limited time-scale. Exponential growth happens a lot, but something always happens to cause a crash. This idea of limitless exponential growth is a concept; it is cultural. The idea itself is not something real and out in the world, and none of us has an experience of it (in fact, unless you live forever, you can’t have an experience of it). It can’t be anything but a concept, and therefore cultural.

So you may argue that it’s physical, but you also can’t produce an example (is it real, or only a conception, if there is no example?), because any proof would require measuring some property forever. We both sit there watching a number increase, while I wait for it to crash and you wait for me to concede. Because there is no proof of such a process (limitless growth), this belief is faith; it is unprovable. I think if we consider exponential growth within a limited time-scale, the notion of the singularity breaks down. This is presuming that the growth is a vertical line, where it expands to infinity in infinitesimal time.

That being said, there is such a thing as a real feedback loop, those happen all the time, but they result not in limitless growth but in periodicity of growth and decay. Maybe we can get enough growth before the crash to get interesting, and I agree that things will get interesting in the next 100 years, but at our rate of growth (number of people, use of resources, etc.) there will be a crash well before the singularity.
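The contrast between the two pictures can be sketched with the textbook logistic map, which adds a resource limit to otherwise exponential growth (the parameters below are illustrative, not a model of anything in this discussion): above a certain growth rate it settles into an oscillation of growth and decay rather than growing without bound.

```python
# Pure exponential growth vs. growth with a resource limit (logistic map).

def exponential(x0, rate, steps):
    """Unconstrained feedback: each step multiplies the last value."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] * rate)
    return xs

def logistic(x0, r, steps):
    """Feedback damped near carrying capacity (normalized to 1)."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(r * x * (1 - x))   # growth term shrinks as x nears 1
    return xs

exp_run = exponential(0.01, 1.5, 40)
log_run = logistic(0.01, 3.2, 40)    # r = 3.2: a period-2 oscillation

print(exp_run[-1])                              # keeps climbing
print(min(log_run[-10:]), max(log_run[-10:]))   # bounded, oscillating
```

Which of the two curves technological growth actually follows is exactly the open question; the sketch only shows that a feedback loop by itself does not imply limitless growth.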

I think the reason why we’re using so many resources and being so energy inefficient (things are changing, but changing slowly) is because of this cultural notion of “sustainable growth” which is really a euphemism for limitless growth. We think we can progress and grow forever, but that does not mean that we can. Thinking through concepts such as limitless growth changes how we think about our impact on the world and each other, and how we think about “innovation”.

What if we thought about stability and periodicity instead? What if we invested not in growth but in long-term stability? What if our understanding of exponential growth was always couched in the knowledge that there is a crash to follow any rapid and increased growth? Personally I think we’d be much better off.

Hey there,
B. referred me to this post to check out all of the comments.
I have only scanned these comments but I have read the article before (and have chatted with Lincoln over the years).
I do not have anything detailed to say at this time so I will read more carefully in my spare time.
The question of whether the brain/mind is ultimately computational reminds me of Penrose’s belief in quantum effects in microtubules…
Here is the generic wikipedia entry about this..
B, are your thoughts on the (in)tractability of human brain-states related to Penrose?

Hi Jeremy,

I look forward to your comments. The thoughts around mind and computation are not so technical, i.e. they are not provable either way. It seems to me that the word computation is changing meaning to some degree. I always thought of computation as a set of rules put forth for a particular purpose (i.e. an algorithm) in the context of a specific task. That particular purpose puts them in the realm of representations, which makes them subject to cultural biases, and non-neutral. In the case of brains, this indicates to me that we use metaphors of computation to explain aspects of mind, but that is not the same thing as proving that mind is computational (even if the theory does explain certain things), because more than one theory can explain the same evidence.

After some discussion with others, I found some people don’t agree with my definition; for them, a computation is just a causal process. If that definition is accepted, there is no way to refute that mind/brain is computational, because it is obviously physical and causal. According to that definition, everything causal is also computational, and therefore the word seems to have lost any useful meaning (rocks and rivers are computational). I don’t think it is wise to conflate the technical artifacts we produce with the physical ‘natural’ processes in the world, even though one is composed of the other. The former are embedded in our cultural biases (what we deem is important and what we deem is not) and their meaning arises from our interaction with them, while the latter exist independently of us, and their meaning is in themselves and their relations.

I may have repeated myself uneloquently here, as I did not read the whole thread again.
