Singularity Summit Coverage - Day 2
Kris Notaro   Oct 5, 2009   Ethical Technology  

Some words and photos from the 2nd day of the 2009 Singularity Summit in NYC.

Ray Kurzweil, Kurzweil Technologies

Ray Kurzweil pointed out that his predictions of technological growth draw a lot of criticism. He explained that our brains are hardwired to think linearly, but stressed that that is not how technology progresses: we need to look at the evidence of exponential growth all around us. As an example, he cited computer hardware speed, which is still growing exponentially.

He also acknowledged that exponential growth can’t go on forever because resources would run out; however, he argued that information paradigms always run into crisis and then give way to the next paradigm. He mentioned that researchers are working on three-dimensional self-organizing circuits, which could in theory reach trillions of times the power of the brain. To critics of predicting technological advancement through exponential growth, he responded that people underappreciate the growth of software: if all “AI” were stopped tomorrow, we couldn’t get money from ATMs, use transportation, and so on, which was not the case just a few years ago. He also addressed John Horgan, author of The End of Science, who believes we would need trillions of lines of computer code to simulate the brain; our knowledge of DNA and the cerebral cortex, Kurzweil countered, shows massive amounts of redundancy within even the most complex structures of the human body.

He also discussed the need for a “rapid response system” for bioterrorism and the like, so that within a few hours or days we could have the cure to such an attack or virus; he reminded us that programmers have already built such systems for computer viruses. Soon enough, computers will be able to simulate cells and their reactions to particular drugs or viruses, rapidly finding cures for many biological problems. He also claimed that unfriendly AI, because of its intelligence, is the biggest threat to humanity: we need friendly AI, and humans should integrate with the technology and intelligence.

An audience member asked how to handle some people’s fear of technology. Kurzweil pointed to “bio-Luddism,” but suggested that most people appreciate and use technology, including life-extending technology. For most of the world, life spans were once around 30-40 years, and that has changed. He also noted that four billion people now have cell phones, putting “knowledge gateways” in the hands of many who need them.


Brad Templeton, Electronic Frontier Foundation

Brad Templeton talked about the future of the automobile, from smart cars that can park themselves to his claim that they will be 100 times better at driving than humans.

He pointed out that cars are one of the largest wastes of energy today because people use them impractically. He suggested that the day will come when we rent smart electric cars, or rent out our own, each car running on electricity and built to serve a particular function. For example, people who may ski once a year buy an SUV, those who move heavy materials a few times a year buy a truck, and those who buy a car just to commute alone end up driving a vehicle that could fit five people.

He presented a chart showing that electric "scooters" will be the best energy savers of all future vehicles. Individuals who need to drive alone could use a proposed one-person, accident-resistant scooter, telling it where to go like a taxi while it drives better than any human ever could.

He said that the downsides of these proposed cars might stem from hackers, bugs in computer code, privacy, and freedom, and that the barriers so far are law, people’s irrational fear of letting a computer drive, liability, terrorists, a Small Matter of Programming (SMOP), and software recalls. The benefits are clear: virtually crash-free, safe for drunks, the elderly, and teenagers, and running on purely "green" technology.

Gregory Benford, Genescient

During the question and answer session, Benford pointed to a young man in the audience and suggested that he will probably live to be 150-200 years old.

Gary Marcus, New York University

Gary Marcus talked about the clumsiness of the human mind. Evolution produced the human spine, which he said is not a good design; once evolution goes in a certain direction, it can stay with it for a very long time. He compared human memory and cognition to modern computers: from a design perspective, computers are efficient, while biological memory is totally different, idiosyncratic, and retains irrelevant contextual information, he argued.

He described our memory as “like a shoe box, all sorts of stuff gets in there.” We have this system because it has been around for a long time, a product of evolution, but a bioengineer designing the brain would make it drastically different. He also talked about “confirmation bias,” in which the brain notices and reconfirms what we already know; it can be seen in consumer choices influenced by commercials, he added. He gave the example of the brain running in cycles: people who are depressed can be stuck thinking about the last few minutes (because that is how our brains tend to operate) and are thus stuck within a cycle of depression.

He did point out, however, that compared to a computer, human vision is very good. In the question and answer session he was asked about savants and how their incredible memory relates to normal memory. Marcus responded that he thinks their memory is not very different from anyone else’s, because in his research he has found that these people’s memory doesn’t really help much in the context of living a normal life. However, I think the question really was: how can we understand how savants have such good memory for specific things, and how can we understand its genetics and structure so we can apply that to what we know of as “normal” human memory?

Peter Thiel, Clarium Capital Management

Peter Thiel started his presentation by asking the audience which of seven threats to humanity’s existence is the greatest and most likely to happen: Iranian nukes, biotech terrorism, unfriendly robots, a one-world totalitarian state, global warming, nanotechnology gone mad, or the singularity not happening soon enough. I actually raised my hand, in all seriousness, for the singularity not happening fast enough. While some may think this is an absurd notion, take a moment and think about it. What is the singularity? What are its positive consequences? The answers, I think, are logically sound: an explosion of hyper-intelligence would indeed be able to tackle most, if not all, of the world’s problems.

Many people do not think about the consequences of the singularity not happening fast enough. Humanity faces some of the largest issues it has ever had, and understands more about the world and universe than it ever has. We know how the universe alone could destroy the world in seconds, and we also know how our own paradigm of war, greed, population explosion, and global warming could destroy humanity. A singularity, even if it means just the implementation and acceleration of science and technology, would make a world of difference, because while the world has its problems, we are capable of fixing them.

An audience member asked what happens if the rich get all these new technologies but the poor do not. Thiel replied that he is more concerned that these technological advances will not happen at all. I am not sure this is a proper answer. If technological advancement rapidly increases, we must ask these questions every day: who is making the products, how are people affected now, and how can we increase science education and technological innovation while remembering that we live in a global superorganism in which we need to respect the basic human rights of all 6.7 billion minds? He also suggested that capitalism in crisis doesn’t turn into freedom; instead, it would turn into totalitarianism.

Venture Capitalist Panel: Mark Gorenberg, David Rose, and Peter Thiel with moderator Robert Pisani

Aubrey De Grey, SENS Foundation

Aubrey discussed aging and how to eliminate it.

Eliezer Yudkowsky, Singularity Institute for Artificial Intelligence

Eliezer gave a great talk about intelligence and how our own minds, even in our tech-savvy culture, can make bad mistakes. Just by comparing data from psychological and sociological studies, we can clearly see that intelligence doesn’t always mean rationality; education, however, can yield more rational thinking and acting in the world. I think this talk shows that if artificial intelligence becomes conscious, it will be subject to some of the same stupidity as humans. Or perhaps it was meant to show that we can and should use technology and science to further understand the mind, consciousness, AI, and intelligence, but also the nature of ethical understanding, so that we can apply it both to ourselves and to future AI.


Kris Notaro served as Managing Director of the IEET from 2012 to 2015. He is currently an IEET Rights of the Person Program Director. He earned his BS in Philosophy from Charter Oak State College in Connecticut. He is currently the Bertrand Russell Society’s Vice-President for Website Technology. He has worked with the Bertrand Russell A/V Project at Central Connecticut State University, producing multimedia materials related to philosophy and ethics for classroom use. His major passions are in the technological advances in the areas of neuroscience, consciousness, brain, and mind.


Thanks so much for the coverage! I wish that I could have been there myself. Do you know if videos of the talks will be posted online?

They said they will have all the videos online eventually, but they did not say when. Too bad there was no interactive webcast; that should be standard for conferences (at least futurist conferences) these days.

Great, thank you!

It amuses me that so many people assume that “hyper-intelligence” can solve the problems we care about. What if such an intelligence arises, thinks about these problems and then tells us that it can’t solve them, either?

The majority of the speakers and people in the audience at the Singularity Summit were white men. It was not really the diversity I was hoping for. If the “Singularity” is used for profit by greedy individuals instead of truly trying to fight poverty, ignorance, and aging for everyone on earth, then it is simply a continuation of oppression, segregation, “gender” divide, “race” conflict, class, and a form of the “Matrix of Domination.”

There are ethical issues as well that go along with creating a “hyper-intelligence”: if it is conscious like we are, it must be granted the same rights as we have, and I don’t think it would want to be stuck inside a computer or an artificial brain in some lab. David Chalmers actually said in his talk that if any of the scientists do simulate the brain as well as they would like, they should perhaps run it for one second, then turn it off, and come back to it a few years later after thinking really hard about it.

However, then you have the problem that global warming is a reality, nuclear war can still happen, resources are running out, etc. While we know we might be able to fix these problems on our own, the singularity in theory would help drastically with the issues we face today. I don’t think there is much that can’t be done by people, so if we increase our cognition and intelligence, this will only speed up the rate of fixing problems.

On the IEET frontpage I currently see 18 pictures of people’s faces, associated with articles and so on.

17/18 are white men. Also, 5/5 of IEET Directors are white men.

It seems you have “continuation of oppression” threats to tackle a bit closer to home!

Actually, 8-9 females participate in the IEET; I believe your numbers are off. I do not want to speak for the IEET, but most organizations like this one probably need to address issues of race, class, gender, etc.

Once you have mature nanotechnology capable of reproducing itself, you have a situation in which everyone owns their own means of production, and will be able to produce anything they like.

Think of what that alone will do for us.  And then stop and think about your assumptions about future poverty, economic classes, oppression, etc.

You have not been thinking through the implications of the Singularity at all if you are still thinking in the mindset of Marxism.

Kris Notaro, are you sure you understood what my numbers pertained to?

I’ll also make an update:

Currently 18/18 of the faces on display on the IEET frontpage are white men.

So you look quite weird if you complain that the Singularity Summit had too many white men presenting their thoughts.

Myself, I didn’t find it problematic, but if people at the IEET do, you really should get your own act together on this front before complaining about too many white men elsewhere.

Why do you not find it problematic? I think once I understand your mindset I will be better able to respond to you.
