Singularity Summit Coverage - Day 2
Kris Notaro
2009-10-05 00:00:00

Ray Kurzweil, Kurzweil Technologies

Ray Kurzweil pointed out that his predictions of technological growth draw a lot of criticism. He explained that our brains are hardwired to think in a linear fashion, but stressed that that is not how technology actually progresses: we need to look at the evidence of exponential growth all around us. He gave examples of computer hardware speed that is still growing exponentially.
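
Kurzweil's point about linear intuition versus exponential trends is easy to make concrete. Below is a minimal Python sketch of the gap between the two kinds of projection; the starting value, the linear increment, and the two-year doubling period are illustrative assumptions of mine, not figures from the talk.

    # Linear intuition vs. exponential trend: a toy comparison.
    # All numbers here are illustrative assumptions, not figures from the talk.

    def linear_forecast(start, step, years):
        """Naive 'linear intuition': add a fixed increment each year."""
        return start + step * years

    def exponential_forecast(start, doubling_period, years):
        """Exponential trend: capability doubles every doubling_period years."""
        return start * 2 ** (years / doubling_period)

    start = 1.0  # arbitrary units of price-performance today
    for years in (2, 10, 20, 30):
        lin = linear_forecast(start, step=0.5, years=years)
        exp = exponential_forecast(start, doubling_period=2, years=years)
        print(f"{years:>2} yrs: linear ~{lin:.1f}x, exponential ~{exp:,.1f}x")

After thirty years the linear projection is off by more than three orders of magnitude, which is exactly the intuition gap Kurzweil was describing.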

He also acknowledged that exponential growth can't go on forever, because resources would run out; however, he said that information technologies repeatedly run one paradigm into crisis and then jump to the next. He mentioned that researchers are working on three-dimensional self-organizing circuits, which could in theory reach trillions of times the power of the human brain. In response to critics of extrapolating technological advancement from exponential growth curves, he suggested that people underappreciate the growth of software, and pointed out that if all “AI” were shut off tomorrow we couldn't get money from ATMs, use transportation, and so on; it was not like this just a few years ago. Also among his responses to critics: John Horgan, author of The End of Science, believes we would need trillions of lines of computer code to simulate the brain, but our knowledge of DNA and the cerebral cortex suggests that even the most complex structures of the human body contain massive amounts of redundancy.

He also mentioned the need for a “rapid response system” for bioterrorism and the like, so that within a few hours or days we could have the cure to such an attack or virus; he reminded us that programmers have already built such a system for computer viruses. Soon enough, computers will be able to simulate cells and their reactions to particular drugs or viruses, and will be able to rapidly find cures for many biological problems. He also made the claim that unfriendly AI is the biggest threat to humanity because of its intelligence: we need friendly AI, and humans should also integrate with the technology and intelligence themselves.

An audience member asked how to handle some people's fear of technology. Kurzweil pointed to “bioLuddism,” but suggested that most people appreciate technology and use it, and that most people use life-extending technology as well. For most of history human life spans averaged around 30-40 years, and that has changed. He also said that 4 billion people now have cell phones, which puts “knowledge gateways” in the hands of many who need them.




Brad Templeton, Electronic Frontier Foundation

Brad Templeton talked about the future of the automobile. He showed how smart cars will be able to park themselves, and claimed that they will be 100 times better at driving than humans.

He pointed out that cars are one of the largest wastes of energy today because people use them impractically. He suggested that the day will come when we rent smart electric cars, or rent out our own, each car running on electricity and built to serve a particular function. For example, people who ski once a year buy an SUV, those who move heavy materials a few times a year buy a truck, and those who buy a car just to commute to work end up driving a vehicle that could seat five.
He presented a chart showing that electric “scooters” will be the best energy savers of all future vehicles. Individuals who need to drive alone could use a proposed one-person, accident-resistant scooter, telling it where to go as they would a taxi, while it drives better than any human ever could.

He said that the downsides of these proposed cars might stem from hackers, bugs in computer code, privacy, and freedom. The barriers so far are law, people's irrational fear of letting a computer drive for them, liability, terrorists, a Small Matter of Programming (SMOP), and software recalls. The benefits are clear: virtually crash-free, safe for drunks, the elderly, and teenagers, and running on purely “green” technology.

Gregory Benford, Genescient

During the question and answer session, Benford pointed to a young man in the audience and suggested that he will probably live to be 150-200 years old.





Gary Marcus, New York University

Gary Marcus talked about the clumsiness of the human mind. Evolution produced the human spine, which he said is not a good design; once evolution goes in a certain direction, it can stay on that path for a very long time. He compared human memory and cognition to modern computers: from a design perspective computer memory is efficient, while biological memory is totally different, he argued, retaining idiosyncratic and irrelevant contextual information.
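
Marcus's contrast between location-addressed computer memory and cue-driven human recall can be sketched in a few lines of code. The cues and the matching rule below are hypothetical, invented only to illustrate the comparison, not a model from his talk.

    # Location-addressed memory: an exact address always returns the same value.
    computer_memory = {0x2A: "meeting notes"}

    # Cue-driven memory: retrieval depends on overlapping context, so
    # idiosyncratic and irrelevant cues shape what comes back.
    episodic_memory = [
        {"cues": {"beach", "summer", "dog"}, "event": "vacation in 2008"},
        {"cues": {"office", "coffee", "dog"}, "event": "a colleague's puppy visit"},
    ]

    def recall(cues):
        """Return the stored event whose context overlaps the cues the most."""
        return max(episodic_memory, key=lambda m: len(m["cues"] & cues))["event"]

    print(computer_memory[0x2A])      # deterministic: the address always works
    print(recall({"summer", "dog"}))  # -> "vacation in 2008"
    print(recall({"coffee", "dog"}))  # same 'dog' cue, different memory retrieved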

He described our memory as “like a shoe box, all sorts of stuff gets in there.” We have this system because it has been around for a long time, handed down by evolution, but if a bioengineer were to design the brain from scratch, they would make it drastically different. He also talked about “confirmation bias,” in which the brain notices, and thereby reconfirms, what we already believe; it can be seen in consumer choices influenced by commercials, he added. He gave the example of the brain going in cycles: people who are depressed can be stuck thinking about the last few minutes (because that is how our brains tend to operate) and are thus stuck within a cycle of depression.

He did point out, however, that compared to a computer, human vision is very good. In the question and answer session he was asked about savants and how their incredible memory relates to normal memory. Marcus responded that he thinks their memory is not very different from anyone else's, because in his research he has found that these people's memory doesn't really help much in the context of living a normal life. However, I think the question really was: how can we understand how savants have such good memory for specific things, and how can we understand its genetics and structure so we can apply that to what we know of as “normal” human memory?




Peter Thiel, Clarium Capital Management


Peter Thiel started his presentation by asking the audience which of seven threats to humanity's existence is the greatest and most likely to happen. He mentioned Iranian nukes, biotech terrorism, unfriendly robots, a one-world totalitarian state, global warming, nanotechnology gone mad, and the singularity not happening soon enough. I actually raised my hand, in all seriousness, for the singularity not happening fast enough. While some may think this an absurd notion, take a moment and think about it. What is the singularity? What are its positive consequences? The answers, I think, are logically sound: an explosion of hyper-intelligence would indeed be able to tackle most, if not all, of the world's problems.

Many people do not think about the consequences of the singularity not happening fast enough. Humanity is faced with some of the largest issues it has ever had, and it understands more about the world and universe than it ever has. While we know more about how the universe alone could destroy the world in seconds, we also know how our own paradigm of war, greed, population explosion, and global warming could destroy humanity. A singularity, even if it only means the implementation and acceleration of science and technology, would make a world of difference, because while the world has its problems, we are capable of fixing them.

An audience member asked what happens if the rich get all these new technologies but the poor do not. Thiel replied that he is more concerned that these technological advances will not happen at all. I am not sure this is a proper answer. If technological advances accelerate rapidly, I think we must ask these questions every day: who is making the products, how are people affected now, and how can we increase science education and technological innovation while remembering that we live in a global superorganism where we need to respect the basic human rights of all 6.7 billion minds? He also suggested that capitalism in crisis doesn't turn into freedom; instead, it turns into totalitarianism.



Venture Capitalist Panel: Mark Gorenberg, David Rose, and Peter Thiel with moderator Robert Pisani



Aubrey De Grey, SENS Foundation


Aubrey de Grey discussed aging and how to eliminate it: http://www.sens.org/






Eliezer Yudkowsky, Singularity Institute for Artificial Intelligence


Eliezer gave a great talk about intelligence and how our own minds, even in our tech-savvy culture, can make bad mistakes. Comparing data from psychological and sociological studies, we can clearly see that intelligence doesn't always mean rationality; education, however, can yield more rational thinking and acting in the world. I think this talk shows that if artificial intelligence becomes conscious, it will be subject to some of the same stupidity as humans. Or perhaps it was meant to show that we can and should use technology and science to further understand the mind, consciousness, AI, and intelligence, but also the nature of ethical understanding, so that we can apply it both to ourselves and to future AI.