Day 1 Afternoon of H+ Summit: Stephen Wolfram
Ben Scarlato
2010-06-13 00:00:00



Stephen Wolfram

Now the keynote, Stephen Wolfram. Alex Lightman is introducing him, talking about all his accomplishments and how useful Wolfram Alpha is.

Stephen Wolfram says he's going to do something unusual for him, and talk about the future in public.

The core framework of what he'll be talking about is computation. In the future, computation will be even more important.

In his life he's had three major projects: Mathematica, Wolfram Alpha, and A New Kind of Science.

There are some technical difficulties. Alex jokes the programs are becoming self-aware and escaping.

Now Wolfram is talking about Rule 30. He's showing a grid of cells where each row is computed from the row above it by a simple rule that looks only at a cell and its immediate neighbors. But what happens if you change that rule slightly? And what if you look at all possible programs of this type?

Most of the programs produce straightforward outputs, but some don’t. He’s showing one output that has some regularity, but it’s complicated enough that it looks pretty random.
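To make that concrete, here's a minimal sketch in Python (my illustration, not code from the talk) of an elementary cellular automaton. The rule number, grid width, and step count are arbitrary choices; passing any rule number from 0 to 255 lets you browse "all possible programs of this type" in the way Wolfram describes.

```python
def step(cells, rule):
    """Compute the next row: each cell depends only on itself and its two neighbors."""
    n = len(cells)
    new = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (center << 1) | right  # 3 neighbor bits form an index 0..7
        new.append((rule >> index) & 1)              # new color is that bit of the rule number
    return new

def run(rule=30, width=63, steps=30):
    """Start from a single black cell and print each generation."""
    cells = [0] * width
    cells[width // 2] = 1
    for _ in range(steps):
        print("".join("#" if c else " " for c in cells))
        cells = step(cells, rule)

run(rule=30)  # Rule 30: a trivially simple rule whose output looks essentially random
```

Run it and the right-hand side of the triangle comes out looking random even though the rule fits in a single byte, which is the point of the demo.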

Wolfram says this kind of thing brings to mind nature. Nature can just go out into the computational universe and pick anything. This can produce incredible complexity that looks like it required some sort of god, but it's really not that complicated.

In NKS, we have the principle of computational equivalence. This principle says that once you pass a threshold in the complexity of a computation, everything's equivalent. You don't need very complicated machines.

The principle tells us even simple systems with simple inputs can do very complicated computations. We know the simplest universal Turing machine, and we don't need something complicated to make a universal computer.

The precise sciences have always prided themselves on making precise predictions. That can be done for the solar system, but can we always do it? The principle of computational equivalence says we can never out-compute certain systems; the only way to know what they'll do is to run them step by step.

Therefore, it can be fundamentally difficult to predict the future. But we can predict some things.

Is step by step engineering the only way to create technology? No, we can identify a purpose, and then search the computational universe for an algorithm that serves that purpose.
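Here's a toy sketch of that search-instead-of-engineer idea; the target behavior and the primitive operations below are entirely made up for illustration, and the real searches Wolfram mentions (Mathematica's algorithms, WolframTones) range over far larger program spaces. The pattern, though, is the same: state a purpose as desired behavior, then enumerate candidate programs until one serves it.

```python
from itertools import product

# The "purpose", stated as desired input -> output behavior (made-up target: x**2 + 1).
target = {0: 1, 1: 2, 2: 5, 3: 10}

# A tiny space of candidate programs: short pipelines built from a few primitive steps.
primitives = {
    "inc":    lambda x: x + 1,
    "double": lambda x: 2 * x,
    "square": lambda x: x * x,
}

def search(max_len=3):
    """Enumerate every pipeline of primitives up to max_len steps; return the first that fits."""
    for length in range(1, max_len + 1):
        for names in product(primitives, repeat=length):
            def program(x, names=names):
                for name in names:
                    x = primitives[name](x)
                return x
            if all(program(x) == y for x, y in target.items()):
                return names
    return None

print(search())  # -> ('square', 'inc'): a program found, not engineered, to serve the purpose
```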

Wolfram Alpha uses a lot of algorithms that humans don't use and never input, but which were found by this kind of search to be more efficient.

He’s talking about WolframTones, which plucks music out of the computational universe based on certain rules that create rich tones.

The question is, what use will we make of all the technology in the computational universe?

What happens if you enumerate all the Turing machines that compute something? You'll probably find some that compute it in a seemingly random way, and those might be the fastest, but you wouldn't get to them by step-by-step engineering.

How do you infer purpose? How do you tell whether a splash of paint is a mistake or modern art? What if we extend this to mathematics?

Mathematics is based on a certain set of axioms, but there's an infinite set that could be used. What he's concluded is that the mathematics we have today is really just an accident of history.

So what criteria could you use abstractly to infer if a thing has a purpose? Sometimes it’s easier to describe things by mechanism, and sometimes it’s easier to describe them by purpose.

Say we want to know whether a signal was purposefully produced by aliens. The principle tells us it's possible to get very complex outcomes even from very simple systems. He gives an example of a fisher who heard signals that made him think there was life on Mars, but which actually had to do with the ionosphere.

What if we wanted to make a signal of our intelligence as a planet? We could signal our intelligence by doing things like sending solutions to complex problems, but Wolfram's never heard an idea for a signal that doesn't break down. For instance, if you make drawings that are too precise, they look like natural phenomena again.

In the future what will limit us is not the evolution of technology, but the evolution of human purpose. If we were to have a world that’s entirely computational and digital, we’d still have no abstract principle to distinguish ourselves from a lump of matter generated by physics.

But in some sense that's not true. We might not be abstractly special, but we do have our own particular history.

Computational irreducibility implies we need to live through our history to find the result.

People have been moving the goalposts for AI; it used to be playing chess, but AI has reached all these goalposts and people keep moving them.

Wolfram Alpha solves problems in a very un-human-like way. If it shows you the steps for how it solved an integral, that's completely fake: it solved the problem a completely different way and reverse-engineered the steps to be human-readable.

Human language only expresses a small subset of possible computations. There are a lot of things Wolfram Alpha could do if we could express ourselves better. It's way too smart to pass a Turing test.

Computational irreducibility allows you to go from very simple rules to unpredictable behavior. He says this is like free will. There's a "free will" in a lot of systems; it's certainly not a uniquely human feature.

Going back to WolframTones, a lot of composers go there for inspiration. But how do you build a system that will pass the Turing test?

Now back to our future. The question is about the evolution of our purposes. Walking on treadmills and weird social networking things wouldn't have made sense to people in the past.

Is there progress in art? It doesn't seem inexorable. But progress in technology does seem inexorable. Natasha Vita-More, sitting next to me, says that art is technology.

Human purposes have often been shaped by things like religion. There are all sorts of individual purposes, like wanting a quiet life, a strong social network, etc.

But there are limits to the diversity of purpose. If you get too far afield, you won't recognize the purpose.

Ages ago Wolfram was challenged to quickly think of an invention. He thought of apps for pets, like what would the purpose be for a cat using an iPad app?

Our bodies are a bit like operating systems: all these different pieces, drivers for this and that, etc. Gradually all kinds of gunk builds up in an operating system; in humans, at least, each new generation starts fresh with new DNA.

Computational irreducibility is our big enemy when it comes to the body. But he's hopeful; perhaps we can have drugs that find the right algorithms. He thinks the first dramatic thing that will happen is successful cryonics.

He thinks there'll just be a breakthrough in cryonics that'll make it work, the way Dolly was a breakthrough for cloning, and it's a shame the field isn't taken seriously.

But going back to looking for extraterrestrials: why haven't we met any? He mentions some typical explanations for this. To find ETs we don't need lots of hardware; we could just find the right algorithms.

When our constraints are removed, our future selves will have a difficult time deciding what to do. But they can look back at what we did when we had constraints.

For the first time, most things are recorded. So perhaps when people look back, they'll look back to our era.

Wolfram Alpha is trying to capture the computational knowledge of our civilization. His next project is finding the fundamental theory of physics.

This was a fantastic talk, if it sounds too far out there it could well be due to my transcription. Check out the video once it’s posted. Now Alex Lightman is going to talk to Wolfram.



Wolfram Q&A

Alex says there might be a time when not being able to program will be like not being able to read, but Wolfram disagrees. With Wolfram Alpha, you can essentially use natural language to specify a program. He’s spent a large part of his life trying to make programming as easy as possible.

There was a bit more, but now we break.