Is SETI Downloading an AI an X-Risk?

In this talk Alexey Turchin argues that the Search for Extraterrestrial Intelligence (SETI) program is a source of extinction risk. The main idea is that passive SETI is a much more dangerous activity than messaging to the stars, because we could download an alien AI (that is, a schematic of a computer and a program for it) which would then use the Earth to send copies of itself onward. The argument rests on two premises:
First, that extraterrestrial civilizations exist at distances which allow radio communication but not interstellar travel (roughly one thousand to one million light years).
Second, that artificial intelligence is possible as a self-evolving program on a classical computer.



Alexey Turchin was born in Moscow, Russia in 1973. He studied Physics and Art History at Moscow State University and actively participated in the Russian Transhumanist Movement. He has translated many foreign Transhumanist works into Russian, including those of N. Bostrom and E. Yudkowsky. He is an expert in global risks and wrote the book “Structure of the Global Catastrophe: Risks of Human Extinction in the XXI Century,” as well as several articles on the topic. Since 2010, he has worked at Science Longer Life, where he is writing a book on futurology.




COMMENTS
I enjoyed this exploration, so much so that I wanted to expand on it by asking: what if the singularity is an alien stargate?

http://www.transalchemy.com/2010/06/what-if-singularity-is-alien-stargate.html
I think this raises important issues, but there are missing elements here due to Alexey's unawareness of research into technologies aimed at killing AIs. There are a number of issues here. Some are military, C4I-related issues. Others concern the way this conversation about alien civilizations is developing in the science community, which long fostered the belief that we are alone in the universe, then slowly began to admit that there might be life out there, but only basic, simple life forms with no intelligence. Now the worry is that we might be invaded at some point, or download a virus the way we uploaded one to the alien invaders in Independence Day.

I think the time has come to hold an open conference that will address this issue from all the applicable angles that are involved.
"Alexey's unawareness of research into technologies that are aimed at killing AIs."

The only way to battle an AI is with an AI, so they say. If there is any other novel approach to this problem, I'm open to hearing it.

Elaborate please, link to more information...
Yes, there is. Researchers who anticipated this problem more than 20 years ago developed technological approaches that would eventually prove potentially effective. Seeing as this is highly proprietary research, and I for one think it will be needed some day, I won't be the one revealing what I know of how it works, so that no countermeasures can be developed in the meantime.
While the presentation brought up some valid theoretical concerns, I found the concluding statements premature at best. Until proof is found of a technology that can cheat the laws of physics, I don't see a compelling reason to get all worked up. More than likely, this will prove to be a fantasy of sci-fi nerds. Besides, AI-connected beacons don't seem ideal in a universe that may favor wormholes.
@Marshall Barnes: You think that if you discussed it here, aliens would somehow pick up the signal? Or do you mean it being nullified by countermeasures made by more Earthbound AI threats (which are vastly more likely, I'd think, than an *alien* intelligence invading a computer)?

And does this "proprietary" nature of certain technologies tend to slow down or hinder humanity's advances? I think we need a world where such secrecy is no longer necessary, as it limits certain advances and who can work on them. Bars and shackles of our own making are the last things we need.
And if the research is so proprietary, *don't* expect anyone to be aware of it. And even if you can't disclose the technology itself, why not disclose the answers to the "issues"? Otherwise, how is anyone else supposed to solve this?
@ Mike:

No. I think that if I discussed it here, humans would pick up on it and develop countermeasures against it to protect AIs. It makes no difference who creates the AI; there must be a way to kill it if needed, to protect humanity.

If you think that there is no need for secrecy, then why do you use a handle instead of your real name? For that matter, what's your mother's maiden name, your SSN, full birth name, current and past three addresses? Where did you go to school, and who is your current employer, and at what pay level? Secrecy requirements are in the eyes of the beholder. Many people feel that AIs will create additional "bars and shackles" that humanity will not be able to escape from.

As for your second message, what issues are you referring to? I said that if aliens tried to download an AI, this proprietary technology would probably kill it. What's left to solve?
@Marshall Barnes: For one thing, none of the items on your list (I could use a real name if I wanted to; I have on other sites, so it's "out of the bag" anyway) is relevant to the interests of all humanity. Secrecy about things that would offer little to no benefit to other people or humanity in general if widely known is acceptable to me.

Also, I wasn't saying to disclose it now; I was saying it would be better for humanity if we had a world where such secrecy was not necessary. Secrecy of this kind may be a "necessary evil," but it is still an "evil" nonetheless when it comes to humanity's progress, so working toward a world in which it is no longer necessary seems like a good goal. And I just wanted to point out that you can't expect the guy in this video to know about any of this when it is all secret. You mentioned his "unawareness of research into technologies that are aimed at killing AIs" -- but if all that research is secret, of *course* he's not going to know about it.

As for my second message, you mentioned a "number of issues here. Some are military, C4I related issues, and others are the way that this conversation about alien civilizations is developing in the science community which has long fostered the belief that we are alone in the universe, then slowly has begun to admit that there might be life but it's just basic, simple life forms with no intelligence." I was wanting to know more about those issues.
In addition, I'm a little "tweaked" by extreme levels of secrecy, because of the potential for benefits being held back. It also makes me wonder what kinds of *other* things are kept secret. Suppose, for example, there were some new energy generation system kept secret out of fear of abuse. The potential consequences of non-disclosure -- complete socioeconomic disintegration from mass oil shortages, and even a possible human extinction threat from severe climate change driven by "business as usual" fossil fuel guzzling -- would seem to vastly outweigh such concerns. I don't know if there is such a thing, but I'd really, *really* object to it being kept as secret as this apparently is.
I know that might seem a little paranoid, but I'm just trying to explain where I'm coming from.
@mike3:

The purpose of secrecy is to protect something. There will never be a reason for there to be no secrets. I'm also not expecting Alexey Turchin, or probably anyone else on this site, to know about the tech I'm referring to, because it is proprietary.

As for the issues from my original comment, the details are too numerous to go into here, which is why I said there should be a conference about them. As for suppressed technologies, that's a given. Right now, the tech exists to turn every state in the union with large coal reserves into an oil-producing state, which would cut the national deficit and eliminate the need for an income tax in those states. It is being fought by the big energy lobbyists. We could have $1-a-gallon gas again, more jobs, and a complete recovery, but the powers that be don't care about that. And that's a fact.
@Marshall Barnes:

"The purpose of secrecy is to protect something. There will never be a reason for there to be no secrets. I'm also not expecting Alexey Turchin or probably anyone else on this site to know about the tech I'm referring to because it is proprietary."

The type of secrecy that suppresses technologies and knowledge is the kind I really don't like -- unless that knowledge is something like how to build a bomb of incredible destructive power.

"As for the issues from my original comment, the details are too numerous to go into here which is why I was saying that there should be a conference about them. As for suppressed technologies, that's a given. Right now the tech exists to turn every state in the union that has large coal reserves into an oil producing state which would cut the deficit nationally and eliminate the need for an income tax in those states. It is being fought against by the big energy lobbyists. We could have $1 a gallon gas again, more jobs and a complete recovery but the powers that be don't care about that. And that's a fact."

And I presume you'd be _against_ the secrecy of those types of technologies, then?
@mike3

The technology isn't secret, but its use is being suppressed by the big energy interests.
@Marshall Barnes:

So it's a different issue from the one you mentioned, with the "justified" proprietariness/secrecy of the "AI killing" stuff?
Yes.
I would add that the full line of my argument can be found in my article:

Risks of SETI
http://www.scribd.com/doc/7428586/Is-SETI-Dangerous

Alexey Turchin
