Fully realized artificial intelligence has long been the holy grail for daydreamers and forward-thinking inventors alike. We aren’t quite there yet, but modern virtual assistants are making the case that we aren’t so very far off. Whether it’s a feature integrated into your smartphone or a standalone assistant like the Amazon Echo, digital assistants have made great strides in their ability to recognize and parse your spoken commands and respond to them appropriately.
Lately I’ve been experiencing quite a bit of déjà vu, and not of a good kind. The most recent bout was inspired by Ben Smith’s piece for BuzzFeed, in which he struggled to understand how an Ayn Rand-loving libertarian like the technologist Peter Thiel could end up supporting a statist demagogue like Donald Trump. Smith’s reasoning was that Trump represented perhaps the biggest disruption of them all and could use the power of the state to pursue the singularity and flying cars Thiel believed were once at our fingertips.
One of the more confusing characteristics of our age is how it trucks in contradiction. As a prime example: the internet is the most democratizing medium in the history of humankind, giving each of us the capability to reach potentially billions with the mere stroke of a key. At the same time, this communication landscape is one of unprecedented concentration, dominated by a handful of companies such as Facebook, Google, Twitter, and, in China, Baidu.
On the 25th of September, Marcelo Rinesi published his article ‘The Price for the Internet of Things will be a vague dread of a malicious world’. With this response, I want to take on the implicit challenge he poses: how can we build an internet of things that will not fill us with dread? This article will present my ideas on a ‘transparent smart chargepoint’. Let me explain what I mean by this. ‘Chargepoint’ refers to the device designed for charging electric cars. ‘Smart’ refers to the fact that the chargepoint optimizes the charging process across various variables, such as the price of electricity and congestion on the electricity grid. ‘Transparent’ means that it is designed to be as open as possible about the algorithms that run it.
[This is the text of a talk I’m delivering at the ICM Neuroethics Network in Paris this week]
Santiago Guerra Pineda was a 19-year-old motorcycle enthusiast. In June 2014, he took his latest bike out for a ride. It was a Honda CBR 600, a sports motorcycle with some impressive capabilities. Little wonder, then, that he opened it up once he hit the road. But maybe he opened it up a little too much? He was clocked at over 150mph on the freeway near Miami Beach in Florida. He was going so fast that the local police decided it was too dangerous to chase him. They only caught up with him when he ran out of gas.
Seneca was a wealthy Roman stoic and advisor to the emperor Nero. In the third of his Letters from a Stoic, entitled ‘On True and False Friendship’, he makes the following observation:
As to yourself, although you should live in such a way that you trust your own self with nothing which you could not entrust even to your own enemy, yet, since certain matters occur which convention keeps secret, you should share with a friend at least all your worries and reflections.
Advancements in virtual reality are not only technology-driven: actions within virtual environments implicate numerous issues in policy and law. For example, are virtual images copyrightable? Is the speech produced by a virtual avatar afforded rights under the U.S. and other constitutions? How does criminal law relate to actions performed within virtual environments, or contract law apply to the lease and sale of virtual objects? These and other questions form the theme for this special issue. Legal scholars and practitioners from the U.S. and other jurisdictions are encouraged to submit.
‘Intimate Surveillance’ is the title of an article by Karen Levy, a legal and sociological scholar currently based at NYU. It shines light on an interesting and under-explored aspect of surveillance in the digital era. The forms of surveillance that capture most attention are those undertaken by governments in the interests of national security or by corporations in the interests of profit.
I’m going to start with a few brief opening remarks about what I think is the habit of thought that has made the United States #1 in the world in prisons and wars. And then I’ll be glad to try to answer as many questions as you think of. These remarks will be published online at American Herald Tribune.
Phil Torres’ new book, The End: What Science and Religion Tell Us about the Apocalypse, is one of the most important books recently published. It offers a fascinating study of the many real threats to our existence, provides multiple insights into how we might avoid extinction, and is carefully and conscientiously crafted.
Some things in life cannot be offset by a mere net gain in intelligence.
The last few years have seen the widespread recognition that sophisticated AI is under development. Bill Gates, Stephen Hawking, and others warn of the rise of “superintelligent” machines: AIs that outthink the smartest humans in every domain, including common sense reasoning and social skills. Superintelligence could destroy us, they caution. In contrast, Ray Kurzweil, a Google director of engineering, depicts a technological utopia bringing about the end of disease, poverty and resource scarcity.
At least in public relations terms, transhumanism is a house divided against itself. On the one hand, there are the efforts of Zoltan Istvan – in the guise of an ongoing US presidential bid – to promote an upbeat image of the movement by focusing on human life extension and other tech-based forms of empowerment that might appeal to ordinary voters. On the other hand, there is transhumanism’s image in the ‘serious’ mainstream media, which is currently dominated by Nick Bostrom’s warnings of a superintelligence-based apocalypse. The smart machines will eat not only our jobs but us as well, if we don’t introduce enough security measures.
The field of Existential Risk Studies has, to date, focused largely on risk scenarios involving natural phenomena, anthropogenic phenomena, and a specific type of anthropogenic phenomenon that one could term “technogenic.” The first category includes asteroid/comet impacts, supervolcanoes, and pandemics. The second encompasses climate change and biodiversity loss. And the third deals with risks that arise from the misuse and abuse of advanced technologies, such as nuclear weapons, biotechnology, synthetic biology, nanotechnology, and artificial intelligence.
I want to elaborate briefly on an issue that I mentioned in a previous article for the IEET, in which I argue (among other things) that we may be systematically underestimating the overall probability of annihilation. The line of reasoning goes as follows:
Marshall Brain (1961 – ) is an author, public speaker, and entrepreneur. He earned an MS in computer science from North Carolina State University where he taught for many years, and is the founder of the website HowStuffWorks, which was sold in 2007 to Discovery Communications for $250,000,000.
In February 1996, John Perry Barlow (of Grateful Dead, Electronic Frontier Foundation, etc.) declared cyberspace to be independent of states and their industries, economies, and politics. He was wrong. “Cyberspace” (and we’ll use the same term here, for what it’s worth) is an expression of fleshly and natural and mechanical processes; it is derived from human politics and industry, and it cannot be independent of them.
Hans Moravec (1948 – ) is a faculty member at the Robotics Institute of Carnegie Mellon University and the chief scientist at Seegrid Corporation. He received his PhD in computer science from Stanford in 1980, and is known for his work on robotics and artificial intelligence, his writings on the impact of technology, and his many publications and predictions focusing on transhumanism.
The humanist conception of the human being shaped the development of Western societies. But transhumanism is now on the rise. Representatives of this new ideological current advise Western governments, companies, and decision-makers. They aim for a cyborgization of the human being. But what are the political consequences?
I’ve been playing catch-up since my tenure application and my class preps for the Spring semester, but I’ve finally been able to re-engage with my usual sites, and all of the fantastic content in my Google+ communities. One thing that’s been coming up in various iterations is the concept of the “internet of things.” In a nutshell, the term loosely (and, I think, perhaps a little misleadingly) refers to a technological interconnectivity of everyday objects – clothes, appliances, industrial equipment, jewelry, cars, etc. – now made possible by advancements in creating smaller microprocessors.
Ever since Congress passed Al Gore’s bill, around 1990, setting the Internet free to pervade the world and empower billions, repressive governments have complained, seeing their despotic methods undermined. And yes, democratic governments have often muttered: “Why’d we go and do that?” as their citizens became increasingly rambunctious, knowing and independent-minded!
Tal Zarsky’s work has featured on this blog before. He is an expert in the legal aspects of big data and algorithmic decision-making. He recently published a paper entitled “The Trouble with Algorithmic Decision-Making” in which he tries to identify, categorise and respond to some of the leading objections to the use of algorithmic decision-making processes. This is a topic that interests me too, so I was eager to see what he had to say.
Conspiracy theories abound. They erupt out of human nature, it seems, and your ethnicity or caste or political leanings only affect which direction you credit with devilish cleverness, secret power and satanic values. For sure, as a science fiction author I can concoct plausible schemes and plots with the best of them! Indeed, let me add that some real life cabals are so blatant and proudly obvious that you just have to admit – sometimes “they” are completely real and up to awful mischief.
Humanity faces a range of threats to its viability as a civilization and its very survival. These catastrophic threats include natural disasters such as supervolcano eruptions and large asteroid collisions, as well as disasters caused by human activity such as nuclear war and global warming. The threats are diverse, but their potential result is the same: the collapse of global human civilization or even human extinction.
Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may inflict serious damage to human well-being on a global scale (‘catastrophic risk’).
After summarizing the arguments for why AGI may pose such a risk, we review the field's proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors, and proposals for creating AGIs that are safe due to their internal design.
(This report was co-written with IEET contributing writer Kaj Sotala)
By now I’ve clocked up a fairly comprehensive amount of reading on Artificial General Intelligence, in particular concerning its ethical implications. While still mostly in the dark about the difficulties and scientific quandaries that go into creating such a machine, I am at least at a level of understanding where I can begin to tease out for myself some of the wider implications AGI would present for humankind.
According to IEET readers, what were the most stimulating stories of 2015? This month we’re answering that question by posting a countdown of the top 30 articles published this year on our blog (out of more than 1,000), based on how many total hits each one received.
The following piece was first published here on April 27, 2015, and is the #23 most viewed of the year.
The IEET is a 501(c)3 non-profit, tax-exempt organization registered in the State of Connecticut in the United States.
East Coast Contact: Executive Director, Dr. James J. Hughes,
56 Daleville School Rd., Willington CT 06279 USA
Email: director @ ieet.org
West Coast Contact: Managing Director, Hank Pellissier
425 Moraga Avenue, Piedmont, CA 94611
Email: hank @ ieet.org