Richard Loosemore Topics

Defining “Benevolence” in the context of Safe AI by Richard Loosemore

The question that motivates this essay is “Can we build a benevolent AI, and how do we get around the problem that humans, bless their cotton socks, can’t define ‘benevolence’?” A lot of people want to emphasize just how many different definitions of “benevolence” there are in the world — the point, of course, being that humans are very far from agreeing on a universal definition of benevolence, so how can we expect to program something we cannot define into an AI?

The Maverick Nanny with a Dopamine Drip: Debunking Fallacies in the Theory of AI Motivation by Richard Loosemore

My goal in this article is to demolish the AI Doomsday scenarios that are being heavily publicized by the Machine Intelligence Research Institute, the Future of Humanity Institute, and others, and which have now found their way into the farthest corners of the popular press. These doomsday scenarios are logically incoherent at such a fundamental level that they can be dismissed as extremely implausible: they require the AI to be so unstable that it could never reach the level of intelligence at which it would become dangerous. On a mo...

The Fallacy of Dumb Superintelligence by Richard Loosemore

This is what a New Yorker article has to say on the subject of “Moral Machines”: “An all-powerful computer that was programmed to maximize human pleasure, for example, might consign us all to an intravenous dopamine drip.”

Why an Intelligence Explosion is Probable by Richard Loosemore

(Co-authored with IEET Fellow Ben Goertzel) There is currently no good reason to believe that once a human-level AGI capable of understanding its own design is achieved, an intelligence explosion will fail to ensue. A thousand years of new science and technology could arrive in one year. An intelligence explosion of such magnitude would bring us into a domain that our current science, technology, and conceptual framework are not equipped to deal with, so prediction beyond this stage is best done once the intelligence explosion ha...

The Lifeboat Foundation: A stealth attack on scientists? by Richard Loosemore

It turns out that the Lifeboat Foundation (and this is a direct quote from its founder, Eric Klien) is “a Trojan Horse” that is (here I interpret the rest of what Klien says) designed to hoodwink the people recruited to be its members.

Don’t let the bastards get you from behind! by Richard Loosemore

One day when I was a young teenager, living out in the countryside in the south of England, a dear old guy I knew drove past me when I was on a long solitary walk. He recognized me and pulled over to ask if I wanted a ride down to the village.