What are our goals as a species? This, to me, is the most important question we can ask ourselves as human beings. Another way to say it: What is the meaning of our existence as a species? We never seem to directly ask ourselves these two questions in a collective way, which seems very odd to me. Because if we were discussing these questions openly, collectively and consistently, I believe we would live in a very different society.
Ethicists have been asking themselves a question over the last couple of years that seems to come right out of science fiction: is it possible to make moral machines, or in their lingo, autonomous moral agents (AMAs)? Asking the question might have seemed silly not so long ago, or so speculative as to put one's tenure at risk, but as the revolution in robotics has rolled forward it has become an issue we must grapple with, and now.
An important question regarding human enhancement in the military is how the deployment of modified soldiers will redefine the ethical limitations on how combatants may be treated. The provisions of the Geneva Conventions and other bodies of international law prohibiting torture generally rest on certain assumptions about the human condition, such as pain thresholds, sleep requirements, and other forms of fragility.
A movement is afoot to cover some of the largest and most populated cities in the world with a sophisticated array of interconnected sensors, cameras, and recording devices, able to track and respond to every crime or traffic jam, every crisis or pandemic, as if it were an artificial immune system spread out over hundreds of densely packed kilometers filled with millions of human beings.
Ted will be moving to Washington, D.C. to lead the Global Economics and Strategy Department and be responsible for IFC’s strategy and development impact functions, leading a global team of approximately 100 people.
I’ve been writing about the ethics of human enhancement for some time. In the process, I’ve looked at many of the fascinating ethical and philosophical issues that are raised by the use of enhancing drugs. But throughout all this writing, there is one topic that I have studiously avoided. This is surprising given that, in many ways, it is the most fundamental topic of all: do the alleged cognitive-enhancing drugs actually work?
Jibo, the “world’s first family robot,” hit the media hype machine like a bomb. From a Katie Couric profile to coverage in just about every outlet, folks couldn’t get enough of this little robot with a big personality, poised to bring us a step closer to the world depicted in “The Jetsons,” where average families have maids like Rosie. In the blink of an eye, pre-orders climbed past $1.8 million and blew away the initial fundraising goal of $100k.
Can consciousness be created in a machine? Is the mind/brain simply a computational system? IEET Fellow and University of Connecticut philosophy professor Susan Schneider was interviewed by The Humanist on these pressing topics. Humanists are starting to question what kind of technology will exist in a transhumanist world…
Without clear rules for cyberwarfare, technology workers could find themselves fair game in enemy attacks and counterattacks. If they participate in military cyberoperations—intentionally or not—employees at Facebook, Google, Apple, Microsoft, Yahoo!, Sprint, AT&T, Vodafone, and many other companies may find themselves considered “civilians directly participating in hostilities” and therefore legitimate targets of war, according to the legal definitions of the Geneva Conventions and their Additional Protocols.
Continuing our series on co-veillance, sousveillance, and general citizen empowerment on our streets: last time we discussed our right and ability to use new instrumentalities, the cameras in our pockets, to expand our ability to view, record, and hold others accountable.
This isn’t a complete review of Nick Bostrom’s Superintelligence (2014), but a summary of the thoughts that came to my mind while and after reading the book. Superintelligence: Paths, Dangers, Strategies (2014) opens with a cautionary fable: a group of sparrows consider finding an owl to assist and protect them. Only the more cautious sparrows see the danger – that the owl may eat them all if they don’t find out how to tame an owl first – and Bostrom dedicates the book to them (and of course to the cautious humans afraid that superintelligent life forms may destroy humanity if we don’t find out how to control them first).
I'm back from the first Climate Engineering Conference, held in Berlin. Quite a good trip, but in many ways the highlight was the talk I gave at the Berlin Natural History Museum. The gathering took place in the dinosaur room, which holds (among other treasures) the "Berlin Specimen" Archaeopteryx fossil, among the most famous and most important fossils ever discovered.
If you push long and hard enough for something that is logical and needed, a time may come when it finally happens! At which point – pretty often – you may have no idea whether your efforts made a difference. Perhaps other, influential people saw the same facts and drew similar, logical conclusions!
Materials and how we use them are inextricably linked to the development of human society. Yet, amazing as historic achievements using stone, wood, metals, and other substances may seem, they are unbelievably crude compared to the full potential of what could be achieved with designer materials.
This is the sixth part in my series on Nick Bostrom’s recent book Superintelligence: Paths, Dangers, Strategies. The series is covering those parts of the book that most interest me. This includes the sections setting out the basic argument for thinking that the creation of superintelligent AI could threaten human existence, and the proposed methods for dealing with that threat.
“Every office full of ambitious people has them. And we have all worked with at least one—the co-worker with an inexplicable ability to rise in the ranks,” wrote the Wall Street Journal recently in an article entitled What Corporate Climbers Can Teach Us. “‘How do they do it?’ we may ask ourselves or whisper to friends at work,” it continued. “They don't have more experience. They don't seem that brilliant.”
Within the next few years, autonomous vehicles—a.k.a. robot cars—could be weaponized, the US Federal Bureau of Investigation (FBI) fears. In a recently disclosed report, FBI experts wrote that they believe robot cars would be “game changing” for law enforcement. The self-driving machines could serve as professional getaway drivers, to name one possibility. Given the pace of developments on autonomous cars, this doesn’t seem implausible.
The transfer of used military equipment from the armed forces to police departments around the country has been accompanied, at least to a certain extent, by a shift in public thinking. The news media have played a critical part in that shift, both in their coverage and in what they choose not to cover.
On August 9, at around 12 in the afternoon, Michael Brown and his friend Dorian Johnson were attacked by Ferguson, Missouri police officer Darren Wilson. Brown had his hands in the air and was telling Officer Wilson that he was unarmed when the officer shot him several times, killing him. This was the eyewitness account told by Brown’s friend Dorian.
Positive moods are a virtue, both in enabling enjoyment of life and in supporting prosocial behavior. But positive mood is not the only kind of happiness, and in excess it can do more harm than good. Along with positive moods we also want to cultivate flourishing: a sense that, overall, our lives are meaningful and going well. What are the public policies and life behaviors that support positive moods and flourishing lives? As we enter a “hedonistic imperative” future in which we are able to tweak our moods with “happy-people-pills-for-all,” how will we find the right balance of positive mood to achieve flourishing lives?
The WHO medical ethics panel convened Monday to discuss the ethics of using experimental treatments for Ebola in West African nations affected by the disease. I am relieved to note that this morning they released their unanimous recommendation: “it is ethical to offer unproven interventions with as yet unknown efficacy and adverse effects, as potential treatment or prevention.”
Human beings seem to have an innate need to predict the future. We’ve read the entrails of animals, thrown bones, and tried to use the regularity (or lack of it) in the night sky as a projection of the future and an omen of things to come, along with a thousand other kinds of divination few of us have ever heard of. This need to predict the future makes perfect sense for a creature whose knowledge bias is towards the present and the past. Survival means seeing far enough ahead to avoid dangers, so an animal that could successfully predict what was around the next corner could avoid being eaten or suffering famine.
Our entire civilization is increasingly reliant on a fragile network of cell phone towers, which are among the first things to fail in any crisis, whether a hurricane or other natural disaster, or deliberate sabotage (e.g. an EMP or hackers).
Debate about the merits of enhancement tends to be pretty binary. There are some — generally called bioconservatives — who are opposed to it; and others — transhumanists, libertarians and the like — who embrace it wholeheartedly. Is there any hope for an intermediate approach? One that doesn’t fall into the extremes of reactionary rejection or uncritical endorsement?
Machine ethics is a term used in different ways. The basic use is in the sense of people attempting to instill some sort of human-centric ethics or morality in the machines we build, such as robots, self-driving vehicles, and artificial intelligence (Wallach 2010), so that machines do not harm humans either maliciously or unintentionally.
I just finished a thrilling little book about the first machine war. The author writes of a war set off by a terrorist attack, in which the very speed of machines being put into action, and the near light speed of telecommunications whipping up public opinion to do something now, drive countries into a world war. In his vision, whole new theaters of war, amounting to fourth and fifth dimensions, have been invented. Amid a storm of steel, huge hulking machines roam across the landscape and literally shred to pieces the human beings in their path. Low-flying avions fill the sky, taking out individual targets or helping calibrate precision attacks from incredible distances beyond. Wireless communications connect soldiers and machines together in a kind of world-net…