The police response to protests and riots in Ferguson, Missouri was filled with images that have become commonplace all over the world in the last decade. Police dressed in once-futuristic military gear confronting civilian protesters as if they were a rival army. The uniforms themselves put me in mind of nothing so much as the stormtroopers from Star Wars. I guess that would make the rest of us the rebels.
Our economy is broken. There’s one economy for the wealthy, and another for the rest of us. This division has been worsened by the behavior of corporate executives who manage their corporations for short-term personal gain rather than for long-term fiscal soundness.
The paper tries to fuse traditional concerns about the problem of evil with recent work in population ethics. The result is an interesting, and somewhat novel, atheological argument. As is the case with every journal club, I will try to kick-start the discussion by providing an overview of the paper’s main arguments, along with some questions you might like to ponder about its effectiveness.
“This is an economic revolution,” a new online video says about automation. The premise of “Humans Need Not Apply” is that human work will soon be all but obsolete. “You may think we’ve been here before, but we haven’t,” says CGP Grey, the video’s creator. “This time is different.” The video has gone viral, with nearly two million YouTube views in one week. But is it true?
I’ve recently been looking into the ethics of vegetarianism, partly because I’m not a vegetarian myself and I’m interested in questioning my position, and partly because it is an interesting philosophical issue in its own right. Earlier this summer I looked at Jeff McMahan’s critique of benign carnivorism. Since that piece was critical of the view I myself hold, I thought it might be worthwhile balancing things out by looking at an opposing view.
There’s a condition I’ve noted among former hard-core science-fiction fans that for want of a better word I’ll call future-deflation. The condition consists of an air of disappointment in and detachment from the present that emerges because the future one dreamed of in one’s youth has failed to materialize. It was a dream of what the 21st century would entail, fostered by science-fiction novels, films and television shows, a dream that has not arrived and will seemingly never arrive, at least within our lifetimes. I think I have a cure for it, or at least a strong preventative.
A key future use of neural electrode technology envisioned for nanomedicine and cognitive enhancement is intracortical recording devices that would capture the output signals of multiple neurons that are related to a given activity, for example signals associated with movement, or the intent of movement.
This is the sixth part in my series on Nick Bostrom’s recent book Superintelligence: Paths, Dangers, Strategies. The series is covering those parts of the book that most interest me. This includes the sections setting out the basic argument for thinking that the creation of superintelligent AI could threaten human existence, and the proposed methods for dealing with that threat.
The transfer of used military equipment from the armed forces to police departments around the country has been accompanied, at least to a certain extent, by a shift in public thinking. The news media have played a critical part in that shift, both in their coverage and in what they choose not to cover.
Human beings seem to have an innate need to predict the future. We’ve read the entrails of animals, thrown bones, tried to use the regularity or lack of it in the night sky as a projection of the future and omen of things to come, along with a thousand other kinds of divination few of us have ever heard of. This need to predict the future makes perfect sense for a creature whose knowledge is biased towards the present and the past. Survival means seeing far enough ahead to avoid dangers, so that an animal that could successfully predict what was around the next corner could avoid being eaten or suffering famine.
They say that one swallow doesn’t make a summer, and one Politico story certainly doesn’t make a campaign season. But if a recent article there is correct – if the Democratic Party’s strategy this year really is “Running as a Dem (while) sounding like a Republican” – then the party may be headed for a disaster of epic but eminently predictable proportions.
Debate about the merits of enhancement tends to be pretty binary. There are some — generally called bioconservatives — who are opposed to it; and others — transhumanists, libertarians and the like — who embrace it wholeheartedly. Is there any hope for an intermediate approach? One that doesn’t fall into the extremes of reactionary rejection or uncritical endorsement?
Machine ethics is a term used in different ways. The basic use is in the sense of people attempting to instill some sort of human-centric ethics or morality in the machines we build — robots, self-driving vehicles, and artificial intelligence (Wallach 2010) — so that machines do not harm humans either maliciously or unintentionally.
I just finished a thrilling little book about the first machine war. The author writes of a war set off by a terrorist attack where the very speed of machines being put into action, and the near light speed of telecommunications whipping up public opinion to do something now, drives countries into a world war. In his vision whole new theaters of war, amounting to fourth and fifth dimensions, have been invented. Amid a storm of steel, huge hulking machines roam across the landscape and literally shred to pieces the human beings in their path. Low-flying avions fill the sky, taking out individual targets or helping calibrate precision attacks from incredible distances beyond. Wireless communications connect soldiers and machines together in a kind of world-net…
Transhumanists as a rule may prefer to contemplate implants and genetic engineering, but few if any violations of morphological freedom exceed being torn to pieces by shrapnel or dashed against concrete by an overpressure wave. In this piece I argue that the settler-colonial violence in occupied Palestine relates to core aspects of modernity and demands futurist attention both emotionally and intellectually.
Our long national nightmare is over – for the moment. Congress has adjourned for summer recess after a session that can safely be described as “historic,” both for its historic lack of accomplishment and the historically low regard in which it is now held by the public.
This is the fourth post of my series on Nick Bostrom’s recent book Superintelligence: Paths, Dangers, Strategies. In the previous post, I started my discussion of Bostrom’s argument for an AI doomsday scenario. Today, I continue this discussion by looking at another criticism of that argument, along with Bostrom’s response.
In the first two entries, I looked at some of Bostrom’s conceptual claims about the nature of agency, and the possibility of superintelligent agents pursuing goals that may be inimical to human interests. I now move on to see how these conceptual claims feed into Bostrom’s case for an AI doomsday scenario.
This is the second post in my series on Nick Bostrom’s recent book Superintelligence: Paths, Dangers, Strategies. In the previous post, I looked at Bostrom’s defence of the orthogonality thesis. This thesis claimed that pretty much any level of intelligence — when “intelligence” is understood as skill at means-end reasoning — is compatible with pretty much any (final) goal. Thus, an artificial agent could have a very high level of intelligence, and nevertheless use that intelligence to pursue very odd final goals, including goals that are inimical to the survival of human beings. In other words, there is no guarantee that high levels of intelligence among AIs will lead to a better world for us.
In this entry, I take a look at Bostrom’s orthogonality thesis. As we shall see, this thesis is central to his claim that superintelligent AIs could pose profound existential risks to human beings. But what does the thesis mean and how plausible is it?
Lately, I’ve been enjoying reruns of the relatively new BBC series Sherlock, starring Benedict Cumberbatch, which imagines Arthur Conan Doyle’s famous detective in our 21st-century world. The thing I really enjoy about the show is that it’s the first time I can recall that anyone has managed to make Sherlock Holmes funny without at the same time undermining the whole premise of a character whose purely logical style of thinking makes him seem more a robot than a human being.
A Korean woman was on the verge of divorce because her husband no longer found her attractive and was having an affair. Nothing worked in her efforts to save the marriage, and as a last resort she underwent cosmetic surgery. The result was so dramatic that her son didn’t recognize her when she returned home.
Walgreens is the pharmacy that, at least according to its website, can be found “at the corner of Happy & Healthy.” If its executives have their way, however, it may soon be found near the intersection of Ziegelackerstrasse and Untermattweg in Bern, Switzerland. By acquiring the much smaller Swiss company that is located near that corner, the American company can dodge millions in American taxes.
I was just informed that Dick Pelletier, one of the IEET’s most beloved writers, passed away this evening from Stage 5 Parkinson’s Disease. I have worked with Dick for the past two years, and he taught me more about the next step in human evolution than most doctors and professors I have met. His Positive Futurist stance on humanity and mind will continue to inspire us all.
So I finally got around to reading Max Tegmark’s book Our Mathematical Universe, and while the book answered the question that had led me to read it — namely, how one might reconcile Plato’s idea of eternal mathematical forms with the concept of multiple universes — it also threw up a whole host of new questions. This beautifully written and thought-provoking book made me wonder about the future of science and the scientific method, the limits to human knowledge, and the scientific, philosophical and moral meaning of various ideas of the multiverse.
Voltaire once said that “work saves a man from three great evils: boredom, vice and need.” Many people endorse this sentiment. Indeed, the ability to seek and secure paid employment is often viewed as an essential part of a well-lived life. Those who do not work are reminded of the fact. They are said to be missing out on a valuable and fulfilling human experience. The sentiment is so pervasive that some of the foundational documents of international human rights law — including the UN Declaration of Human Rights (UDHR Art. 23) and the International Covenant on Economic, Social and Cultural Rights (ICESCR Art. 6) — recognise and enshrine the “right to work”.
This is the second part of my series on feminism and the basic income. In part one, I looked at the possible effects of an unconditional basic income (UBI) on women. I also looked at a variety of feminist arguments for and against the UBI. The arguments focused on the impact of the UBI on economic independence, freedom of choice, the value of unpaid work, and women’s labour market participation.
The introduction of an unconditional basic income (UBI) is often touted as a positive step in terms of freedom, well-being and social justice. That’s certainly the view of people like Philippe Van Parijs and Karl Widerquist, both of whose arguments for the UBI I covered in my two most recent posts. But could there be other, less progressive effects arising from its introduction?