In light of the recent news that the Australian government has officially criminalized the mere act of owning blueprints to 3D print a gun, the question arises of how other countries will handle the future prospect of advanced 3D-printed weaponry. Gun ownership is already a controversial topic currently being debated here in the United States, and with 3D-printed guns now added to the mix, the controversy is likely to intensify.
Achieving what Elon Musk’s company SpaceX has only attempted to do (and failed thus far), Blue Origin, a private space company founded by Amazon CEO Jeff Bezos, has officially landed a reusable rocket after a quick trip to and from space.
The region of the Middle East has been in turmoil for more than a decade. With the advent of the recent terrorist attacks on Paris and the threat of more by the Muslim extremist group ISIS, many have been pondering how the problems plaguing the Middle East can be solved. I believe that technology can play an integral role in the process of repairing and advancing the region. The modernization and digitization of the entire region’s infrastructure would provide numerous benefits that would increase stability and redress the damage done to the economy and society from years of war.
Airing every Sunday at 9/8c, National Geographic’s latest TV show Breakthrough, hosted by Paul Giamatti, provides a unique look into the growing arena of “how to enhance human beings” using advanced science and technology. In its latest episode, “More Than Human,” Giamatti gets up close and personal with Lockheed Martin’s newest exoskeleton suit, FORTIS (a video clip of the episode is provided below).
Artificial intelligence is a classic risk/reward technology. If developed safely and properly, it could be a great boon. If developed recklessly and improperly, it could pose a significant risk. Typically, we try to manage this risk/reward ratio through various regulatory mechanisms. But AI poses significant regulatory challenges. In a previous post, I outlined eight of these challenges. They were arranged into three main groups. The first consisted of definitional problems: what is AI anyway? The second consisted of ex ante problems: how could you safely guide the development of AI technology? And the third consisted of ex post problems: what happens once the technology is unleashed into the world? They are depicted in the diagram above.
The automobile industry is still working toward the first fully autonomous vehicle, but Tesla Motors recently took the industry one step closer. The US car company has managed to make one of the biggest advancements in recent automobile technology while generating massive controversy at the same time.
What will the future look like? The further upwards one moves from the basement domain of physics, the harder it often gets to predict long-term trends. Nonetheless, we have some fairly good clues about what to expect moving forward.
There’s a saying among futurists that a human-equivalent artificial intelligence will be our last invention. After that, AIs will be capable of designing virtually anything on their own — including themselves. Here’s how a recursively self-improving AI could transform itself into a superintelligent machine.
I was contacted by a staff writer from the online newsmagazine The Daily Dot. He is writing a story at the intersection of computer superintelligence and religion, and asked me a few questions. Here are my answers to his queries.
I see you’re on a tight deadline so I’ll just answer your questions off the top of my head. A disclaimer though, all these questions really demand a dissertation length response. My answers are below:
Among transhumanists, Nick Bostrom is well-known for promoting the idea of ‘existential risks’, potential harms which, were they to come to pass, would annihilate the human condition altogether. Their probability may be relatively small, but the expected magnitude of their effects is so great, so Bostrom claims, that it is rational to devote significant resources to safeguarding against them. (Indeed, there are now institutes for the study of existential risks on both sides of the Atlantic.) Moreover, because existential risks are intimately tied to the advancement of science and technology, their probability is likely to grow in the coming years.
Imagine the day when we finally receive a signal from an extraterrestrial intelligence, only to find that there’s a message embedded within. Given that we don’t speak the same language, how could we ever hope to make sense of it? We spoke to the experts to find out.
Communication with Extraterrestrial Intelligence, aka “CETI”, is the branch of SETI concerned with both the transmission and reception of messages between ourselves and an alien civilization. Scientists have been trying to detect signals from an extraterrestrial intelligence (ETI) since the 1960s, but haven’t found anything.
“For the modern mad men and wolves of Wall Street, gone are the days of widespread day drinking and functional cocaine use. Instead, in this age of efficiency above all else, corporate climbers sometimes seek a simple brain boost, something to help them to get the job done without manic jitters or a nasty crash.
For that, they are turning to nootropics,” writes Jack Smith IV in the cover story of an April 2015 edition of the New York Observer.
This restructuring of how to view science is geared not just at defending science from charges of reactionism from leftists, but more broadly at clarifying how we might view that much looser bundle invoked by the word “science” as a political force. Because the array of things popularly associated with “science” is so wildly varying and hazy, most of the political claims surrounding science that don’t slice it away to near irrelevance or neutrality as a formulaic procedure have sought to identify underlying ideological commitments and then define “science” in terms of them.
The fact of the matter is that the remarkably successful phenomenon that the term “Science!” has wrapped itself around is not so much a methodology as an orientation. What was really going on, and what is still going on in science, is the radicalism of scientists, that is to say their vigilant pursuit of the roots (the Latin ‘radix’, whence ‘radical’). Radicals constantly push our perspectives into extreme or alien contexts until they break or become littered with unwieldy complications, and when that happens we are happy to shed the historical baggage entirely and start anew: not just to add caveat upon caveat to an existing model, but sometimes to prune them away or throw it all out entirely. Ours is the search for patterns and symmetries that might reflect more universal dynamics rather than merely good rules of thumb within a specific limited context. As any radical knows, “good enough” is never actually enough.
With the launch of Sputnik in 1957, humankind extended its presence from the Earth’s surface towards outer space. Since that time, thousands of other objects have been sent into Earth orbit, including weather satellites, communications equipment and military hardware. Wherever people go, they tend to leave their mark, mostly harmful, on the natural environment, and space is no exception. There are many pieces of space junk – the remains of discarded, malfunctioning or obsolete devices – that now whiz around the earth and pose threats to current space projects.
IEET Fellow Wendell Wallach recently co-published an article with ASU law professor Gary E. Marchant in the National Academy of Sciences’ journal Issues in Science and Technology. The piece, entitled “Coordinating Technology Governance,” explores the need for, and application of, a nimble, authoritative coordinating body, referred to as a Governance Coordination Committee, to fill an urgent gap in the assessment of the ethical, legal, social, and economic consequences of emerging technologies.
After several decades of relative obscurity, Transhumanism as a philosophical and technological movement has finally begun to break out of its strange intellectual ghetto and make small inroads into the wider public consciousness. This is partly because some high-profile people have either adopted it as their worldview or warned against its potential dangers. Indeed, the political scientist Francis Fukuyama named it “the world’s most dangerous idea” in a 2004 article in the US magazine Foreign Policy, and Transhumanism’s most outspoken publicist, Ray Kurzweil, was recently made director of engineering at Google, presumably to hasten Transhumanism’s goals.
Skeptics of renewables sometimes cite data from the EIA (the US Department of Energy’s Energy Information Administration) or from the IEA (the OECD’s International Energy Agency). The IEA has a long history of underestimating solar and wind, one that I think is only starting to be understood.
This map shows that AI failure resulting in human extinction could happen on different levels of AI development, namely:
1. before it starts self-improvement (unlikely, but we can still envision several failure modes),
2. during its takeoff, when it uses various instruments to break out of its initial confinement, and
3. after its successful takeover of the world, when it starts to implement its goal system, which could be unfriendly, or whose friendliness may be flawed.
The halcyon days of the mid-20th century, when researchers at the (in?)famous Dartmouth summer school on AI dreamed of creating the first intelligent machine, seem so far away. Worries about the societal impacts of artificial intelligence (AI) are on the rise. Recent pronouncements from tech gurus like Elon Musk and Bill Gates have taken on a dramatically dystopian edge. They suggest that the proliferation and advance of AI could pose an existential threat to the human race.
One can’t help but be positive about the future. Even obstacles have a bright side. For example, humans will at some point be limited by space and time; we can’t expect to go far in space exploration without the development of strong artificial intelligence and robots.
Futurists like Ray Kurzweil believe that advancements in the field of artificial intelligence will culminate at a point in the near future that allows humans to transcend their biological form. This is what he calls the Singularity, and he describes it as follows:
My plan below should be read with some irony, because it is almost irrelevant: we have only a very small chance of surviving the next 1000 years. If we do survive, we have numerous tasks to accomplish before my plan can become a reality.
Additionally, there’s the possibility that the “end of the universe” will arrive sooner, if our collider experiments lead to a vacuum phase transition, which begins at one point and spreads across the visible universe.
We started to discuss Stevenson’s probe, a hypothetical vehicle that could reach the Earth’s core by melting its way through the mantle, carrying scientific instruments with it. It would take the form of a large drop of molten iron, at least 60,000 tons: theoretically feasible, but practically impossible.
Since their inception 60 years ago, satellites have gone on to become an indispensable component of our modern high-tech civilization. But because they’re reliable and practically invisible, we take their existence for granted. Here’s what would happen if all our satellites suddenly just disappeared.
The idea that all the satellites — or at least a good portion of them — could be rendered inoperable is not as outlandish as it might seem at first. There are at least three plausible scenarios in which this could happen.
The IEET is a 501(c)3 non-profit, tax-exempt organization registered in the State of Connecticut in the United States.