Richard Loosemore is currently a professor in the Department of Mathematical and Physical Sciences at Wells College, Aurora, NY, USA. He graduated from University College London, and his background includes work in physics, artificial intelligence, cognitive science, software engineering, philosophy, parapsychology and archaeology.
Professor Loosemore’s principal expertise is in the field known as Artificial General Intelligence, which seeks a return to the original roots of AI: the construction of complete, human-level thinking systems. Unlike many AI/AGI researchers, his approach draws as much on psychology as on traditional AI, because he believes that the complex-system nature of thinking systems makes it almost impossible to build a safe and functioning AGI unless its design is as close as possible to the design of the human cognitive system. He considers the safety of intelligent systems to be of paramount importance, and one aspect of his work involves the development of AGI motivation mechanisms that can be guaranteed to be friendly.
Other things that have occupied him at various times: electronics, astronomy, choral singing, cello, experimental parapsychology, sporadic attempts to speak Japanese and Russian, mathematics, Aikido, drawing and painting, cubing, tennis, cat herding, amateur chemistry, contemporary dance, furniture making, wilderness camping, and a renovation war between him and the 1850s Classical Revival farmhouse in which he resides.
"Defining ‘Benevolence’ in the Context of Safe AI" richardloosemore.com (Dec 10, 2014)
"The Maverick Nanny with a Dopamine Drip: Debunking Fallacies in the Theory of AI Motivation" Ethical Technology (Jul 24, 2014)
"The Fallacy of Dumb Superintelligence" Ethical Technology (Nov 28, 2012)
"Why an Intelligence Explosion is Probable" H+ Magazine (Mar 22, 2011)
"The Lifeboat Foundation: A stealth attack on scientists?" Ethical Technology (Mar 8, 2011)
"Don’t let the bastards get you from behind!" Ethical Technology (Jan 5, 2011)