"Tracking and Hacking - Values and Happiness with AI" - interview with John C. Havens
Hank Pellissier
2016-04-04

Hank Pellissier: Can you explain what you mean by the book title "Heartificial Intelligence: Embracing our Humanity to Maximize Machines?" 



John C. Havens: A primary reason I wrote the book was to encourage people to identify, track, and live to their values. This is what I mean by “embracing our humanity.” If you ask most people to name the top five values they live their lives by, they draw a blank. However, there’s growing evidence in the field of positive psychology that if you don’t live to your values, your wellbeing and happiness decrease. This is also just common sense – if you spend more time doing things that don’t align with your values, you’re going to be unhappy.



The connection between values and Artificial Intelligence is that many companies today are trying to imbue machines with a sense of human values. Programmers may not create code that says, “emulate the value of dignity” or what have you, but they will train a system to do a certain action by, say, having it watch thousands of hours of YouTube videos showing people doing that action. This helps them reverse-engineer how to program “a value” into a machine.



So, put simply, if machines are being trained to track human values, we should do the same for ourselves. That’s what the title means, and it’s why the book also focuses on the need to create a common set of ethical standards for AI manufacturing that can provably align with our specific, subjective human values.





HP: Can you provide IEET readers with some tips on “Hacking H(app)iness”?



John C. Havens: The science of positive psychology is about “hacking happiness” in the sense that it’s not really about happiness, or mood, at all. Positive psychology is focused on taking actions around things like gratitude, altruism, and mindfulness to increase your wellbeing (a holistic sense of flourishing), in the same way you increase your physical wellbeing via exercise. A simple ‘hack’ along these lines is to do the following exercise:



* Sit in a quiet spot and ask yourself the following question: “How satisfied am I with my life right now?” Answer on a scale of 1-10, where 1 is pure misery and 10 is ecstasy. This question is not focused on your mood, but on a larger sense of whether you feel your life has purpose and meaning.



* Take ten minutes and write a list of five people you’re grateful for, taking the time to really reflect on why you’re grateful for them.



* Ask the same question about Life Satisfaction and rate yourself again.



While this is far from a scientific exercise, positive psychology has shown with similar tests that a person’s wellbeing increases after practicing gratitude in this way. This is a “happiness hack” in the sense that the exercise is not about increasing your mood, per se, as emotion is ephemeral and different for every person. However, the actions positive psychology identifies as increasing wellbeing are largely universal.

So if we can get people focused on increasing their wellbeing, versus being so focused on instant-gratification-oriented happiness, we’ll be able to hack the idea that mood should be prioritized over long-term, sustained mental and emotional health.
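
As a minimal sketch, the three-step exercise above could even be run as a simple self-tracking script. The 1-10 life-satisfaction scale and the gratitude list come from the exercise itself; the prompts, function names, and log file below are purely illustrative assumptions, not anything Havens prescribes.

```python
# Minimal sketch of the gratitude exercise as a self-tracking script.
# The rating scale and steps come from the exercise above; the prompts
# and the log file name are illustrative placeholders.
import json
from datetime import datetime


def ask_rating(prompt):
    """Ask for a 1-10 life-satisfaction rating and validate it."""
    while True:
        try:
            rating = int(input(prompt))
            if 1 <= rating <= 10:
                return rating
        except ValueError:
            pass
        print("Please enter a whole number from 1 (pure misery) to 10 (ecstasy).")


def gratitude_exercise():
    # Step 1: rate life satisfaction before the exercise.
    before = ask_rating("How satisfied are you with your life right now? (1-10): ")

    # Step 2: list five people you're grateful for, and why.
    print("List five people you're grateful for, and why:")
    gratitude = []
    for i in range(5):
        person = input(f"  Person {i + 1}: ")
        reason = input(f"  Why are you grateful for {person}? ")
        gratitude.append({"person": person, "reason": reason})

    # Step 3: rate life satisfaction again.
    after = ask_rating("Now rate your life satisfaction again (1-10): ")

    entry = {
        "timestamp": datetime.now().isoformat(),
        "before": before,
        "after": after,
        "gratitude": gratitude,
    }
    # Append each session to a local log so changes can be tracked over time.
    with open("happiness_log.jsonl", "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry


if __name__ == "__main__":
    gratitude_exercise()
```
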

HP: Can you tell us about your most recent writing, your most recent publication and what you're thinking about recently?



John C. Havens:  My most recent book is Heartificial Intelligence.  My current writing is a piece for Mashable focused on the need to implement ethical standards for manufacturing.  This is pretty much what I’m thinking about at all times, and I’m doing a great deal of work to try and make this happen at a large scale.

HP: Do you have a controversial opinion, different from the mainstream among futurist techies?



John C. Havens: I think many technologists find the idea of provably aligning AI with end-user values strange or annoying. Many people I know refer to ethics as “The E-Word” because it’s often associated with roadblocks for engineers or manufacturers versus anything that’s going to help them in their work. This makes total sense to me, which is why I’ve been so encouraged in recent months to discover methodologies like Value Sensitive Design, created by Batya Friedman, or books like Ethical IT Innovation by Sarah Spiekermann.



Ethics in manufacturing has, up until now, typically focused on averting risk or safety issues. Sometimes it’s about codes of ethics for engineers in their day-to-day work (integrity, etc.). However, methodologies like the ones I mentioned above are about applied ethics. These processes focus on creating stakeholder maps of the values of all end users who may come into contact with an AI product or service. In a world of predictive and personalization algorithms, this process is a must – if you’re building tools to track people’s actions that reflect their values without knowing those values, it’s guesswork.
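
To make the idea of a stakeholder map concrete, here is a minimal, hypothetical sketch of how such a map might be represented in code. The stakeholder categories, the example values, and the home-care scenario are placeholders for illustration only; they are not drawn from Value Sensitive Design or from the interview.

```python
# Hypothetical sketch of a stakeholder-value map for an AI product.
# Stakeholder names and values are illustrative placeholders only.
from dataclasses import dataclass, field


@dataclass
class Stakeholder:
    name: str          # e.g. "end user", "family caregiver", "regulator"
    direct: bool       # directly uses the system vs. indirectly affected
    values: list[str] = field(default_factory=list)  # values at stake for this group


def value_coverage(stakeholders):
    """Collect every value named across stakeholders, with who holds it,
    so designers can check each value against concrete design decisions."""
    coverage = {}
    for s in stakeholders:
        for v in s.values:
            coverage.setdefault(v, []).append(s.name)
    return coverage


# Example map for a hypothetical home-care assistant.
stakeholders = [
    Stakeholder("end user", direct=True, values=["dignity", "autonomy", "privacy"]),
    Stakeholder("family caregiver", direct=False, values=["trust", "transparency"]),
    Stakeholder("care provider", direct=True, values=["accountability", "safety"]),
]

for value, holders in value_coverage(stakeholders).items():
    print(f"{value}: {', '.join(holders)}")
```
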



All that said, it’s early days for applied ethics for AI, but there are a TON of great organizations working to help in this process and my goal is to help shift ethics from being the “E-word” to the “I-word” (Insights and Innovation). 





For more information, visit John’s site or follow him @johnchavens.