John C. Havens is executive director of the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems and of the Council on Extended Intelligence. He is the author of Heartificial Intelligence and Hacking Happiness (Penguin/Random House) and a contributor to some of the world’s most prominent media outlets.
On 26 October, John C. Havens will be speaking at the AI N’ CYBER 2023 conference, organised by the Digital National Coalition in partnership with Economic.bg and the Konrad Adenauer Foundation. You can find out more about the event on the website and on the Facebook and LinkedIn pages.
Tell us more about your journey before the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems.
I went to college to be a minister and then decided my “pulpit” should be as a professional actor after a teacher told me to follow my passion. After 15 years in New York City as a professional actor, I began podcasting as a way to marry my interests in acting and business. That led to a job doing business development for a podcasting firm, then a job in public relations, where I became an EVP of social media for a global PR firm and learned a ton about business. When I left that job I ran my own non-profit for a few years, called The Happathon Project, and also wrote three books and worked as a journalist for The Guardian, Mashable and other outlets. I was invited to speak at SXSW by IEEE, and then pitched them on creating a code of ethics for AI.
What is IEEE’s mission and role in developing ethical standards in AI?
In 2015, IEEE became one of the first global organizations of its kind to create a document called Ethically Aligned Design. This brought together (across three drafts) over 700 global experts to create a document that then inspired the IEEE 7000 standards series, the first of its kind for IEEE SA ("first" in that it formally married socio-technical issues together).
What are the biggest challenges related to artificial intelligence today?
There are many. The first is that people have been led to think they can’t access their personal data or have portability and some sense of ownership over it. But we do get access to our medical records, and we are aware of the messages we send on social media. Aggregating this data in an electronic/digital format can be done, as IEEE is doing with various personal data standards. Having access to this data means we can have “algorithmic personal data agents” that can function as the proxy for our data preferences at an algorithmic level, in digital, virtual, and metaverse realms.
In this context – what steps should we take to build a more sustainable future society?
We need to recognize that a focus on exponential growth alone for the AI and tech we build means we aren’t prioritizing ecological flourishing and human wellbeing, which we must do to sustain a planet that can support 8 billion people in health and joy. This is why we created the IEEE 7010-2020 standard and our new Planet Positive 2030 project, with our first big paper called Strong Sustainability by Design.
What do you see ahead for the AI industry in the next 5 years?
Hopefully more focus on giving people access to their data, and on caring for the planet. Otherwise the prioritization of productivity, speed and efficiency will mean we don’t pay enough attention to the planet and to the side of our human selves that has emotions, values, and faith.
How far do the benefits of AI extend to society, and when do these technologies start to have a negative impact?
There are myriad benefits of AI to society, but unless we have access to our own data first, and are encouraged and given time to honor our human side (emotions, faith, consciousness), as a society we’re prioritizing things that won’t keep the planet safe for 8 billion people for more than 10-20 years at best.
On October 26, you will participate in the AI N’ CYBER CONFERENCE 2023. Why are international forums like this important?
One of the biggest benefits of gathering feedback on Ethically Aligned Design came when many people told us the first edition of the document was too Western, or only Western. So we added a chapter on non-Western ethical traditions and improved the document by about 1,000%. No AI can be “responsible” if it doesn’t draw on values and ideals from all regions of the world where possible.