Learning to live well with artificial intelligences when they touch every part of our lives.


Artificial intelligences (AI) pose many questions about how we coexist with them in a meaningful way. AIs are only as good as the humans who make them, and they are not infallible. Questions of accountability are becoming blurred as we wrestle with the idea of holding an AI accountable for its actions rather than the person or organisation that created it. Social ‘norms’ are being questioned as we feel our way into our relationships with digital agents such as Alexa. We are now starting to ask how we should behave towards them and whether we should treat them with the same respect we give to humans.

Many AIs are doing tremendous good, saving lives and running complex systems that improve our lives. However, not all applications of AI have been as empowering. Artificially intelligent software was trialled in parts of the US justice system and then withdrawn after it was found to be giving judges poor guidance on sentencing.

The boundaries between AI and human are blurring. Google Duplex is a next-generation digital assistant able to call humans and make arrangements on your behalf. Applications such as the AI news reader in China that convincingly portrayed a human TV news anchor might not be a problem in themselves. However, the rise of ‘fake news’ that AI can amplify has started to erode trust and foster scepticism towards AI. Machine learning has become so sophisticated that it can now create ‘deepfakes’: video, imagery and audio that convincingly replicate real individuals. When seeing no longer means believing, we will need new tools to help us assess the validity of everything we see and hear. Deepfakes will only become easier to produce and more sophisticated as developers create better AI and new techniques for falsifying video. The ‘cold war’ between fakery and fake detection begins.

While AIs are becoming more human, humans are augmenting themselves with technology through innovative products such as Elon Musk’s Neuralink start-up, which is pioneering implants that connect the human brain to a computer. First tests on people are scheduled for the end of 2020, with the ultimate aim of creating superhumans with super intelligence. Exoskeletal suits are also becoming available for a wide range of purposes: they can be worn by manual workers to give them superhuman strength, by people with disabilities to help them move and walk, and by the elderly to keep them mobile for longer.

As the human and AI worlds collide, the lack of an agreed ethical position and standards is leading to a sense of unease. The technology community has started to address some of the moral questions by establishing new institutions that spark meaningful global dialogue, with cross-cultural conversations about how to use technology for the benefit of all. These institutions comprise governments, companies, non-profits, academics and, crucially, the individuals whose lives they impact.

Initiatives such as Oxford University’s Future of Humanity Institute and DeepMind’s Ethics and Society project are bringing together specialists in technology and the humanities to try to foresee and mitigate the negative social implications of AI, as well as steering research and investment towards beneficial projects. In 2018, the Nuffield Foundation launched the Ada Lovelace Institute, a charitable trust that is educating a new generation of digital ethicists, with a mission to foster research and inform the debate. In the year ahead and beyond, we expect more and more AI companies to hire professional ethicists into senior roles.