Ethical AI: The risks and opportunities of artificial intelligence
With the exponential advancement of technology comes incredible opportunity: improved health and wellbeing, more convenience, better access to information, and the potential for better working and living conditions overall. On the other hand, there is the deeply unsettling dystopian vision of future robot rulers, where Big Brother is watching and technology becomes a tyrant rather than a tool.
So when it comes to technological advancement in general and artificial intelligence in particular, what are the ethical considerations? What governance should we put into place to guard against risks and responsibly wield the power of AI? What implications does this have for developers of technology, business leaders, governments, and society as a whole?
These are the questions Dr. Mark Esposito considers in his academic research as well as his own business ventures, and ones he explores with Hult students in the classroom. Among his other entrepreneurial and academic ventures, Mark is the co-founder of the artificial intelligence studio Nexus Frontier Tech and professor of Business & Economics at Hult.
We asked him a few key questions to better understand what individuals, organizations, and governments should think about in order to progress ethically in a world of big data and AI.
While the idea of interacting with androids might seem a long way off, in what practical applications of AI today should we be thinking about ethics?
“First, do no harm.” This is the vow doctors take when it comes to ethical patient care, a form of the ancient Hippocratic Oath. However, there will be scenarios where, unfortunately, something goes wrong: a patient may become more unwell or die as a result of a doctor’s decision or misdiagnosis.
Excitingly, AI is stepping in and showing promise in medicine. By using artificial intelligence to analyze data on symptoms, medications, and medical history, machines are now able to make a diagnosis and determine potential treatment options, often more accurately and efficiently than human doctors.
But it’s important to remember that our attempts to automate and reproduce intelligence are not deterministic; they are probabilistic. And that means they’re also subject to the errors and experiential biases that shape all other kinds of intelligence.
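To make that concrete, here is a minimal sketch in Python. The feature names and weights are invented for illustration, not drawn from any real diagnostic system; the point is simply that a probabilistic model outputs a degree of belief, not a verdict:

```python
import math

# A minimal sketch, not a real diagnostic system: the feature names and
# weights below are invented for illustration. If the historical records
# a model learns from are biased, the learned weights inherit that bias.
WEIGHTS = {"fever": 1.2, "cough": 0.8, "age_over_65": 0.9}
BIAS = -1.5

def diagnosis_probability(patient: dict) -> float:
    """A logistic model returns P(condition), not a yes/no answer."""
    score = BIAS + sum(w * patient.get(f, 0.0) for f, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-score))

p = diagnosis_probability({"fever": 1.0, "cough": 1.0, "age_over_65": 0.0})
print(f"P(condition) = {p:.2f}")

# The yes/no "decision" only exists once a human-chosen threshold is
# applied, and that threshold trades false alarms against missed cases.
print("flag for review" if p > 0.5 else "no flag")
```

The decision only appears once someone chooses the threshold, and choosing it is a human, ethical judgment: it determines how many false alarms are tolerated in order to avoid missed cases.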
So while AI is showing promise as a tool for supporting and strengthening doctors’ decision-making, there are risks and ethical considerations. Just as with doctors, if you give AI the power of decision-making along with the power of analysis, at some point it will more than likely be implicated in a patient’s death. When that happens, is it the responsibility of the doctor relying on AI? The hospital? The engineer behind the technology? The corporation selling it?
Similar considerations are relevant to things like autonomous vehicles. Who is morally responsible when something goes wrong?
So, how do we determine where responsibility or accountability lies? What’s the role of humans in the AI decision-making process?
AI analysis is a powerful tool. And of course, with great power comes great responsibility.
Answers to such questions depend on how the governance is arranged: whether a doctor reviews each AI-provided analysis, and the relative weight given to AI-driven insights or predictions. In short, they depend on the buffers between the AI and the outcome.
The first question around building accountability is how to keep humans in the decision loop of processes made more autonomous through AI. The next is to preserve accountability through a right to understanding: knowing why an algorithm made one decision instead of another.
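For the simplest class of models, that right to understanding can be served quite directly. Below is a minimal, hypothetical sketch (the weights and feature names are again invented for illustration) of how a linear scoring model can report which inputs pushed its decision one way or the other:

```python
# A minimal sketch with invented weights, not any real scoring system.
# For a linear model, weight * value is each input's exact contribution
# to the score, so the model can answer "why this decision?" directly.
WEIGHTS = {"income": 0.6, "existing_debt": -1.1, "years_employed": 0.4}
BIAS = 0.2

def explain(applicant: dict) -> None:
    contributions = {f: w * applicant.get(f, 0.0) for f, w in WEIGHTS.items()}
    score = BIAS + sum(contributions.values())
    verdict = "approve" if score > 0 else "decline"
    print(f"decision score: {score:+.2f} -> {verdict}")
    # Rank inputs by how strongly they pushed the decision either way.
    for feature, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {feature:>15}: {c:+.2f}")

explain({"income": 0.8, "existing_debt": 1.2, "years_employed": 0.5})
```

For modern nonlinear models, no such exact breakdown exists and approximate attribution methods must stand in for it, which is part of why the right to an explanation is a governance challenge rather than a solved technical detail.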
The challenge facing ethical AI is designing a clear and concise roadmap for those who employ it, those who use it, and those affected by it. The purpose of such a roadmap is not only to clarify when and how AI is used, but to help improve understanding of AI’s personal, psychological, and social consequences.
The ethics of data collection and usage has been a big news story this year. What do we need to think about when it comes to AI and our data as citizens and consumers?
AI is making many of our choices easier and more convenient, and in doing so, is tightening competition in the space of customer choice. As this evolution happens, the question is less to what extent AI is framing our choices and more how it is shaping them.
In such a world, we need to understand when our behavior is being shaped, and by whom.
Clearly, most of us are quite comfortable living in a world where our choices are already being shaped by AI. From search engines to smooth traffic flow, many of our daily conveniences are built on the speed of AI systems working in the background.
The question we need to ask ourselves when considering AI and its governance is whether we are comfortable living in a world where we do not know if—and how—we are being influenced. This influence could be on everything from what we buy to how we vote.
With that in mind, what are businesses and governments doing now when it comes to data and AI governance? What more needs to be done?
In Europe, the recent General Data Protection Regulation (GDPR) gives us a right to an explanation of when and how our data is being collected, and a better means of being informed about the use of that data. Governing data is essential, as are being informed and having a right to an explanation. However, this isn’t enough.
The unfortunate truth is that no regulation by itself will be sufficient to effectively govern how our data is used. The pace at which any such regulation would have to be updated to keep up with emerging innovation is a dilemma in itself.
Going forward, politicians, coders, and philosophers certainly have their work cut out for them.
Read more on the topic of ethical AI from Mark and his colleagues:
Designing a roadmap to ethical AI in government
What Governments Need to Understand About Ethical AI
Interested in finding out more about Hult’s future-focused programs and how disruptive tech is built into your degree? Download a brochure.