Olivia Gambelin ’13
Artificial Intelligence ethicist specializing in the practical application of ethics to technological innovation
Founder, Ethical Intelligence
What are the consequences when ethics are not built into AI and technology?
There are issues around data privacy: violations of personal information and a feeling of lost agency over your ability to make decisions about your own life. But the perspective I take is that the main risk is cutting yourself short if you’re not using ethics in your development and design processes. If you do ethics by design, it naturally leads to a better business and stronger technology.
What considerations do programmers need to take into account when implementing ethics into technology?
My biggest advice is to question yourself. When you’re making decisions, do not assume that everyone thinks the same way you do. It sounds simple, but it’s the easiest way to catch a lot of ethical problems. Programming is a very set way of problem solving, but you can’t solve ethical problems with an engineering patch. It’s something that involves conversation, deliberation and a lot of critical thinking.
How do you view the interaction of ethics and technology?
There are two sides to ethics. One is the risk-mitigation side, where you’re making sure nothing goes wrong and where regulation and compliance play a part. The other lies in what I find to be an exciting space of innovation and hope, because you can design for specific values. We can pretty much problem solve in any direction that we want. We can help shape problem solving and innovation in a way that benefits us as people. We all have an active role to play in how we interact with technology and develop it. There’s a lot of potential and opportunity that comes along with this space.