AI Ethics as an Org Design Problem

Max Krueger
3 min read · Feb 3, 2022

Rethinking the approach to AI Ethics


I am taking a deep dive into AI ethics and the just use of machine intelligence. I will periodically post observations as I continue to learn the intricacies of this issue. Thanks for reading!

I recently stumbled into the work of Aaron Dignan, Greg McKeown, and others focused on organizational design and the future of work. The deeper I got, the more I couldn't shake the feeling that this may be a missing piece of the AI ethics puzzle. I believe we need to rethink how data teams perform work in order to make machine intelligence safer and more ethical. Ethics is not a belief problem; it is a design problem.

AI ethics, while still broadly neglected, has attracted a lot of research focused on technical solutions: metrics to measure fairness, algorithms that promote less biased outcomes, research teams dedicated to developing safe models. These form an exciting suite of techniques for reducing the prevalence of AI incidents, but they don't seem to be enough.
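To make the first of those concrete, here is a minimal sketch of one common fairness metric, the demographic parity difference, which compares a model's positive-prediction rates across groups. The function name and the toy data are hypothetical illustrations, not taken from any particular library.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap in positive-prediction rates between two groups.

    y_pred: binary model predictions (0 or 1)
    group:  binary protected-attribute labels (0 or 1)
    A value near 0 means the model selects both groups at similar
    rates; a large gap flags a potential fairness issue to investigate.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # selection rate for group 0
    rate_b = y_pred[group == 1].mean()  # selection rate for group 1
    return abs(rate_a - rate_b)

# Hypothetical predictions for eight applicants in two groups
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # 0.5, a large gap
```

A metric like this is easy to compute, which is exactly the point of the paragraph above: the technical tooling exists, yet on its own it doesn't change how teams behave.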

For decades society has known about climate change and the measures businesses could take to mitigate its consequences, yet progress has been glacial (pun intended) at best. This is, at least in part, due to centralized power structures and institutional norms. To make progress in AI ethics quickly, I am suggesting a redesign of these structures and norms.

When I talk about rethinking work, I am not referring to a reorganization of the corporate org chart. I am talking about a radically different approach to how data teams work and interact. Inherent to this argument is the belief that (generally) people like to do the right thing. The current organizational approach forces people into a standardized framework when, in fact, we need a system tailored to the individual. One way to achieve this is to allow teams to self-manage. Empower teams to take complete ownership over a given process and provide them with guardrails to operate within. These guardrails should be focused on ethical outcomes. Encourage teams to ask, “Is this ethical?” rather than “Will this drive more clicks?”

To further incentivize ethical behavior and keep ethics top of mind, organizations or individual teams could run “bug bounty” programs like the one Twitter recently completed. For example, a team could run a short hackathon focused on finding bias and unethical outcomes in a model, where the winning employee or team is funded to donate to a charity of their choice. Such programs keep ethics at the center of the organizational mission. As you know, it is much easier to do something when your peers are doing it as well.

AI ethics is an immensely complicated issue. Carly Kind, Director of the Ada Lovelace Institute, describes a third wave of AI ethics that acknowledges it is not just a technical problem but a sociotechnical one requiring interdisciplinary solutions. Rethinking how data teams work is far from a silver bullet, but it can unlock the inherent positive behavior of people. Decentralizing power while providing a set of ethics-focused guardrails is a step toward a more just use of machine intelligence.

Agree? Disagree? Let’s chat. Use this link to schedule a time to talk. Thanks for reading!
