Just over one year ago, corporate AI ethics became a regular headline issue for the first time.
In December 2020, Google had fired Timnit Gebru—one of its top AI ethics researchers—and in February 2021, it would terminate her ethics team co-lead, Margaret Mitchell. Though Google disputes their version of events, the terminations helped push some of the field’s formerly niche debates to the forefront of the tech world.
Big picture: Every algorithm, whether it's dictating the contents of a social media feed or deciding whether someone can secure a loan, can have real-world impact, with the potential to harm as much as it helps.
- Policymakers, tech companies, and researchers are all grappling with how best to address that fact, which has become impossible to ignore.
- And that is, in a nutshell, the field of AI ethics.
To get a sense of how the field will evolve, we checked in with seven AI ethics leaders about the opportunities and challenges it faces this year.
The question we posed: “What’s the single biggest advancement you foresee in the AI ethics field this year? Conversely, what’s the most significant challenge?”
Read the full piece for all seven responses; we’ve included one answer below.
Deborah Raji, fellow at Mozilla:
I think for a long time, policymakers have relied on narratives from corporations, research papers, and the media, and projected a very idealistic image of how well AI systems are working. But as these systems make their way into real-world deployments, we’re increasingly aware of the fact that they fail in really significant ways, and that those failures can actually result in a lot of real harm to those who are impacted.
There’s been a lot of discussion on accountability for moderation systems specifically, but we’re going to hear a conversation about the need for auditing and accountability more broadly, and in particular auditing from independent third-party actors—not just regulators, consultants, and internal teams, but actually getting some level of external scrutiny to assess the systems and challenge the narratives being told by the companies building them.
In terms of the actual obstacles to seeing that happen, I think there are a lot of incongruities in how algorithmic auditors currently work.
There are all these different actors that want to hold these systems accountable, but they’re currently working in isolation from each other and not very well coordinated. You have internal auditors within companies, consultancies, [and] startups that are coming up with tools. Journalists, law firms, civil society—there are just so many different institutions and stakeholders that identify as algorithmic auditors that I think there will need to be a lot more cohesion.