From Many, One: Building Cohesive RAI Teams

I've spent nearly a decade working in operations, risk, and Responsible AI at Facebook, Google, and Salesforce, including almost seven years focused specifically on Responsible AI. Since ChatGPT's release in late 2022, I've watched countless RAI functions spin up rapidly across the industry: new teams forming, new mandates being created, new experts entering the field.

I’ve also noticed a pattern. These teams often look remarkably similar. They bring together ethicists, engineers, policy and privacy experts, lawyers, product managers, researchers, and Trust and Safety practitioners. Different disciplinary languages. Different risk tolerances. Different definitions of what “responsible” even means.

The Problem

Most organizations assume that assembling smart people from different backgrounds is enough: Put the right experts in a room, give them a mandate, and trust that good outcomes will follow.

But in practice, without foundational alignment, these teams can fracture rather than synthesize.

Engineers optimize for performance metrics. Ethicists focus on potential harms. Policy teams worry about regulatory exposure. Product managers need to ship while researchers want rigorous evaluation that takes time.

Even core concepts diverge. “Fairness” means statistical parity to one person, equal opportunity to another, and user satisfaction to a third. “Risk” is assessed through entirely different lenses depending on who is in the room.

Decisions are made, then relitigated. Teams talk past each other. Progress stalls, not because people disagree on values, but because they lack a shared framework for translating values into decisions.

And so, the diversity that should be the team's greatest asset becomes a liability.

The Floor and the Ceiling

To be clear, the key to resolving this issue isn’t making everyone think the same way. It’s establishing shared foundations that allow diverse perspectives to actually build on each other.

I think about this as creating two essential layers: a floor and a ceiling. The floor is your map, showing what's required. The ceiling is your compass, pointing where you aspire to go.

The Floor: Minimum Standards and Operating Principles

The floor defines what "responsible AI" actually means in your organizational context. It answers concrete questions:

  • What's our sphere of control and responsibility? Can we implement mitigations in our own models, or, for a third-party system, must we mitigate around it?

  • What regulatory requirements must we meet? EU AI Act? State privacy laws? Industry-specific standards?

  • Does RAI operate through carrots or sticks? Can it block launches, or is its role purely advisory?

  • What professional backgrounds will comprise the RAI team? Policy experts? Trust and Safety practitioners? Privacy specialists? Researchers? Product managers?

  • Where will RAI sit organizationally? With Trust and Safety? Compliance? Research? Policy? Product?

Without a clear floor, every decision requires re-arguing the basics. Teams waste time debating what "responsible" means instead of evaluating whether their work meets that standard.

The Ceiling: Aspirational Principles

The ceiling defines where you’re going beyond compliance. It articulates what your organization stands for and aspires to achieve.

Many companies already have statements about fairness, transparency, and accountability. Consolidating these ideas into a working list of AI principles—your core focus areas—builds a strong foundation for RAI initiatives.

Developing your ceiling can even serve as a way to harness your RAI team's diversity from the beginning: inviting all represented disciplines to align on what the organization should prioritize fosters both buy-in and a sense of ownership of this foundational document. Functioning as a sort of constitution, these AI principles then create a shared vocabulary and reference points that help teams navigate ambiguous situations.

In this way, the ceiling isn't just nice-to-have philosophy. It guides decisions when the floor doesn’t give you a clear answer.

Why You Need Both

Simply put, a floor without a ceiling becomes compliance theater: checking boxes without purpose or direction. A ceiling without a floor means everyone interprets aspirations differently and nothing ships; there's no shared definition of what meeting those aspirations actually requires.

In the end, the teams that succeed are the ones that use both to navigate complexity, turning diverse perspectives into coordinated action.
