Article · March 12, 2026

What 20 Coaches Told Us About AI: Research Findings from the UB Coaching Study

We interviewed 20 coaches about how they think about AI in their practice. Every single one raised privacy as a concern. Here's what the full picture looks like — and what it means for platforms claiming to support coaching work.

This research was conducted under IRB approval at the University at Buffalo. Participant identities are protected.

Between September and December 2025, we conducted semi-structured interviews with 20 practicing coaches — ranging from internal organizational coaches to independent practitioners with ICF credentials — about their experience with, and attitudes toward, AI tools in coaching contexts.

The research question was deliberately broad: How do coaches currently think about AI in their professional practice?

What we found

Privacy was universal. Every single participant raised privacy concerns without being prompted. The specific concerns varied — data storage, client consent, vendor access to transcripts — but the underlying theme was consistent: coaches feel a deep professional obligation to protect the confidentiality of what happens in sessions, and AI tools create new categories of risk they're not sure how to evaluate.

This isn't resistance to technology. Many of these coaches use AI extensively in other parts of their work. The concern is specifically about introducing AI into the confidential space of a coaching session.

Autonomy protection was a recurring theme. Coaches across the sample expressed concern about AI systems that "decide" things about coaching sessions — that score, evaluate, or recommend without coach review. The concern wasn't primarily about AI being wrong (though that came up too). It was about professional identity: who is the expert here?

Interest in reflection support was high. Despite privacy and autonomy concerns, most participants expressed genuine interest in tools that could help them see patterns in their own practice that they can't currently see. "I know I have habits in how I question," one participant said. "I just don't know which ones."

What this means for platform design

Three design principles emerged clearly from the data:

1. Consent architecture before feature architecture. Coaches need to be able to explain to clients, in plain language, exactly what AI is doing with their session data. Platforms that can't support this explanation will face adoption barriers regardless of feature quality.

2. The coach accepts; the AI suggests. Every design decision that routes around coach judgment — even in the name of efficiency — will encounter professional resistance. This isn't a bug in coach thinking. It reflects legitimate expertise about what coaching is.

3. Transparency about methodology. Coaches want to know how AI reaches its observations. "Black box" systems that produce outputs without explanation won't be trusted.
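The second and third principles can be made concrete as a review-queue pattern. Here is a minimal sketch in Python (all names and the data model are hypothetical, not Orin's actual implementation): every AI observation carries a rationale and starts in a pending state, and nothing is surfaced until the coach explicitly accepts it.

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    PENDING = "pending"      # AI has suggested; coach has not yet reviewed
    ACCEPTED = "accepted"    # coach explicitly approved
    REJECTED = "rejected"    # coach declined; never surfaced

@dataclass
class Suggestion:
    text: str
    rationale: str           # principle 3: every output explains its reasoning
    status: Status = Status.PENDING

@dataclass
class SessionReview:
    suggestions: list[Suggestion] = field(default_factory=list)

    def propose(self, text: str, rationale: str) -> Suggestion:
        """AI side: add an observation to the coach's review queue."""
        s = Suggestion(text, rationale)
        self.suggestions.append(s)
        return s

    def accept(self, s: Suggestion) -> None:
        s.status = Status.ACCEPTED

    def reject(self, s: Suggestion) -> None:
        s.status = Status.REJECTED

    def visible(self) -> list[Suggestion]:
        """Principle 2: only coach-accepted observations leave the queue."""
        return [s for s in self.suggestions if s.status is Status.ACCEPTED]
```

The design choice this encodes: there is no code path by which a pending or rejected suggestion reaches anyone but the coach, so "the coach accepts; the AI suggests" holds by construction rather than by policy.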

The full research paper is in preparation for submission. These findings are directly shaping Orin's architecture.