AI surfaces problems managers miss
Employees won't tell their manager, but they'll tell AI (3-min read time)
What you'll learn: How to use AI to surface honest team feedback without adding more meetings.
Most employees won't tell their manager what's really going wrong. Shweta Kamble and Hari Iyer, founders of HaloVision, discovered something surprising: people open up to AI in ways they won't with their managers.
In this episode, Shweta and Hari walk through exactly how to:
Use AI as a confidential third party to surface hidden team problems
Build evaluation systems that catch errors before they compound
Turn raw feedback into actionable recommendations for leadership
This is one example of how teams are using AI for internal operations. Watch to get ideas you can adapt to your own workflows.
Tools mentioned in this episode
HaloVision — An AI that conducts confidential one-on-ones with employees and surfaces insights to leaders
GPT-4 / GPT-5 — OpenAI's language models that power AI conversations and reasoning
Gemini — Google's AI model, now outperforming others for certain analytical tasks
Evals (evaluation frameworks) — Systems that test whether AI outputs are accurate and consistent
LLM as a judge — Using one AI model to check another AI's work for errors
Digital twin — A virtual representation of a person or organization that AI can reference and update
Episode TL;DR
Employees share more with a judgment-free AI than with their manager. AI tools can get rich, unfiltered data because they are a third party
Small AI errors compound fast across hundreds of agents. Build "chains of evals" that trace every conclusion back to its source evidence
Put dollar values on problems. Executives act faster when impact is quantified
Tutorial 1: Get honest team feedback without more meetings
Why this works:
A third-party AI removes judgment, and people say what they actually think
AI can ask follow-up questions because it knows the context across the whole organization
How to build a version of this yourself:
Set up an AI chat — for example, a Claude Project or a custom GPT in ChatGPT — with a prompt like: "You're a confidential feedback assistant for [Company]. Ask employees what's working, what's frustrating, and what leadership should know." (A minimal API sketch follows these steps.)
Add company context to the prompt—your values, current priorities, recent changes—so the AI asks relevant follow-ups
Position it as confidential and separate from HR or management. For true anonymity, consider collecting responses through an anonymous form and then running them through the AI for analysis, rather than having employees chat directly with a logged-in tool.
Tell employees upfront: "We'll only escalate the highest-impact items"
Close the loop: share what action was taken so employees see their input matters
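If you'd rather prototype step 1 in code than in a chat UI, here is a minimal sketch using the OpenAI Python SDK. The system prompt, company details, and model name are placeholder assumptions, not the setup from the episode; the same idea works with Anthropic's API or any other provider.

```python
# Minimal sketch of a confidential feedback assistant.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the prompt, company context, and model name are placeholders.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder; use whichever model you have access to

SYSTEM_PROMPT = """You are a confidential feedback assistant for Acme Co.
Company context: we just reorganized into product pods; the current priority
is reducing churn. Ask the employee what's working, what's frustrating, and
what leadership should know. Ask one relevant follow-up question at a time.
Never ask for the names of colleagues."""

def run_feedback_session(employee_messages: list[str]) -> list[str]:
    """Send an employee's messages through the assistant and return its replies."""
    history = [{"role": "system", "content": SYSTEM_PROMPT}]
    replies = []
    for msg in employee_messages:
        history.append({"role": "user", "content": msg})
        response = client.chat.completions.create(model=MODEL, messages=history)
        reply = response.choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        replies.append(reply)
    return replies

# For stronger anonymity (step 3 above), collect answers in an anonymous form
# first, then analyze the batch instead of chatting with logged-in accounts.
def summarize_anonymous_batch(responses: list[str]) -> str:
    """Group anonymous responses into themes and flag the highest-impact items."""
    joined = "\n---\n".join(responses)
    result = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": "Group these anonymous employee responses "
             "into themes and flag the highest-impact items for leadership."},
            {"role": "user", "content": joined},
        ],
    )
    return result.choices[0].message.content
```

The plumbing is the easy part; the prompt, the anonymity guarantees, and how you report themes back to the team are what make it work.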
Pro tip: Start with leadership buy-in. If the CEO isn't committed to acting on feedback, employees won't trust the process, and the tool won't surface anything useful.
Tutorial 2: Build AI systems that don't play telephone
When you chain multiple AI steps together — like summarizing a transcript, then synthesizing themes, then generating recommendations — each step can drift further from what was actually said. Here's how to prevent that.
You'll need: An AI workflow with multiple steps (e.g., summarization → synthesis → analysis). Some dev experience helps, or pair with a developer.
The workflow:
Map out your AI chain. Identify every step where one AI output feeds into the next. These handoff points are where meaning tends to drift.
Carry the evidence forward at each step. Don't just pass summaries along — attach the supporting quotes or data points from the original source so the next step is always working from real evidence.
Add a second AI as a fact-checker. Use an "LLM as a judge" pattern: have a separate AI compare the summary against the source and flag anything that doesn't match (see the first sketch after these steps).
For high-stakes outputs, build in debate. Have multiple AI agents review the same source independently and compare interpretations. If they disagree, surface the discrepancy for a human to resolve (see the second sketch below).
Account for stale data. Information from six months ago may no longer be relevant. Build in state management so your system knows when source material is outdated and flags it.
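A rough sketch of steps 2 and 3, in Python with the OpenAI SDK: each claim carries its supporting quote forward, and a second model call acts as the judge. The Claim data shape, the prompts, and the model name are illustrative assumptions, not the system described in the episode.

```python
# Sketch of "carry the evidence forward" plus an LLM-as-judge check.
# Assumes the OpenAI Python SDK; data shapes, prompts, and model name are illustrative.
from dataclasses import dataclass, field
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder

@dataclass
class Claim:
    """A conclusion that always travels with the quotes that support it."""
    statement: str
    supporting_quotes: list[str] = field(default_factory=list)

def summarize_with_evidence(transcript: str) -> list[Claim]:
    """First step of the chain: summarize, but keep verbatim quotes attached."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": "Summarize the key problems raised in this "
             "transcript. For each problem, output one line in the form: "
             "PROBLEM: <statement> | QUOTE: <verbatim supporting quote>"},
            {"role": "user", "content": transcript},
        ],
    )
    claims = []
    for line in response.choices[0].message.content.splitlines():
        if line.startswith("PROBLEM:") and "| QUOTE:" in line:
            statement, quote = line.removeprefix("PROBLEM:").split("| QUOTE:", 1)
            claims.append(Claim(statement.strip(), [quote.strip()]))
    return claims

def judge_claim(claim: Claim, source: str) -> bool:
    """LLM as a judge: does the source actually support this claim?"""
    verdict = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": "Answer only SUPPORTED or UNSUPPORTED."},
            {"role": "user", "content": f"Source:\n{source}\n\nClaim: {claim.statement}\n"
             f"Cited quotes: {claim.supporting_quotes}"},
        ],
    )
    return "UNSUPPORTED" not in verdict.choices[0].message.content.upper()

def checked_summary(transcript: str) -> list[Claim]:
    """Only pass claims the judge confirms on to the next step in the chain."""
    return [c for c in summarize_with_evidence(transcript) if judge_claim(c, transcript)]
```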
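For the debate pattern in step 4, one possible sketch: independent reviewer calls over the same source, then a comparison step that escalates disagreement to a human. Again, the personas, prompts, and model name are assumptions for illustration.

```python
# Sketch of the "debate" pattern: independent reviewers, escalate disagreement.
# Personas, prompts, and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder

REVIEWER_PERSONAS = [
    "You are an engineering lead. What is the single biggest problem in this feedback?",
    "You are a people-ops coach. What is the single biggest problem in this feedback?",
]

def independent_reviews(source: str) -> list[str]:
    """Have each reviewer read the same source with no knowledge of the others."""
    return [
        client.chat.completions.create(
            model=MODEL,
            messages=[{"role": "system", "content": persona},
                      {"role": "user", "content": source}],
        ).choices[0].message.content
        for persona in REVIEWER_PERSONAS
    ]

def reviews_agree(reviews: list[str]) -> bool:
    """Ask a separate call whether the reviews reach the same conclusion."""
    verdict = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": "Answer only AGREE or DISAGREE."},
            {"role": "user", "content": "\n\n---\n\n".join(reviews)},
        ],
    )
    return "DISAGREE" not in verdict.choices[0].message.content.upper()

def review_or_escalate(source: str) -> str:
    reviews = independent_reviews(source)
    if reviews_agree(reviews):
        return reviews[0]
    # Surface the discrepancy for a human instead of silently picking one view.
    return "NEEDS HUMAN REVIEW:\n" + "\n\n---\n\n".join(reviews)
```

Escalating rather than averaging keeps a human in the loop exactly where the models diverge, which also lines up with the pro tip below about multiple legitimate interpretations.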
Pro tip: There isn't always one "right" answer. A coach and an engineer might interpret the same feedback differently — and both could be valid. Build your evaluations to account for multiple legitimate interpretations depending on who's using the output.
When people feel heard, they share what matters. These are just a few ideas for how AI might help you listen.
👉 Subscribe to This New Way: YouTube Channel
📩 Was this email forwarded to you? Subscribe here
📱 Find us on Instagram and TikTok: @ThisNewWay
Until next time,
Aydin Mirzaee, CEO at Fellow.ai and host of the This New Way podcast