What Healthcare Fraud Detection Taught Me About Building AI That Actually Works
I recently attended the Behavioral Health Tech conference and the NAMPI conference, where the conversations centered on something most people outside healthcare don't think about: fraud in Applied Behavior Analysis therapy. Following a series of audits in Indiana, the industry was reckoning with a difficult truth — billions of dollars were being lost to waste, fraud, and abuse in a system designed to help children with autism.
I was there because of work I'd done building AI-powered verification systems for the behavioral health space. Specifically, we developed an Electronic Visit Verification system that could assess the quality of ABA therapy sessions, enhance accountability for providers, and give payers the transparency they needed to trust the system. It was some of the most challenging AI work I've ever done — and the lessons I learned have fundamentally shaped how I build AI systems for every industry, including the professional services firms I work with on Chicago's North Shore.
Here's what healthcare fraud detection taught me about building AI that actually works — and why it matters for your insurance agency, law firm, or financial advisory practice.
Lesson 1: AI Must Solve Real Problems, Not Just Automate Busywork
The first temptation when building any AI system is to automate the most visible task. In healthcare, that meant automating visit logging — essentially turning a paper process into a digital one. But the real problem wasn't the format of the paperwork. The real problem was that nobody could tell the difference between a genuine therapy session and a fraudulent billing entry.
Simple automation would have made the fraud more efficient, not less. What was needed was AI that could understand context — that could look at patterns across sessions, compare provider behaviors, flag anomalies that a human reviewer would catch only after weeks of investigation. The AI had to go beyond simple metrics and actually understand quality. That's a fundamentally different challenge than just digitizing a form.
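As a rough illustration of what "patterns across sessions" means in practice (this is not the actual system, and every field name here is made up), a first-pass anomaly screen might compare each provider's billing behavior against the population:

```python
from statistics import mean, stdev

def flag_anomalous_providers(sessions, threshold=2.0):
    """Flag providers whose average billed session length deviates
    sharply from the population. Field names are illustrative.
    A moderate threshold is used because with small populations the
    outlier itself inflates the standard deviation."""
    # Group billed minutes by provider
    by_provider = {}
    for s in sessions:
        by_provider.setdefault(s["provider_id"], []).append(s["billed_minutes"])

    averages = {p: mean(v) for p, v in by_provider.items()}
    pop_mean = mean(averages.values())
    pop_sd = stdev(averages.values())

    # Surface outliers for human review -- never auto-label anyone as fraudulent
    return [p for p, avg in averages.items()
            if pop_sd and abs(avg - pop_mean) / pop_sd > threshold]
```

A real system layers many signals like this one; the point is that the output is a shortlist for a human investigator, not a verdict.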
What This Means for Your Firm
I see the same pattern when firms approach AI for the first time. An insurance agency says, "We want to automate data entry into Applied Epic." A law firm says, "We want AI to fill out intake forms." A financial advisor says, "We want automated portfolio reports."
These are all legitimate starting points. But if you stop there, you're automating busywork — making existing inefficiencies faster rather than solving the underlying problem. The real opportunity is AI that understands context.
For an insurance agency, that means AI that doesn't just enter data, but flags when a policy application contains inconsistencies that could indicate risk. For a law firm, it means intake automation that doesn't just collect information, but qualifies the lead and identifies which attorney should handle the matter. For a financial advisor, it means report generation that doesn't just compile numbers, but highlights anomalies that warrant a conversation with the client.
At Bace Agency, we always start by asking: what's the actual problem you're trying to solve? Not the task you want automated, but the business outcome you're trying to achieve. The answer to that question determines whether you end up with a system that genuinely transforms your operations or one that just makes your current problems run on autopilot.
Lesson 2: Accountability and Oversight Must Be Built In
In healthcare fraud detection, one of the biggest challenges was building a system that enhanced human judgment rather than bypassing it. Regulators didn't want an AI that automatically flagged providers as fraudulent — they wanted a system that surfaced evidence for human reviewers to evaluate. The AI needed to make people better at their jobs, not remove people from the process.
This sounds obvious, but it goes against the most common pitch in the AI industry: "Our AI handles everything so you don't have to." That pitch works for consumer apps. It's dangerous for professional services.
What This Means for Your Firm
When you automate client-facing workflows, you need accountability at every step. If an AI system responds to a prospect's inquiry, someone on your team needs to be able to see exactly what was said and when. If an AI processes a document and populates fields in your CRM, someone needs to review the output before it becomes the official record. If an AI generates a compliance report, a human needs to sign off.
This isn't a limitation of AI — it's a feature of good AI implementation. The firms that build oversight into their automated workflows are the firms that avoid embarrassing mistakes, maintain regulatory compliance, and build client trust. The firms that deploy AI as a black box are taking on risk they don't fully understand.
Every system we build at Bace Agency includes what I call a "human checkpoint" — a point in the workflow where a team member reviews, approves, or adjusts the AI's output before it goes to the client. This isn't because the AI can't be trusted. It's because your clients need to know that a human is in the loop, and because the combination of AI efficiency and human judgment produces better results than either one alone.
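One minimal way to sketch that human-checkpoint pattern in code (names and structure are illustrative, not any particular product): AI output lands in a review queue, and nothing reaches the client until a person approves or adjusts it.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    client: str
    body: str
    approved: bool = False

class ReviewQueue:
    """Hold AI-generated output for human sign-off before it goes out.
    A minimal sketch of the checkpoint pattern, not a real implementation."""
    def __init__(self):
        self.pending = []   # awaiting human review
        self.sent = []      # approved and released

    def submit(self, draft):
        # The AI side: nothing goes directly to the client
        self.pending.append(draft)

    def approve(self, draft, edits=None):
        # The human side: review, optionally adjust, then release
        if edits:
            draft.body = edits
        draft.approved = True
        self.pending.remove(draft)
        self.sent.append(draft)
```

The design choice worth noticing: the queue has no method that sends without approval, so the checkpoint is enforced by structure rather than by policy.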
Lesson 3: Data Quality Is Everything
The healthcare fraud detection system was only as good as the data feeding into it. If providers were entering inconsistent session notes, if visit times were logged inaccurately, if patient records were incomplete — the AI would produce unreliable results. Garbage in, garbage out. You've heard it before. It's still true. It's the single biggest reason AI projects fail.
We spent as much time on data quality and standardization as we did on the AI models themselves. Cleaning up data entry processes, establishing consistent formatting standards, building validation rules that caught errors at the point of entry — all of this was prerequisite work that had to happen before the AI could deliver value.
What This Means for Your Firm
I've walked into insurance agencies where five different CSRs enter client data in five different ways. One uses abbreviations. One uses full names. One puts the phone number in the notes field. One skips the email entirely. The data is technically in Applied Epic, but it's inconsistent, incomplete, and unreliable.
If you try to build AI automation on top of messy data, you'll get messy results. The renewal reminder that goes to the wrong email because someone entered it incorrectly. The intake qualification that misses a key detail because the form field was left blank. The portfolio report that shows the wrong account value because someone transposed two numbers.
The good news is that AI can actually help clean up your data as part of the automation process. Validation rules catch errors at the point of entry. Standardization workflows normalize inconsistent formats. Deduplication processes identify and merge duplicate records. By the time the core automation kicks in, the data feeding it is clean and reliable.
That's why we always start with a discovery phase where we audit your current data and workflows. We need to understand what we're working with before we can build something reliable on top of it.
Lesson 4: Start With the Workflow, Not the Technology
The teams that struggled most with AI adoption in healthcare were the ones that started with the technology. They'd pick a platform, buy a license, and then try to figure out how to make their workflows fit the tool. Inevitably, they'd end up with something that technically worked but didn't actually solve the problem — because the problem was never about the technology. It was about the workflow.
The teams that succeeded started the other way around. They mapped their existing workflow in detail — every step, every handoff, every decision point, every exception case. Then they identified the specific steps where AI could add value. Then, and only then, they chose the technology that fit.
What This Means for Your Firm
When someone asks me, "What AI tool should my firm use?" I always give the same answer: it depends on what your workflow looks like. A law firm with a high-volume personal injury practice has completely different automation needs than a boutique estate planning firm. An insurance agency with 5,000 personal lines policies needs different systems than one focused on commercial accounts.
The workflow comes first. The technology serves the workflow. Not the other way around.
This is the core philosophy behind how Bace Agency operates. When we do a free AI audit, we don't come in with a pre-built solution looking for problems to attach it to. We sit down — usually in person, because I'm right here on the North Shore — and map how your firm actually works. Not the process diagram on the wall. The reality of how your team handles a new client inquiry on a Tuesday afternoon when two people are out and the phones are ringing.
From that map, we identify the specific bottlenecks, handoff failures, and repetitive tasks that are costing you time and money. Then we design automation that fits your real workflow, integrates with your existing tools, and solves the problems that matter most.
How These Principles Apply Across Industries
These four lessons aren't healthcare-specific. They apply to any firm trying to use AI well. I've seen every one of them play out — in different forms — with the insurance agencies, law firms, and financial advisors I work with on the North Shore.
For insurance agencies on the North Shore, these principles mean building systems that don't just speed up data entry but actually improve policy accuracy, catch coverage gaps, and ensure every client gets the right follow-up at the right time.
For law firms, they mean intake and case management automation that maintains the chain of custody on documents, ensures compliance with court deadlines, and gives attorneys more time for strategy and client relationships.
For financial advisors, they mean portfolio monitoring and client communication systems that maintain regulatory compliance, generate audit-ready records, and strengthen the trust that's at the foundation of every advisory relationship.
Anyone can access the same AI models and platforms. The difference is whether you understand the workflow you're automating.
The Bigger Picture: Every Industry Has Its Version of Fraud
In healthcare, we were fighting literal fraud — providers billing for services they didn't deliver. But every industry has its version of waste, inefficiency, and things falling through the cracks. In insurance, it's missed renewals, dropped leads, and coverage errors. In law, it's missed deadlines, incomplete intake, and client communication gaps. In finance, it's compliance oversights, stale CRM data, and reports that don't get delivered on time. I explore how these patterns show up across industries in a separate post on what healthcare fraud means for your industry.
None of this is intentional. It's the inevitable result of human teams handling too many manual tasks with too little support. The "fraud" in your firm isn't someone stealing — it's time, revenue, and client trust being lost to inefficient processes.
AI doesn't solve all of this overnight. But applied with the right principles — solving real problems, maintaining accountability, ensuring data quality, following the workflow — it can systematically eliminate the waste that's been hiding in your operations for years.
What You Can Do This Week
You don't need to build a fraud detection system to apply these lessons. Here are five things you can do this week to start using AI principles in your firm:
- Standardize your data entry. Pick one field in your CRM — phone numbers, email addresses, policy numbers — and establish a single format. Train your team on it. Clean data is the foundation of everything else.
- Map one workflow end-to-end. Take the process for handling a new client inquiry. Write down every step, every handoff, every system it touches. You'll find at least two steps that could be automated.
- Add validation to your intake form. Required fields, format checks, dropdown menus instead of free text. Every error you catch at entry saves 10 minutes of cleanup later.
- Set up one automated alert. A renewal coming due in 30 days. A document sitting unprocessed for 48 hours. A lead that hasn't been contacted. One alert can prevent one dropped ball this week.
- Ask your team what they retype most often. The answer will point you to your highest-ROI automation opportunity. If three people are entering the same data into different systems, that's where AI should start.
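The automated-alert item above can be sketched as a simple daily check, assuming a list of policy records with `renewal_date` and `followup_logged` fields (both hypothetical names):

```python
from datetime import date, timedelta

def renewals_due(policies, today, window_days=30):
    """Return policies renewing within the window that have no
    follow-up logged yet. Field names are illustrative; a real
    version would pull these records from your AMS or CRM."""
    cutoff = today + timedelta(days=window_days)
    return [p for p in policies
            if today <= p["renewal_date"] <= cutoff
            and not p.get("followup_logged")]
```

Run something like this on a schedule and route the results to whoever owns follow-up; that is the entire "one automated alert" this week.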
Getting Started
If any of this resonates with how your firm operates, the next step is simple. Book a free AI audit with Bace Agency. It's a 30-minute conversation — in person here on the North Shore or over video — where we map your most painful workflow and identify exactly where these principles apply to your firm.
No pitch deck. No pressure. No hype. Just building.
Michael Pavlovskyi is the founder of Bace Agency, an AI workflow automation consultancy based in Lake Forest, Illinois. He works with insurance agencies, law firms, and financial advisors on Chicago's North Shore to eliminate manual work and modernize operations. He also hosts the RedNote Podcast.
Want to see how AI fits in your firm?
Book a free 30-minute AI audit. No obligation, no pitch deck.
Book a Free AI Audit →