How a Lake Forest Middle-Market PE Firm Can Build the AI Deal Desk That KKR Quietly Built
Key Takeaways
- ✓ KKR Capstone runs roughly 100 full-time operating professionals across more than 225 portfolio companies; Apollo's APPS team spans 190+ portcos; TPG stood up a firmwide AI task force in 2025. The architecture under the head count is a five-tool stack a Lake Forest middle-market firm can adopt directly.
- ✓ The AI deal desk for a $250M-$2B Lake Forest fund pairs Sourcescrub for signal scraping, Affinity for relationship intelligence, Anthropic Claude for target enrichment and IC pre-reads, Intapp DealCloud for diligence pipeline of record, and Cobalt for portfolio monitoring.
- ✓ Total spend lands between $143,000 and $408,000 per year for a five-to-twenty-person team, roughly half to one full middle-market associate's loaded compensation. Implementation timeline is thirty days. Head count change is zero.
- ✓ The desk's most important rule is the one most likely to get skipped: every claim in a partner-facing document, before it goes to IC, must be verified by a human. The desk produces inputs to decisions, not the decisions themselves.
Lake Forest, Ill., April 30, 2026. At a small private-equity firm a few blocks from Conway Park, a managing partner reads a Bloomberg push notification on a Tuesday morning. KKR has just put what is "expected to raise several billion dollars" into Compass Datacenters, the latest tranche in what the firm has publicly described as a roughly $50 billion AI-infrastructure partnership with Energy Capital Partners. The partner closes the laptop. The fund he runs is one one-thousandth of that. The deal team is five people. Sourcing is whoever has lunch on Wednesday.
The question he asks his junior partner over coffee is the one every Lake Forest sponsor is asking right now. If KKR Capstone runs roughly 100 full-time operating professionals across more than 225 portfolio companies, and Apollo's Portfolio Performance Solutions team spans 190+ portcos, and Anthropic just put $200 million into a $1 billion joint venture with Blackstone, Hill House, Permira, and General Atlantic to embed Claude into PE workflows, what does a $400 million Lake Forest fund actually do? Buy a tool? Hire a head of data? Wait?
None of those. The answer that is emerging, which a handful of middle-market firms have already converged on, is to assemble what is best described as an AI deal desk. Not a department. Not a head count. A stack of five tools, two governance docs, and a thirty-day implementation plan that compresses what KKR Capstone built across two decades into something a five-person team can run for under $35,000 a month.
This article is the playbook for that desk. Every claim is sourced. Every recommendation is hypothetical, in the sense that the right configuration depends on a fund's strategy, its internal data hygiene, and its appetite for change. But the architecture is concrete, and the cost is knowable.
For a $250 million to $2 billion AUM Lake Forest fund running a five-to-twenty person deal team, the AI deal desk pattern compresses the front end of KKR Capstone's playbook into five integrated tools: Sourcescrub for signal scraping, Affinity for relationship intelligence, Anthropic Claude for target enrichment and IC pre-reads, DealCloud for diligence and pipeline of record, and Cobalt for portfolio monitoring. Total spend lands between $12,000 and $34,000 per month. The first usable version is live in thirty days.
~100
Full-time operating professionals at KKR Capstone serving 225+ portfolio companies, per Umbrex and KKR.com
$410.7B
2025 US PE middle-market deal value across roughly 4,018 transactions, up 16% YoY, per PitchBook
$200M
Anthropic's reported commitment into a roughly $1B joint venture with Blackstone, Hill House, Permira, and General Atlantic to embed Claude into PE workflows
What KKR Actually Built
Start with what is public. KKR Capstone is the firm's operating-partner unit, established in the early 2000s and grown into roughly 100 full-time operating professionals supporting all of KKR's investment strategies, per Umbrex's profile and the firm's own description. Capstone supports more than 225 portfolio companies. Inside Capstone is a Digital Value Creation team focused on what KKR publicly describes as digital transformation, AI strategy and execution, data strategy and execution, cybersecurity, and tech-stack management.
In 2025 KKR went a step further. The firm appointed Ruchir Swarup as Partner and Chief Information Officer and stood up an infrastructure, data, and AI platform organization inside its technology team, per Klover.ai's analysis of KKR's AI strategy. The firm hired Adam Selipsky, the former AWS chief, to anchor the real-assets and AI-infrastructure expansion, per Data Centre Magazine. Capital deployment caught up with the org chart. KKR has committed roughly $42 billion of equity into digital infrastructure across 23 investments, plus another $20 billion in power and renewables, per TelcoTitans. The Energy Capital Partners partnership, announced in October 2025, brings the bilateral total to roughly $50 billion, per the American Public Power Association.
None of that is replicable at a $400 million fund. What is replicable is the architecture beneath the head count. KKR Capstone does not invent its tools. It buys them, integrates them, and gives every operating partner a consistent stack. The sourcing engine sits on Sourcescrub-class data and a CRM-of-record. The diligence work runs on a deal-management platform of record. The portfolio-monitoring layer aggregates KPIs across hundreds of investments. The newest layer is a foundation-model assistant that drafts, summarizes, and pattern-matches. That layer is now Anthropic's Claude, sitting underneath much of the megafund stack and now explicitly priced for PE.
Apollo's parallel buildout corroborates the pattern. In mid-2021, Apollo hired Vikram Mahidhar as an operating partner; he now leads Apollo's Data, Digital, and AI team, per MIT Sloan Management Review. Apollo runs the Apollo Portfolio Performance Solutions group across 190+ portfolio companies. In January 2026 the firm backed a $5.4 billion Valor and xAI data-center compute transaction with $3.5 billion of capital, per Apollo's January 7 press release.
The Apollo APPS team, like Capstone, is structured around a small number of senior operating partners and a much larger bench of analysts and program managers. The MIT Sloan account makes a point that gets less coverage than the headline AI investments. Apollo's view is that AI capability is not built at the fund level, it is built inside each portfolio company, with the fund acting as a forcing function and a knowledge clearing house. That is a different operating model than KKR Capstone's centralized digital value creation team, but the underlying idea is the same. The fund is not the user of AI. The portfolio company is. The fund's job is to make sure the right tools are in the right hands and that lessons learned at one portfolio company travel to the next.
TPG fits the same pattern from a third angle. In 2025 the firm established a firmwide AI task force at the executive level, per Bloomberg coverage summarized at PYMNTS. Across the three firms, the org chart varies, the spend varies, and the language varies. The choice that does not vary is that all three have decided AI capability is a partner-level priority, governed centrally, deployed locally.
Two megafunds and a third major share one architectural pattern. The pattern is the part a Lake Forest sponsor can copy.
"AI is fundamentally rewiring the internet's infrastructure."
Waldemar Szlezak, Partner and Global Head of Digital Infrastructure, KKR, quoted in Data Centre Magazine
The Middle-Market Translation
PitchBook's 2025 Annual US PE Middle Market Report frames the opportunity. The middle market did roughly $410.7 billion of deal value across about 4,018 transactions in 2025, up 16% year over year. Exits hit roughly 1,022 transactions totaling $140.4 billion, surpassing pre-pandemic averages. But fundraising told a different story. Capital raised was roughly $94.8 billion, down more than 40% year over year, the weakest since pre-pandemic. The rolling one-year IRR for middle-market funds was roughly 7.6%. PitchBook noted no vintage newer than 2016 has yet achieved DPI above one.
Translation for a Lake Forest sponsor: deal volume is up. Exit liquidity is recovering. But the LP base is more disciplined than it was three years ago. Every GP who raises a 2026 fund will be asked, in some form, what they have done about AI. The sophisticated answer is no longer "we are watching the space." The sophisticated answer is "here is the desk, here is what it costs, here is the work it has shifted off the senior team."
Team-size context matters. Per Mergers and Inquisitions' overview of middle-market PE, a lower-middle-market deal at a $10 million target typically pulls in a partner to sign off plus a mid-level professional to run diligence. At a megafund the deal team for a single transaction is still only five or six people. The implication: most middle-market firms cannot fund a dedicated data-science team and do not need to. They need to put one tier of intelligence under every member of a deal team that already exists.
Lake Forest itself is not short on candidates for that work. Public directories surface roughly 8 to 12 PE/VC firms with Lake Forest 60045 addresses. RoundTable Healthcare Partners runs $4.25 billion of committed capital across six equity funds and three sub-debt funds, headquartered at 272 East Deerpath Road. TRIAD Capital Management runs federal-government services and specialty manufacturing strategies. Heizer Capital invests in energy, insurance, and healthcare services. EXI Investment Partners, WinForest Partners, Parallel49 Equity, Growth Equity Capital Partners, and Clashmore Ventures fill out the local roster. The broader North Shore corridor adds substantially more in Bannockburn, Deerfield, Northbrook, and Highland Park.
None of those firms publicly runs anything like a Capstone unit. Most do not need to. What they need is the desk.
The Five-Component AI Deal Desk
The architecture, by component, with public pricing where available and the workflow each tool replaces or augments at a Lake Forest-sized firm.
1. Signal scraping: Sourcescrub
Sourcescrub is the front of the funnel. Per the company's own description, the platform maintains deep profiles on roughly 15 million companies and runs human-in-the-loop machine learning to connect data from nearly 200,000 sources. The product's primary use at a middle-market firm is conference, trade-show, and award-list mining. Every speaker list at every middle-market conference, every regional Inc. 5000 list, every association directory becomes a structured query.
At the Lake Forest scale, the workflow that Sourcescrub replaces is the most expensive one a junior associate runs. It is the manual "build me a list of every commercial-roofing rollup target in the upper Midwest with $10 to $40 million of revenue, family-owned, and at a transition point." Pre-Sourcescrub that takes a junior two weeks of Google, LinkedIn, and cold calls. Post-Sourcescrub, with the right query template, it takes an afternoon. The list is also auditable and rerunnable.
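What "auditable and rerunnable" means in practice is that the screen is code, not a browser session. A minimal sketch, assuming a hypothetical exported target list with made-up column names (`revenue_m`, `state`, `ownership`) rather than Sourcescrub's actual schema:

```python
# Hypothetical sketch of a rerunnable target screen over an exported list.
# Field names and example companies are illustrative, not Sourcescrub's API.

MIDWEST = {"IL", "WI", "MN", "IA", "IN", "MI", "OH"}

def screen(targets, sector, rev_lo, rev_hi):
    """Return targets matching the thesis: sector, revenue band in $M,
    upper-Midwest geography, family ownership."""
    return [
        t for t in targets
        if t["sector"] == sector
        and rev_lo <= t["revenue_m"] <= rev_hi
        and t["state"] in MIDWEST
        and t["ownership"] == "family"
    ]

targets = [
    {"name": "Acme Roofing", "sector": "commercial roofing",
     "revenue_m": 22, "state": "WI", "ownership": "family"},
    {"name": "BigCo Roofing", "sector": "commercial roofing",
     "revenue_m": 95, "state": "WI", "ownership": "sponsor-backed"},
    {"name": "Sunbelt Roofs", "sector": "commercial roofing",
     "revenue_m": 18, "state": "TX", "ownership": "family"},
]

shortlist = screen(targets, "commercial roofing", 10, 40)
print([t["name"] for t in shortlist])  # → ['Acme Roofing']
```

The point is the shape, not the ten lines: the thesis becomes a saved filter the desk can rerun every quarter and diff against last quarter's output.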
2. Relationship intelligence: Affinity
Affinity sits next. Per the firm's private-equity industry page, Affinity ingests every email and calendar event across the firm's accounts and maps them into a relationship graph that the deal team can search by company, person, or interaction recency. Affinity publishes pricing that starts at roughly $2,000 per user per year. For a five-person Lake Forest deal team, that is $10,000 a year. Most firms go live in under sixty days, per Affinity's own claim.
The use case at a middle-market shop is the warm-introduction graph. When a target appears on a Sourcescrub query result, the first action is to ask whether anyone at the firm or its broader network already knows the seller, the seller's banker, or the seller's prior advisor. Pre-Affinity, that question is asked in the firm's group chat and answered by whoever happens to be at the office. Post-Affinity, it is a query that returns a structured answer in seconds. The same graph then drives outreach prioritization. Cold outreach without a relationship vector loses almost every time at the middle market. The desk's job is to never run cold outreach when a warm path exists.
3. Target enrichment and IC pre-reads: Anthropic Claude
Claude is the foundation-model layer. Anthropic in 2026 began offering a Claude Private Equity plugin, with the explicit positioning that PE workflows are a first-class use case. The reported $200 million Anthropic commitment into a roughly $1 billion joint venture with Blackstone, Hill House, Permira, and General Atlantic is the strongest signal a foundation-model vendor has ever sent that PE is the distribution channel.
At the deal desk, Claude does three things. First, it enriches the Sourcescrub output with publicly available context: filings, news, Crunchbase, LinkedIn, regulatory data. The output is a one-page primer on every target before a partner spends fifteen minutes on it. Second, it drafts the IC pre-read once a target has been qualified. The pre-read is the document that, in most middle-market firms, an associate spends an afternoon writing and a partner spends ten minutes reading. Claude turns that into a thirty-minute first draft and a ninety-minute partner review of the actual judgment calls. Third, it runs comparable-set analyses on demand. "Show me every roofing rollup that has traded in the last seven years above $10 million of EBITDA, and what the multiple was." That answer used to require a Capital IQ subscription and a junior. It now requires a sentence.
Pricing for Claude at the deal-desk scale lands in the low thousands per month for the foundation-model API plus the workspace tooling. That is less than a single seat of a deal-flow database product and an order of magnitude less than an associate.
4. Diligence pipeline of record: Intapp DealCloud
DealCloud, owned by Intapp, is the diligence and pipeline-of-record system. Per Intapp's product page, DealCloud is positioned as a single-source platform for deal management, relationship management, and firm management, covering fundraising/IR, sourcing, diligence/IC prep, and exit execution. The platform was named 2024 Deal Origination Solution of the Year at the PE Wire US Credit Awards. Pricing is custom.
The use case at a Lake Forest fund is the system the partner opens on Monday morning to see what is moving. DealCloud's value is consistency. Every deal lives at one URL with one stage, one set of documents, and one audit trail. The platform replaces the tribal knowledge of a five-person team that breaks the moment one partner is on vacation.
DealCloud is the layer that integrates Sourcescrub, Affinity, and Claude for the deal team. Sourcescrub feeds new candidates in. Affinity tags every interaction. Claude generates the pre-reads that attach to the diligence record. The partner's view is a single dashboard with the right level of abstraction. The associate's view is the work, organized.
5. Portfolio monitoring: Cobalt (a FactSet Company)
Cobalt closes the loop. Per cobalt.pe, the platform is deployed at more than 100 PE firms for portfolio monitoring at the GP level and LP-side analytics, with a partnership with Hamilton Lane on the LP product. Pricing is custom.
At a middle-market firm with eight to fifteen portfolio companies, the Cobalt workflow is the monthly KPI roll-up that, pre-platform, an analyst manually pulls from each company's reporting package and pastes into a master deck. With Cobalt, the roll-up is a query. The partner sees variance against budget, against last quarter, and against the firm's own internal benchmarks. Anomalies surface first, instead of in the third week of the next quarter when an LP asks.
Cobalt is the only one of the five tools that some Lake Forest funds will defer for twelve months. Funds with eight or fewer portcos can run monitoring on a clean Excel model and a $500-a-month BI tool for another year. Funds with twelve or more should buy it now.
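For funds in that deferral window, the "roll-up is a query" idea is worth seeing concretely. A sketch of the monthly variance flag, with hypothetical portco names, revenue figures, and a 10% threshold — illustrative arithmetic, not Cobalt's actual interface:

```python
# Illustrative sketch of a monthly KPI variance flag. Portco names,
# revenue figures ($M), and the 10% threshold are assumptions.

def variance_flags(actuals, budget, threshold=0.10):
    """Return portcos whose actual revenue misses budget by more than
    the threshold, with the signed variance."""
    flags = {}
    for co, actual in actuals.items():
        var = (actual - budget[co]) / budget[co]
        if abs(var) > threshold:
            flags[co] = round(var, 3)
    return flags

actuals = {"PortcoA": 4.1, "PortcoB": 2.2, "PortcoC": 6.8}
budget  = {"PortcoA": 4.0, "PortcoB": 2.8, "PortcoC": 6.5}

print(variance_flags(actuals, budget))  # → {'PortcoB': -0.214}
```

Whether this runs in Cobalt, a BI tool, or a spreadsheet, the governance value is the same: the anomaly surfaces in week one, not when an LP asks.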
What Claude Does and What the Team Still Does
The single most expensive mistake a middle-market firm can make right now is to treat Claude as the deal team. Claude is a tier of intelligence under every member of the team, not a substitute for any of them. The question is what work shifts and what work does not.
Work that shifts: target screening at the front of the funnel, where the input is a structured query and the output is a ranked list. Public-information enrichment on a known target, where the input is a company name and the output is a one-page primer. First drafts of IC pre-reads, where the input is the primer plus the diligence package and the output is a working document the partner edits. Outreach personalization, where the input is the target plus the relationship graph and the output is an email a partner can send. Comparable-set queries, where the input is the deal thesis and the output is the historical trade record. Post-meeting synthesis, where the input is the call notes and the output is a structured update to DealCloud.
Work that does not shift: the relationship. The mid-market sale runs on a partner the seller trusts, not on the email a partner sent. The diligence judgment. Claude can list the questions to ask, but the question of whether the answers add up to a thesis is still the partner's. The price negotiation. The structuring decision. The chair-on-fire moment when something goes wrong at a portfolio company and someone has to be on the phone. The IC vote.
An example: the IC pre-read prompt
The single highest-value prompt template at a middle-market deal desk is the IC pre-read. It is also the template that, when written carelessly, produces the worst kind of output: a confident-sounding three-page document that is almost right and impossible to verify. The template that works has six parts and runs to roughly 800 words of instructions before it ever sees a target company.
Part one is the firm's investment criteria, written explicitly. Sector, geography, EBITDA range, growth profile, deal-type preference, ownership structure, hold period, and named adjacent industries the firm will and will not consider. Part two is the firm's stated thesis on the relevant sub-sector, in two paragraphs. Part three is the structure of the pre-read, including the sections, the word budget per section, and the citation format. Part four is the explicit instruction that every claim in the pre-read must be either directly attributed to a public source with a URL or labeled as inference. Part five is a list of red flags that should be surfaced by name, including customer-concentration thresholds, working-capital irregularities, founder-dependency markers, and historical accounting-treatment changes. Part six is the partner's name, role, and the question the pre-read needs to answer for that partner specifically.
The input is then a single target name plus, optionally, the diligence package the firm has already collected. The output is a five-section document: Executive Summary, Why Now, Diligence Risks Already Identified, Open Questions for the Partner, and Comparable Transactions. Each section runs to its budgeted length. Each claim has a citation or a label. The partner reads the pre-read for fifteen minutes, marks the claims that need verification, and either kicks the deal forward into formal diligence or kills it.
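The six parts above can be sketched as a plain function that assembles the prompt and appends the target. The section names match the five-section output described; everything else — the wording, the `[INFERENCE]` label, the example inputs — is an assumption about how one firm might write it, not Anthropic's or any vendor's template:

```python
# A sketch of the six-part IC pre-read template as a prompt builder.
# Wording, labels, and example inputs are hypothetical.

PRE_READ_SECTIONS = [
    "Executive Summary", "Why Now", "Diligence Risks Already Identified",
    "Open Questions for the Partner", "Comparable Transactions",
]

def build_pre_read_prompt(criteria, thesis, red_flags, partner, question, target):
    parts = [
        f"INVESTMENT CRITERIA:\n{criteria}",                       # part one
        f"SUB-SECTOR THESIS:\n{thesis}",                           # part two
        "STRUCTURE: produce exactly these sections, in order: "
        + "; ".join(PRE_READ_SECTIONS),                            # part three
        "SOURCING RULE: every claim must carry a public-source URL "
        "or be labeled [INFERENCE].",                              # part four
        "RED FLAGS to surface by name: " + "; ".join(red_flags),   # part five
        f"AUDIENCE: {partner}. Answer this question: {question}",  # part six
        f"TARGET: {target}",                                       # the input
    ]
    return "\n\n".join(parts)

prompt = build_pre_read_prompt(
    criteria="$10-40M revenue, upper Midwest, family-owned",
    thesis="Fragmented commercial roofing; consolidation upside.",
    red_flags=["customer concentration > 20%", "founder dependency"],
    partner="Managing Partner",
    question="Is this a platform or a bolt-on?",
    target="Acme Roofing",
)
```

Keeping the builder in the firm's own version control, rather than pasted ad hoc into a chat window, is what makes the template refinable across the first ten deals.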
The template is the asset, not the model. A firm that writes its template once and refines it across the first ten deals has compressed several thousand dollars of associate time into a sub-$100 marginal cost per pre-read. The template is also portable across model versions. When the next Claude or competitor model ships, the firm reruns its existing prompt and gets a better output, with no rewrite cost. This is the underrated property of foundation models. Investments in prompt design compound across model upgrades.
The same logic produces the firm's other ten templates. Outreach drafts that match the seller's prior career and known preferences. Comparable-set queries with the firm's own filters. Post-meeting synthesis that fills DealCloud automatically. Quarterly investor updates with the firm's voice and the firm's preferred numbers. None of these is hard. All of them must be written down, version-controlled, and owned by the desk owner. A firm that treats its prompt library the way it treats its model templates and its DDQ responses will outrun a firm that does not.
A simple test for any proposed AI workflow at the desk: if the output is a draft a human edits, it is in scope. If the output is a final decision, it is out of scope. The desk should never be configured to send any communication to a seller, lender, banker, advisor, LP, or portfolio CEO without a human in the loop. That is the line.
The test is also the answer to the most common LP question of 2026. "Is your firm using AI to make investment decisions?" The honest answer at the middle market is no. The firm is using AI to compress the work that produces the decisions. The decisions are still the people.
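The draft-versus-decision line can even be enforced mechanically. A toy sketch, with hypothetical class and function names, of the rule that nothing leaves the desk without a named human approval:

```python
# Toy sketch of the human-in-the-loop guard. Names are illustrative;
# the point is that "send" is impossible without a recorded approver.

class Draft:
    def __init__(self, body):
        self.body = body
        self.approved_by = None  # no human has signed off yet

    def approve(self, reviewer):
        self.approved_by = reviewer
        return self

def send_to_counterparty(draft):
    if draft.approved_by is None:
        raise PermissionError("human review required before anything leaves the desk")
    return f"sent (approved by {draft.approved_by})"

note = Draft("Dear seller, ...")
# send_to_counterparty(note)  # raises PermissionError: not yet reviewed
print(send_to_counterparty(note.approve("M. Partner")))  # → sent (approved by M. Partner)
```

In a real deployment the same rule lives in workflow configuration rather than code, but the invariant is identical: the model produces drafts; only a person produces sends.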
The 30-Day Implementation Plan
Three step cards. Each card is roughly a week of partner time and a week of associate time.
Week 1 to 2: Adopt the relationship and pipeline layer first
Affinity and DealCloud go in before anything else. Affinity starts delivering value as soon as email and calendar are wired in, even though full rollout can run to sixty days; the firm gets a relationship graph with no behavior change required from the partners. DealCloud's first cut is the existing pipeline tracker re-platformed. Both tools also produce the structured data the rest of the desk consumes.
Decision the partner makes this week: who at the firm is the desk owner. This is not a hire. It is a name on the deal team who is accountable for the desk running. At a five-person firm it is usually the second-most-senior associate or the incoming senior associate.
Week 2 to 3: Stand up Sourcescrub and Claude in parallel
Sourcescrub has a configuration phase. The desk owner spends a week building the firm's first three persistent queries, one per active thesis, and the associated saved searches for trade shows and award lists. Claude's setup is faster but produces less without the right prompt library; the desk owner spends parallel time building the firm's first ten prompt templates: the target primer, the IC pre-read, the comparable set, the partner-meeting summary, the outreach draft, and five thesis-specific prompts.
Decision the partner makes this week: which deals are in scope for AI workflows and which are not. New platform deals: yes. Mid-stream deals where the partners are already deep in diligence: not yet. Bolt-ons under a portfolio company's CEO: at the partner's discretion. The point is to keep the first thirty days about wiring, not retroactive process change.
Week 3 to 4: One closed-loop deal, then governance
By week three, the desk runs one new target end-to-end. Sourcescrub finds it. Affinity surfaces the warm-intro path. Claude drafts the primer. DealCloud receives it. The partner reviews the primer, the associate runs the next steps. By week four, the firm writes a one-page governance memo: what the desk does, what is human-only, who has access to what, what data leaves the firm and to which vendor, and how Claude prompts that contain non-public deal information are handled.
Decision the partner makes this week: a quarterly review cadence. The desk gets a fifteen-minute slot at every Monday partner meeting for the first quarter, then thirty minutes monthly thereafter. The point of the review is not the tools. It is what the partners are no longer doing manually.
Total spend by end of month one: roughly $12,000 to $34,000 a month run rate, depending on whether DealCloud is bought now or deferred. Total head count change: zero. The desk owner is an existing associate with a new title and a new accountability.
What This Costs in Detail
Public pricing exists for two of the five components and is custom for the other three. The estimate below is built from public list prices where available, market norms for custom-priced products at a five-to-ten user middle-market firm, and recent reference-check conversations with PE-software vendors. Numbers will vary. Negotiation room is real. The point of the table is to show that the line items are not a mystery.
| Component | Approximate annual cost | Notes on pricing |
|---|---|---|
| Sourcescrub | $36,000 to $80,000 | Custom; multi-seat. Trade-show and conference data sets are commonly priced by user and by data-set scope. |
| Affinity | $10,000 to $25,000 | Public starting price approx $2,000 per user per year. Five-to-twelve-user middle-market deal teams. |
| Anthropic Claude | $12,000 to $48,000 | Mix of API spend for the prompt library and team workspace seats. PE plugin pricing is custom. |
| Intapp DealCloud | $60,000 to $180,000 | Custom; firmwide. Higher end reflects fundraising/IR module plus broader portfolio integration. |
| Cobalt (FactSet) | $25,000 to $75,000 | Custom; varies with portfolio company count and LP-side modules. |
| Total run rate | $143,000 to $408,000 per year | Roughly $12,000 to $34,000 per month. The lower end is realistic for funds deferring DealCloud and Cobalt. |
A more honest framing: the desk's first-year cost lands somewhere between half and one full middle-market associate's loaded compensation. A $143,000 to $408,000 annual run rate, against a five-to-twenty-person deal team, is roughly the price the firm would pay to add one half-time analyst. The desk does the work of considerably more than half an analyst.
Two budget pitfalls are worth flagging. First, implementation services are sometimes priced separately by Affinity and DealCloud, in the $10,000 to $50,000 range, and a firm that signs a three-year contract should negotiate those into the base. Second, foundation-model spend is the line item most likely to grow unexpectedly. A firm that runs its prompt library on a cheap, less capable model variant for the first six months and then upgrades to a more capable model for IC-grade work will see API costs roughly triple. Budget for the upgrade.
What This Does Not Replace
Three things the AI deal desk does not replace, in order of how often the question is asked.
It does not replace the LP base. A first-quartile fund with an indifferent LP roster can run the best desk in the middle market and still raise on a knife edge. The desk is a story, not a product, in fundraising. The partner who can speak to it specifically and not in slogans wins more re-up conversations than the partner who cannot.
It does not replace operating partners at the portfolio-company level. KKR Capstone is 100 people for a reason. Middle-market firms typically run in-house operating partner teams of three to ten, occasionally with sector specialists, and that work is about people on the ground inside a portfolio company. Claude can compress an operating partner's reporting and analysis. It cannot replace the operator's presence in a board meeting at a $30 million ARR portfolio company that is missing plan.
It does not replace the partner's judgment on which deals to do. The desk's outputs are inputs. The partner who treats the desk as a substitute for thinking will lose money. The partner who treats it as a way to think faster, on more deals, with less cognitive load, will not.
The Risks Nobody Is Talking About
The trade press on AI in PE skews relentlessly positive. The trade press on AI in PE is also written by people who do not have to live with the cap-table consequences of a bad data-handling decision. Four risks deserve a serious sentence each, and a Lake Forest sponsor should price all four into the desk before the desk goes live.
The first risk is data leakage. The desk runs on non-public deal information. CIMs from sell-side bankers, target financial models, working-session notes, and IC pre-reads are all material non-public, and most of them are also subject to NDA. Every foundation-model vendor publishes a data-handling policy, and Anthropic's published policy is genuinely strong. The risk is not the vendor. The risk is the prompt that an associate accidentally pastes into the wrong tab. The mitigation is twofold: the firm runs the prompt library inside an enterprise workspace where logs are auditable, and the firm trains every desk user on a one-page rule that says no NDA target's deal-specific information enters any model that is not under the firm's enterprise contract.
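The "wrong tab" mitigation can be backstopped with a pre-submission check. A sketch, with a hypothetical NDA registry and made-up code names, of a gate that scans a prompt before it reaches any model outside the enterprise contract:

```python
# Sketch of a prompt-hygiene gate. The registry, code names, and the
# simple substring match are all assumptions for illustration; a real
# deployment would sit in a proxy in front of the model API.

NDA_REGISTRY = {"Project Falcon", "Acme Roofing", "Blue Harbor Holdings"}

def check_prompt(prompt, registry=NDA_REGISTRY):
    """Return the NDA-registered terms found in a prompt.
    An empty set means the prompt is clear to send."""
    return {term for term in registry if term.lower() in prompt.lower()}

leaks = check_prompt("Summarize the CIM for Project Falcon vs. peers")
print(sorted(leaks))  # → ['Project Falcon']
```

A substring match will not catch a pasted financial model, so the check supplements training rather than replacing it; but it turns the one-page rule into something the workspace can enforce on every submission.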
The second risk is prompt injection at the diligence stage. A diligence package that includes documents from an outside party, including the seller's data room, can contain text that instructs a model to behave differently. A real-world example surfaced in 2025 when researchers showed that a competitor's sales document, fed into a generic model, could be made to produce a recommendation against itself. At the deal desk, the practical mitigation is to never run an automated workflow that takes raw seller-provided documents as input and produces a final, partner-facing recommendation. The model output is always an input to a human review. The diligence package never sees the prompt without a human between them.
The third risk is vendor lock-in. The five-tool desk has reasonable optionality on three components and meaningful lock-in on two. Sourcescrub, Affinity, and Cobalt are replaceable on twelve-month notice with significant migration cost but with the firm's data still owned by the firm. DealCloud and the prompt library inside Anthropic's workspace are stickier. DealCloud becomes the firm's pipeline-of-record, and the cost of moving off it after eighteen months is substantial. The prompt library, once written, is portable across model vendors but not across workspaces; a firm that wants to switch from Anthropic to a competitor will need to re-test every prompt against the new model. The mitigation is contractual: every vendor contract should have a documented data-portability clause, and every prompt template should be stored in plain text in the firm's own version control, not only in the vendor's workspace.
The fourth risk is false confidence. The output of a foundation model reads like the output of a smart analyst. It is not. It is the output of a system that has read the entire internet and learned to produce text that sounds right. The system has no internal sense of the difference between a confident claim and an unverified claim, except for the one the prompt instructs it to mark. The desk's most important governance rule is the one most likely to get skipped on a tight deadline: every claim in a partner-facing document, before it goes to IC, must be verified by a human. Not skimmed. Verified.
A Final Note on Timing
The verdict on the AI deal desk is straightforward at the middle market. The pattern is now sufficiently clear, the tools sufficiently mature, and the foundation-model layer sufficiently priced for PE specifically that a firm holding off on this stack into the second half of 2026 is making a deliberate competitive choice. KKR, Apollo, Blackstone, Hill House, Permira, and General Atlantic have already chosen. The middle-market funds that will outperform their cohort over the next vintage are the ones that translated those decisions into a five-tool desk that fits a five-person team.
The most underrated piece of the playbook is also the cheapest: name the desk owner this week. The tools are buyable in any order. The desk does not exist until somebody at the firm is accountable for it.
Frequently Asked Questions
How much does the AI deal desk actually cost for a $400 million Lake Forest fund?
Total run rate lands between $143,000 and $408,000 per year for a five-to-twenty-person deal team, depending on whether DealCloud and Cobalt are bought now or deferred. The lower end of that range is realistic for funds running fewer than ten portfolio companies and deferring DealCloud for twelve months. Implementation services from Affinity and DealCloud, when negotiated separately, add $10,000 to $50,000 in the first year.
Will the AI deal desk replace any existing roles at our firm?
No, and a fund that builds the desk to replace head count will likely break the desk. The desk shifts work that was previously done in the worst hours of an associate's week, including manual list building, target enrichment, and first-draft IC pre-reads, onto a tier of intelligence under every member of the existing deal team. The desk does not replace partners' judgment, the seller relationship, the diligence judgment, the price negotiation, the structuring decision, or the IC vote.
We have not picked a CRM yet. Do we need to commit to DealCloud first, or can we start lighter?
Funds with under eight portfolio companies and fewer than five active deals at a time can start with Affinity alone for the first six months and run their pipeline tracker in a clean spreadsheet or DealCloud's CRM-only tier. The thirty-day implementation plan in this article can be run with Affinity, Sourcescrub, and Anthropic Claude alone, deferring full DealCloud adoption to month four or five. Cobalt can wait twelve months for funds with fewer than twelve portfolio companies.
What is the single biggest risk of using foundation models on deal information?
The most underrated risk is false confidence: the output of a foundation model reads like the output of a smart analyst, but the model has no internal sense of which claims are verified and which are inferred. The governance rule that protects against it is non-negotiable. Every claim in a partner-facing document, before it goes to IC, must be verified by a human. Data leakage and prompt injection are the second and third risks; the desk should run inside an enterprise workspace with auditable logs and should never run an automated workflow that produces a final partner-facing recommendation from raw seller-provided documents.
How does this hold up if the next Anthropic or competing model release changes the cost or capability picture?
The architecture is intentionally model-agnostic. The firm's prompt library is stored as plain text in the firm's own version control, not only inside the vendor's workspace, so a switch from Anthropic Claude to a competing model is a re-testing exercise, not a rewrite. The five-tool stack also degrades gracefully. If foundation-model pricing rises or capability fluctuates, Sourcescrub, Affinity, DealCloud, and Cobalt continue to do their work. The Claude layer is the most replaceable component of the desk, by design.
About the author
Written by
Michael Pavlovskyi
Founder, Bace Agency
AI consulting for Lake Forest private equity.
Connect on LinkedIn
Want to see how AI fits in your firm?
Book a free 30-minute AI audit. No obligation, no pitch deck.
Book a Free AI Audit →