
I had a conversation last week with a marketing director at a mid-size law firm that clarified something I have been turning over for months.
She told me her firm's CRM implementation was running a full year behind schedule. The vendor had been acquired mid-project, which caused a cascade of delays. She described the experience as "extended chronic pain."
In the same conversation, she expressed interest in our AI workflows for her team: competitor intelligence, article generation, PR automation.
But her conclusion was: "We can't even think about AI until the CRM is done. And after that, we need to get our data cleaned up. Maybe we'll be ready by January 2027."
I hear some version of this in about half of my law firm conversations. And I want to be honest: the instinct is partially correct. AI layered on top of bad workflows and messy data will just produce bad output faster.
A marketing leader at an AmLaw 100 firm made this point well during a workshop planning call. Her new head of pitches and proposals had discovered that content was scattered across multiple locations with no central repository. Some team members suggested pointing AI at everything and letting it search. Her response was blunt: consolidate first, then apply AI.
I saw exactly this dynamic at another firm last month. The marketing team had no shared system for AI prompts, and the result was a mess of duplicated effort. People were saving their best prompts in personal Word docs, Teams chats, and sticky notes on their monitors. One BD manager had built a solid competitor research prompt. Three of her colleagues had independently built worse versions of the same thing. Four people, same task, no coordination, no shared learning. When I asked if anyone had documented their workflows before trying to automate them, the room went quiet.
Both of them are right. If your firm's processes live in people's heads and your content lives in seventeen different SharePoint folders, an AI tool will dutifully generate content from that mess. The output will be technically competent and strategically useless.
Data hygiene is a real requirement. So is process documentation. Having your house in order before you deploy AI at scale is a legitimate prerequisite.
Here is where it gets complicated.
"Get your house in order" has a cousin, and that cousin's name is "not yet." At a certain point, the prerequisite becomes the permanent blocker.
The CRM project finishes, and then there is a budget cycle to wait for. After that, a leadership transition. And once the new CMO is settled in? Someone inevitably calls for a "thorough evaluation" of all available tools before committing to anything.
But the sequential excuses are only half the picture. There are also the emotional ones. I hear "everybody's terrified to invest in building something significant if people won't adopt it" at least twice a month. Or the sunk cost version: "We just rolled out Copilot six months ago, we need to give it a fair shot" even when usage has already cratered. Or the one that is hardest to argue with: "We're just too busy right now to evaluate something new." That last one is particularly effective because it is always true. Law firm marketing teams are always busy. There will never be a calm quarter where everyone has bandwidth to learn a new system.
I have watched firms delay AI adoption for two years using a chain of perfectly reasonable prerequisites. Each one is legitimate on its own. Stacked together, they become a strategy for never starting.
Deployment readiness vs. exploration readiness
The firms that are getting this right are doing something specific. They are separating "deployment readiness" from "exploration readiness."
Deployment readiness means your data is clean and your processes are documented. That takes time and careful work. Fair enough.
Exploration readiness is different. It means you have picked two workflows, given a small team access to a purpose-built tool, and started generating output that your people can evaluate. Clean data is optional here. So is a finished CRM. The only requirement is a willingness to learn what AI can do for your specific team right now.
One firm I am working with took this approach. Their CRM migration was still in progress. Their content was scattered across departments. But they gave their PR team access to a competitor intelligence workflow and a content generation tool.
Two weeks in, the PR director stopped sending briefs to her external agency for first-draft press releases. She was doing it herself. AI gave her a starting point, and her edits took a fraction of the time.
The workflow she chose pulled from external sources: news feeds, competitor websites, public LinkedIn activity. Nothing that required clean internal data. And the outputs were easy to benchmark because she already knew how long drafts used to take, how many revision cycles were typical, and how many pitches went out per month.
What a 60-day exploration pilot looks like
The first two weeks are uneven. People are learning the tool, testing its limits, and producing output that requires heavy editing. This is normal. If you judge AI workflows by what they produce in week one, you will kill every pilot you run.
By week three, something shifts. The tool has enough context about your firm and your competitors that the output starts requiring less revision. Your team starts using it without being reminded. One or two people become internal champions who show others how they are using it.
The team you want on this pilot is small. Three to five people, ideally from PR, content, or competitive intelligence. These are the roles where the time savings show up fastest and the output is easiest to benchmark against what they were producing before. Do not try to onboard the entire marketing department on day one.
By day 60, you should have hard numbers: average time to first draft before and after, number of pitches or posts produced per week, and how many hours your team reclaimed. Those numbers are what you bring to leadership when you ask for the larger investment.
External-data vs. internal-data workflows
I think about that distinction between external-data and internal-data workflows constantly. Many of the highest-value AI marketing use cases pull entirely from public sources: competitor activity, news hooks, industry trends, reporter coverage patterns. Running a competitor intelligence scan takes a subscription and ten minutes, and your CRM can be in any state whatsoever.
Here are the external-data workflows I recommend firms start with, because they require zero internal data cleanup:
Competitor intelligence scans news, press releases, websites, and LinkedIn activity by competitor, topic, and date range. You get a channel-by-channel breakdown of what your competitors are publishing and where your firm has content gaps.
PR pitch generation pulls current news hooks and matches them against your practice areas to produce reporter-ready pitch ideas. Your team still writes the final pitch. But the research and ideation step drops from two hours to ten minutes.
Article generation creates thought leadership drafts in an individual attorney's voice, using their published content and LinkedIn history as style inputs. The output always needs editing. That said, the blank-page problem disappears entirely, and attorneys consistently report the editing takes a quarter of the time that writing from scratch used to.
Social content distribution takes a single article and produces multiple LinkedIn post variations using different engagement frameworks. One article becomes a week of social content.
Trend analysis monitors industry publications, regulatory updates, and competitor commentary to surface emerging topics before they peak.
The internal-data workflows tell a different story. Content personalization built on client relationship history, cross-selling recommendations drawn from matter data, pitch customization from your experience database. These depend on accurate internal records and should wait until that foundation is solid.
But waiting on everything because some things require clean data? That is how you lose two years to preparation theater.
How to sell the pilot internally
The person reading this article is probably not the person who approves the budget. So here is how to frame the conversation with whoever does.
The cost is smaller than people assume. One CMO at an AmLaw 200 firm told me he expected AI workflows to cost $300,000. When I showed him the actual pricing, he laughed. We are talking about monthly subscriptions that cost less than what a single junior associate bills in a week.
The commitment is limited. Sixty days, three to five people, two workflows. You either get hard data that justifies expanding, or you have spent less than the cost of a single conference sponsorship finding out it was the wrong fit.
Waiting has a measurable cost. Every month your competitors are running AI-powered competitor intelligence and your team is doing it manually is a month you are falling behind on both speed and coverage.
My recommendation
Here is my honest recommendation for any firm stuck in this loop.
Start with external-data workflows this quarter. Competitor intelligence and PR pitch generation are the two I would prioritize, because the time savings are immediate and obvious to leadership.
Run them for 60 days and measure output quality alongside time savings. Use that data to build the internal business case for the larger investment.
Meanwhile, keep working on your CRM migration and your content consolidation. Those projects will take as long as they take, and there is no reason to let them hold everything else hostage.
Eighteen months from now, the gap between firms that started messy and firms that waited for perfection will be obvious.