
The Question Every Legal Marketer Is Afraid to Ask Out Loud

Written by Guy Alvarez | Mar 20, 2026 2:21:02 AM


AI tools are commodities. Everyone has access to the same models. The differentiator is what you teach them.

Last week, I spent two hours in a conference room with 35 marketing and business development professionals from an AmLaw 100 law firm.

I've done dozens of these trainings. And without fail, the first question on everyone's mind is never about prompts or tools or workflows.

It's this: Am I going to be replaced?

I've learned to address it head-on. Because until you do, nobody's listening.

Here's what I told them: AI replaces tasks, not people. Specifically, the tasks you probably hate doing.

The Chief Operating Officer was sitting in the back of the room. When I said that, he nodded. That nod mattered more than anything I could have said. It signaled permission from leadership to actually experiment with this stuff without fear.

The energy in the room shifted after that. Shoulders dropped. People leaned forward.

Now we could actually talk about something useful.

The Real Problem with AI Training

Most AI training for legal marketers goes like this: Here's ChatGPT. Here are some prompts. Go forth and be productive.

That approach fails for a simple reason. It doesn't help people figure out where to apply AI in the first place.

Legal marketing teams don't need more prompts. They need a decision-making framework. A shared language for evaluating whether a task is even worth automating.

So I introduced them to TRIPS, a framework developed by Christopher Penn and his team at Trust Insights. I've found it to be one of the most practical tools for helping marketing teams prioritize AI use cases.

Here's how it works:

Time: How much time does this task consume weekly?

Repetition: How often do you do the exact same type of work?

Importance: What's the risk if AI gets it wrong? Low-stakes tasks score high; high-stakes tasks score low. (Start with low-stakes tasks.)

Pain: How much do you hate doing this?

Sufficient Data: Do you have examples or templates AI can learn from?

Score each criterion from 1 to 5. Add them up. High score means it's a strong candidate for AI. Low score means save it for later. Or skip it entirely.

The beauty of TRIPS is that it removes the guesswork. When someone on your team asks, "Should we use AI for this?" you run it through the framework together. No debates. No gut feelings. Just a shared system.
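To make the scoring concrete, here's a minimal Python sketch. The field names and example tasks are my own illustration, not an official Trust Insights implementation. Note how the importance criterion is entered as raw risk and then inverted, so low-stakes tasks contribute more points:

```python
from dataclasses import dataclass

@dataclass
class Task:
    """One recurring marketing task, scored 1-5 on each TRIPS criterion."""
    name: str
    time: int        # weekly time burden: 5 = eats hours every week
    repetition: int  # 5 = same structure every time
    risk: int        # consequence if AI gets it wrong: 5 = high stakes
    pain: int        # 5 = everyone dreads it
    data: int        # 5 = plenty of examples or templates to learn from

    def trips_score(self) -> int:
        # Risk is inverted: a high-stakes task should lower the total,
        # so a risk of 5 contributes only 1 point toward importance.
        importance = 6 - self.risk
        return self.time + self.repetition + importance + self.pain + self.data

tasks = [
    Task("Social posts from client alerts", time=4, repetition=5, risk=1, pain=4, data=5),
    Task("First-draft RFP responses",       time=5, repetition=4, risk=2, pain=5, data=4),
    Task("Budget management",               time=3, repetition=2, risk=5, pain=4, data=2),
]

# Highest score first: the strongest AI candidates float to the top.
for t in sorted(tasks, key=lambda t: t.trips_score(), reverse=True):
    print(f"{t.trips_score():2d}  {t.name}")
```

Run against these sample scores, social content and RFP drafting rise to the top while budget management sinks, which matches what the groups found in the room.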

What Happened When They Used It

I broke the room into groups. Five people each. A mix of AI enthusiasts and skeptics in every group. On purpose.

The exercise was straightforward: brainstorm five to seven recurring tasks your group handles. Score each one with TRIPS. Pick the winner. Then decompose it: What's the input? What's the output? What does "good" look like?

That last step matters. If you can't define what a successful result looks like, AI won't help you. It needs guardrails.
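The decomposition step can be written down as a simple spec before any tool gets involved. This is a hypothetical example of what one group's winning task might look like on paper; the fields and wording are mine, not the firm's:

```python
# A spec for the decomposition step: input, output, and what "good" looks
# like. If the last field is hard to fill in, the task isn't ready for AI.
task_spec = {
    "task": "Turn a client alert into LinkedIn posts",
    "input": "Final client alert text (roughly 800-1,200 words)",
    "output": "Three LinkedIn posts, each under 150 words, with a hook line",
    "good_looks_like": [
        "Accurate to the alert: no invented facts or citations",
        "Matches the firm's voice: plain, confident, no hype",
        "Each post has one clear takeaway and a call to action",
    ],
}

# The spec doubles as guardrails: every "good_looks_like" item becomes
# either an explicit instruction to the model or a review-checklist item.
for criterion in task_spec["good_looks_like"]:
    print("- " + criterion)
```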

When the groups reported back, the winning use cases weren't surprising. They were exactly what you'd expect from a marketing team:

Creating social media content from articles. This scored high across the board. It's time-consuming, highly repetitive, relatively low-risk if AI gets it slightly wrong, and most teams have plenty of examples to feed the model.

Responding to RFPs. This one takes hours. Sometimes days. The output is templated. Past responses provide excellent training material. A strong TRIPS candidate.

Event planning and management. Lots of moving pieces. Checklists. Communications. Logistics. All of it can be augmented with AI if you've documented your process.

These are the kinds of tasks where AI shines: repetitive, structured, low-risk, and data-rich.

But here's where it got interesting.

What AI Won't Help You With (Yet)

One group brought up budget management. They assumed it would score high because it's painful and time-consuming.

I had to pump the brakes.

Budgeting involves high-stakes decisions. One miscalculation doesn't just look bad. It affects headcount, campaign viability, vendor relationships. The "Importance" score on TRIPS should be inverted: the higher the risk if AI makes a mistake, the lower it should score.

AI can help you analyze budget data. It can surface trends. It can even suggest optimizations. But the final call? That needs to stay with a human who understands the organizational context. At least for now.

The other issue that emerged was more frustrating: integration.

Several groups identified use cases that would be perfect for AI. If only their systems talked to each other.

Imagine asking AI to draft a personalized pitch based on a prospect's interaction history in your CRM, combined with their engagement on your website, cross-referenced with recent news about their company. That's a dream use case. High TRIPS score across the board.

But at most law firms, those systems aren't connected. The CRM doesn't talk to the website analytics. The competitive intelligence tool sits in its own silo. And even if you wanted to connect them, security concerns around privileged information make integration feel impossible.

This is a problem unique to legal. Other industries can wire up their tools without worrying about inadvertently exposing confidential client data. Law firms can't. And it limits AI's usefulness in ways that don't get talked about enough.

So what happens? Marketers end up doing the integration manually. Copying data from one system to another. Spending hours on tasks that AI could handle in seconds. If only the pipes were connected.

Until firms invest in secure data infrastructure, this gap will persist. And the marketing teams who want to use AI will keep hitting walls.

Which Use Cases Actually Work

After years of training legal marketers, I've developed a short list of use cases that consistently deliver results. They all share the same characteristics: repetitive structure, low risk, clear inputs and outputs, and sufficient examples.

Good candidates:

Content repurposing. One client alert becomes five LinkedIn posts. One blog becomes a newsletter, a social thread, and an email sequence. The input is clear. The output format is defined. AI handles the grunt work. You add the firm-specific polish.

First drafts of RFP responses. AI pulls from your past submissions and credentials. It gives you 80% of the document in minutes instead of hours. You supply the customization and quality control.

Event communications. Pre-event reminders. Post-event follow-ups. Speaker confirmations. These are templated, repeatable, and tedious. Perfect AI territory.

Summarization. A 20-page court decision becomes a three-paragraph client alert. A transcript from a webinar becomes a blog outline. AI excels at compression.

Research synthesis. Combining competitive intelligence, market trends, and internal data into a briefing document. AI can pull threads together faster than any human.
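The content-repurposing case above can be sketched as a reusable prompt template. The wording below is illustrative only; you'd tune the rules to your firm's voice and pass the result to whatever model your firm has approved:

```python
def repurposing_prompt(alert_text: str, n_posts: int = 5) -> str:
    """Build a prompt asking an LLM to repurpose a client alert into
    LinkedIn posts. Illustrative wording; adapt to your firm's voice."""
    return (
        "You are drafting LinkedIn posts for a law firm's marketing team.\n"
        f"Repurpose the client alert below into {n_posts} distinct posts.\n"
        "Rules:\n"
        "- Use only facts stated in the alert; do not add claims or citations.\n"
        "- Keep each post under 150 words, opening with a one-line hook.\n"
        "- Plain, confident tone; no hype.\n\n"
        f"CLIENT ALERT:\n{alert_text}"
    )

prompt = repurposing_prompt("Our tax group summarizes the new reporting rule...")
print(prompt.splitlines()[0])
```

Keeping the rules in one function means the guardrails travel with every request, so the human's job shifts from drafting to reviewing against a known standard.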

Weak candidates:

Anything requiring real-time judgment calls. Crisis communications. Sensitive client issues. High-profile media inquiries. AI can draft, but you can't let it run.

Tasks with high consequence for errors. Budget recommendations. Pricing strategy. Partner compensation analysis. The risk outweighs the time savings.

Anything requiring confidential data that lives in disconnected systems. Until your firm solves the integration problem, these use cases remain theoretical.

Strategy development. AI can provide inputs. It can analyze data. But strategic thinking requires human judgment. Understanding firm culture, reading between the lines, making bets. That's still your job.

The Tool Question

At the end of the session, hands went up. The most common question: which tool should we use?

The firm had already rolled out an AI platform for their attorneys. But it wasn't designed for marketing workflows. Meanwhile, Microsoft Copilot was coming. Should they wait? Switch? Use both?

I wish I'd had more time to dig into this. The short answer: different tools serve different purposes. A legal research AI won't help you draft social posts. A general-purpose assistant like Copilot is better for marketing tasks but requires thoughtful prompting to get good results.

The TRIPS framework helps here too. Once you know which tasks you're targeting, you can evaluate tools based on how well they support those specific workflows.

What I Hope Happens Next

I didn't walk out of that room expecting everyone to become AI experts overnight.

What I hope is simpler: when someone on that team encounters a new task, they'll ask a different question. Instead of "Can AI do this?" they'll ask "Should AI do this? What's the TRIPS score?"

That shift from capability to fit is where progress starts.

AI won't fix broken processes. It won't magically connect siloed systems. It won't replace the judgment that makes great marketing teams great.

But it can take the tasks you hate off your plate. The repetitive stuff. The grunt work. The things that eat your afternoons and leave you no time for the strategic thinking you were actually hired to do.

That's the promise. Not replacement. Augmentation.

And for the marketers in that room, many of whom walked in wondering if their jobs were at risk, that distinction made all the difference.

Guy Alvarez is the Co-Founder and Managing Partner of InnovAItion Partners, an AI consulting firm serving professional services firms and marketing agencies. He trains legal marketing teams on practical AI adoption and builds custom AI assistants for law firms.