AI is arguably the most important challenge facing legal teams in 2026.
The conversation has moved far beyond whether legal should explore AI; the question now is how to do it in a way that is practical, responsible, and actually useful.
That was the focus of Radiant Law’s recent webinar with Laura Revie, Director of AI Acceleration at FanDuel, who has spent her career helping teams rethink how work gets done. In conversation with Radiant’s Mario Ferreira, Laura shared a practical view on AI adoption: where legal teams should start, what often gets in the way, and how to build momentum without getting distracted by all the noise and hype.
Here are some of the key takeaways from their discussion.
1. Start with the problem, not the tool
One of Laura’s strongest messages was simple: do not start with AI.
Start with the problem.
Too often, teams begin with the tool and then try to find somewhere to use it. That usually leads to experimentation without impact.
A better approach is to identify where work feels slow, repetitive, manual, or difficult to scale, and start there.
That could be:
- a repetitive admin-heavy task
- a process handled by multiple people but with little consistency
- a piece of work that takes too long to complete or review
- a recurring bottleneck that repeatedly slows the team down
The goal is to solve a business problem more effectively, not simply to use AI for its own sake.
2. Legal teams should make AI available early, but responsibly
For teams at the beginning of their AI journey, Laura recommended a two-pronged approach.
First, make AI tools available so people can start experimenting and learning what they are good at. Second, pair that access with clear guardrails and training so people understand how to use AI responsibly, especially around sensitive data, risk, and governance.
This matters because without access, teams don’t build confidence, and without guardrails, they introduce unnecessary risk.
The most effective organisations create space for experimentation, but they do it with clear rules around what can be used, what data can be shared, and when human review is required.
3. Most AI opportunities start with process friction
Many teams assume AI adoption starts with a new tool, but it actually starts with process visibility.
Laura described how often teams are working through processes that have evolved over time but have never been properly mapped. Once those workflows are made visible, the friction becomes much easier to spot and that is often where the real opportunity sits.
Not in replacing everything at once, but in identifying:
- where work gets stuck
- where decisions are repeated
- where tasks are overly manual
- where people are spending time on low-value work
In many cases, the first AI use case is a small, repeatable, frustrating task that is taking up too much time.
4. Proof of concept matters more than perfect strategy
One of the most practical points Laura made was that legal teams do not need to start with enterprise-wide transformation; they can start with one useful proof of concept.
That means choosing a narrow problem, testing it with a small group, and learning quickly.
A short pilot can:
- show whether AI adds value
- build confidence in the team
- surface limitations early
- create internal momentum
- generate evidence for wider adoption
Not every proof of concept will succeed, and that is fine because a failed pilot that creates clarity is still progress.
5. Good AI adoption depends on change management, not just technology
A recurring theme in the discussion was that AI adoption is a change management challenge: even the best tool will fail if no one uses it.
That means successful adoption depends on:
- leadership support
- clear success criteria
- team buy-in
- practical rollout planning
- giving people permission to work differently
People need to understand not just what is changing, but why. The strongest adoption happens when teams can see the value being built in real time, shape it themselves, and understand how it improves their day-to-day work.
6. Legal should automate routine work, not legal judgment
For legal teams, Laura was clear: AI is powerful, but it should not replace human judgment.
The most effective use cases are the ones that remove low-value, repetitive work so lawyers can focus on higher-value thinking. Strong examples include:
- research support
- document review
- administrative workflows
- routine drafting support
- scheduling and task coordination
- surfacing insights from large volumes of information
The line becomes much clearer when the task requires legal judgment, risk interpretation, or critical decision-making; that is where lawyers should remain firmly in the loop.
7. AI adoption works best when end users help build it
One of the strongest practical lessons from the webinar was that AI tools work best when the people using them help shape them. That means involving end users early, testing quickly, and iterating in real time.
Instead of handing requirements to a technical team and waiting months for delivery, Laura described a much faster model:
- build something small
- test it early
- adjust quickly
- improve it with direct user feedback
That approach shortens the gap between idea and value and makes adoption far more likely.
8. Not every AI idea should survive
One of the most useful mindsets Laura shared was this: teams need to get comfortable stopping things.
Not every AI experiment should become a long-term solution. Some ideas are worth testing and then abandoning, and that should be seen not as failure but as discipline. If a tool does not solve the right problem, introduces too much risk, or does not create enough value, stopping is often the right decision.
The value is not in forcing AI into the workflow, but knowing when AI meaningfully improves it.
Final thought: the real advantage is not AI, it is better ways of working
One of the clearest messages from the conversation was that AI adoption is actually about helping teams work better. For legal, that means:
- reducing time spent on repetitive work
- improving speed and consistency
- creating more capacity for high-value thinking
- making smarter operational decisions
- building ways of working that can scale
The teams that will benefit most from AI are not the ones chasing every new tool; they are the ones asking better questions, solving real problems, and building practical habits around how work gets done.