The CFO's guide to AI tool evaluation: Glenn Hopper's proven framework

Zone & Co Team

Summary (TL;DR)

  • Glenn Hopper – CFO and AI strategist with 20+ years of experience – shared his proven framework for evaluating AI at our Finance Forward event
  • Glenn's approach: treat AI like any other capital investment, begin with low-risk pilots and always ask “what’s the cost of being wrong?”
  • This post breaks down Glenn Hopper's eight-question evaluation framework, the risk matrix for AI project selection and lessons from CFOs already applying these principles in practice

Why CFOs need a different lens on AI adoption

Every week brings another “game-changing” AI tool pitched at finance leaders, each promising to transform reporting, forecasting or compliance. Yet when leaders are asked which AI initiatives actually made it past pilot, the list is short.

But the problem isn’t the technology per se. As Glenn Hopper (author, lecturer and strategic advisor specializing in AI-powered finance transformation) told the audience at our recent Finance Forward virtual event, failures often stem from poor project selection and misplaced expectations.

AI projects in finance often get underway without the same structure and evaluation you’d expect from any other capital investment. Without that discipline up front, even promising tools struggle to gain traction or earn trust.

This article breaks down Hopper’s systematic approach – the risk matrix that guides AI project selection, the questions that prevent expensive AI failures and real-world insights from CFOs already making artificial intelligence work inside their finance operations.

The cost-of-being-wrong lens

Most finance leaders default to asking: What’s the upside if this AI project works? Hopper flips the question: What happens if we get this wrong? 

That shift reframes the risk immediately because not every mistake carries the same weight. Some are manageable; others cut deeper. Hopper broke it down into three levels of risk:

  • Operational errors: A robotic process automation (RPA) bot enters an invoice incorrectly. It’s frustrating, but the mistake is auditable, caught in reconciliation and fixed in the next cycle.
  • Reputational errors: Generative AI drafts a board summary with a misstatement. Suddenly, credibility is at stake. Executives and investors may act on flawed information.
  • Strategic errors: Fully automating investment priorities or cash management without human oversight. Following an AI recommendation blindly creates real P&L exposure and long-term decision risk.

As Hopper explained during the event: “Where the risk gets the highest is if we’re trying to offload the very human task of decision-making right now.”

The CFOs seeing progress with AI aren’t chasing efficiency at all costs. They start by mapping where failure is recoverable and where it isn’t. That distinction shapes how far they let automation run – and where the human stays firmly in the loop.

The risk evaluation matrix: a CFO’s filter for AI projects

After reframing risk around the “cost of being wrong,” Hopper applies a simple filter: magnitude × frequency. Together, those dimensions determine whether an AI project belongs in the “pilot zone” or requires stronger guardrails.

  • Magnitude: how big is the impact if it fails?
  • Frequency: how often does it run or influence decisions?

Glenn Hopper's Risk Matrix for AI Automation Projects

The intersection creates four zones:

1. High Magnitude + High Frequency – the highest risk

Examples: board reporting, financial close, audit prep

These are the processes you don’t hand off to AI. Hopper identifies them as too frequent and too consequential: human oversight and strong controls are non-negotiable.

2. High Magnitude + Low Frequency – rare but catastrophic

Examples: M&A analysis, regulatory filings, strategic forecasting

Hopper characterizes these as rare but catastrophic risks where manual review is key. Use AI very cautiously here – let it collect and process data, but keep human oversight on insights and decisions.

3. Low Magnitude + High Frequency – "the pilot zone"

Examples: invoice processing, reconciliations, expense categorization

This is the proving ground. Hopper calls it the ideal zone for GenAI pilots: frequent use with low downside. This is where you test and learn safely – the impact is small if something goes wrong, yet the returns can be substantial.

4. Low Magnitude + Low Frequency – safe to automate

Examples: basic data validations, routine reporting updates

Hopper notes these are infrequent tasks with minor impact that are safe to automate with minimal oversight. Once your SOPs are documented and processes are stable, the risk-reward equation strongly favors automation.

As Hopper summed it up to the Finance Forward audience: "You can match the tool to the risk. Use GenAI where it's safe and use discipline where it's not."

The practical takeaway: build trust in the pilot zone first. That’s where CFOs can rack up quick wins, prove reliability and create the cultural readiness for higher-stakes AI adoption.
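
For teams that want to make the filter concrete, here is a minimal sketch of the magnitude × frequency classification expressed in Python. The zone labels, function name and boolean inputs are illustrative assumptions rather than anything Hopper prescribed; in practice, magnitude and frequency would come from your own scoring of each process.

  from enum import Enum

  class Zone(Enum):
      """The four zones of the magnitude × frequency matrix."""
      HIGHEST_RISK = "High magnitude + high frequency: human oversight and strong controls, no hand-off to AI"
      MANUAL_REVIEW = "High magnitude + low frequency: AI gathers and processes data, humans own insights and decisions"
      PILOT_ZONE = "Low magnitude + high frequency: the proving ground for GenAI pilots"
      SAFE_TO_AUTOMATE = "Low magnitude + low frequency: automate with minimal oversight once SOPs are stable"

  def classify(high_magnitude: bool, high_frequency: bool) -> Zone:
      """Place a finance process in one of the four zones."""
      if high_magnitude and high_frequency:
          return Zone.HIGHEST_RISK
      if high_magnitude:
          return Zone.MANUAL_REVIEW
      if high_frequency:
          return Zone.PILOT_ZONE
      return Zone.SAFE_TO_AUTOMATE

  # Example: invoice processing fails cheaply but runs constantly, so it lands in the pilot zone
  print(classify(high_magnitude=False, high_frequency=True).value)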

The eight questions that prevent AI failure

Frameworks provide structure, but discipline prevents expensive mistakes. Hopper outlined eight questions CFOs should work through systematically before approving any AI project.

Foundation questions – is the process ready for AI?

  1. Could we solve this with basic automation instead of AI? Don't overengineer solutions. Many finance processes need rule-based automation, not AI intelligence. Hopper regularly sees clients who expect to "sprinkle AI on any process" when what they really need is structured workflow automation.
  2. Is the workflow stable enough to automate with AI? This is the overlooked prerequisite that derails more AI projects than technology failures. You can’t automate chaos. Document your standard operating procedures first – they become the training foundation for any AI system.
  3. Do we trust how this AI tool handles financial data? Security, privacy and auditability aren't negotiable in finance. If you can't explain the process to auditors or demonstrate compliance controls, it's not suitable for financial operations.
  4. Is our data clean and structured? AI amplifies data quality problems. Garbage in, garbage out becomes exponentially worse when AI systems learn from bad data patterns. Clean your data foundation before adding intelligence layers.

Strategic alignment questions – will the AI investment hold up?

  5. What's the true cost of being wrong? Refer back to your risk matrix. Understand the full downside scenarios, not just the immediate operational impacts.
  6. Will this scale at 2x growth? Think beyond the pilot. If you're a fast-growing company or facing M&A activity, will this process grow with you, or will you need to rebuild as you scale?
  7. Who else depends on this process? AI decisions can't happen in silos. Finance processes typically feed into operations, sales and executive decision-making. Ensure cross-functional alignment before deployment.
  8. Can we explain the results? Technical transparency matters, but stakeholder trust and organizational adoption matter more. If your team can’t understand and communicate how the system works, adoption will stall.

Instead of treating these as a checklist, Hopper urged leaders to see them as a filter. They sharpen which projects earn resources, which ones wait and which never make it past the idea stage. For CFOs, that filter is the difference between chasing hype and building a disciplined AI strategy.
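
One way to keep that filter repeatable is to record the answers to all eight questions in a single structure, so every proposed project is stress-tested the same way before it earns resources. The sketch below is illustrative only – the field names and pass/fail framing are assumptions, not part of Hopper's framework – and its purpose is to surface unresolved questions for discussion, not to let a script make the call.

  from dataclasses import dataclass, fields

  @dataclass
  class AIProjectAssessment:
      # Foundation: is the process ready for AI?
      simpler_automation_ruled_out: bool    # Q1: rule-based automation alone wouldn't solve it
      workflow_stable_and_documented: bool  # Q2: SOPs exist and the process is stable
      data_handling_trusted: bool           # Q3: security, privacy and auditability verified
      data_clean_and_structured: bool       # Q4: no garbage-in, garbage-out exposure
      # Strategic alignment: will the investment hold up?
      downside_understood: bool             # Q5: cost of being wrong mapped on the risk matrix
      scales_at_2x_growth: bool             # Q6: survives growth or M&A without a rebuild
      stakeholders_aligned: bool            # Q7: downstream teams consulted before deployment
      results_explainable: bool             # Q8: the team can explain outputs to auditors and the board

  def open_questions(assessment: AIProjectAssessment) -> list[str]:
      """Return the questions still unresolved; an empty list means the project clears the filter."""
      return [f.name for f in fields(assessment) if not getattr(assessment, f.name)]

  # Example: a reconciliation pilot with undocumented SOPs fails question 2
  pilot = AIProjectAssessment(True, False, True, True, True, True, True, True)
  print(open_questions(pilot))  # ['workflow_stable_and_documented']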

How CFOs are actually making AI work

At our Finance Forward event, the stories centered around one key thing: finance leaders aren’t chasing full-scale AI transformations. They’re starting with tactical, measurable wins – and using those wins to build the confidence, culture and discipline needed for broader adoption.

Take David Samuels at DrFirst. His team cut vendor payment processing to under one FTE while handling hundreds of vendors – not through a flashy rollout, but through careful vendor evaluations and what he calls an “AI-first culture”: leadership alignment, structured experimentation and open conversations about automation. 

That discipline laid the foundation to go further, compressing close cycles by integrating NetSuite with Adaptive Planning and Salesforce, and applying predictive modeling when investors pushed for daily cash visibility.

Chad Wonderling at Zone & Co echoed that mindset but with a different angle. His focus is on redefining roles: “We need to turn our team of doers into reviewers. The human has to be in the loop.” For him, the value comes from offloading high-volume, rules-based tasks like revenue recognition, AP processing or ARR tracking, so his team spends energy on judgment, validation and strategic interpretation.

Different tactics, same thread: these CFOs aren’t measuring success by how much they’ve automated. They’re measuring it by how much control, accuracy and strategic bandwidth their teams have gained. That shift – from chasing efficiency to strengthening oversight – is what makes adoption sustainable.

Where finance wins with AI: pace, precision and control

AI in finance is filled with noise. New tools launch daily, and every vendor claims to be the breakthrough. The CFO’s role is to cut through that hype with the same structured thinking used in any capital investment.

That’s where Glenn Hopper’s framework becomes the filter:

  • Cost-of-being-wrong mindset: weigh risk before reward
  • Risk evaluation matrix: map where AI adds value – and where it creates exposure
  • Eight-question assessment: a repeatable filter to stress-test AI projects before funding

Where this matters most is in deciding how AI enters the finance stack. 

Some vendors are pushing AI-first ERPs, promising to rebuild finance on a new foundation. But for most teams, progress is coming from the opposite direction – layering intelligence into the ERP they already trust. Enhancements like GenAI-assisted invoice capture or predictive modeling inside NetSuite respect existing controls, governance and auditability.

That approach aligns with what finance leaders value most: confidence that automation won’t compromise accuracy or accountability. Discipline, in this sense, isn’t about slowing down. It’s about scaling AI in places where reliability is clear and the risk profile is acceptable.

