Finance Forward: Navigating AI & Strategy with a CFO Mindset

August 26, 2025

The program opens with a keynote by Glenn Hopper (CFO, AI strategist, and author) offering a practical framework for evaluating AI in finance. This will be followed by an expert panel of CFOs from Dr.First, FinQore, and Zone & Co, exploring topics like frictionless financial operations, investor-grade reporting, and real-world boardroom dynamics. The event concludes with an interactive Q&A, offering direct insights from seasoned finance executives.

Transcript

Kathryn Adams: I have the privilege of introducing our keynote speaker, Glenn Hopper. Glenn is an author, lecturer, and strategic advisor specializing in AI-powered finance transformation.

With more than twenty years' experience as a CFO at private-equity-backed companies, he helps finance leaders adopt AI for forecasting, reporting, and operations. The author of Deep Finance and AI Mastery for Finance Professionals, Glenn also lectures at Duke University, the AICPA, and CFI on applying AI in real-world finance functions.

He is committed to helping finance evolve through smarter systems, stronger insights, and better execution. Please join me in welcoming Glenn Hopper.

Glenn Hopper: Hey, Kathryn. Thank you.

And thank you everyone for joining. I've gotta say I love the topic of this conversation, and I actually can't take credit for this.

We've been planning this for several weeks now, and in talking with the folks at FinQore and Zone & Co, the idea for this presentation became how to evaluate AI and automation like a CFO. And the reason I love this is that there's so much hype around AI right now.

It's what brings you all to the session, I'm sure. But there's also a lot of confusion around it.

And we're going to get into that. With all this noise out there, I think it's especially important to keep that CFO's mindset as we evaluate all the shiny new tools and toys being pitched to us, and not just chase the first thing that comes along thinking there's a panacea that will solve all of our back-office woes.

So I want to talk about how we should think about evaluating AI and rolling it out. And the timing of this webinar is perfect because of some recent news.

The AI Internet has been abuzz lately about an MIT study that came out last week, a report from the NANDA group at MIT called State of AI in Business 2025.

And that report said that 95% of enterprise generative AI projects failed to deliver measurable business impact. Only 5% of pilots generate material P&L results.

And there are other stats that go along with that. One from the IBM Institute for Business Value a couple weeks ago said only 25% of AI initiatives meet ROI goals and just 16% achieve enterprise-wide scale.

Another, an S&P Global analysis, said about half of AI proofs of concept are abandoned before production. And a few weeks ago, Gartner predicted that over 40% of agentic AI projects will be canceled by 2027 due to unclear value and escalating costs.

So if I haven't scared everyone into abandoning the webinar, there's a reason I led with that. And the reason goes back to the two fears around AI right now.

I would say the first one is fear of missing out: the fear that everybody else is going to be doing AI.

They're going to be getting all these great efficiencies and moving ahead, and we're too slow to adopt and adapt and are getting left behind. So there is pressure, from boards, from investors, from senior management, to "do AI."

The other side of the fear is that a lot of times we don't know what "do AI" means. And the reason for that is the technology is new and there's a lot of conflicting information about it.

And how do we evaluate something we don't understand? So the goal in my lecturing, in my articles, and in my work with clients is first to bridge that education gap. When you have the information, it's easier to make the decisions.

So I think having the information, having a framework, keeps us from launching these projects that are destined to fail. And these projects are not failing, I have to say, as a fault of the technology itself.

They're failing due to incorrect assumptions about what it can do, or thinking that you can just turn everything over to the magic robots who are going to solve all of our ails.

So we really want to think about the framework, how to evaluate this, and what it means when we "do AI." As CFOs and finance leaders, we evaluate everything that comes along this way, whether it's a capital investment, a forecast, or a treasury management decision.

We're supposed to be the adults in the room. You can have your visionaries on the sales and marketing side.

You can have the CEO who's driving the entire ship, but we need to be the calm, measured voice of reason. And we do that through a disciplined approach.

Just like with any other investment, we evaluate the risk. We look at the impact, and we put controls in place, and we determine the value of the investment.

And with this sort of CFO lens, we can start to evaluate these projects differently. So today, I want to lead off with a little bit of education.

I have a tendency to get pedantic on this, because I do think it's important that we all understand a little more about what's going on under the hood with AI. If we don't understand how it's working, how are we going to trust it with decisions or know which decisions we can turn over to it? If we don't know what's happening at a base level, we might as well be asking a Magic 8 Ball: shake it up and take whatever answer comes out, because it's the same thing.

We don't know what's going to turn up there, or how, or when. So we really do need some level of understanding here.

So we're just going to touch on that. But what I really like is the cost-of-being-wrong mindset.

The thinking there is: yes, we can talk about ROI, and there are a lot of unknowns about how AI will be deployed and what that ROI will look like. But what if we get this wrong? That goes to the first fear.

But if we address that head-on in our decision process, we're ready to make more informed and rational decisions. So we're going to go through an eight-question evaluation framework and then an implementation strategy.

This is where the rubber meets the road, and it's where we're seeing a lot of projects fail today. So it's good to have a strategy, good to have caution, good to have everybody aligned before you get started.

So let's dig in first with the different types of AI. We were talking just before the webinar about automation versus AI.

A lot of times in my day job, when a client comes to me, there's an expectation that I can sprinkle some AI on any process and make it better. Nine times out of ten, maybe even more than that, we can put an AI layer in there.

But really what people are looking for is base automation. Think of RPA, robotic process automation; I know it's still around, though I think generative AI is going to enhance and/or replace it.

If we have defined processes that are rule-based, we can automate those without introducing the risk of AI.

Now there are obviously places where AI can add value, but we don't want to offload tasks and projects to AI that it's not geared for. I think if we talk about the different types of AI and the different types of automation, it'll become clearer what we need to look for in our automation and our use of AI in our companies.

So I want to go back to something we've been using very effectively for fifteen-plus years: machine learning. We've heard the terms for years, artificial intelligence, machine learning, deep learning, artificial neural networks, and they're all used sort of interchangeably, so it can be difficult to understand the differences between them.

I want to talk about this because when we think of classical machine learning, we're talking about deterministic algorithms that learn from data. And generative AI is also in the circle.

Let's look at this Venn diagram; there are actually more and more tiny circles we could put inside, but I wanted to keep it brief for this webinar. Very broadly, artificial intelligence is the overarching field: systems designed to mimic reasoning, decision making, and problem solving.

This is basically machines that can think and act like humans. And I don't mean artificial general intelligence here.

We're talking about just the broad field. Mostly, what we see out there today is narrow intelligence.

This is self driving cars, recommendation engines, forecasting tools. So that's the broad field.

RPA falls under artificial intelligence, and so does machine learning. Machine learning, as a subset of AI, is algorithms that learn from data rather than being explicitly programmed.

Machine learning really got big with the explosion of data. We were all talking a decade or more ago about big data and its impact on business.

But the real impact is that when we have access to this amount of data, we can train models, use algorithms that learn from this data. An example: if I had 10,000 pictures of cats, I could train an algorithm to identify cats.

Once it saw those 10,000 pictures of cats, it would learn how to identify cats. It's a very basic example, but that is not a program explicitly coded to recognize cats.

It's just trained on the data. You could feed it pictures of whatever, and it would learn to identify them.

So within machine learning, and I will say this before I move on, machine learning really does three things. If it sounds super complicated, all the coding and calculus and backpropagation and everything you associate with machine learning, it really is just doing three things.

The first is classification. That is identifying pictures of cats, dogs, whatever, and sorting them into categories.

The second is regression, or prediction. You have time series data that follows a pattern, and the model predicts how the pattern is likely to continue.

And the third is clustering. This is the customer segmentation we use in churn prediction, just grouping like items together.

Across all these tools, the superpower of machine learning, beyond being able to learn from data, is that computers are better than we are at identifying anomalies, making predictions, and clustering customer segments. They can find ways to interpret the data far more clearly than we can.

Think about defining customer segments. We might say, well, this is a customer cohort defined by when their first purchase date was, their demographics, their average income, their age, all the attributes we pick. But clustering algorithms can find characteristics we never thought made segments similar to each other.
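To make the clustering idea concrete, here is a minimal k-means sketch in plain Python, no ML libraries, run on a handful of made-up customer records (the data and the segment shapes are purely hypothetical):

```python
import random

# Toy customer records: (months_since_first_purchase, avg_monthly_spend).
# Purely hypothetical data, just to illustrate the mechanic.
customers = [
    (2, 40), (3, 35), (1, 50),        # newer, lower-spend
    (24, 400), (30, 380), (28, 420),  # long-tenured, high-spend
    (12, 150), (14, 160), (10, 140),  # a middle group
]

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    random.seed(seed)
    centroids = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            dists = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centroids]
            clusters[dists.index(min(dists))].append(p)
        # Recompute each centroid as the mean of its cluster (keep it if empty).
        centroids = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

centroids, clusters = kmeans(customers, k=3)
for c, cl in zip(centroids, clusters):
    print(f"segment near tenure={c[0]:.0f}mo, spend=${c[1]:.0f}: {len(cl)} customers")
```

The point isn't the math; it's that the algorithm discovers the groupings from the data alone, with no rule telling it what a "segment" is.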

So there's a lot of value in machine learning. And unlike the generative AI that's out there now, all of this is deterministic.

It's repeatable, it's explainable, we're using it to great success, and you always get the same response. There are dozens, if not hundreds, of applications of machine learning still just being rolled out across finance and accounting.

Now, a subset of machine learning is deep learning, and this is where AI gets the black-box problem. Deep learning is made up of artificial neural networks.

These are networks designed to mimic the human brain, so they handle more complex tasks, like the image recognition I mentioned earlier.

Image recognition would be a deep learning model, as is language generation. And the key distinction here is that where classical AI systems are logic trees, deterministic systems, machine learning systems adapt and improve over time, and they're data-driven.

The reason I'm talking about all this is that not all AI is the same. So when we talk about AI, don't assume we're just talking about ChatGPT, and don't paint the entire field with one broad brush stroke, because there are distinctions in there.

We need to clarify our expectations when we're looking at AI. Will the system follow rules, learn patterns, make predictions? Is it generating new content? Think about what we're doing.

And before I move on, I will say that the generative AI everybody's talking about is a huge artificial neural network. So it is a type of deep learning, but the difference is in the output, and we'll dive into that now.

Generative AI, as the name implies, is a system that creates: it creates text, images, audio, video. We're seeing more and more AI-generated video out there that can be produced in seconds now.

And the same goes for all the text being generated. This is very different from traditional AI, which was focused on, again, classifying, detecting, recommending.

Think of generative AI as a super-powerful autocomplete. On your phone or in your email, when you're typing and it recommends the next word for you, we know that historically has been notoriously bad.

If you've ever done that thing where you try to let predictive text generate an entire conversation for you, we know it goes off the rails pretty quickly.

And the reason is that the predictive text on your phone has a context window of about three to five words; it uses an approach called long short-term memory (LSTM).

Imagine trying to guess the next word when you only have three to five words of context. If I said "the grass is," where am I going with that? Am I going to say "always greener," or "too long," or something else entirely? There are too many options. But if you broaden that context window, you can start to guess the next word much better.

The big breakthrough in generative AI was the expansion of that context window. First, maybe to a few hundred words, then to thousands of words, or tokens, as they're referred to in the field.

And now Google and other models out there have million-token context windows. When you have that much context, you're able to predict what's coming next much better.
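To see why context width matters, here's a toy next-word predictor in Python that just counts what followed an n-gram in a tiny, made-up corpus (this is a cartoon of the idea, not how transformers actually work):

```python
from collections import Counter

# A tiny "training corpus": hypothetical, just to show the mechanic.
corpus = (
    "mow the lawn when the grass is too long . "
    "water it when the grass is too dry . "
    "they say the grass is always greener on the other side ."
).split()

def next_word(context_words, corpus, n):
    """Predict the next word from the last n words of context,
    by counting what followed that n-gram in the corpus."""
    key = tuple(context_words[-n:])
    counts = Counter(
        corpus[i + n]
        for i in range(len(corpus) - n)
        if tuple(corpus[i:i + n]) == key
    )
    return counts.most_common(1)[0][0] if counts else None

prompt = "they say the grass is".split()

# With a tiny context window ("grass is"), the model can only pick
# the corpus-wide favorite continuation:
print(next_word(prompt, corpus, 2))  # -> "too"

# With a wider window ("they say the grass is"), the intent is clear:
print(next_word(prompt, corpus, 5))  # -> "always"
```

Real LLMs do something far richer than counting, but the lever is the same: going from a few words of usable context to a million tokens is what turned autocomplete from a joke into something useful.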

That is what makes generative AI so incredibly good sometimes, but it can also lead to hallucinations, incorrect answers, and inconsistent answers. That might be acceptable when we're writing marketing copy, but not when we're doing variance analysis, comparing board reports, or closing the books. There still are a lot of areas, expanding every day, where generative AI can be put to great use in finance and accounting, but we need to understand the difference in how these systems work.

So when we think about rule-based automation versus generative AI: rule-based, deterministic tools are predictable, repeatable, and highly reliable, and they're strong for audit trails, but they're also rigid and expensive to scale. Not every company is going to have its own giant machine learning models that it can apply across the enterprise.

You can think of this automation as RPA, scripts, workflow tools, and these are excellent for tasks like journal entries and reconciliations. On the generative AI side, it's probabilistic.

That means outputs are going to vary based on probability, not rules. There's a whole other talk I give on data privacy, because I do think that if not deployed correctly, data privacy is a serious concern when using generative AI.

But when properly deployed, there's no more risk in using these generative AI tools than in any other cloud-based platform. I think a lot of times the fears around data leakage are used as a scapegoat for the broader fear of AI in general.

But again, it has to be configured correctly, and you have to have guardrails on how your employees are using it and how you're using the data. Generative AI right now is very powerful for tasks like drafting board reports and summarizing contracts.

But it does carry those risks I talked about. There's potential for hallucinations, and while that risk is decreasing, it still hallucinates more than most of our employees do, hopefully.

Opacity is a problem.

You can't tell an auditor that you just fed this into the magic black box and this is what came out. You've got to explain how you got the numbers.

It's gotta be reproducible, and you've gotta have that consistent performance. So generative AI use has to be paired with governance and human review, keeping a human in the loop.

There are ways you can get massive increases in productivity using generative AI. You just need to have that human in the loop, that human review, and the audit trail and everything around it.

And that's what we, as CFOs and finance leaders, need to be thinking about when we evaluate these programs and projects. The key difference isn't what we're going to automate.

It's how much risk each introduces when something goes wrong. And that doesn't mean there aren't areas where generative AI is the perfect solution, but we need to understand how it works and how to use it.

So let's start with what everyone wants to know: how can I find ROI on AI investments? I want to say a word on ROI. If you're doing big bespoke development for a system internal to your company, where there's an expectation it's going to improve employee productivity or whatever the goals are, then just like any other capital investment, you think of it in terms of ROI.

But if we're talking about rolling out generative AI, rolling out ChatGPT or whatever other tools your company is using, from Copilot to Claude and whatever else, we're talking about a software expense. And most generative AI rollouts, the successful ones, need to happen in two ways.

There's the top-down deployment, where management is picking the big projects to go after. These are the ones where we're looking for ROI.

These are company-wide initiatives. But then there's the bottom-up deployment.

This is getting generative AI into the hands of our employees: providing training, getting the guardrails set in place, having a policy, and giving them consistent tools and training on how to use them.

An organization that is fully using generative AI is going to have both of those approaches. With the bottom-up side, the ROI is happening at the employee level, and it's harder to measure; it's more self-reported.

So maybe let's not obsess over the $30 to $60 a month per employee for whatever tool it is. That's a software expense.

We're going to get productivity gains, and I've got a million studies that show the productivity gains around that. But trying to find concrete numbers and saying this data entry person is going to save sixteen hours a week, or whatever, is very hard to nail down.

Instead, a good way to evaluate generative AI is to ask what happens if we're wrong. A lot of us here have used RPA before.

If you've got an RPA bot that misinterprets and enters a wrong invoice amount, that's fixable, it's auditable, and it's got a contained impact. If GenAI makes a mistake on a board summary, there is increased risk there.

There could be reputational risk, stakeholder trust issues, just like if you had a rogue employee or you didn't check the numbers you were reporting. GenAI is getting more human-like here, and the risk is a little different from an RPA bot making a mistake.

Where the risk gets highest is when we try to offload the very human task of decision making. If you blindly follow an AI recommendation, or you have a bad forecast model, there's strategic risk, there's business decision impact.

This is where I say our jobs are safe across the board, if you're thinking rationally. One way to think about these levels of errors is a great quote by a guy named Clifford Stoll: "Data is not information, information is not knowledge, knowledge is not wisdom, and wisdom is not understanding."

I use that quote all the time, because to me, as you move through each level of it, you're moving through levels that automation and AI are knocking out. We're squeezing the human value up to the top of that list, to turning knowledge into wisdom.

If we can automate the mindless, lower-level parts of the job, it really has the opportunity to elevate us as humans. So think about that as you go through and look at what we're going to automate and what we're going to apply AI to.

On the risk side, when we're evaluating risk for any project, we want to think about magnitude: how big is the impact if it fails? We want to think about frequency.

How often does it run or influence decisions? Primary targets for automation are things that happen all day, every day, that are kind of mindless, whether it's swivel-chair data transfer or something very rule-based that we just do over and over and that takes a lot of human time. That's where we want to free up human capital to focus on more meaningful endeavors.

So the frequent tasks are the ones we should really be looking at for automation. And recoverability is the last one.

Can we catch or reverse errors if they happen? I love a two-by-two matrix, and this one, I think, is a good way to frame which projects to take on.

So let me walk you through. We're looking at frequency and magnitude here.

And let me see if I can get a little more of this displaying on the screen. First, we have high magnitude, high frequency.

These are the highest risk. That's frequent use with severe consequences.

So this is an area where we want to avoid full automation. This requires human oversight, strong controls.

If you move over to the left side, high magnitude and low frequency, these are rare but catastrophic risks. Manual review is essential here, and this is an area where you would use AI very cautiously.

This whole top half of the two-by-two requires heavy thinking. It isn't where we would run off and do a pilot AI project.

But when we get down to the bottom half, especially the bottom right where we've got low magnitude and high frequency, this is a great zone for GenAI pilots. Frequent use, low downside.

This is where you want to test and learn. We can do it safely, the impact isn't going to be big, and there are great returns coming.

And on the left, we've got low magnitude and low frequency: infrequent tasks with minor impact.

This is safe to automate with minimal oversight. So just think about where your tasks land.

Match the tool to the risk. Use GenAI where it's safe and use discipline where it's not.
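That two-by-two can be written down almost verbatim. Here's a small sketch in Python; the task names are hypothetical, and the recommendations are just the quadrant guidance from above:

```python
def triage(task, magnitude, frequency):
    """Map a task onto the two-by-two: magnitude is how bad a failure is,
    frequency is how often the task runs. Both are 'high' or 'low'.
    The recommendations mirror the matrix described in the talk."""
    matrix = {
        ("high", "high"): "highest risk: avoid full automation; human oversight, strong controls",
        ("high", "low"):  "rare but catastrophic: manual review essential; use AI very cautiously",
        ("low",  "high"): "great zone for GenAI pilots: test and learn safely",
        ("low",  "low"):  "safe to automate with minimal oversight",
    }
    return f"{task}: {matrix[(magnitude, frequency)]}"

# Hypothetical example tasks, classified by the quadrant they fall in:
print(triage("invoice data entry", "low", "high"))
print(triage("board-report numbers", "high", "low"))
```

The value of writing it out this bluntly is that it forces two explicit judgments, magnitude and frequency, before anyone argues about tooling.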

Okay. So this is where I want to go through our eight-question list.

This is really just a way you can think about assessing any project that comes along. First is your foundation assessment.

A good question to ask: could we automate this instead of hiring for it? That is the question we're seeing CFOs and CEOs ask across the board, and you're hearing a lot of public comments along the lines of, "Before we add another headcount, I'm asking all my employees whether we can automate this instead of hiring for it." That's where you can start the exploration of defining what the processes are and thinking about where we need a human in the loop. Maybe we can't fully replace a role, but how much could we augment it? How much could we change existing roles because we can automate half of what a role previously did?

The next question: is this workflow stable enough to automate? This is the unsexy part about AI. You can't even start with AI until the groundwork is in place.

First, obviously, you have to have the foundation of your data house in order, because it's garbage in, garbage out, and AI thrives on data. The fuel for AI is the data.

So you've gotta have the data right. But you also have to have the processes.

You need to know the people, processes, and procedures behind all the work being done in your company. If you haven't created SOPs before, now is the perfect time to do it, because while SOPs previously might have been used to train humans, these SOPs will become the training platform for generative AI.

As generative AI gets better, as agents get better, having these SOPs in place, with clearly defined processes identified and documented, will put you farther ahead than almost anything else you can do. You can't automate chaos.

So you gotta fix the processes first. The third question on the foundation assessment is: do we trust how the tool handles financial data? This is where we look at security, privacy, and auditability.

If it's a black box, and we're a public company or we face audit, we're not going to be able to explain it. We can't use it.

This is especially true with third-party LLMs. All the frontier companies do have tools in place right now where you can use them, and there are ways you can build prompt-and-response logs.

There are ways you can validate data. There are ways you can store even the code these models write.

Right now these are primarily bespoke engagements, but you're starting to see more and more tools either integrating generative AI directly into the SaaS platform itself, or connecting to the large language models through protocols like MCP, letting users interact with their data using chat rather than writing SQL queries or SuiteScripts or whatever people used to have to do. And the fourth question on the foundation assessment is: is our data clean, structured, and complete? Again, data is the foundation for any AI project you're going to do.

We've been talking about digital transformation for as long as I've been in the workforce, which is, I don't know, half a century now or something like that. So if you haven't digitally transformed at this point, you might be a little behind.

But that transformation is really a moving target; it's not a one-and-done transformation. It's really an evolution.

We were all dwelling on data lakes and data warehouses years ago, and now there are all these cloud tools and other ways to handle the data. But good data governance, a data dictionary, identified sources of truth: you've got to do all that before you drop AI in anywhere, because if it's hard for your human agents to go through and work with the data, it's going to be that much harder for AI to do it.

So we start with the data, and we have those processes and procedures in place and documented.

And once we've made this assessment, we can rank and prioritize which AI projects to move forward with. Next is strategic alignment.

What's the real cost of being wrong? We go back to that matrix, and we'll pop back to it just to show it again. We look at the magnitude and the frequency and the high-risk and low-risk scenarios.

And will this still make sense at twice the scale we're at now? Think ahead of where you're going. If you don't have clearly defined processes and you automate something now, but you're going through M&A, or the company is hyper-scaling, or even just growing at a standard pace, is this a process that will grow with the company? Are you going to have to go back and keep reinventing the processes? Think long-term about these decisions, especially if you're building something out with an AI tool or linking systems together with AI.

Is it going to scale with me? Then: who else depends on this? If you're doing something just in the finance department, who are the other stakeholders? Is sales or ops using this information, or are we using information from them? Are we all aligned? Do we understand the source of truth, the reasoning behind it, and how we're going to use the data? It can't be done in a vacuum. It takes the full cooperation and strategy of the entire senior management team across the company.

And finally, and this is arguably true in any department but especially in finance and accounting: can we explain how these results are produced? This isn't a technical question; it's about stakeholder trust. It goes back to that first comment I made about the fears people have around AI they don't understand.

So we start small. We build trust.

We get the training. And then understanding how the results are produced is what lets us truly shift the organization, not just from a technology perspective but in that change-management way, where the entire company is leveled up and understands generative AI.

They're all using it. They know the the strengths and limitations, and we're all getting those efficiency gains from that.
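As a recap, the eight questions above can be sketched as a simple go/no-go checklist. The question text comes from the framework; the gating rule (all foundation questions must pass before piloting) is an illustrative assumption layered on top, not part of the talk:

```python
# The eight questions from the evaluation framework, grouped as in the talk.
FOUNDATION = [
    "Could we automate this instead of hiring for it?",
    "Is this workflow stable enough to automate?",
    "Do we trust how the tool handles financial data?",
    "Is our data clean, structured, and complete?",
]
STRATEGIC = [
    "What's the real cost of being wrong?",
    "Will this still make sense at twice the scale?",
    "Who else depends on this?",
    "Can we explain how these results are produced?",
]

def assess(answers):
    """answers: dict mapping each question to True (satisfactory) or False.
    Illustrative rule of thumb: every foundation question must pass before
    a pilot; open strategic questions become the pilot's watch list."""
    open_foundation = [q for q in FOUNDATION if not answers.get(q, False)]
    open_strategic = [q for q in STRATEGIC if not answers.get(q, False)]
    if open_foundation:
        return ("fix foundation first", open_foundation)
    return ("pilot with guardrails", open_strategic)

# Hypothetical project: data house in order, but scale and explainability
# still unresolved.
answers = {q: True for q in FOUNDATION}
answers["Who else depends on this?"] = True
decision, watch_list = assess(answers)
```

Used this way, the framework stops being a slide and becomes a gate: a project either clears the foundation questions or it goes back for more prep.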

So, thinking about that framework to action: first you assess the cost of being wrong, then you look at scalability, you look at who the stakeholders are, and you ensure it's explainable, starting with pilots, putting in those guardrails I talked about, and building a scalable operating model. And I want to make one more point before my time's up, since I know I'm getting toward the end of my section.

Think about that top-down and bottom-up simultaneous way of rolling out generative AI. And don't discount the bottom-up, because even if we're not seeing massive, explainable ROI from our employees using generative AI, the self-reported numbers are striking: the latest one I saw was that employees at 90% of the companies surveyed admitted to using generative AI at work.

And there's a scary part to this: if your company hasn't sanctioned it or put guardrails in place, or if it has outright prohibited it, it's still happening. That's why we really need to address the bottom-up side.

But if we can start there, there are small wins. And because this is distributed across all the employees, those wins add up and you do get better results.

And we see time and time again, from surveys and related data, that there are efficiency gains to be made from individuals using it. But at the same time, we can have massive organizational shifts from the top down that give us equally big and easier-to-measure ROI as well.

So really think about both of those. Think about what's scalable.

Think about those guardrails and training, what your employees know, having the right team in place, and having collaboration across the entire management team as you roll forward with these GenAI projects. Obviously, with all the numbers I started with, there could be a temptation to throw up your hands and say, well, I don't even know why I'm going to do this.

Nobody's able to pull it off. But I say there are plenty of opportunities to use AI, but use it like a CFO.

It's here. And yes, budgets are tight.

Budgets are always tight. I will say, talking with companies last year, there wasn't existing budget for AI.

Now most of the companies I talk to have some dollars committed to AI development. But think about how you want to deploy that; you don't want to be part of the 95% in that report who say it failed.

You want to think about this cautiously, smartly, pick the right projects, and move forward at a measured but rapid pace. And keep in mind, I'm not advocating we go back to being the "CF-No."

The CFO mindset isn't skeptical.

It's structured. So bring the framework to all the proposals, just like you would a capital investment proposal or anything else that came across your desk.

Demand evidence, clarity, and alignment; that's where you're going to get success. These projects are failing because they kicked off with the wrong mentality and the wrong prep, not because the technology is inherently bad.

So for everyone walking away from this today, I would say: apply the framework to the projects you're considering. Keep that discipline and demand the readiness.

You're going to be working with vendors and internally. Apply all of that, and evaluate your AI movement that way.

And I think you'll start to see a clearer path through all the noise around the AI hype cycle that's out there now. So with that, I will wrap up my presentation and thank you for the time.

And I'm going to stick around and listen to the panel as well.

Kathryn Adams: Great. Thank you so much, Glenn, for that insightful keynote.

Alright. Well, I'm excited to welcome three experienced CFOs to join our CFO panel today, who will share a behind-the-scenes look at what it really takes to build high-performing finance teams.

They're going to talk about the challenges they've faced throughout their careers, lessons they've learned along the way, and strategies they use to drive real impact across their organizations. I'm going to take a moment to introduce each of them, and then we'll dive into the panel.

So first, I'm thrilled to introduce David Samuels. David is an award winning financial executive with leadership experience in Fortune 500 and private equity backed companies.

He's a six-time CFO who has led over $1B in financings and driven transformations through M&A, restructurings, and operational turnarounds. He's currently the CFO of DrFirst, has held C-suite roles with many cutting-edge technology companies, and also had a long tenure as a senior finance leader with Marriott Corporation and its businesses.

Beyond the C-suite, he brings valuable insight as a corporate and nonprofit board member, frequent conference speaker, guest lecturer, and former Forbes Finance Council contributor. Notable affiliations include the Economic Club of Washington DC and the board of the Wolf Trap Foundation for the Performing Arts.

Next, I'm excited to welcome Vipul Shah. He is the cofounder and CFO of FinQore, an expert-verified, AI-powered data platform purpose-built to serve strategic CFOs and finance teams.

Leveraging two decades of experience as an operator and investor at Goldman Sachs, Brightwood Capital, and Aeromark Partners, Vipul built FinQore to solve critical challenges driven by fragmented data and inefficient manual processes that frequently impede growth and strategic clarity. He previously cofounded and successfully exited a fintech solution, and he actively invests in and advises enterprise software, AI, and fintech startups.

And then finally, we're joined by Chad Wonderling, CFO of Zone & Co, where he oversees finance, operations, and corporate development. He previously held leadership roles at Salesloft, helping drive four acquisitions, raise over $300M in financing, and facilitate the company's $2.3B sale to Vista Equity Partners. Chad began his career at KPMG and has sixteen years of experience leading finance teams at high-growth software companies.

So thank you all for your time today. I'm going to jump right into the panel and get to what everyone's here for.

Glenn talked about evaluating AI like a CFO. Where have you guys found AI and automation to actually move the needle in finance operations? David, why don't we start with any thoughts that you have here?

David Samuels: Sure. Happy to be here, and thank you for including me.

I think Glenn gave a terrific, quite informative presentation. He talked about a framework, and that's how I think of what we're doing at DrFirst. We have employed an AI-first culture, and it's interesting because everyone at the organization has a license to Claude, which is made by Anthropic.

So over the past several months, we've really been extremely AI focused. We have group chats.

We have leadership meetings. We've actually set up an AI leadership team to provide a lot of structure to what we're doing over the next several months in facilitating and driving efficiency, process improvement, and automation.

And I think of this as the early innings, and I also think of the crossover. The time that we're in right now is really a crossover between automation and AI.

There's a lot of experimentation. But we're getting comfortable with the technologies, getting comfortable with research, getting comfortable with Claude.

Not only as a research tool, but really how to get it to function as a developer, if you will, to enhance process efficiency and to create use cases that are repeatable. I also look at the business system tech stack.

So we are a NetSuite shop at DrFirst, but we have a number of different business systems. And so we have spent many months talking to all of our vendors, and there was a remark in one of the slides Glenn just presented:

If your vendors aren't ready, they're probably not the right vendors for you. One vendor that is ready, and I think a little ahead of the curve, is Tipalti.

That's our vendor management system. We've had a lot of wins there.

We are down to less than one FTE at this point in terms of processing all of our payments to many, many hundreds of vendors, and we have a very repeatable process and cycle, and it's very efficient. Tipalti has done a great job at what they've done.

You know, I think of it as AI-powered invoice processing, intelligent three-way matching, and, important to me, smart approval routing. So I think they've done a really nice job.

Financial close automation, we've seen some benefit there mostly on the consolidation side. We integrate NetSuite with Adaptive Planning and also with Salesforce.

So we've been able to shorten our close cycle significantly. We'd like to get it to five business days.

We're probably in the seven to eight business day time frame now. It was a lot longer.

But certainly, using the tools that we have, driving efficiency, driving process automation, we've been able to do that. And a lot of that's been done internally. Cash flow forecasting is probably the third and last one that I'll focus on right now.

Those are my main remarks. But, as I said, we use Adaptive Planning.

That platform has tools where we can actually use the data we have on a go-forward basis to do predictive modeling. It's not perfect.

It's developing, but at least it gives us, you know, some some foundation on a go forward basis to improve and refine that.

Kathryn Adams: Great. Thank you for sharing your insights.

It seems like AI is really helping you move the needle in your organization. Vipul, do you have anything to add to that?

Vipul Shah: Yeah. I mean, I think Glenn covered some pretty important topics.

So I'll start with the first one that came across loud and clear. When I first came along, and we're an AI-first company ourselves in terms of the solution we provide, the first thing we put in was a responsible AI policy.

And I like what Glenn said: 90%+ of people are using AI in some form or another. You might as well put in a policy, get a business version of the tool, and turn off data sharing, because that's the biggest risk to the business.

So that's just leaning into what Glenn was saying. In terms of how I use it on a day-to-day basis, as an AI fan myself wearing the finance hat and working with a lot of finance folks, the biggest use is accelerating analytics.

And, I use Claude for some things. I use OpenAI for some things.

It just depends on what the use case is. What I've found in both situations is that you've gotta have context-rich prompts.

Prompts are basically the questions you're asking the AI. And my one callout to everybody on this call is: get really, really good at prompting.

The better you prompt, the better and more reliable the answers are going to be. And the second thing, and Glenn, I love that you emphasized this on almost every slide, is that you need really structured, segmented data, because AI amplifies the garbage-in, garbage-out problem.

So the two things I'm obsessed with are: make sure you're really good at prompting, and we help our teams with this as well, and make sure that the data you're bringing to AI is structured and segmented for that use case.

Now, the third leg of the stool, in addition to the prompts and the right data, is keeping a human in the loop. I still verify all the math and the different things I'm looking at.

It is getting better and better. Trust but verify.

Right? AI like a CFO, as Glenn said. The second area, and probably lots of folks on the call are using it this way on a day-to-day basis, is legal document review, research, and board prep.

So that's become a more and more common use. I won't talk about that too much.

The third and more underappreciated use is that I like to use AI to pressure-test my own thinking, run scenarios, and challenge any assumptions I might be making, to act as a sparring partner. For those that are super nerdy, I'll share that I use Claude Opus 4.1 if it's more business-y, sales-y, go-to-market in nature, and I'll use ChatGPT 5 if it's getting into more technical math and deep research.

Kathryn Adams: Great. Thank you for sharing.

And, Chad, anything you want to add before we move on to our next topic?

Chad Wonderling: I would say just a couple things relative to running the finance operations. First, I'll second, third, and fourth everything that Glenn and these guys just shared. AI is a culture. It's an operating model tethered to strong, well-defined processes.

And in terms of running the financial operations, where we've seen the biggest impact at Zone has really been incremental; it hasn't been the flashiest kind of wins. For example, the high-frequency actions that are very recurring in nature: revenue recognition, AP processing, billing workflows, even some ARR tracking.

So it's really the high-frequency stuff where we've seen the biggest impact so far as we think about our financial operations. But, of course, those are tethered to strong processes and frameworks.

Kathryn Adams: Now I want to go into our next topic. It's really looking beyond the boardroom and into what CFOs have to explain, defend, or pivot to in board meetings.

Chad, I'm going to start with you. After a major raise or acquisition, what changes first in your daily priorities as a CFO? And, Vipul or David, feel free to chime in as well.

Chad Wonderling: Yeah. The biggest thing I would say is the speed to insight.

I think the speed to insight becomes paramount. And our job within the business as the CFO is to provide those insights to enable the business to drive results.

Within finance, I think about the what, the so what, the now what, and it's the now what where we have to go. So it's beyond just being the scorekeeper and really into being the owner of the business model.

So specifically, after a big transaction, maybe you have a new PE sponsor coming in. There's a lot of focus around the quantitative components.

And really having your hands on those quantitative components, those metrics that can permit and promote the insights to drive corrective action, is where you need to be to drive the now what.

David Samuels: Yeah. Yeah.

I would echo that. And I'd also add that cash management is absolutely paramount.

And you're going to see that whether it's a private equity investor or a lender. You're seeing a lot of recaps today in the marketplace, and companies have a dual ownership structure between private equity and a large lender, so they have a lot of debt.

And so cash management is absolutely critical. We're really focused on that, like, every day: how the money comes in, what form it's coming in, wire, ACH, lockbox.

And we have reports twice daily. We have very defined processes about how the money goes out.

Right? You've got payroll, you've got commissions, vendor payments, and I talked a little bit about Tipalti. So that is one area.

The other important area is allocation of capital. Think about how you're spending your money.

What's providing the biggest lift to the business? And sometimes it's not just about EBITDA and how to enhance or generate EBITDA; there are other items that may be important, where you're going to spend money in the short term and then truncate it to put that capital to use in a different way.

Kathryn Adams: Yeah. Vipul, any insights there? You've been on on both sides.

Vipul Shah: So for everybody in the audience, don't hold it against me. I started my career as a founder and operated a business for a dozen years, but then I did switch to the dark side.

I've done buyouts and roll-ups, so I've sat on both sides. What I've found, and the empathy I've built over the years: the post-investment shift is immediate and it's intense.

And what drives that, just to shed a little light behind the scenes: having been an investor, I know it's not uncommon for an investor to have a team of multiple analysts spending multiple months dissecting every KPI, every cohort, every segment, really building their investment thesis and understanding the drivers that are going to drive that growth. Because when they're investing in a business, no matter whether you're VC, growth equity, or private equity, you've got an expected return you're underwriting to.

It might be a 2x, 4x, 10x, whatever it might be. So now you've done all that work for months.

As soon as the investment closes, that is on the CFO's plate. That's on your plate.

That's on your team's plate. But you still got the same lean and mighty team and pressure to hit rule of 40, pressure to hit rule of 65 and do everything with the same amount of folks.

So my suggestion on how to navigate that immediate and intense shift: sync up with your ELT and the new investors and really understand what metrics matter the most and what you need to monitor. Every investor has different DNA and a different reporting cadence. Have open and early conversations, be really open in those conversations, and build a roadmap, because there's no way you're going to take six months of analytical work that the investment firm has done and start to produce it within the first hundred-day plan.

It's just not going to happen. So laying out that plan is critical, and it isn't going to change their expectations.

They're going to want institutional grade insights, and they should. Right? They're going to expect that of a lean team, and they should.

It's just a matter of making sure you're creating the right confidence and the right cadence to get there. And I think if you can do that, it really works.

The one thing that does cause a little friction: as a finance leader, you do have a dotted line to the investors in the way your reporting structure works, and you have a very clear line of reporting to your CEO and your ELT. Doing all of this while managing, balancing, and navigating that is more art than science.

I mean, Chad and David have been through this way more times than I have. It's a pretty tricky balance.

Kathryn Adams: Yeah. Thanks for expanding on that.

I want to build on this topic a little bit more. And, you know, we see a lot of shifts in this business and industry.

So how do these shifts influence the annual operating budget cycle, especially in the macroeconomic climate we're seeing in this environment? And how do you keep plans credible and achievable? David, I'd love to start with you and get some insights.

David Samuels: Yeah. Thank you.

You know, we think of the annual operating budget as a financial blueprint. It is created with the business at a point in time.

It doesn't mean it's the end-all and be-all. And so what we do in our business, and I think most CFOs and most companies do, is run rolling forecasts.

You are always on top of the data and the insights from the data. You're communicating with the business every day, every week, every month about changes, new contracts, loss of contracts, any unexpected situations that can have an impact on the business, a major customer, a major vendor, and attainment of the goals for top line and bottom line.

So we've got rolling forecasts. We actually spend time with each revenue-generating and non-revenue-generating business leader and unit manager.

Every single month, we roll out our report cards. That's simply, you know, budget versus actual.

But we make the conversation meaningful as a business dialogue, and we talk about what's going to happen during the rest of the quarter or early in the next quarter. What do you think is going to happen in the first half? How do you think the end of the year is going to shake out? We bring KPIs, and we have debates about pipeline, particularly when we're talking about sales and driving sales in the different areas of the business.
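[Editor's note: the "report card" David describes is, at its core, budget versus actual with a variance per line item. A minimal sketch of that arithmetic, with entirely hypothetical figures, not DrFirst's data:]

```python
# Hypothetical monthly "report card": line item -> (budget, actual), in dollars.
report_card = {
    "SaaS revenue": (1_200_000, 1_260_000),
    "Services":     (300_000, 270_000),
    "Payroll":      (-650_000, -640_000),  # costs carried as negatives
}

for item, (budget, actual) in report_card.items():
    variance = actual - budget             # positive = favorable under this sign convention
    pct = 100 * variance / abs(budget)
    print(f"{item:14s} budget {budget:>10,} actual {actual:>10,} "
          f"variance {variance:>+9,} ({pct:+.1f}%)")
```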

And so we'll have a business discussion every single month. Using automation, we try to drive precision into these discussions, data precision.

And it's continuous monitoring. It doesn't end, and everybody wants to know about validity, certainly with everything going on with the administration and tariffs and how they're impacting your customer sets.

I mean, as these tariffs are being rolled out, how do they impact our customer sets? We were on it, we did the research, and we kind of knew. And how do they impact our business? We knew the answer to that as well. So we're focused on it.

We have a dedicated FP&A team that works tirelessly with the business to just make sure that we're on top of all the data and we're ready. And it's not just adhering to the budget.

That's a framework. That's a point in time.

But really, where are we going, and what business decisions do we need to make, first, to achieve the budget? And if we're exceeding budget on the bottom line, can we deploy more capital into the business that will produce even greater ROI?

Kathryn Adams: Yeah. It seems like you gotta be fluid, agile, and proactive, and really stay on top of things, like you mentioned, the fluidity here.

And I know a couple of you mentioned also building that investor trust. So, Chad, how do you build that trust with your investors, while also maybe having to make some bold moves, especially in this environment?

Chad Wonderling: Yeah. I mean, it starts, as we referred to, with the transaction, the acquisition, the investment.

I mean, it really starts pre-investment. And that is through the relationship first: understanding, seeing the world through their eyes, putting yourself in their shoes, being able to anticipate what they're looking for, and really knowing down to the detail what the investment thesis is.

I think it's really understanding that. But then secondly, and just as important or even more important, credibility and trust come through execution.

That's most important. And then I think it's about, I used this phrase earlier, the what, the so what, the now what, being very proactive in the sense of, you know, we talked about the rolling forecast and how David and his team are approaching that.

It's it's not an annual operating plan. It's a continuous operating plan.

And around scenarios: understanding various scenarios, if-then-what, and being very proactive in having that open dialogue with them, because you're kind of the guiding light. The CFO is that, even for the investors and the sponsors.

So I think those are the biggest things that really stick out to me.

Kathryn Adams: Yeah. Vipul or David, anything to add on building trust with your investors?

Vipul Shah: I think David and Chad really covered the big points. I would just add one thing.

Good news fast, bad news way, way faster.

David Samuels: Yep.

Vipul Shah: Easiest way to build credibility internally. It isn't going to make you popular.

I know the CFO is no longer the "CF-No."

But you're still never going to win the popularity contest. That's why Chad, David, and I hang out with each other.

But you need to give good news fast, bad news way, way faster. And that's the hallmark of the world-class CFOs I see, like David and Chad: they don't surprise.

David Samuels: Yeah. Surprises are not good.

It's an evolution. It's not going to happen overnight, but I think it also gets back to that framework and educating everybody on the framework.

You've got a process and constant communication, constant feedback, and that will build trust. Get them educated to build trust.

Kathryn Adams: And I know we had a couple conversations before this panel, and it was brought up that you hear investors say, I need 90% of the CFO work to go through AI. So on that, Vipul, how do you navigate that pressure from private equity firms versus the reality that teams just can't shed three fourths of their finance and accounting staff overnight?

Vipul Shah: Yeah. I mean, this one's tricky for me.

As a big AI fan and founder of an AI business: the investor expectations of what AI can do are not fully grounded in what's possible today, but I think they're encouraging you to go down this direction for a reason. They know competition's growing, interest rates have gone up, and there's just a lot more pressure on a business today than there ever was before.

So one of the areas that provides a lot of hope when asking, hey, how do we grow in a profitable fashion, is leveraging AI in the right places.

I like what Chad, David, and Glenn all covered: pick the high-frequency, manual, mundane tasks, and look to have as many of those be AI powered as possible. You've gotta keep expert oversight in the mix.

I'm biased, but that doesn't mean I'm wrong. Right? That's a lot of what we do, but that is key: where AI can help, do it, and keep a human in the loop.

And at the same time, manage expectations in terms of: hey, we're going to use it here.

We're not going to use it here. We are trying things.

We're testing things. And another thing, and it's going to sound a little duplicative, but from a cultural standpoint, part of what your investor, whether you call it a PE sponsor, a growth equity firm, or a VC, is looking for is: are you as the CFO advocating for that additional budget to explore AI, to leverage AI and automation and the benefits that come with them? But don't let anybody, whether your ELT or your board, believe that you're going to buy some tool and the next day 75% of the people are gone.

I mean, David used an example. It's taken them many, many months to get to the results where they're down to one or two FTEs in a given function.

So it's a journey. It's not a destination.

Kathryn Adams: Yeah. David, Chad, anything to add here?

David Samuels: No. I think, as I just said, you have to educate.

Right? And, you know, Glenn made a statement earlier using the word ignorance. People, PEGs and sponsors, might say we can wipe out three quarters of the finance and accounting team.

You can't, depending on the size of the organization and what processes can be automated. If you have a simple one-product organization in the SaaS space, you may be able to have that simplicity. But if you're a half-a-billion-dollar revenue company and you're publicly traded, or even if you're not, you've got other stakeholders, you've got debt, and you've gotta go through an audit too.

And everybody has auditors, and certainly you have to make sure that you comply with certain documentation and processes. So it's an evolution.

It may get there in certain easier areas: AP/AR, certain static journal entries during the close process. But it is an evolution, and it's really about educating.

And I've been through this with other initiatives over the years. So I'm familiar with how to structure and how to educate.

Kathryn Adams: Now I want to start to, you know, look ahead, and I have a question for all of you. How do you weigh near term efficiency gains against longer term transformation when evaluating new AI tools or automation investments?

Chad Wonderling: Yeah. I would say, first, in the near term: start small, be consistent, and then allow it to compound over time, because it is those compounding small efficiencies that are going to win in the medium and even the long term.

And then I think there's another element, too, just by the sheer nature of our roles: we've got to keep in mind a year to two years out, meaning we've got to be solving today the problem that could occur a year to two years out. Okay? The only way we can get there is, for example, having clearly defined processes and very clean data that allow us to leverage AI to drive high-leverage impact today for a better tomorrow.

David Samuels: Yeah. The only point that I would add is my belief that 2025 and 2026 are transitional years.

We're going to see a lot of experimentation. We're going to see a lot of use cases, and we will see adoption of best practice in use cases.

'27 maybe brings AI agents into more formality, if you will. But I do think it's really process.

It's iterative. And I was having this discussion with my team the other day, and I said, when you look at where we are and how over the past few years we've built our business system tech stack, it's because, you know, we are in the market.

Right? We understand best practice. We understand this has been an evolution.

We're going to look back five years from now. Right? And, you know, it'll be like, yeah.

This is easy. It's different, but this is where we were five years ago, and this is where we are today.

And this is what the majority of organizations are doing because it is best practice, and it will just become adopted.

Kathryn Adams: Yeah. Vipul, anything to add before we open up the floor for questions?

Vipul Shah: I like it. You know, small wins.

One of my favorite books is The Hard Thing About Hard Things. There's no silver bullet.

It's lots and lots and lots of lead bullets, and no other role embodies that more deeply than the CFO, where anything that goes wrong is a big issue.

The millions of things that go right, that's the expectation. Right? It's an awesome job, a bit of a thankless job, but it rewards being very measured and thoughtful.

I do think that the evolution of the CFO pushes more and more on the topics that David and Chad talked about, which is that the CFO has gone from being back-office support to a strategic front-office decisioning machine. It's the second most important role in any company, in my opinion, and I think that trend is only going to go in one direction.

And so be thoughtful, be methodical. I'm going to steal Glenn's phrase: AI like a CFO.

For everybody listening to this call: AI like a CFO. Measure, know every metric, know why you're doing it. The ROI needs to be driven by time and cost.

I don't think everything is going to have a dollar ROI. But if you can turbocharge your best people and create super analysts, those things are going to have very high payoffs.

Kathryn Adams: Well, thank you all for your time. I want to make sure we leave enough time for the questions that we had.

We had some questions come in during registration and here in the chat. I'm going to bring Glenn back to the screen here as well.

So we will see him here shortly. Alright, Glenn.

We had one specifically come up for you during your presentation. An attendee was wondering: what does properly deploying AI mean and entail?

Glenn Hopper: Yeah. Okay.

I'll take another thirty minutes real quick. So properly deploying AI really starts with alignment.

And it depends on where you are. Obviously, this panel is very knowledgeable on the topic. But different companies of all sizes are in different places.

So I'll talk about when I come in, and we're going to assume we're going from not AI savvy to deployment. To me, it starts with educating senior management first.

It can't be: I'm just hearing about this, I don't use it, but I want all my employees using it.

I think senior management has to understand how it works, interact with it, and have an understanding of what we talked about in the presentation: this is how it works, this is when to use it, and this is when to question it. So start with that education of senior management. Then, once it's understood by the entire management team, you can decide: okay, where's our comfort level? What are we going to allow the employees to do with this? What data can go in? Are we going to have a zero-retention policy? Are we going to use Copilot or OpenAI enterprise? What tools are we going to use?

And then once you have that, you get the training for the employees, have them out using it, and you're starting that bottom-up implementation there. The framework we talked about in my presentation is for the top-down part.

Let's pick the projects. And you've gotta pick the right projects first.

I'm a huge agile guy from the development side. I think you find test projects, you pilot, you get quick wins that build the trust and roll it out.

And as the organization gets more savvy and the technology advances, then you can start taking on the meatier, longer projects. But rolling it out, it's not a big waterfall that you just hand over.

Here's your magic AI kit. It is an incremental development across the board coming from two directions.

So you can get there reasonably quickly, but you have to be measured and organized in the way that you roll it out.

Kathryn Adams: We had another question come in. When do you think AI-related investments go from CapEx to OpEx? What factors or KPIs do you think will help pivot those decisions? Does anyone want to volunteer to answer that one?

David Samuels: Yeah. I was going to say today, depending on the type of business you're in, if you're an operating company, most today are going to be OpEx investments unless you're, you know, procuring hardware or some other platform that's needed in the business.

But I think today, the majority of smaller investments will be OpEx related.

Kathryn Adams: Vipul, did you have some input on that as well?

Vipul Shah: Yeah. Yeah.

I would agree with that. I do think there's going to be a change in terms of whether you can capitalize expenses or not.

And the question is, as pricing and other things evolve for these solutions: the more customized, the more tailored, is there a significant enough upfront cost that you can amortize over time? I think that'll happen, but it'll probably happen at the enterprise level first. You know, I don't know exactly the audience we have.

Probably most are SMBs, but if you're a big enterprise, then I imagine there are going to be capitalized expenses. For small and medium-sized businesses, one of the KPIs I look at for whether you're effectively deploying AI, a KPI we live by as a North Star, is revenue per employee.

I think proper deployment of AI over time is going to take revenue per employee to another level. You're just going to see it. And as a result, there's no free lunch.

There are going to be more CapEx expenditures to get that revenue per employee to a different level, and I think that's one KPI. I don't like having tons of KPIs.

So for the question that was asked, I just wanted to answer that one point: revenue per employee is going to be a good guide. And I think your investors are going to start to look at that.

And you can view that as a 100,000-foot-level proxy for whether you're operating efficiently.
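Vipul's North Star metric is simple division; here's a minimal sketch with hypothetical figures (not from the panel) showing how the KPI moves when AI lets a flat headcount support more revenue:

```python
def revenue_per_employee(annual_revenue: float, headcount: int) -> float:
    """North Star efficiency proxy: total revenue divided by total headcount."""
    return annual_revenue / headcount

# Hypothetical SMB: AI deployment lets the same 50-person team support more revenue.
before = revenue_per_employee(10_000_000, 50)  # 200,000.0
after = revenue_per_employee(13_000_000, 50)   # 260,000.0
print(f"Before: ${before:,.0f}/employee  After: ${after:,.0f}/employee")
```

The appeal of the metric is exactly this simplicity: it rolls headcount discipline and AI leverage into one number an investor can track quarter over quarter.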

Kathryn Adams: And, David and Glenn, we just had a question come in that I think would be great for both of you: when evaluating which projects, do you look at the amount of time and investment it would take to document, improve processes, and clean up data structure?

Glenn Hopper: David, maybe better for you to answer this first, since I'm selling that very service. I do.

I could put a plug in for myself, but from the CFO seat, I'd love to hear your thoughts on that.

David Samuels: Yeah. I think, you know, look, it comes down to the datasets and how you handle the cleansing of the data.

We have had a lot of internal discussions with various parts of the organization, because we have data in different systems, business systems and operating systems, you know, Tableau, NetSuite, Salesforce. Those were sort of our three core sources of truth.

But the datasets have to be cleaned, the parameters around that data defined, and, you know, that's sort of your foundation, if you will, to then start to evaluate which projects to bring forward and where you think automation could be achievable and a value to the business.

Glenn Hopper: That's exactly right. What I would add to it is that the time it takes to document and process everything: do it now.

Don't wait for the projects. You heard it here.

This is going to be your road map for automation going forward. It's inevitable you're going to have to do it.

You can save time by going through and looking at all the processes that are done. Go ahead and document them if you haven't already.

And then it's a sunk cost in time, you've already done it, and you don't have to factor it into the ROI calculation anyway.

Kathryn Adams: Awesome. We have a two-part question.

So one's for Vipul, one is for Chad that came in from one of our attendees. So Vipul, for you first.

How does your AI powered data platform solve the fragmented data challenges that prevent CFOs from getting the institutional grade insights investors expect?

Vipul Shah: Yeah. So, actually, David and Glenn just spoke to this.

The biggest issue is there's data chaos and there are data ghosts. There's just so much data living in so many different places, and they're all very specialized systems, each built to solve a given purpose.

So, you know, David mentioned he's got data in Salesforce, NetSuite, and a variety of different places. Those are two completely different disciplines, two completely different datasets, owned by two different sets of teams.

So bringing that data together takes a tremendous amount of effort. How do I take every invoice, every bill that's living in NetSuite? How do I join that with every data point that ties to it that's living in Salesforce, which could be: is this an account at the parent level or the child level? Who's the rep that sold it? Who's the CS rep on it? Going back to NetSuite, is that invoice paid or not paid? What products are included in it? The algorithms, the AI, and the automation we're using bring that data together to say, okay.

It happens to live in Salesforce, NetSuite, Tipalti, Zone, all these different places, but it's still my revenue, my customer, right, for the company that I am the CFO of. We've gotta bring all that together.

And what we've done is created, you know, I joke: sales has Salesforce, accounting has NetSuite, and strategic and operational finance has been homeless for a long time. Right? And they need a home.

And we built that home, and what that home is doing is helping manage that data by bringing it across from all these different places. And as Glenn said, data is that foundation, and that is where we are using AI, automation, and an expert in the loop to take that data chaos and turn it into clarity.

I mean, I think that's the approach we've taken, and that's the sole business we're in: doing that cleanup.

Kathryn Adams: And then, Chad, from the same attendee, another question for you was, what framework do you use to help CFOs balance investor pressure for rapid AI adoption with the reality of maintaining strong financial controls and processes?

Chad Wonderling: Yeah. So I would say approaching this in three layers is kind of the way I've thought about it or tried to approach it.

And I think the first is defining the why and the where. So do your value mapping, look at your processes, and really understand where the AI impact can be made.

Okay? And so that's going to be along the lines of what we've already talked about: lower-risk, high-impact, high-frequency type stuff, which is around anomaly detection, as an example, and forecasting support.

Okay? And then I think the second layer is a little bit of operating in two speeds. You've got the micro, which is all about the quick wins, kind of the front-lines type stuff; showing those quick wins is really important for incremental improvement.

And then your gear two, your second speed, is the more macro, which is more of the strategic foundation in terms of data governance, control layers, and the explainability-type frameworks that are all happening in parallel. And speed two is kind of a transition to the third layer I think about, and I think it was David that mentioned it, and we've done this here at Zone, which is rolling out your governance playbook around AI.

How do you use it? And then I think as part of this, it's, let's turn our team of doers into reviewers, so the human has to be in the loop. And then, from our seat, when it comes to that investor pressure, it's a lot of proactive communication.

Okay? What's in scope? What's gated? Why? Etcetera. Here's what we're automating.

You know, and really sell some of those wins as they occur over time.

Kathryn Adams: We just had an interesting question come in around the evolution of the CFO. How did the panel members gain the foundational knowledge and comfort with various AI tools and use cases to evaluate which is the best AI tool to use with the ERP system? Are each of you coming from a CTO background?

David Samuels: No. You know, I'll just take this one first.

You know, look, when you're in the CFO chair, you understand the processes, right, and the business systems, because presumably you put them in and you are familiar with how they work in your organization. So that's what I call the foundational lever, and then from there, it's research.

Right? There's trial and error. Right? Experimentation is probably another word.

And you figure it out from that foundational lever. This is how you do business.

Okay. Now your next layer is, you know, which processes can be automated, can be simplified.

And if so, how do we do that? And by default, I think, interestingly, what a lot of people on this call are going to see is that for some of the business systems you already have in your tech stack, your vendors are going to be adding more AI-centric features into those products. So by default, you will see, right, that we have sort of a push-pull.

And you're going to see it over the coming months. You know, we've seen it in Tipalti. We'll probably see it in NetSuite, and we'll probably see it in Concur and other platforms that we use, where there are new features embedded, because they want to stay best of breed.

So, use your vendors.

Kathryn Adams: And we had another question. Lots of questions today, which is great.

I love the engagement. When will agentic AI or autonomous AI really become a reality for finance? And what could be early use cases?

Glenn Hopper: I'd love to jump in first on this one, because I guess I'm just a pedantic guy. But I have a real beef with people saying agentic AI when it's not truly agentic.

So, you know, we talked about automation and agentic AI. There's a problem out there right now, and this is why there are so many projects failing.

An AI agent is a tool that has agency. People are agents.

If you have an analyst and you tell them to go off and do something, they can go off and work it. Your analyst can go off and work for however long it takes to complete that task.

They'll come back to you if they have a question, but you don't need that continual prompting or a very specific rule-based path to go down. So everybody's talking about agentic AI right now.

The truth is, if you've played around with tools like Manus or the agent mode in OpenAI and some of the other tools that are out there, it's pretty impressive. You can see what agents will be. I think of it as everyone having a very easy-to-implement RPA for their own processes and tasks. When agentic AI works, as an analyst I'll have bots that can go off and do my work, and I'll be orchestrating and managing this team of bots, and I'll be that much more productive.

But that's not where we are today. So around automations, I seldom encounter a client, no matter how big or small the company.

If they have a digital task, we can automate it. There may be AI in the automation process and there may not.

And when we have truly agentic AI, the setup of it will not require consultants to come in and build, you know, fancy code-based agentic workflows. But in the interim, we can get to the solution.

So I would say that for any process within the office of the CFO, we can put automation in today. As far as the breakthrough where agents can really do it: AI today is the worst AI we're ever going to have, so it keeps getting better.

I've proven to be a terrible futurist when I try to put timelines on things. I will say, though, the framework is there.

If you think about what an agent does, and if anybody's got any sort of background in coding, it's just a conditional loop. It keeps going through and repeating a process until it comes up with an answer.

Right now, because of hallucination, going down the wrong path, and lack of context, agents aren't working great. But I certainly think that it's possible in the next three to five years to have very robust agents.

And I hope it's shorter than that. But I would say three to five years to have very robust agents that won't need the hand-holding that they do now and will have better understanding.

But all the stuff about documenting our processes and everything, that's what we've gotta do today, because all that documentation and definition around processes, all that data cleanup, that's what's going to allow us to use these agents when they do come to market and they are reliable.
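Glenn's point that an agent is "just a conditional loop" can be sketched in a few lines of Python. The `call_model` and `is_done` callables here are hypothetical stand-ins for a real model call and a stopping check, not any vendor's API:

```python
def run_agent(task, call_model, is_done, max_steps=10):
    """Minimal agent loop: repeatedly call the model, feeding each result
    back as context, until a stopping condition is met or a step budget
    runs out (the budget guards against looping down a wrong path)."""
    context = [task]
    for _ in range(max_steps):
        result = call_model(context)  # ask the model for the next action/answer
        context.append(result)        # feed the result back as new context
        if is_done(result):           # conditional check: are we finished?
            return result
    return None  # budget exhausted without an answer

# Toy usage with stub functions standing in for a real model:
answers = iter(["thinking...", "thinking...", "DONE: 42"])
result = run_agent(
    task="reconcile the cash bridge",
    call_model=lambda ctx: next(answers),
    is_done=lambda r: r.startswith("DONE"),
)
print(result)  # → DONE: 42
```

The hallucination problem Glenn mentions lives inside `call_model`: if the model's output is wrong, the loop dutifully feeds the error back in, which is why a step budget and a human review of the final result still matter today.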

Vipul Shah: Yeah. I'll add to that a little bit.

You know, there are two approaches we're taking. This question has a double meaning for me at FinQore: how are we thinking about agents and agency? So the first thing we've done, and I would agree with what Glenn has said, is we've kept an expert-verified element in the loop on everything we do.

And I think that expert, for us as a managed solution, can be our team, but it can also be the CFO and the finance team that's leveraging this. And I think you'll see tiers of that evolution happen.

So for example, today, when you are using any kind of app, because you have agency, that is an agentic workflow itself. It's just with a human agent, as Glenn said very nicely.

I see that being one of the first things that evolves over time: okay, how do you give more agency, better, faster, and easier, to the person that's using it?

Right? The expert. So for example, you know, we are now enabling you to securely and safely access all your FinQore data through Claude, ChatGPT, etcetera, using MCP servers.

And for that, we had to build out the right resources, which is basically content or data. We had to build out tooling that's specific to CFOs, and we had to seed that with a set of prompts.

So now you can have extreme agency and say, okay, let me combine the power of LLMs with the power of really well-structured and segmented data.

I think that's going to evolve very nicely. You'll probably see executives use it more.

It might be a top-down use case, or maybe top-down and bottom-up. I don't know where the middle is going to come out.

I think that's where a lot of the murkiness and the apprehension is. Now, on true agentic workflows: Glenn, I don't think you're being pedantic.

I think the word agentic is so overused at this point. It's become a proxy for any kind of automation.

What we see as a practical matter, and I have, like, played with every kind of AI tool that's out there. My cofounder and CTO is a serial CTO; he was founding CTO at HubSpot.

The guy is, I think, lost in the matrix now. I think we'll get him back someday, but he's pretty deep in AI.

And, Glenn, I know you've done a session with him.

Glenn Hopper: Yeah. Love Jim.

Vipul Shah: Yeah. What we have done is, Jim and I have said, hey, let's build microscopic agents.

Make the task so small, so clear. This ties into something that Chad and David have both brought up: going from small to micro, micro repeatable things.

Those kinds of agents are working. So don't say, hey.

Give me an agent that's going to do my rev rec. I wouldn't believe that. If I were on this call, anybody listening, if somebody tells you they built an agent to entirely automate your rev rec file, well, I'd like to meet that person and get their blessings on where to go in life, but it's not going to be real.

But if somebody says, hey, I built a microscopic agent that takes a cash bridge, a revenue bridge, or takes bookings and cash and gives me deferred, and does just a really small thirty-second task, I'm more likely to believe that.

So what we're doing at FinQore is saying, hey, what are those micro annoying tasks? Anomaly detection, data cleaning, segmentation tagging: how do we turn those into truly agentic workflows and get that work off of everybody's plate? Otherwise, we're putting a lot of our time into giving a ton of efficiency to the people that have high agency, which, for most CFOs, is your super analyst.

Kathryn Adams: Now, in the interest of time, I'm going to squeeze in one more question, because we got one that I think is a really good one to end on. How do you see AI shifting the CFO's role from financial gatekeeper to strategic value creator over the next three to five years? David, do you want to take a stab at that one?

David Samuels: Sure. I think, you know, the CFO role will be top of mind first and foremost to evaluate all new investment in the business, human capital investment, CapEx, OpEx with regard to consultants and vendors, and really understanding, with a small leadership team, what makes sense.

And I think someone, I can't recall who, said this earlier in the conversation about evaluating headcount. We do this all the time now.

You know, is this headcount that we need? Can we automate with existing headcount? So you're really at the forefront of all new investment into the business: human capital, OpEx, CapEx.

Chad Wonderling: And then I I would also just say too, just in closing, is is, one, we're already sensing it. We're already feeling it.

And I think all that's happening is the pace is just accelerating and even compounding. So it's the CFO as the growth partner, the CFO as the value creator, and the CFO, in many ways, as the scenario architect.

That's where we are right now, and AI and the acceleration of change are only going to get us there faster.

Kathryn Adams: Vipul, Glenn, anything to add on this?

Vipul Shah: AI like the CFO. AI like the CFO. I'm stealing that, Glenn.

Kathryn Adams: Well, thank you to all the attendees for joining us today. Thank you to Glenn for your awesome keynote and to all the panelists for your great insights during our CFO panel. Again, we really appreciate you joining us today. Thank you.
