If there’s someone who understands the intersection of AI and finance, it’s Glenn Hopper.
As CFO and director at Eventus Advisory Group, where he also leads AI research and development, and author of Deep Finance: Corporate Finance in the Information Age, Hopper has a unique vantage point for how AI is already disrupting the world of corporate finance.
With a new book out, AI Mastery for Finance Professionals, we spoke with Hopper about navigating AI adoption challenges and the importance of explainable AI.
This interview has been condensed for length and clarity.
When did you first start working on—or even thinking about—this book? And why?
My first book [in 2021] was about using machine learning in finance. Back then, not many people were talking about it. It was just statistics nerds who were thinking about using machine learning algorithms in finance. But I’d been using it for a few years, so I wanted to get the message across about how you improve the finance function and get better forecasting.
For the new book…everybody’s talking about generative AI, and I’m a big enthusiast about generative AI, but I also think that for finance and accounting people, there are a lot of reasons you can’t just blindly go in and think you’re going to throw all of your data into a large language model and have it spit out the answers.
I’ve been teaching courses on using generative AI for the past year and a half. People, a lot of times, just want you to tell them: “How can I do financial analysis in ChatGPT? Or how can I use Claude to read a 10-K?” And I always lead off with: “I can show you how to do this, but if you’re going to be using it, if you’re going to be incorporating this into your workflow, I understand that your domain expertise is in finance and accounting, but this is a powerful tool, and in order to use it effectively, you have to understand the basics of what machine learning is, what AI is, how it works.”
Between your first book and now your second, CFOs have officially moved beyond the phase where they just need to pay attention and think abstractly about AI; now they’re actively involved with adoption.
You have a chapter about AI applications. Could you walk through a small sample, or almost a preview, and why CFOs should consider them?
If you could put the power of generative AI in the hands of your entire finance and accounting department, they would be able to interact with your data in a way that they haven’t been before. And the interesting thing with generative AI is if you’re a CFO, and you’re making a decision to put in a new software program, whether it’s a new general ledger, or an ERP, or an [accounts payable] tool, whatever, those decisions come from the top down, and you just tell the employees, “Okay, here’s the new tool. Go use it. This is how you do it.”
But with generative AI, it’s more about [how] you build the environment [where] they can use it…You put the guardrails in place. You give them the direction that they need. They’re going to, on their own, come up with ways to use this to make themselves more efficient, whether it’s quicker ways to do their monthly close…all the normal functions that they do, rather than you as a manager telling them, “Hey, use it like this.” Instead, you give them the tool, and they’ll figure out [how] to get more efficient.
Especially for those who are more cautious [about AI], can you walk through how to approach some of the main challenges?
The biggest one is trust. In the state of generative AI today, trust has not yet been earned by these new chatbots, but the more you understand it, the more you know when you can trust it, when you need to check it.
If you think about other departments, who maybe were able to lean into generative AI earlier, say it’s sales and marketing, well, if you’re writing marketing copy, if it’s slightly off, that’s not a massive problem, and it’s kind of easy to catch. But if you’re in finance and accounting, and you’re asking it to build the amortization table for this loan that we’re taking out, if the LLM is just trying to regurgitate something based on its training data, and it actually doesn’t do the math to build the amortization table, then it could give you completely wrong numbers. And the only way that you’re going to know if they’re right is if you go do the math behind it, and so then it kind of defeats the purpose of having an LLM do it.
One thing for finance and accounting people to really be aware of is [that] large language models on their own can’t do math…It’s kind of opaque if the LLM is just answering a question from you. [But] if you’re using the capability of the LLMs to write code and do the math for you, then it’s not a black box. There’s explainability. You can open up and see the code that it wrote. It’s something you can replicate.
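The amortization example above illustrates the point: rather than letting an LLM guess at loan math from its training data, you ask it to write code that does the math, which you can then read, replicate, and verify. A minimal sketch of what such generated code might look like (the loan terms here are hypothetical, chosen only for illustration):

```python
# Sketch of the kind of transparent, replicable code an LLM can generate
# instead of "regurgitating" numbers: a standard fixed-rate amortization
# schedule. Hypothetical terms: $250,000 at 6% annual interest over 30 years.

def amortization_schedule(principal, annual_rate, years, payments_per_year=12):
    """Return (period, payment, interest, principal_paid, balance) rows."""
    r = annual_rate / payments_per_year            # periodic interest rate
    n = years * payments_per_year                  # total number of payments
    payment = principal * r / (1 - (1 + r) ** -n)  # standard annuity formula
    rows, balance = [], principal
    for period in range(1, n + 1):
        interest = balance * r
        principal_paid = payment - interest
        balance -= principal_paid
        rows.append((period, round(payment, 2), round(interest, 2),
                     round(principal_paid, 2), round(max(balance, 0), 2)))
    return rows

schedule = amortization_schedule(250_000, 0.06, 30)
print(schedule[0])   # first month's payment breakdown
print(schedule[-1])  # final payment: remaining balance reaches ~0
```

Because every line is visible, an auditor (or the analyst themselves) can check the annuity formula, rerun the script, and get the same numbers every time — the explainability the interview describes, as opposed to an opaque chat answer.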
The second to last chapter of the book is about actually implementing AI in financial institutions, and there’s a section on governance. How, especially for an audience of CFOs, should we think about [AI governance] moving into 2025?
A big thing that CFOs need to think about, especially for compliance and for regulations, is transparency and explainability for the models they’re using. So you can use AI…but [there’s a] black box problem: [if] you can’t explain how the algorithm came up with the answer, that’s not something you could use for public reporting and for financial reporting.
There’s a field, explainable AI…and that is going to be very important. In order to get SOC 2 compliance, or to meet auditor requirements, you have to be able to explain [how you got an answer]. If you’re using AI, great, [but] how are you using it? Can you replicate this answer; can you validate and verify? That is something that, especially at a public company and on the audit side, they need to be very aware of.