Just as one does not simply walk into Mordor, financial planning and analysis (FP&A) teams must not simply automate their existing processes with AI tools. Take it from Liran Edelist, a 25-year-plus FP&A expert and cofounder of the knowledge-sharing website FP&A Hub.
“You can’t just put [in] a system that is keeping your old processes[, but] automated,” said Edelist, the chief product officer for enterprise planning at Lumel, which makes a tool for enterprise performance management (EPM). He spoke with CFO Brew about how CFOs and finance leaders should approach using AI for FP&A, specific ways that teams can try out AI, and how they can avoid some of the risks associated with the tech.
This interview has been edited for length and clarity.
How should CFOs and finance leaders evaluate and compare AI solutions for FP&A?
Let me take half a step backwards. Organizations need to start using some [form] of AI. Not everyone is there from a mindset perspective.
Get a little more education about AI capabilities. It could be in the prediction area, natural language processing, generative AI, optimization, alerting, [or] automation. [Consider] what potential AI solution can bring the most value for the organization [and] what [its] difficulties are.
Organizations must be thinking [about how] AI [works with] their data infrastructure and their data lake [or] data hub. I don’t want to say it’s a waste of time or definitely incorrect [when an] organization [uses] AI as a patch, but [they] might end up with AI that is not super effective, is not covering all the data that [they] have, with multiple AI tools that are not connecting with each other. The strength of AI is to be able to monitor data, and that data needs to be organized and standardized. By the way, you can even use AI tools to create your data infrastructure today. Creating a proper data infrastructure is not as complicated as it was five, 10, or definitely 20 years ago.
What are the best or most common ways of using AI that you’ve seen from FP&A teams?
If we’re talking more about prediction, what I’ve seen in software business[es] but also in many other recurring businesses [is] that retention [and] renewal rates in many cases are much more accurate with AI, even with very simple AI, than if you try to do it manually. Doing prediction for recurring business, whether that’s revenues or expenses, usually works very well, especially if you find the balance between data that is not fully aggregated but also not fully detailed.
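For readers curious what “very simple AI” can look like in practice, the sketch below trains a basic logistic-regression classifier on historical renewal outcomes. It is purely illustrative: the feature names, the data, and the choice of scikit-learn are assumptions for the example, not a method Edelist described.

```python
# Illustrative only: a "very simple" renewal-rate predictor.
# Feature names and data are hypothetical; real inputs would come from
# billing/CRM systems at a sensible middle level of aggregation.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

data = pd.DataFrame({
    "tenure_months":   [3, 14, 26, 8, 31, 5, 22, 11, 40, 7],
    "monthly_spend":   [200, 450, 900, 150, 1200, 90, 700, 300, 1500, 120],
    "support_tickets": [4, 1, 0, 6, 1, 7, 2, 3, 0, 5],
    "renewed":         [0, 1, 1, 0, 1, 0, 1, 1, 1, 0],  # historical outcome
})

X = data[["tenure_months", "monthly_spend", "support_tickets"]]
y = data["renewed"]

# Hold out a few customers to check the model before trusting it.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Predicted renewal probabilities -- the numbers you would benchmark
# against a salesperson's own forecast before relying on them.
print(model.predict_proba(X_test)[:, 1].round(2))
```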
The other area is less common, but when I saw people using [it], they really like[d] it. A lot of stuff is falling [through] the cracks in budget meeting[s], forecast meeting[s], all those types of meetings. Using AI to record the conversation and summarize the key topics and the action items is amazing and very low-hanging fruit, because almost any tool in the market today [can] create a summary of a meeting. An FP&A professional [using AI can] spend five, seven minutes after the call to read the summary, correct it, and send it to everyone. It’s much faster than writing an email summarizing a conversation.
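As a rough illustration of how such a summary might be produced programmatically, the sketch below assumes the transcript has already been saved as plain text by a recording tool and that the OpenAI Python SDK (v1+) is available with an API key configured; the file path, prompt, and model name are illustrative, and Edelist did not name any particular tool.

```python
# Illustrative only: summarizing a recorded budget-meeting transcript.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical transcript file produced by a meeting-recording tool.
with open("budget_meeting_transcript.txt", "r", encoding="utf-8") as f:
    transcript = f.read()

prompt = (
    "Summarize this budget meeting transcript. List the key topics "
    "discussed and the action items, each with an owner if one was named.\n\n"
    + transcript
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[{"role": "user", "content": prompt}],
)

# The draft still needs those five to seven minutes of human review
# and correction before it goes out to attendees.
print(response.choices[0].message.content)
```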
You’ve mentioned some challenges FP&A teams have to face with AI, including its potential to give answers that are inaccurate or difficult to confirm and the need to make sure someone is accountable for its answers. How should they handle those risks?
The first stage to resolve a problem is to recognize what it is, right? I think the main idea is to be aware of those challenges and build a controlled environment around [them], to try to mitigate those [challenges].
How can you know that all your ERP transactions are summarized correctly? You do some audits, you do controls, you do tests. It’s almost the same [with AI]. AI can speed up processes, but you still need to put that controlled environment around it to make sure you’re not [allowing] automatic decisions by the machine instead of by a human who got some help from the machine.
If we take an example of planning, I would always try to do a benchmark between a salesperson[’s] prediction and an AI prediction. I’m still going to use people experience. It could be that, over time, you will say it’s reliable enough and we need [fewer] controls.
[With] ChatGPT and other AI tools and intellectual property, legal and technology organizations should always be checking what type of your data is exposed [and] what data you use.