Compliance

AI governance isn’t so scary after all

Experts advise orgs to “lean on” existing methods when setting new AI standards.

Artificial intelligence may frighten some (it is spooky season, after all), but establishing effective governance around its implementation shouldn’t be a cosmic horror unknown to CFOs and other C-suite execs.

We shouldn’t think of AI as a mysterious bogeyman lurking behind the closet door. Siqi Chen, CEO and CFO of finance platform Runway, said we can view it—and the governance surrounding it—in the same light as other technology.

“It’s kind of odd to me that AI governance is like this separate thing that’s really scary when, in fact, it is just the same thing that they’ve been dealing with this whole time,” Chen told CFO Brew. “It’s just a different piece of software.”

Chen acknowledged that AI is different from other innovations in that it has massive potential to disrupt and is creepily intelligent. But “at the end of the day, it’s still chips and software and data, and we’ve been working with chips and software and data for a very long time,” he said.

Research from professional-services firms shows that AI tools are quickly catching on in finance departments, while AI-related updates to governance frameworks may be lagging. A June Gartner survey of 121 finance leaders found that AI adoption in finance functions climbed to 58% this year, a 21 percentage point increase from 2023. However, a Jefferson Wells study of US internal audit executives, conducted in May and June 2024, found that just 26% of organizations “have fully integrated generative AI standards” in their governance and compliance frameworks.

But AI governance is something that CFOs will need to pay attention to in the coming years, said Matt Farrell, a director at advisory firm Riveron.

Here are some highlights from a panel of Riveron experts who spoke at Workiva’s Amplify 2024 conference this September in Denver.

Use wisely. The path to establishing governance and assurance around AI implementation can mirror the one organizations have long used for legacy technology, Farrell advised.

“Let’s lean on the methodologies that have worked prior to the advent of AI, things like a system development life cycle,” he said at the Workiva conference. This process, he said, involves communicating with end-users to gather “business requirements from the people that are going to be utilizing it on a daily basis,” and undergoing a “robust user acceptance testing process” to identify potential issues before the AI application goes live.

Finance leaders should work alongside other departments, including IT, operations, legal, and human resources, all of which are “going to have a vested interest [in] ensuring that this model has effective governance around it,” Farrell said.

Leaders should also keep employees informed on how AI will change their day-to-day work, Riveron Senior Managing Director Drew Niehaus said.

Risk management. Farrell said the old adage “garbage in, garbage out” applies to AI, meaning bad data will not magically produce good results just because it’s artificial intelligence analyzing it. “If you give it bad data, it’s going to create bad outputs,” he said.

Companies also need to ensure their AI tech is transparent, Niehaus said. He noted that one client, a financial institution that used AI to approve loans, followed a process known as explainable AI to avoid a “black box” scenario where the tool would spit out its determinations without explanation. In effect, the client ensured the AI tool would “provide a rationale by which it was making its determinations…so there was an audit trail in how the tool was making decisions on how to approve or deny a loan, and how the model was functioning.”
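
To make the audit-trail idea concrete, here’s a minimal sketch of how a decision tool might record the rationale behind each determination alongside the decision itself. This is a hypothetical illustration, not the client’s actual system: the rule names, thresholds, and file path are all invented for the example.

```python
# Minimal sketch of an explainable decision with an audit trail (illustrative only).
# Each determination is logged with the inputs and the reasons that drove it,
# so an auditor can later reconstruct why a loan was approved or denied.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class Decision:
    applicant_id: str
    approved: bool
    reasons: list   # human-readable rationale for the determination
    inputs: dict    # the data the tool actually saw
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def decide_loan(applicant_id: str, credit_score: int, debt_to_income: float) -> Decision:
    """Apply transparent rules and record why each one passed or failed.
    Thresholds are hypothetical placeholders."""
    reasons, approved = [], True
    if credit_score < 650:
        approved = False
        reasons.append(f"credit_score {credit_score} below 650 minimum")
    else:
        reasons.append(f"credit_score {credit_score} meets 650 minimum")
    if debt_to_income > 0.40:
        approved = False
        reasons.append(f"debt_to_income {debt_to_income:.2f} exceeds 0.40 cap")
    else:
        reasons.append(f"debt_to_income {debt_to_income:.2f} within 0.40 cap")
    return Decision(applicant_id, approved, reasons,
                    {"credit_score": credit_score, "debt_to_income": debt_to_income})

def log_decision(decision: Decision, path: str = "loan_audit_log.jsonl") -> None:
    # Append-only JSON Lines file: one auditable record per determination.
    with open(path, "a") as f:
        f.write(json.dumps(asdict(decision)) + "\n")

if __name__ == "__main__":
    d = decide_loan("A-1001", credit_score=700, debt_to_income=0.35)
    log_decision(d)
    print(d.approved, d.reasons)
```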

Good AI governance helps protect businesses from damage to their reputation or their pocketbook, according to Farrell.

“Every company has a set of values, and ensuring that these models don’t drift away from those values, it helps to protect the company’s reputation [and] shareholder results,” he said.

Handling it. The primary risks that companies should consider when adopting AI, according to Runway’s Chen, are hallucinations and stolen or leaked data—which, for the most part, aren’t all that different from other technology and cyber risks, right? (Though if your legacy tech is hallucinating, you might just be in the latest Poltergeist sequel.)

Companies can help prevent data leakage by educating employees on cyber risks and using security tools like firewalls and VPNs, Chen said.

“There’s no perfect security, but there’s a lot you can do, both in terms of training but also technology that [is] very well proven to handle these problems,” he said.
