Compliance

New, upcoming AI regs mean more compliance risks for CFOs

Time to get AI governance programs in order, experts advise.

Now that artificial intelligence (AI) is firmly the hottest technology in business, it’s the regulators’ turn to catch up. Their mission (as they probably see it): regulate AI before it assumes direct control of society.

“We’re at the infancy stage in terms of AI regulation, but we are seeing a number of proposed regulations coming out, and they’re really impacting [the] state level, federal level, and international level,” John Farley, managing director of insurance broker Gallagher’s cyber liability practice, told CFO Brew.

Farley pointed out a somewhat awkward dichotomy: Widespread AI adoption may not be far off, but regulation of the technology is just getting started. That gap creates uncertainty around enforcement, penalties, and, ultimately, damages.

“It doesn’t matter,” Farley continued, “if [the organization is] in healthcare, financial services, or a certain industry sector, I think ultimately if there are concerns around the accuracy of an algorithm…it’s going to affect that organization.”

Of course, new regulations mean new risks of noncompliance and litigation that CFOs have to worry about.

Starting line. While AI regulations are still in their infancy, here are some recent developments that signal more rules may be on their way:

  • The New York Department of Financial Services (DFS) in January released proposed guidance aimed at preventing discrimination or bias in AI-fueled insurance underwriting and pricing practices.
  • Also last month, the Federal Trade Commission (FTC) announced an inquiry into five companies that “will scrutinize corporate partnerships and investments with AI providers” so the agency can better wrap its head around how AI may impact “the competitive landscape.”
  • The US House of Representatives’ Financial Services Committee recently formed a bipartisan working group to examine how AI is affecting the financial services and housing sectors.
  • European Union lawmakers just this month approved a provisional agreement governing the use of AI. Called the AI Act, the agreement will, according to Reuters, “pave the way for the world’s first legislation on the technology.”

It’s clear: More regulation is coming. But why should all companies be concerned with state-level insurance regulations or an FTC inquiry into Big Tech? While many of the current AI rules focus on certain industries, such as insurance and technology, “I think in time, every single industry…will have to be concerned about it, especially if they’re going to be using [AI] for hiring practices,” Farley said.

OK, we get it. We should be paying attention. But what should organizations and their risk managers be doing about it?

“Right now, what I’ve been advising our clients [to do] is to take a hard look at AI usage and update your risk management programs, to incorporate that usage and to do an AI risk assessment to figure out where things might go wrong, and how you might be able to manage that risk,” Farley said.

Specifically, an organization should take inventory of its current AI tools. Once it has a full understanding of how it uses AI, it should put together an AI governance program, or a set of “policies and procedures…as to who can access these AI tools and for what purpose,” Farley said. A company must also understand if and how it is legally liable for third parties using AI. For instance, a class action lawsuit may name both the AI platform and its users as defendants.

“All of these things have to be considered, and with the backdrop of all of the regulation that’s starting to bubble up, to really get a full picture of potential liability,” Farley said.

If an issue does arise, organizations shouldn’t shy away from filing a claim with their cyber liability insurers, according to Anthony Crawford, a partner at law firm Reed Smith who represents insurance policyholders.

“If you have something that comes up and maybe a potential claim involving AI, don’t let the AI portion of it dissuade you from pursuing your claim,” Crawford said.
