Compliance

Building a responsible AI framework

Ethical AI involves two crucial things working together, IBM and Cognizant experts say.

A responsible, well-planned AI framework is make-or-break for any company. But CFOs, who have to anticipate the financial risks behind any AI innovation, as well as questions from investors and analysts, have all the more reason to lead with caution.

At the same time, anyone who’s moving slowly on AI right now is already behind. The upshot: It doesn’t hurt to listen when tech and finance experts talk about responsible AI, because we could all use some guidance.

In an October 4 webinar hosted by Salesforce, IBM partner Matt Francis and Anup Prasad, Cognizant SVP and business unit head for consumer business, discussed how consulting firms can establish ethical AI standards.

Get it together. Their message was a familiar one, but worth repeating. A responsible AI framework, Prasad and Francis maintained, is composed of two elements working together: humans plus AI.

“That’s really where the rubber meets the road,” Prasad said. “That’s the potential for AI that I’m most excited about.”

“It’s not just machines augmenting humans,” he continued. “It’s also about humans having that feedback loop to improve the models and improve the performance of the models.”

Easier said... Even if we’ve heard it a million times by now, it’s not always obvious how to design a truly people-first AI plan. To that end, Francis called out IBM’s practice of “explainable AI,” a process that lets users understand how AI arrived at a given output. This will become increasingly important as AI grows in complexity, making it more difficult to understand how and why it’s spitting out what it does.
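
For a concrete (and deliberately simplified) picture of what "explainable" can look like in practice, here is a minimal sketch, not IBM's method: a scoring function that returns per-feature contributions alongside its decision, so a human reviewer can see what drove the outcome. The feature names, weights, and threshold are hypothetical, chosen purely for illustration.

```python
# A toy illustration of the idea behind "explainable AI": return not just a
# decision, but how much each input contributed to it, so a human reviewer
# can inspect the reasoning. All names, weights, and thresholds here are
# hypothetical and for illustration only.

FEATURE_WEIGHTS = {            # hypothetical model weights
    "payment_history": 0.45,
    "debt_ratio": -0.30,
    "years_in_business": 0.15,
}
APPROVAL_THRESHOLD = 0.5       # hypothetical decision cutoff


def score_with_explanation(applicant: dict) -> dict:
    """Score an applicant and report each feature's contribution to the score."""
    contributions = {
        name: weight * applicant[name] for name, weight in FEATURE_WEIGHTS.items()
    }
    total = sum(contributions.values())
    return {
        "decision": "approve" if total >= APPROVAL_THRESHOLD else "send to human review",
        "score": round(total, 3),
        "contributions": {name: round(value, 3) for name, value in contributions.items()},
    }


if __name__ == "__main__":
    # Example applicant with normalized (0-1) feature values, also hypothetical.
    result = score_with_explanation(
        {"payment_history": 0.9, "debt_ratio": 0.4, "years_in_business": 0.6}
    )
    print(result)
```

The point is that the decision ships with its own paper trail rather than a bare yes or no.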

In Francis’s eyes, transparency is the key to decision-making. “If we make a decision on a case or an opportunity…the real key is being able to explain how we got to the decision,” he said. “It can’t just be: ‘We made a decision because it told us to.’ It has to be AI with humans making decisions.”

Prasad, meanwhile, stressed the importance of “an AI council and a steering committee approach that really focuses on accountability [and] collaboration.”

“It requires collaboration, depending on the use case, across different functions,” he explained. “You need the regulatory compliance function involved to make sure the laws and regulations in that space are getting taken care of.”

“People are [at] the center of it,” Francis said of an effective AI strategy. “Everyone in the process needs to be involved and provide [questions like]: Is it getting better? How do we train it? How do we make it more efficient, and drive better outcomes?”
