22 October 2024 | Technology

Balancing AI risks and rewards

The artificial intelligence (AI) revolution is well underway, bringing both opportunities and risks for businesses. At the FERMA Forum in Madrid, Philippe Burger, organisational transformation leader, UK and Europe at Mercer, kicked off a discussion on how AI will reshape risk management and productivity. 

The key question: how can organisations leverage AI to enhance productivity without introducing new threats?

Businesses, especially in the insurance industry, face significant changes as generative AI becomes more integrated. Coupled with a shrinking workforce and a persistent talent shortage, strong leadership is essential to ensure organisations not only adapt but thrive in this evolving landscape.

“Thirty years ago, the half-life of a professional skill was about 30 years,” Burger explained, meaning someone who started work in the 1980s could rely on their skills for decades. Today, however, that half-life has drastically “shortened to three or four years”, so professionals must refresh their skills regularly, and the industry’s continuing “talent shortage” remains pronounced.

While AI promises to boost productivity—Burger predicted a 30% increase by 2027—it also poses new challenges for management.

“We see the opportunity, but what is the associated risk?” asked Gregory Van den Top, head of cyber risk consulting at Marsh. “No one is really sure how it impacts our risk profiles.”

With AI-driven changes “happening exponentially”, there is an urgent need for businesses to define best practices and for leaders to support their employees in adopting AI effectively, Van den Top said.

A critical question, Van den Top noted: “Who owns AI risk within an organisation?”

“There aren’t many people that fundamentally understand how AI works and how to deploy it in organisations,” he said. “Attracting the right talent is going to be vital.”


External factors also play a role, such as the European Union’s AI Act. Van den Top emphasised the need for clear guidelines and legislation, particularly around liability. For example, in the case of a self-driving car accident, who is at fault? 

“There are lots of questions around liability that we still haven’t solved when it comes to AI. We need guidelines and legislation to put those questions at ease,” he said. 

Internally, some organisations are beginning to implement risk mitigations around AI, particularly regarding brand risk, cybersecurity, and personal data protection. Van den Top pointed out that there is a “real risk of spreading sensitive information” when using AI, and businesses must carefully consider what data they feed into AI systems.

For Burger, AI presents both a risk management challenge and an opportunity. He sees it as a “tool” that can be leveraged in managing risks and improving organisational efficiency. While “transactional tasks” may decrease as AI takes over, interpersonal and human-centric skills will likely become even more important.

Ultimately, Burger believes AI integration “is a journey”. Organisations should set measurable goals, empower their people, and focus on accountability. “There is no way back from AI,” he said, “but we want to reinforce accountability at all levels” as we integrate it into our businesses. 

Looking ahead, Burger is optimistic: “We’re not going to be replaced by AI. We believe in the combination of humans and machines for augmented intelligence. We just need to assess the technology’s impact on humans as we go.”

FERMA Forum Today is in partnership with Captive Review, part of Newton Media.
