18 December 2023 | Technology

AI in the ascendant: why regulating innovation is vital

The explosion of interest in AI brings with it an inevitable increase in regulation and government policy. Josh Brown explains what this could mean for the future of AI development in insurance.

You can’t talk about the recent explosion of interest in artificial intelligence (AI) without mentioning ChatGPT and the large language model behind it, trained on vast amounts of data. “It was one of the first user-friendly, powerful AI tools that has a real-world use-case that consumers could get immediate benefit from,” says Josh Brown, head of Technology Professional Indemnity (PI) at Markel International.

AI has started to move from a science fiction movie trope to something people can actually use. “That’s pricked the ears of investors and business leaders who now have a massively increased interest in this space, creating a race to innovate,” he says.

Alongside this, a serious debate is raging about the potential harms such advances could bring. As a result, governments, regulators, and technology leaders are calling for more regulation of the space.

The key issue for AI companies is not only to make the most out of AI capabilities, “but to do it in a safe way that doesn’t leave them exposed to legal or regulatory ramifications,” says Brown. Complying with upcoming AI-specific regulation will be a vital part of this.

AI innovation pitfalls

Data privacy and security are a “massive” issue that companies using AI will need to manage carefully. “We already have data protection rules, and AI companies are subject to those, as well as other global regulatory frameworks. Companies will need to make sure that they are compliant with such rules,” Brown says.

For Brown, the privacy element of AI is central, as regulation in this area has been in place for some time. “It impacts organisations using AI in the same way as any other company that controls or processes personal data. An organisation will have to be sure that the data behind AI tools is compliant with the relevant regulatory frameworks.”

He says access to quality data is the key to success for AI platforms, and that data needs to be clean, accessible, well governed, and kept secure. Data anonymisation is important for datasets that will be used to train AI.

In the same vein, data must be secure. “As AI systems often rely on vast amounts of sensitive data, they may find themselves big targets for cybercriminals, so security measures must be robust,” Brown says.

“Getting privacy right for AI organisations is absolutely essential for building trust with regulatory bodies and governments, as well as consumers. It’s also vital if you want to maintain a competitive edge.”

Breach of contract claims are another area AI firms need to monitor carefully. “For example, if you are an AI software provider to a financial institution (FI) and you are subject to a breach which leaks data that you control, or are processing on behalf of your client, your client may receive a fine from the regulator.

“Subsequently, the FI may launch legal action against you in an attempt to recover losses,” Brown explains.

On top of this, he flags issues around investment in computational power, which is necessary to process the vast volumes of data needed to build AI systems but can be extremely expensive. Most startups are unlikely to have the budget of the large global organisations, he explains.

Talent is another challenge: people with the right skillsets and experience are scarce, and therefore expensive and highly sought-after, he says. But ensuring technology companies have the right talent is “vital to ensure that organisations adopt AI as smoothly and safely as possible.”

Integrating AI into existing systems and scaling it up are further challenges, each of them complex, time-consuming, and expensive.

Tipped to win

As the race for AI innovation accelerates, what will define the winners? Brown says that with the attention AI has gained recently, organisations are scrambling to invest in it. But the direction AI will take remains unknown.

He believes the organisations that will thrive will exhibit an ability to keep data private and secure, be able to attract and retain the right talent, and have a strong emphasis on technical expertise.

“Organisations with strong technical expertise in AI and machine learning, natural language processing, and computer vision will have an advantage because they will be able to call on that experience and knowhow, which will help them avoid the common pitfalls.

“High quality and diverse datasets are vital for training AI models. Companies which have access to the largest and most diverse datasets will be able to build more accurate and robust AI systems.”

Finally, a user-centric approach will be pivotal, Brown says: “You need to understand user needs and deliver solutions that provide tangible value and very good user experiences.”

He is clear that accelerated competition in the AI space is positive as it drives innovation. But, he adds: “It’s absolutely crucial that organisations don’t run before they can walk. This is where I think they may fall foul of things such as inadvertent non-compliance with regulation or having insufficient security in place to protect personal data.”

This is where the potential for regulatory fines and penalties arises, as well as the unforeseen consequences of developing ever-more powerful AI capabilities without restriction.

“We have seen examples of AI systems returning unintentional and potentially quite harmful outcomes that were not foreseen by the developers.”

Regulatory developments

Regulation has a central role to play in supporting technology businesses using or developing AI capabilities, says Brown.

“In theory, it can help by establishing guidelines and frameworks for ethical and responsible development and deployment of AI tech. Regulation protects consumers by setting standards for transparency, fairness, and safety within AI applications.”

He emphasises that it can help promote fair competition and prevent anti-competitive practices, while building public trust and acceptance of AI.

However, he adds the caveat that while regulation is vital, it’s important to strike a balance. When it is overly burdensome or restrictive it can stifle innovation and hinder growth. “Regulation needs to address societal concerns, as well as protecting the interests of individuals and businesses, but also exhibit the flexibility and adaptability that is crucial to promote innovation,” he says.

Regulatory frameworks that touch on AI do exist, with more in the pipeline. Brown points to the EU AI Act, which is due for adoption in 2024. It includes stringent requirements for high-risk AI systems, including those in human resources, banking, and education, and Brown calls it “a kind of data protection rule for AI” with comparably hefty penalties for non-compliance.

An existing framework of note is the Digital Services Act which, Brown says, was intended to “put an end to self-regulation of tech companies”. It is more focused on social media and advertising, but “it will likely impact AI”, he adds.

The US has the Blueprint for an AI Bill of Rights, which provides a good guide to the design and deployment of AI systems, according to Brown.

China’s framework, the Deep Synthesis Provisions, came into effect earlier in 2023, and Brown says it strengthens the government’s ability to supervise AI development.

The UK government has published various policy papers suggesting regulation is coming, but nothing specific has yet been proposed. Brown thinks the UK may follow a path similar to the EU AI Act.

Insurers have a key role to play in providing tailored insurance solutions with specific coverage for data breaches, cyber attacks, and errors and omissions that may arise from AI activities.

“We can assist organisations with our experience of best practice within risk management. This can help organisations understand and manage their own risk and, we hope, avoid or mitigate any issues before they arise,” he concludes.

