18 October 2024 | Technology

FERMA comments on new EU AI Act

FERMA has issued an EU Policy Note on the EU's Artificial Intelligence Act (EU AI Act), which provides guidance on the practical implications of the risk-based approach underpinning the legislation and considers the potential insurance impact.

The EU AI Act, published in July 2024, will apply across all 27 EU Member States, with companies expected to comply starting in February 2025. It aims to create a high level of protection of health, safety and fundamental rights against the potential harmful effects of AI systems. The risk-based approach at its core classifies AI systems on a scale from low or minimal risk to unacceptable risk, with most regulatory requirements applying to high-risk systems.

Under the legislation, high-risk systems must be registered in an EU database and must comply with specific obligations relating to training data and governance, transparency, and risk management systems.

“The AI Act is arguably one of the most significant regulations introduced by the EU in recent years given the potential impact of AI across every aspect of our lives,” said Philippe Cotelle, FERMA board member and chair of its Digital Committee.

“It not only places a clear onus on risk managers to raise their game on AI, but it also addresses another piece of the puzzle which is how this all impacts upon topics such as liability and innovation.”

Three-pillared approach

The Policy Note highlights three essential pillars of an approach to making the most of the new requirements, which risk managers can use as a starting point within their own organisations.

The first is the development of an AI strategy and its transposition into a suitable governance framework, demonstrated through a policy document and the implementation of end-to-end processes.

The second is the implementation of the appropriate technology and investment in the continuous training of employees and partners, as well as the provision of documentation and guidance for customers.

The third is that governance and technology should be designed in a way that anticipates audit requirements; pursuing formal certification is recommended, although not explicitly required by law.

“FERMA encourages risk managers to consider creating an internal set of benchmarks to measure AI system performance.”

In this context, FERMA advises risk managers to follow an internationally recognised ethical standard, to clearly define the scope of the policy and roles and responsibilities, and to consider the scope of the environment in which their organisation’s AI system operates.

The Policy Note calls on companies to invest in safe technology implementation, as well as training. FERMA encourages risk managers to consider creating an internal set of benchmarks to measure AI system performance, and to ensure users are trained to mitigate the risk of misuse, unethical outcomes, potential biases, inaccuracy, and data and security breaches. All uses of the system, it adds, must align with the AI policy.
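The Policy Note does not prescribe what such an internal benchmark set should look like. Purely as an illustration, a minimal harness might score an AI system against a set of labelled test cases and report accuracy both overall and per segment, the latter serving as a simple indicator of potential bias. The Python sketch below is hypothetical; the model, case structure and group labels are stand-ins, not anything specified by FERMA.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class BenchmarkCase:
    """A single labelled test case for the AI system under review."""
    inputs: dict   # features passed to the system (hypothetical structure)
    expected: str  # the outcome the reviewers consider correct
    group: str     # segment label used for the per-group bias check

def run_benchmarks(predict: Callable[[dict], str],
                   cases: Sequence[BenchmarkCase]) -> dict:
    """Score a prediction function against an internal benchmark set.

    Returns overall accuracy plus per-group accuracy, so reviewers can
    spot performance gaps between segments.
    """
    correct = 0
    by_group: dict[str, list[bool]] = {}
    for case in cases:
        hit = predict(case.inputs) == case.expected
        correct += hit
        by_group.setdefault(case.group, []).append(hit)
    return {
        "overall_accuracy": correct / len(cases),
        "per_group_accuracy": {
            g: sum(hits) / len(hits) for g, hits in by_group.items()
        },
    }

if __name__ == "__main__":
    # Toy stand-in for the AI system under review: a fixed income threshold.
    def toy_model(features: dict) -> str:
        return "approve" if features["income"] > 30_000 else "decline"

    cases = [
        BenchmarkCase({"income": 45_000}, "approve", group="A"),
        BenchmarkCase({"income": 20_000}, "decline", group="A"),
        BenchmarkCase({"income": 35_000}, "approve", group="B"),
        BenchmarkCase({"income": 25_000}, "approve", group="B"),  # model misses this case
    ]
    # Prints overall accuracy of 0.75, with group A at 1.0 and group B at 0.5,
    # flagging a gap a risk manager might investigate further.
    print(run_benchmarks(toy_model, cases))
```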

From an insurance perspective, FERMA also considers how the impact of AI on insurers may flow through to corporate risk and insurance managers. It encourages risk managers to assess ‘Silent AI’, meaning AI-related exposures that existing insurance policies neither explicitly cover nor exclude.

FERMA Forum Today is in partnership with Captive Review, part of Newton Media.
