Artificial intelligence (AI) has the potential to add trillions of dollars to the global economy annually, with one estimate suggesting generative AI alone could boost output by as much as $4.4 trillion per year. But while AI offers immense opportunities, it also comes with risks that need to be carefully managed, creating an important role for the insurance industry to help clients with new risk protection products, according to a report from Swiss Re.

“Wherever there are opportunities, there are also risks. And AI, like any technology, can go wrong. AI may fail against performance benchmarks; it may inadvertently perpetuate discrimination; it could be subject to malicious attack; or it will perhaps cause real world damages,” the report states.

Swiss Re developed a model to assess the risks posed by AI across 10 different industries. The model leverages a combination of historical data on past AI incidents and forward-looking patent data to provide a comprehensive view of the AI risk landscape.

A key aspect of the model is that it accounts for both the probability, or frequency, of an AI-related incident occurring and the potential severity of losses stemming from such incidents. This approach enables a more nuanced assessment of AI risks, considering not just how often things might go wrong, but also the magnitude of impact when they do, the report explains.

The model focuses on six primary risk categories:

  • Data bias or lack of fairness: The risk of AI systems unintentionally discriminating against certain groups based on characteristics like gender, race, age or geography.
  • Cyber: Vulnerabilities in AI systems that could be exploited by malicious actors, as well as the potential for AI to be used for harmful purposes.
  • Algorithmic and performance: The possibility of AI failing to meet required performance benchmarks.
  • Lack of ethics, accountability, and transparency: AI systems not adhering to necessary ethical standards and accountability measures, compounded by a lack of transparency into their inner workings.
  • Intellectual property (IP): Issues around the use of third-party IP in training AI, and the risk of AI infringing on IP rights.
  • Privacy: The exposure of sensitive personal data during AI training, and the potential for AI to compromise individual privacy through unintended disclosure or identification.

By quantifying risks across these key dimensions, Swiss Re’s model provides a framework for understanding and managing the complex challenges posed by AI as it becomes increasingly embedded across industries. The insights can help insurers, businesses and policymakers develop more robust strategies for harnessing the power of AI while mitigating its attendant risks.
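
To make the frequency-severity framing concrete, the sketch below shows one way an expected-loss score could be combined across the six categories for a single industry. It is a minimal, hypothetical illustration: the category names follow the report, but the frequency and severity figures and the ranking helper are assumptions for demonstration only and do not reflect Swiss Re’s actual model or data.

```python
from dataclasses import dataclass


@dataclass
class CategoryRisk:
    """One risk category for one industry, scored on two dimensions."""
    category: str
    frequency: float  # expected incidents per year (hypothetical figure)
    severity: float   # expected loss per incident, USD millions (hypothetical figure)

    @property
    def expected_loss(self) -> float:
        # Expected annual loss combines both dimensions: frequency x severity.
        return self.frequency * self.severity


def rank_categories(risks: list[CategoryRisk]) -> list[tuple[str, float]]:
    """Rank an industry's risk categories by expected annual loss, highest first."""
    return sorted(
        ((r.category, r.expected_loss) for r in risks),
        key=lambda pair: pair[1],
        reverse=True,
    )


if __name__ == "__main__":
    # Purely illustrative inputs for a single industry; not figures from the report.
    it_sector = [
        CategoryRisk("intellectual property", frequency=0.30, severity=50.0),
        CategoryRisk("cyber", frequency=0.20, severity=80.0),
        CategoryRisk("algorithmic and performance", frequency=0.10, severity=120.0),
        CategoryRisk("data bias / fairness", frequency=0.15, severity=40.0),
        CategoryRisk("ethics, accountability, transparency", frequency=0.05, severity=30.0),
        CategoryRisk("privacy", frequency=0.12, severity=45.0),
    ]
    for name, loss in rank_categories(it_sector):
        print(f"{name:<40} expected annual loss ~ ${loss:.1f}M")
```

In practice, the probability and severity inputs would come from the historical incident data and patent data described above; the sketch only shows how the two dimensions combine into a single ranking per industry.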

Key AI Risk Drivers Over Time

While AI offers immense potential, the risks posed by the technology are projected to evolve in concerning ways in the coming years, according to Swiss Re. In the near term, intellectual property risks appear to be the most severe category, likely stemming from issues around generative AI models and copyright infringement. As these AI systems are trained on vast amounts of online data, the risk of them reproducing copyrighted text, images or code in their outputs is high, the report notes.

Looking ahead, the risk of AI perpetuating societal biases and discrimination could become increasingly severe if proactive corrective measures are not taken. Algorithmic bias has the potential to unfairly skew outcomes in high-stakes domains like lending approvals and pharmaceutical research. Historical biases embedded in training data, if not addressed, may get amplified as AI systems become more widely deployed.

The greatest loss severity in the future, however, is expected to come from algorithmic and performance risk.

“Over the longer term, however, as AI becomes embedded across a wide range of industries, we expect the single most severe risk to become one of performance, whether that is of vehicles, manufacturing plants, crop modelling, consumer chatbot interfaces or any manner of other uses,” the report states.

Near-Term AI Risk Rankings by Industry

Swiss Re has assessed the probability and severity of AI risks facing specific industries to develop an overall risk ranking for the near-term 2024-2025 time frame.

The IT sector currently has the highest overall risk ranking, driven by the highest probability of incidents as a “first mover” in developing and using AI technologies. The analysis found that 55% of total near-term AI risk probability falls on the IT sector.

Government/education ranks as the second most probable source of AI risk in the near term, reflecting the wide scope of AI use across public and educational sectors. Media/communications ranks third in risk probability, reflecting that sector’s high potential use of AI and its legacy intellectual property issues.

While lower in probability, the energy/utilities sector has the highest severity ranking for near-term AI risk incidents, due to the critical nature of infrastructure. Health/pharmaceuticals ranks as the second most severely impacted sector at present, given the potential risks of AI use in this highly regulated industry.

Future AI Risk Rankings by Industry

Looking ahead 8-10 years, when AI is extensively used across industries, Swiss Re expects the probability of AI risks to be much more evenly distributed across sectors than in the near term.

However, the health/pharmaceuticals sector will face the highest overall risk in the future, remaining high in severity while also seeing rising frequency, Swiss Re predicts. This is due to the many health care delivery processes that could be enhanced by AI, such as pharmaceutical development and AI-powered diagnoses.

“The use cases for AI over the whole spectrum of health delivery are exhaustive, from improving and streamlining administration, to patient monitoring, to diagnosis, to drug development and many more. All told, with these numerous touch points, the potential frequency of adverse AI outcomes is high,” the report notes. “The other half of the equation is that the risk potential is severe. With risk of bodily injury or even death, health care is a highly regulated sector with tightly controlled approval processes.”

The mobility/transport sector ranks second highest in future AI risk, driven by the severity of potential incidents related to automation like self-driving cars. The energy/utilities sector will also see AI risk grow in frequency over the next decade, as AI-powered smart grid technologies increasingly come online to support the transition to net zero emissions.

Implications for Insurers

Providing AI risk protection products and services presents a significant business opportunity for insurance companies. However, it could also become a vulnerability if AI-related risks accumulate unseen within insurers’ portfolios.

Insurers are already providing coverage for certain AI risks, particularly in the rapidly growing cyber insurance market. Swiss Re Institute estimates that $13 billion in cyber insurance premiums were written globally in 2022, a threefold increase in just five years.

While cyber attacks specifically targeting AI systems have been limited so far, Swiss Re warns this risk could rise substantially in the future: “If cyber criminals come to target AI systems in the same way they target non-AI digital systems, the risk could be significantly higher. One can imagine the damage that could be caused by, for example, hacking the AI of an autonomous car fleet, let alone the use of AI as a hostile attack weapon.”

Beyond cyber, other AI risk categories may fall partly or entirely under insurers’ existing coverage lines. For instance, AI performance issues leading to property damage could be covered by property insurance policies. Intellectual property infringements by AI could be handled under professional liability lines. And data privacy breaches involving AI may be covered by cyber security policies.

As AI is adopted more widely, insurers have an important role to play in assessing AI systems for risks associated with ethics, accountability and transparency, Swiss Re contends. Insurers that develop expertise and solutions in these areas can help their clients mitigate AI risks.

However, insurers must also be vigilant about the potential for “silent AI risk” as the technology becomes ubiquitous across industries. If AI-related risks are not specifically included or excluded in traditional insurance policies, it could lead to unexpected losses and risk accumulation in insurers’ portfolios, Swiss Re notes.

View the full report on Swiss Re’s website.