Artificial intelligence, all the rage in popular culture, is not a new fad in the insurance market, where it is quietly driving operations that allow businesses to run more efficiently, at lower costs, and with fewer errors.
Still, AI is in its infancy and its rapid development means insurers, brokers and their clients should see the technology continue to spread across customer service, claims management and sales, experts agree.
Thanks to the viral popularity of ChatGPT, an AI chatbot developed by OpenAI that lets users perform tasks ranging from frivolous to serious, AI is suddenly mainstream. But letting a chatbot write a poem about a pet is not the same as letting it handle claims or steer buyers to coverage, which is why AI in insurance is being carefully implemented in select areas of commercial and personal lines.
“Six months ago, nobody was talking about this,” said John Cottongim, New York-based chief technology officer at Roots Automation, which provides technology services to the insurance industry. “We are in the toddler phase of AI capabilities,” he said. “It’s going to be a wild ride over the next 10 years.”
Then it’s likely that “we’ll just be able to point the platform at your website or knowledge base, click a few buttons, and you can have a really good virtual agent live and support it,” said Bill Schwaab, San Francisco-based vice president of North America at Boost AI AS, a Norwegian company trading as boost.ai that develops conversational AI platforms for insurance companies. Even with such advances, back-end systems will still require work to maintain, he added.
AI is well-suited to claims management, the sources said, in part because of the amount of insurance data available to help it figure out which claims need immediate attention or how they can be handled with less human interaction.
“Insurance is kind of the original database business; we have relied on data and data science capabilities since we existed as an industry,” said Mano Mannoochahr, director of data and analytics at Travelers Cos. Inc. in Hartford, Conn., in a recent webinar. “The opportunity that we have, generally speaking from an AI perspective … is to be able to rethink every part of our business.”
Employers have long called for a system that would streamline the workers’ compensation claims process, said Dennis Tierney, Marsh LLC’s Norwalk, Conn.-based national director of compensation claims. In January, the broker introduced an AI-powered process to identify claims that may not have received proper attention, replacing the old method of selecting files largely by reserve amounts, he said.
Using AI, “we’re helping to identify claims that aren’t on everyone’s radar,” Mr. Tierney said. The system prioritizes them by considering such factors as jurisdiction and type of injury.
At Travelers, an AI model is trained on millions of high-resolution images of insured U.S. properties, Mr. Mannoochahr said. After a natural disaster, the model can quickly assess damage and begin the claims process, in some cases before the owner returns to the property.
“It allows us to make better decisions about where to deploy our adjusters and claims handlers and how to prepare them for the onslaught of calls that we might get,” he said.
Beyond claims, virtual agents in an AI environment can be trained to sell, Schwaab said. “The data from the virtual agent can be leveraged to do some selling on behalf of the company” by encouraging customers to consider additional products they feel they might need, he said.
Chatbot technology can use real-time analytics to gauge customer sentiment and help virtual agents make decisions, including customized product suggestions, said Mamta Rodrigues, New York-based division head of banking, financial services and insurance at Teleperformance SE, a Paris-based digital business services company.
“AI is becoming more and more accurate and further augments the digital experience, making that conversational bot that much more intelligent and personal,” said Rodrigues.
However, complex commercial lines may take longer to manage with AI, said Mr. Cottongim of Roots Automation. “Personal lines are aligned with the common natural language that we speak every day, which these algorithms are trained on,” he said.
But while the platforms can help with car damage claims, interpreting a clause in an errors and omissions policy would be a different story, Cottongim said.
“It’s quite esoteric and the algorithms are unlikely to have been trained on it,” he said, adding that fine-tuned models are expected to eventually tackle complex commercial issues.
However, it would be a mistake to discard the human touch entirely in favor of AI platforms, Rodrigues said.
“We’re doing a lot of proof-of-concepts and demonstrations for some of our big customers to show how a bot platform can generate stronger efficiency,” she said, “while not taking away the personal touch that comes from a human.”
Employers need guardrails to reduce the risks that come with chatbot use
Artificial intelligence presents both benefits and threats to risk management.
On the one hand, AI can help risk managers identify potential risks and automate routine tasks. But AI also poses risks such as algorithmic bias, lack of transparency and cybersecurity vulnerabilities.
At least that’s how ChatGPT sees it. The previous paragraphs were written by the bot on the question of whether AI is a threat or a benefit to risk management.
It is a self-assessment that is not far off, according to Karla Grossenbacher, a partner with Seyfarth Shaw LLP in Washington.
While employers are unlikely to sanction the use of ChatGPT solely to provide advice or make business decisions, they need strong policies to ensure that doesn’t happen, she said.
“Employers may not necessarily think about what they need to do to make sure it’s not abused,” Grossenbacher said. “There are things that employees can ask ChatGPT to do and they can be exploited, but there are certainly situations where you would want to prohibit it.”
When developing workplace guidelines for AI, employers should address the risk of confidential or proprietary information being released, Ms. Grossenbacher advised. There is also a risk that information from publicly available bots may be inaccurate or may be subject to copyright protection and, if used without proper credit, may constitute plagiarism, she said.
“It would be wise to ask your risk manager to weigh in on these types of issues,” Grossenbacher said.
The U.S. Commerce Department’s National Institute of Standards and Technology provides some guidance in its 42-page “Artificial Intelligence Risk Management Framework,” issued in January for organizations that design, develop or use AI systems.
The voluntary framework promotes changes in institutional culture and suggests how organizations can measure and monitor AI risks and consider the potential benefits and threats of the technology.