Artificial intelligence (“AI”) technology is poised to radically enhance human capabilities, although some say the technology is quietly learning how to overtake them. Meanwhile, the insurance industry has been using AI to save time, achieve consistency and improve risk mitigation. But while the industry looks forward to cost savings and better business through generative AI, some insurers have also warned policyholders about the potential risks that reliance on AI can pose. The insurers’ cautionary statements cast doubt on the integrity of their own reliance on the technology.
For example, the Attorneys’ Liability Assurance Society Ltd. (“ALAS”) announced to its law firm policyholders that ChatGPT – perhaps the leading publicly available AI platform – is “Not Ready for Prime Time.” In a bulletin issued to its policyholders, ALAS warns that law firms’ use of the technology could lead to legal malpractice claims that may not be covered under professional indemnity insurance. Although cautionary in nature, this type of warning from an insurance company suggests that coverage will be denied for claims arising from the technology’s use. It also shows the differing perceptions of AI, even among members of the same industry.
According to insurance industry executives, AI has already been implemented for claims automation, product development, fraud detection and chatbots that interact with employees and customers. AI technology can process large amounts of claims information in a fraction of the time it currently takes humans and can eliminate human bias in the underwriting process. But the insurance industry’s expected reliance on AI is undercut by the same industry’s warnings about the inherent flaws in AI and its lack of reliability, especially when it comes to timely and up-to-date information. For example, ChatGPT readily admits that its information database is only current through early 2021. Other AI databases may contain more recent information, but even they are only as current as the most recent data upload. Contrast that with the presumptive knowledge and information available to a qualified claims adjuster, who is required to keep abreast of current events and changes in laws and applicable regulations that affect the handling of a claim.
Lawyers and law firms are similarly obliged to keep abreast of relevant information. Yet, unlike insurance companies, lawyers looking to generative AI for innovation must do so while navigating the ethical rules that govern their profession. The American Bar Association Model Rules of Professional Conduct require attorneys to provide competent and transparent representation while protecting client information. There is growing awareness of the risks associated with using generative AI for legal work – in particular, whether a law firm’s integrity and client confidentiality can be protected, and whether the technology’s output is accurate. Current generations of the technology have shown a tendency to “hallucinate” – to generate text that appears plausible yet is not factual – because the models draw on billions of data points simply to predict the next word in a string of text. Yet the legal industry – like the insurance industry – is keenly aware of the benefits that the technology’s exploitation can deliver.
Regardless of the degree to which law firms and the legal industry adopt ChatGPT and generative AI, policyholders should be wary of the potential risks as well as the insurance coverage implications that will inevitably follow. In the case of claim denials, policyholders should also inquire about the extent of the insurer’s reliance on AI and, when warranted, insist on human review and analysis of the claims decision. Experienced coverage counsel can help identify discrepancies in insurer claim denials and determine potential pitfalls to avoid when insurers similarly use the technology.