This post is part of a series sponsored by TransUnion.
Social and regulatory attention has increasingly used fairness as a lens to evaluate the outcomes of existing processes such as insurance underwriting. For example, a new law in Colorado, which takes effect in early 2023, will require insurers to provide analytical evidence that underwriting processes using consumer data and predictive models do not result in unfair discrimination against certain consumer groups. Credit-based insurance scores (hereinafter, insurance risk scores) are one example of the input data used in these processes.
Insurance risk scores have become essential tools for insurers seeking to underwrite quickly and accurately and to attract new business. But the relationship between credit information and insurance risk assessment is technical and complex. Most consumers are simply unaware that insurance risk scores are used in underwriting, and when they receive incomplete information about them, they may distrust their use.
This reality highlights two dimensions of fairness – fairness of outcomes and consumers' perceptions of the fairness of these methods. Both matter: insurers must be able to show that their methods do not lead to unfair results and that those methods appear fair to consumers.
Fairness testing – the need to adopt best practices
Actuarial science and predictive modeling are decades old and well honed. The insurance industry has become very good at building models that are empirically sound, demonstrably predictive and stable. Fairness testing, by contrast, is still in its infancy in insurance research and practice, though it is more mature in academia.
Much of the current focus is on race, ethnicity and income. However, insurers and consumer reporting agencies are legally barred from collecting or storing information about race and ethnicity, which makes it very difficult to analyze fairness along these axes. The industry will need to evaluate its options for capturing or estimating these characteristics.
Next, there must be a standard definition of fairness. From a computer science and predictive modeling perspective, a fair outcome is one in which predicted outcomes correspond to actual outcomes within some measure of statistical significance. Others would say that fairness means equal treatment of results across the whole population. As the industry works to define fairness, both variation in actual results and population profile should be taken into account – a behaviorally adjusted fair outcome.
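To make the tension between these two definitions concrete, here is a minimal sketch on synthetic data (all group labels, loss rates and thresholds are hypothetical, not drawn from any real scoring model). A model can be well calibrated within each group – predictions match actual loss rates – while still producing different average predictions across groups, so it passes one fairness definition and fails the other:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic portfolio: two consumer groups whose actual
# loss rates genuinely differ (10% vs. 15%).
n = 10_000
group = rng.integers(0, 2, size=n)            # 0 = group A, 1 = group B
true_risk = np.where(group == 0, 0.10, 0.15)  # actual loss probability
predicted = true_risk + rng.normal(0, 0.01, size=n)  # calibrated model
outcome = rng.random(n) < true_risk           # observed losses

for g in (0, 1):
    mask = group == g
    # Calibration view of fairness: predictions track actual outcomes
    # within each group.
    print(f"group {g}: mean prediction {predicted[mask].mean():.3f}, "
          f"actual loss rate {outcome[mask].mean():.3f}")

# Parity view of fairness: average predictions differ across groups,
# because the underlying loss rates differ.
gap = abs(predicted[group == 0].mean() - predicted[group == 1].mean())
print(f"prediction gap between groups: {gap:.3f}")
```

The "behaviorally adjusted" framing above amounts to asking which of these gaps – miscalibration within a group, or unequal averages across groups – a fairness standard should penalize.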
Consumer perceptions of fairness
On consumers' perceptions of fairness, one of the leading academic researchers in this area is Stanford University's Dr. Barbara Kiviat, who studies social attitudes toward credit scoring. In particular, she has developed the concept of logical relatedness in the use of credit scores: consumers oppose or disapprove of the application of credit scores to areas of their lives where they do not see a clear connection between the two. Many consumers and legislators currently do not see credit as logically related to insurance, which leads them to view insurance risk scores as unfair.
Dr. Kiviat points out, however, that "logically unrelated, morally heterogeneous data do not seem so bad if their use promises to expand the market to previously excluded individuals." In other words, even if consumers and policymakers do not see a logical connection between insurance risk scores and insurance pricing, might they appreciate the scores' role in expanding the market?
Another important finding in Dr. Kiviat's research is that consumers are more likely to find a credit-based score fair if they know it does not misclassify risks. As TransUnion demonstrated with its CARES Act accommodation, insurance risk scores can be tailored to exclude factors beyond a consumer's control while remaining stable and predictive.
An opportunity to raise awareness and educate consumers
Based on Dr. Kiviat's research, for anyone to accept the use of consumer data such as insurance risk scores, they must be given a clear causal theory that explains why and how the scoring system works. Insurers have an opportunity to provide that clarity by taking a number of steps to raise awareness and educate consumers about the use of credit information in underwriting, including:
- How and why credit information is used
- The benefits and opportunities they provide consumers
- Protections and rights given to consumers in the current process
What would a consumer education campaign on insurance risk scores look like in practice? TransUnion specifically recommends that insurers:
- Provide consumers with an explanation of what insurance risk scores are, how they differ from financial credit scores and how insurers use them in combination with other variables to underwrite policies.
- Explain to consumers why insurance risk scores are used in underwriting, focusing on the benefits to consumers.
- Provide consumers with information about the protections and rules governing insurance risk scores, including the rights consumers have to access, dispute and control the use of their personal credit information.
- Describe to consumers the credit behaviors that may improve their scores. Equipped with this information, consumers can control and manage their personal credit history, which can lead to greater financial inclusion and lower costs.
Finally, insurers must take this advocacy to local and national legislators, too. Teams working on insurance risk-based products should work hand in hand with regulators to identify potential problems. Now is a good time to make your Government Relations colleagues aware of this topic and ensure they are engaging on your company's behalf.