Life’s Not Fair. Is Life Insurance?

The rapid adoption of artificial intelligence techniques by life insurers poses increased risks of discrimination, and yet, regulators are responding with a potentially unworkable state-by-state patchwork of regulations. Could professional standards provide a faster mechanism for a nationally uniform solution?

By Mark A. Sayre, Class of 2024

Introduction

Among the broad categories of insurance offered in the United States, individual life insurance is unique in a few key respects that make it an attractive candidate for the adoption of artificial intelligence (AI).[1] First, individual life insurance is a voluntary product, meaning that individuals are not required by law to purchase it in any scenario.[2] As a result, in order to attract policyholders, life insurers must convince customers not only to choose their company over competitors but also to choose their product over other purchases that might compete for a share of discretionary income (such as the newest gadget or a family vacation). Life insurers can, and do, argue that these competitive pressures provide natural constraints on the industry’s use of practices that the public might view as burdensome, unfair, or unethical, and that such constraints reduce the need for heavy-handed regulation.[3]

Second, the most common form of individual life insurance is a long-duration product with guaranteed premiums, meaning that the insurer has a single opportunity to set a given individual’s premium, through a process known as risk selection,[4] and that premium will then remain unchanged for ten years, twenty years, or possibly the remainder of the individual’s lifetime. Accordingly, life insurers are constantly evolving their risk selection practices in order to reduce the risk of losses resulting from misclassification or adverse selection. Adding new data categories to risk selection practices can generate controversy[5] and raise questions about the right balance between fairness to the individual (ensuring access to coverage through the risk pooling and risk diversification functions of insurance) and fairness to the group at large (avoiding excessive subsidization of higher risk individuals by lower risk individuals). Regulators have generally been more comfortable with medical data than with non-medical data.[6]

The need to attract customers with a quick and easy buying experience in a competitive market while protecting against the risk of unrecoverable future losses requires both speed and accuracy, which makes life insurance risk selection a ripe space for the adoption of AI. In fact, many life insurers have already adopted AI specifically for this task.[7] Although AI can be used at many points in the life insurance customer journey, from improving prospect targeting for marketing to rapidly adjudicating claims on death, the use of AI to improve the speed and accuracy of risk selection decisions is a key focus for regulators.[8] Regulators’ concern around risk selection practices in particular may reflect a fear that the industry is, perhaps unintentionally, circumventing the existing regulatory framework, which was enacted prior to the development of AI techniques. To understand this perspective, a brief history of both industry practice and insurance regulation is required.

A History of Increased Price Differentiation

Early forms of life insurance often formed around fraternal associations and fraternal beneficiary societies and featured little to no price differentiation between policyholders.[9] However, these early schemes quickly suffered from underfunding, and a more robust mechanism for balancing contributions against expected benefits was required. This need led to the development of one of the earliest known forms of the mortality table.[10] As the relationship between age and mortality rate became more apparent, the fairness of asking young and old alike to contribute the same amount to the scheme became increasingly hard to defend.[11] Over time, the ability to measure the mortality of sub-populations with increasing accuracy resulted in further price differentiation on a growing range of characteristics.[12] In some cases, sub-populations were excluded entirely or were charged significantly higher rates based on incomplete or improper statistical analysis.[13]

The exclusion of Blacks from insurance after the Reconstruction Era led to one of the earliest antidiscrimination laws in insurance. In 1884, Massachusetts passed a law prohibiting race-based rates or premiums for life insurance, drawing a clear line in favor of individual notions of fairness.[14] However, the equitable treatment of Blacks in insurance pools triggered concerns about subsidization, which in the view of critics violated the group conception of fairness.[15] By the turn of the 20th century, insurance companies had found a new answer to this perceived problem of forced subsidization. The publication of Race Traits and Tendencies of the American Negro, a statistical analysis of mortality trends by race conducted by a statistician at the Prudential Life Insurance Company, provided purported support for race-differentiated pricing.[16] Despite its numerous shortcomings, the analysis was widely accepted by companies and regulators as justification for the new “Jim Crow” era of life insurance; even states that had led the charge in ensuring access during the Reconstruction Era repealed their laws to support this new form of “fair” (i.e., seemingly statistically justified) discrimination.[17]

Race-differentiated pricing in life insurance finally came to an end during the Civil Rights Era, led by both regulators and industry.[18]

Current Antidiscrimination Law and the Threat of AI

Antidiscrimination law in the life insurance industry has not evolved much since the Civil Rights Era. It is governed predominantly at the state level through statutes prohibiting “unfair discrimination.”[19] However, surprisingly few states explicitly prohibit the use of race in life insurance.[20] Instead, unfair discrimination generally means any practice that is not actuarially justified (i.e., correlated with expected risks such as mortality and morbidity).[21] Given that mortality rates are known to vary by race,[22] current antidiscrimination law does not appear to prohibit the use of race in life insurance. Thus, it seems that companies today refrain from using race primarily because of social norms, not the law.[23]

As the industry adopts more advanced AI techniques, these techniques will be able to sort through a vast array of characteristics of a population in order to determine which characteristics are most predictive of future outcomes. Complex AI techniques feature little to no inherent “explainability,” meaning that, without concerted effort, an AI system’s developers may not be able to explain exactly why it has latched onto certain variables in certain ways to make its predictions.[24] However, developers will be able to determine whether the AI is successfully predicting differences in risk at a statistically significant level; in other words, whether the AI’s results are actuarially justified. Thus, a company deploying a complex AI system will be able to satisfy its regulatory burden by showing that no prohibited input variables, such as race, were used (in the few states where such prohibitions exist) and that the model’s results are actuarially justified, even if the company is not entirely sure what is happening within the AI itself. Because mortality rates vary in the aggregate by protected classes such as race,[25] an AI that is able to successfully infer, or proxy, a protected class from otherwise neutral input variables will be effective at predicting relative mortality risk. This form of potential discrimination, referred to in the legal literature as “unintentional proxy discrimination,”[26] is not currently addressable through existing antidiscrimination laws.[27]
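To make the mechanism concrete, consider the following minimal sketch in Python on synthetic data. The variables (such as the hypothetical zip_factor) and every coefficient are invented for illustration and do not reflect any insurer’s actual model:

```python
# Minimal, self-contained sketch of unintentional proxy discrimination
# on synthetic data. Every variable and coefficient here is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 50_000

# Hypothetical protected class; the model below never receives it.
race = rng.integers(0, 2, n)

# "Neutral" inputs that happen to correlate with the protected class,
# e.g., a geographic score shaped by residential segregation.
zip_factor = race + rng.normal(0.0, 0.5, n)
income = 50 - 10 * race + rng.normal(0.0, 5.0, n)

# A mortality outcome that, in the aggregate, differs by protected class.
logit = -3 + 0.8 * race + 0.02 * (50 - income)
death = rng.random(n) < 1 / (1 + np.exp(-logit))

# Train only on the "neutral" inputs.
X = np.column_stack([zip_factor, income])
scores = LogisticRegression().fit(X, death).predict_proba(X)[:, 1]

# The risk scores reconstruct the protected class far better than chance
# (an AUC of 0.5 would mean no signal), even though race was never an input.
print("AUC of risk scores as a predictor of race:",
      round(roc_auc_score(race, scores), 3))
```

Because its scores genuinely predict the synthetic mortality outcome, a model like this could pass an actuarial justification review while still, in effect, reproducing race-based pricing.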

Current Regulatory Action Risks an Unworkable State-by-State Patchwork

Regulators are aware of the risk that AI may circumvent current antidiscrimination laws and are actively working to identify new regulatory actions. In August 2020, the National Association of Insurance Commissioners (NAIC) unanimously adopted guiding principles on artificial intelligence, which included a specific reference to the risk of proxy discrimination by AI systems but stopped short of proposing new regulations.[28] The NAIC has also established a Special Committee on Race and Insurance, which met in November 2021 to learn more about potential regulatory approaches to address unintentional proxy discrimination by AI systems.[29] The approaches being considered fall into two general groups: (1) expanding a disparate impact framework to the insurance sector; and (2) incorporating race directly into AI models and removing any variables whose predictive power is lessened by the introduction of race (the “proxy discrimination” approach).[30] While the first approach derives predominantly from the legal sphere, the second derives from the AI and data science literature. Both approaches may require the collection of race data, which insurers do not currently collect.[31] The first approach requires significantly less statistical and modeling expertise than the second and can apply equally to rule-based and model-based algorithms, while the second applies only to model-based algorithms. Whichever approach regulators select will therefore have significant operational impacts on companies’ ability to adhere to this new area of regulation.
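The two approaches imply different statistical diagnostics. The sketch below, again using invented synthetic data, shows one plausible form of each; the decline cutoff, variable names, and modeling choices are assumptions for illustration rather than any regulator’s prescribed methodology:

```python
# Illustrative sketch of the two candidate diagnostics on synthetic data;
# the cutoff and variable names are assumptions, not regulatory standards.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 50_000
race = rng.integers(0, 2, n)                      # protected class
zip_factor = race + rng.normal(0.0, 0.5, n)       # proxies race
income = 50 - 10 * race + rng.normal(0.0, 5.0, n)
logit = -3 + 0.8 * race + 0.02 * (50 - income)
death = rng.random(n) < 1 / (1 + np.exp(-logit))
X = np.column_stack([zip_factor, income])

# (1) Disparate-impact-style test: compare adverse outcome rates by group.
scores = LogisticRegression().fit(X, death).predict_proba(X)[:, 1]
declined = scores > np.quantile(scores, 0.8)      # hypothetical cutoff
ratio = declined[race == 1].mean() / declined[race == 0].mean()
print("Decline-rate ratio (group 1 vs. group 0):", round(ratio, 2))
# A ratio well above 1.0 flags a disparity that would warrant review.

# (2) Proxy-discrimination-style test: refit WITH race and compare how
# much predictive weight each "neutral" variable retains.
base = LogisticRegression().fit(X, death)
ctrl = LogisticRegression().fit(np.column_stack([X, race]), death)
print("zip_factor coefficient without race:", round(base.coef_[0][0], 3))
print("zip_factor coefficient with race:   ", round(ctrl.coef_[0][0], 3))
# A coefficient that collapses once race enters the model suggests the
# variable's predictive power derived from proxying the protected class.
```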

Despite the NAIC’s focus on the topic, some state regulators have chosen to move forward with new regulations independently of their peers. For example, Colorado recently amended its unfair discrimination statute to include race and other protected characteristics and to address discrimination by AI.[32] The bill prohibits the use of external consumer data and algorithms that result in unfair discrimination and empowers the insurance commissioner to promulgate rules providing more detailed standards.[33] The bill further requires that companies provide information to the commissioner on their use of such data and algorithms, establish risk management practices to determine the potential for unfair discrimination, and provide the commissioner with the results of any unfair discrimination analysis.[34] It is not clear which, if either, of the regulatory approaches outlined above may be adopted by the Commissioner, although some have interpreted the legislation as more aligned with the proxy discrimination approach.[35]

Separately, the District of Columbia is currently considering broader AI legislation that would include the insurance industry.[36] Titled the “Stop Discrimination by Algorithms Act of 2021,” the bill would prohibit covered entities from using AI techniques to make eligibility determinations regarding important life opportunities, such as access to credit, housing, employment, or insurance, on the basis of race and other protected classes.[37] In addition to prohibiting direct discrimination, the bill would also prohibit covered entities from utilizing any practice that has an indirectly discriminatory effect, similar to the disparate impact approach referenced above.[38] Further, the act would establish new notice and reporting requirements promoting greater transparency around companies’ use of AI.[39]

The recently introduced American Data Privacy and Protection Act would establish a uniform federal baseline for addressing potential discrimination by AI.[40] However, in the bill’s current form, life insurers may be exempt from such requirements.[41]

The pursuit of new regulations by individual regulators, varying in breadth, approach, and requirements, greatly increases the risk that future AI regulation will be an unworkable state-by-state patchwork. Significant variation will increase confusion, reduce efficiency, and perhaps even undermine the effectiveness of regulation. Moreover, AI techniques generally require large data sets, so AI models tend to be developed and implemented at a national rather than a state level. The mismatch between nationalized AI programs and state-specific regulation will increase the complexity and cost of adherence.

A Faster, National Solution: Professional Standards

One universal component of current state antidiscrimination laws is that risk selection practices must be actuarially justified. Generally, as discussed above, this means that the practice must be shown to be predictive of expected future claims. A less frequently discussed component of actuarial justification is that it requires the professional judgment of an actuary. Furthermore, unlike data scientists and other AI modelers, actuaries are part of a recognized profession governed by organizations that oversee education, credentialing, qualification, and disciplinary standards. Actuaries who perform actuarial services in the United States are subject to the qualification standards outlined by the American Academy of Actuaries.[42] Additionally, actuaries must adhere to the Actuarial Standards of Practice (ASOPs) when performing actuarial services.[43]

Because state regulations require that every insurance company appoint an actuary, generally for the purpose of filing required annual statements,[44] every life insurance company in the United States has access to the services of an actuary. The guarantee that life insurance companies have access to an actuary, and the requirement that such actuaries adhere to the ASOPs, indicate that the establishment of a new ASOP focused on discriminatory AI practices in life insurance risk selection could provide a faster route to uniform national regulation.[45] Moreover, drafting a new ASOP from scratch would not be required, as “ASOP No. 12, Risk Classification (for All Practice Areas)” already covers actuarial services related to “designing, reviewing or changing risk classification systems” used in life insurance.[46] The standard, under a section titled “Considerations in the Selection of Risk Characteristics,” indicates that the actuary should consider the following factors when deciding which risk characteristics (such as data inputs or AI models) to include in a risk selection program:


  1. The relationship between risk characteristics and expected outcomes;
  2. Causality;
  3. Practicality;
  4. Applicable law;
  5. Industry practices; and
  6. Business practices.[47]


This list of considerations could be expanded to include a “fairness” element, which would involve either the disparate impact approach or the proxy discrimination approach, as referenced above.

Although an update to ASOP No. 12 would be significantly faster than the current regulatory approach and would ensure national uniformity, it would also suffer from a weaker enforcement mechanism. Currently, actuaries who fail to adhere to the Actuarial Standards of Practice may be subject to disciplinary measures by the Actuarial Board for Counseling and Discipline, including the removal of an actuary’s credentials.[48] However, there is currently no mechanism within the system of professional standards outlined above to punish or fine an insurer that engages in practices that fail to adhere to the ASOPs.[49] When compared to a formal regulation, which may include statutory fines or a private right of action for impacted individuals, amending the ASOPs would give companies less of an incentive to ensure that they are following the rules.

Conclusion

The rapid expansion of AI systems within life insurance companies poses a significant threat that current regulatory mechanisms will prove insufficient to prevent unfair discrimination by AI. While most regulators recognize this challenge, the slow pace of regulatory action by national entities, such as the NAIC, has led some states to pursue new regulatory action on their own with others likely to follow. This current path will result in a patchwork of inconsistent state regulations that will be difficult to reconcile with nationalized AI programs.

Partnering with the Actuarial Standards Board to amend the current set of ASOPs would provide regulators with a faster and more uniform mechanism for adopting new standards. Additionally, the use of ASOPs would provide regulators and the industry with a more flexible system within which necessary future changes could be made quickly. Although the enforcement mechanisms under the ASOPs are significantly weaker than those offered by regulation, regulators may be able to find innovative ways to address this issue by partnering directly with the Actuarial Standards Board and other actuarial organizations or by expanding professional negligence jurisprudence to embrace ASOPs more broadly as the professional standards of care for actuarial services.

[1] The Insurance Information Institute groups insurance into three main sectors: (1) Property/Casualty, including auto, home, and commercial insurance; (2) Life/Annuity; and (3) Private Health Insurance. See Insurance Information Institute, Insurance Handbook, https://www.iii.org/publications/insurance-handbook/insurance-basics/overview.

[2] Compare this fact with auto insurance, which is required by law prior to registering a vehicle. See, e.g., Me. Rev. Stat. tit. 29-A § 402. Health insurance is also required by law for most individuals. See 26 U.S.C.A. § 5000A (West, Westlaw through Pub. L. No. 117-80).

[3] See American Academy of Actuaries, Issue Brief: Life Insurance and Annuities: The Impacts of Regulatory Requirements on Consumer Cost and Consumer Choice, https://www.actuary.org/content/life-insurance-and-annuities-impacts-regulatory-requirements-consumer-cost-and-consumer-choi.

[4] Risk selection, also referred to as risk classification or underwriting, involves the evaluation of numerous characteristics in order to determine whether an individual is eligible for an insurance product and, if so, the appropriate rate to charge based on the individual’s estimated level of risk. See, e.g., Telles v. Comm’r of Ins., N.E.2d 359, 360 (Mass. 1991) (“Insurance underwriting is the process by which an insurer determines whether, and on what basis, to accept a risk.”).

[5] Examples include the addition of HIV testing, questions about family history, driving history (including motor vehicle records), prescription history, and, most recently, credit data. See, e.g., Circular Letter 2019-1, N.Y. Dept. of Fin. Servs. (2019).

[6] See id. (excluding prescription history and other medical data from its scope and focusing instead on credit data, facial recognition, and social media).

[7] See, e.g., Marc Maier et al., Transforming Underwriting in the Life Insurance Industry, https://ojs.aaai.org/index.php/AAAI/article/view/4985/4858.

[8] See National Association of Insurance Commissioners, Artificial Intelligence, https://content.naic.org/cipr_topics/topic_artificial_intelligence.htm.

[9] See People v. Com. Life Ins. Co., 93 N.E. 90, 94-95 (Ill. 1910).

[10] See Daniel B. Bouk, The Science of Difference: Developing Tools for Discrimination in the American Life Insurance Industry, 1830-1930, Vol. 1, at 26-29 (2009) (Ph.D. dissertation, Princeton University).

[11] Id.

[12] For example, gender was first addressed through an “age setback” mechanism, where females of age X were charged the rate for males of age X-5, to reflect their lower mortality rate. See, e.g., Nat’l Org. for Women v. Mut. of Omaha Ins. Co., 531 A.2d 274, 277 (D.C. 1987). Another early example can be found in slave insurance, where premiums varied by occupation, with slaveowners paying more to insure slaves who were employed in coal pits, mining or on steamboats, and policies included explicit provisions governing changes in occupation. See Michael Ralph, Life . . . in the midst of death: Notes on the relationship between slave insurance, life insurance and disability, https://dsq-sds.org/article/view/3267/3100.

[13] See 2 Bouk, supra note 10, at 169-173.

[14] Mary L. Heen, Ending Jim Crow Life Insurance Rates, 4 Nw. J. L. & Soc. Pol’y 360, 363 (2009); see also Megan J. Wolff, The Myth of the Actuary: Life Insurance and Frederick L. Hoffman’s Race Traits and Tendencies of the American Negro, 121 Pub. Health Rep. 84, 84-91 (2006).

[15] 2 Bouk, supra note 10, at 175.

[16] Wolff, supra note 14.

[17] Heen, supra note 14, at 376-78 & n.127.

[18] Id. at 380-383.

[19] Ronen Avraham et al., Understanding Insurance Antidiscrimination Laws, 87 S. Cal. L. Rev. 195, 199, 232-33 (2014).

[20] Id. at 235-241.

[21] See, e.g., Me. Rev. Stat. tit. 24-A, § 2159.

[22] Elizabeth Arias et al., United States Life Tables, 2017, Nat’l Vital Stat. Reps. Vol. 68, No. 7.

[23] Avraham et al., supra note 19, at 243-44.

[24] See Ashley Deeks, The Judicial Demand for Explainable Artificial Intelligence, 119 Colum. L. Rev. 1829, 1829 (2019).

[25] Arias, supra note 22.

[26] See Anya E.R. Prince & Daniel Schwarcz, Proxy Discrimination in the Age of Artificial Intelligence and Big Data, 105 Iowa L. Rev. 1257, 1265-66 (2020) (differentiating between a scenario in which a model uses a variable that “fortuitously happens to be correlated with membership in a suspect class” and a scenario in which a model uses a seemingly innocuous or neutral variable “whose predictive power derives from its correlation with membership in the suspect class”).

[27] See id. at 1275.

[28] See Press Release, National Association of Insurance Commissioners, NAIC Unanimously Adopts Artificial Intelligence Guiding Principles, https://content.naic.org/article/news_release_naic_unanimously_adopts_artificial_intelligence_guiding_principles.

[29] See National Association of Insurance Commissioners, NAIC Special (EX) Committee on Race and Insurance 11/23/21 Meeting Agenda, https://content.naic.org/sites/default/files/call_materials/Materials%2012-01-21.pdf.

[30] Compare Mary Frances Miller, The Actuary and Social Justice, Presentation to the Casualty Actuarial Society Annual Meeting, video available at https://youtu.be/6Ai-N50N3U0 (from 6:00 to 10:33) (contrasting the current unfair discrimination standard with a disparate impact standard), with Birny Birnbaum, Proxy Discrimination and Disparate Impact in Insurance, Presentation to the NAIC Special Committee on Race and Insurance, available at https://content.naic.org/sites/default/files/call_materials/Materials%2012-01-21.pdf (at pages 59-65 of the file) (discussing modeling techniques to differentiate between disparate impact and proxy discrimination, and recommendations for mitigation).

[31] Insurers may even be prohibited by law from collecting race data. See Circular Letter 64-5, N.Y. Dept. of Fin. Servs. (1964). Additionally, the collection of race information by insurers poses significant operational and privacy challenges. In lieu of collecting race data, companies may be able to estimate or infer race through a method such as Bayesian Improved Surname Geocoding, commonly known as “BISG.” See, e.g., Clerveaux v. E. Ramapo Cent. Sch. Dist., 984 F.3d 213, 225 (2d Cir. 2021).
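As a rough illustration of how such an inference operates, the toy sketch below applies Bayes’ rule to combine a surname-based prior with a geography-based likelihood; all names and probabilities are invented placeholders, and a production implementation would draw on published Census tables.

```python
# Toy BISG-style inference; every probability below is an invented
# placeholder, not an actual Census figure.

# P(race | surname), e.g., derived from Census surname tables.
surname_prior = {
    "GARCIA": {"white": 0.05, "black": 0.01, "hispanic": 0.92, "other": 0.02},
}

# P(geography | race), e.g., each group's share residing in the tract.
geo_likelihood = {
    "36005": {"white": 0.002, "black": 0.006, "hispanic": 0.009, "other": 0.002},
}

def bisg(surname: str, geoid: str) -> dict:
    """Posterior P(race | surname, geography) via Bayes' rule."""
    prior = surname_prior[surname]
    likelihood = geo_likelihood[geoid]
    unnormalized = {r: prior[r] * likelihood[r] for r in prior}
    total = sum(unnormalized.values())
    return {r: round(p / total, 3) for r, p in unnormalized.items()}

print(bisg("GARCIA", "36005"))  # posterior probabilities summing to 1.0
```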

[32] Colo. Rev. Stat. Ann. § 10-3-1104.9 (West).

[33] Id. §§ (1)(b), (2).

[34] Id. § (3)(b)(I)-(V).

[35] See Steven A. Morelli, Colorado Law Bars Insurance Discrimination by Data https://insurancenewsnet.com/innarticle/colorado-law-bars-insurance-discrimination-by-data.

[36] See 2021 Washington DC Legislative Bill No. 558, Washington DC Council Period Twenty-Four.

[37] Id. § 4(a)(1).

[38] Id. § 4(a)(2).

[39] See generally id. §§ 6-7.

[40] H.R. 8152, 117th Cong. § 207 (2d Sess. 2022).

[41] Id. § 404(a)(2) (Covered entities, including life insurers, that are in compliance with the privacy requirements of Title V of the Gramm-Leach-Bliley Act, are deemed to be in compliance with the “related requirements” of the American Data Privacy and Protection Act).

[42] See American Academy of Actuaries, Qualification Standards for Actuaries Issuing Statements of Actuarial Opinion in the United States § 2. Recent changes include the addition of a minimum of one hour per year of continuing education on “bias topics.” Id.

[43] See Actuarial Standards Board, Actuarial Standard of Practice 1 (“The ASB is vested by the U.S.-based actuarial organizations with the responsibility for promulgating ASOPs for actuaries rendering actuarial services in the United States. Each of these organizations requires its members . . . to satisfy applicable ASOPs when rendering actuarial services in the United States.”).

[44] See, e.g., Nev. Admin. Code 681B.170(1).

[45] ASOPs have been recognized by courts in determining the standard of care that an actuarial firm owes to its client when performing actuarial services. See, e.g., Milliman, Inc. v. Md. State Ret. & Pension Sys., 25 A.3d 988, 1005-07 (Md. 2011).

[46] Actuarial Standards Board, Actuarial Standard of Practice 12, § 1.2.

[47] See id. § 3.2.

[48] See Rules of Procedure for the Actuarial Board for Counseling and Discipline at 1-2, http://www.abcdboard.org/wp-content/uploads/2017/02/ABCD-Rules-of-Procedure-Revised-2014.pdf (“The [Actuarial Board for Counseling and Discipline] within its jurisdiction has authority to . . . [r]ecommend disciplinary action against an actuary to any participating organization of which the actuary is a member, recognizing that authority to discipline members rests exclusively in the participating organizations . . . .”).

[49] However, courts’ recognition of ASOPs as the applicable standards of care for professional negligence claims, see Milliman, Inc., 25 A.3d at 1005-07, may apply sufficient pressure to actuarial firms such that stronger enforcement mechanisms are unnecessary.