AI, Regulation, Definitions
What is AI?
If you’re using math, you might be using AI.
Elaine Gibbs, March 13, 2024
Regulation of the use of AI by insurance companies has begun, with notable movement from the National Association of Insurance Commissioners (NAIC) as well as the Colorado Division of Insurance and the New York State Department of Financial Services.
However, the term “AI” can mean anything from simple business logic to ChatGPT. So, it’s worth asking what exactly insurance regulators have indicated is in scope for this new regulation.
The answer: What’s considered “AI” is very broad, with anything driven by math, as opposed to human intuition, potentially falling under the definition.
The terms
First, consider the wide range of terms used across the various releases (as of February 2024):
NAIC Model Bulletin: AI, AI Systems, Algorithms, Predictive Models, Machine Learning
New York Insurance Circular Letter: AI Systems
California Bulletin 2022-5: AI, Algorithmic Data, Big Data
Colorado SB21-169 and related regulations: Algorithms, Predictive Models
Regulators are not just talking about AI, but a wide range of related concepts.
The definitions
Next, consider the broad definitions from the NAIC, New York, and California:
The NAIC defines:
“AI System” as “a machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, content…, or other output influencing decisions made in real or virtual environments. AI Systems are designed to operate with varying levels of autonomy”
“Predictive Model” as “the mining of historic data using algorithms and/or machine learning to identify patterns and predict outcomes that can be used to make or support the making of decisions”
New York defines “AI System” closely in line with the NAIC’s language
California defines “Big Data” as “extremely large data sets analyzed to reveal patterns and trends”
With definitions this broad and general, any sort of business logic built on data, as opposed to human intuition, could qualify.
Implications (or, Are GLMs in Scope?)
Perhaps it is best to ask what is likely out of scope. From our reading, data visualizations and descriptive analysis fall firmly outside these definitions. Traditional actuarial tables and simple if-then business logic might technically be in scope, but will presumably undergo somewhat less scrutiny given their long history of use.
That leaves any other math or algorithm as likely in scope, including long-used techniques like linear and logistic regression (including GLMs). Needless to say, more recently developed and advanced machine learning technology, such as generative AI, may receive the most scrutiny.
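To make concrete just how simple a model can be and still meet these definitions, here is a minimal sketch of a logistic regression (a GLM with a logit link) fit by gradient descent. The data and the claim-prediction framing are entirely hypothetical, invented for illustration; the point is that even a two-parameter model like this "mines historic data using algorithms to predict outcomes," squarely matching the NAIC's "Predictive Model" language.

```python
import math

# Hypothetical toy data: x = number of prior claims, y = 1 if a new claim occurred.
X = [0, 0, 1, 1, 2, 3, 4, 4]
y = [0, 0, 0, 1, 1, 1, 1, 1]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit a one-feature logistic regression (a GLM) with plain gradient descent.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    for xi, yi in zip(X, y):
        p = sigmoid(w * xi + b)   # predicted claim probability
        err = p - yi              # gradient of the log loss w.r.t. the linear term
        w -= lr * err * xi
        b -= lr * err

def predict(prior_claims):
    """Predicted probability of a claim given prior-claim count."""
    return sigmoid(w * prior_claims + b)
```

Despite its simplicity, a model like this, if used in underwriting or pricing, would appear to fall within the regulators' definitions just as a deep neural network would.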
Presumably, these distinctions will be clarified as the regulations are actually put into practice.
Parting Thought: Use Case Matters
Finally, it’s worth highlighting that the various regulations to date do not indiscriminately target AI and “math,” but rather particular use cases, those with potential for consumer harm. We further explore the regulators’ key concerns here.