Monday, September 2, 2019

Harnessing the Power of Data Logistics & Artificial Intelligence in Insurance and Risk Management


Data Logistics and Artificial Intelligence in Insurance and Risk Management – Calls for Action or an Industry in Transition

Data is quickly becoming the most valuable asset in the insurance sector, given the tremendous volumes generated in our digital era. Simultaneously, Artificial Intelligence (AI), which harnesses big data and complex structures with Machine Learning (ML) and other methods, is becoming more powerful. From this development, insurers expect more efficient processes, new product categories, more personalized pricing, and increasingly real-time service delivery and risk management.
Given the many leverage points in insurance, it is surprising that AI-driven digitization is not evolving more rapidly. When, according to a recent Gartner[2] study, 85% of data science projects fail, how can insurance companies make sure that their projects are among the successful ones[3]? This article and the corresponding White Paper[4] present a study conducted mainly within the Swiss and German insurance industry[5], tackling a key business problem.
Legacy infrastructure, missing interoperability, and a lack of comprehensive knowledge about AI and its use cases hinder the adoption of advanced, AI-based, ethical insurance and risk schemes.
Take the currently long, resource-intensive, error-prone process of underwriting as a case of applying AI in risk management. Underwriting will benefit massively from AI-based automation, partially because technologies such as Natural Language Processing (NLP) are able to process the increasing volume of text-based and other unstructured data. Besides, AI enables underwriters to assess the increasingly complex risks of our time, such as cyber security or climate change risks, often more precisely, but certainly much faster than humans. Still, sophisticated AI-powered risk models are not yet widely observed in the insurance market. According to the results of our study, AI adoption is low and slow, due to data-centered, methodological and cultural issues.
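To make the NLP point tangible, here is a minimal sketch of how free-text underwriting submissions could be triaged with a simple classifier; the toy submissions, the labels and the scikit-learn pipeline are illustrative assumptions, not the approach of any insurer in our study.

```python
# Minimal, illustrative sketch: triaging free-text underwriting submissions
# with a simple NLP pipeline. Toy data and labels are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

submissions = [
    "Warehouse with automatic sprinkler system, no prior claims",
    "Chemical plant, two fire incidents in the last three years",
    "Office building, recently renovated, certified alarm system",
    "Data center, outdated cooling, repeated power outages reported",
]
labels = ["standard", "refer_to_underwriter", "standard", "refer_to_underwriter"]

# TF-IDF features plus a linear classifier: a deliberately simple baseline
# that could later be replaced by more advanced NLP models.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(submissions, labels)

new_submission = ["Logistics hub with sprinkler system and no recorded incidents"]
print(model.predict(new_submission))  # e.g. ['standard']
```

A baseline like this mainly illustrates the workflow; in practice, richer language models and far more labeled submissions would be needed.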
1. In terms of data, several problems stand out. While many AI applications rely on masses of data, large, clean data sets are hardly available at insurers. IT and data systems at insurers are heterogeneous and hardly interoperable. There is evidence that AI will benefit tremendously from the growing data source of the industrial Internet of Things (IoT), but insurers currently lack ways of identifying information in the breadth and depth of sensor data. IoT enhances AI as an alternative data source with characteristics that complement existing data, e.g. by allowing insurers to monitor the state of insured assets or processes in real time. If integrated well with contractual and claims data, AI can draw on a much richer context around insured subjects than today (Figure 1). IoT data supports the risk management of existing insurance contracts (e.g. in parametric insurance), the risk estimation for underwriting new contracts, and claims management (e.g. the last seconds in a car’s drive recorder before an accident).

Figure 1: The relation between Artificial Intelligence (AI) and the Internet of Things (IoT)
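As a minimal sketch of the data integration idea behind Figure 1, the snippet below joins invented IoT sensor readings with contract and claims records into one per-contract context; all column names and values are made up for illustration.

```python
# Illustrative sketch: enriching contract and claims data with IoT sensor
# readings to build a richer per-contract risk context (all values invented).
import pandas as pd

contracts = pd.DataFrame({
    "contract_id": [1, 2],
    "insured_asset": ["truck fleet", "warehouse"],
    "annual_premium": [12000, 8000],
})

claims = pd.DataFrame({
    "contract_id": [1],
    "claim_amount": [4500],
})

sensors = pd.DataFrame({
    "contract_id": [1, 1, 2, 2],
    "reading": ["harsh_braking", "speeding", "temperature_ok", "humidity_high"],
})

# Aggregate sensor events per contract, then join everything into one view
# that an AI model could use as additional context around the insured subject.
sensor_summary = (
    sensors.groupby("contract_id")["reading"]
    .apply(list)
    .rename("sensor_events")
    .reset_index()
)
context = (
    contracts
    .merge(claims, on="contract_id", how="left")
    .merge(sensor_summary, on="contract_id", how="left")
)
print(context)
```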
2. Regarding methods, the devil is in the details. Whereas popular and hyped methods such as Deep Learning lack real impact in most insurance settings, Bayesian modeling (to tackle small-data problems) and causal modeling (to select interventions that test putative causal relationships, and to improve decisions by leveraging knowledge of the causal structure) have the power to step in. To process time or event series from IoT sensors, methods such as recurrent networks are suited to forecasting IoT-backed risk indicators. State-space methods apply if the time series are aggregated into a state of a monitored asset or process.
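To illustrate the recurrent-network idea, the following sketch trains a small LSTM on a synthetic sensor series to forecast the next value of an IoT-backed risk indicator; the data, window length and layer sizes are assumptions made purely for illustration.

```python
# Illustrative sketch: forecasting an IoT-backed risk indicator with a small
# recurrent network (LSTM). The sensor series is synthetic; sizes are arbitrary.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.standard_normal(500)

window = 20  # number of past readings used to predict the next one
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., np.newaxis]  # shape: (samples, time steps, features)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(16),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, verbose=0)

# One-step-ahead forecast of the risk indicator from the latest sensor window
next_value = model.predict(series[-window:].reshape(1, window, 1), verbose=0)
print(float(next_value[0, 0]))
```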
3. Culture-wise, many companies talk about being “big-data-driven”, yet the actual number of firms acting on their culture and organization is much smaller. Digital cultures in organizations are often too hierarchical, lack cross-functionality and are organized in a top-down way. Many data science projects fail because they start by searching for a problem to be solved with a stipulated (AI) method rather than with the business problem that deserves the most attention.
Insurers ought to resolve data-related, methodological and cultural issues to boost their entire digital transformation journey, not just AI integration. We propose the following:
  • Resolving data issues goes hand in hand with a shift of mindset away from complex integrated solutions towards an agile orchestration of micro-services. Insurers can undertake small initial steps to create an initial return on investment and subsequently fragment monolithic data systems. This will result in modular, interoperable data systems that make data sources accessible to any application, even if the concrete use case and application are still unknown (Figure 2). In lieu of designing complex data processes, companies ought to focus on defining a few simple access rules for the company-internal data space and a secure but accessible interface to external data providers and aggregators (a minimal sketch of such a data-access service follows after this list).
Figure 2: Modular, semantized data lake architecture versus traditional enterprise data warehouse

  • Resolving methodological issues first necessitates developing a clear understanding of the desired outcomes of specific AI applications. This must be triggered by a careful assessment of which process is appropriate for a given AI technique, and of which data is truly required to deliver a working AI solution. Not least, it must be checked whether the data points are continuously available, and of sufficient quality, at the insurer or at the external sources it can access. The expected volume of data determines whether parameter-intensive methods such as Deep Learning are applicable or whether algorithms with a lower need for data are preferable. A further ingredient for deciding which algorithm class to use in development, or which solution to buy on the market, is the necessary level of interpretability of the AI model. Compliance factors such as the GDPR and ethical values are to be accounted for.
  • Resolving cultural issues emerges from sorting out data issues. Helpful initiatives aim at bottom-up enablement, democratizing data access for employees rather than communicating digital values top-down. Insurance companies should empower their employees to interact with data and may experience growing engagement with digital transformation when exposure to data translates into successful projects. Cultural change is key to evolving business models from product sales to providing solution ecosystems to customers (e.g. “underwriting a commercial asset insurance” vs. “offering a risk management solution to clients that helps prevent losses”).
  • Ethical development of and responsibility for AI by insurers is crucial for long-term business success, and thought leadership progresses steadily. Insurance companies may consider taking up elements from frameworks such as the recently published Algo.Rules[6], to find answers to ethical questions on AI and to keep their social license to operate. We envision a future where risk management increasingly focuses on assessing the risks of machine errors, as machines take over decision-making along insurers’ value chains. Would machines also be of crucial importance on the meta level, i.e. not only for risk evaluation in underwriting, but also for assessing the AI employed in underwriting processes? Do we wish to preserve a human element[7] in the game? Would we want insurance companies to be sufficiently powerful to dictate behavioral norms,[8] e.g. nudging customers into positively influencing their scoring so that a constantly monitoring insurer does not cancel a policy when sensors record “reckless” behavior?
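To illustrate the modular data-service idea from the first bullet point above, here is the minimal sketch of a data-access micro-service announced there; the FastAPI framework, the endpoint layout and the in-memory data are illustrative assumptions rather than a recommended architecture.

```python
# Illustrative sketch of a small data-access micro-service exposing contract
# data behind a simple, well-defined interface. FastAPI, the endpoint layout
# and the in-memory "store" are assumptions chosen purely for illustration.
from fastapi import FastAPI, HTTPException

app = FastAPI(title="Contract data service")

# Stand-in for a data source hidden behind the service boundary
CONTRACTS = {
    "C-100": {"insured_asset": "truck fleet", "annual_premium": 12000},
    "C-200": {"insured_asset": "warehouse", "annual_premium": 8000},
}

@app.get("/contracts/{contract_id}")
def get_contract(contract_id: str):
    """Return a single contract; consumers never touch the underlying store."""
    contract = CONTRACTS.get(contract_id)
    if contract is None:
        raise HTTPException(status_code=404, detail="Contract not found")
    return contract

# Run locally with: uvicorn contract_service:app --reload
```

The point is the boundary: consuming applications see a small, stable interface, while the data store behind it can be replaced or federated without touching any of them.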
The insurance companies we studied are positioned quite heterogeneously with regard to AI maturity, with the entire industry actively exploring opportunities in their respective focus areas. Productive examples are parametric insurance products, e.g. in logistics or agriculture, where contract conditions are triggered by sensor signals. Forward-looking analytics in such contracts or for underwriting is hardly used today.
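As a minimal sketch of how such a sensor-triggered, parametric condition might look in code (the threshold, the sensor quantity and the payout amount are invented for illustration):

```python
# Illustrative sketch of a parametric insurance trigger: a payout is owed as
# soon as a sensor-derived index crosses a contractual threshold.
from dataclasses import dataclass

@dataclass
class ParametricContract:
    threshold_mm_rain: float   # contractual trigger, e.g. cumulative rainfall
    payout: float              # fixed payout owed once the trigger fires

def evaluate_trigger(contract: ParametricContract, observed_mm_rain: float) -> float:
    """Return the payout owed for the observed sensor reading (0 if no trigger)."""
    return contract.payout if observed_mm_rain >= contract.threshold_mm_rain else 0.0

crop_cover = ParametricContract(threshold_mm_rain=120.0, payout=50_000.0)
print(evaluate_trigger(crop_cover, observed_mm_rain=135.0))  # 50000.0
print(evaluate_trigger(crop_cover, observed_mm_rain=80.0))   # 0.0
```

Forward-looking analytics would go one step further and forecast whether the trigger is likely to fire, instead of merely reacting to it.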
Data logistics is an essential prerequisite for AI to be employed in underwriting and algorithm-based insurance schemes. Complex in-house data systems must be converted into interoperable services, catering to data users in a flexible and adaptive way. A changing value chain will position insurance companies at the center between customers and external data and service providers, with both the need and the opportunity for insurers to add valuable, differentiating services for customers. With emerging frameworks such as the Algo.Rules, insurers now further have the unique opportunity to position themselves within a range of options, from a purely compliance-driven operating model to a reflective, inclusive model that actively shapes answers to the societal challenges ahead raised by AI.

[2] https://gtnr.it/2oT1Mpn
[3] https://bit.ly/2I1ScuT
[4] The complete White Paper can be accessed at the Swiss Alliance for Data Intensive Services.
[5] In alphabetic order: AXA, Generali, Mobiliar, Santam, SwissRe, Zurich Insurance Group
[6] https://algorules.org/startseite/
[8] Such a dystopia (without involving insurance companies) is also portrayed in Netflix’s Black Mirror episode Nosedive.
Featured article photo credit: Drew Graham on Unsplash
