Numerous countries worldwide are currently facing major problems related to waste collection, particularly in urban areas, due to the large amount of waste generated daily by the population. Technology could play a significant role in tackling these issues, for instance, through the development of more effective tools to gather and collect garbage.
With this in mind, researchers at Vishwakarma Government Engineering College in India have recently created a cheap and effective system for automatic garbage detection and collection. Their system, presented in a paper pre-published on arXiv, uses artificial intelligence (AI) algorithms to detect and locate waste in its surroundings, then picks it up with a robotic gripper.
"Contemporaneous methods find it difficult to manage the volume of solid waste generated by the growing urban population," the researchers wrote in their paper. "We propose a system that is very hygienic and cheap that uses AI algorithms for detecting the garbage."
The waste management system, which the researchers refer to as AGDC (automatic garbage detection and collection), is composed of a robotic body (i.e. a base, a robotic arm and a drawer) and several machine learning algorithms. The system uses convolutional neural networks (CNNs) to detect rubbish on the ground and in its vicinity. Once it detects a piece of rubbish, it calculates its position by analyzing images collected by an integrated camera.
"Object detection refers to identifying instances of objects of a particular class (such as bottles, cat, dog or truck) in images and videos in digital format," the researchers explained. "AGDC uses object detection for classifying the garbage with the rest of the objects in the image/video. The object detection algorithm enables AGDC to identify places in the image or video where the object of interest (i.e. garbage) is resting."
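To make the idea concrete, here is a minimal, hypothetical sketch (not from the paper) of how a robot like AGDC might filter a detector's output down to garbage classes. The tuple format, class list and confidence threshold are all assumptions for illustration:

```python
# Hypothetical detector output: (class label, confidence, box as x, y, w, h).
detections = [
    ("bottle",  0.91, (120, 340, 40, 90)),
    ("dog",     0.88, (300, 310, 80, 60)),
    ("wrapper", 0.73, (500, 400, 30, 20)),
]

GARBAGE_CLASSES = {"bottle", "wrapper", "can"}  # assumed class list
CONF_THRESHOLD = 0.5

def garbage_targets(dets):
    """Keep only confident detections whose class counts as garbage."""
    return [(label, conf, box) for label, conf, box in dets
            if label in GARBAGE_CLASSES and conf >= CONF_THRESHOLD]

targets = garbage_targets(detections)  # the dog is ignored
```

In this sketch, objects of other classes (the dog) are simply discarded, leaving only detections the robot should drive toward.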
Serial communication flow. Credit: Bansal et al.
Once the system's CNNs detect a piece of rubbish in its vicinity, another algorithm estimates the distance between the robot and the rubbish, while also generating instructions for the robot to reach the target location. The position of the debris and these instructions are then fed to a microcontroller, which essentially controls the robot's movements.
"After completing the task of object detection, the next task is to identify the distance of the object from the base of the robotic arm, which is necessary for allowing the robotic arm to pick up the garbage," the researchers explained.
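As an illustration of one common way to estimate such a distance from a single camera image (a pinhole-camera approximation; the paper's exact method is not specified here, and the focal length and object size below are made up):

```python
def estimate_distance_m(real_height_m, focal_length_px, pixel_height):
    """Pinhole-camera approximation:
    distance = (real object height * focal length) / height in the image."""
    return real_height_m * focal_length_px / pixel_height

# Made-up numbers: a 0.2 m bottle, 700 px focal length, 90 px tall in the frame
distance = estimate_distance_m(0.2, 700, 90)  # roughly 1.56 m
```

The approximation assumes the object's real-world size is known for its class; in practice this would need calibration per garbage category.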
Once the microcontroller receives information about where a piece of refuse is located, it moves the robot toward that location. When the robot finally reaches the garbage detected by the CNNs, it uses a robotic arm to collect it and drops it into a container (i.e. drawer) that is attached to its body.
"The design of the garbage collector can be split into three major parts: base, robotic arm and drawer," the researchers wrote. "The base drives the robot toward the garbage, the robotic arm collects the garbage and the drawer stores the garbage collected by the robotic arm."
The researchers have already developed a prototype of their waste detection system, which can currently collect 100-200 g of garbage at a time. In their future work, they plan to expand on this prototype so that it can collect two to three kilograms of rubbish before emptying its drawer.
In addition, the team is thinking of developing and training a new CNN model that would allow AGDC to detect multiple pieces of rubbish simultaneously. Eventually, connecting the robot to the internet could also enable wider-scale implementations, for instance, creating an automated network of systems that collaborate to efficiently collect waste in specific areas.
As the world becomes digital, artificial intelligence is no longer something in the distant future. It’s a technology present in our daily lives in more ways than we can imagine.
But what does artificial intelligence mean?
According to Encyclopaedia Britannica, artificial intelligence (AI) is the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans.
The idea of a robot or machine emulating the cognitive functions of the human mind has opened a discussion and raised the question: could AI-driven machines be smarter than humans someday?
A perspective from science fiction
For those who enjoy science fiction movies, you may well remember Steven Spielberg's science-fiction drama A.I. Artificial Intelligence.
Released back in 2001, the movie portrays a vision of the 22nd century deeply affected by global warming, a future in which humans and machines co-exist, but not without conflict.
The movie centers on David, a prototype child android capable of showing emotions, who develops a will of his own without being programmed to.
Moreover, the film presents an ethical dilemma that transcends imagination.
Elon Musk on AI
Currently, there are no policies to regulate AI technology, but should we be concerned?
According to Tesla co-founder and CEO Elon Musk, we should be.
The technological entrepreneur is also co-founder of OpenAI, a non-profit artificial intelligence research company.
In the words of the company: “OpenAI's mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.”
Last year, Musk responded to questions submitted by the audience on a series of different subjects at the South by Southwest® (SXSW®) Conference & Festivals, and once again spoke openly about his concerns regarding the use of this technology.
“I'm very close to the cutting edge in AI, and it scares the hell out of me. It's capable of vastly more than almost anyone knows, and the rate of improvement is exponential.”
Musk on self-driving: “I think probably by the end of next year (2019), self-driving will encompass essentially all modes of driving and be at least 100 to 200 percent safer than a person.
The rate of improvement is dramatic. We have to figure out some way to ensure that the advent of digital super-intelligence is one which is symbiotic with humanity. I think that is the single biggest existential crisis that we face, and the most pressing one.”
On regulatory oversight: “I'm not normally an advocate of regulation and oversight. I think one should generally err on the side of minimizing those things. But this is a case where you have a very serious danger to the public, and therefore there needs to be a public body that has insight and then oversight to confirm that everyone is developing AI safely. This is extremely important. Mark my words: AI is far more dangerous than nukes.”
The Internet of Things
Technology evolves quickly; automation is present in the supermarket, in the bank, in health care, at home and at work.
Some industries are replacing people with machines, while in others certain jobs are becoming obsolete.
The jobs of the future are mostly technology related, which will demand that educational institutions move even faster to train people who can fill those vacancies.
AI in the automotive industry
Usually, the first thought that comes to mind when talking about AI in the automotive industry is autonomous vehicles, but the possibilities are endless.
Defining the autonomous driving process
There are five levels of autonomous driving: driver assistance, partly automated, highly automated, fully automated, and autonomous driving. In the latter, the driver becomes a passenger of the car.
Autonomous Waymo Chrysler Pacifica Hybrid minivan undergoing testing in Los Altos, California. Photo by: Dllu.
AI use case examples in the automotive industry
Driver assistance system
BMW: The current BMW Personal CoPilot driver assistance systems support drivers on the road and help ensure additional safety and comfort. Examples of this include the Active Cruise Control with Stop&Go function, which independently adjusts the distance to the car in front of you. And then there is the Collision and Pedestrian Warning with City Brake Activation, which prevents collisions via automatic braking.
The German automaker received the Euro NCAP Advanced Award (recognizing advanced safety technologies) in the area of accident prevention and passenger protection.
Automated Parking
The Bosch Group, a leading global supplier of technology and services, is one of the companies that has developed and further explored this function.
According to Bosch, its automated parking functions can be activated and controlled at the single touch of a button, navigating your car to your parking space and identifying obstacles, braking or dodging them, fully automatically.
Automated valet parking
Last July, it was announced that Bosch and Daimler had obtained approval for driverless parking without human supervision, making it the world's first fully automated driverless (SAE Level 4) parking function to be approved by the relevant authorities.
As part of this pilot project, in which both companies have stressed that safety is their priority, the system will be in daily use in the Mercedes-Benz Museum parking garage in Stuttgart, Germany, with Bosch supplying the infrastructure and Daimler the vehicle technology.
“Driverless driving and parking are important building blocks for tomorrow’s mobility. The automated parking system shows just how far we have already progressed along this development path,”
said Dr. Markus Heyn, member of the board of management of Robert Bosch GmbH.
These automated functions share two key factors: safety and comfort.
Startups in the Automotive Industry
There are over 500 startups developing technology for the automotive sector, a number that only tends to grow as the field progresses.
Here are two examples of these data-driven innovative solutions.
Carfit is a global technology company whose self-diagnostic and predictive maintenance platform applies advanced mathematical principles to track the actual performance of a vehicle based on its vibrations and movement.
Among its recognitions, the company, headquartered in Palo Alto, California with offices in France, received the Digital Industry Award in the Smart Product category (Lauréat du Digital Industry Award dans la catégorie Smart Product) last year in Paris, France.
German Autolabs developed Chris, the first digital assistant made for drivers, which links to the phone via Bluetooth so apps can be operated safely by voice or gesture while driving.
German Autolabs has been distinguished with the iF Design Award 2019, one of the world’s most valued design competitions.
Shaping the future
Last month, the World Economic Forum named 56 early- to growth-stage companies as its Technology Pioneers of 2019 at the Forum's Annual Meeting of the New Champions, which took place in the city of Dalian, China.
“Our new tech pioneers are at the cutting edge of many industries, using their innovations to address serious issues around the world,” says Fulvia Montresor, Head of Technology Pioneers at the Forum. “This year’s pioneers know that technology is about more than innovation – it is also about the application. This is why we believe they’ll shape the future.”
Among them is Perceptive Automata, which develops human-intuition technology for machines.
Founded in 2014 and headquartered in Somerville, USA, the company combines behavioral science, neuroscience, and computer vision to help autonomous vehicles understand how pedestrians, cyclists, and drivers communicate on the road.
One of the achievements highlighted by World Economic Forum (WEF) President Borge Brende was the announcement that Bahrain will run a pilot scheme on the ethical procurement of artificial intelligence in the public sector.
“AI can deliver huge benefits to citizens, but it needs a robust framework for successful implementation, and this project with WEF will build a global knowledge-base that can be used by other governments to sustainably and responsibly introduce AI across their public sector institutions,” said Khalid Al Rumaihi, Chief Executive of the Bahrain Economic Development Board (EDB).
A team of researchers at Yonsei University and École Polytechnique Fédérale de Lausanne (EPFL) has recently developed a new technique that can recognize emotions by analyzing people's faces in images along with contextual features. They presented and outlined their deep learning-based architecture, called CAER-Net, in a paper pre-published on arXiv.
For several years, researchers worldwide have been trying to develop tools for automatically detecting human emotions by analyzing images, videos or audio clips. These tools could have numerous applications, for instance, improving robot-human interactions or helping doctors to identify signs of mental or neural disorders (e.g., based on atypical speech patterns, facial features, etc.).
So far, the majority of techniques for recognizing emotions in images have been based on the analysis of people's facial expressions, essentially assuming that these expressions best convey humans' emotional responses. As a result, most datasets for training and evaluating emotion recognition tools (e.g., the AFEW and FER2013 datasets) only contain cropped images of human faces.
A key limitation of conventional emotion recognition tools is that they fail to achieve satisfactory performance when emotional signals in people's faces are ambiguous or indistinguishable. In contrast with these approaches, human beings are able to recognize others' emotions based not only on their facial expressions, but also on contextual clues (e.g., the actions they are performing, their interactions with others, where they are, etc.).
Past studies suggest that analyzing both facial expressions and context-related features can significantly boost the performance of emotion recognition tools. Inspired by these findings, the researchers at Yonsei and EPFL set out to develop a deep learning-based architecture that can recognize people's emotions in images based on both their facial expressions and contextual information.
Examples of attention weights in the neural networks developed by the researchers. Credit: Lee et al.
"We present deep networks for context-aware emotion recognition, called CAER-Net, that exploit not only human facial expression, but also context information, in a joint and boosting manner," the researchers wrote in their paper. "The key idea is to hide human faces in a visual scene and seek other contexts based on an attention mechanism."
CAER-Net, the architecture developed by the researchers, is composed of two key sub-networks and encoders that separately extract facial features and contextual regions from an image. These two types of features are then combined using adaptive fusion networks and analyzed together to predict the emotions of people in a given image.
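As a rough illustration of the fusion idea (not the authors' actual network), the sketch below blends a face feature vector and a context feature vector using softmax attention weights; the vectors and scores are plain numbers standing in for sub-network outputs:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array
    e = np.exp(x - np.max(x))
    return e / e.sum()

def adaptive_fusion(face_feat, context_feat, face_score, context_score):
    """Blend the two feature streams with softmax attention weights.
    In the real network the scores and features come from sub-networks;
    here plain numbers stand in for them."""
    w_face, w_ctx = softmax(np.array([face_score, context_score]))
    return w_face * face_feat + w_ctx * context_feat

face = np.array([1.0, 0.0])      # stand-in facial feature vector
context = np.array([0.0, 1.0])   # stand-in contextual feature vector
fused = adaptive_fusion(face, context, face_score=2.0, context_score=1.0)
```

Because the weights come from a softmax, they always sum to one, so the fused vector stays on the same scale as its inputs regardless of how confident either stream is.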
In addition to CAER-Net, the researchers also introduced a new dataset for context-aware emotion recognition, which they refer to as CAER. Images in this dataset portray both people's faces and their surroundings/context, hence it could serve as a more effective benchmark for training and evaluating emotion recognition techniques.
The researchers evaluated their emotion recognition technique in a series of experiments, using both the dataset they compiled and the AFEW dataset. Their findings suggest that analyzing both facial expressions and contextual information can considerably boost the performance of emotion recognition tools, as indicated by previous studies.
"We hope that the results of this study will facilitate further advances in context-aware emotion recognition and its related tasks," the researchers wrote.
How can we use AI to help society? originally appeared on Quora: the place to gain and share knowledge, empowering people to learn from others and better understand the world.
Talking about the far-reaching effects of AI when there is economic inequality, insufficient mental health care, and hatred/war seems like party conversation in tech bubbletown (which Silicon Valley is sometimes blamed for). However, in the Valley's defense, if we do not have far-reaching, forward-thinking ideas, we will never make progress for humanity. Here are a few areas where AI can have an impact.
Disease Diagnosis & Medication: Data privacy and regulatory barriers will cause a delay in disrupting this segment.
If the patient is able to access their own data, they should be able to use AI for diagnosis of their X-rays or MRI scans as a second opinion.
A soldier in a war zone can get an AR/VR experience with instructions to help treat themselves and remove a bullet.
DNA-based personalized medicine to extend human life.
Robots to remind you to take your medication (e.g. Pillo).
Mental Health Support: Match trained counselors to people who are considering self-harm or suicide, e.g. Crisis Text Line, where algorithms can highlight the people most at risk based on the content of their texts. There are also emotional health apps like Moodpath, Talklife, Youper, Calm Harm, etc. to help you step back from the hot state.
Mobility: We all know someone in our lives who has been hurt or killed in a human-error accident. While autonomous vehicles are not accident-free today, they have the potential to increase safety on the streets and allow homebound seniors or disabled citizens to ride comfortably. On average, self-driving cars are experiencing one fatality for approximately 2 million miles, which is safer than the traditional driving figure of one fatality for approximately 500,000 miles.
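Taking the quoted figures at face value, the implied safety factor is a simple ratio:

```python
# Figures quoted in the text (illustrative, not authoritative statistics)
miles_per_fatality_self_driving = 2_000_000
miles_per_fatality_traditional = 500_000

# How many times more miles per fatality the self-driving figure implies
safety_factor = miles_per_fatality_self_driving / miles_per_fatality_traditional
print(safety_factor)  # → 4.0
```

By these numbers, self-driving would be roughly four times safer per mile than traditional driving, though the underlying statistics are contested and depend heavily on how miles and fatalities are counted.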
Education: A society moves forward when literacy grows and sheds new perspectives (a Netflix recommendation: Daughters of Destiny, which shows how education is an equalizer).
Personalized education track to get students to their destination with focus.
Virtual chatbot mentors who can guide you through completing problems.
Refugee crisis: Computer vision solutions using drones to determine the facts on the ground.
I am sure there are many more areas where AI can help, such as microfinance, and I encourage more entrepreneurs to think about AI applications for social good.
As for the second part of the question, on learning about AI, I would recommend Coursera courses and lectures from schools such as MIT or Stanford.
Artificial intelligence is disrupting the publishing landscape. This has been called “the age of the cutting-edge writer,” because those who embrace these advances are likely to succeed financially and professionally. While cynics may disagree with using AI in publishing and writing, it is valuable to journalists and the publishing business in numerous ways.
In principle, Big Data and data-driven analysis should be able to create the “immaculate” piece of writing, one that is both superior in artistic quality and a best-seller. Authors needn't worry just yet, however: researchers predict that this will require another two decades of development, as current algorithms lack “intuition.”
Right now, we can use AI to better understand reading preferences, connect multi-genre books with readers, predict best-selling books, create data-driven works, and edit manuscripts with tools like ProWritingAid.
A recent survey conducted by the Future of Life Institute's AI Impacts project predicts that artificial intelligence will be capable of writing a bestseller by 2050. Yet there is no need to wait that long to read literature composed by software.
Google has been working with Stanford University and the University of Massachusetts to improve machines' natural language. To do so, researchers fed an artificial intelligence more than 11,000 books. The first step was for the software to understand the variations in human language. Once this goal was achieved, they gave it two sentences, a beginning and an ending, from which the machine composed several poems, as they explain in their report.
In the past, it was challenging to predict which books would sell well. Today, publishers can inform their decisions by analysing historical datasets.
To illustrate, many Kindle independent publishers use Kindle Spy, an application that curates data from top-rated digital books. It is research software for finding worthwhile topics based on keyword occurrences in bestsellers, estimated royalties earned, and niche potential.
Once you've discovered the title of your next best-selling novel, you can check and compare its style against thousands of works of fiction with ProWritingAid. It serves as your critique partner, so you can strengthen your writing's pacing and momentum, dialogue, word choice and other complex issues before sending it off to a publisher or self-publishing it. The plagiarism checker is extremely valuable for ensuring that you cite properly, reducing the risk of any future copyright issues.
WASP is artificial intelligence software created by Pablo Gervás, who holds a Ph.D. in Computer Science from the Complutense University of Madrid. The researcher has spent 17 years perfecting his robot writer. WASP has learned to compose poetry inspired by works from Spain's Golden Age. Its creator says the purpose of his research is to understand the structure of poetry and study the creative process, in order to make writers' work easier; the aim is not to replace poets, as the machine's compositions lack feeling.
Combining AI that analyses reader preferences with the best-seller analytics of the Kindle Spy application and a “book discovery” application like Booxby, self-published writers and publishers can profit from the gathered data to create books that are likely to be in high demand. In other words, today's AI technologies provide a solid foundation for justifying the publication of specific titles. Callisto Media states that algorithms and Big Data are changing publishing with a new era of predictability, profit, and explosive growth.
Many AI experts reading these books would no doubt say that the writers are letting their imaginations run way ahead of themselves. The success of Google DeepMind in building neural networks that defeated one of the strongest Go players in history may be a striking accomplishment, but it does not herald the imminent arrival of a supreme new life form. Machine learning systems may become remarkably good in many narrow areas, yet they remain amusingly weak in terms of general intelligence. After all, modern robots, rather like the machine-hybrid Daleks of the TV series Doctor Who, still find it difficult to climb stairs.
In Japan, computers are already taking part in literary competitions. The Nikkei Hoshi Shinichi Literary Award allows non-human authors to submit their work without the judges knowing the nature of the contestants. Of the 1,450 applications received in the last edition, 11 were partly written by a machine. One of them, “The Day a Computer Writes a Novel,” made it past the first round of the contest.
The judges of the contest say that although novels of this sort are remarkably well structured, they are still genuinely lacking when it comes to depicting the psychology of the characters.
No matter how much we laugh at automated failings today, we should still marvel at how fast AI has developed over the past decade and wonder how far it might yet evolve. We flatter ourselves that electronic intelligence will always take shape in humanoid robots such as McEwan's Adam. However, it is far more likely to develop in incorporeal forms that we can barely comprehend.
Public’s ‘artificial intelligence hesitancy’ could hinder healthcare innovation and boost health inequalities, University of Westminster-led study finds
The study, involving the University of Westminster, University College London and the University of Southampton, is the first to look at public attitudes towards AI in healthcare, and it comes at a crucial time following the £250 million funding announcement for AI in the NHS.
This new research developed the concept of ‘AI hesitancy’, which shows that a large proportion of the public is reluctant to use AI-led services for their healthcare, particularly for more serious illnesses. However, the newly announced NHS funding does not consider public acceptance of this technology. The researchers therefore warn that an increased focus on AI in the NHS could widen health inequalities and may be detrimental to public health in the UK.
The study entitled “Acceptability of Artificial Intelligence (AI)-led chatbot services in healthcare: A mixed-method study” aimed to explore the public’s willingness to engage with AI-led health chatbots.
The first-of-its-kind research, published in the peer-reviewed journal Digital Health, used 29 semi-structured interviews to aid the development of an online survey of 216 participants which was advertised on social media. The survey explored a range of demographic and attitudinal variables, including questions about acceptability and perceived effectiveness of AI in healthcare.
The results identified three broad themes: ‘Understanding of chatbots’, ‘AI hesitancy’ and ‘Motivations for health chatbots’, capturing public concern about accuracy, cyber-security and the inability of AI-led chatbots to sympathise.
Speaking about the findings in light of the £250 million NHS investment, the lead researcher from the University of Westminster, Dr Tom Nadarzynski, said: “Our research shows that at present a large proportion of the public is hesitant to use AI-led tools and services for their health, particularly for severe or stigmatised conditions. This is related to a lack of understanding of this technology, concerns about privacy and confidentiality, as well as the perceived absence of empathy that is vital for patient-centred healthcare in the 21st century.
“We welcome the government’s initiative to set up ‘an Artificial Intelligence Lab’ within the NHS framework in England. Although we recognise the opportunities this technology may provide in terms of managing demand, supporting the development of new diagnostic tools and greater cost-effectiveness of services, we emphasise the importance of involving the public in the design and development of AI in healthcare. This way, the problem of ‘AI hesitancy’ hindering the improvement of healthcare provision could be addressed and the technology could make a real difference to the patients.”
Data Logistics and Artificial Intelligence in Insurance and Risk Management – Calls for Action or an Industry in Transition
Data is quickly becoming the most valuable asset in the insurance sector, given its tremendous volume in our digital era. Simultaneously, Artificial Intelligence (AI), harnessing big data and complex structures with Machine Learning (ML) and other methods, is becoming more powerful. Insurers expect more efficient processes, new product categories, more personalized pricing, and increasingly real-time service delivery and risk management from this development.
Given the many leverage points in insurance, it is surprising that AI-driven digitization is not evolving more rapidly. When, according to a recent Gartner[2] study, 85% of data science projects fail, how can insurance companies make sure that their AI projects are among the successful ones[3]? This article and the corresponding White Paper[4] comprise a study of mainly the Swiss and German insurance industry[5], tackling a key business problem.
Legacy infrastructure, missing interoperability, and a lack of comprehensive knowledge about AI and its use cases hinder the adoption of advanced, AI-based, ethical insurance and risk schemes.
Take the currently long, resource-intensive, error-prone process of underwriting as a case of applying AI in risk management. Underwriting will massively benefit from AI-based automation, partially because technologies such as Natural Language Processing (NLP) are able to process the increasing volume of text-based data. Besides, AI enables underwriters to assess the increasingly complex risks of our time, such as cyber security or climate change risks, often more precisely, but certainly much faster than humans. Still, sophisticated AI-powered risk models have not yet been widely observed in the insurance market. Following the results of our study, AI adoption is low and slow, due to data-centered, methodological and cultural issues.
1. In terms of data, several problems stand out. While many AI applications rely on masses of data, large, clean data sets are hardly available at insurers. IT and data systems at insurers are heterogeneous and hardly interoperable. There is evidence that AI will benefit tremendously from the growing data source of the industrial Internet of Things (IoT), but insurers currently lack ways of identifying information in the breadth and depth of sensor data. IoT enhances AI as an alternative data source with characteristics that complement existing data, e.g. allowing insurers to monitor the state of insured assets or processes in real time. If IoT data is integrated well with contractual and claims data, AI can draw from a much richer context around insured subjects than today (Figure 1). IoT data fosters risk management of existing insurance contracts (e.g. in parametric insurance), risk estimation for underwriting new contracts, and claims management (e.g. the last seconds in a car's drive recorder before an accident).
Figure 1: The relation between Artificial Intelligence (AI) and the Internet of Things (IoT)
2. Regarding methods, the devil is in the details. Whereas popular and hyped methods such as Deep Learning lack real impact in most insurance settings, Bayesian modeling (to tackle small-data problems) and causal modeling (to select interventions that test putative causal relationships, and to improve decisions by leveraging knowledge of causal structure) have the power to step in. To process the time or event series of IoT sensors, methods such as recurrent networks are suited to forecasting IoT-backed risk indicators. State-space methods apply if the time series are aggregated into a state of a monitored asset or process.
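As a toy illustration of forecasting an IoT-backed risk indicator, the sketch below uses a simple exponential smoothing baseline rather than the recurrent or state-space models mentioned above; the sensor readings are invented:

```python
def smoothed_level(series, alpha=0.3):
    """One-step-ahead forecast via exponential smoothing:
    each new reading nudges the running level by a factor alpha."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

# Invented vibration readings from an IoT sensor on an insured machine
vibration = [0.9, 1.1, 1.0, 1.4, 1.6]
forecast = smoothed_level(vibration)  # next-period risk indicator
```

A rising smoothed level on such a sensor could feed a parametric policy or trigger a maintenance recommendation; a production system would of course use a properly validated time-series model.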
3. Culture-wise, many companies talk about being “big-data-driven”, yet the number of firms actually acting on their culture and organization is much smaller. Digital cultures in organizations are often too hierarchical, lack cross-functionality and are organized in a top-down way. Many data science projects fail because they start by searching for a problem to be solved with a stipulated AI method, rather than with the business problem that deserves the most attention.
Insurers ought to resolve data-related, methodological and cultural issues to boost their entire digital transformation journey, not just AI integration in particular. We propose the following:
Resolving data issues goes hand in hand with a shift of mindset away from complex integrated solutions towards an agile orchestration of micro-services. Insurers can undertake small initial steps to create initial return on investment and subsequently fragment monolithic data systems. This will result in modular, interoperable data systems that make data sources accessible for any application, even if the concrete use case and application is still unknown (Figure 2). Instead of designing complex data processes, companies ought to focus on defining a few simple access rules for the company-internal data space and a secure but accessible interface to external data providers and aggregators.
Figure 2: Modular, semantized data lake architecture versus traditional enterprise data warehouse
Resolving methodological issues first necessitates developing a clear understanding of the desired outcomes of specific applications. This must be driven by a careful assessment of which technique is appropriate for a given process, and of which data is truly required to deliver a working solution. Not least, it must be checked whether the data points are continuously available, and of sufficient quality, at the insurer or at the external sources it can access. The expected volume of data determines whether parameter-intensive methods such as Deep Learning are applicable or whether algorithms with a lower need for data are preferable. A further ingredient in deciding which algorithm class to use in development, or which solution to buy on the market, is the necessary level of interpretability of the model. Compliance requirements such as the GDPR, as well as ethical values, must also be accounted for.
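The assessment above can be caricatured as a tiny decision rule; the thresholds and method-class labels below are purely illustrative assumptions, not a recommendation:

```python
# Hypothetical sketch of the method-selection reasoning: pick an
# algorithm class from data volume and the required interpretability.
# Thresholds and class names are invented for illustration.

def choose_method(n_samples, needs_interpretability):
    if needs_interpretability:
        # GDPR-style explanation duties favor transparent models
        return "GLM / decision tree"
    if n_samples < 10_000:
        # too little data for parameter-intensive methods
        return "Bayesian model (small data)"
    return "deep learning (parameter-intensive)"

print(choose_method(500, needs_interpretability=False))
```

In practice this is a multi-criteria judgment rather than a three-branch rule, but the ordering matters: interpretability and data availability constrain the choice before predictive performance does.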
Resolving cultural issues follows from sorting out data issues. Helpful initiatives aim at bottom-up enablement, democratizing data access for employees rather than communicating digital values top-down. Insurance companies should empower their employees to interact with data, and may see growing engagement with digital transformation when exposure to data translates into successful projects. Cultural change is key to evolving business models from product sales towards providing solution ecosystems to customers (e.g. “underwriting a commercial asset insurance” vs. “offering clients a risk management solution that helps prevent losses”).
Ethical development of, and responsibility for, AI by insurers is crucial for long-term business success, and thought leadership here progresses steadily. Insurance companies may consider taking up elements from frameworks such as the recently published Algo.Rules[1] to find answers to ethical questions on AI and to keep their social license to operate. We envision a future where risk management focuses on assessing the risks of machine errors, as machines take over decision-making along insurers’ value chains. Would machines also be of crucial importance on the meta level, i.e. not only for risk evaluation in underwriting, but also for assessing the AI employed in underwriting processes? Do we wish to preserve a human element[2] in the game? Would we want insurance companies to be powerful enough to dictate behavioral norms,[3] e.g. nudging customers to positively influence their scoring to prevent a constantly monitoring insurer from canceling a policy when sensors record “reckless” behavior?
The insurance companies we studied are positioned quite heterogeneously with regard to AI maturity, with the entire industry actively exploring opportunities in their respective focus areas. Productive examples are parametric insurances, e.g. in logistics or agriculture, where payout conditions are triggered by sensor signals. Forward-looking analytics in such contracts, or for underwriting, is hardly used today.
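A parametric contract of the kind mentioned reduces to a trigger function: a fixed payout as soon as an agreed sensor signal crosses an agreed threshold, with no loss-adjustment step. The threshold and amount below are invented for illustration:

```python
# Minimal sketch of a parametric insurance payout: the contract pays a
# fixed amount iff a sensor signal crosses an agreed threshold.
# Threshold and payout figures are hypothetical.

def parametric_payout(sensor_value, threshold, payout):
    """Pay the agreed amount iff the trigger condition is met."""
    return payout if sensor_value >= threshold else 0.0

# e.g. agriculture: drought cover triggered by days without rain
print(parametric_payout(32, threshold=30, payout=10_000.0))
```

Because the payout depends only on the observable signal, such contracts settle automatically; the actuarial work moves into choosing the trigger and pricing the probability that it fires.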
Data logistics is an essential prerequisite for AI to be employed in underwriting and algorithm-based insurance schemes. Complex in-house data systems must be converted into interoperable services that cater to data users in a flexible, adaptive way. A changing value chain will position insurance companies at the center between customers and external data and service providers, with both the need and the opportunity for insurers to add valuable differentiating services for customers. With emerging frameworks such as the Algo.Rules, insurers now have the unique opportunity to position themselves within a range of options, from a purely compliance-driven operating model to a reflective, inclusive model that actively shapes answers to the societal challenges ahead raised by AI.