IT House reported on December 19 that Suni Lehmann Jorgensen and his team at the Technical University of Denmark have developed a powerful artificial intelligence model that can predict a person's risk of death from personal data, with accuracy far beyond any existing model, including those used in the insurance industry. According to the researchers, the model can provide early warning of health and social problems, but they also caution that it must not be abused by large enterprises.
Image source: Pexels
The team took a rich dataset covering the education, health, diagnoses, income, and occupations of 6 million people in Denmark (2008-2020) and converted it into text suitable for training a large language model. The model works much like ChatGPT, which analyzes large amounts of text data and predicts the most likely next word in order to infer what comes next. In the same vein, the researchers' "Life2vec" model analyzes the sequence of events in an individual's life course and predicts what is most likely to happen next.
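To make the analogy concrete, here is a minimal, hedged sketch (not the authors' actual Life2vec code) of the idea described above: each life event is encoded as a discrete token, and a small sequence model is trained to predict the next event, just as a language model predicts the next word. The event names and tiny synthetic "lives" below are invented for illustration.

```python
import torch
import torch.nn as nn

# Hypothetical event vocabulary: each life event becomes a token ID.
vocab = ["<pad>", "birth", "school", "diagnosis:flu", "job:teacher",
         "income:mid", "move:city", "diagnosis:heart", "retire"]
tok = {e: i for i, e in enumerate(vocab)}

# Tiny synthetic life sequences (ordered life events per person).
lives = [
    ["birth", "school", "job:teacher", "income:mid", "move:city", "retire"],
    ["birth", "school", "diagnosis:flu", "job:teacher", "diagnosis:heart"],
]

def encode(seq, max_len=8):
    ids = [tok[e] for e in seq][:max_len]
    return ids + [tok["<pad>"]] * (max_len - len(ids))

x = torch.tensor([encode(s) for s in lives])   # (batch, seq_len)
inputs, targets = x[:, :-1], x[:, 1:]          # train to predict the next event

class NextEventModel(nn.Module):
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, ids):
        h, _ = self.rnn(self.emb(ids))
        return self.out(h)                     # logits over the next event

model = NextEventModel(len(vocab))
loss_fn = nn.CrossEntropyLoss(ignore_index=tok["<pad>"])
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):                        # toy training loop
    logits = model(inputs)
    loss = loss_fn(logits.reshape(-1, len(vocab)), targets.reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()
```

The real model reportedly uses a transformer trained on registry data for millions of people; this toy GRU only illustrates the "next life event as next token" framing.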
In the experiment, the Life2vec model was trained only on data from 2008-2016, with data from 2016-2020 held out for testing. The researchers took people aged 35-65 and split them into two equal groups: half had died between 2016 and 2020, and the other half had survived. Life2vec predicted who would die 11% more accurately than existing AI models and the mortality tables commonly used in the insurance industry.
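The evaluation design described above can be sketched as a balanced binary classification task, as in the following hedged example. The data here are synthetic stand-ins, not the Danish registry, and the classifier is a placeholder for Life2vec; the point is only the train/test split and the accuracy comparison on a 50/50 died-vs-survived sample.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_train, n_test, n_feat = 1000, 400, 16

# Synthetic stand-ins for features derived from 2008-2016 life sequences,
# with a simple latent signal so the demo has something to learn.
X_train = rng.normal(size=(n_train, n_feat))
y_train = (X_train[:, 0] + 0.5 * rng.normal(size=n_train) > 0).astype(int)

# Balanced test set: half "died" in 2016-2020, half survived, as in the study design.
X_test = rng.normal(size=(n_test, n_feat))
y_test = (X_test[:, 0] + 0.5 * rng.normal(size=n_test) > 0).astype(int)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy on held-out sample:", accuracy_score(y_test, clf.predict(X_test)))
# On a roughly 50/50 split, accuracy above 0.5 indicates genuine predictive signal;
# the reported result is that Life2vec beats baseline models by about 11%.
```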
IT House notes that, for a subset of the population, the model can also predict personality test results more accurately than an AI model built specifically for personality testing. Jorgensen believes that Life2vec has absorbed enough data to be applied to a wide range of health and social issues, such as early intervention in health problems or helping to close the gap between rich and poor. However, he also stressed that the model could be harmful if misused by businesses.
"The Life2vec model should not be used by insurance companies," Jorgensen said. "The essence of insurance is to share risk; predicting who will suffer an unfortunate event or die runs contrary to the idea of mutual insurance." But he said similar technologies already exist and could be used by tech giants with vast amounts of user data to predict and influence user behavior.
Matthew Edwards of the Institute of Actuaries in the UK says insurers are indeed interested in the new approach, but their current decision-making relies heavily on a much simpler technique known as the generalized linear model.
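For context, a generalized linear model of the kind Edwards refers to is a standard actuarial tool. The hedged sketch below fits a logistic GLM relating age and a simple risk indicator to the probability of death within a policy period; the data and coefficients are synthetic, chosen only to make the example run.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
age = rng.uniform(35, 65, n)
smoker = rng.integers(0, 2, n).astype(float)

# Synthetic ground truth: log-odds of death rise with age and smoking status.
logit = -8.0 + 0.08 * age + 0.9 * smoker
died = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

# Logistic GLM: death indicator ~ age + smoker.
X = sm.add_constant(np.column_stack([age, smoker]))
glm = sm.GLM(died, X, family=sm.families.Binomial()).fit()
print(glm.summary())

# Predicted death probability for a hypothetical 60-year-old smoker.
print(glm.predict(np.array([[1.0, 60.0, 1.0]])))
```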
"Insurers have been analyzing data to predict life expectancy for hundreds of years," Edwards said, "but with policies that can run for 20-30 years, we are very cautious about adopting new methods, because any major mistake can cause huge losses. Everything is changing, but the insurance industry will move more slowly, because no one wants to get it wrong."
The emergence of the Life2vec model highlights the great potential of AI for predicting the future, but it also raises important ethical questions. Ensuring that the technology is used to improve people's lives, rather than to exacerbate social injustice, is a challenge that urgently needs to be addressed.