
How deepfakes threaten the automation of insurance claims

The problem of altered digital media is not entirely new to insurance companies. While photo editors began to proliferate many years ago, deepfakes have complicated the problem exponentially by making it more difficult to detect digital media fraud. (Illustration by Boris Semeniako)

From exaggerated claims to the fabrication of fake goods, AI-generated images and videos require protective action by insurers to defend against this burgeoning form of fraud. If you read Claims Magazine’s September/October 2021 article, “Deepfakes: A threat to the insurance industry,” you know the threat that deepfakes and synthetic media, in the form of photos and videos, pose to the industry.

The relentless onslaught of fake media

The past few months have confirmed that deepfakes are more than a fad. From fake images and videos used as war propaganda in Ukraine to synthetic recreations of film actors, cryptocurrency scams, and document and identification fraud, instances of deepfakes establishing false narratives and circumstances have steadily increased. Once a social media novelty, deepfake fraud has emerged as a formidable threat across many industries. Its potential impact on the insurance industry, which already suffers more than $80 billion in annual fraud in the United States, is immense.

In response to this threat, new and developing industry standards, such as C2PA and the Content Authenticity Initiative, have produced specific proposals for protecting the authenticity of photos. Other solutions, such as blockchain-based tamper evidence and artificial intelligence analytics, have continued to mature to meet the scale required by claims processing, offering more options for companies willing to take action. In light of these potential mitigation strategies and solutions, how is the insurance industry reacting?
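To make the tamper-evidence idea concrete, here is a minimal sketch (not any vendor’s or standard’s actual implementation) of fingerprinting a claim photo at intake so that later alterations become detectable. The file name claim_photo.jpg and the local JSON “ledger” are hypothetical stand-ins; real systems such as C2PA manifests or blockchain anchoring attach cryptographically signed provenance rather than a plain hash file.

```python
# Minimal sketch of photo tamper evidence for a claims intake workflow.
# Assumptions: "claim_photo.jpg" is a hypothetical customer-supplied file, and a
# local JSON file stands in for a signed manifest or blockchain-anchored ledger.
import hashlib
import json
from pathlib import Path

LEDGER = Path("photo_fingerprints.json")  # hypothetical stand-in for a real ledger

def fingerprint(photo_path: str) -> str:
    """Return a SHA-256 digest of the photo's raw bytes."""
    return hashlib.sha256(Path(photo_path).read_bytes()).hexdigest()

def register(photo_path: str) -> None:
    """Record the fingerprint at claim intake so later edits become detectable."""
    ledger = json.loads(LEDGER.read_text()) if LEDGER.exists() else {}
    ledger[Path(photo_path).name] = fingerprint(photo_path)
    LEDGER.write_text(json.dumps(ledger, indent=2))

def verify(photo_path: str) -> bool:
    """True only if the photo's current bytes still match the registered fingerprint."""
    ledger = json.loads(LEDGER.read_text()) if LEDGER.exists() else {}
    return ledger.get(Path(photo_path).name) == fingerprint(photo_path)

if __name__ == "__main__":
    register("claim_photo.jpg")                      # at first submission
    print("unaltered:", verify("claim_photo.jpg"))   # later, before settlement
```

The point of the sketch is the workflow rather than the storage: whatever the backing store, recording a fingerprint at first submission gives the claims system something objective to check before settlement.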

Awareness and concern among insurance professionals

The problem of altered digital media is not entirely new to insurance companies. While photo editors began to proliferate many years ago, deepfakes have complicated the problem exponentially by making it more difficult to detect digital media fraud.

In a recent Attestiv survey of insurance professionals, more than 80% of respondents expressed concern about altered or tampered digital media being used in insurance transactions such as claims. This is a clear acknowledgment of the threat and of the fraudulent losses that can result from it. In fact, altered photos that falsely inflate claims were the top concern among the various types of media fraud. But when do these organizations plan to take steps to address the problem?

When asked about their timing for implementing digital photo and media validation, 22% replied that they already have a system in place, while another 11% indicated they would have one within the next year. Taken together, 33% of organizations have or are close to having a validation solution, leaving 67% relatively exposed.

In step with touchless automation

While there is some movement by insurance organizations to close the deepfake fraud gap, the pace of touchless automation, in the form of self-service claims and straight-through processing (STP), has been much faster and a little more furious. Undoubtedly, COVID helped drive the shift to self-service transactions, which were a natural choice for reporting claims during lockdowns. At the same time, this mostly welcome digital transformation increases reliance on customer-supplied photos for resolving claims.

In contrast to the responses on digital media validation, over 43% of respondents indicated they offer self-service claims today, while another 7% indicated availability within the next year. Similarly, 49% of respondents indicated they now have STP claims systems, while another 7% indicated availability within the next year.

The deeper implication of this relatively torrid pace of automation is that, with customer-supplied photos and virtually no human interaction on the claims-processing side, the risk of fraud from altered, manipulated or synthetic photos increases significantly. Focusing solely on the present, 49% of respondents using STP claims but only 22% using a photo and media validation system leaves a large visibility gap. Ultimately, who is minding the photos, and what can be done about the inaction gap?
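The gap is easy to picture in code. Below is a hedged sketch, with a hypothetical validate_photo() placeholder, of how a claims intake pipeline might gate straight-through processing on media validation and fall back to manual review when no validator is available.

```python
# Hedged sketch of closing the "visibility gap": before a claim enters
# straight-through processing, every customer-supplied photo must pass a
# validation step; anything unverifiable is routed to a human adjuster.
# validate_photo() is a placeholder for whatever detector or provenance
# check an insurer actually deploys.
from dataclasses import dataclass, field

@dataclass
class Claim:
    claim_id: str
    photos: list[str] = field(default_factory=list)

def validate_photo(photo_path: str) -> bool:
    """Placeholder: return True only if the photo passes authenticity checks."""
    raise NotImplementedError("plug in a real media-validation service here")

def route_claim(claim: Claim) -> str:
    """Return 'STP' when all photos validate, otherwise 'MANUAL_REVIEW'."""
    try:
        if all(validate_photo(p) for p in claim.photos):
            return "STP"
    except NotImplementedError:
        pass  # no validator available: treat the media as unverified
    return "MANUAL_REVIEW"

print(route_claim(Claim("CL-1001", ["bumper.jpg", "windshield.jpg"])))  # MANUAL_REVIEW
```

Until the placeholder is backed by a real validation service, every claim falls back to manual review, which is the trade-off the survey numbers describe: the happy path can only be automated once the media feeding it can be trusted.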

Industry experts speak

Experts in claims processing, insurtech and the broader industry tend to agree that the risk of image fraud has been minimal to date, possibly creating a false sense of security. However, the problem is likely to grow over time.

“Overall, from the start of COVID in 2020 through 2021, the industry has seen an increase in user-submitted photos to streamline workflows,” said Ernie Bray, CEO of Auto Claims Direct (ACD). “I think many insurers are now starting to become receptive to adopting photo verification, and if the real mission is to speed up claims processing, photo validation will be at the forefront.”

Others, like Michael Lewis, CEO of Claim Technology, suggest a more proactive approach to building counter-fraud, stating, “Customer self-service and digital counter-fraud are two sides of the same coin. You shouldn’t introduce the former without first having implemented the latter.”

Going one step further, another approach is to put anti-fraud measures in place before automating. Guidewire Software chief evangelist Laura Drabik suggests, “AI technology for detecting fake or altered media can augment humans today. Rather than relying solely on the adjuster, the technology can detect subtleties and patterns that the human eye cannot.”

For those who choose inaction, on the other hand, Alan Pelz-Sharpe, founder of the analyst firm Deep Analysis, warns: “In a world of easily accessible, easy-to-use photo-doctoring tools, it is all too easy to commit fraud. The risk and regularity of this type of fraud is probably low today, but it will certainly increase significantly over the next few years.”

So while all may be quiet for now, the cost of inaction could be high. “In all likelihood, few insurance companies have addressed this growing concern. Yet it should be a priority for them, as once this takes off – and it will – it will be hard to stop,” said Pelz-Sharpe.

Where are the solutions?

Those hoping to see the industry quickly rally around a common solution to the problem may be disappointed. In fact, respondents pointed to wide variation in how organizations plan to source and support photo and digital media validation solutions.

Not surprisingly, 24% of insurance respondents were unsure of their approach to a solution, suggesting nascent planning or perhaps avoidance of the looming problem. Another 36% said they would rely on in-house technology or their special investigation unit (SIU) to solve the problem, a relatively high percentage preferring to develop capabilities internally or lean on groups that are already heavily utilized. While 15% would consider an insurtech solution, nearly another 15% would prefer to outsource the solution to vendors or claims systems. Finally, 10% are content to do nothing about the problem.

Bray advises: “Having AI-based anomaly detection integrated into the claims process is another step toward stopping fraud and increasing accurate claims handling. Without any ability to verify the authenticity of photos, damage could be exaggerated, and carriers will eventually pay for inflated or completely false losses.”
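As a concrete, if simplified, illustration of the kind of automated check Bray describes, the snippet below runs error level analysis (ELA), a classical heuristic that recompresses a JPEG and measures how unevenly regions respond, which can hint at local edits the eye would miss. It is not the AI-based detection the article refers to; it is a minimal sketch assuming Pillow is installed, a hypothetical JPEG named claim_photo.jpg, and an illustrative threshold of 40.

```python
# A minimal, non-AI illustration of automated image checks: error level analysis
# (ELA) recompresses a JPEG and measures the resulting error levels. Unusually
# high values can hint at local edits. Production systems use trained detectors
# with calibrated thresholds; this sketch assumes Pillow and a JPEG input.
import io
from PIL import Image, ImageChops

def ela_score(photo_path: str, quality: int = 90) -> int:
    """Return the maximum per-channel error level; higher values may warrant review."""
    original = Image.open(photo_path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)  # recompress in memory
    buffer.seek(0)
    recompressed = Image.open(buffer).convert("RGB")
    diff = ImageChops.difference(original, recompressed)   # per-pixel error levels
    return max(band_max for _, band_max in diff.getextrema())  # extrema per RGB band

if __name__ == "__main__":
    score = ela_score("claim_photo.jpg")  # hypothetical customer-supplied photo
    # Threshold of 40 is purely illustrative, not a calibrated cutoff.
    print("flag for adjuster review" if score > 40 else "no obvious anomaly", score)
```

A production detector would be a trained model with tuned thresholds; the value of the sketch is showing where such a check plugs into the claims workflow: score every customer photo and route anything anomalous to an adjuster.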

With such a high percentage sticking with the status quo, unsure, or ignoring the issue, Lewis cautions: “A lot has been said about the importance of keeping an appraiser in the loop to prevent fraud, but no appraiser can be trained to detect image or document tampering that is invisible to the naked eye.”

What does the future hold?

While fraud has always been a well-known challenge in the insurance industry, the pace of claims automation is far outpacing the pace of automated fraud prevention, opening up new risks and possibly new opportunities. Some insurance companies may be willing to accept fraud vulnerabilities in exchange for cost savings elsewhere and a better customer experience. Others may prefer a safer, more controlled approach, making sure anti-fraud technology keeps pace with claims automation so the company does not experience unexpected increases in losses.

“It really comes down to insurers realizing the true ROI of photo authenticity. The cost of implementing a solution to analyze the validity of photos can pay for itself many times over with a single claim or two,” Bray said.

Regardless of whether solutions are adopted to combat deepfakes and synthetic media, the past couple of years have shown that touchless claims (and underwriting) transactions are here to stay, and the ways digital media can be compromised have only grown more sophisticated. As a result, taking proactive steps to implement automated fraud prevention technology is fast becoming an important consideration for protecting the business metrics that matter most.

As Lewis concisely points out, “Running antivirus on incoming attachments is non-negotiable. Shouldn’t it be the same for running fraud checks on every image and document?”

The risk of not engaging in counter-fraud can be significant. As Drabik points out, “Ultimately, this will raise the price of insurance for everyone, including most people and families who do not commit fraud.”

So what will it take for insurers to accelerate their media fraud prevention plans? “There will, in fact, be organizations and individuals that have great success with insurance fraud in the future,” Pelz-Sharpe said. “Unfortunately, it will likely take a major case coming to light and generating some embarrassing headlines before companies take action to mitigate the risk.”

Ultimately, it’s hard to speculate on the adoption of deepfakes, synthetic media and the associated countermeasures in industries like insurance, or on whether scammers will see the technology as an opportunity to victimize companies that haven’t taken precautions. What is certain is that the next few years will make clear whether the organizations taking proactive measures today have invested wisely.

Nico Vekiarides ([email protected]) is the CEO and co-founder of Attestiv. As CEO and entrepreneur, he has spent the past 20 years in corporate IT and the cloud, bringing innovative technologies to market.

Related:

Detecting insurance fraud through graph algorithms

Auto insurance scam ringleader sentenced to 7 years in prison

The future of insurtech is straight-through processing
