Upgrading Healthcare Security Strategies To Combat AI-Based Threats

Rom Hendler is CEO & cofounder of Trustifi, a provider of SaaS-based security and email encryption.



Artificial intelligence (AI) and machine learning (ML) continue to make headlines and shape the high-tech landscape at the boardroom level. Yet the same technology is rapidly becoming a destructive force, creating considerable security risks in the highly regulated healthcare space.

Nefarious cyberattacks, including synthetic fraud and phishing attempts, continue to increase in scale and effectiveness thanks to AI-based tools now available to hackers. A synthetic fraud attack uses portions of a victim’s identity garnered from the internet to compile a false but convincing profile with which to commit fraud. This includes crimes like applying for patient financing under false pretenses, requesting medical services, or submitting fraudulent insurance claims.

This article highlights the rapid advancement of AI-assisted synthetic fraud attacks and medical identity theft as they affect healthcare providers and patients, along with updated recommendations for patient data privacy, AI-powered email security solutions, and data protection policies.

AI bots have long been a hacker favorite for collecting stolen credentials, a trend accelerated by the rise of natural language generators like ChatGPT. Bots mine the internet for accurate information about a victim's identity, which attackers use to craft messages that are difficult for the victim to recognize as fraudulent. These emails typically include a link to a convincing imposter website that tricks recipients into providing usernames and passwords. When such breaches occur, the resulting identity theft can also damage the healthcare provider's reputation.

In the past, phishing attacks were easy to detect because of spelling mistakes and vague information requests. Present-day attacks incorporate persuasive personal details such as the user’s bank or healthcare provider’s name, and references to the user’s home city or local pharmacy. This is known as “geo-phishing.”

In the case of synthetic fraud that compiles information from different victims, it’s harder to determine that the resulting profile is fake since much of the data used to assemble it belongs to actual individuals. The cybercriminal uses the synthetic profile to apply for patient financing or otherwise scam large amounts of money and services. Material used to create synthetic fraud profiles is often acquired through compromised email data.

In addition, hackers can now leverage AI-based tools like ChatGPT to rapidly create near-perfect scripts for phishing schemes. Armed with a patient's public email address and a password from a successful phishing attack, a hacker can log in to a medical provider/patient portal with these stolen credentials. If two-factor protection is not enabled on the EMR portal, the hacker can access patient records and correspondence with practitioners.

Once hackers gain access to a practitioner's account, they can request medication refills and submit false insurance claims to the provider or to Medicare carriers. All this shows how critical it is for healthcare organizations to secure their patient data and infrastructure with the most up-to-date cybersecurity solutions, including AI-based tools that can combat these highly sophisticated (and similarly AI-powered) attacks. Traditional, legacy security capabilities in firewalls, secure email gateways, and first-generation multifactor authentication often fail to stop the advanced AI-powered threats hackers continue to develop.

Hackers can alter their attack methods, extend their attacks across several countries in mere seconds, and shut down their operations globally to avoid detection. Most healthcare providers' adaptive security controls still rely on reporting events into a Syslog file or an early-generation security information and event management (SIEM) tool lacking AI capabilities. And too many solutions simply blacklist known malicious IP addresses, an approach that fails to detect sophisticated, context-based threats.

So what strategies can healthcare organizations put into action to guard against increased AI-driven attacks? These guidelines can help companies remain HIPAA compliant and protect their patients against victimization.

• Abandon outdated technology and strategies: Medical providers can increase their protection by phasing out antiquated, legacy techniques. A surprising number of healthcare organizations still use fax technology to transmit sensitive information. Companies are far better off adopting a modern standard such as AES-256 encryption for the transfer of sensitive material like patient records. In addition, outdated operating systems like Windows XP (still found in too many office environments) no longer receive security updates and create vulnerabilities.
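As a sketch of the modern encryption recommended above, the following Go example seals data with AES-256-GCM, an authenticated mode that both encrypts and detects tampering. Key generation and storage are simplified for brevity; real deployments need proper key management:

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
	"io"
)

// encrypt seals plaintext with AES-256-GCM under a 32-byte key,
// prepending the random nonce to the returned ciphertext.
func encrypt(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key) // a 32-byte key selects AES-256
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

// decrypt splits off the nonce and opens the ciphertext, failing if
// the data was tampered with.
func decrypt(key, data []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce, ciphertext := data[:gcm.NonceSize()], data[gcm.NonceSize():]
	return gcm.Open(nil, nonce, ciphertext, nil)
}

func main() {
	key := make([]byte, 32)
	io.ReadFull(rand.Reader, key)
	ct, _ := encrypt(key, []byte("patient record #1234"))
	pt, _ := decrypt(key, ct)
	fmt.Println(string(pt)) // prints "patient record #1234"
}
```

Unlike a fax line or plain attachment, intercepted ciphertext reveals nothing without the key, and any modification in transit causes decryption to fail outright.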

• Use AI to combat AI: The bright side of artificial intelligence is it can deliver as many effective capabilities to cybersecurity solutions as it does to criminals. Software should leverage sophisticated AI-based tools, including optical character recognition and algorithms that recognize “red flag” keywords that could indicate a breach.

• Deploy automation: Administrators can automate HIPAA compliance. Effective solutions allow admins to set their systems to automatically encrypt or flag sensitive email messages that fall under HIPAA statutes. This takes the burden of deciding what material needs to be encrypted out of the hands of employees, reducing human error.
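A hypothetical sketch of such automated flagging: a scanner matches outgoing message bodies against sensitive-content patterns and decides whether to force encryption, taking that judgment out of the sender's hands. The specific patterns below (the SSN format, an "MRN" prefix, a few red-flag keywords) are illustrative assumptions, not a real HIPAA rule set:

```go
package main

import (
	"fmt"
	"regexp"
)

// Illustrative patterns for content that may fall under HIPAA; a real
// DLP engine would use a much broader, tested rule set.
var sensitivePatterns = []*regexp.Regexp{
	regexp.MustCompile(`\b\d{3}-\d{2}-\d{4}\b`),                  // SSN-formatted numbers
	regexp.MustCompile(`(?i)\bMRN[:#\s]*\d+`),                    // medical record numbers (assumed prefix)
	regexp.MustCompile(`(?i)\b(diagnosis|claim|prescription)\b`), // red-flag keywords
}

// needsEncryption reports whether a message body matches any sensitive
// pattern, so the gateway can auto-encrypt it rather than rely on the
// sender to remember.
func needsEncryption(body string) bool {
	for _, p := range sensitivePatterns {
		if p.MatchString(body) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(needsEncryption("Patient SSN 123-45-6789 attached")) // true
	fmt.Println(needsEncryption("Lunch at noon?"))                   // false
}
```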

• Implement security awareness policies: Since a majority of breaches start with a phishing attempt, make sure your staff is aware of the latest schemes and imposter email strategies. Create policies around vulnerable activities (e.g., establish a protocol where requests for wire transfers or financial login credentials are confirmed verbally).

• Focus on ease of use: Your security should be simple for employees to use. If measures are convoluted and laborious, users will abandon them, and your security investments will go by the wayside.

Preventing synthetic fraud and medical identity theft does not start with merely detecting threat vectors. Prevention begins by protecting patient information and network email data with advanced cybersecurity solutions powered by AI technology, which helps deter imposter attacks, credential compromise, and identity theft from the outset. It's essential to deprive hackers and fraudsters of the key components that facilitate their attacks by protecting user attributes such as login information, Social Security numbers, and passwords, especially within email data.

Email security platforms continue to deliver integrated features, including mature AI-enabled engines for threat detection, email encryption, and data loss prevention. Organizations that want to enhance their email security while reducing complexity should continue to research these solutions, employing AI-driven, easy-to-use, next-generation technologies as strategies to combat the coming wave of AI-powered attacks.

 
 
 
 