
AI Presents HIPAA Risks

Save time! Save money! Use artificial intelligence in your medical practice. Or…slow down and think through the potential risks to privacy and security. Make sure your cybersecurity defenses are as strong as they can be.

Artificial intelligence is pervasive and mainstream. It's rapidly being adopted in nearly every field: a quick Google search reveals applications in education, medicine, literature, healthcare, business, marketing, agriculture, manufacturing, and more. Our everyday lives are affected by AI, in our homes, our cars, our phones and computers, whether we've been fully aware of it or not.

AI can be controversial, though, because it is changing rapidly and brings risks as well as rewards. It has raised ethical questions about education and privacy, and more recently, concerns about cybersecurity.

The Health Sector Cybersecurity Coordination Center (HC3) at the U.S. Department of Health and Human Services explains:

In November 2022, the artificial intelligence research laboratory OpenAI publicly released a chatbot known as ChatGPT, which is based on its GPT-3.5 language model which was trained on Microsoft Azure. As a chatbot, it’s designed to interact with humans and respond to their conversation and requests. Among other things, it can be used to answer questions, write essays, poetry or music, and compose e-mails and computer code.

Physician Uses TikTok to Explain How ChatGPT Helps Productivity

Palm Beach-based rheumatologist Dr. Clifford Stermer posted a video on TikTok showing how he had used ChatGPT to write a letter to UnitedHealthcare asking it to approve a costly anti-inflammatory drug for a pregnant patient.

“Save time. Save effort. Use these programs, ChatGPT, to help out in your medical practice,” he recites, after showing how he asked the bot to reference a study concluding that the prescription was an effective treatment for pregnant patients with Crohn’s disease. The video then showed the bot creating a letter to send to the insurance company.

The physician has not been accused of violating HIPAA over this incident, but it raises questions about whether ChatGPT has a place in a medical practice, and how extensively it should be used without strong cybersecurity protections.
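To make the privacy stakes concrete, here is a minimal sketch of what that kind of letter-drafting looks like when done through an AI service programmatically. It assumes the openai Python client; the model name, prompt wording, and placeholder fields are illustrative only, and the essential HIPAA point is that no actual patient identifiers are sent to the outside service.

```python
# Illustrative sketch only: asking a chatbot to draft a prior-authorization
# letter while keeping protected health information out of the prompt.
# Assumes the openai Python package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Draft a letter to an insurance company requesting approval of an "
    "anti-inflammatory medication for a pregnant patient with Crohn's disease. "
    "Cite published evidence that the treatment is effective during pregnancy. "
    "Use the placeholders [PATIENT NAME], [DOB], and [MEMBER ID] so identifying "
    "details can be filled in locally after the draft comes back."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You write concise, professional medical correspondence."},
        {"role": "user", "content": prompt},
    ],
)

print(response.choices[0].message.content)
```

Even with placeholders, a regulated entity should evaluate the vendor and the data flows before adopting a tool like this, which is exactly the kind of risk review discussed below.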

Artificial Intelligence May Aid in Malware Development

The TikTok demonstration is persuasive. It appears that the ChatGPT tool could streamline correspondence and research, at a minimum.

But HC3 published a report on January 17, 2023, explaining how cyber criminals are using AI to strengthen their attack tools and why healthcare is particularly vulnerable.

Artificial intelligence (AI) has now evolved to a point where it can be effectively used by threat actors to develop malware and phishing lures. While the use of AI is still very limited and requires a sophisticated user to make it effective, once this technology becomes more user-friendly, there will be a major paradigm shift in the development of malware. One of the key factors making AI particularly dangerous for the healthcare sector is the ability of a threat actor to use AI to easily and quickly customize attacks against the healthcare sector.

According to the HC3 report, ChatGPT has already been linked to malware development by threat actors multiple times in the two months since its release. The report also says:

Current artificial intelligence technologies are widely believed to only be at the very beginning of what will likely be a whole array of capabilities that will cut across industries and enter into people’s private lives. The cybersecurity community is far from developing mitigations and defenses for such malicious code, and it remains unclear if there will ever be ways to specifically prevent AI-generated malware from being successfully used in attacks. There are already debates and discussions on the ethical use of AI systems and the proper governing models that should be deployed to ensure they are confined appropriately.

The report then lists some of the resources exploring AI ethics and appropriate governance models, notably at the Brookings Institution, the Harvard University Berkman Klein Center, and the ETSI Specification Group on Securing Artificial Intelligence.

Statement from OCR on the Use of AI

Although the Office for Civil Rights (OCR) has not issued formal guidance about the use of AI as it relates to HIPAA, an OCR spokesperson told the Information Security Media Group (ISMG), in response to a question:

HIPAA regulated entities should determine the potential risks and vulnerabilities to electronic protected health information before adding any new technology into their organization.

Boost Cybersecurity with Strong HIPAA Risk Management

Defending against malware of all kinds starts with a HIPAA Risk Analysis. Use the findings to tailor a risk management plan for your organization. Stay up to date on recommended mitigation measures against known threats. If you want to use AI in your medical practice, first examine the potential patient data security and privacy issues so you can weigh the risks against the benefits.
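As a simplified illustration only (not a substitute for a full risk analysis under the Security Rule), the likelihood-and-impact scoring that a risk analysis produces can be sketched in a few lines of code; the threats, scales, and scores below are hypothetical examples.

```python
# Hypothetical illustration: ranking AI-related threats by likelihood x impact,
# the kind of scoring a HIPAA risk analysis feeds into a risk management plan.
from dataclasses import dataclass

@dataclass
class Risk:
    threat: str      # short description of the threat to ePHI
    likelihood: int  # 1 (rare) to 5 (frequent)
    impact: int      # 1 (negligible) to 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

risks = [
    Risk("PHI pasted into a public AI chatbot prompt", likelihood=4, impact=5),
    Risk("AI-assisted phishing e-mail reaches staff inboxes", likelihood=4, impact=4),
    Risk("Unvetted AI browser extension installed on a workstation", likelihood=3, impact=4),
]

# The highest-scoring risks get mitigation priority in the risk management plan.
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{r.score:>2}  {r.threat}")
```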


Maggie Hales

Maggie Hales is a lawyer focusing on health information privacy and security. As CEO of ET&C Group LLC she advises health care providers and business associates in 36 states, Canada, Egypt, India and the EU, using The HIPAA E-Tool® to deliver up to date policies, forms and training on everything related to HIPAA compliance.
