Why should we care about AI policy when it comes to medical devices?
- meganjungers
- Feb 9
- 5 min read
We can understand why artificial intelligence is an exciting area for policy: models are being created, trained on data, and deployed at an unprecedented rate, faster than any other form of advanced technology. From completing tasks faster and with fewer errors to generating forms of art, AI is quickly finding a foothold in different areas of society. It is even slowly emerging in healthcare, for example by assisting providers with documentation tasks to alleviate workloads and manage provider burnout. However, there is certainly tentativeness about wholeheartedly embracing AI in medicine; for example, standard care practices do not yet recommend that AI devices own and develop treatment plans independent of providers.
Despite this, attitudes are mixed about how groups at different scales are responding (or not responding) to potential security, legal, and privacy concerns through policy. Medical technology policy ultimately holds the power to shape the healthcare landscape by determining which devices are eligible to support patients, and further which data is acquired and used in treating them.
If a hospital operates under more conservative, restrictive privacy regulations, its patients may have a very different care experience than patients at a hospital operating with fewer impositions and less policy guidance. Further, there are distinct benefits and risks to both.
Say that Hospital A is in a locality with stronger regulations. Patients may enjoy better protections of their privacy, with more comprehensive insight into what data is being collected through more opportunities for informed consent. Generally speaking, they may have greater safeguards for data they consider personal and do not want in circulation, whether in the hospital's databases or in material that may be used in future research or AI model training, even if it is de-identified. Providers may be more comfortable using AI medical devices as treatment tools and therapeutics, given detailed legal accountability for who is responsible for building the models and the devices themselves. However, more beneficial technologies may exist elsewhere that the provider is not legally authorized to use, and patients may have to travel to another hospital to seek treatment that is available there. Further, the medical community as a whole may suffer from the limits on innovation imposed by which AI medical devices are eligible for approval.
By contrast, say that Hospital B operates in an area with some AI guidelines in place for medical devices, but for the most part few limits on which data is protected from use in such technology. Patients' data at this hospital may be in jeopardy of being accessed by third parties without a patient's knowledge, and patients may be at higher risk of having their personal data exposed by cybersecurity threats. While unlikely, it is still possible that such breached information could lead to discrimination in insurance coverage, job opportunities, and financial support, all while subjecting patients to targeted advertisements based on demographic and personal medical data. Further, limited compliance measures may make it harder to determine who is liable when a device malfunctions. However, patients may have better care opportunities and a larger variety of care options, and thus better outcomes, thanks to more potential courses of treatment integrated with more advanced technologies.
The issue is further complicated outside the medical setting, after a patient receives treatment. With less restrictive policy, there may be greater flexibility and ease in monitoring patients' lives post-care, since important vital signs and biometrics can be tracked virtually. Costs may also be lower, as fewer in-person visits are needed to check a patient's statistics, reducing transportation and accessibility burdens. On the other hand, if these devices rely on third parties that are not providers for elements of data sourcing, HIPAA does not apply to the data they acquire, which may jeopardize what information is gathered and stored without a patient's knowledge. Under stricter regulations, by comparison, both the benefits and the risks of such post-treatment use are lower.
Ultimately, the difference depends on the values and needs of the individuals to whom the regulation applies. This is why it is imperative that the public be aware of the regulatory process and have the knowledge and resources to advocate for the issues most important to them.

There is also the worry of potential commercial use of these devices beyond a provider's prescribed treatment protocol. An individual may seek options to manage a medical condition that differ from traditional standards of care, from what is legally available where they receive treatment, or from alternatives accessible in other markets. Most governing bodies have begun creating their own definitions of which items must be regulated, but given the sensitive nature of the data some of these devices gather, there may be a greater need to define the scope of which items fall under such legal classifications. There is also the challenge of regulating AI-enabled medical devices used beyond their original purpose. Off-label use of medical devices is another item to consider should these devices become more normalized [1]. As with regulated substances, the potential dangers of using these devices for purposes other than their intended treatment could be outlined [1].
Lastly, there is the worry of whether privacy policies are strict enough to keep pace with the evolving nature of data. A provider's ability to de-identify patient data in electronic records has helped safeguard patient information in the event of cyberattacks; however, there are already opportunities to re-identify information by triangulating it with other data sets [2]. This challenge adds obstacles to preserving control over data a patient may not want disclosed beyond their provider. While certainly difficult to address, policy that enforces advanced de-identification mechanisms may proactively protect patient data from unauthorized disclosures of PHI.
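To see why de-identification alone is fragile, consider a minimal sketch of data triangulation. All records and names below are invented for illustration: an "anonymous" medical record can be linked back to a person whenever its quasi-identifiers (here, ZIP code, birth year, and sex) match exactly one entry in some outside data set, such as a public registry.

```python
# Minimal sketch of re-identification via data triangulation.
# All data is invented for illustration only.

deidentified_records = [
    {"zip": "48109", "birth_year": 1957, "sex": "F", "diagnosis": "hypertension"},
    {"zip": "48109", "birth_year": 1991, "sex": "M", "diagnosis": "asthma"},
]

# A hypothetical outside data set (e.g., a public registry) that carries names.
public_roll = [
    {"name": "A. Smith", "zip": "48109", "birth_year": 1957, "sex": "F"},
    {"name": "B. Jones", "zip": "48109", "birth_year": 1991, "sex": "M"},
    {"name": "C. Lee",   "zip": "48109", "birth_year": 1991, "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")

def reidentify(medical, roll):
    """Link 'anonymous' medical records to names whenever the
    quasi-identifier combination matches exactly one person."""
    linked = []
    for record in medical:
        key = tuple(record[q] for q in QUASI_IDENTIFIERS)
        matches = [p for p in roll
                   if tuple(p[q] for q in QUASI_IDENTIFIERS) == key]
        if len(matches) == 1:  # a unique match means re-identification
            linked.append((matches[0]["name"], record["diagnosis"]))
    return linked

print(reidentify(deidentified_records, public_roll))
# The first record matches only A. Smith and is re-identified;
# the second matches two people, so it stays ambiguous.
```

The second record survives only because two people share its quasi-identifiers; the "advanced de-identification mechanisms" mentioned above (such as generalizing birth year to an age range or truncating ZIP codes) work precisely by ensuring no combination of attributes is unique.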
The question truly becomes, what obligations do policy-makers owe to society when it comes to providing individuals the freedom to choose the best form of care for their personal circumstances, while also safeguarding their data, privacy, and safety?
Citations
1. Krishnamoorthy, M., Sjoding, M. W. & Wiens, J. "Off-label use of artificial intelligence models in healthcare." Nature Medicine 30, 1525–1527 (2024). https://doi.org/10.1038/s41591-024-02870-6
2. Price, W. N. & Cohen, I. G. "Privacy in the age of medical big data." Nature Medicine 25, 37–43 (2019). https://doi.org/10.1038/s41591-018-0272-7