EU Attitudes Towards AI Policy
- meganjungers
- Mar 3
Updated: Mar 31
EU Legislation
The European Union has taken a comprehensive framework approach to regulating AI, as demonstrated by its regulation of AI systems across domains and applications1. The general risk-based approach aims to keep humans at the forefront of legislation and to prioritize the trustworthiness of AI in the European Union. This does not stray far from previous approaches to regulating medical device technology: the default legislation governing medical technology prior to the enactment of AI-specific additions was EU Regulation 2017/745, the Medical Device Regulation (MDR)2.
The MDR created regulations for all medical devices at the pre-market phase, subjecting high-risk devices to a much more rigorous level of review3. The EU's conservative approach to limiting risk and ensuring patient safety does come with some drawbacks. For one, stricter standards create challenges for working across markets with different regulations. In response to the EU’s MDR, the U.S. International Trade Commission (USITC) stated that it foresaw the regulation limiting the market by way of the obstacles created at nearly every stage of the regulatory approval process4. The USITC cited “time to market” delays, challenges with manufacturing compliance, and resource and capacity strains on regulatory bodies as key factors likely to deter American medical device companies from entering EU markets4. However, the EU’s values lie closer to ensuring correct and compliant use of medical devices, given the stakes of combining technology with healthcare. This general attitude of managing risk over driving innovation set the groundwork for subsequent, important regulation more specific to AI in medical devices.
The EU preceded its MDR legislation with the General Data Protection Regulation (GDPR), which is considered one of the most structured and enforceable privacy regulations in the world5. The GDPR extensively outlines the privacy rights of all EU citizens, including a definition of personal data, a requirement to opt in to data use, and fines for violating its measures6. Seven specific principles are further articulated6:
1) lawfulness, fairness and transparency
2) purpose limitation
3) data minimization
4) accuracy
5) storage limitation
6) integrity and confidentiality
7) accountability
The MDR also applies its protocols in a number of specific contexts6. These include reclassifying medical devices that use personal data as high-risk, a category that covers wearable technology and many smartphone apps6.
With regard to aDBS with app interfaces, this flow of data is protected under the GDPR.
The GDPR’s far-reaching regulation is one of the most significant steps towards developing trust in AI, and it returns autonomy to EU citizens with regard to personal information and privacy. The challenge the GDPR creates, however, is how to continue driving AI innovation despite barriers to sharing or using information acquired about individuals. The “opt-in” process certainly creates an obstacle to gathering general information for training models, as does the accountability element that requires explainability of outputs by the corresponding models5. Additionally, the GDPR regulates not only data in the EU, but data of EU citizens regardless of where that data is located5. Further, the GDPR may be considered an optimistic law that cannot meet expectations given the future direction of AI.
Most recently, the EU AI Act set out to establish additional oversight of the AI landscape, through a legally binding statutory framework, to ban harmful practices that are considered a threat to people’s safety, to regulate “high-risk” AI, and to give guidance for limited oversight of limited- to low-risk AI systems4. The ultimate goal is to guide AI developers and deployers with clear requirements for AI applications4.
The EU AI Act also explicitly includes AI-based software for medical purposes as a high-risk AI system, which aDBS also falls under4.
If a device is classified as “high risk,” then it must comply with “risk-mitigation systems, high quality data sets, clear user information and human oversight”4.
The EU AI Act, the GDPR, and even the MDR provide oversight and stricter regulations that drive a safer environment for data exchange. This, to a degree, reflects the collective values of EU policy-makers.
Out of scope: EDHR (2025)
Respectively, here are the associated risks and benefits of the discussed EU AI policies as they pertain to aDBS. If your attitudes towards risk management of personal information are similar, then the EU framework may be the next direction for your medical device implementation, or where you seek your aDBS care.

Citations:
Russo, Lucia, and Oder, Noah. 2023. "How countries are implementing the OECD Principles for Trustworthy AI." OECD AI Policy Observatory. https://oecd.ai/en/wonk/national-policies-2.
U.S. International Trade Commission. 2019. “EU Medical Device Regulation & U.S. Medical Device Industry.” U.S. International Trade Commission. https://www.usitc.gov/publications/332/journals/eu_medical_device_regulation_us_medical_device_industry.pdf.
European Commission. n.d. Medical Devices Expert Panels: Overview. https://health.ec.europa.eu/medical-devices-expert-panels/overview_en.
European Parliament. 2021. Artificial Intelligence Act: Europe’s Approach to AI Regulation. European Parliamentary Research Service. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai#:~:text=To%20ensure%20safe%20and%20trustworthy,become%20effective%20in%20August%202025.
AI Collective. n.d. Putting Regulation into Place. https://www.aicollective.co/putting-regulation-into-place.
Extrahorizon. n.d. GDPR, Software, and MedTech: Navigating Medical Device Regulation and Data Privacy in the EU. https://www.extrahorizon.com/gdpr-software-medtech-medical-device-eu-regulation-mdr-data-privacy.