OECD Guiding Principles and CoE Framework for Trustworthy AI
- meganjungers
- Mar 14
- 2 min read
Updated: Apr 3
Given that there is no global standard for AI regulation, a number of international governing organizations (IGOs) have embraced the opportunity to project their own ideas into the regulatory space through standards, frameworks, and guidelines. While theoretically not legally binding, these instruments model what good practice in the global context looks like for AI policy, and they serve as valuable reference points for the future direction of more localized policies across the globe.
OECD AI Principles
The OECD (Organisation for Economic Co-operation and Development) developed a list of 10 guiding principles to promote governmental AI policy that seizes innovation opportunities while balancing recommendations for responsible AI use.¹ The list was the first-ever intergovernmental standard for trustworthy AI, and it set a baseline to help countries advance AI policy that best reflects shared values, as articulated in its five values-based principles:
"Values-based principles: The OECD AI Principles promote use of AI that is innovative and trustworthy and that respects human rights and democratic values. Adopted in May 2019, they set standards for AI that are practical and flexible enough to stand the test of time.
Inclusive growth, sustainable development and well-being
Human rights and democratic values, including fairness and privacy
Transparency and explainability
Robustness, security and safety
Accountability (OECD, 2019)"
They also include specific recommendations for addressing AI in policy:
"Recommendations for policy makers
Investing in AI research and development
Fostering an inclusive AI-enabling ecosystem
Shaping an enabling interoperable governance and policy environment for AI
Building human capacity and preparing for labour market transformation
International co-operation for trustworthy AI (OECD, 2019)"
Further, the OECD continuously tracks and reports publicly on the actions different countries are taking to work toward the 10 principles. This form of international oversight is valuable for identifying areas of overlap in AI policy and for pinpointing where opportunities for cohesive international governance could benefit the world.
The CoE Framework Convention on AI
Unlike the OECD, the Council of Europe contributed to the AI policy narrative through the first international treaty establishing a framework for AI governance. While not yet explicitly applicable to AI-based Software as a Medical Device (SaMD-AI), the treaty creates legally binding compliance obligations for its signatories, of which the United States is one.² While not yet in force as a global regulation, the framework does establish a set of items to focus regulation around, including human dignity and human autonomy. The EU is generally anticipated to be already compliant with these standards, given its extensive AI regulatory landscape, but the framework lays a foundation for countries with more permissive attitudes to embrace more opportunities to structure AI in their healthcare systems.
Ultimately, the international AI policy landscape continues to evolve; however, the precedent set by the CoE's Framework Convention establishes that avenues for international governance exist. The benefits of embracing this opportunity to create more cohesive guidelines far outweigh the risks, and without continued movement on the international regulatory front, ethical concerns such as autonomy and data privacy will remain under threat.
Citations
OECD.AI. 2025. "Risk & Accountability Overview." OECD.AI. https://oecd.ai/en/site/risk-accountability.
International Association of Privacy Professionals (IAPP). 2024. "Council of Europe's Framework Convention and Its Implications for Global AI Governance." IAPP News, 2024. https://iapp.org/news/a/council-of-europe-s-framework-convention-and-its-implications-for-global-ai-governance.