Innovating in line with the European Union's AI Act


During our Microsoft AI Tour stops in Brussels, Paris, and Berlin at the end of last year, we met with European organizations that were excited about the possibilities of our latest AI technologies and already engaged in deployment projects. They were also mindful that 2025 is the year key obligations under the European Union's AI Act come into effect, opening a new chapter in digital regulation as the world's first comprehensive AI law becomes a reality.

At Microsoft, we're ready to help our customers do two things at once: innovate with AI and comply with the EU AI Act. We design our products and services to meet our obligations under the EU AI Act, and we work with our customers to help them deploy and use the technology compliantly. We are also working with European policymakers to support the development of efficient, effective implementation practices under the EU AI Act that align with emerging international standards.

We look at these efforts in more detail below. Because EU AI Act compliance deadlines are phased in over time and key implementation details are not yet finalized, we will publish information and tools on an ongoing basis. You can consult our EU AI Act documentation on the Microsoft Trust Center to stay up to date.

Building Microsoft products and services that comply with the EU AI Act

Organizations around the world use Microsoft products and services for innovative AI solutions that enable them to achieve more. For these customers, particularly those operating globally and across multiple jurisdictions, compliance is paramount. That is why, in every customer agreement, Microsoft commits to complying with all laws and regulations applicable to Microsoft. This includes the EU AI Act. It is also why we made early decisions to build, and continue to invest in, our AI governance program.

As noted in our inaugural Responsible AI Transparency Report, we have adopted a risk management approach that spans the entire AI development lifecycle. We use practices such as impact assessments and red-teaming to help us identify potential risks and to ensure that teams building the highest-risk models and systems receive additional oversight and support through governance processes such as our Sensitive Uses program. After mapping risks, we measure them systematically to evaluate their prevalence and severity against defined metrics. We manage risks by implementing mitigations, such as the classifiers included in Azure AI Content Safety, and by ensuring ongoing monitoring and incident response.

Our framework for guiding engineering teams building Microsoft AI solutions, the Responsible AI Standard, was drafted with an early version of the EU AI Act in mind.

Building on these core components of our program, we have devoted significant resources to implementing the EU AI Act across Microsoft. Cross-functional working groups spanning AI governance, engineering, legal, and public policy have been working for months to determine whether and how our internal standards and practices should be updated to reflect the final text of the EU AI Act as well as early indications of implementation details. They have also been identifying any additional engineering work needed to ensure readiness.

For example, the EU AI Act's prohibited practices provisions are among the first to come into effect, in February 2025. We have taken a proactive, layered approach to compliance ahead of additional guidance from the newly established AI Office within the European Commission. This includes:

  • Conducting a thorough review of Microsoft-owned systems already on the market to identify any places where we might need to adjust our approach, including by updating documentation or implementing technical mitigations. To do this, we developed a series of questions designed to identify whether an AI system could implicate a prohibited practice and distributed this survey to our engineering teams via our central tooling. Relevant experts reviewed the responses and followed up directly with teams where further clarification or additional steps were needed. These screening questions remain part of our central responsible AI workflow on an ongoing basis, so that teams working on new AI systems answer them and engage the review workflow as needed.
  • Adding new restricted uses to our internal company policies to ensure that Microsoft does not design or deploy AI systems for uses prohibited by the EU AI Act. We are also developing specific marketing and sales guidance to ensure that our general-purpose AI technologies are not marketed or sold for uses that could fall under the Act's prohibited practices.
  • Updating our contracts, including our Generative AI Code of Conduct, so that our customers clearly understand they cannot engage in any prohibited practices. For example, the Code of Conduct now explicitly prohibits the use of our services for social scoring.
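The screening step described in the first bullet can be pictured as a simple questionnaire gate: any "yes" answer routes the system to expert review. The sketch below is purely illustrative, not Microsoft's actual internal tooling or question set; the questions paraphrase prohibited-practice categories from Article 5 of the EU AI Act.

```python
# Hypothetical sketch of a prohibited-practices screening gate.
# The questions paraphrase EU AI Act Article 5 categories; the real
# Microsoft survey and central tooling are not public.

SCREENING_QUESTIONS = [
    "Does the system use subliminal or manipulative techniques?",
    "Does it exploit vulnerabilities of specific groups of persons?",
    "Does it perform social scoring of natural persons?",
    "Does it build facial recognition databases via untargeted scraping?",
]

def needs_expert_review(answers: dict[str, bool]) -> bool:
    """Route the system to expert review if any screening answer is 'yes'.

    Unanswered questions are treated conservatively as 'no' here; a real
    workflow would more likely block submission until all are answered.
    """
    return any(answers.get(question, False) for question in SCREENING_QUESTIONS)

# Example: a system flagged on a single question is routed to review.
answers = {question: False for question in SCREENING_QUESTIONS}
answers["Does it perform social scoring of natural persons?"] = True
print(needs_expert_review(answers))  # True
```

The design choice worth noting is that the gate is deliberately over-inclusive: a single affirmative answer triggers human expert review rather than an automated determination, mirroring the article's description of experts following up with teams directly.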

We were also among the first organizations to sign up to the three core commitments of the AI Pact, a set of voluntary pledges developed by the AI Office to support regulatory readiness ahead of some of the upcoming EU AI Act compliance deadlines. In addition to our regular rhythm of publishing annual Responsible AI Transparency Reports, you can find an overview of our approach to the EU AI Act and a more detailed summary of how we are implementing the prohibited practices provisions on the Microsoft Trust Center.

Working with customers to help them adopt and use Microsoft products and services in compliance with the EU AI Act

One of the EU AI Act's core concepts is that obligations must be allocated across the AI supply chain. This means that an upstream regulated actor, such as Microsoft in its capacity as a provider of AI tools, services, and components, must support downstream regulated actors, such as our enterprise customers, when they integrate a Microsoft tool into a high-risk AI system. We embrace this concept of shared responsibility and strive to support our customers in their AI development and deployment by sharing our knowledge, providing documentation, and offering tools. All of this builds on the AI Customer Commitments we made last June to support our customers on their responsible AI journeys.

We will continue to post documentation and resources related to the EU AI Act on the Microsoft Trust Center to provide updates and answer customer questions. Our Responsible AI Resources site is also a rich source of tools, practices, templates, and information that we believe will help many of our customers establish the governance foundations needed to support compliance with the EU AI Act.

On the documentation front, the 33 Transparency Notes we have published since 2019 provide essential information about the capabilities and limitations of the AI tools, components, and services our customers rely on as downstream deployers of Microsoft AI platform services. We have also published documentation about our AI systems, such as answers to frequently asked questions. Our Transparency Note for the Azure OpenAI Service, an AI platform service, and our FAQ for Copilot, an AI system, are examples of our approach.

We expect that several of the EU AI Act's secondary regulatory efforts will provide additional guidance on model-level and system-level documentation. These documentation and transparency norms are still maturing and would benefit from further definition, in line with parallel efforts such as the Reporting Framework for the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems. Microsoft was pleased to contribute to this Reporting Framework through an OECD-led process and looks forward to its forthcoming public release.

Finally, because tools are necessary for achieving consistent and effective compliance, we make versions of the tools we use for our own internal purposes available to our customers. These include Microsoft Purview Compliance Manager, which helps customers understand and take steps to improve their compliance posture across multiple regulatory domains, including the EU AI Act; Azure AI Content Safety, which helps mitigate content-related harms; Azure AI Foundry, which helps evaluate generative AI applications; and PyRIT (Python Risk Identification Tool), an open framework that our independent AI Red Team uses to identify potential harms in our highest-risk AI models and systems.

Helping develop efficient, effective, and interoperable implementation practices

A unique feature of the EU AI Act is that there are more than 60 secondary regulatory efforts that will have a material impact on defining implementation expectations and directing organizational compliance. Since many of these efforts are in progress or just getting underway, we are at a key window of opportunity to help establish implementation practices that are efficient, effective, and aligned with emerging international standards.

Microsoft is engaging with the EU AI Act's central regulator, the AI Office, and other relevant authorities in EU member states to share insights from our experience with AI development, governance, and compliance, to seek clarity on open questions, and to advocate for practical outcomes. We are also participating in the development of the Code of Practice for general-purpose AI model providers, and we remain a longstanding contributor to the standards being developed by European standards organizations such as CEN and CENELEC to address the EU AI Act's requirements for high-risk AI systems.

Our customers also have a key role to play in these implementation efforts. By engaging with policymakers and industry groups to understand and provide input on evolving requirements, our customers can contribute valuable insights and help shape implementation practices that better reflect their circumstances and needs, recognizing the broad range of organizations across Europe that stand to innovate and grow with AI. A key question to be resolved in the coming months is when organizations that substantially fine-tune AI models become downstream providers subject to the general-purpose AI model obligations taking effect in August.

Moving forward

Microsoft will continue to make significant product, tooling, and governance investments to help our customers innovate with AI in line with new laws such as the EU AI Act. Implementation practices that are efficient, effective, and internationally interoperable will be key to supporting useful and trustworthy innovation on a global scale, so we will continue to engage with regulatory processes in Europe and around the world. We are excited to see the projects that animated our Microsoft AI Tour events in Brussels, Paris, and Berlin improve people's lives and earn their trust, and we welcome feedback on how we can continue to support our customers in their efforts to comply with new laws such as the EU AI Act.

Tags: AI, AI security policy, Azure OpenAI service, EU, European Union, responsible AI
