
Developer compliance

Are you developing and selling AI platforms, agents, or other products?
Use your responsible AI practices as a competitive advantage.

Services for AI developers

Build AI that is safe, defensible, and ready for regulatory scrutiny.

I was a senior AI product manager for just over a year, and I loved working with technical teams and translating priorities between product, compliance, legal, and security. These services are designed specifically for companies developing AI and agentic AI.

Developer‑focused services include:

  • Horizon scanning of upcoming regulatory needs

  • Compliance documentation: EU AI Act Articles 9-15, 27, 50, 53-55, and 73

  • Translation of your responsible AI practices into customer- and market-facing assets

  • Governance of AI agents

  • Governance of multi-agentic systems

  • Recommendations for in-product compliance support

  • Customer-facing training and communication

  • Human oversight design embedded into workflows

  • Bias evaluation (see below)

  • Preparation for EU conformity assessments

Best for:

  • Vendors selling into public sector or regulated industries

  • Start-ups and scale-ups

  • Companies wanting to focus on responsible AI as a competitive advantage

Multiple formats are available:

Technical advisory • Pairing with your tech teams, or more client-facing • Project-based

Ethics, risk reviews and impact assessments for developers (prEN 18228)

Fundamental rights impact assessments (EU AI Act Article 27), human oversight (Article 14), and claims of exemption (Article 6).

We have a unique AI ethics review methodology. It identifies and involves key stakeholders, such as end users from groups historically likely to be negatively impacted, includes them in participatory risk brainstorming, and helps you create AI impact assessments and AI ethics reviews that demonstrate market leadership.

These assessments draw on my work developing CEN/CENELEC's harmonized standard on risk management for high-risk AI systems (prEN 18228), compliance with which will confer a presumption of conformity with EU AI Act Article 9.

Even if you aren't planning to pursue certification against a harmonized standard, you can still get the most out of its best practices.

Fairness auditing for developers (prEN 18283 and prEN 18284)

Given high-profile failures of popular AI systems, fairness and non-bias can be your competitive advantage.

One of our unique services: a methodology to help you document whether your AI system is "fair" (or "responsible", or another less commonly quantified value), in collaboration with your developers. These results give you the metrics to demonstrate safety and responsibility to the market.

These audits are directly informed by my role developing CEN/CENELEC's harmonized standards on bias evaluation (prEN 18283 and prEN 18284), compliance with which will confer a presumption of conformity with EU AI Act Article 10.
