
Risk and impact

Structured evaluations of AI systems, vendors, risks, impacts, and your own policies, so your use of AI is ethical, aligned with your values, and ready to meet the documentation requirements of the EU AI Act.

Ethics reviews, risk reviews, and AI impact assessments

Assessing AI-specific risks is challenging, both for organizations with no set procedures and for those with well-developed, privacy-focused risk management programs, which often lack the flexibility to address the social and reputational risks of AI systems.

 

We help you conduct risk assessments of the AI systems you want to use, are already using, or have discovered in use after the fact. These assessments can be folded into your data protection impact assessments (DPIAs), dovetailing GDPR and EU AI Act requirements.

We bring a distinctive focus on surfacing and addressing employee concerns, resistance, and potential for bias and under-performance, strengthening the chances that your use of AI actually delivers the impact you want.

 

For AI providers, we offer a unique AI ethics review methodology. It identifies key stakeholders, such as end users who have historically been negatively impacted, includes them in participatory risk brainstorming, and helps you create AI impact assessments and AI ethics reviews that demonstrate market leadership.

Ethical AI policy creation and review

Starting from scratch, or unsure where to begin? We help you build lean policies and governance frameworks tailored to your role (user or provider), your AI systems, and your compliance obligations, following ISO 42001 best practices.

 

Alternatively, we can review and audit the procedures and documentation you already have, benchmarking your policies against the quality standards of your choice, such as the NIST AI RMF or the Singapore Model AI Governance Framework.

EU AI Act readiness and documentation

Even organizations that only use MS Copilot have obligations under the EU AI Act.

Many consultants promise "AI Act compliance", but compliance cannot be promised. The AI Act operates on a context-based risk classification, specific to how each AI system is used or developed and the risks that could arise. If, for example, you are a public organization using MS Copilot to create citizen-facing information material, you could face the obligations of a high-risk AI system deployer.

 

What we promise instead is to qualify your readiness and build your compliance capacity for the future. We classify the risk of your AI systems under the AI Act, assess the corresponding obligations and whether you can save time by merging them with the GDPR's requirements, and quality-control your compliance practices and documentation. We love working with data protection officers!
