May 4, 2026

How to carry out an ISO 42001 AI risk assessment 

Your business is likely one of a growing number using artificial intelligence (AI) in its operations. This introduces a new category of risk. Unlike traditional IT systems, AI can produce unpredictable and constantly adapting outputs, introduce bias or be misused in ways that can be difficult to control. 

That’s why it’s so essential to have a solid structure in place. ISO 42001 provides a framework for managing those risks. In this guide, our ISO consultants here at ISO QSL explain how to carry out an ISO 42001 AI risk assessment to help your AI systems flourish while remaining audit-ready.  

What an AI risk assessment means under ISO 42001 

An AI risk assessment focuses on how your AI systems could cause harm, and what you do to control that risk. Importantly, this goes beyond traditional IT risk. You need to consider issues such as:  

  • Bias 
  • Incorrect outputs or AI ‘hallucinations’ 
  • AI system ‘drift’ 
  • Lack of transparency 
  • Data privacy risks 
  • AI misuse  

In most cases, you can narrow these risks down to how you design, deploy and train your AI, as well as how you teach your users to interact with it. 

You should carry out an AI risk assessment whenever your use of AI changes or expands. 

This includes before deploying new AI systems, when updating models or datasets, during ISO 42001 implementation, and as part of your ongoing monitoring process. 

As with any risk assessment, you can never eliminate all risk. Rather, your goal here is to understand the risks, control them and demonstrate that your approach is reasonable, consistent and effective.  

How to carry out an ISO 42001 AI risk assessment 

Keep your approach simple. Focus only on real risks linked to how your AI operates. In fact, complex risk frameworks are more likely to fail, because your teams won’t use them consistently. Instead, build something they can understand and apply in their daily work. You can refine your approach over time. For now, the primary objective is usability. Here are some suggested steps to take:   

Define the scope of your AI system 

Start by defining what you’re assessing. Document what your AI does, where you use it, who interacts with it, and what decisions it influences. 

This creates the foundation for identifying (and controlling) all your relevant risks. If your scope is unclear, your risk assessment will be incomplete or inefficient.  

Identify AI-specific risks 

Next, identify the risks linked to that system. Again, focus on practical categories such as bias and discrimination, incorrect or unsafe outputs, lack of transparency, data privacy concerns, security vulnerabilities, misuse and over-reliance. These risks are consistent across almost all AI management systems. However, define the specifics for your own AI systems and tools.   

Assess likelihood and impact 

Once you’ve identified the risks, assess their significance. Use a standard scoring system to evaluate how likely each risk is and, separately, its impact if it does occur. A 1-5 scale or a red-amber-green (RAG) rating is usually sufficient.  
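As an illustration, the likelihood-times-impact approach can be sketched in a few lines of code. The 1-5 scales and the RAG thresholds below are assumptions for demonstration, not values prescribed by ISO 42001; set your own bands in your risk scoring criteria.

```python
# Minimal risk scoring sketch: likelihood and impact are each rated 1-5,
# and the overall score is mapped to a red-amber-green (RAG) rating.
# The thresholds below are illustrative assumptions, not ISO 42001 requirements.

def risk_score(likelihood: int, impact: int) -> int:
    """Multiply likelihood (1-5) by impact (1-5) to get a score from 1 to 25."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be between 1 and 5")
    return likelihood * impact

def rag_rating(score: int) -> str:
    """Map a 1-25 score to an illustrative RAG band."""
    if score >= 15:
        return "Red"
    if score >= 8:
        return "Amber"
    return "Green"

# Example: a fairly likely (4) bias risk with moderate impact (3)
print(rag_rating(risk_score(4, 3)))  # Amber
```

Whatever scoring scheme you choose, the key is that everyone applies the same thresholds, so scores are comparable across systems and over time.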

Define your controls and mitigation measures 

With the risks assessed, you then need to define how you’ll control each risk. This could include the following (and more):  

  • Human oversight 
  • Testing and validation processes 
  • Data quality checks 
  • Access controls 
  • Output monitoring

Crucially, ensure you implement realistic, practical controls. If your staff don’t apply them in their day-to-day operations, your controls are effectively doing nothing. As such, this won’t meet ISO 42001’s requirements. 

Record the risks in a risk register 

Document all the risks you identify in a structured risk register. 

Include a clear description of the risk, its cause and impact, its score, the controls in place, and the person responsible for managing it. This creates a consistent, auditable record of how your organisation manages AI risk.  

Periodic reviews and updates 

The field of AI, how your organisation uses it, and your business goals will change over time, sometimes rapidly. Review your risk assessment whenever you update your AI systems, introduce new data or experience an AI-related incident. An outdated risk assessment will result in ineffective control measures and nonconformities being identified during audits.  

What documents should an ISO 42001 risk assessment include? 

Here’s what your ISO 42001 AI risk assessment should include: a central risk register (the primary document), plus minimal supporting documentation:  

  • A risk assessment procedure 
  • Risk scoring criteria 
  • Supporting controls documentation 
  • A version history/change log  

Once again, all your documents should be clear, consistent and easy to use, so your teams actually apply them in the workplace.  

Risk register 

Format your risk register as a structured, working table or spreadsheet. Use clear, concise entries with no long paragraphs, and design it for regular updates. It should include the following:  

  • AI system or process 
  • Risk description, cause, and impact 
  • Likelihood and impact scores 
  • Overall risk rating 
  • Controls in place 
  • Risk owner 
  • Review date 
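As a sketch, one way to hold these fields in a structured, update-friendly form is shown below. The field names and example values are assumptions for illustration; a spreadsheet with the same columns works just as well.

```python
# Illustrative risk register entry covering the fields listed above.
# Field names and example values are assumptions for demonstration only.
from dataclasses import dataclass

@dataclass
class RiskEntry:
    ai_system: str
    description: str
    cause: str
    impact_description: str
    likelihood: int      # 1-5
    impact: int          # 1-5
    controls: str
    owner: str
    review_date: str     # e.g. "2026-11-01"

    @property
    def rating(self) -> int:
        """Overall risk rating: likelihood multiplied by impact."""
        return self.likelihood * self.impact

entry = RiskEntry(
    ai_system="Customer support chatbot",
    description="Incorrect or unsafe responses to customers",
    cause="Model hallucination on out-of-scope queries",
    impact_description="Misleading advice, reputational damage",
    likelihood=3,
    impact=4,
    controls="Human review of escalations; output monitoring",
    owner="Head of Customer Operations",
    review_date="2026-11-01",
)
print(entry.rating)  # 12
```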

Risk assessment procedure 

Produce a short, written document explaining your overall risk assessment procedure. It should be organised in a step-by-step format and aligned with your actual business processes. For most organisations, one to two pages should be enough detail.  

Risk scoring criteria 

This can be a simple table or matrix. You could develop it as a standalone document or embed it within the risk assessment procedure (above). It should include the following:  

  • Defined likelihood levels (e.g. a 1-5 scale) 
  • Defined impact levels (e.g. a 1-5 scale or RAG rating) 
  • Guidance on applying these scores consistently 

Supporting controls documentation 

Supporting controls documentation shows how your controls are applied in practice: your risk register lists each control, and this documentation provides the detailed proof. It may include simple procedures, checklists, records or system settings. Ensure your risk register clearly links each control to the relevant supporting documentation. Use existing documents where possible and ensure they reflect what your team actually does; there’s no need to write entirely new documents if many of your controls are already defined elsewhere.  

Change log 

Your change log contains the version history of your risk register. It can be a simple table within your risk register or a separate log. The important thing is that your team maintains it alongside the live risk register. It should include the date of any updates, a summary of any changes, and the person responsible.  
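Kept alongside the register, the change log can be as simple as an append-only list of updates. The structure below is an assumption for illustration; a dated table at the foot of your spreadsheet achieves the same thing.

```python
# Minimal change log sketch: an append-only record of updates to the live
# risk register. Fields mirror those suggested above; values are illustrative.
change_log = []

def log_change(date: str, summary: str, responsible: str) -> None:
    """Record one update to the live risk register."""
    change_log.append({"date": date, "summary": summary, "responsible": responsible})

log_change("2026-05-04", "Added chatbot hallucination risk", "J. Smith")
log_change("2026-06-10", "Raised impact score after incident review", "J. Smith")
print(len(change_log))  # 2
```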

What ISO 42001 auditors will look for 

Auditors will focus on whether your approach is clear, consistent, and applied in practice. 

They will expect to see that you identify and assess any relevant AI risks, implement appropriate controls, assign clear ownership, and review and maintain your process over time.  

Need help with your ISO 42001 implementation? 

Carrying out an AI risk assessment is a key part of building an effective AI management system (AIMS). Getting it right makes the rest of your ISO 42001 implementation much more straightforward. At ISO QSL, we help businesses like yours develop ISO-compliant AIMS frameworks. Get in touch with our expert ISO consultants today to build a practical ISO 42001 approach for your organisation. 