Your business is likely one of a growing number using artificial intelligence (AI) in its operations. This introduces a new category of risk. Unlike traditional IT systems, AI can produce unpredictable, constantly adapting outputs, introduce bias, or be misused in ways that are difficult to control.
That’s why it’s so essential to have a solid structure in place. ISO 42001 provides a framework for managing those risks. In this guide, our ISO consultants here at ISO QSL explain how to carry out an ISO 42001 AI risk assessment to help your AI systems flourish while remaining audit-ready.
What an AI risk assessment means under ISO 42001
An AI risk assessment focuses on how your AI systems could cause harm, and what you do to control that risk. Importantly, this goes beyond traditional IT risk. You need to consider issues such as:
- Bias
- Incorrect outputs or AI ‘hallucinations’
- AI system ‘drift’
- Lack of transparency
- Data privacy risks
- AI misuse
In most cases, these risks come down to how you design, deploy and train your AI, as well as how you teach your users to interact with it.
You should carry out an AI risk assessment whenever your use of AI changes or expands.
This includes before deploying new AI systems, when updating models or datasets, during ISO 42001 implementation, and as part of your ongoing monitoring process.
As with any risk assessment, you can never eliminate all risk. Rather, your goal here is to understand the risks, control them and demonstrate that your approach is reasonable, consistent and effective.
How to carry out an ISO 42001 AI risk assessment
Keep your approach simple. Focus only on real risks linked to how your AI operates. In fact, complex risk frameworks are more likely to fail, because your teams won’t use them consistently. Instead, build something they can understand and apply in their daily work. You can refine your approach over time. For now, the primary objective is usability. Here are some suggested steps to take:
Define the scope of your AI system
Start by defining what you’re assessing. Document what your AI does, where you use it, who interacts with it, and what decisions it influences.
This creates the foundation for identifying (and controlling) all your relevant risks. If your scope is unclear, your risk assessment will be incomplete or inefficient.
Identify AI-specific risks
Next, identify the risks linked to that system. Again, focus on practical categories such as bias and discrimination, incorrect or unsafe outputs, lack of transparency, data privacy concerns, security vulnerabilities, misuse and over-reliance. These risks are consistent across almost all AI management systems. However, define the specifics for your own AI systems and tools.
Assess likelihood and impact
Once you’ve identified the risks, assess their significance. Use a standard scoring system to evaluate how likely each risk is and, separately, its impact if it does occur. A 1-5 scale or a red-amber-green (RAG) rating is usually sufficient.
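As a minimal sketch, the scoring step above could combine a 1-5 likelihood score and a 1-5 impact score into a RAG rating. The thresholds below are illustrative assumptions your organisation would set in its own scoring criteria, not ISO 42001 requirements:

```python
# Illustrative only: combine 1-5 likelihood and impact scores into a
# red-amber-green (RAG) rating. The thresholds are example values your
# organisation would define in its own scoring criteria.

def rag_rating(likelihood: int, impact: int) -> str:
    """Multiply likelihood (1-5) by impact (1-5) and map the product to RAG."""
    for score in (likelihood, impact):
        if not 1 <= score <= 5:
            raise ValueError("scores must be between 1 and 5")
    product = likelihood * impact
    if product >= 15:   # e.g. likely and severe
        return "red"
    if product >= 6:    # moderate combinations
        return "amber"
    return "green"      # unlikely and low impact

# Example: a rare but high-impact risk (2 x 5 = 10)
print(rag_rating(likelihood=2, impact=5))  # amber
```

Whatever scheme you choose, write the thresholds down and apply them the same way across every risk, so two assessors reach the same rating.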
Define your controls and mitigation measures
With the risks assessed, you then need to define how you’ll control each risk. This could include the following (and more):
- Human oversight
- Testing and validation processes
- Data quality checks
- Access controls
- Output monitoring
Crucially, ensure you implement realistic, practical controls. If your staff don’t apply them in their day-to-day operations, your controls are effectively doing nothing. As such, this won’t meet ISO 42001’s requirements.
Record the risks in a risk register
Document all the risks you identify in a structured risk register.
Include a clear description of the risk, its cause and impact, its score, the controls in place, and the person responsible for managing it. This creates a consistent, auditable record of how your organisation manages AI risk.
Periodic reviews and updates
The field of AI, how your organisation uses it, and your business goals will change over time, sometimes rapidly. Review your risk assessment whenever you update your AI systems, introduce new data or experience an AI-related incident. An outdated risk assessment leads to ineffective control measures and to nonconformities being identified during audit.
What documents should an ISO 42001 risk assessment include?
Here’s what your ISO 42001 AI risk assessment should include:
- A central risk register (primary document)
- Minimal supporting documentation:
  - Risk assessment procedure
  - Risk scoring criteria
  - Supporting controls documentation
  - Version history/change log
Once again, all your documents should be clear, consistent and easy to use, so your teams actually apply them in the workplace.
Risk register
Format your risk register as a structured, working table or spreadsheet. Use clear, concise entries, with no long paragraphs, and design it to be updated regularly. It should include the following:
- AI system or process
- Risk description, cause, and impact
- Likelihood and impact scores
- Overall risk rating
- Controls in place
- Risk owner
- Review date
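The columns above can be sketched as a simple structure written to a spreadsheet-friendly CSV. The field names mirror the bullet list, and the example entry (a hypothetical customer support chatbot) is purely illustrative:

```python
# Illustrative sketch: one row of an AI risk register, written to CSV so it
# can be maintained as a working spreadsheet. Field names follow the column
# list above; the example entry is hypothetical.
import csv
from dataclasses import asdict, dataclass, fields

@dataclass
class RiskEntry:
    ai_system: str
    risk_description: str
    cause: str
    impact: str
    likelihood_score: int   # 1-5
    impact_score: int       # 1-5
    overall_rating: str     # e.g. a RAG value
    controls: str
    risk_owner: str
    review_date: str        # e.g. "2025-06-30"

entry = RiskEntry(
    ai_system="Customer support chatbot",
    risk_description="Incorrect answers given to customers",
    cause="Model hallucination on out-of-scope queries",
    impact="Misinformed customers; reputational damage",
    likelihood_score=3,
    impact_score=4,
    overall_rating="amber",
    controls="Human review of escalations; output monitoring",
    risk_owner="Head of Customer Service",
    review_date="2025-06-30",
)

with open("risk_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(RiskEntry)])
    writer.writeheader()
    writer.writerow(asdict(entry))
```

A spreadsheet works just as well; the point is one row per risk, one column per field, with nothing buried in free text.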
Risk assessment procedure
Produce a short, written document explaining your overall risk assessment procedure. It should be organised in a step-by-step format and aligned with your actual business processes. For most organisations, one to two pages should be enough detail.
Risk scoring criteria
This can be a simple table or matrix. You could develop it as a standalone document or embed it within the risk assessment procedure (above). It should include the following:
- Defined likelihood levels (e.g. a 1-5 scale)
- Defined impact levels (e.g. a 1-5 scale or RAG rating)
- Guidance on applying these scores consistently
Supporting controls documentation
Supporting controls documentation shows how your controls are applied in practice. Your risk register lists each control; this documentation is the proof and contains more detail. It may include simple procedures, checklists, records or system settings. Ensure your risk register clearly links each control to the relevant supporting documentation. Use existing documents where possible and ensure they reflect what your team actually does. There’s no need to write entirely new documents if many of your controls are already defined elsewhere.
Change log
Your change log contains the version history of your risk register. It can be a simple table within your risk register or a separate log. The important thing is that your team maintains it alongside the live risk register. It should include the date of any updates, a summary of any changes, and the person responsible.
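One way to keep the log in step with the live register is to append a dated entry every time the register changes. The helper, file name and example entries below are illustrative assumptions, not a prescribed format:

```python
# Illustrative sketch: append a dated row to a change log each time the risk
# register is updated. The file name and example entries are assumptions.
import csv
from datetime import date

def log_change(summary: str, responsible: str,
               path: str = "risk_register_changelog.csv") -> None:
    """Append one change-log row: date of update, summary, person responsible."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([date.today().isoformat(), summary, responsible])

log_change("Added misuse risk for customer chatbot", "J. Smith")
log_change("Raised impact score after incident review", "J. Smith")
```

Whether you use a script, a spreadsheet tab or a separate table, the discipline is the same: no register update without a matching log entry.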
What ISO 42001 auditors will look for
Auditors will focus on whether your approach is clear, consistent, and applied in practice.
They will expect to see that you identify and assess all relevant AI risks, implement appropriate controls, assign clear ownership, and review and maintain your process over time.
Need help with your ISO 42001 implementation?
Carrying out an AI risk assessment is a key part of building an effective AI management system (AIMS). Getting it right makes the rest of your ISO 42001 implementation much more straightforward. At ISO QSL, we help businesses like yours develop ISO-compliant AIMS frameworks. Get in touch with our expert ISO consultants today to build a practical ISO 42001 approach for your organisation.