There’s no doubt that adopting artificial intelligence (AI) in your business brings huge potential, from automating repetitive tasks to improving the quality and speed of decision-making. However, with these benefits come real risks, including data breaches, ethical concerns, compliance issues, and reputational damage.
That’s why having a clear AI policy in place is more than just best practice; it’s essential.
This guide is for anyone using or developing AI, or thinking about implementing the new ISO 42001 Artificial Intelligence Management System (AIMS), who doesn’t yet have a formal AI policy in place. We’ll walk you through a step-by-step approach to help you create a practical, responsible and compliant AI policy that works for your business.
What is an AI policy, and why do you need one?
An AI policy sets out how your organisation uses AI, what’s allowed, what’s not, and who is responsible. It helps ensure AI is developed and used responsibly, ethically and in line with data protection laws and regulatory requirements.
A good AI policy should make clear:
- What the rules are (e.g. don’t use ChatGPT to process personal or confidential data)
- Why those rules matter (e.g. to protect privacy and comply with GDPR)
- Who the rules apply to (e.g. developers, business users, suppliers)
Done right, an AI policy isn’t about red tape; it’s about protecting people, building trust, and keeping your business on the right side of the law.
How to create an AI policy
We’ve broken this process down into 12 manageable steps:
Step 1: Define the purpose
Start by explaining why your organisation is using AI and why responsible use matters. Be honest about the opportunities and risks.
Your purpose statement should reflect your values and show your team that the policy is about building trust and accountability, not slowing innovation.
Step 2: Establish the scope
Be specific about what’s covered. Does the policy apply to AI tools developed in-house, third-party platforms, or both? Does it affect the tech team, or include HR and customer service too?
The more clearly you define the scope, the easier it is for teams to apply the policy correctly.
Step 3: Set ethical considerations
Outline your organisation’s guiding principles for AI. At a minimum, include commitments to:
- Fairness: ensuring AI doesn’t discriminate or cause harm
- Transparency: being clear about how AI systems work and make decisions
- Accountability: ensuring someone is always responsible for outcomes
- Respect for human rights: protecting privacy, dignity and freedom from discrimination
These values should guide every AI-related decision.
Step 4: Protect data privacy and security
AI relies on data, which means data protection must be front and centre. Your policy should explain:
- Which privacy laws apply and how your business complies (e.g. GDPR)
- How data is collected, used, stored and deleted
- When and how personal data can be used in AI tools
- What safeguards you put in place to prevent breaches
This is especially important when using tools like ChatGPT or other generative AI.
Step 5: Mitigate bias and ensure fairness
AI is only as good as the data it learns from, and biased data will create biased outcomes. Show that your organisation is committed to:
- Reviewing models regularly for bias
- Using diverse and representative data sources
- Flagging and fixing issues through clearly defined processes
- Monitoring outputs for unintended consequences
This protects individuals and your reputation.
Step 6: Ensure accountability and reliability
People need to trust the results AI delivers. Your policy should cover:
- How systems are tested before going live
- What good performance looks like
- How you monitor AI systems over time
- What to do if something goes wrong
Reliable systems lead to better decisions and fewer risks.
Step 7: Clarify user consent and transparency
People should know when AI is being used and how their data is involved. Make clear:
- When and how user consent is obtained
- What information users receive about AI decisions
- How you communicate AI-driven outcomes clearly
Transparency builds trust and helps you meet legal requirements.
Step 8: Establish human oversight and intervention
AI is a tool, not a replacement for human judgment. Set clear expectations around:
- Which decisions require human approval
- How people can override or stop AI systems
- Who is responsible for monitoring AI in action
This ensures accountability and reduces automated errors.
Step 9: Promote continuous learning and development
AI is evolving rapidly, and so should your team’s skills. Use your policy to clarify:
- Who requires AI-related training, and on what topics
- How often training should occur
- How you update teams on new tools, risks, and best practices
This empowers smarter, safer decisions across your organisation. 
Step 10: Set compliance and governance
You need clear oversight structures to ensure AI is used properly. Your policy should cover:
- Who owns AI governance in your business
- How new tools and projects are approved
- How risks are assessed
- How non-compliance is reported and managed
This ensures accountability and alignment with your values, legal obligations, and business goals.
Step 11: Plan for regular reviews and updates
Technology, laws, and business needs change, and your policy should evolve too. Explain:
- How often the policy will be reviewed (e.g. annually)
- Who is responsible for updates
- How lessons from audits or incidents inform improvements
This keeps your policy relevant and effective.
Step 12: Define implementation steps
A policy only works if people follow it. Make adoption easy by including:
- Clear internal communication
- Easy access to the policy
- Training and onboarding support
- Integration of key rules into daily workflows
This ensures the policy becomes part of your culture, not just another document.
An AI policy isn’t something you write once and forget about. It needs to evolve as your tools, people, and regulations do. When done right, it becomes more than just a set of rules; it’s a guide your team can rely on. It gives people clarity on what’s acceptable, confidence in their decision-making, and reassurance that your business is using AI in a safe, fair, and responsible way.
What is the ISO 42001 Artificial Intelligence Management System (AIMS)?
ISO 42001 is the first international standard for managing artificial intelligence (AI) responsibly. It helps organisations build trust, stay compliant, and reduce risks like bias, security threats, and ethical concerns. Whether you’re developing AI or using third-party tools, ISO 42001 gives you a clear framework to manage AI in a safe, transparent, and future-ready way.