July 17, 2025

How to create an AI policy 

There’s no doubt that adopting artificial intelligence (AI) in your business brings huge potential, from automating repetitive tasks to improving the quality and speed of decision-making. However, with these benefits also come real risks, including data breaches, ethical concerns, compliance issues, and reputational damage.

That’s why having a clear AI policy in place is more than just best practice; it’s essential.   

This guide is for anyone using or developing AI, or thinking about implementing the new ISO 42001 Artificial Intelligence Management System (AIMS), who doesn’t yet have a formal AI policy in place.  We’ll walk you through a step-by-step approach to help you create a practical, responsible and compliant AI policy that works for your business. 

What is an AI policy, and why do you need one?

An AI policy sets out how your organisation uses AI, what’s allowed, what’s not and who is responsible.  It helps ensure AI is developed and used responsibly, ethically and in line with data protection laws and regulatory requirements.  

A good AI policy should make clear:   

  • What the rules are (e.g. don’t use ChatGPT to process personal or confidential data)
  • Why those rules matter (e.g. to protect privacy and comply with GDPR)
  • Who the rules apply to (e.g. developers, business users, suppliers)

Done right, an AI policy isn’t about red tape; it’s about protecting people, building trust, and keeping your business on the right side of the law. 

How to create an AI policy

We’ve broken this process down into 12 manageable steps:

Step 1: Define the purpose 

Start by explaining why your organisation is using AI and why responsible use matters.  Be honest about the opportunities and risks. 

Your purpose statement should reflect your values and show your team that the policy is about building trust and accountability, not slowing innovation.  

Step 2: Establish the scope 

Be specific about what’s covered.  Does the policy apply to AI tools developed in-house, third-party platforms, or both? Does it affect the tech team, or include HR and customer service too?  

The more clearly you define the scope, the easier it is for teams to apply the policy correctly. 

Step 3: Set ethical considerations

Outline your organisation’s guiding principles for AI.  At a minimum, include commitments to:  

  • Fairness: ensuring AI doesn’t discriminate or cause harm
  • Transparency: being clear about how AI systems work and make decisions
  • Accountability: ensuring someone is always responsible for outcomes
  • Respect for human rights: protecting privacy, dignity and freedom from discrimination

These values should guide every AI-related decision.     

Step 4: Protect data privacy and security 

AI relies on data, which means data protection must be front and centre.  Your policy should explain: 

  • Which privacy laws apply and how your business complies (e.g. GDPR)
  • How data is collected, used, stored and deleted
  • When and how personal data can be used in AI tools
  • What safeguards you put in place to prevent breaches

This is especially important when using tools like ChatGPT or other generative AI.   

Step 5: Ensure fairness and mitigate bias

AI is only as good as the data it learns from, and biased data will create biased outcomes.  Show that your organisation is committed to: 

  • Reviewing models regularly for bias
  • Using diverse and representative data sources
  • Flagging and fixing issues through clearly defined processes
  • Monitoring outputs for unintended consequences

This protects individuals and your reputation.   

Step 6: Ensure accountability and reliability 

People need to trust the results AI delivers.  Your policy should cover:  

  • How systems are tested before going live 
  • What good performance looks like 
  • How you monitor AI systems over time 
  • What to do if something goes wrong 

Reliable systems lead to better decisions and fewer risks.   

Step 7: Clarify user consent and transparency  

People should know when AI is being used and how their data is involved.  Make clear: 

  • When and how user consent is obtained 
  • What information users receive about AI decisions   
  • How you communicate AI-driven outcomes clearly 

Transparency builds trust and helps you meet legal requirements. 

Step 8: Establish human oversight and intervention  

AI is a tool, not a replacement for human judgment.  Set clear expectations around:  

  • Which decisions require human approval  
  • How people can override or stop AI systems  
  • Who is responsible for monitoring AI in action

This ensures accountability and reduces automated errors.   

Step 9: Promote continuous learning and development  

AI is evolving rapidly, and so should your team’s skills. Use your policy to clarify:  

  • Who requires AI-related training, and on what topics 
  • How often training should occur   
  • How you update teams on new tools, risks, and best practices

This empowers smarter, safer decisions across your organisation. 

Step 10: Set compliance and governance  

You need clear oversight structures to ensure AI is used properly.  Your policy should cover:

  • Who owns AI governance in your business  
  • How new tools and projects are approved  
  • How risks are assessed  
  • How non-compliance is reported and managed  

This ensures accountability and alignment with your values, legal obligations, and business goals.  

Step 11: Plan for regular reviews and updates  

Technology, laws, and business needs change, and your policy should evolve too.  Explain: 

  • How often the policy will be reviewed (e.g. annually) 
  • Who is responsible for updates 
  • How lessons from audits or incidents inform improvements

This keeps your policy relevant and effective.

Step 12: Define implementation steps  

A policy only works if people follow it. Make adoption easy by including:

  • Clear internal communication  
  • Easy access to the policy  
  • Training and onboarding support  
  • Integration of key rules into daily workflows 

This ensures the policy becomes part of your culture, not just another document.  

An AI policy isn’t something you write once and forget about. It needs to evolve as your tools, people, and regulations do. When done right, it becomes more than just a set of rules; it’s a guide your team can rely on. It gives people clarity on what’s OK, confidence in their decision-making, and reassurance that your business is using AI in a safe, fair, and responsible way.

What is ISO 42001 Artificial Intelligence Management System (AIMS)?

ISO 42001 is the first international standard for managing artificial intelligence (AI) responsibly. It helps organisations build trust, stay compliant, and reduce risks like bias, security threats, and ethical concerns. Whether you’re developing AI or using third-party tools, ISO 42001 gives you a clear framework to manage AI in a safe, transparent, and future-ready way. 


About the author

Jodie Purser