The AI You're Already Using: Why Every Organisation Needs ISO 42001 Before It's Too Late
- Agnes Sopel

- Jan 6

Your customer service team uses ChatGPT to draft email responses. Your marketing manager relies on Microsoft Copilot to write campaign content. Your HR department screens CVs through an applicant tracking system powered by AI algorithms. Your finance team automates invoice processing with AI-enabled software.
You haven't built a neural network. You haven't deployed machine learning models. You haven't hired data scientists. But make no mistake, your organisation is using artificial intelligence, and under emerging regulations worldwide, you're fully accountable for how that AI behaves.
Derek Mobley discovered this accountability the hard way. Over seven years, he applied for more than one hundred jobs through Workday's AI-powered applicant screening system. Despite his qualifications, he was rejected from nearly every position without an interview. In February 2024, he filed a class action lawsuit alleging that Workday's AI discriminated against him based on race, age, and disability.
In May 2025, a federal court granted conditional certification for the Age Discrimination in Employment Act claims, potentially covering hundreds of millions of applicants. The case sent shockwaves through every organisation using AI hiring tools, which, according to research, includes 492 of the Fortune 500 companies in 2024.
The court made a statement that should terrify every business leader delegating AI decisions to technology vendors: algorithmic decision-making receives the same legal scrutiny as human decision-making.
Drawing an artificial distinction between software and human decision-makers would potentially gut anti-discrimination laws in the modern era. If your AI discriminates, you're liable. If your vendor's AI discriminates, you're liable. The technology doesn't shield you from responsibility; it extends your liability into territories you never anticipated.
The University of Washington Information School published research in 2024 revealing the stark reality of AI bias. Researchers tested AI resume screening across nine occupations using five hundred applications. The results were devastating: the AI favoured white-associated names in 85.1% of cases and female-associated names in only 11.1% of cases. In direct comparisons, Black male candidates were disadvantaged relative to white male counterparts in up to 100% of cases.
This isn't a theoretical concern; it's measured discrimination occurring at scale in systems processing millions of job applications annually.
But employment discrimination represents just one category of AI risk facing every organisation today. The European Union's AI Act, which entered into force in August 2024 with staged implementation through August 2026, creates comprehensive regulatory obligations for any organisation selling AI systems or AI-enabled products in the EU market.
UK firms trading with Europe face penalties up to 7% of global turnover or €35 million, whichever is higher, for non-compliance. The regulation doesn't distinguish between companies building advanced AI models and companies simply using AI tools in their operations. If AI touches your business, the regulation applies.
The uncomfortable truth is this: AI is no longer the exclusive domain of technology giants with machine learning departments. It's embedded in the software-as-a-service tools that small businesses use daily.
It's powering the chatbots answering customer questions. It's screening the CVs for your open positions. It's optimising the marketing campaigns your team runs. And under evolving regulations worldwide, your organisation bears full responsibility for how these AI systems behave, regardless of whether you built them, bought them, or barely understand how they work.
The Hidden AI in Every Organisation
The revolution occurred so quietly that most organisations didn't notice they'd become AI users. There was no implementation project, no board approval, no risk assessment. Employees simply started using tools that happened to be powered by artificial intelligence, and suddenly the organisation's operations depended on algorithmic decision-making without anyone establishing governance, understanding risks, or ensuring compliance.
Consider the scale of adoption. By September 2024, over one million paying business users across ChatGPT's Enterprise, Team, and Education tiers were using generative AI for business tasks.
ChatGPT Enterprise alone grew from 150,000 users in January 2024 to over 600,000 by April 2024. Google reported that over 30% of Google Workspace customers were using Gemini by September 2024, with millions of users across enterprises, small and medium-sized businesses, and educational institutions.
Microsoft 365 Copilot had been adopted by over 40% of Fortune 100 companies by early 2024, with reported efficiency gains including 20% effectiveness improvements and average time savings of ten hours per month on routine tasks.
These aren't specialised AI platforms requiring technical expertise, they're productivity tools embedded in the software employees already use.
Microsoft 365 Copilot sits inside Word, Excel, Outlook, and Teams. Google Gemini integrates with Gmail, Sheets, and Google Docs. ChatGPT operates as a standalone interface or through API integrations.
The barrier to AI adoption collapsed to zero. Employees don't need permission, training, or technical knowledge. They simply start using AI the way they'd use any other software feature.
The business value drives rapid expansion. Quilter, a financial services firm, identified Microsoft 365 Copilot as making their teams 20% more effective, with tasks that previously required days now completed in hours.
Their 174 investment managers save an average of 45 minutes per meeting through AI-automated meeting notes. Raiffeisen Bank International automated repetitive tasks using Azure OpenAI Service, quickly summarising legal, regulatory, and banking documents.
Ramp built custom AI tools that saved 30,000 hours of manual work, processing 400,000 invoices and 5 million receipts monthly with 90% accuracy. Brisbane Catholic Education reported educators saving an average of 9.3 hours per week using AI tools.
Small and medium-sized businesses are accelerating adoption even faster than large enterprises. According to SMB Group research, AI ranked number one in the list of top ten SMB technology trends for 2025.
Of SMBs surveyed, 35% are slightly accelerating and 27% are significantly accelerating their technology investments due to AI. SMBs see AI as a path to better data analysis, stronger decision-making, improved access to information, summarising content, streamlining tasks, identifying outliers, and categorising data.
The adoption curve for small businesses isn't lagging behind enterprises, it's matching or exceeding it because AI democratises capabilities previously requiring substantial technical investment.
The market data confirms this explosion. Companies spent $37 billion on generative AI in 2025, up from $11.5 billion in 2024, a 3.2x year-over-year increase. This spending represents more than 6% of the entire software market, achieved within just three years of ChatGPT's launch.
There are now at least ten products generating over $1 billion in annual recurring revenue and fifty products generating over $100 million in ARR. The AI market size reached $638.23 billion in 2024 and is projected to reach $3,680.47 billion by 2034, expanding at a 19.2% compound annual growth rate.
Critically, 76% of AI use cases in 2025 are purchased rather than built internally, up from 53% in 2024. Despite continued strong investments in internal builds, ready-made AI solutions are reaching production more quickly and demonstrating immediate value.
This shift means most organisations using AI aren't developing it, they're buying it from vendors, creating complex accountability questions when AI systems produce discriminatory, inaccurate, or harmful outcomes.
But here's what makes this adoption pattern so dangerous: organisations implementing AI tools typically focus entirely on business value, efficiency gains, cost savings, productivity improvements, without conducting any assessment of AI-specific risks. There's no evaluation of training data bias. No testing for discriminatory outcomes. No documentation of how AI makes decisions. No monitoring for accuracy degradation. No governance structure ensuring responsible use. Companies treat AI tools like any other software purchase, applying traditional vendor management and IT oversight rather than recognising that AI introduces fundamentally different risk categories requiring specialised governance.
The European Commission's voluntary Code of Practice for General Purpose AI, released in 2025 to support EU AI Act compliance, reveals the governance gap. The Code addresses transparency requirements, copyright compliance, safety measures, and risk mitigation for AI models.
Most organisations using AI tools have no idea whether their vendors comply with these requirements. They haven't asked vendors about training data sources. They haven't reviewed technical documentation. They haven't assessed whether AI systems could produce discriminatory outcomes. They simply assumed that commercial software must be compliant with relevant regulations.
This assumption is catastrophically wrong. The Mobley v. Workday case established that organisations cannot delegate AI accountability to vendors. When AI systems discriminate, both the vendor and the organisation using the system face liability.
The court explicitly rejected arguments that using a vendor's AI tool shields organisations from discrimination claims. The organisation made the decision to deploy the AI system. The organisation used AI outputs to make decisions affecting people. The organisation bears responsibility for ensuring AI behaves lawfully, regardless of who built it.
The scope of hidden AI extends beyond obvious tools like ChatGPT and Copilot. Applicant tracking systems used by 99% of medium and large companies now incorporate AI resume screening. Customer relationship management platforms use AI to prioritise leads and predict conversion likelihood.
Marketing automation tools employ AI to optimise send times, personalise content, and segment audiences. Financial software uses AI to detect anomalies, flag potential fraud, and automate reconciliation. Even basic functions like email spam filtering, spell-checking, and autocomplete rely on AI algorithms making decisions about what content users see and how their messages are interpreted.
The cumulative effect is that virtually every organisation, regardless of size, sector, or technical sophistication, now depends on AI systems making decisions that affect customers, employees, suppliers, and other stakeholders.
These aren't hypothetical future scenarios. This is current operational reality. And under regulations taking effect worldwide, organisations face comprehensive accountability for AI behaviour without having established any governance framework ensuring responsible, lawful, ethical use.
The Regulatory Tsunami: Why 2026 Changes Everything
August 2, 2026 represents a compliance cliff that most organisations haven't recognised. On this date, the European Union's AI Act requirements for high-risk AI systems become fully enforceable, with penalties reaching up to 7% of global turnover or €35 million, whichever is higher.
For UK companies selling to EU markets or using AI systems processing EU data, this isn't foreign regulation they can ignore, it's binding obligation with severe financial consequences for non-compliance.
The EU AI Act classifies AI systems into four risk categories, with escalating regulatory requirements. Unacceptable risk AI, including social scoring systems and manipulative AI, was prohibited from February 2, 2025.
High-risk AI systems, encompassing applications in employment, education, law enforcement, critical infrastructure, and other sensitive domains, face comprehensive compliance requirements including conformity assessments, logging, human oversight, post-market monitoring, and registration in the EU's high-risk AI database. These requirements become enforceable from August 2, 2026.
General Purpose AI models, which include systems like ChatGPT, Claude, Gemini, and similar foundation models capable of performing a wide range of tasks, face specific obligations that took effect on August 2, 2025.
Providers must publish technical documentation, training-data summaries, model cards, and for models with systemic risk (those trained using more than 10^25 floating-point operations), systemic-risk-mitigation plans. As of February 2025, only fifteen models globally surpass the computational threshold qualifying as systemic risk, including GPT-4o, Gemini 1.0 Ultra, and similar advanced models. However, the transparency requirements apply to all general purpose AI models, not just those with systemic risk.
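The systemic-risk classification described above ultimately reduces to a numeric test against the Act's compute threshold. A minimal sketch of that check, assuming the 10^25 FLOP presumption; the model names and compute figures below are illustrative placeholders, not official vendor disclosures:

```python
# Sketch: classify AI models against the EU AI Act's systemic-risk
# compute presumption (training compute above 10^25 FLOPs).
# All figures below are invented placeholders for illustration.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def has_systemic_risk(training_flops: float) -> bool:
    """True if training compute exceeds the systemic-risk presumption threshold."""
    return training_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS

examples = {
    "hypothetical_frontier_model": 4e25,  # frontier-scale training run
    "hypothetical_mid_size_model": 3e24,  # an order of magnitude below
}
for name, flops in examples.items():
    print(name, has_systemic_risk(flops))
```

Note the asymmetry the article describes: only models above the threshold trigger systemic-risk-mitigation plans, but the transparency obligations apply to general purpose models on either side of it.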
The compliance timeline creates immediate obligations even for organisations not directly providing AI systems. Companies deploying high-risk AI must establish quality-management systems, human-oversight structures, and post-market monitoring by August 2, 2026.
Providers of generative AI systems released before August 2, 2026 have until February 2, 2027 to retrofit systems to meet transparency obligations, including marking artificially generated or manipulated content using watermarks, metadata, or digital tags. The grace period provides limited relief but doesn't eliminate compliance obligations.
UK organisations trading with the EU cannot escape these requirements through geographical technicalities. The AI Act applies to any organisation selling AI systems in the EU market or using AI systems that affect people located in the EU. The extraterritorial reach mirrors GDPR's scope, meaning UK businesses face EU AI Act compliance obligations when their operations touch European markets or European individuals, regardless of where the company is based or where AI systems are operated.
The enforcement mechanism combines regulatory oversight with substantial penalties. Member States were required to designate national competent authorities responsible for supervising AI Act compliance by August 2, 2025. These authorities possess investigatory powers, can conduct inspections, require documentation, and impose administrative fines. The European AI Office, officially operational from August 2, 2025, coordinates enforcement across member states and provides guidance on compliance interpretation. The AI Board, consisting of Member State representatives, advises the Commission and facilitates consistent application across jurisdictions.
The penalty structure ensures financial consequences for non-compliance are severe enough to command board-level attention. Up to €35 million or 7% of global annual turnover for prohibited AI practices. Up to €15 million or 3% of global annual turnover for infringements of most AI Act obligations. Up to €7.5 million or 1% for supplying incorrect, incomplete, or misleading information to authorities.
These aren't theoretical maximums, they're regulatory tools designed to make non-compliance economically catastrophic.
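The "whichever is higher" structure in those tiers is worth modelling: for any firm with meaningful global turnover, the percentage cap dominates the fixed cap. A minimal sketch using the maximums quoted above (illustrative exposure arithmetic only, not legal advice; actual fines are set case by case by national authorities):

```python
# Sketch of the EU AI Act maximum-penalty tiers described in the text.
# Each tier is (fixed cap in EUR, share of global annual turnover);
# the applicable maximum is whichever is higher.

TIERS = {
    "prohibited_practice":    (35_000_000, 0.07),  # €35M or 7%
    "general_obligation":     (15_000_000, 0.03),  # €15M or 3%
    "misleading_information": (7_500_000, 0.01),   # €7.5M or 1%
}

def max_exposure(tier: str, global_turnover_eur: float) -> float:
    """Return the maximum fine for a tier: fixed cap or turnover share, whichever is higher."""
    fixed_cap, pct = TIERS[tier]
    return max(fixed_cap, pct * global_turnover_eur)

# A firm with €2bn global turnover: 7% (€140M) far exceeds the €35M floor.
print(max_exposure("prohibited_practice", 2_000_000_000))  # 140000000.0
```

The crossover point for the top tier is €500 million in turnover (€35M / 0.07); above that, exposure scales linearly with revenue, which is why the article frames these as board-level numbers.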
The November 2025 "Digital Omnibus on AI" proposal seeks to streamline implementation and ease compliance burdens ahead of the August 2026 full application. It would delay some high-risk system requirements, making their application conditional on the readiness of harmonised standards, common specifications, or guidelines. A new long-stop date means high-risk system rules would apply at the latest from December 2, 2027 for Annex III systems and August 2, 2028 for systems embedded into regulated products, even if standards lag.
However, this extension only applies where adequate compliance support measures don't exist; the European Commission could bring requirements forward if standards are ready.
Beyond the EU, AI regulation is developing globally. According to the May 2025 Global AI legislation tracker, countries worldwide are developing and implementing AI governance legislation and policies through comprehensive laws, specific regulations for particular use cases, and voluntary guidelines and standards.
Three-quarters of executives surveyed by BCG rank AI as a top-three strategic focus for 2025. McKinsey reports that 78% of organisations already use AI in at least one business function, while 84% of CEOs plan to increase AI investments. The regulatory response to this rapid adoption is accelerating across jurisdictions.
Colorado became the first US state to enact AI bias legislation in May 2024, requiring developers and deployers of high-risk AI systems to use reasonable care to protect consumers from algorithmic discrimination.
Illinois enacted legislation in August 2024 making it a civil rights violation to use AI for employment decisions in a way that subjects employees to discrimination based on protected classes or uses zip code as a proxy for protected class status. The legislation also mandates notifying employees when AI is used for employment decision purposes. These state-level regulations create patchwork compliance obligations even for organisations operating exclusively in the US.
The UK government, while taking a more innovation-friendly approach than the EU, is developing AI governance frameworks through sector-specific regulators rather than comprehensive omnibus legislation. However, UK organisations selling to EU markets face EU AI Act compliance regardless of UK domestic policy. The practical effect is that EU regulations establish de facto global standards because organisations serving international markets must meet the most stringent requirements of any jurisdiction where they operate.
What makes 2026 such a critical inflection point is the convergence of multiple regulatory timelines. EU AI Act high-risk requirements. EU AI Act transparency obligations for generative AI. Evolving US state-level AI discrimination laws. Growing enforcement of existing anti-discrimination statutes against AI systems. Increasing litigation establishing AI vendor and deployer liability. Expanding insurance requirements for AI risk management.
Organisations that haven't established AI governance by mid-2026 will face simultaneous compliance gaps across multiple regulatory frameworks with severe financial and operational consequences.
Perhaps most critically, the regulatory trajectory is unmistakably toward expanding obligations rather than relaxing them. Early AI Act implementation focused on the most egregious risks, prohibited practices, systemic risk models, high-risk applications. Future iterations will almost certainly expand coverage to medium-risk and lower-risk systems as regulators gain experience and public awareness of AI risks increases. The organisations establishing robust AI governance now position themselves for compliance with future requirements. Those waiting for regulatory clarity will perpetually lag behind evolving obligations.
The Liability Explosion: When Your AI Discriminates
The Mobley v. Workday litigation represents merely the most visible case in an emerging category of AI discrimination lawsuits reshaping organisational liability. At least six major discrimination cases were filed or progressed significantly in 2024-2025, establishing legal precedents that every organisation using AI must understand. Courts are systematically rejecting defences that AI is too complex to audit, that vendors bear sole responsibility, or that algorithmic decisions deserve different treatment than human decisions.
The cases create a pattern revealing how AI liability materialises. iTutorGroup paid $365,000 in September 2023 to settle an Equal Employment Opportunity Commission lawsuit alleging the company programmed its AI recruitment software to automatically reject applications from female candidates aged 55 or older and male candidates aged 60 or older.
According to the case, the AI software rejected over 200 qualified applicants based purely on age, violating the Age Discrimination in Employment Act. This was the EEOC's first AI hiring discrimination lawsuit, establishing the agency's willingness to pursue algorithmic discrimination claims.
SafeRent agreed to pay more than $2 million in 2024 to settle litigation alleging its AI-powered tenant screening algorithm disparately impacted Black and Hispanic housing applicants. The court rejected SafeRent's argument that it couldn't be liable under the Fair Housing Act because it didn't make final housing decisions. Instead, the court held that SafeRent had liability because its SafeRent Scores product claimed to "automate human judgment" by making housing recommendations based on undisclosed algorithms that housing providers couldn't alter. The case established that AI tool providers face direct liability even when they don't make final decisions if their systems substantially influence outcomes.
In July 2024, CVS privately settled a proposed class action lawsuit filed by a job applicant claiming the company violated Massachusetts law by requiring prospective employees to take what legally amounted to a lie detector test. The lawsuit alleged that applicants underwent HireVue video interviews using Affectiva's AI technology to track facial expressions, smiles, smirks, and assign each candidate an "employability score" measuring conscientiousness, responsibility, and integrity.
The case highlighted AI systems' capacity to conduct psychological assessments that would be illegal if conducted through traditional means, raising questions about whether deploying AI circumvents employment law protections.
In March 2025, civil rights groups including the ACLU and Public Justice filed a complaint against Intuit and HireVue alleging discrimination against a deaf Indigenous woman who applied for a promotion and was screened using HireVue's automated speech recognition and assessment system. According to the complaint, the system penalised the applicant due to her speech patterns and lack of typical vocal cues, biases the AI was never trained to handle. The legal argument points to multiple violations including the Americans with Disabilities Act, Title VII of the Civil Rights Act, and Colorado's Anti-Discrimination Act.
The case exemplifies how AI systems trained on majority populations systematically disadvantage individuals with disabilities or other characteristics underrepresented in training data.
The Mobley case achieved unprecedented scale when the court granted conditional certification in May 2025, potentially covering hundreds of millions of applicants over age 40 who applied through Workday's system. The plaintiff alleged that Workday's algorithm-based applicant screening tools discriminated based on race, age, and disability. After Workday's motion to dismiss was denied in July 2024, the court allowed disparate impact claims to proceed under the Age Discrimination in Employment Act and Americans with Disabilities Act, holding that Workday had liability as an agent of employers using its AI product.
The court's reasoning in Mobley establishes critical precedent: "Workday's role in the hiring process is no less significant because it allegedly happens through artificial intelligence rather than a live human being who is sitting in an office going through resumes manually to decide which to reject. Nothing in the language of the federal anti-discrimination statutes or the case law interpreting those statutes distinguishes between delegating functions to an automated agent versus a live human one."
The court warned that drawing an artificial distinction between software decision-makers and human decision-makers would potentially gut anti-discrimination laws in the modern era.
This precedent extends liability beyond obvious discrimination. Organisations using AI face claims even when discrimination wasn't intentional if AI systems produce disparate impact on protected classes. The legal standard doesn't require proving that organisations deliberately programmed bias into algorithms. It requires only showing that facially neutral AI processes produce discriminatory effects on protected groups. This disparate impact theory shifts burden to organisations to prove that discriminatory AI outcomes are justified by business necessity and that no less discriminatory alternative exists.
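Disparate impact of the kind described above is usually screened numerically before it ever reaches a courtroom. One common yardstick, not named in the article but standard in US employment-selection auditing, is the EEOC's "four-fifths rule": compare selection rates across groups, and flag a ratio below 0.8 as potential adverse impact. A minimal sketch with invented numbers:

```python
# Sketch of the four-fifths (80%) rule used to screen for adverse impact
# in selection procedures. Numbers are invented for illustration; a real
# audit would use actual applicant-flow data plus significance testing.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

def adverse_impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Ratio of a group's selection rate to the most-favoured group's rate."""
    return group_rate / reference_rate

reference_group = selection_rate(selected=120, applicants=400)  # 0.30
protected_group = selection_rate(selected=45, applicants=300)   # 0.15

ratio = adverse_impact_ratio(protected_group, reference_group)
print(round(ratio, 2), ratio < 0.8)  # 0.5 True -> flags potential adverse impact
```

The point for deployers is that this screen requires only outcome data they already hold; an organisation cannot plead ignorance of a ratio it could compute from its own applicant logs.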
The scale of potential liability is staggering. An October 2024 survey found that roughly seven in ten companies allow AI tools to reject candidates without any human oversight. The University of Washington research showing 85.1% preference for white-associated names and systematic disadvantaging of Black male candidates means that potentially millions of applicants have been subject to unlawful discrimination. Class action certification allows plaintiffs to aggregate these individual instances into massive collective claims with damages multiplied across thousands or millions of affected individuals.
Beyond employment discrimination, AI liability extends to housing, credit, insurance, healthcare, education, and any other domain where AI systems make decisions affecting people's fundamental rights and opportunities.
The Fair Housing Act, Equal Credit Opportunity Act, Americans with Disabilities Act, and similar statutes all prohibit discrimination regardless of whether decisions are made by humans or algorithms. Courts are consistently holding that deploying AI doesn't exempt organisations from anti-discrimination obligations, it extends those obligations to algorithmic decision-making processes.
The practical effect is that organisations using AI tools for consequential decisions bear the full burden of proving their systems don't discriminate. This requires understanding how the AI makes decisions: the training data used, the features considered, the weights assigned, and the decision thresholds applied.
For most organisations using commercial AI tools, this information isn't readily available. Vendors often claim algorithms are proprietary, making it impossible for deploying organisations to audit for bias. Courts are unsympathetic to this defence, holding that organisations choosing to deploy opaque AI systems remain liable for discriminatory outcomes regardless of whether they can explain how discrimination occurred.
The insurance implications compound financial exposure. Cyber insurance underwriters increasingly require evidence of AI risk management before offering coverage. Organisations deploying AI without governance frameworks face coverage denial or exclusions for AI-related claims. When discrimination lawsuits arise, insurers aggressively pursue policy exclusions based on inadequate due diligence, failure to audit AI systems, or known risks that weren't mitigated. Insurance, rather than providing financial protection, becomes another liability source when AI governance is absent.
The reputational damage extends beyond financial penalties. Organisations identified as using discriminatory AI face intense media scrutiny, employee backlash, customer defection, and investor concern. Workday, despite denying discrimination claims and announcing third-party AI responsibility accreditations, faces ongoing reputational risk from the Mobley litigation regardless of ultimate legal outcome. The public narrative that a company's AI discriminates against older workers, racial minorities, or people with disabilities creates lasting brand damage that settlement payments cannot repair.
ISO 42001: The Governance Framework Every Organisation Actually Needs
ISO/IEC 42001:2023, published in December 2023, represents the world's first certifiable international standard for AI management systems. Unlike technical standards specifying AI model requirements or algorithmic specifications, ISO 42001 establishes a governance framework ensuring organisations can manage AI responsibly regardless of whether they build AI systems internally or purchase them from vendors. The standard applies equally to a five-person consultancy using ChatGPT for client communications and a multinational corporation deploying custom machine learning models.
The genius of ISO 42001 lies in recognising that AI governance is fundamentally about decision-making and accountability, not technical AI expertise. Organisations don't need machine learning PhDs to implement ISO 42001. They need governance structures ensuring that decisions about AI deployment, monitoring, and risk management occur at appropriate levels with appropriate information and appropriate accountability.
The standard provides a framework for these governance structures that works for organisations at any scale with any level of AI sophistication.
The standard comprises ten clauses following the familiar Plan-Do-Check-Act methodology used in ISO management system standards.
Clauses 1-3 cover scope, normative references, and terms and definitions. Clause 4 requires organisations to understand their context, including internal and external issues affecting AI use and needs and expectations of stakeholders. Clause 5 mandates leadership and commitment from top management, who must establish AI policy, ensure AI requirements integrate with business processes, and promote culture supporting responsible AI usage.
This leadership requirement immediately addresses the delegation problem plaguing most organisations. Under ISO 42001, top management cannot delegate AI governance to IT departments or technology teams and consider the matter handled. Leadership must demonstrate active commitment by establishing policy, ensuring resources, communicating importance, ensuring the AI management system (AIMS) achieves its intended outcomes, and supporting personnel contributing to its effectiveness. This creates board-level and executive-level accountability that current AI deployments almost universally lack.
Clause 6 addresses planning, requiring organisations to identify and assess risks and opportunities associated with AI and develop plans addressing them. This risk assessment extends beyond traditional IT risk to encompass AI-specific concerns including bias, transparency, accountability, data protection, safety, security, and societal impact. The standard doesn't prescribe specific risk treatments but requires organisations to establish risk acceptance criteria reflecting organisational risk appetite, a fundamentally strategic decision that only business leaders can make.
The risk assessment requirement solves the hidden AI problem by forcing organisations to inventory AI systems they're using and assess risks each system creates. Many organisations implementing ISO 42001 discover they're using far more AI than they realised, with AI embedded in tools that employees adopted without formal approval.
The systematic review required by the standard brings this shadow AI into formal governance, ensuring all AI use receives appropriate oversight regardless of how it entered the organisation.
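In practice, the inventory that this systematic review produces is just a structured register: every AI-touching tool, who owns it, whether it was built or bought, its risk tier, and whether a human reviews its outputs. A hedged sketch of such a register follows; the field names and example entries are illustrative assumptions, since ISO 42001 does not prescribe a specific schema:

```python
# Sketch of a minimal AI system register supporting a Clause 6-style
# risk assessment. Field names and entries are illustrative, not a
# schema prescribed by ISO 42001.
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    owner: str             # accountable business owner, not just IT
    sourced: str           # "built" internally or "bought" from a vendor
    use_case: str
    risk_tier: str         # e.g. "high" for employment/credit decisions
    human_oversight: bool  # is there meaningful human review of outputs?

register = [
    AISystemRecord("CV screening (ATS)", "Head of HR", "bought",
                   "applicant ranking", "high", human_oversight=False),
    AISystemRecord("Copilot drafting", "Marketing lead", "bought",
                   "content drafting", "low", human_oversight=True),
]

# Surface the gaps leadership must resolve: high-risk AI with no human review.
gaps = [r.name for r in register if r.risk_tier == "high" and not r.human_oversight]
print(gaps)  # ['CV screening (ATS)']
```

Even a register this simple makes shadow AI visible: tools employees adopted informally get a named owner, a risk tier, and an oversight status that management review can interrogate.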
Clause 7 covers support, including resources, competence, awareness, and communication. Organisations must ensure adequate resources for establishing, implementing, maintaining, and improving the AI management system. Personnel involved with AI must possess necessary competence based on education, training, or experience. The standard requires promoting AI literacy throughout the organisation, ensuring that employees understand AI capabilities, limitations, and appropriate use even if they lack technical AI expertise.
This AI literacy requirement directly addresses the workforce gap identified by the World Economic Forum, which suggested 40% of workers will need reskilling by 2025 due to AI advancements. ISO 42001 ensures organisations don't just deploy AI tools and hope employees use them responsibly, they actively develop workforce competence ensuring AI use aligns with organisational policies, legal requirements, and ethical standards.
Clause 8 addresses operation, establishing processes for AI system lifecycle management from development or acquisition through deployment, monitoring, and decommissioning. For organisations purchasing AI tools rather than building them, this includes evaluating vendor capabilities, ensuring contractual terms address AI-specific risks, and establishing oversight mechanisms confirming vendor AI systems behave as expected. The standard requires human oversight of AI systems, particularly for high-risk applications, ensuring that AI recommendations or decisions receive appropriate human review before affecting people.
The human oversight requirement responds directly to the October 2024 survey finding that 70% of companies allow AI tools to reject candidates without human involvement. ISO 42001 renders this practice non-compliant by requiring human oversight mechanisms appropriate to the risk level. High-risk AI decisions affecting employment, education, housing, credit, or similar consequential domains must receive meaningful human review, not rubber-stamping of algorithmic outputs.
Clause 9 establishes performance evaluation through monitoring, measurement, analysis, evaluation, internal audit, and management review. Organisations must establish what needs monitoring and measurement, the methods for obtaining valid results, when monitoring occurs, and who analyses and evaluates the results. Internal audits must be conducted at planned intervals by persons independent of the area being audited. Top management must review the AI management system at planned intervals to ensure its continuing suitability, adequacy, and effectiveness.
The management review creates a structured mechanism for boards and executives to receive objective information about AI system performance, risks materialising, audit findings, and needed improvements. Rather than receiving filtered information from teams with a vested interest in portraying AI deployments as successful, leadership receives structured evidence about actual AI performance including failures, near-misses, stakeholder complaints, and emerging risks. This transparency enables informed decision-making about AI strategy, resourcing, and risk acceptance.
Clause 10 addresses improvement, requiring organisations to identify improvement opportunities and take action addressing nonconformities and preventing their recurrence. When nonconformities occur, such as discovering that an AI system has produced biased outcomes, organisations must react promptly, evaluate the need for corrective action, implement the necessary actions, review the effectiveness of those actions, and update the AI management system as needed. This continual improvement cycle ensures that AI governance evolves as organisations learn from experience and as AI capabilities, risks, and regulations change.
Annex A of ISO 42001 provides 38 AI-specific controls that organisations implement based on risk assessment. These controls address the full AI lifecycle including AI system impact assessment, data quality and governance, transparency and explainability, human oversight, robustness and accuracy, privacy and data protection, accountability, training and competence, third-party management, and incident management. Organisations select applicable controls based on their specific AI use cases, risk profile, and regulatory environment rather than implementing all controls universally.
The third-party management controls directly address vendor AI tools that most organisations depend on. Organisations must identify AI-related supply chain risks, ensure supplier security controls, establish supplier agreements addressing AI-specific requirements, and monitor supplier performance. This forces organisations to ask hard questions about vendors' AI systems that most currently ignore: What training data was used? How was bias tested and mitigated? What accuracy rates does the system achieve for different demographic groups? How does the system handle edge cases and exceptions? What documentation exists explaining how the system makes decisions?
The certification process provides independent validation of governance effectiveness. Organisations pursuing ISO 42001 certification undergo a stage one audit evaluating the AIMS against the standard's requirements, a stage two audit confirming implementation and operational effectiveness, and, if successful, receive certification valid for three years, with annual surveillance audits and full recertification in year three. This external oversight creates stakeholder confidence that AI governance isn't merely documented on paper but operates effectively in practice.
Integration with existing frameworks makes ISO 42001 practical for organisations already managing information security, privacy, or quality. The standard aligns closely with ISO 27001 for information security management, ISO 27701 for privacy management, and the NIST AI Risk Management Framework.
Organisations with existing ISO 27001 certification find ISO 42001 governance structure familiar, enabling efficient implementation by building on established management system foundations rather than creating entirely separate AI governance from scratch.
For EU AI Act compliance, ISO 42001 provides a readiness framework. ISO 42001 certification doesn't automatically guarantee EU AI Act compliance (the standard is voluntary, while the Act is binding regulation), but implementing it addresses many AI Act requirements including governance structures, risk management, transparency, human oversight, and post-market monitoring.
The August 2025 alignment between ISO 42001 and EU AI Act general purpose AI requirements means organisations implementing the standard position themselves well for regulatory compliance alongside demonstrating responsible AI governance to stakeholders.
The market adoption validates ISO 42001's relevance. BSI became the first certification body accredited by UKAS to certify ISO 42001 in the UK, with RvA accreditation in the Netherlands confirming international standards for impartiality, competence, and consistency.
The number of organisations achieving ISO certification increased 20% worldwide in 2024 compared to 2023. Cloud Security Alliance reported in 2025 that 76% of organisations in their compliance benchmark plan to pursue ISO 42001 or similar frameworks soon. This adoption trend reflects recognition that AI governance is a business imperative, not an optional nice-to-have.
Why Every Organisation—Not Just AI Developers—Needs This Now
The common objection to ISO 42001 runs something like this: "We're not an AI company. We just use a few tools. Surely this standard is for organisations building complex AI systems, not for us." This objection fundamentally misunderstands both how AI governance works and what ISO 42001 addresses.
The standard isn't primarily about technical AI development; it's about ensuring that organisations using AI in any capacity manage the associated risks responsibly. The five-person consultancy using ChatGPT faces risks of a different scale from, but the same category as, the enterprise deploying custom machine learning.
Consider the consultancy scenario concretely. A small business development consultancy helps clients secure government contracts. They use ChatGPT to draft proposal content, analyse tender documents, and research client backgrounds. One consultant uploads a confidential client document to ChatGPT seeking a summary. The document contains sensitive business information the client hasn't authorised for external sharing. ChatGPT's training data might now include this confidential information. The client later discovers their confidential information appears in responses ChatGPT provides to competitors.
Under GDPR, the consultancy faces potential enforcement action for unlawful data processing. Under contractual obligations with the client, they face breach of confidentiality claims. Under professional standards, they face reputational damage and loss of client relationships. The consultancy's defence that they didn't know ChatGPT might use uploaded content for training will fail because reasonable AI governance requires understanding how AI tools handle data before uploading confidential information. ISO 42001 would have required the consultancy to establish data handling policies for AI tools, ensuring employees understood what content could appropriately be processed by external AI systems.
The employment discrimination scenario applies equally to small and large organisations. A fifteen-person marketing agency uses an applicant tracking system incorporating AI resume screening when hiring for a new position. The system recommends five candidates, all of whom happen to be under thirty years old. Several qualified candidates over forty weren't recommended despite relevant experience. The agency hires from the AI-recommended pool without reviewing other applicants. A rejected candidate over forty files an age discrimination claim.
The agency faces EEOC investigation and potential lawsuit identical to Mobley v. Workday regardless of company size. Their defence that they relied on vendor AI will fail under the legal precedent that organisations deploying AI bear responsibility for discriminatory outcomes. ISO 42001 would have required the agency to assess AI-specific risks in their applicant tracking system, establish human oversight of AI hiring recommendations, and periodically audit for disparate impact on protected classes.
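The periodic disparate-impact audit described above is often operationalised with the EEOC's four-fifths rule: if the selection rate for any protected group falls below 80% of the rate for the most-selected group, that is treated as evidence of adverse impact. A minimal sketch in Python; the group labels and applicant numbers are illustrative assumptions, not data from the case:

```python
def four_fifths_check(outcomes, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold`
    times the best-performing group's rate (EEOC four-fifths rule).

    outcomes: {group: (selected, total_applicants)}
    Returns {group: impact_ratio} for groups failing the check.
    """
    rates = {group: sel / total for group, (sel, total) in outcomes.items()}
    best = max(rates.values())
    return {group: round(rate / best, 2)
            for group, rate in rates.items()
            if rate / best < threshold}

# Hypothetical numbers: candidates under/over 40 screened by an AI tool
audit = four_fifths_check({
    "under_40": (30, 100),   # 30% selection rate
    "over_40": (9, 100),     # 9% selection rate
})
print(audit)  # {'over_40': 0.3} -> ratio 0.3 < 0.8, evidence of disparate impact
```

A check like this takes minutes to run against hiring records and produces exactly the kind of documented, periodic audit evidence the standard expects.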
The standard's requirements scale to the organisation's size and risk profile but apply regardless of whether the company has five employees or five thousand.
The financial exposure scales with turnover but the liability categories remain constant. A £2 million revenue consultancy and a £200 million revenue enterprise both face discrimination lawsuits if their AI disadvantages protected classes. Both face regulatory enforcement if AI processing violates data protection rules. Both face contractual claims if AI breaches confidentiality or professional obligations. Both face reputational damage if AI produces inappropriate, inaccurate, or harmful outputs.
The consultancy's smaller revenue doesn't reduce liability; it increases vulnerability, because defending discrimination claims or regulatory enforcement actions can easily consume their entire operational capacity.
The EU AI Act explicitly addresses small and medium-sized enterprises, recognising that AI regulation must account for organisations with limited resources. The Act's proportionality principle requires that compliance obligations consider organisation size and resources. However, proportionality doesn't mean SMEs are exempt; it means compliance approaches should be practical for organisations without dedicated compliance teams. ISO 42001 provides exactly this practical framework by establishing governance processes that work at any scale rather than prescribing resource-intensive technical requirements.
Consider the regulatory sandboxes the EU AI Act requires each Member State to establish by August 2, 2026. These sandboxes specifically aim to help SMEs and small mid-caps pilot AI solutions under regulatory guidance in real-world conditions. The existence of SME-specific regulatory support mechanisms confirms that the EU AI Act applies to small organisations, not just technology giants. ISO 42001 helps SMEs prepare for regulatory engagement by establishing baseline governance proving they manage AI responsibly even at early stages.
The competitive advantage dimension particularly benefits smaller organisations. Large enterprises increasingly require suppliers to demonstrate AI governance as a procurement criterion. Government contracts may mandate AI compliance frameworks. Industry partnerships may require AI risk management evidence. SMEs competing for these opportunities without ISO 42001 certification face systematic exclusion. Early certification provides a market differentiator that compensates for the size disadvantage, positioning small organisations as responsible AI users that large customers can trust.
The insurance access dimension creates a survival imperative. Cyber insurance and professional indemnity insurance increasingly require AI risk management evidence. Organisations deploying AI without governance frameworks face coverage denial, exclusions for AI-related claims, or prohibitively expensive premiums.
For SMEs with limited financial reserves, uninsured AI liability represents an existential threat. ISO 42001 certification provides insurers with independent verification of AI risk management, making coverage available at reasonable cost.
Talent attraction and retention particularly challenge smaller organisations competing with large enterprises for skilled workers. Knowledge workers increasingly seek employers demonstrating ethical technology use and responsible innovation. Organisations with ISO 42001 certification signal to prospective employees that they manage AI thoughtfully, creating a safer, more ethical workplace. This becomes a recruiting advantage for SMEs unable to match enterprise salary levels but able to demonstrate superior governance and values alignment.
The implementation investment scales appropriately. A five-person consultancy implementing ISO 42001 won't establish elaborate governance committees, conduct extensive impact assessments, or maintain comprehensive documentation libraries. They'll establish simple but effective processes appropriate to their AI use: a policy covering which AI tools are approved for which purposes, basic training ensuring employees understand responsible AI use, a straightforward risk assessment of the AI tools they deploy, simple monitoring confirming AI behaves as expected, and a clear incident response process for problems.
This might require several days of work initially and a few hours quarterly for maintenance: a material investment, but not prohibitive for an organisation generating reasonable revenue.
Contrast this investment with the cost of a single discrimination lawsuit, regulatory enforcement action, or major client breach of confidentiality. Legal defence costs alone typically exceed £50,000 even for cases that don't proceed to trial. Regulatory fines under GDPR or the EU AI Act can reach hundreds of thousands of pounds even for SMEs based on percentage-of-turnover calculations.
Reputational damage from AI failures can destroy client relationships that took years to build. Insurance premium increases or coverage denial compounds the ongoing costs. The business case for ISO 42001 investment isn't whether organisations can afford implementation; it's whether they can afford the consequences of not implementing.
The implementation timeline enables rapid deployment. Unlike complex technical projects requiring months or years, ISO 42001 governance can be established in weeks for organisations with straightforward AI use. The standard provides the framework; organisations populate it with their specific content.
A consultancy using ChatGPT, Microsoft Copilot, and an AI-enabled CRM can inventory these tools, assess associated risks, establish usage policies, train employees, and implement basic monitoring within a month.
Certification might take three to six months including preparation, external audit, and addressing any findings, but operational governance improvement begins immediately.
The universality of AI risk makes ISO 42001 relevant regardless of sector, size, or business model. Employment agencies face discrimination risk from AI recruiting tools.
Healthcare providers face patient safety risk from AI diagnostic systems. Financial services face regulatory risk from AI credit decisions. Retailers face customer service risk from AI chatbots. Professional services face confidentiality risk from AI content generation. Manufacturers face product liability risk from AI quality control. Every sector using AI, which increasingly means every sector, faces AI-specific risks requiring governance that traditional IT management or vendor management doesn't address.
The timing imperative combines regulatory deadlines, litigation trends, insurance requirements, and competitive dynamics into a narrow window for action. The August 2026 EU AI Act enforcement date creates a hard deadline for organisations selling into European markets. Expanding AI discrimination litigation creates an urgent need for organisations to audit AI systems and eliminate bias before lawsuits materialise.
Insurance market changes require governance evidence for coverage renewal. Customer procurement requirements increasingly mandate AI risk management. Organisations delaying implementation find themselves simultaneously non-compliant with regulations, vulnerable to litigation, unable to secure insurance, and excluded from opportunities. Starting ISO 42001 implementation today provides the time needed for thoughtful deployment before external pressures force rushed, inadequate responses.
The Choice You're Making Right Now
Every day your organisation continues using AI without governance, you're making a choice. It's not passive inaction; it's an active decision to accept AI-specific risks without mitigation.
You're choosing to deploy systems that might discriminate without checking whether they do. You're choosing to process data through AI without confirming that processing complies with privacy regulations. You're choosing to make consequential decisions based on AI recommendations without understanding how those recommendations were generated.
You're choosing to trust vendor AI systems without verifying they behave responsibly.
This choice carries measurable consequences that are accelerating. The Mobley v. Workday class certification potentially covering hundreds of millions of applicants establishes that AI discrimination liability scales to enterprise-destroying levels.
The EU AI Act penalties of up to 7% of global turnover or €35 million create financial exposure that few organisations can absorb. The expanding US state-level AI discrimination laws create multiplying compliance obligations across jurisdictions. The insurance market changes reduce coverage availability for unmanaged AI risk. The competitive dynamics exclude organisations without AI governance from growing opportunities.
But the choice isn't binary between maintaining current AI use without governance and abandoning AI entirely. ISO 42001 provides a framework for responsible AI use that captures efficiency gains and innovation benefits while managing risks to acceptable levels.
Organisations implementing the standard report enhanced stakeholder trust, improved risk management, competitive advantage, and validation of ethical AI practices alongside continued realisation of AI's business value.
The implementation journey begins with recognition that AI governance is a governance responsibility, not a technology task. Boards and executives must acknowledge that AI systems making decisions affecting employees, customers, suppliers, and other stakeholders require the same oversight, accountability, and risk management as any other significant business capability.
Delegating AI to IT departments or trusting vendors created the current vulnerability. Establishing top-management-led governance through ISO 42001 creates a sustainable solution.
The practical first steps are achievable for any organisation regardless of size or technical sophistication. Inventory AI systems currently in use, including tools employees adopted without formal approval.
Assess the risks each AI system creates, specific to your business context, sector, and stakeholder relationships. Establish a basic policy covering approved AI use, prohibited AI applications, data handling requirements, and human oversight expectations. Train employees on responsible AI use appropriate to their roles. Implement simple monitoring confirming AI behaves as expected and identifying anomalies requiring investigation.
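As a concrete starting point, the inventory-and-assess steps above can be captured in a simple risk register. A minimal sketch in Python; the tools, risk categories, and scoring weights are illustrative assumptions, not requirements of the standard:

```python
from dataclasses import dataclass, field

@dataclass
class AITool:
    name: str
    purpose: str
    handles_personal_data: bool
    affects_individuals: bool   # e.g. hiring, credit, pricing decisions
    approved: bool = False      # formally approved, or shadow AI?
    risks: list = field(default_factory=list)

def assess(tool: AITool) -> int:
    """Very simple triage: higher score = needs more oversight."""
    score = 0
    if tool.handles_personal_data:
        score += 2
        tool.risks.append("data protection / confidentiality")
    if tool.affects_individuals:
        score += 3
        tool.risks.append("bias / discrimination; requires human review")
    if not tool.approved:
        score += 1
        tool.risks.append("shadow AI: no formal approval")
    return score

# Hypothetical inventory for a small consultancy
inventory = [
    AITool("ChatGPT", "drafting proposals", True, False),
    AITool("ATS resume screener", "candidate shortlisting", True, True, approved=True),
]
for tool in sorted(inventory, key=assess, reverse=True):
    print(tool.name, "->", tool.risks)
```

Even a register this crude surfaces which tools demand human oversight first and turns the inventory step from a vague intention into a reviewable artefact.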
These foundational steps provide immediate risk reduction and establish a trajectory toward ISO 42001 certification. The organisation doesn't need to achieve perfect implementation before benefiting from the governance framework. Every policy established, every employee trained, every risk assessed, and every monitoring mechanism implemented reduces the likelihood of catastrophic AI failure and demonstrates diligence if problems occur despite precautions.
The certification decision depends on organisational context. Organisations selling to enterprise customers, operating in regulated sectors, or seeking competitive differentiation benefit substantially from ISO 42001 certification's independent validation.
Organisations with less external pressure might implement ISO 42001 framework without pursuing certification, gaining governance benefits while deferring certification costs until business case strengthens. Either approach dramatically exceeds current common practice of using AI without any governance framework.
The window for early mover advantage is closing rapidly. The 76% of organisations planning to pursue ISO 42001 or similar frameworks creates competitive dynamic where having AI governance becomes table stakes rather than differentiator. Organisations implementing now position themselves as responsible leaders demonstrating proactive risk management. Organisations waiting until regulatory enforcement, litigation, or customer requirements force implementation will lag behind market expectations and face perception as reluctant compliance-driven followers.
The message from regulators, courts, insurers, and markets is converging and unmistakable: AI governance is no longer optional for organisations using AI in any capacity. The regulatory requirements are taking effect. The litigation precedents are established. The insurance conditions are hardening. The competitive expectations are rising.
Organisations continuing to use AI without governance aren't delaying inevitable implementation; they're accumulating liability that will materialise in ways that destroy significantly more value than governance investment would have cost.
The question facing every organisation today isn't whether AI governance is necessary. Courts, regulators, and markets have definitively answered that question: it is necessary. The question is whether organisations will establish governance proactively as strategic investment positioning them for sustainable AI use, or reactively as crisis response after discrimination lawsuit, regulatory enforcement, client breach, or insurance denial demonstrates that unmanaged AI risk was never acceptable business practice.
ISO 42001 provides the framework for proactive governance. The only remaining question is when your organisation will begin implementation. Every day of delay is a choice whose consequences you may not recognise until they become catastrophic.
References
A-LIGN (2025). Understanding ISO 42001: The World's First AI Management System Standard. Retrieved from https://www.a-lign.com/articles/understanding-iso-42001
AARC-360 (2025). Strengthening AI Governance and Supporting ISO/IEC 42001. Retrieved from https://www.aarc-360.com/understanding-iso-iec-42005-2025/
Alumio (n.d.). Comparing best AI tools for business 2025. Retrieved from https://www.alumio.com/blog/comparing-best-business-ai-tools-2025
American Bar Association (2024). Navigating the AI Employment Bias Maze: Legal Compliance Guidelines and Strategies. Retrieved from https://www.americanbar.org/groups/business_law/resources/business-law-today/2024-april/navigating-ai-employment-bias-maze/
American Bar Association (2024-2025). Regulation by the EEOC and the States of Algorithmic Bias in High-Risk Use Cases. Retrieved from https://www.americanbar.org/groups/business_law/resources/business-lawyer/2024-2025-winter/eeoc-states-regulation-algorithmic-bias-high-risk/
American Bar Association (2025). Recent Developments in Artificial Intelligence Cases and Legislation 2025. Retrieved from https://www.americanbar.org/groups/business_law/resources/business-law-today/2025-august/recent-developments-artificial-intelligence-cases-legislation/
BizTech Magazine (2025). AI Tools for Small Business in 2025: Stay Ahead of the Curve. Retrieved from https://biztechmagazine.com/article/2025/05/ai-tools-small-business-are-helping-smbs-compete-larger-scale-perfcon
BSI (n.d.). ISO 42001 - AI Management System. Retrieved from https://www.bsigroup.com/en-US/products-and-services/standards/iso-42001-ai-management-system/
ClassAction.org (2025). AI Job Screening, Interview & Hiring Lawsuits. Retrieved from https://www.classaction.org/ai-interview-screening-lawsuits
Cloud Security Alliance (2025). ISO 42001: Auditing and Implementing Framework. Retrieved from https://cloudsecurityalliance.org/blog/2025/05/08/iso-42001-lessons-learned-from-auditing-and-implementing-the-framework
Cooley (2025). EU AI Act: Proposed 'Digital Omnibus on AI' Will Impact Businesses' AI Compliance Roadmaps. Retrieved from https://www.cooley.com/news/insight/2025/2025-11-24-eu-ai-act-proposed-digital-omnibus-on-ai-will-impact-businesses-ai-compliance-roadmaps
Deloitte (2025). ISO 42001 Standard for AI Governance and Risk Management. Retrieved from https://www.deloitte.com/us/en/services/consulting/articles/iso-42001-standard-ai-governance-risk-management.html
DLA Piper (2025). Latest wave of obligations under the EU AI Act take effect: Key considerations. Retrieved from https://www.dlapiper.com/en-us/insights/publications/2025/08/latest-wave-of-obligations-under-the-eu-ai-act-take-effect
Domo (n.d.). Top 10 AI Automation Platforms to Transform Your Business in 2025. Retrieved from https://www.domo.com/learn/article/ai-automation-platforms
European Commission (n.d.). AI Act. Retrieved from https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
EU Artificial Intelligence Act (n.d.). EU Artificial Intelligence Act. Retrieved from https://artificialintelligenceact.eu/
EU Artificial Intelligence Act (n.d.). Small Businesses' Guide to the AI Act. Retrieved from https://artificialintelligenceact.eu/small-businesses-guide-to-the-ai-act/
EY (2025). ISO 42001: paving the way for ethical AI. Retrieved from https://www.ey.com/en_us/insights/ai/iso-42001-paving-the-way-for-ethical-ai
eyreACT (2025). Does the EU AI Act Apply to the UK? A Comprehensive Analysis. Retrieved from https://www.eyreact.com/ai-act-uk/
Fortune (2025). Workday, Amazon AI employment bias claims add to growing concerns about the tech's hiring discrimination. Retrieved from https://fortune.com/2025/07/05/workday-amazon-alleged-ai-employment-bias-hiring-discrimination/
ISACA (2025). ISO 42001 Balancing AI Speed Safety. Retrieved from https://www.isaca.org/resources/news-and-trends/isaca-now-blog/2025/iso-42001-balancing-ai-speed-safety
ISO (n.d.). ISO/IEC 42001:2023 - AI management systems. Retrieved from https://www.iso.org/standard/42001
Journal of Technology and Intellectual Property (2025). Algorithmic Bias in AI Employment Decisions. Retrieved from https://jtip.law.northwestern.edu/2025/01/30/algorithmic-bias-in-ai-employment-decisions/
KPMG (2025). ISO/IEC 42001: a new standard for AI governance. Retrieved from https://kpmg.com/ch/en/insights/artificial-intelligence/iso-iec-42001.html
Lathrop GPM (2025). Lawsuits Alleging Systemic Bias in AI Algorithmic Screening Tools Should Serve as Cautionary Tale. Retrieved from https://www.lathropgpm.com/insights/lawsuits-alleging-systemic-bias-in-ai-algorithmic-screening-tools-should-serve-as-cautionary-tale/
Menlo Ventures (n.d.). 2025: The State of Generative AI in the Enterprise. Retrieved from https://menlovc.com/perspective/2025-the-state-of-generative-ai-in-the-enterprise/
Microsoft (2025). AI-powered success—with more than 1,000 stories of customer transformation and innovation. Retrieved from https://blogs.microsoft.com/blog/2025/04/22/https-blogs-microsoft-com-blog-2024-11-12-how-real-world-businesses-are-transforming-with-ai/
Microsoft (n.d.). What's New in Copilot Studio: November 2025 Updates and Features. Retrieved from https://www.microsoft.com/en-us/microsoft-copilot/blog/copilot-studio/whats-new-in-microsoft-copilot-studio-november-2025/
Prompt Security (2025). Understanding the ISO/IEC 42001 for AI Management Systems. Retrieved from https://prompt.security/blog/understanding-the-iso-iec-42001
Quinn Emanuel (2025). When Machines Discriminate: The Rise of AI Bias Lawsuits. Retrieved from https://www.quinnemanuel.com/the-firm/publications/when-machines-discriminate-the-rise-of-ai-bias-lawsuits/
Raconteur (n.d.). EU AI Act: deadlines fast approaching for UK firms. Retrieved from https://www.raconteur.net/risk-regulation/244125
SIG (2025). A comprehensive EU AI Act Summary. Retrieved from https://www.softwareimprovementgroup.com/blog/eu-ai-act-summary/
Superprompt (2025). Best AI Tools for Small Business Automation in 2025. Retrieved from https://superprompt.com/blog/best-ai-tools-small-business-automation-2025-save-time-money
The Interview Guys (2025). 85% of AI Resume Screeners Prefer White Names: Why 2025 Is The Year Hiring Discrimination Lawsuits Exploded. Retrieved from https://blog.theinterviewguys.com/85-of-ai-resume-screeners-prefer-white-names/
Traverse Legal (2025). Recent Lawsuits Against AI Companies: Beyond Copyright Infringement. Retrieved from https://www.traverselegal.com/blog/ai-litigation-beyond-copyright/
TTMS (2025). EU AI Act Update 2025. Retrieved from https://ttms.com/eu-ai-act-update-2025-code-of-practice-enforcement-industry-reactions/
Ventum Consulting (n.d.). EU AI Act 2026: What companies need to know now. Retrieved from https://www.ventum-consulting.com/en/news/eu-ai-act-2026-what-companies-need-to-prepare-for-in-2026/
VisualSP (2025). Copilot or ChatGPT: Which AI Tool Is Better for Your Business. Retrieved from https://www.visualsp.com/blog/copilot-or-chatgpt-which-ai-tool-is-better-for-your-business/