
New ISO/IEC 42001 – Information Technology – Artificial Intelligence – Management System

At the end of 2023, a new standard, ISO/IEC 42001:2023 Information Technology – Artificial Intelligence – Management System, was released. It specifies requirements and provides recommendations for establishing, implementing, maintaining and continually improving an artificial intelligence management system (AIMS). The standard is applicable to any organisation that provides or uses products or services based on artificial intelligence systems.

An artificial intelligence system is “an engineered system that generates outputs such as content, predictions, recommendations or decisions for a given set of human-defined objectives”.

Artificial intelligence (AI) has enormous potential to benefit the economy and society across a wide range of sectors and areas of social life. However, AI also brings new risks. As AI systems develop, there is therefore a growing need for effective standardisation and regulation to ensure their responsible use.

An AI Management System (AIMS) creates an environment that helps organisations to responsibly fulfil their role in relation to AI systems (e.g. to use, develop, monitor or deliver products) and offers a systematic and integrated approach to managing AI projects, from setting objectives, assessing risks and impacts, to effectively addressing those risks.

AIMS is applicable to all organisations that develop, provide or use products or services using an AI system. It is therefore potentially applicable to a large number of products and services in different sectors that are subject to obligations (e.g. see the AI Act), good practice, expectations or contractual commitments to stakeholders. Examples of sectors are:

  • health and social services,
  • defence and homeland security,
  • transport,
  • finance and insurance,
  • employment,
  • energy.

The Artificial Intelligence Act

The Artificial Intelligence Act (Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence and amending certain Union legislative acts) provides for:

  • harmonised rules for the placing on the market, putting into service and use of AI systems in the Union;
  • the prohibition of certain practices in the field of artificial intelligence;
  • specific requirements for high-risk AI systems and obligations for operators of such systems;
  • harmonised transparency rules for AI systems intended to interact with natural persons, for emotion recognition systems and biometric categorisation systems, and for AI systems used to generate or manipulate image, audio or video content;
  • rules on market monitoring and surveillance.

High-risk AI systems are defined as systems in the following areas:

  • Biometric identification and categorisation of natural persons
  • Critical infrastructure management and operation
  • Education and training
  • Employment, management of workers and access to self-employment
  • Access to basic private and public services and benefits
  • Law enforcement
  • Migration, asylum and border management
  • Administration of justice and democratic processes

The AI Act requires, among other things, that high-risk AI systems comply with the Regulation, have a risk management system in place, use high-quality training, validation and testing data, and maintain technical documentation and relevant records.

Providers of high-risk AI systems must, inter alia, have a quality management system in place that meets the relevant requirements of the Regulation.

Structure of the standard

ISO/IEC 42001 uses the harmonized structure (the same clause numbers, clause titles, text, and common terms and core definitions) developed to increase consistency between management system standards (MSS). This common approach facilitates implementation of, and alignment with, other management system standards.

  • Chapter 4: Context of the organisation
  • Chapter 5: Leadership
  • Chapter 6: Planning
  • Chapter 7: Support
  • Chapter 8: Operation
  • Chapter 9: Performance evaluation
  • Chapter 10: Improvement

The standard contains four annexes: two normative and two informative.

  • Annex A (normative): reference objectives and measures that organisations use to manage risks and achieve AI-related objectives.
  • Annex B (normative): guidance on implementing the measures in Annex A to meet the objectives of those measures.
  • Annex C (informative): potential organisational objectives and sources of risk related to AI to be considered in the risk management process.
  • Annex D (informative): use of the AI management system across domains or sectors.

Integration of the AI management system with other management system standards

When providing or using AI systems, an organisation may have objectives or obligations relating to aspects covered by other management system standards, in particular information security and cybersecurity, privacy, and quality. These topics are addressed in the following standards:

  • ISO/IEC 27001 – Requirements for information security management systems (ISMS)
  • ISO/IEC 27701 – Requirements and guidelines for extending ISO/IEC 27001 and ISO/IEC 27002 for privacy information management
  • ISO 9001 – Requirements for quality management systems (QMS)
