Artificial Intelligence

An artificial intelligence system is “an engineered system that generates outputs such as content, predictions, recommendations or decisions for a given set of human-defined goals”. Artificial intelligence (AI) has enormous potential to support the economy and society across a wide range of sectors and areas of social life. However, AI also brings new risks. As AI systems develop, there is therefore a growing need for effective standardisation and regulation to ensure their responsible use.

The Artificial Intelligence Act

The Artificial Intelligence Act (Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence and amending certain legislative acts of the Union) provides for (a) harmonised rules for the placing on the market, the putting into service and the use of AI systems in the Union; (b) the prohibition of certain AI practices; (c) specific requirements for high-risk AI systems and obligations for operators of such systems; (d) harmonised transparency rules for AI systems intended to interact with natural persons, for emotion recognition systems and biometric categorisation systems, and for AI systems used to generate or manipulate image, audio or video content; and (e) rules on market monitoring and surveillance.

High-risk AI systems are systems used in the following areas:

  • Biometric identification and categorisation of natural persons
  • Critical infrastructure management and operations
  • Education and vocational training
  • Employment, workforce management and access to self-employment
  • Access to essential private and public services and benefits
  • Law enforcement
  • Migration, asylum and border management
  • Administration of justice and democratic processes

The AI Act requires, inter alia, that high-risk AI systems comply with the requirements of the Regulation, have a risk management system in place, use quality training, validation and test data, and maintain technical documentation and relevant records.

Providers of high-risk AI systems must, inter alia, have a quality management system in place that meets the relevant requirements of the Regulation.

ISO/IEC 42001

ISO/IEC 42001 specifies requirements and provides guidance for establishing, implementing, maintaining and continually improving an AI management system within an organisation. ISO/IEC 42001 is intended to help an organisation develop, provide or use AI systems responsibly in pursuit of its objectives and to meet applicable requirements and the obligations and expectations of stakeholders. It is applicable to any organisation that provides or uses products or services that utilise AI systems.

Artificial intelligence management system according to ISO/IEC 42001

Governance and management of AI systems can be ensured by using an Artificial Intelligence Management System (AIMS) based on ISO/IEC 42001. An AIMS provides an environment that helps organisations fulfil their role in relation to AI systems responsibly (e.g. using, developing, monitoring or providing products and services that utilise AI systems).

Benefits of AIMS for the organisation

  • Increased credibility and competitiveness of the organisation
  • Strengthened long-term resilience of the organisation
  • Reduced risks to the business and reduced impact of AI systems on individuals, groups and society
  • Effective organisational leadership and increased return on investment
  • Compliance with regulatory, contractual and other societal needs and expectations

The scope of our professional services

1 | Analysis of the existing system and AIMS project planning

  • Analysis of the organisation’s context and gap analysis of the current state
  • Development of an AIMS project plan

2 | AIMS development and implementation

  • Identification and description of the boundaries and scope of the AIMS
  • Definition of the organisational structure, roles and responsibilities of individuals and relevant committees
  • Design of an AI policy
  • Set-up and documentation of AIMS processes

3 | Risk management and management actions

  • Selection and documentation of the risk management methodology
  • Identification, analysis and assessment of risks
  • Selection of risk treatment options and measures
  • Development of the Statement of Applicability (SoA)
  • Management of risk treatment plans
  • Impact assessment of AI systems

4 | Documentation of topic-specific policies and procedures

  • Design of the structure and management of AIMS documentation
  • Design and documentation of topic-specific policies and procedures
  • Support for the implementation of selected measures
  • Design and implementation of training and awareness activities

5 | Internal audit, supplier audit and certification audit support

  • Design and documentation of the AIMS internal audit process
  • Design of the AIMS audit programme and planning of audit activities
  • Performance of internal and supplier audits
  • Support for follow-up activities and post-audit actions
  • Preparation for and support during the certification audit

Use of advanced GRC applications

The complexity of implementing AIMS processes increases with the size of the organisation and the maturity of its existing management systems (e.g. an ISMS) and related measures. For large organisations with complex management systems, we recommend the use of advanced modular tools. For more information, please see the Applications section.

Quality of our services

When providing consulting services, we apply the quality standards for management consultancy services according to ISO 20700, for information security according to ISO/IEC 27001 and for project management according to ISO 21502. Competence of our consultants:

  • Certified ISO/IEC 42001 Lead Implementer.

When performing an internal audit (first-party audit) or an external supplier audit (second-party audit), we apply the best practices for auditing management systems defined in ISO 19011 and other relevant standards. Competence of our auditors:

  • Certified ISO/IEC 42001 Lead Auditor *

* in the certification process.

Are you interested?

