General-purpose AI obligations: left-right differences emerge in Europe

European Parliament

The second technical meeting on general-purpose AI systems, held on 21 March, brought political differences to the surface over the obligations attached to such systems, a category that also covers foundation models like the one underlying ChatGPT (see our article).

At the previous meeting on 15 March, the co-rapporteurs had proposed subjecting these systems to risk-assessment and data-governance requirements (see below). While the centre-left declared itself in favour of those provisions, the centre-right considers the obligations "too difficult to comply with", as a parliamentary source told the French agency Contexte.

According to the same source, some points of agreement could nevertheless be reached, such as the obligation to register general-purpose systems in the EU database (Article 28b.1.e).

The negotiating group is also leaning towards deleting Annex IXa (see below), since most political groups believe that this list of information the original provider must pass on to downstream providers adds no value, being merely indicative.

The next technical meeting is scheduled for 27 March, according to a provisional calendar published by the office of MEP Axel Voss (EPP). Three political meetings are currently planned: 29 March, 13 April and 19 April.

Below is the full text of Article 28b on the obligations of providers of general-purpose AI systems, as set out in the compromise of 15 March 2023.

Article 28b (new)
Obligations of the provider of a general purpose AI system

1. Without prejudice to Articles 5 and 52 of this Regulation, a provider of a general purpose AI system shall, prior to making it available on the market or putting it into service, ensure that it is compliant with the following requirements, regardless of whether it is provided as a standalone model or embedded in an AI system or a product, or distributed through open source, API or both, as well as other distribution channels. When fulfilling those requirements, the generally acknowledged state of the art shall be taken into account, including as reflected in relevant harmonised standards or common specifications.

  • (a) General purpose AI systems shall be able to perform consistently with the objective of this Regulation of ensuring safety and respect of existing law on fundamental rights and Union values. This shall be demonstrated through appropriate design, testing and analysis that ensure identification, reduction and mitigation of use-agnostic risks in line with Article 9, mutatis mutandis, prior and throughout development, and documentation of non-mitigable risks remaining after development and reasonably foreseeable misuse.
  • (aa) the data on which the general purpose AI systems are developed shall be subject to appropriate data governance measures, including:
    • (i) the relevant design choices;
    • (ia) formulation of assumptions, notably with respect to the information that the data are supposed to measure and represent;
    • (ii) assessment of the suitability of the data sets;
    • (iii) examination in view of possible biases and appropriate mitigation measures;
    • (iv) the identification of possible data gaps or shortcomings;
    • (v) measures to ensure that the data are representative and appropriately vetted for errors;
  • (b) General purpose AI systems shall be designed and developed in such a way as to achieve throughout their lifetime use-agnostic levels of statistical performance, predictability, interpretability, corrigibility, safety and cybersecurity in line with Article 15 of this Regulation. These levels shall be assessed through model evaluation by competent external independent experts selected in consultation with the AI Office and documented analysis and testing during conceptualisation, design, and development, in line with the latest assessment and measurement methods, reflected notably in benchmarking guidance and capabilities referred to in Article 58a (new).
  • (c) General purpose AI systems shall be accompanied by intelligible instructions in line with Article 13.2 and 13.3, mutatis mutandis, in order to enable prospective providers to comply with their obligations pursuant to Article 28.2.
  • (d) When trained to be used to generate, autonomously or on the basis of limited human input, complex text content that would falsely appear to a person to be human generated and authentic, such as news articles, opinion articles, novels, scripts, and scientific articles, general purpose AI systems shall in addition be subject to the obligations outlined in Article 10 and Article 52(x), with the exception of such AI systems used exclusively for content that undergoes human review and for the publication of which a natural or legal person is liable or holds editorial responsibility.
  • (e) Before placing on the market or putting into service a general purpose AI system, providers of that system shall register that general purpose AI system in the EU database referred to in Article 60, in accordance with the instructions outlined in Annex VIII paragraph C.

2. A provider of a general purpose AI system shall establish a quality management system as described in Article 17 and draw up technical documentation as referred to in Article 11 to ensure and document compliance with this Article, and can experiment in fulfilling these requirements provided they make their best efforts to ensure an equivalent level of compliance.

3. For the purpose of complying with the obligations set out in this Article, providers of such systems shall follow the conformity assessment procedure based on internal control set out in Annex VI, points 3 and 4.

4. Providers of such systems shall also keep the technical documentation referred to in paragraph 2 at the disposal of the national competent authorities for a period ending ten years after the general purpose AI system is placed on the Union market or put into service in the Union.

Below is the full text of Annex IXa, as set out in the compromise of 15 March 2023, which is slated for deletion on the grounds that the information to be transmitted would be "without added value".

ANNEX IXa (new)
EXAMPLES OF INFORMATION AND OTHER ASSISTANCE BY THE GENERAL PURPOSE AI PROVIDER TO DOWNSTREAM OPERATORS
The following information shall be taken into account by the provider of a general purpose AI system to comply with the obligations laid down in Article 28 paragraph 2 of this Regulation:

To enable compliance with downstream providers’ risk management obligations under Article 9 of the Regulation:

  • Information about the capabilities and limitations of the general purpose AI system, including a description of the functionality it offers;
  • Instructions for how the general purpose AI system should be used;
  • A detailed description of any relevant testing that has been done by or on behalf of the provider of the general purpose AI system with respect to the system’s performance, including a summary of the testing methodology used;
  • Information about steps taken by the provider of the general purpose AI system to identify and mitigate the known and reasonably foreseeable risks that can be reasonably mitigated through the development or supply of the general purpose AI system, as applicable;
  • Any relevant information to assist providers of high-risk AI systems conducting performance testing as required by this Regulation.

To enable compliance with downstream providers’ data governance obligations under
Article 10 of the Regulation:

  • An overview of the relevant design choices as well as a summary of the data sources on which the general purpose AI system was trained, as applicable;
  • An overview of how the training data was collected and processed;
  • The formulation of relevant assumptions in relation to the data, notably with respect to the information that the data are supposed to measure and represent;
  • An assessment of known or reasonably foreseeable biases in the data;
  • The identification of known possible gaps or shortcomings in the data and how they may be addressed.

To enable compliance with downstream providers’ technical documentation obligations under Article 11 of the Regulation:

  • The name of the general purpose AI system provider, registered trade name or registered trademark, the address at which it can be contacted;
  • The date and version of the general purpose AI system, how its architecture interacts or can be used to interact with hardware or software that is not part of the AI system itself, versions of relevant software or firmware, the description of hardware on which the AI system is intended to run;
  • The design specifications, including the general logic of the general purpose AI system and its algorithms, its key design choices including the rationale and assumptions made, and the main classification choices;
  • The expected lifetime of the general purpose AI system and any necessary maintenance and care measures to ensure the proper functioning of that system, including as regards software updates;
  • The known or foreseeable circumstances, related to the envisioned use of the general purpose AI system at the time of design and training, which may later lead to risks to health and safety, fundamental rights, democracy and the rule of law, or the environment, as well as installed mitigation measures based on the generally acknowledged state of the art to manage the risks associated with the design of the system.

To enable compliance with downstream providers’ record keeping obligations under Article 12 of the Regulation:

  • Documentation about the nature and format of the general purpose AI system’s input and output data.

To enable compliance with downstream providers’ transparency and human oversight obligations under Articles 13 and 14 of the Regulation:

  • Relevant and appropriate information to help providers draft instructions that allow a trained deployer to understand the system’s output and perform human oversight;
  • An overview of the design and development choices that could have an effect on the potential inclusion of human oversight mechanisms in a high risk AI system.

To enable compliance with downstream providers’ accuracy, robustness, and cybersecurity obligations under Article 15 of the Regulation:

  • A detailed description of any relevant testing that has been done by or on behalf of the provider of the general purpose AI system with respect to its performance, including a summary of the testing methodology used;
  • Any relevant information to assist providers of high-risk AI systems with conducting performance testing as required by this Regulation.

To enable compliance with relevant aspects of the downstream providers’ obligation to establish a quality management system under Article 17:

  • A description of design, design control, design verification, quality control, quality assurance, examination, test and validation actions or procedures carried out before, during and after the development of the general purpose AI system, in accordance with generally acknowledged state of the art in these domains;
  • Where relevant, such as when the general purpose AI system is provided through an API, risk management measures and procedures undertaken by the general purpose AI system provider while the AI system is in use as well as measures and procedures for the general purpose AI system provider to report serious incidents of the general purpose AI system.

Other relevant information downstream providers require in order to comply with their obligations, including the obligation to undertake a conformity assessment under Article 43 of this Regulation, or to take corrective actions under Articles 21, 65 or 67 of this Regulation.