XAIR Data and Metadata Principles

KGA-WG-XAIR-24.4

The « Explainable-AI-ready data and metadata principles » (XAIR) WG sets out to define principles that ensure data and models are inherently explainable, addressing the urgent need for transparency and interpretability in AI applications. The group’s guiding tenet is that « Data are XAIR to the degree that they are semantically enriched so that best use can be made of interpretable learning techniques, » including both inductive learning and logical deduction. With the European Commission’s Battery Regulation mandating digital product passports (DPPs) for an expanding range of products, explainable-AI readiness (XAIR) is no longer merely desirable but an emerging legal requirement. The initiative encapsulates its ethos in the slogan “FAIR and XAIR data,” advocating a future in which the reliability of data is underpinned by its explainability, so that it meets both ethical standards and regulatory requirements.
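
To make “semantically enriched” concrete, the following minimal sketch annotates a single data point with terms from a domain vocabulary so that it can serve both inductive learners and deductive reasoners. It uses the Python library rdflib; all names and the namespace IRI are hypothetical illustrations, not WG deliverables.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, RDFS, XSD

    # Hypothetical namespace; a real XAIR setup would reuse published ontologies.
    EX = Namespace("https://example.org/xair-demo#")

    g = Graph()
    g.bind("ex", EX)

    # Annotate one measurement: what it is, what it describes, and its value.
    g.add((EX.m001, RDF.type, EX.ViscosityMeasurement))
    g.add((EX.m001, EX.ofMaterial, EX.electrolyteSample42))
    g.add((EX.m001, EX.hasValue, Literal(0.89, datatype=XSD.double)))
    g.add((EX.m001, EX.hasUnit, EX.milliPascalSecond))
    g.add((EX.m001, RDFS.comment, Literal("Semantically enriched data point.")))

    print(g.serialize(format="turtle"))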

Scope and Purpose

The WG focuses on establishing clear, actionable principles for enhancing the explainability of data and models in the face of evolving applications and regulatory requirements. Its primary mission is to facilitate the creation of data and models that are not only interpretable and fair but also compliant with emerging legal mandates, such as those relating to digital product passports. This initiative, involving key collaborators such as GIPSA, B CARD, and the German national research data infrastructure for mathematics, aims to:

– Identify and harmonize core concepts related to data and model explainability across different communities and philosophical perspectives.
– Develop and refine ontologies that cater to the specific needs of explainable AI, fostering good practices within the community (see the illustrative sketch after this list).
– Produce stable versions of these ontologies for practical application, ensuring data is FAIR (Findable, Accessible, Interoperable, and Reusable) and XAIR (Explainable-AI-ready), thus meeting both ethical and regulatory standards.
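
As a purely illustrative sketch of what a mid-level ontology fragment for explainability could look like, the snippet below defines a tiny class hierarchy and one property with rdflib; the class names and IRI are invented for this example and do not anticipate the WG’s actual ontologies.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import OWL, RDF, RDFS

    # Hypothetical namespace and terms, for illustration only.
    XD = Namespace("https://example.org/xair-demo-onto#")

    g = Graph()
    g.bind("xd", XD)

    # A tiny class hierarchy: rule-based explanations are one kind of explanation.
    for cls in (XD.Explanation, XD.RuleBasedExplanation, XD.Model):
        g.add((cls, RDF.type, OWL.Class))
    g.add((XD.RuleBasedExplanation, RDFS.subClassOf, XD.Explanation))

    # A property linking models to the explanations justifying their outputs.
    g.add((XD.isExplainedBy, RDF.type, OWL.ObjectProperty))
    g.add((XD.isExplainedBy, RDFS.domain, XD.Model))
    g.add((XD.isExplainedBy, RDFS.range, XD.Explanation))
    g.add((XD.isExplainedBy, RDFS.label, Literal("is explained by", lang="en")))

    print(g.serialize(format="turtle"))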

Operating over a 40-month period, the group aims to create a framework that encourages transparency and accountability in AI systems, setting a precedent for the role of explainable AI in legal and regulatory contexts.

Expected outcome of the WG

The expected outcomes of the WG are multifaceted, aiming to set a new standard for the development and use of AI systems. The group intends to:

  1. Develop Clear Principles: Formulate clear, actionable principles that guide the creation and use of data and models to ensure they are inherently explainable, catering to both new applications and regulatory requirements.

  2. Harmonize Core Concepts: Identify and harmonize core concepts related to explainability across various communities and philosophical perspectives, promoting a unified understanding that facilitates cross-disciplinary collaboration.

  3. Ontology Development: Develop and refine ontologies specifically designed for explainable AI, ensuring that data and models are semantically enriched to maximize the use of interpretable learning techniques, including both inductive learning (XAI in the narrow sense) and logical reasoning (deduction); a minimal sketch of this interplay follows this list.

  4. Establish Good Practices: Advocate and establish best practices for creating, handling, and sharing data and models that are explainable by design, thereby setting benchmarks for transparency and accountability in AI.

  5. Release Stable Ontology Versions: Produce and release stable versions of these ontologies for practical application, ensuring that they are accessible, interoperable, and reusable (FAIR), in addition to being explainable (XAIR).

  6. Legal and Ethical Compliance: Address and fulfill emerging legal requirements, such as those related to digital product passports under the European Commission’s regulations, by ensuring data and AI models are explainable, trustworthy, and hence compliant.

  7. Community Engagement and Feedback: Engage with the broader community through requests for comments on the developed principles and ontologies, fostering an inclusive environment that values feedback and collaborative refinement.

  8. Evaluation of Objective Achievement: Conduct evaluations at critical milestones, such as the end of Horizon projects, to assess how far the objectives have been achieved and what impact the group’s efforts have had on making AI more explainable and ethically sound.
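
As a minimal sketch of how inductive learning and deduction can interlock (cf. item 3), the snippet below induces a shallow decision tree with scikit-learn and exports its decision rules in human-readable form; such if–then rules could, in principle, be rewritten as logical axioms for downstream deduction. The dataset and depth limit are arbitrary illustrative choices, not a method prescribed by the WG.

    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    iris = load_iris()

    # Induce an interpretable model: a depth-limited decision tree.
    clf = DecisionTreeClassifier(max_depth=2, random_state=0)
    clf.fit(iris.data, iris.target)

    # Export the learned decision rules as readable text; these if-then rules
    # are the kind of artefact that deductive reasoning can build upon.
    print(export_text(clf, feature_names=list(iris.feature_names)))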

By achieving these outcomes, the XAIR Working Group aims to significantly contribute to the advancement of explainable AI, ensuring that data and AI models are not only technically proficient but also ethically responsible and legally compliant, in line with the slogan “FAIR and XAIR data.”

Tentative schedule

The WG has set the following roadmap for the next 40 months:

  • Months 1 to 10: Synopsis of the literature on core concepts:
    • Identify the core concepts; analyze and summarize the literature characterizing them.
    • M10, report, public, « Synopsis of XAIR core concepts. »
  • Months 11 to 16: Work on actionable core concepts:
    • Discussion of use cases; annotation of examples; draft of mid-level and domain ontologies.
    • M16, report, public, « Request for comments on actionable XAIR core concepts. »
  • Months 17 to 22: Work and consultation on XAIR principles:
    • Community discussion toward “XAIR principles.”
    • M22, report, public, « Request for comments on XAIR data and metadata […] principles […] ».
  • Months 23 to 34: Stable release of mid-level and domain ontologies:
    • Ontologies for XAIR are finalized to the extent that no major changes are expected in the future.
    • M34, ontologies, public, « […] Ontologies covering the XAIR key concepts […] ».
  • Months 35 to 40: XAIR forward plan and ontology governance:
    • Critical analysis of accomplishments and forward-looking discussion.
    • M40, report, public, « XAIR forward plan and ontology governance. »

Member relationship

Members of this WG are expected to actively contribute to the group’s goals through various means, reflecting a commitment to advancing the principles of explainable AI. Their responsibilities include:

  1. Active Participation: Engage regularly in discussions, workshops, and meetings to collaboratively identify, refine, and harmonize core concepts and principles related to explainable AI.

  2. Expert Contribution: Provide expertise in specific areas relevant to the working group, such as ontology development, AI ethics, legal frameworks, or specific domain knowledge, to guide the creation of clear, actionable principles and best practices.

  3. Community Engagement: Facilitate and participate in community feedback initiatives, such as requests for comments on draft principles and ontologies, to ensure the group’s outputs are robust, widely accepted, and applicable.

  4. Documentation and Publication: Assist in documenting the group’s findings, methodologies, and ontologies, contributing to publications, reports, and guidelines that will disseminate the working group’s efforts to a broader audience.

  5. Implementation and Testing: Where applicable, implement and test the developed principles and ontologies within their own projects or organizations to provide practical feedback and insights, ensuring the group’s outputs are not only theoretically sound but also practically viable.

As for crediting their contributions, members can expect:

  1. Co-authorship: Members who significantly contribute to publications, reports, and guidelines will be credited as co-authors, recognizing their intellectual contributions.

  2. Acknowledgment: Contributions to discussions, feedback, and implementation efforts will be acknowledged in relevant documents and presentations, ensuring recognition of the diverse expertise brought by the members.

  3. Professional Visibility: Participation in this high-profile working group may enhance members’ professional visibility and credibility within their fields, owing to their contributions to advancing explainable AI principles.

  4. Networking Opportunities: Members will have opportunities to connect with leading experts and organizations, fostering professional relationships that can lead to future collaborations beyond the life of the working group.

  5. Contribution to Standards: By contributing to the development of new standards for explainable AI, members will be recognized as pioneers in shaping the future of AI practices, potentially influencing legislation, industry standards, and ethical guidelines.

The recognition of members’ contributions aims to ensure that their expertise and efforts are duly acknowledged, promoting a sense of ownership and achievement that benefits both the individual contributors and the wider AI community.

Contact:

Martin Thomas Horsch

Associate Professor, Norwegian University of Life Sciences