Artificial Intelligence

Businesses deploy and use Artificial Intelligence (AI) responsibly and transparently. Legal, social, and environmental implications must be considered when expanding the application of AI systems. The implementation of AI systems should increase people’s scope of action in a value-based manner. The principles of fundamental rights, autonomy, and self-determination must be respected consistently.


Best Practices

Description

Technological advancement should not follow only the premise of optimization and economic efficiency. Deploying useful technologies can also expand people’s scope of action and considerably improve their working lives. Even where AI and other digital technologies play only a supporting role, attention must be paid to preserving autonomy and self-determination.

External Stakeholders: Investors, startups, policymakers, and international regulators

No binding requirements for the development and use of trustworthy AI systems exist so far.

  • Responsibility: Defining responsibilities for AI (sub)systems.
  • Enhanced clarity: A framework promotes transparency about the purpose and functioning of AI systems.
  • Security and protection: Information technology is robust, and risk-assessment mechanisms are in place.
  • Self-determination: Data and IT are protected against manipulation.
  • Equal opportunities and fairness: Discrimination through the deployment of information technology is actively prevented.

  • Secure management commitment to the use of AI systems
  • Define AI governance and red lines
  • Include diverse stakeholder groups
  • Check for the added value of AI systems
  • Conduct expectation management
  • Foster interdisciplinary exchange among stakeholders
  • Define and classify risks
  • Establish monitoring systems that review and audit AI systems
  • Empower employees and consumers to assess the implications of AI systems
  • Subject AI systems to regular assessments (CIP/PDCA); a minimal monitoring sketch follows this list
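
To make the monitoring and assessment items above concrete, the following minimal sketch shows how prediction outcomes could be checked against a tolerance and appended to an auditable log as part of a PDCA-style review. It is an illustration only; the function name, the metric, the threshold, and the file ai_audit_log.jsonl are assumptions, not requirements of this guideline.

    import datetime
    import json
    import statistics

    def audit_record(model_name, y_true, y_pred, threshold=0.9):
        """Compare recorded outcomes with model decisions and log the result.

        Illustrative sketch: a real monitoring system would also track
        fairness metrics and data drift and alert the responsible body.
        """
        accuracy = statistics.mean(
            1.0 if t == p else 0.0 for t, p in zip(y_true, y_pred)
        )
        record = {
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "model": model_name,
            "accuracy": round(accuracy, 3),
            "within_tolerance": accuracy >= threshold,
        }
        # Append to an audit trail that reviewers can inspect (PDCA "Check" step).
        with open("ai_audit_log.jsonl", "a", encoding="utf-8") as fh:
            fh.write(json.dumps(record) + "\n")
        return record

    # Hypothetical example: four logged decisions of a model named "credit_scoring_v2".
    print(audit_record("credit_scoring_v2", y_true=[1, 0, 1, 1], y_pred=[1, 0, 0, 1]))

In practice, such checks would also cover fairness indicators and data drift and would feed their findings back into the defined AI governance and risk classification.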

  • CDR Self-Commitment of the Federal Ministry of Justice and Consumer Protection
  • Report of the Data Ethics Commission
  • Report of the Commission of Enquiry on AI
  • AI Strategy of the Federal Republic of Germany
  • Data strategy of the Federal Government
  • DIN/DKE: German Standardization Roadmap Artificial Intelligence
  • Bitkom: A look into the black box
  • Fraunhofer IAIS: Trustworthy use of AI
  • Bertelsmann Stiftung: “From Principles to practice: How can we make AI ethics measurable?”
  • EU: Work of the AI HLEG and the European Commission’s Expert Group
  • Proposal for a Regulation of the European Parliament and of the Council laying down harmonized rules on AI (Artificial Intelligence Act) of April 21, 2021 (“EU AI Act”)
  • IEEE: Trustworthiness of Artificial Intelligence
  • DKE, DIN, VDE: Ethics and AI: What can technical norms and standards achieve?

  • AI Campus (pilot project of the BMBF with the Stifterverband, the German Research Center for Artificial Intelligence (DFKI), the Hasso Plattner Institute (HPI), NEOCOSMO, and the mmb Institute)
  • Elements of AI (training offer of the University of Helsinki, available in German with the support of the IHK under the auspices of the BMWi)


  • Explainable AI, e.g., DALEX or Anaconda (a minimal DALEX sketch follows this list)
  • Explainable AI Toolset: DrWhy
  • Google Explainable AI
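
As an illustration of the explainable-AI tools listed above, the following minimal sketch assumes the Python port of DALEX (the dalex package) together with scikit-learn; the dataset and model are placeholders rather than a recommendation.

    import dalex as dx
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    # Illustrative data and model; any scikit-learn-compatible model can be wrapped.
    data = load_breast_cancer(as_frame=True)
    X, y = data.data, data.target
    model = RandomForestClassifier(random_state=0).fit(X, y)

    # Wrap the model in a DALEX explainer object.
    explainer = dx.Explainer(model, X, y, label="random_forest")

    # Global view: which features drive the model's behavior overall?
    importance = explainer.model_parts()
    print(importance.result)

    # Local view: how does the model arrive at one individual prediction?
    breakdown = explainer.predict_parts(X.iloc[[0]])
    print(breakdown.result)

Comparable global and local explanations can be produced with the other tools in the list; the point is to make the purpose and functioning of an AI system transparent to those affected by it.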