Deep Learning World Expert Round 4: Explainable & Responsible AI

Date: Monday, June 14, 2021

Time: 1:00 pm

Summary:

1. Introduction to Explainable AI Using a Real World Demo (Julia Brosig)

Explainable AI helps users understand the decisions of black-box models and thus improves confidence in machine learning models. XAI methods provide global or local insights into a black-box model's decisions. For this presentation, various global and local interpretable machine learning methods were applied to a use case based on the Munich rent index. The best-performing methods were implemented and visualized in a dashboard designed to be understandable even for non-statisticians.
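To make the global/local distinction concrete, here is a minimal sketch, not the presenters' actual pipeline: it fits a regressor on invented rent-like data, then computes a global explanation via scikit-learn's permutation importance and a local explanation for a single prediction via SHAP values. The feature names and data are hypothetical, chosen only to echo the rent-index setting.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
import shap  # assumes the shap package is installed

rng = np.random.default_rng(0)
# Hypothetical features: living area (m^2), building age (years), district index
X = rng.uniform([20, 0, 0], [150, 100, 25], size=(500, 3))
y = 8.0 * X[:, 0] - 2.0 * X[:, 1] + 15.0 * X[:, 2] + rng.normal(0, 50, 500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Global view: how much does shuffling each feature hurt the model overall?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["area_m2", "age_years", "district"], result.importances_mean):
    print(f"{name}: {imp:.3f}")

# Local view: which features drove the prediction for one specific flat?
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])
print("local SHAP contributions:", shap_values[0])
```

A dashboard like the one described would typically render the global scores as a bar chart and the local contributions per prediction, so that non-statisticians can read off which inputs mattered.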

2. The State of Secure and Trusted AI 2021 (Alex Polyakov)

To raise security awareness in the field of Trusted AI, more than a year ago we started a project to analyze the past decade of academic, industry, and governmental progress. The eye-opening results reveal exponential growth of interest in testing AI systems for security and bias, alongside an absence of adequate defenses. This research aims to make companies aware of insecure and malicious AI by sharing insights drawn from almost 2,000 research papers.

3. Responsible AI: A Cross-Industry Overview (Aishwarya Srinivasan)

With the accelerated adoption of machine learning and AI technologies, maintaining a comprehensive view of how these technologies are used, the data involved, and how they interact with users has become increasingly complicated. The presentation surveys the varied areas in which industries are building user-facing AI applications and examines the responsibility of organizations to govern these systems and comply with the regulations around building them.
