Registration opens April 2019 | San Diego, CA | September 18–21, 2019

2019 Tapia Conference

Moving From Black Boxes to Explainable Artificial Intelligence

Saturday, September 21, 2019 — 10:30 AM – 12:00 PM

Rapid advancements in deep learning are supporting a rising tide of autonomous, decision-based systems and services. We already enjoy the benefits of services such as voice-based assistants and image correction and annotation. However, applying large, complex neural network models to more sensitive applications, such as human behavior prediction, vehicle automation, medical diagnosis, and military engagement, raises significant concerns. While deep learning has demonstrated breakthrough performance for some of these applications, the rationale behind automated decisions is often uninterpretable to critical stakeholders (e.g., doctors, patients, justice officials). Hence, explainable artificial intelligence has become an increasingly important topic. This includes (but is not limited to) advancements in research areas such as visual analytics for intuitively expressing deep learning model behavior, and natural language processing (and other HCI techniques) for expressing the rationale behind deep learning model output. Improving AI explainability also requires fundamental innovations in deep learning model design (i.e., investigating alternatives to increasingly complex network structures). Overall, it is prudent that we recognize the risks and face the challenges associated with “black box” decision-making for critical applications. This panel will discuss such risks, the challenges and feasibility of explainable AI, and current advancements and untapped opportunities in developing explainable AI.

Dr. Joel Branch, Lucd, Inc.


Maria Alvarez, Microsoft
Mehdi Nourbakhsh, Autodesk
Javona White Bear, MIT Lincoln Laboratory
Meg Pirrung, Pacific Northwest National Laboratory