
Explainable AI: Interpreting, Explaining and Visualizing Deep Learning

  • Wojciech Samek
  • Grégoire Montavon
  • Andrea Vedaldi
  • Lars Kai Hansen
  • Klaus-Robert Müller

Part of the Lecture Notes in Computer Science book series (LNCS, volume 11700)

Also part of the Lecture Notes in Artificial Intelligence subseries (LNAI, volume 11700)

Table of contents

  1. Front Matter
    Pages i-xi
  2. Part I Towards AI Transparency

    1. Front Matter
      Pages 1-3
    2. Wojciech Samek, Klaus-Robert Müller
      Pages 5-22
    3. Adrian Weller
      Pages 23-40
    4. Lars Kai Hansen, Laura Rieger
      Pages 41-49
  3. Part II Methods for Interpreting AI Systems

    1. Front Matter
      Pages 51-53
    2. Anh Nguyen, Jason Yosinski, Jeff Clune
      Pages 55-76
    3. Seunghoon Hong, Dingdong Yang, Jongwook Choi, Honglak Lee
      Pages 77-95
    4. Weihua Hu, Takeru Miyato, Seiya Tokui, Eiichi Matsumoto, Masashi Sugiyama
      Pages 97-119
    5. Seong Joon Oh, Bernt Schiele, Mario Fritz
      Pages 121-144
  4. Part III Explaining the Decisions of AI Systems

    1. Front Matter
      Pages 145-147
    2. Ruth Fong, Andrea Vedaldi
      Pages 149-167
    3. Marco Ancona, Enea Ceolini, Cengiz Öztireli, Markus Gross
      Pages 169-191
    4. Grégoire Montavon, Alexander Binder, Sebastian Lapuschkin, Wojciech Samek, Klaus-Robert Müller
      Pages 193-209
    5. Leila Arras, José Arjona-Medina, Michael Widrich, Grégoire Montavon, Michael Gillhofer, Klaus-Robert Müller et al.
      Pages 211-238
  5. Part IV Evaluating Interpretability and Explanations

    1. Front Matter
      Pages 239-241
    2. Bolei Zhou, David Bau, Aude Oliva, Antonio Torralba
      Pages 243-252
    3. Grégoire Montavon
      Pages 253-265
    4. Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne et al.
      Pages 267-280
  6. Part V Applications of Explainable AI

    1. Front Matter
      Pages 281-284
    2. Markus Hofmarcher, Thomas Unterthiner, José Arjona-Medina, Günter Klambauer, Sepp Hochreiter, Bernhard Nessler
      Pages 285-296
    3. Christopher J. Anders, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller
      Pages 297-309
    4. Kristof T. Schütt, Michael Gastegger, Alexandre Tkatchenko, Klaus-Robert Müller
      Pages 311-330
    5. Kristina Preuer, Günter Klambauer, Friedrich Rippmann, Sepp Hochreiter, Thomas Unterthiner
      Pages 331-345
    6. Frederik Kratzert, Mathew Herrnegger, Daniel Klotz, Sepp Hochreiter, Günter Klambauer
      Pages 347-362
    7. Pamela K. Douglas, Ariana Anderson
      Pages 363-378
    8. Marcel A. J. van Gerven, Katja Seeliger, Umut Güçlü, Yağmur Güçlütürk
      Pages 379-394
  7. Part VI Software for Explainable AI

    1. Front Matter
      Pages 395-397
    2. Maximilian Alber
      Pages 399-433
  8. Back Matter
    Pages 435-439

About this book

Introduction

The development of “intelligent” systems that can make decisions and act autonomously promises faster and more consistent decisions. A limiting factor for the broader adoption of AI technology, however, is the inherent risk that comes with ceding human control and oversight to “intelligent” machines. For sensitive tasks involving critical infrastructures or affecting human well-being or health, it is crucial to limit the possibility of improper, non-robust, and unsafe decisions and actions. Before deploying an AI system, we see a strong need to validate its behavior and thus establish guarantees that it will continue to perform as expected in a real-world environment. In pursuit of that objective, researchers have explored ways for humans to verify the agreement between the AI decision structure and their own ground-truth knowledge. Explainable AI (XAI) has developed as a subfield of AI focused on exposing complex AI models to humans in a systematic and interpretable manner.

The 22 chapters included in this book provide a timely snapshot of recently proposed algorithms, theory, and applications of interpretable and explainable AI, reflecting the current discourse in the field and pointing to directions for future development. The book is organized in six parts: towards AI transparency; methods for interpreting AI systems; explaining the decisions of AI systems; evaluating interpretability and explanations; applications of explainable AI; and software for explainable AI.

Keywords

artificial intelligence, computer vision, deep learning, explainable AI, explanation methods, fuzzy control, fuzzy models, fuzzy rules, fuzzy systems, image processing, interpretability, machine learning, neural networks, semantics, transparency, visualization

Editors and affiliations

  1. Fraunhofer Heinrich Hertz Institute, Berlin, Germany
  2. Technische Universität Berlin, Berlin, Germany
  3. University of Oxford, Oxford, UK
  4. Technical University of Denmark, Kgs. Lyngby, Denmark
  5. Sekretariat MAR 4-1, Technical University of Berlin, Berlin, Germany

Bibliographic information

  • DOI https://doi.org/10.1007/978-3-030-28954-6
  • Copyright Information Springer Nature Switzerland AG 2019
  • Publisher Name Springer, Cham
  • eBook Packages Computer Science
  • Print ISBN 978-3-030-28953-9
  • Online ISBN 978-3-030-28954-6
  • Series Print ISSN 0302-9743
  • Series Online ISSN 1611-3349