"Deep neural networks remain for the most part black boxes"
Applied in a wide variety of domains, from genomics to autonomous driving and from speech recognition to gaming, neural network-based solutions require validation, or at least some explanation, of how the system makes its decisions. This is especially true in the medical domain, where such decisions can contribute to the survival or death of a patient. “Unfortunately, the very large number of parameters in deep neural networks is extremely challenging for explanation methods to cope with, and these networks remain for the most part black boxes. This demonstrates the real need for accurate explanation methods that can scale with this large number of parameters and provide useful information to a potential user,” explains Prof. Pena Carlos Andrés from HEIG-VD.
The professor was the keynote speaker of the 5th Valais/Wallis AI Workshop held at Idiap. If you missed it, you can watch his talk “Rule and knowledge extraction from deep neural networks” below:
WEBCAST
Below you will find the talk of the keynote speaker, Prof. Pena Carlos Andrés (HEIG-VD).
To watch all of the workshop's talks, please click on the links below.
- Keynote speech by Prof. Pena Carlos Andrés, HEIG-VD: "Methods for Rule and Knowledge Extraction from Deep Neural Networks" - Q&A
- Hannah Muckenhirn, Idiap Research Institute: "Visualizing and understanding raw speech modeling with convolutional neural networks" - Q&A
- Mara Graziani, HES-SO Valais-Wallis: "Concept Measures to Explain Deep Learning Predictions in Medical Imaging"
- Suraj Srinivas, Idiap Research Institute: "What do neural network saliency maps encode?"
- Dr Vincent Andrearczyk, HES-SO Valais-Wallis: "Transparency of rotation-equivariant CNNs via local geometric priors" - Q&A
- Dr Sylvain Calinon, Idiap Research Institute: "Interpretable models of robot motion learned from few demonstrations" - Q&A
- Xavier Ouvrard, University of Geneva / CERN: "The HyperBagGraph DataEdron: An Enriched Browsing Experience of Scientific Publication Databases"
- Seyed Moosavi, Signal Processing Laboratory 4 (LTS4), EPFL: "Improving robustness to build more interpretable classifiers" - Q&A
- Sooho Kim, UniGe: "Interpretation of an End-to-end One-Dimensional Convolutional Neural Network for Fault Diagnosis on a Planetary Gearbox"