Towards interpreting deep neural networks via layer behavior understanding
From MaRDI portal
Publication: 2673336
DOI: 10.1007/s10994-021-06074-8
zbMath: 1491.68178
OpenAlex: W4220673036
Wikidata: Q114955313
Scholia: Q114955313
MaRDI QID: Q2673336
Mingkui Tan, Xiping Hu, Xiangmiao Wu, Jiezhang Cao, Jincheng Li
Publication date: 9 June 2022
Published in: Machine Learning
Full work available at URL: https://doi.org/10.1007/s10994-021-06074-8
Uses Software
Cites Work
- Computational Optimal Transport: With Applications to Data Science
- Gradient descent optimizes over-parameterized deep ReLU networks
- Few-shot learning with adaptively initialized task optimizer: a practical meta-learning approach
- Spanning attack: reinforce black-box attacks with unlabeled data
- The Sinkhorn–Knopp Algorithm: Convergence and Applications
- On the information bottleneck theory of deep learning
- Dynamics of stochastic gradient descent for two-layer neural networks in the teacher–student setup
- Optimal Transport