Interpretable Architecture Neural Networks for Function Visualization
From MaRDI portal
Publication: Q6181398
DOI: 10.1080/10618600.2023.2195461
arXiv: 2303.03393
MaRDI QID: Q6181398
Shengtong Zhang, Daniel W. Apley
Publication date: 22 January 2024
Published in: Journal of Computational and Graphical Statistics
Full work available at URL: https://arxiv.org/abs/2303.03393
Cites Work
- All Models are Wrong, but Many are Useful: Learning a Variable's Importance by Studying an Entire Class of Prediction Models Simultaneously
- Greedy function approximation: A gradient boosting machine
- Quasi-regression
- Design and analysis of computer experiments. With comments and a rejoinder by the authors
- Multilayer feedforward networks are universal approximators
- Bayesian Calibration of Computer Models
- Enhanced Topology-Sensitive Clustering by Reeb Graph Shattering
- Bayesian Design and Analysis of Computer Experiments: Use of Derivatives in Surface Prediction
- Predicting the output from a complex computer code when fast approximations are available
- Visualizing the Effects of Predictor Variables in Black Box Supervised Learning Models
- Approximation by superpositions of a sigmoidal function
- Random forests