References
Aas, Kjersti, Martin Jullum, and Anders Løland. 2021. “Explaining
Individual Predictions When Features Are Dependent: More
Accurate Approximations to Shapley Values.”
Artificial Intelligence 298 (September): 103502. https://doi.org/10.1016/j.artint.2021.103502.
Adadi, Amina, and Mohammed Berrada. 2018. “Peeking
Inside the Black-Box: A Survey on
Explainable Artificial Intelligence
(XAI).” IEEE Access 6: 52138–60. https://doi.org/10.1109/ACCESS.2018.2870052.
Adebayo, Julius, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz
Hardt, and Been Kim. 2018. “Sanity Checks for Saliency
Maps.” Advances in Neural Information Processing Systems
31.
Alemohammad, Sina, Josue Casco-Rodriguez, Lorenzo Luzi, Ahmed Imtiaz
Humayun, Hossein Babaei, Daniel LeJeune, Ali Siahkoohi, and Richard G
Baraniuk. 2023. “Self-Consuming Generative Models Go MAD.”
arXiv Preprint arXiv:2307.01850.
Alimohamadi, Yousef, Mojtaba Sepandi, Maryam Taghdir, and Hadiseh
Hosamirudsari. 2020. “Determine the Most Common Clinical Symptoms
in COVID-19 Patients: A Systematic Review and Meta-Analysis.”
Journal of Preventive Medicine and Hygiene 61 (3): E304. https://doi.org/10.15167/2421-4248/jpmh2020.61.3.1530.
“AlphaFold DB Website.” 2023. https://alphafold.ebi.ac.uk/.
Anderson, Chris. 2008. “The End of Theory: The Data Deluge Makes
the Scientific Method Obsolete.” Wired Magazine 16 (7):
16–07.
Antoniou, Antreas, Amos Storkey, and Harrison Edwards. 2017. “Data
Augmentation Generative Adversarial Networks.” arXiv Preprint
arXiv:1711.04340.
Apley, Daniel W., and Jingyu Zhu. 2020. “Visualizing the
Effects of Predictor Variables in Black
Box Supervised Learning Models.” Journal of the Royal
Statistical Society Series B: Statistical Methodology 82 (4):
1059–86. https://doi.org/10.1111/rssb.12377.
Arjovsky, Martin, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz.
2019. “Invariant Risk Minimization.” arXiv Preprint
arXiv:1907.02893.
Arnold, Becky, Louise Bowler, Sarah Gibson, Patricia Herterich, Rosie
Higman, Anna Krystalli, Alexander Morley, Martin O’Reilly, Kirstie
Whitaker, et al. 2019. “The Turing Way: A Handbook for
Reproducible Data Science.” Zenodo.
Athey, Susan, Julie Tibshirani, and Stefan Wager. 2019.
“Generalized Random Forests.” The Annals of Statistics 47 (2): 1148–78. https://doi.org/10.1214/18-AOS1709.
Ayoub, Fares, Toshiro Sato, and Atsushi Sakuraba. 2021. “Football
and COVID-19 Risk: Correlation Is Not Causation.” Clinical
Microbiology and Infection 27 (2): 291–92. https://doi.org/10.1016/j.cmi.2020.08.034.
Bach, Philipp, Victor Chernozhukov, Malte S. Kurz, and Martin Spindler. 2022.
“DoubleML – An Object-Oriented
Implementation of Double Machine Learning in
Python.” Journal of Machine Learning
Research 23 (53): 1–6.
Bach, P., V. Chernozhukov, M. S. Kurz, and M. Spindler. 2021.
“DoubleML – An Object-Oriented
Implementation of Double Machine Learning in R.” https://doi.org/10.32614/cran.package.doubleml.
Balestriero, Randall, Jerome Pesenti, and Yann LeCun. 2021.
“Learning in High Dimension Always Amounts to
Extrapolation.” arXiv Preprint arXiv:2110.09485.
Barnard, Etienne, and LFA Wessels. 1992. “Extrapolation and
Interpolation in Neural Network Classifiers.” IEEE Control
Systems Magazine 12 (5): 50–53. https://doi.org/10.1109/37.158898.
Bartlett, Peter L., Philip M. Long, Gábor Lugosi, and Alexander Tsigler.
2020. “Benign Overfitting in Linear
Regression.” Proceedings of the National Academy
of Sciences 117 (48): 30063–70. https://doi.org/10.1073/pnas.1907378117.
Basso, Bruno, and Lin Liu. 2019. “Seasonal Crop Yield Forecast:
Methods, Applications, and Accuracies.” In
Advances in Agronomy, 154:201–55.
Elsevier. https://doi.org/10.1016/bs.agron.2018.11.002.
Bates, Stephen, Trevor Hastie, and Robert Tibshirani. 2023.
“Cross-Validation: What Does It Estimate and How Well Does It Do
It?” Journal of the American Statistical Association,
1–12. https://doi.org/10.1080/01621459.2023.2197686.
Battaglia, Peter W, Jessica B Hamrick, Victor Bapst, Alvaro
Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea
Tacchetti, et al. 2018. “Relational Inductive Biases, Deep
Learning, and Graph Networks.” arXiv Preprint
arXiv:1806.01261.
Beckers, Sander, and Joseph Y Halpern. 2019. “Abstracting Causal
Models.” In Proceedings of the AAAI Conference on Artificial
Intelligence, 33 (01): 2678–85. https://doi.org/10.1609/aaai.v33i01.33012678.
Beckers, Sander, and Joost Vennekens. 2018. “A Principled Approach
to Defining Actual Causation.” Synthese 195 (2): 835–62.
https://doi.org/10.1007/s11229-016-1247-1.
Begoli, Edmon, Tanmoy Bhattacharya, and Dimitri Kusnezov. 2019.
“The Need for Uncertainty Quantification in Machine-Assisted
Medical Decision Making.” Nature Machine Intelligence 1
(1): 20–23. https://doi.org/10.1038/s42256-018-0004-1.
Belkin, Mikhail. 2021. “Fit Without Fear: Remarkable Mathematical
Phenomena of Deep Learning Through the Prism of Interpolation.”
Acta Numerica 30: 203–48. https://doi.org/10.1017/s0962492921000039.
Belkin, Mikhail, Daniel Hsu, Siyuan Ma, and Soumik Mandal. 2019.
“Reconciling Modern Machine-Learning Practice and the Classical
Bias–Variance Trade-Off.” Proceedings of the National Academy
of Sciences of the United States of America 116 (32): 15849–54. https://doi.org/10.1073/pnas.1903070116.
Bengio, Yoshua, Aaron Courville, and Pascal Vincent. 2013.
“Representation Learning: A Review and New Perspectives.”
IEEE Transactions on Pattern Analysis and Machine Intelligence
35 (8): 1798–828. https://doi.org/10.1109/TPAMI.2013.50.
Bird, Steven. 2006. “NLTK: The Natural Language Toolkit.”
In Proceedings of the COLING/ACL 2006 Interactive Presentation
Sessions, 69–72. https://doi.org/10.3115/1225403.1225421.
Bishop, Christopher M, and Nasser M Nasrabadi. 2006. Pattern
Recognition and Machine Learning. Vol. 4. Springer. https://doi.org/10.1007/978-0-387-45528-0.
Bloice, Marcus D., Christof Stocker, and Andreas Holzinger. 2017.
“Augmentor: An Image Augmentation Library for Machine
Learning.” Journal of Open Source Software 2 (19): 432.
https://doi.org/10.21105/joss.00432.
Bottou, Léon. 2010. “Large-Scale Machine
Learning with Stochastic Gradient
Descent.” In Proceedings of
COMPSTAT’2010, edited by Yves Lechevallier and Gilbert
Saporta, 177–86. Heidelberg: Physica-Verlag HD. https://doi.org/10.1007/978-3-7908-2604-3_16.
Breiman, Leo. 2001. “Random Forests.”
Machine Learning 45 (1): 5–32. https://doi.org/10.1023/A:1010933404324.
———. 2017. Classification and Regression
Trees. New York: Routledge. https://doi.org/10.1201/9781315139470.
Carpentras, Dino. 2024. “We Urgently Need a Culture of
Multi-Operationalization in Psychological Research.”
Communications Psychology 2 (1): 32.
Chakravartty, Anjan. 2017. “Scientific
Realism.” In The Stanford Encyclopedia of
Philosophy, edited by Edward N. Zalta, Summer 2017. Metaphysics
Research Lab, Stanford University. https://plato.stanford.edu/archives/sum2017/entries/scientific-realism/.
Chalapathy, Raghavendra, and Sanjay Chawla. 2019. “Deep Learning
for Anomaly Detection: A Survey.” arXiv Preprint
arXiv:1901.03407.
Chandrashekar, Girish, and Ferat Sahin. 2014. “A Survey on Feature
Selection Methods.” Computers & Electrical
Engineering 40 (1): 16–28. https://doi.org/10.1016/j.compeleceng.2013.11.024.
Chasalow, Kyla, and Karen Levy. 2021. “Representativeness in
Statistics, Politics, and Machine
Learning.” In Proceedings of the 2021
ACM Conference on Fairness,
Accountability, and Transparency, 77–89.
FAccT ’21. New York, NY, USA: Association for Computing
Machinery. https://doi.org/10.1145/3442188.3445872.
Chattopadhyay, Prithvijit, Ramakrishna Vedantam, Ramprasaath R
Selvaraju, Dhruv Batra, and Devi Parikh. 2017. “Counting Everyday
Objects in Everyday Scenes.” In Proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition, 1135–44. https://doi.org/10.1109/cvpr.2017.471.
Chernozhukov, Victor, Denis Chetverikov, Mert Demirer, Esther Duflo,
Christian Hansen, Whitney Newey, and James Robins. 2018.
“Double/Debiased Machine Learning for Treatment and Structural
Parameters.” The Econometrics Journal 21 (1): C1–C68. https://doi.org/10.1111/ectj.12097.
Chomsky, Noam, Ian Roberts, and Jeffrey Watumull. 2023. “Noam
Chomsky: The False Promise of ChatGPT.” The New York
Times 8.
Ciravegna, Gabriele, Frédéric Precioso, Alessandro Betti, Kevin Mottin,
and Marco Gori. 2023. “Knowledge-Driven Active Learning.”
In Joint European Conference on Machine Learning and Knowledge
Discovery in Databases, 38–54. Springer. https://doi.org/10.1007/978-3-031-43412-9_3.
Clemmensen, Line H., and Rune D. Kjærsgaard. 2023. “Data
Representativity for Machine
Learning and AI Systems.”
arXiv. http://arxiv.org/abs/2203.04706.
Collaboration, Open Science. 2015. “Estimating the Reproducibility
of Psychological Science.” Science 349 (6251): aac4716.
https://doi.org/10.1126/science.aac4716.
Collins, Gary S, Karel G M Moons, Paula Dhiman, Richard D Riley, Andrew
L Beam, Ben Van Calster, Marzyeh Ghassemi, et al. 2024.
“TRIPOD+AI Statement: Updated Guidance
for Reporting Clinical Prediction Models That Use Regression or Machine
Learning Methods.” BMJ (Clinical Research Ed.) 385. https://doi.org/10.1136/bmj-2023-078378.
Corso, Gabriele, Hannes Stark, Stefanie Jegelka, Tommi Jaakkola, and
Regina Barzilay. 2024. “Graph Neural Networks.” Nature
Reviews Methods Primers 4 (1): 17.
Covert, Ian C., Scott Lundberg, and Su-In Lee. 2020.
“Understanding Global Feature Contributions with Additive
Importance Measures.” In Proceedings of the 34th
International Conference on Neural Information
Processing Systems, 17212–23. NIPS’20.
Red Hook, NY, USA: Curran Associates Inc.
Cranmer, Kyle, Johann Brehmer, and Gilles Louppe. 2020. “The
Frontier of Simulation-Based Inference.” Proceedings of the
National Academy of Sciences 117 (48): 30055–62. https://doi.org/10.1073/pnas.1912789117.
Cybenko, George. 1989. “Approximation by Superpositions of a
Sigmoidal Function.” Mathematics of Control, Signals and
Systems 2 (4): 303–14. https://doi.org/10.1007/BF02551274.
D’Ecclesiis, Oriana, Costanza Gavioli, Chiara Martinoli, Sara Raimondi,
Susanna Chiocca, Claudia Miccolo, Paolo Bossi, et al. 2022.
“Vitamin D and SARS-CoV2 Infection, Severity and Mortality: A
Systematic Review and Meta-Analysis.” PLoS One 17 (7):
e0268396. https://doi.org/10.1371/journal.pone.0268396.
Dandl, Susanne. 2023. “Causality Concepts in Machine Learning:
Heterogeneous Treatment Effect Estimation with Machine Learning &
Model Interpretation with Counterfactual and Semi-Factual
Explanations.” PhD thesis, LMU Munich. https://doi.org/10.5282/edoc.32947.
Danka, Tivadar, and Peter Horvath. 2018. “modAL: A Modular Active
Learning Framework for Python.” arXiv Preprint
arXiv:1805.00979.
De Regt, Henk W. 2020. “Understanding, Values, and the Aims of
Science.” Philosophy of Science 87 (5): 921–32. https://doi.org/10.1086/710520.
De Sarkar, Sohan, Fan Yang, and Arjun Mukherjee. 2018. “Attending
Sentences to Detect Satirical Fake News.” In Proceedings of
the 27th International Conference on Computational Linguistics,
3371–80.
Dennis, Brian, Jose Miguel Ponciano, Mark L Taper, and Subhash R Lele.
2019. “Errors in Statistical Inference Under Model
Misspecification: Evidence, Hypothesis Testing, and AIC.”
Frontiers in Ecology and Evolution 7: 372. https://doi.org/10.3389/fevo.2019.00372.
Denouden, Taylor, Rick Salay, Krzysztof Czarnecki, Vahdat Abdelzad, Buu
Phan, and Sachin Vernekar. 2018. “Improving Reconstruction
Autoencoder Out-of-Distribution Detection with Mahalanobis
Distance.” arXiv Preprint arXiv:1812.02765.
Diamantopoulos, Adamantios, Petra Riefler, and Katharina P Roth. 2008.
“Advancing Formative Measurement Models.” Journal of
Business Research 61 (12): 1203–18. https://doi.org/10.1016/j.jbusres.2008.01.009.
Diciccio, Thomas J, and Joseph P Romano. 1988. “A Review of
Bootstrap Confidence Intervals.” Journal of the Royal
Statistical Society Series B: Statistical Methodology 50 (3):
338–54.
Domingos, Pedro. 2000. “A Unified Bias-Variance
Decomposition.” In Proceedings of 17th International
Conference on Machine Learning, 231–38. Morgan Kaufmann.
Doshi-Velez, Finale, and Been Kim. 2017. “Towards A Rigorous
Science of Interpretable Machine Learning.”
arXiv. http://arxiv.org/abs/1702.08608.
Douglas, Heather E. 2009. “Reintroducing Prediction to
Explanation.” Philosophy of Science 76 (4): 444–63. https://doi.org/10.1086/648111.
Earman, John. 1992. Bayes or Bust? A Critical Examination of
Bayesian Confirmation Theory. MIT Press.
Eberhardt, Frederick, Clark Glymour, and Richard Scheines. 2005.
“On the Number of Experiments Sufficient and in the Worst Case
Necessary to Identify All Causal Relations Among n Variables.” In
Proceedings of the Twenty-First Conference on Uncertainty in
Artificial Intelligence, 178–84. UAI’05. Arlington, Virginia, USA:
AUAI Press.
Ellenberg, Jordan. 2014. How Not to Be Wrong: The Hidden Maths of
Everyday Life. Penguin UK.
Erickson, Bradley J, Panagiotis Korfiatis, Zeynettin Akkus, and Timothy
L Kline. 2017. “Machine Learning for Medical Imaging.”
Radiographics 37 (2): 505–15. https://doi.org/10.1148/rg.2017160130.
Feng, Steven, Varun Gangal, Jason Wei, Sarath Chandar, Soroush Vosoughi,
Teruko Mitamura, and Eduard Hovy. 2021. “A Survey of Data
Augmentation Approaches for NLP.” In Findings of the
Association for Computational Linguistics: ACL-IJCNLP 2021.
Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.findings-acl.84.
Fischer, Alain. 2020. “Resistance of Children to Covid-19.
How?” Mucosal Immunology 13 (4): 563–65. https://doi.org/10.1038/s41385-020-0303-9.
Fisher, Aaron, Cynthia Rudin, and Francesca Dominici. 2019. “All
Models Are Wrong, but Many Are
Useful: Learning a Variable’s
Importance by Studying an Entire
Class of Prediction Models
Simultaneously.” Journal of Machine Learning
Research 20: 177. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8323609/.
Fleming, Sean W, Velimir V Vesselinov, and Angus G Goodbody. 2021.
“Augmenting Geophysical Interpretation of Data-Driven Operational
Water Supply Forecast Modeling for a Western US River Using a Hybrid
Machine Learning Approach.” Journal of Hydrology 597:
126327.
Flora, Montgomery, Corey Potvin, Amy McGovern, and Shawn Handler. 2022.
“Comparing Explanation Methods for Traditional
Machine Learning Models Part 1: An Overview of
Current Methods and Quantifying Their
Disagreement.” arXiv. https://doi.org/10.48550/arXiv.2211.08943.
Frankle, Jonathan, and Michael Carbin. 2019. “The
Lottery Ticket Hypothesis:
Finding Sparse, Trainable
Neural Networks.” arXiv. https://doi.org/10.48550/arXiv.1803.03635.
Freiesleben, Timo. 2022. “The Intriguing Relation Between
Counterfactual Explanations and Adversarial Examples.” Minds
and Machines 32 (1): 77–109. https://doi.org/10.1007/s11023-021-09580-9.
———. 2023. “Artificial Neural Nets and the Representation of Human
Concepts.” arXiv Preprint arXiv:2312.05337. https://doi.org/10.48550/arXiv.2312.05337.
Freiesleben, Timo, and Thomas Grote. 2023. “Beyond Generalization:
A Theory of Robustness in Machine Learning.” Synthese
202 (4): 109. https://doi.org/10.1007/s11229-023-04334-9.
Freiesleben, Timo, Gunnar König, Christoph Molnar, and Álvaro
Tejero-Cantero. 2024. “Scientific Inference with Interpretable
Machine Learning: Analyzing Models to Learn about Real-World
Phenomena.” Minds and Machines 34 (3): 32. https://doi.org/10.1007/s11023-024-09691-z.
Friedman, Jerome H. 2001. “Greedy Function Approximation:
A Gradient Boosting Machine.” The Annals of
Statistics 29 (5): 1189–1232. https://doi.org/10.1214/aos/1013203451.
Friedman, Jerome H., and Bogdan E. Popescu. 2008. “Predictive
Learning via Rule Ensembles.” The
Annals of Applied Statistics 2 (3): 916–54. https://doi.org/10.1214/07-AOAS148.
Frigg, Roman, and Stephan Hartmann. 2020. “Models in Science.” In The
Stanford Encyclopedia of Philosophy, edited by Edward
N. Zalta, Spring 2020. Metaphysics Research Lab, Stanford
University. https://plato.stanford.edu/archives/spr2020/entries/models-science/.
Fukumizu, Kenji, Arthur Gretton, Xiaohai Sun, and Bernhard Schölkopf.
2007. “Kernel Measures of Conditional Dependence.”
Advances in Neural Information Processing Systems 20.
Gal, Yarin. 2016. “Uncertainty in Deep Learning.” PhD thesis, University of Cambridge.
Gal, Yarin, and Zoubin Ghahramani. 2016. “Dropout as a Bayesian
Approximation: Representing Model Uncertainty in Deep Learning.”
In International Conference on Machine Learning, 1050–59. PMLR.
Gao, Yue, Guang-Yao Cai, Wei Fang, Hua-Yi Li, Si-Yuan Wang, Lingxi Chen,
Yang Yu, et al. 2020. “Machine Learning Based Early Warning System
Enables Accurate Mortality Risk Prediction for COVID-19.”
Nature Communications 11 (1): 5033.
Gebru, Timnit, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman
Vaughan, Hanna Wallach, Hal Daumé Iii, and Kate Crawford. 2021.
“Datasheets for Datasets.” Communications of the
ACM 64 (12): 86–92. https://doi.org/10.1145/3458723.
Ghai, Bhavya, Q Vera Liao, Yunfeng Zhang, Rachel Bellamy, and Klaus
Mueller. 2021. “Explainable Active Learning (XAL) Toward AI
Explanations as Interfaces for Machine Teachers.” Proceedings
of the ACM on Human-Computer Interaction 4 (CSCW3): 1–28. https://doi.org/10.1145/3432934.
Gibbons, Jason B, Edward C Norton, Jeffrey S McCullough, David O
Meltzer, Jill Lavigne, Virginia C Fiedler, and Robert D Gibbons. 2022.
“Association Between Vitamin D Supplementation and COVID-19
Infection and Mortality.” Scientific Reports 12 (1):
19397. https://doi.org/10.1038/s41598-022-24053-4.
Goldstein, Alex, Adam Kapelner, Justin Bleich, and Emil Pitkin. 2015.
“Peeking Inside the Black Box:
Visualizing Statistical Learning With Plots of
Individual Conditional Expectation.” Journal of
Computational and Graphical Statistics 24 (1): 44–65. https://doi.org/10.1080/10618600.2014.907095.
Gong, Yue, and Guochang Zhao. 2022. “Wealth, Health, and Beyond:
Is COVID-19 Less Likely to Spread in Rich Neighborhoods?”
PLoS One 17 (5): e0267487. https://doi.org/10.1371/journal.pone.0267487.
Goodfellow, Ian J, Jonathon Shlens, and Christian Szegedy. 2014.
“Explaining and Harnessing Adversarial Examples.” arXiv
Preprint arXiv:1412.6572. https://doi.org/10.48550/arXiv.1412.6572.
Goodfellow, Ian, Yoshua Bengio, and Aaron Courville. 2016. Deep
Learning. MIT press.
Goodfellow, Ian, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David
Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014.
“Generative Adversarial Nets.” Advances in Neural
Information Processing Systems 27.
Goschenhofer, Jann, Franz MJ Pfister, Kamer Ali Yuksel, Bernd Bischl,
Urban Fietzek, and Janek Thomas. 2020. “Wearable-Based Parkinson’s
Disease Severity Monitoring Using Deep Learning.” In Machine
Learning and Knowledge Discovery in Databases: European Conference, ECML
PKDD 2019, Würzburg, Germany, September 16–20, 2019,
Proceedings, Part III, 400–415. Springer. https://doi.org/10.1007/978-3-030-46133-1_24.
Gottfredson, Linda S. 1998. “The General Intelligence
Factor.” Scientific American, Incorporated.
Gruber, Cornelia, Patrick Oliver Schenk, Malte Schierholz, Frauke
Kreuter, and Göran Kauermann. 2023. “Sources of
Uncertainty in Machine Learning –
A Statisticians’ View.”
arXiv. https://doi.org/10.48550/arXiv.2305.16703.
Gupta, Meghna, Caleigh M Azumaya, Michelle Moritz, Sergei Pourmal, Amy
Diallo, Gregory E Merz, Gwendolyn Jang, et al. 2021. “CryoEM and
AI Reveal a Structure of SARS-CoV-2 Nsp2, a Multifunctional Protein
Involved in Key Host Processes.” Research Square. https://doi.org/10.1101/2021.05.10.443524.
Haley, Pamela J, and Donald Soloway. 1992. “Extrapolation
Limitations of Multilayer Feedforward Neural Networks.” In
[Proceedings 1992] IJCNN International Joint Conference on Neural
Networks, 4:25–30. IEEE. https://doi.org/10.1109/IJCNN.1992.227294.
Halmos, Paul R. 2013. Measure Theory. Vol. 18. Springer. https://doi.org/10.1007/978-1-4684-9440-2.
Han, Sicong, Chenhao Lin, Chao Shen, Qian Wang, and Xiaohong Guan. 2023.
“Interpreting Adversarial Examples in Deep Learning: A
Review.” ACM Computing Surveys 55 (14s): 1–38. https://doi.org/10.1145/3594869.
Hardt, Moritz, and Benjamin Recht. 2022. Patterns, Predictions, and
Actions: Foundations of Machine Learning. Princeton University
Press.
Hasson, Uri, Samuel A Nastase, and Ariel Goldstein. 2020. “Direct
Fit to Nature: An Evolutionary Perspective on Biological and Artificial
Neural Networks.” Neuron 105 (3): 416–34. https://doi.org/10.1016/j.neuron.2019.12.002.
Hastie, Trevor, Robert Tibshirani, Jerome H Friedman, and Jerome H
Friedman. 2009. The Elements of Statistical Learning: Data Mining,
Inference, and Prediction. Vol. 2. Springer.
He, Yang-Hui. 2017. “Machine-Learning the String
Landscape.” Physics Letters B 774: 564–68. https://doi.org/10.1016/j.physletb.2017.10.024.
Hendrycks, Dan, Steven Basart, Norman Mu, Saurav Kadavath, Frank Wang,
Evan Dorundo, Rahul Desai, et al. 2021. “The Many Faces of
Robustness: A Critical Analysis of Out-of-Distribution
Generalization.” In Proceedings of the IEEE/CVF International
Conference on Computer Vision, 8340–49. https://doi.org/10.1109/ICCV48922.2021.00823.
Hitchcock, Christopher, and Miklós Rédei. 2021. “Reichenbach’s Common Cause Principle.” In
The Stanford Encyclopedia of Philosophy, edited by
Edward N. Zalta, Summer 2021. Metaphysics Research Lab, Stanford
University. https://plato.stanford.edu/archives/sum2021/entries/physics-Rpcc/.
Hoffmann, Sabine, Felix Schönbrodt, Ralf Elsas, Rory Wilson, Ulrich
Strasser, and Anne-Laure Boulesteix. 2021. “The Multiplicity of
Analysis Strategies Jeopardizes Replicability: Lessons Learned Across
Disciplines.” Royal Society Open Science 8 (4): 201925.
https://doi.org/10.1098/rsos.201925.
Hofner, Benjamin, Andreas Mayr, Nikolay Robinzonov, and Matthias Schmid.
2014. “Model-Based Boosting in R: A Hands-on Tutorial Using the R
Package mboost.” Computational Statistics 29: 3–35. https://doi.org/10.1007/s00180-012-0382-5.
Holland, Paul W. 1986. “Statistics and Causal Inference.”
Journal of the American Statistical Association 81 (396):
945–60. https://doi.org/10.2307/2289064.
Hornik, Kurt. 1991. “Approximation Capabilities of Multilayer
Feedforward Networks.” Neural Networks 4 (2): 251–57. https://doi.org/10.1016/0893-6080(91)90009-T.
Howard, Jeremy, Austin Huang, Zhiyuan Li, Zeynep Tufekci, Vladimir
Zdimal, Helene-Mari Van Der Westhuizen, Arne Von Delft, et al. 2021.
“An Evidence Review of Face Masks Against COVID-19.”
Proceedings of the National Academy of Sciences 118 (4):
e2014564118. https://doi.org/10.1073/pnas.2014564118.
Hu, Shaoping, Yuan Gao, Zhangming Niu, Yinghui Jiang, Lao Li, Xianglu
Xiao, Minhao Wang, et al. 2020. “Weakly Supervised Deep Learning
for COVID-19 Infection Detection and Classification from CT
Images.” IEEE Access 8: 118869–83. https://doi.org/10.1109/access.2020.3005510.
Hu, Weihua, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen
Liu, Michele Catasta, and Jure Leskovec. 2020. “Open Graph
Benchmark: Datasets for Machine Learning on Graphs.” Advances
in Neural Information Processing Systems 33: 22118–33. https://doi.org/10.5555/3495724.3497579.
Hüllermeier, Eyke, and Willem Waegeman. 2021. “Aleatoric and
Epistemic Uncertainty in Machine Learning: An Introduction to Concepts
and Methods.” Machine Learning 110 (3): 457–506. https://doi.org/10.1007/s10994-021-05946-3.
Hutter, Frank, Lars Kotthoff, and Joaquin Vanschoren. 2019.
Automated Machine Learning: Methods, Systems, Challenges.
Springer Nature. https://doi.org/10.1007/978-3-030-05318-5.
Ilyas, Andrew, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom,
Brandon Tran, and Aleksander Madry. 2019. “Adversarial Examples
Are Not Bugs, They Are Features.” Advances in Neural
Information Processing Systems 32.
Isola, Phillip, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. 2017.
“Image-to-Image Translation with Conditional Adversarial
Networks.” In Proceedings of the IEEE Conference on Computer
Vision and Pattern Recognition, 1125–34. https://doi.org/10.1109/CVPR.2017.632.
Jalaian, Brian, Michael Lee, and Stephen Russell. 2019. “Uncertain
Context: Uncertainty Quantification in Machine Learning.” AI
Magazine 40 (4): 40–49. https://doi.org/10.1609/aimag.v40i4.4812.
Jehi, Lara, Xinge Ji, Alex Milinovich, Serpil Erzurum, Brian P Rubin,
Steve Gordon, James B Young, and Michael W Kattan. 2020.
“Individualizing Risk Prediction for Positive Coronavirus Disease
2019 Testing: Results from 11,672 Patients.” Chest 158
(4): 1364–75. https://doi.org/10.1016/j.chest.2020.05.580.
Jumper, John, Richard Evans, Alexander Pritzel, Tim Green, Michael
Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, et al. 2021.
“Highly Accurate Protein Structure Prediction with
AlphaFold.” Nature 596 (7873): 583–89. https://doi.org/10.1038/s41586-021-03819-2.
Kalisch, Markus, Martin Mächler, Diego Colombo, Marloes H Maathuis, and
Peter Bühlmann. 2012. “Causal Inference Using Graphical Models
with the R Package pcalg.” Journal of Statistical
Software 47: 1–26. https://doi.org/10.18637/jss.v047.i11.
Kamath, Pritish, Akilesh Tangella, Danica Sutherland, and Nathan Srebro.
2021. “Does Invariant Risk Minimization Capture
Invariance?” In International Conference on Artificial
Intelligence and Statistics, 4069–77. PMLR.
Kapoor, Sayash, Emily M. Cantrell, Kenny Peng, Thanh Hien Pham,
Christopher A. Bail, Odd Erik Gundersen, Jake M. Hofman, et al. 2024.
“REFORMS: Consensus-Based
Recommendations for Machine-Learning-Based
Science.” Science Advances 10 (18):
eadk3452. https://doi.org/10.1126/sciadv.adk3452.
Kasneci, Enkelejda, Kathrin Seßler, Stefan Küchemann, Maria Bannert,
Daryna Dementieva, Frank Fischer, Urs Gasser, et al. 2023.
“ChatGPT for Good? On Opportunities and Challenges of Large
Language Models for Education.” Learning and Individual
Differences 103: 102274.
Kell, Alexander JE, Daniel LK Yamins, Erica N Shook, Sam V
Norman-Haignere, and Josh H McDermott. 2018. “A Task-Optimized
Neural Network Replicates Human Auditory Behavior, Predicts Brain
Responses, and Reveals a Cortical Processing Hierarchy.”
Neuron 98 (3): 630–44. https://doi.org/10.1016/j.neuron.2018.03.044.
Kendall, Alex, and Yarin Gal. 2017. “What Uncertainties Do We Need
in Bayesian Deep Learning for Computer Vision?” Advances in
Neural Information Processing Systems 30.
Kim, Jongpil, and Vladimir Pavlovic. 2014. “Ancient Coin
Recognition Based on Spatial Coding.” In 2014 22nd
International Conference on Pattern Recognition, 321–26. https://doi.org/10.1109/ICPR.2014.64.
Knaus, Michael C. 2022. “Double Machine Learning-Based Programme
Evaluation Under Unconfoundedness.” The Econometrics
Journal 25 (3): 602–27. https://doi.org/10.1093/ectj/utac015.
Kobak, Dmitry, Rita González Márquez, Emőke-Ágnes Horvát, and Jan Lause.
2024. “Delving into ChatGPT Usage in Academic Writing Through
Excess Vocabulary.” arXiv Preprint arXiv:2406.07016.
Koh, Pang Wei, Thao Nguyen, Yew Siang Tang, Stephen Mussmann, Emma
Pierson, Been Kim, and Percy Liang. 2020. “Concept Bottleneck
Models.” In International Conference on Machine
Learning, 5338–48. PMLR.
König, Gunnar, Timo Freiesleben, and Moritz Grosse-Wentrup. 2023.
“Improvement-Focused Causal Recourse (ICR).” In
Proceedings of the AAAI Conference on Artificial Intelligence,
37 (10): 11847–55. https://doi.org/10.1609/aaai.v37i10.26398.
Krenn, Mario, Mehul Malik, Robert Fickler, Radek Lapkiewicz, and Anton
Zeilinger. 2016. “Automated Search for New Quantum
Experiments.” Physical Review Letters 116 (9): 090405.
Kuhn, Thomas S. 1997. The Structure of Scientific Revolutions.
Vol. 962. Chicago: University of Chicago Press.
Künzel, Sören R, Jasjeet S Sekhon, Peter J Bickel, and Bin Yu. 2019.
“Metalearners for Estimating Heterogeneous Treatment Effects Using
Machine Learning.” Proceedings of the National Academy of
Sciences 116 (10): 4156–65. https://doi.org/10.1073/pnas.1804597116.
Lagerquist, Ryan, Amy McGovern, Cameron R Homeyer, David John Gagne II,
and Travis Smith. 2020. “Deep Learning on Three-Dimensional
Multiscale Data for Next-Hour Tornado Prediction.” Monthly
Weather Review 148 (7): 2837–61. https://doi.org/10.1175/MWR-D-19-0372.1.
Lam, Remi, Alvaro Sanchez-Gonzalez, Matthew Willson, Peter Wirnsberger,
Meire Fortunato, Ferran Alet, Suman Ravuri, et al. 2023. “Learning
Skillful Medium-Range Global Weather Forecasting.”
Science, eadi2336. https://doi.org/10.1126/science.adi2336.
Lei, Jing, Max G’Sell, Alessandro Rinaldo, Ryan J. Tibshirani, and Larry
Wasserman. 2018. “Distribution-Free Predictive
Inference for Regression.” Journal of the
American Statistical Association 113 (523): 1094–1111. https://doi.org/10.1080/01621459.2017.1307116.
Lei, Lihua, and Emmanuel J Candès. 2021. “Conformal Inference of
Counterfactuals and Individual Treatment Effects.” Journal of
the Royal Statistical Society Series B: Statistical Methodology 83
(5): 911–38. https://doi.org/10.1111/rssb.12445.
Letzgus, Simon, and Klaus-Robert Müller. 2023. “An Explainable
AI Framework for Robust and Transparent Data-Driven Wind
Turbine Power Curve Models.” Energy and AI, December,
100328. https://doi.org/10.1016/j.egyai.2023.100328.
Lipton, Zachary C. 2017. “The Mythos of Model
Interpretability.” arXiv. https://doi.org/10.48550/arXiv.1606.03490.
List, Christian. 2019. “Levels: Descriptive, Explanatory, and
Ontological.” Noûs 53 (4): 852–83.
London, Ian. 2016. “Encoding Cyclical Continuous Features -
24-Hour Time.” Ian London’s Blog. https://ianlondon.github.io/posts/encoding-cyclical-features-24-hour-time/.
Lu, Yulong, and Jianfeng Lu. 2020. “A Universal Approximation
Theorem of Deep Neural Networks for Expressing Probability
Distributions.” Advances in Neural Information Processing
Systems 33: 3094–3105. https://doi.org/10.5555/3495724.3495984.
Lucas, Tim CD. 2020. “A Translucent Box: Interpretable Machine
Learning in Ecology.” Ecological Monographs 90 (4):
e01422. https://doi.org/10.1002/ecm.1422.
Lundberg, Scott M., and Su-In Lee. 2017. “A Unified Approach to
Interpreting Model Predictions.” In Proceedings of the 31st
International Conference on Neural Information
Processing Systems, 4768–77. NIPS’17. Red
Hook, NY, USA: Curran Associates Inc. https://doi.org/10.5555/3295222.3295230.
Maaz, Kai, Cordula Artelt, Pia Brugger, Sandra Buchholz, Stefan Kühne,
Holger Leerhoff, Thomas Rauschenbach, Josef Schrader, and Susan Seeber.
2022. Bildung in Deutschland 2022: Ein
Indikatorengestützter Bericht Mit Einer Analyse Zum
Bildungspersonal. wbv Publikation. https://doi.org/10.3278/6001820hw.
Manski, Charles F. 2003. Partial Identification of Probability
Distributions. Vol. 5. Springer. https://doi.org/10.1007/b97478.
Marcus, Gary. 2020. “The Next Decade in AI: Four Steps Towards
Robust Artificial Intelligence.” arXiv Preprint
arXiv:2002.06177. https://doi.org/10.48550/arXiv.2002.06177.
Maurage, Pierre, Alexandre Heeren, and Mauro Pesenti. 2013. “Does
Chocolate Consumption Really Boost Nobel Award Chances? The Peril of
over-Interpreting Correlations in Health Studies.” The
Journal of Nutrition 143 (6): 931–33. https://doi.org/10.3945/jn.113.174813.
McDermott, Matthew BA, Shirly Wang, Nikki Marinsek, Rajesh Ranganath,
Luca Foschini, and Marzyeh Ghassemi. 2021. “Reproducibility in
Machine Learning for Health Research: Still a Ways to Go.”
Science Translational Medicine 13 (586): eabb1655. https://doi.org/10.1126/scitranslmed.abb1655.
Mentch, Lucas, and Giles Hooker. 2016. “Quantifying Uncertainty in
Random Forests via Confidence Intervals and Hypothesis Tests.”
Journal of Machine Learning Research 17 (26): 1–41.
Miller, Tim. 2019. “Explanation in Artificial Intelligence:
Insights from the Social Sciences.” Artificial
Intelligence 267 (February): 1–38. https://doi.org/10.1016/j.artint.2018.07.007.
Mitchell, Margaret, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy
Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and
Timnit Gebru. 2019. “Model Cards for
Model Reporting.” In Proceedings of
the Conference on Fairness,
Accountability, and Transparency, 220–29.
FAT* ’19. New York, NY, USA: Association for Computing
Machinery. https://doi.org/10.1145/3287560.3287596.
Molnar, Christoph. 2022. Interpretable Machine Learning: A Guide for
Making Black Box Models Explainable. 2nd ed. https://christophm.github.io/interpretable-ml-book.
Molnar, Christoph, Giuseppe Casalicchio, and Bernd Bischl. 2020.
“Quantifying Model Complexity via Functional Decomposition for
Better Post-Hoc Interpretability.” In Machine Learning and
Knowledge Discovery in Databases: International Workshops of ECML PKDD
2019, Würzburg, Germany, September 16–20, 2019,
Proceedings, Part I, 193–204. Springer. https://doi.org/10.1007/978-3-030-43823-4_17.
Molnar, Christoph, Timo Freiesleben, Gunnar König, Julia Herbinger, Tim
Reisinger, Giuseppe Casalicchio, Marvin N. Wright, and Bernd Bischl.
2023. “Relating the Partial Dependence Plot
and Permutation Feature Importance to the Data
Generating Process.” In Explainable Artificial
Intelligence, edited by Luca Longo, 456–79. Communications
in Computer and Information Science.
Cham: Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-44064-9_24.
Molnar, Christoph, Gunnar König, Bernd Bischl, and Giuseppe Casalicchio.
2023. “Model-Agnostic Feature Importance and
Effects with Dependent Features – A
Conditional Subgroup Approach.” Data Mining and
Knowledge Discovery, January. https://doi.org/10.1007/s10618-022-00901-9.
Moor, Michael, Oishi Banerjee, Zahra Shakeri Hossein Abad, Harlan M
Krumholz, Jure Leskovec, Eric J Topol, and Pranav Rajpurkar. 2023.
“Foundation Models for Generalist Medical Artificial
Intelligence.” Nature 616 (7956): 259–65.
Mumuni, Alhassan, and Fuseini Mumuni. 2021. “CNN Architectures for
Geometric Transformation-Invariant Feature Representation in Computer
Vision: A Review.” SN Computer Science 2 (5): 340. https://doi.org/10.1007/s42979-021-00735-0.
———. 2022. “Data Augmentation: A Comprehensive Survey of Modern
Approaches.” Array 16: 100258. https://doi.org/10.1016/j.array.2022.100258.
Nadeau, Claude, and Yoshua Bengio. 1999. “Inference for the
Generalization Error.” Advances in Neural Information
Processing Systems 12.
Neal, Brady. 2020. “Introduction to Causal Inference.”
Course Lecture Notes (Draft).
Neethu, MS, and R Rajasree. 2013. “Sentiment Analysis in Twitter
Using Machine Learning Techniques.” In 2013 Fourth
International Conference on Computing, Communications and Networking
Technologies (ICCCNT), 1–5. IEEE. https://doi.org/10.1109/ICCCNT.2013.6726818.
Neupert, Titus, Mark H Fischer, Eliska Greplova, Kenny Choo, and M
Michael Denner. 2021. “Introduction to Machine Learning for the
Sciences.” arXiv Preprint arXiv:2102.04883. https://doi.org/10.48550/arXiv.2102.04883.
Nguyen, An-phi, and María Rodríguez Martínez. 2019.
“MonoNet: Towards Interpretable Models
by Learning Monotonic Features.” arXiv.
https://doi.org/10.48550/arXiv.1909.13611.
Norouzzadeh, Mohammad Sadegh, Anh Nguyen, Margaret Kosmala, Alexandra
Swanson, Meredith S Palmer, Craig Packer, and Jeff Clune. 2018.
“Automatically Identifying, Counting, and Describing Wild Animals
in Camera-Trap Images with Deep Learning.” Proceedings of the
National Academy of Sciences 115 (25): E5716–25. https://doi.org/10.1073/pnas.1719367115.
Oikarinen, Tuomas, Karthik Srinivasan, Olivia Meisner, Julia B Hyman,
Shivangi Parmar, Adrian Fanucci-Kiss, Robert Desimone, Rogier Landman,
and Guoping Feng. 2019. “Deep Convolutional Network for Animal
Sound Classification and Source Attribution Using Dual Audio
Recordings.” The Journal of the Acoustical Society of
America 145 (2): 654–62. https://doi.org/10.1121/1.5097583.
Ozoani, Ezi, Marissa Gerchick, and Margaret Mitchell. 2022. Model
Card Guidebook. Hugging Face. https://huggingface.co/docs/hub/en/model-card-guidebook.
Papernot, Nicolas, Patrick McDaniel, and Ian Goodfellow. 2016.
“Transferability in Machine Learning: From Phenomena to Black-Box
Attacks Using Adversarial Samples.” arXiv Preprint
arXiv:1605.07277. https://doi.org/10.48550/arXiv.1605.07277.
Pearl, Judea. 2009. Causality. Cambridge University Press.
———. 2019. “The Limitations of Opaque Learning Machines.”
Possible Minds 25: 13–19.
Pearl, Judea, and Dana Mackenzie. 2018. The Book of Why: The New
Science of Cause and Effect. Basic Books.
Pedersen, Morten Axel. 2023. “Editorial Introduction: Towards a
Machinic Anthropology.” Big Data & Society. London, England:
SAGE Publications. https://doi.org/10.1177/20539517231153803.
Pereira, Joana, Adam J Simpkin, Marcus D Hartmann, Daniel J Rigden,
Ronan M Keegan, and Andrei N Lupas. 2021. “High-Accuracy Protein
Structure Prediction in CASP14.” Proteins: Structure,
Function, and Bioinformatics 89 (12): 1687–99. https://doi.org/10.1002/prot.26171.
Perry, George LW, Rupert Seidl, André M Bellvé, and Werner Rammer. 2022.
“An Outlook for Deep Learning in Ecosystem Science.”
Ecosystems, 1–19. https://doi.org/10.1007/s10021-022-00789-y.
Peters, Jonas, Dominik Janzing, and Bernhard Schölkopf. 2017.
Elements of Causal Inference: Foundations and Learning
Algorithms. The MIT Press.
Pfisterer, Florian, Stefan Coors, Janek Thomas, and Bernd Bischl. 2019.
“Multi-Objective Automatic Machine Learning with
Autoxgboostmc.” arXiv Preprint arXiv:1908.10796. https://doi.org/10.48550/arXiv.1908.10796.
Platt, John et al. 1999. “Probabilistic Outputs for Support Vector
Machines and Comparisons to Regularized Likelihood Methods.”
Advances in Large Margin Classifiers 10 (3): 61–74.
Popper, Karl. 2005. The Logic of Scientific Discovery.
Routledge.
Pu, Zhaoxia, and Eugenia Kalnay. 2019. “Numerical Weather
Prediction Basics: Models, Numerical Methods, and Data
Assimilation.” Handbook of Hydrometeorological Ensemble
Forecasting, 67–97.
Raissi, Maziar, Paris Perdikaris, and George Em Karniadakis. 2017a.
“Physics Informed Deep
Learning (Part I):
Data-Driven Solutions of
Nonlinear Partial Differential
Equations.” arXiv. https://doi.org/10.48550/arXiv.1711.10561.
———. 2017b. “Physics Informed Deep
Learning (Part II):
Data-Driven Discovery of
Nonlinear Partial Differential
Equations.” arXiv. https://doi.org/10.48550/arXiv.1711.10566.
Rajpurkar, Pranav, Jeremy Irvin, Kaylie Zhu, Brandon Yang, Hershel
Mehta, Tony Duan, Daisy Ding, et al. 2017. “CheXNet:
Radiologist-Level Pneumonia
Detection on Chest
X-Rays with Deep
Learning.” arXiv. https://doi.org/10.48550/arXiv.1711.05225.
Rebuffi, Sylvestre-Alvise, Sven Gowal, Dan Andrei Calian, Florian
Stimberg, Olivia Wiles, and Timothy A Mann. 2021. “Data
Augmentation Can Improve Robustness.” Advances in Neural
Information Processing Systems 34: 29935–48. https://doi.org/10.48550/arXiv.2111.05328.
Reichstein, Markus, Gustau Camps-Valls, Bjorn Stevens, Martin Jung,
Joachim Denzler, Nuno Carvalhais, et al. 2019. “Deep Learning and
Process Understanding for Data-Driven Earth System Science.”
Nature 566 (7743): 195–204. https://doi.org/10.1038/s41586-019-0912-1.
Ren, Pengzhen, Yun Xiao, Xiaojun Chang, Po-Yao Huang, Zhihui Li, Brij B
Gupta, Xiaojiang Chen, and Xin Wang. 2021. “A Survey of Deep
Active Learning.” ACM Computing Surveys (CSUR) 54 (9):
1–40. https://doi.org/10.1145/3472291.
Ren, Xiaoli, Xiaoyong Li, Kaijun Ren, Junqiang Song, Zichen Xu, Kefeng
Deng, and Xiang Wang. 2021. “Deep Learning-Based Weather
Prediction: A Survey.” Big Data Research 23: 100178. https://doi.org/10.1016/j.bdr.2020.100178.
Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. 2016.
“‘Why Should I Trust You?’: Explaining
the Predictions of Any Classifier.” In
Proceedings of the 22nd ACM SIGKDD International
Conference on Knowledge Discovery and Data
Mining, 1135–44. KDD ’16. New York, NY,
USA: Association for Computing Machinery. https://doi.org/10.1145/2939672.2939778.
———. 2018. “Anchors: High-Precision Model-Agnostic
Explanations.” Proceedings of the AAAI Conference on
Artificial Intelligence 32 (1). https://doi.org/10.1609/aaai.v32i1.11491.
Roberts, Michael, Derek Driggs, Matthew Thorpe, Julian Gilbey, Michael
Yeung, Stephan Ursprung, Angelica I Aviles-Rivero, et al. 2021.
“Common Pitfalls and Recommendations for Using Machine Learning to
Detect and Prognosticate for COVID-19 Using Chest Radiographs and CT
Scans.” Nature Machine Intelligence 3 (3): 199–217. https://doi.org/10.1038/s42256-021-00307-0.
Rocks, Jason W, and Pankaj Mehta. 2022. “Memorizing Without
Overfitting: Bias, Variance, and Interpolation in Overparameterized
Models.” Physical Review Research 4 (1): 013201. https://doi.org/10.1103/PhysRevResearch.4.013201.
Romano, Yaniv, Evan Patterson, and Emmanuel Candes. 2019.
“Conformalized Quantile Regression.” Advances in Neural
Information Processing Systems 32.
Romano, Yaniv, Matteo Sesia, and Emmanuel Candes. 2020.
“Classification with Valid and Adaptive Coverage.”
Advances in Neural Information Processing Systems 33: 3581–91.
Roscher, Ribana, Bastian Bohn, Marco F. Duarte, and Jochen Garcke. 2020.
“Explainable Machine Learning for Scientific
Insights and Discoveries.” IEEE
Access 8: 42200–42216. https://doi.org/10.1109/ACCESS.2020.2976199.
Rothfuss, Jonas, Fabio Ferreira, Simon Walther, and Maxim Ulrich. 2019.
“Conditional Density Estimation with Neural Networks: Best
Practices and Benchmarks.” arXiv Preprint
arXiv:1903.00954. https://doi.org/10.48550/arXiv.1903.00954.
Rudin, Cynthia, Chaofan Chen, Zhi Chen, Haiyang Huang, Lesia Semenova,
and Chudi Zhong. 2022. “Interpretable Machine Learning:
Fundamental Principles and 10 Grand Challenges.”
Statistics Surveys 16: 1–85. https://doi.org/10.1214/21-SS133.
Ruff, Lukas, Jacob R Kauffmann, Robert A Vandermeulen, Grégoire
Montavon, Wojciech Samek, Marius Kloft, Thomas G Dietterich, and
Klaus-Robert Müller. 2021. “A Unifying Review of Deep and Shallow
Anomaly Detection.” Proceedings of the IEEE 109 (5):
756–95. https://doi.org/10.1109/JPROC.2021.3052449.
Schaeffer, Rylan, Mikail Khona, Zachary Robertson, Akhilan Boopathy,
Kateryna Pistunova, Jason W. Rocks, Ila Rani Fiete, and Oluwasanmi
Koyejo. 2023. “Double Descent
Demystified: Identifying,
Interpreting & Ablating the
Sources of a Deep Learning
Puzzle.” arXiv. https://doi.org/10.48550/arXiv.2303.14151.
Schmidt, Jonathan, Mário RG Marques, Silvana Botti, and Miguel AL
Marques. 2019. “Recent Advances and Applications of Machine
Learning in Solid-State Materials Science.” Npj Computational
Materials 5 (1): 1–36. https://doi.org/10.1038/s41524-019-0221-0.
Scholbeck, Christian A., Christoph Molnar, Christian Heumann, Bernd
Bischl, and Giuseppe Casalicchio. 2020. “Sampling,
Intervention, Prediction,
Aggregation: A Generalized
Framework for Model-Agnostic
Interpretations.” In Machine
Learning and Knowledge Discovery
in Databases, edited by Peggy Cellier and Kurt
Driessens, 205–16. Communications in Computer and
Information Science. Cham: Springer
International Publishing. https://doi.org/10.1007/978-3-030-43823-4_18.
Schölkopf, Bernhard. 2022. “Causality for Machine
Learning.” In Probabilistic and Causal Inference: The Works
of Judea Pearl, 765–804.
Schölkopf, Bernhard, Dominik Janzing, Jonas Peters, Eleni Sgouritsa, Kun
Zhang, and Joris Mooij. 2012. “On Causal and Anticausal
Learning.” arXiv Preprint arXiv:1206.6471. https://doi.org/10.48550/arXiv.1206.6471.
Schölkopf, Bernhard, Francesco Locatello, Stefan Bauer, Nan Rosemary Ke,
Nal Kalchbrenner, Anirudh Goyal, and Yoshua Bengio. 2021. “Toward
Causal Representation Learning.” Proceedings of the IEEE
109 (5): 612–34. https://doi.org/10.1109/JPROC.2021.3058954.
Seibold, Heidi. 2024. 6 Steps Towards Reproducible Research.
Zenodo. https://doi.org/10.5281/zenodo.12744715.
Semenova, Lesia, Harry Chen, Ronald Parr, and Cynthia Rudin. 2024.
“A Path to Simpler Models Starts with Noise.” Advances
in Neural Information Processing Systems 36.
Sen, Rajat, Ananda Theertha Suresh, Karthikeyan Shanmugam, Alexandros G
Dimakis, and Sanjay Shakkottai. 2017. “Model-Powered Conditional
Independence Test.” Advances in Neural Information Processing
Systems 30. https://doi.org/10.5555/3294996.3295055.
Settles, Burr. 2009. “Active Learning Literature Survey.”
Computer Sciences Technical Report 1648. University of
Wisconsin–Madison.
Shah, Rajen D., and Jonas Peters. 2020. “The Hardness of
Conditional Independence Testing and the Generalised Covariance
Measure.” The Annals of Statistics 48 (3): 1514–38. https://doi.org/10.1214/19-AOS1857.
Shalev-Shwartz, Shai, and Shai Ben-David. 2014. Understanding
Machine Learning: From Theory to Algorithms. Cambridge University
Press. https://doi.org/10.1017/CBO9781107298019.
Sharma, Amit, and Emre Kiciman. 2020. “DoWhy: An End-to-End
Library for Causal Inference.” arXiv Preprint
arXiv:2011.04216.
Shmueli, Galit. 2010. “To Explain or to
Predict?” Statistical Science 25 (3): 289–310. https://doi.org/10.1214/10-STS330.
Shorten, Connor, and Taghi M Khoshgoftaar. 2019. “A Survey on
Image Data Augmentation for Deep Learning.” Journal of Big
Data 6 (1): 1–48. https://doi.org/10.1186/s40537-019-0197-0.
Smith, Samuel L, Benoit Dherin, David GT Barrett, and Soham De. 2021.
“On the Origin of Implicit Regularization in Stochastic Gradient
Descent.” arXiv Preprint arXiv:2101.12176. https://doi.org/10.48550/arXiv.2101.12176.
Spirtes, Peter, Clark N Glymour, and Richard Scheines. 2000.
Causation, Prediction, and Search. MIT Press. https://doi.org/10.1007/978-1-4612-2748-9.
Sterkenburg, Tom F, and Peter D Grünwald. 2021. “The No-Free-Lunch
Theorems of Supervised Learning.” Synthese 199 (3):
9979–10015. https://doi.org/10.1007/s11229-021-03233-1.
Strobl, Carolin, Anne-Laure Boulesteix, Thomas Kneib, Thomas Augustin,
and Achim Zeileis. 2008. “Conditional Variable Importance for
Random Forests.” BMC Bioinformatics 9: 1–11.
Štrumbelj, Erik, and Igor Kononenko. 2014. “Explaining Prediction
Models and Individual Predictions with Feature Contributions.”
Knowledge and Information Systems 41 (3): 647–65. https://doi.org/10.1007/s10115-013-0679-x.
Swanson, Alexandra, Margaret Kosmala, Chris Lintott, Robert Simpson,
Arfon Smith, and Craig Packer. 2015. “Snapshot Serengeti,
High-Frequency Annotated Camera Trap Images of 40 Mammalian Species in
an African Savanna.” Scientific Data 2 (1): 1–14. https://doi.org/10.1038/sdata.2015.26.
Szegedy, Christian, Wojciech Zaremba, Ilya Sutskever, Joan Bruna,
Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2013. “Intriguing
Properties of Neural Networks.” arXiv Preprint
arXiv:1312.6199. https://doi.org/10.48550/arXiv.1312.6199.
Takeuchi, Ichiro, Quoc V Le, Timothy D Sears, Alexander J Smola, and
Chris Williams. 2006. “Nonparametric Quantile Estimation.”
Journal of Machine Learning Research 7 (7).
Taleb, Nassim. 2005. “The Black Swan: Why Don’t We Learn That We
Don’t Learn.” New York: Random House.
Tanay, Thomas, and Lewis Griffin. 2016. “A Boundary Tilting
Persepective on the Phenomenon of Adversarial Examples.”
arXiv Preprint arXiv:1608.07690. https://doi.org/10.48550/arXiv.1608.07690.
Tang, Ying-Peng, Guo-Xiang Li, and Sheng-Jun Huang. 2019. “ALiPy:
Active Learning in Python.” arXiv Preprint
arXiv:1901.03802. https://doi.org/10.48550/arXiv.1901.03802.
Tejani, Ali S., Michail E. Klontzas, Anthony A. Gatti, John T. Mongan,
Linda Moy, Seong Ho Park, Charles E. Kahn, et al. 2024. “Checklist
for Artificial Intelligence in
Medical Imaging (CLAIM): 2024
Update.” Radiology: Artificial Intelligence
6 (4): e240300. https://doi.org/10.1148/ryai.240300.
“The California Almond.” n.d.
Accessed February 16, 2024. https://www.waterfordnut.com/almond.html.
Toledo-Marín, J Quetzalcóatl, Geoffrey Fox, James P Sluka, and James A
Glazier. 2021. “Deep Learning Approaches to Surrogates for Solving
the Diffusion Equation for Mechanistic Real-World Simulations.”
Frontiers in Physiology 12: 667828. https://doi.org/10.3389/fphys.2021.667828.
Tsipras, Dimitris, Shibani Santurkar, Logan Engstrom, Alexander Turner,
and Aleksander Madry. 2018. “Robustness May Be at Odds with
Accuracy.” arXiv Preprint arXiv:1805.12152. https://doi.org/10.48550/arXiv.1805.12152.
Uhler, Caroline, Garvesh Raskutti, Peter Bühlmann, and Bin Yu. 2013.
“Geometry of the Faithfulness Assumption in Causal
Inference.” The Annals of Statistics, 436–63.
Van der Laan, Mark J, and Sherri Rose. 2011. Targeted Learning.
Springer. https://doi.org/10.1007/978-1-4419-9782-1.
Van Noorden, Richard, and Jeffrey M. Perkel. 2023.
“AI and Science: What 1,600 Researchers
Think.” Nature 621 (7980): 672–75. https://doi.org/10.1038/d41586-023-02980-0.
Vapnik, Vladimir N. 1999. “An Overview of Statistical Learning
Theory.” IEEE Transactions on Neural Networks 10 (5):
988–99. https://doi.org/10.1109/72.788640.
Vens, Celine, Jan Struyf, Leander Schietgat, Sašo Džeroski, and Hendrik
Blockeel. 2008. “Decision Trees for Hierarchical Multi-Label
Classification.” Machine Learning 73: 185–214. https://doi.org/10.1007/s10994-008-5077-3.
Vigen, Tyler. 2015. Spurious Correlations. Hachette UK.
Vovk, Vladimir, Glenn Shafer, and Ilia Nouretdinov. 2003.
“Self-Calibrating Probability Forecasting.” Advances in
Neural Information Processing Systems 16.
Wachter, Sandra, Brent Mittelstadt, and Chris Russell. 2017.
“Counterfactual Explanations Without Opening the
Black Box: Automated Decisions and the
GDPR.” SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3063289.
Wang, Siyue, Xiao Wang, Pu Zhao, Wujie Wen, David Kaeli, Peter Chin, and
Xue Lin. 2018. “Defensive Dropout for Hardening Deep Neural
Networks Under Adversarial Attacks.” In 2018 IEEE/ACM
International Conference on Computer-Aided Design (ICCAD), 1–8.
IEEE. https://doi.org/10.1145/3240765.3264699.
Watson, David S., and Marvin N. Wright. 2021. “Testing Conditional
Independence in Supervised Learning Algorithms.” Machine
Learning 110 (8): 2107–29. https://doi.org/10.1007/s10994-021-06030-6.
Wolpert, David H. 1996. “The Lack of a Priori Distinctions Between
Learning Algorithms.” Neural Computation 8 (7): 1341–90.
https://doi.org/10.1162/neco.1996.8.7.1341.
Wu, Zonghan, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and
S Yu Philip. 2020. “A Comprehensive Survey on Graph Neural
Networks.” IEEE Transactions on Neural Networks and Learning
Systems 32 (1): 4–24. https://doi.org/10.1109/TNNLS.2020.2978386.
Wynants, Laure, Ben Van Calster, Gary S. Collins, Richard D. Riley,
Georg Heinze, Ewoud Schuit, Marc M. J. Bonten, et al. 2020.
“Prediction Models for Diagnosis and Prognosis of Covid-19:
Systematic Review and Critical Appraisal.” BMJ (Clinical
Research Ed.) 369 (April): m1328. https://doi.org/10.1136/bmj.m1328.
Yang, Jingkang, Kaiyang Zhou, Yixuan Li, and Ziwei Liu. 2024.
“Generalized Out-of-Distribution
Detection: A Survey.”
International Journal of Computer Vision, June. https://doi.org/10.1007/s11263-024-02117-4.
Yoon, Jinsung, James Jordon, and Mihaela Van Der Schaar. 2018.
“GANITE: Estimation of Individualized Treatment Effects Using
Generative Adversarial Nets.” In International Conference on
Learning Representations.
Yu, Kui, Xianjie Guo, Lin Liu, Jiuyong Li, Hao Wang, Zhaolong Ling, and
Xindong Wu. 2020. “Causality-Based Feature Selection: Methods and
Evaluations.” ACM Computing Surveys (CSUR) 53 (5): 1–36.
https://doi.org/10.1145/3409382.
Zeileis, Achim, Torsten Hothorn, and Kurt Hornik. 2008.
“Model-Based Recursive Partitioning.”
Journal of Computational and Graphical Statistics 17 (2):
492–514. https://doi.org/10.1198/106186008X319331.
Zhang, Huan, Hongge Chen, Zhao Song, Duane Boning, Inderjit S Dhillon,
and Cho-Jui Hsieh. 2019. “The Limitations of Adversarial Training
and the Blind-Spot Attack.” arXiv Preprint
arXiv:1901.04684. https://doi.org/10.48550/arXiv.1901.04684.
Zhang, Zhou, Yufang Jin, Bin Chen, and Patrick Brown. 2019.
“California Almond Yield Prediction at the Orchard Level with a
Machine Learning Approach.” Frontiers in Plant Science
10: 809. https://doi.org/10.3389/fpls.2019.00809.