References
Breiman, L. 2001. "Random Forests." Machine Learning. https://doi.org/10.1023/A:1010933404324
Estlund, D. M. 1994. "Opinion Leaders, Independence, and Condorcet’s Jury Theorem." Theory and Decision. https://doi.org/10.1007/BF01079210
Fort, S., H. Hu, and B. Lakshminarayanan. 2019. "Deep Ensembles: A Loss Landscape Perspective." https://arxiv.org/abs/1912.02757
Freund, Y., and R.E. Schapire. 1996. "Experiments with a New Boosting Algorithm." Proceedings of the 13th International Conference on Machine Learning. https://dl.acm.org/doi/10.5555/3091696.3091715
Gal, Y. 2016. "Uncertainty in Deep Learning." PhD thesis, Department of Engineering, University of Cambridge.
Gal, Y., and Z. Ghahramani. 2016. "Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning." 33rd International Conference on Machine Learning (ICML 2016). https://arxiv.org/abs/1506.02142
Guo, C., G. Pleiss, Y. Sun, and K.Q. Weinberger. 2017. "On Calibration of Modern Neural Networks." 34th International Conference on Machine Learning (ICML 2017). https://arxiv.org/abs/1706.04599
Hein, M., M. Andriushchenko, and J. Bitterwolf. 2019. "Why ReLU Networks Yield High-Confidence Predictions Far Away From the Training Data and How to Mitigate the Problem." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2019): 41–50. https://doi.org/10.1109/CVPR.2019.00013
Kendall, A., and Y. Gal. 2017. "What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?" Advances in Neural Information Processing Systems. https://papers.nips.cc/paper/7141-what-uncertainties-do-we-need-in-bayesian-deep-learning-for-computer-vision
Lakshminarayanan, B., A. Pritzel, and C. Blundell. 2017. "Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles." Advances in Neural Information Processing Systems. https://arxiv.org/abs/1612.01474
Liu, Y., M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov. 2019. "RoBERTa: A Robustly Optimized BERT Pretraining Approach." https://arxiv.org/abs/1907.11692
Nado, Z., S. Padhy, D. Sculley, A. D’Amour, B. Lakshminarayanan, and J. Snoek. 2020. "Evaluating Prediction-Time Batch Normalization for Robustness under Covariate Shift." https://arxiv.org/abs/2006.10963
Nalisnick, E., A. Matsukawa, Y.W. Teh, D. Gorur, and B. Lakshminarayanan. 2019. "Do Deep Generative Models Know What They Don’t Know?" 7th International Conference on Learning Representations (ICLR 2019). https://arxiv.org/abs/1810.09136
Ovadia, Y., E. Fertig, J. Ren, Z. Nado, D. Sculley, S. Nowozin, J.V. Dillon, B. Lakshminarayanan, and J. Snoek. 2019. "Can You Trust Your Model’s Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift." 33rd Conference on Neural Information Processing Systems (NeurIPS 2019). https://arxiv.org/abs/1906.02530
Platt, J., and others. 1999. "Probabilistic Outputs for Support Vector Machines and Comparisons to Regularized Likelihood Methods." Advances in Large Margin Classifiers. http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.41.1639
Srivastava, N., G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. 2014. "Dropout: A Simple Way to Prevent Neural Networks from Overfitting." Journal of Machine Learning Research. https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf
van Amersfoort, J., L. Smith, Y.W. Teh, and Y. Gal. 2020. "Uncertainty Estimation Using a Single Deep Deterministic Neural Network." 37th International Conference on Machine Learning (ICML 2020). https://arxiv.org/abs/2003.02037
Warstadt, A., A. Singh, and S.R. Bowman. 2019. "Neural Network Acceptability Judgments." Transactions of the Association for Computational Linguistics. https://doi.org/10.1162/tacl_a_00290
Wilson, A. G., and P. Izmailov. 2020. "Bayesian Deep Learning and a Probabilistic Perspective of Generalization." https://arxiv.org/abs/2002.08791