Calendar of Events

Deeply Uncertain: (how) can we make deep learning tools trustworthy for scientific measurements?

Date and Time: Tuesday, May 4, 2021, 1:00 pm
Location: Zoom: https://us02web.zoom.us/j/81073398605?pwd=T1JQa1VualBnQjBDdVJoMzlQQTVSdz09

Speaker: Brian Nord (Fermilab)

Abstract: Artificial Intelligence (AI), which includes machine learning and deep learning, refers to a set of techniques that rely primarily on the data itself to construct a quantitative model. AI has been under development for about three quarters of a century, but research and applications have seen a recent resurgence. The current (third) wave of AI progress is marked by extraordinary results in, for example, image analysis, language translation, and machine automation. Despite this modest definition, AI's potential to disrupt technologies, economies, society, and even science is often presented as unmatched in modern times. However, alongside the promise of AI, significant challenges must be overcome before it reaches a degree of reliability on par with more traditional modeling methods.

In particular, uncertainty quantification metrics derived from deep neural networks have yet to be made physically interpretable. For example, when a convolutional neural network is used to measure values from an image (e.g., regression of galaxy properties), the error estimates do not necessarily match those from an MCMC likelihood fit. In this presentation, I will discuss the landscape of uncertainty quantification in deep learning, as well as computational experiments in a physical context that demonstrate a mismatch between errors derived directly from deep learning methods and those derived through traditional error propagation. Before we can confidently apply deep learning tools to the direct measurement of physical properties, we will need statistically robust error estimation methods.
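
The mismatch described above can be illustrated numerically. The Python sketch below is my own minimal illustration, not the speaker's experiment: it uses Monte Carlo dropout, one common way to extract an uncertainty estimate from a network, and compares its predictive spread to the closed-form posterior of a toy linear model with known noise, which is what an MCMC likelihood fit (with a flat prior) would recover. PyTorch, the network architecture, the dropout rate, and all other settings are assumptions made for illustration.

    # Sketch: MC-dropout uncertainty vs. the exact posterior of a toy
    # linear model. All settings here are illustrative assumptions.
    import numpy as np
    import torch
    import torch.nn as nn

    rng = np.random.default_rng(0)

    # Toy "physical" data: y = 2.0 * x + Gaussian noise of known sigma.
    sigma = 0.5
    x = rng.uniform(-1, 1, size=200).astype(np.float32)
    y = (2.0 * x + rng.normal(0, sigma, size=200)).astype(np.float32)
    X = torch.from_numpy(x).unsqueeze(1)
    Y = torch.from_numpy(y).unsqueeze(1)

    # Small regression net with a dropout layer (for MC dropout later).
    net = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Dropout(0.2),
                        nn.Linear(64, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(2000):
        opt.zero_grad()
        loss = nn.functional.mse_loss(net(X), Y)
        loss.backward()
        opt.step()

    # MC-dropout spread at a test point: keep dropout active (train mode)
    # and average over stochastic forward passes.
    x_test = torch.tensor([[0.5]])
    net.train()
    with torch.no_grad():
        draws = torch.cat([net(x_test) for _ in range(500)])
    print(f"MC dropout:     mean={draws.mean().item():.3f}  "
          f"std={draws.std().item():.3f}")

    # Reference: exact posterior std of the mean prediction for y = b*x
    # with known noise and a flat prior on b, i.e. what an MCMC
    # likelihood fit would converge to.
    slope_var = sigma**2 / np.sum(x * x)   # posterior variance of the slope
    pred_std = np.sqrt(slope_var) * 0.5    # propagated to x = 0.5
    print(f"Likelihood fit: pred std at x=0.5 is {pred_std:.3f}")
    # The two stds generally disagree; nothing in the dropout procedure
    # calibrates the network's spread to the likelihood-based error bar.

The point of the comparison is that the dropout spread is a heuristic: its magnitude depends on arbitrary choices (dropout rate, architecture) that the likelihood-based error bar does not.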
