Graduate Students Seminar
Wednesday, March 11, 2026 · 11:00–11:50 AM
| Session Chair: | Nathan Tamiru |
| Discussant: | Weining Kang |
Speaker 1: Madison Christ
- Title
- Stability and Periodicity of Coupled Hyperbolic Systems
- Abstract
- The existence of resonance in periodic partial differential equations (PDEs) has applications to many areas, such as acoustics and aerospace dynamics. In undamped periodic wave equations, infinitely many forcing functions cause resonance, while in damped periodic wave equations resonance cannot occur. We investigate the stability and periodicity of a coupled hyperbolic system with partial damping. By examining the dynamics of a corresponding Cauchy problem, we can obtain information about the dynamics of the periodic system; this rests on a result stating that the existence of resonance in the periodic system is related to the stability of the Cauchy problem. We reduce to a first-order system by first determining the finite energy space corresponding to the problem and then choosing the domain of our operator so that the operator is maximally dissipative. Using classical results such as the Lumer–Phillips and Lax–Milgram theorems, and examining the spectrum of the operator, we show well-posedness of the Cauchy problem and strong stability of the induced semigroup.
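As an illustrative sketch of the reduction described above (the specific coupled system and damping in the talk may differ; the equation, damping coefficient, and spaces below are assumptions for a single damped wave equation):

```latex
% Assumed model: a wave equation with damping b \ge 0 on a domain \Omega,
%   u_{tt} - \Delta u + b\, u_t = 0, \quad u|_{\partial\Omega} = 0,
% reduced to first order by setting U = (u, u_t)^{\top} on the
% finite energy space \mathcal{H} = H_0^1(\Omega) \times L^2(\Omega):
\frac{dU}{dt} = \mathcal{A} U, \qquad
\mathcal{A} = \begin{pmatrix} 0 & I \\ \Delta & -b\,I \end{pmatrix}, \qquad
U(0) = U_0 \in \mathcal{H}.
% If the domain of \mathcal{A} is chosen so that \mathcal{A} is maximally
% dissipative, the Lumer--Phillips theorem yields a contraction semigroup
% on \mathcal{H}, giving well-posedness of the Cauchy problem.
```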
Speaker 2: Abhisek Chatterjee
- Title
- Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
- Abstract
- Deep learning models are now widely deployed in high-stakes domains, yet standard neural networks provide point predictions with no principled quantification of uncertainty. Bayesian neural networks offer a coherent way to represent model uncertainty but are often computationally expensive. In this talk, I present the framework of “dropout as a Bayesian approximation,” originally proposed by Gal and Ghahramani (2016), which interprets the widely used dropout regularization technique as approximate variational inference in a deep Gaussian process model. This perspective leads to a simple procedure, Monte Carlo dropout, in which keeping dropout active at test time and averaging multiple stochastic forward passes yields an efficient approximation to the posterior predictive distribution. This mitigates the problem of representing uncertainty in deep learning without sacrificing computational complexity or test accuracy. We will review empirical results demonstrating improved predictive performance and calibrated uncertainty estimates on vision and regression benchmarks. I will conclude by discussing practical considerations, limitations of the approximation, and how this line of work has influenced subsequent research in Bayesian deep learning and uncertainty quantification.
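The Monte Carlo dropout procedure described in the abstract can be sketched in a few lines. This is a minimal illustration, not the talk's implementation: the tiny network, its random untrained weights, and the dropout rate are all assumptions chosen for brevity. The key point is only that the dropout mask is resampled on every forward pass at prediction time, and the spread of the stochastic predictions serves as an uncertainty estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny one-hidden-layer network with fixed random weights
# (an assumption for illustration; a trained network would be used in practice).
W1 = rng.normal(size=(1, 32))
b1 = np.zeros(32)
W2 = rng.normal(size=(32, 1))
b2 = np.zeros(1)

def forward(x, p_drop=0.5):
    """One stochastic forward pass with dropout kept ACTIVE at test time."""
    h = np.maximum(x @ W1 + b1, 0.0)        # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop     # fresh Bernoulli dropout mask
    h = h * mask / (1.0 - p_drop)           # inverted-dropout scaling
    return h @ W2 + b2

def mc_dropout_predict(x, T=100):
    """Average T stochastic passes; the sample std is the uncertainty estimate."""
    samples = np.stack([forward(x) for _ in range(T)])
    return samples.mean(axis=0), samples.std(axis=0)

x = np.array([[0.5]])
mean, std = mc_dropout_predict(x)
```

Here `mean` approximates the posterior predictive mean and `std` quantifies the model's uncertainty at `x`; with dropout disabled, every pass would be identical and `std` would collapse to zero.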