Talks

Invited talks, conference presentations, and guest lectures

Invited Talks & Presentations


2025

Advanced Sampling Methods for Machine Learning
Guest Lecture | California State University, Los Angeles

April 14, 2025 — Los Angeles, CA


2024

Uncertainty Quantification for SciML using Deep Operator Networks
Talk | SIAM Annual Meeting 2024

March 2024 — Spokane, WA

Part of MS66, a mini-symposium on New Methods in Probabilistic and Science-Guided Machine Learning.

Video


2023

Science-guided Machine Learning for Forward, Inverse, and Control Problems
Invited Talk | Grand Valley State University

March 2023 — Michigan, USA

Scientific machine learning (SciML) is an interdisciplinary field that solves complex scientific problems by combining computational and algorithmic techniques with machine learning methods. This talk covered the most recent developments in SciML, highlighting limitations of current methodologies and exploring new ideas to address them.

Slides


2022

On Time-stepping Methods for Gradient-flow Optimization
Talk | CAIMS/SCMAI 2022

March 2022 — Ontario, Canada

Gradient-based optimization methods are essential to neural network training in many applications. The evolution of the network parameters can be viewed as an ODE system evolving in pseudo-time toward a local minimum of the objective function; this interpretation allows different time-stepping schemes to be used for the optimization.
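The pseudo-time viewpoint is easy to sketch: gradient descent with step size h is exactly the forward Euler discretization of the gradient-flow ODE dθ/dt = −∇L(θ), and swapping in another explicit scheme yields a different optimizer. A minimal illustration (the quadratic objective, step size, and iteration count are hypothetical, not taken from the talk):

```python
import numpy as np

# Gradient flow: d(theta)/dt = -grad L(theta).
# Forward Euler with pseudo-time step h recovers plain gradient descent;
# another explicit scheme (here, explicit midpoint) gives another optimizer.

def grad_L(theta):
    # Hypothetical quadratic objective L(theta) = 0.5 * theta^T A theta
    A = np.array([[3.0, 0.0], [0.0, 1.0]])
    return A @ theta

def forward_euler_step(theta, h):
    return theta - h * grad_L(theta)            # plain gradient descent

def midpoint_step(theta, h):
    half = theta - 0.5 * h * grad_L(theta)      # explicit midpoint (RK2)
    return theta - h * grad_L(half)

theta = np.array([1.0, 1.0])
for _ in range(100):
    theta = midpoint_step(theta, h=0.1)
# theta approaches the minimizer at the origin
```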

Slides Preprint


2021

Linearly Implicit General Linear Methods
Talk | SIAM Conference on Computational Science and Engineering 2021

March 2021 — Atlanta, GA (Virtual)

Linearly implicit Runge-Kutta methods strike a practical balance between the stability of implicit treatment for stiff systems and computational cost. We extend this class to general linear methods, encompassing both multi-stage and multi-step schemes.
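For context, the simplest member of this family is the linearly implicit (Rosenbrock) Euler method, which replaces the nonlinear solve of a fully implicit step with a single linear solve per step. A minimal sketch on a hypothetical stiff test problem (the matrix and step size are illustrative only):

```python
import numpy as np

# Rosenbrock-Euler, the simplest linearly implicit method:
#   (I - h*J) k = f(y_n),   y_{n+1} = y_n + h*k
# One linear solve per step replaces the Newton iteration of a fully
# implicit method: this is the cost/stability trade-off in question.

A = np.array([[-1000.0, 0.0], [0.0, -1.0]])   # hypothetical stiff system y' = A y

def f(y):
    return A @ y

def jacobian(y):
    return A                                   # exact Jacobian for a linear problem

def rosenbrock_euler_step(y, h):
    J = jacobian(y)
    k = np.linalg.solve(np.eye(len(y)) - h * J, f(y))
    return y + h * k

y = np.array([1.0, 1.0])
for _ in range(10):
    y = rosenbrock_euler_step(y, h=0.1)        # stable even though |h*lambda| = 100
```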

Program Abstract


2019

Discrete Multirate GARK Schemes
Talk | SIAM Conference on Computational Science and Engineering 2019

March 2019 — Spokane, WA

Multirate time integration schemes apply different step sizes to different components of a system based on the local dynamics of the components. This talk focused on high-order multirate methods using the theoretical framework of generalized additive Runge-Kutta (GARK) methods.
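The basic multirate idea can be sketched with forward Euler: the fast component takes m micro-steps of size h/m inside each macro-step of size h taken by the slow component. A toy illustration (the coupled system, rates, and step sizes are hypothetical, and the GARK framework discussed in the talk is considerably more general):

```python
import numpy as np

# Multirate forward Euler sketch: within one macro-step of size h for the
# slow component, the fast component is advanced with m micro-steps of
# size h/m, keeping the slow value frozen over the macro-step.

def slow_rhs(s, q):       # slow component, weakly coupled to the fast one
    return -s + 0.1 * q

def fast_rhs(q, s):       # fast component with much faster local dynamics
    return -50.0 * q + s

def multirate_euler_step(s, q, h, m):
    s_new = s + h * slow_rhs(s, q)          # one macro-step for the slow part
    for _ in range(m):                      # m micro-steps for the fast part
        q = q + (h / m) * fast_rhs(q, s)    # slow value frozen at s
    return s_new, q

s, q = 1.0, 1.0
for _ in range(20):
    s, q = multirate_euler_step(s, q, h=0.05, m=10)
```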

Preprint Paper Video Slides Program