Applied Math Collective
The Applied Math Collective is a graduate-student-led seminar that aims to provide an informal platform where speakers discuss general-interest, “SIAM Review”-style applied math papers. We meet Thursdays at 4pm when the Department Colloquium does not have a speaker. The seminar welcomes all students (graduate and undergraduate), postdocs, and faculty, and has been running since Fall 2016 thanks to the initiative of Christel Hohenegger and Braxton Osting.
If you are interested in giving a talk or simply joining the mailing list, please send an email to RK Yoon (yoon at math dot utah dot edu).
Spring 2021 talks (via Zoom, Wed 3pm)
Zoom info: https://utah.zoom.us/j/96815843924 (please email RK or Chee Han for the passcode)
Wed February 3, Yiming Zhu, An introduction to adversarial bandits
Abstract: Bandit learning provides an effective approach to studying sequential decision making problems. In this talk, we will give a brief introduction to some basic results in adversarial bandits. Time permitting, we will also discuss some of its applications.
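For a concrete flavor of the adversarial setting, here is a minimal numpy sketch of the classic Exp3 algorithm (exponential weights with importance-weighted reward estimates); the reward sequence and parameters below are illustrative, not from the talk.

```python
import numpy as np

def exp3(rewards, gamma=0.1):
    """Exp3 on an adversarial bandit: rewards has shape (T, K), entries in [0, 1].
    Only the reward of the pulled arm is revealed each round."""
    T, K = rewards.shape
    rng = np.random.default_rng(0)
    w = np.ones(K)
    total = 0.0
    for t in range(T):
        # Mix exponential weights with uniform exploration.
        p = (1 - gamma) * w / w.sum() + gamma / K
        a = rng.choice(K, p=p)
        r = rewards[t, a]
        total += r
        # Importance weighting keeps the estimate unbiased for every arm.
        r_hat = np.zeros(K)
        r_hat[a] = r / p[a]
        w *= np.exp(gamma * r_hat / K)
        w /= w.max()  # rescale to avoid numerical overflow
    return total

# Toy instance: 3 arms, arm 2 slightly better on average.
rewards = np.random.default_rng(1).uniform(0, 1, (5000, 3))
rewards[:, 2] = np.clip(rewards[:, 2] + 0.2, 0, 1)
print(exp3(rewards))
```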
Wed February 17
Wed March 3
Wed March 17
Wed March 31
Wed April 7
Wed April 21
Wed May 5
Fall 2020 talks (held via Zoom)
Zoom info: https://utah.zoom.us/j/98655692471 (please email RK for the passcode)
Thu September 10 - Graduate student summer research experience (Nathan Willis and RK Yoon)
Thu September 17 - First year talks
Thu September 24, Trent DeGiovanni, On Cooking a Roast
Abstract: A question of central importance to cooks preparing autumn meals is: how long does a roast need to be cooked? Most cookbooks give a cooking time per unit weight; more relevant today, most online recipes ask that you get a roast of a fixed weight. In the event one cannot find a perfectly weighed roast, the question of cooking time remains relevant. Under certain assumptions, we show that the time to cook a roast is proportional to the two-thirds power of its weight. We also cook some roast. This talk is based on Murray Klamkin’s 1961 SIAM Review article “On Cooking a Roast.”
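Why two-thirds? The exponent comes from a heat-diffusion scaling argument; the following is a sketch of the idea, not the paper's full computation.

```latex
% For geometrically similar roasts of characteristic size L, heat
% diffusion gives a cooking time t ~ L^2 / kappa (kappa = thermal
% diffusivity), while the weight scales as W ~ rho L^3.  Eliminating L:
\[
  t \sim \frac{L^{2}}{\kappa}, \qquad W \sim \rho L^{3}
  \quad\Longrightarrow\quad
  t \sim \kappa^{-1}\left(\frac{W}{\rho}\right)^{2/3} \propto W^{2/3}.
\]
```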
Thu October 1, Chee Han Tan, Deep Learning - An Introduction for Applied Mathematicians
Abstract: The speaker will attempt to provide a brief introduction to the basic ideas underlying deep learning from an applied math perspective. We begin by explaining what it means to create and train an artificial neural network (ANN) through a data-fitting example with a nonlinear separator, and then define a general network. We will describe the stochastic gradient method, a variation of the traditional optimisation technique (specifically gradient descent) that is designed to cope with very large sets of training data, and explain how to apply it to train an ANN efficiently using back propagation. Time permitting, we will discuss a large scale image classification problem. This talk is based on the 2019 SIAM Review article “Deep Learning: An Introduction for Applied Mathematicians” by Catherine F. Higham and Desmond J. Higham.
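The paper's own demonstration code is in MATLAB; below is a minimal numpy sketch of the same idea, stochastic gradient with backpropagation on a tiny sigmoid network. The data, architecture, and step size are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: ten 2D points, labeled by which side of x + y = 1 they fall on.
X = rng.uniform(0, 1, size=(10, 2))
y = (X[:, 0] + X[:, 1] > 1).astype(float)

# One hidden layer of 3 sigmoid units, one sigmoid output.
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)
eta = 0.5  # learning rate

for step in range(10000):
    i = rng.integers(len(X))          # stochastic: one random sample per step
    x, t = X[i], y[i]
    # Forward pass.
    a1 = sigmoid(W1 @ x + b1)
    a2 = sigmoid(W2 @ a1 + b2)
    # Backward pass: backpropagate the squared-error loss.
    d2 = (a2 - t) * a2 * (1 - a2)
    d1 = (W2.T @ d2) * a1 * (1 - a1)
    W2 -= eta * np.outer(d2, a1); b2 -= eta * d2
    W1 -= eta * np.outer(d1, x);  b1 -= eta * d1
```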
Thu October 8 - Rebecca Hardenbrook - Polarizing Parties: A Mathematical Model for U.S. Politics
Abstract: Could we have mathematically predicted the Trump presidency? Is there a model in which we can definitively say who the 2020 president will be? In this talk we will not answer either of those questions. Instead, we will look at a new dynamical model introduced by Yang et al. for predicting ideological positions of political parties utilizing a “satisficing” decision-making function.
This talk will be based on the 2020 paper by Yang et al. titled ‘Why are U.S. Parties So Polarized? A “Satisficing” Dynamical Model’.
Thu October 15 - Akil Narayan - Constructing Least-Squares Polynomial Approximations
Abstract: Polynomial approximations of functions that are constructed using a least-squares approach are a ubiquitous technique in numerical computation and data analysis. One of the simplest ways to generate data for least-squares problems is with random sampling of a function. We discuss theory and algorithms for stability of the least-squares problem using random samples, with the goal of attaining a “near-optimal” approximation. The techniques discussed leverage concentration inequalities to yield novel insight and new algorithms that challenge typical intuition about how data for least-squares problems should be collected.
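The basic setup is easy to try out; here is a minimal numpy sketch of least-squares polynomial approximation from random samples. The target function, degree, and uniform sampling are illustrative (the talk's "near-optimal" results concern more carefully chosen sampling measures).

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)

f = lambda x: np.exp(x) * np.sin(3 * x)   # hypothetical target function
deg, m = 10, 60                           # polynomial degree; m > deg + 1 oversamples

# Random sample points and the Legendre design matrix.
x = rng.uniform(-1, 1, m)
A = legendre.legvander(x, deg)
c, *_ = np.linalg.lstsq(A, f(x), rcond=None)

# Check the approximation error on a fine grid.
xg = np.linspace(-1, 1, 1000)
err = np.max(np.abs(legendre.legval(xg, c) - f(xg)))
print(f"max error on grid: {err:.2e}")
```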
Thu October 22 - Amanda Hampton (CU Boulder) - Anti-integrability for Quadratic Volume Preserving Maps
Abstract: The dynamics of volume preserving maps can model a variety of mixing problems, ranging from microscopic granular mixing to the dispersion of pollutants through our planet’s atmosphere. We study a general quadratic volume preserving map using a concept first introduced thirty years ago in the field of solid-state physics: the anti-integrable (AI) limit. In the AI limit, orbits degenerate to a sequence of symbols and the dynamics reduces to the shift operator on the symbols. Such symbolic dynamics is a pure form of chaos. An advantage of this approach is that one can prove the existence of infinitely many orbits and, using a version of the contraction mapping theorem, find orbits that continue away from the limit to become deterministic orbits of the original system. A novelty of the AI limit in our case is that one often needs to use contraction arguments to find orbits at the AI limit itself, as well as to determine necessary conditions for these orbits to persist under perturbation. At the AI limit we can visualize orbits using one-dimensional maps (or, more precisely, “relations”). Upon perturbation these become full orbits in three dimensions with intriguing Cantor-like structures. A future goal is to continue these orbits to a (nearly) integrable case to understand how chaotic structures undergo bifurcation to regularity.
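To see what "the dynamics reduces to the shift operator" means in the simplest textbook setting (not the talk's quadratic map), consider the doubling map x -> 2x mod 1, which acts on binary digit sequences as a shift:

```python
def binary_symbols(x, n):
    """First n binary digits of x in [0, 1): the symbol sequence of x
    under the doubling map x -> 2x mod 1."""
    digits = []
    for _ in range(n):
        x *= 2
        digits.append(int(x >= 1))
        x %= 1.0
    return digits

x0 = 0.3
print(binary_symbols(x0, 8))            # symbol sequence of x0
print(binary_symbols(2 * x0 % 1.0, 8))  # applying the map = shifting the symbols left
```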
Thu October 29 - RK Yoon - Intro to Generative Adversarial Networks (GAN)
Abstract: Generative Adversarial Networks (GANs) are known as one of the most powerful generative models built from neural networks and are widely used in unsupervised learning. A GAN consists of two parts, a generator and a discriminator. The discriminator determines whether a sample is real or fake, while the generator creates samples that appear to come from the same distribution as the true data. The GAN is trained by having the two networks compete with each other. In this talk, I’ll present how a GAN works and show simple numerical simulations. I’ll also introduce several modified GANs that improve performance on target datasets.
This talk is based on the paper: https://arxiv.org/pdf/1701.00160.pdf
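As a concrete picture of the two-player training loop, here is a minimal PyTorch sketch on a toy 1D problem. The architecture, learning rates, and target distribution N(3, 1) are illustrative choices; the non-saturating generator loss follows the linked tutorial.

```python
import torch
from torch import nn

torch.manual_seed(0)

# Toy task: learn to generate samples from N(3, 1) in one dimension.
G = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(5000):
    real = 3 + torch.randn(64, 1)     # samples from the true distribution
    fake = G(torch.randn(64, 2))      # generator maps noise to samples

    # Discriminator step: label real as 1, fake as 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Generator step: try to fool the discriminator (non-saturating loss).
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()

print(G(torch.randn(1000, 2)).mean().item())  # should drift toward 3
```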
Thu November 5 - (No AMC: Colloquium by Nilima Nigam)
Thu November 12 - (No AMC: Colloquium by Christine Berkesch)
Thu November 19 - Nathan Willis - Forecasting Elections Using Compartmental Models of Infection
Abstract: In the last several months there has been an intense presidential election and a worldwide pandemic. Therefore, in this moment, it only makes sense to use an epidemiology model to forecast elections. In this talk we will discuss the extension of a familiar SIS-model (susceptible-infected-susceptible) to what I will be calling a DUR-model (democratic-undecided-republican) at the state-level. In the paper, predictions from the model are compared to results of the 2012 and 2016 gubernatorial, senatorial, and presidential elections. On November 2nd the authors posted final predictions for the 2020 elections and we will review how they fared!
This talk is based on the paper: https://arxiv.org/pdf/1811.01831.pdf
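To make the SIS analogy concrete, here is a schematic sketch of such a compartmental system; this is my own hedged reading of the setup, not necessarily the paper's exact equations. Undecided voters commit to a party at rates proportional to contact with committed voters, and committed voters revert to undecided at rate gamma.

```python
from scipy.integrate import solve_ivp

beta_D, beta_R, gamma = 0.5, 0.45, 0.1   # illustrative rates

def rhs(t, y):
    D, U, R = y  # fractions of Democratic, undecided, Republican voters
    return [beta_D * D * U - gamma * D,
            gamma * (D + R) - (beta_D * D + beta_R * R) * U,
            beta_R * R * U - gamma * R]

# Start from 30% D, 40% undecided, 30% R; total is conserved.
sol = solve_ivp(rhs, (0, 100), [0.3, 0.4, 0.3])
print(sol.y[:, -1])  # long-run (D, U, R) shares
```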
Thu December 3 - TBA
Fall 2019 talks (LCB 222 unless specified otherwise)
Thu Aug 29 2019, Title: Organizational meeting
Thu Sep 5 2019 (department colloquium)
Thu Sep 12 2019, Hyunjoong Kim, Nathan Willis, and RK Yoon, Title: Summer workshop experiences
Thu Sep 19 2019 (department colloquium)
Thu Sep 26 2019, Elias Clark, Title: The German Tank Problem
Abstract: The German Tank Problem is a classic example of the application of statistical analysis to economic intelligence. During the Second World War, German tanks were marked with sequential serial numbers. By analyzing captured and destroyed tanks, the USA and UK were able to make surprisingly accurate estimates of German tank production. This talk will discuss how these estimates were made, and other applications of serial number analysis.
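The classic frequentist estimate is N_hat = m + m/k - 1, where m is the largest serial number among k observed tanks and N is the true production count. A quick simulation (illustrative parameters) confirms the estimator is unbiased:

```python
import numpy as np

rng = np.random.default_rng(0)
N, k = 300, 5   # true number of tanks; sample size

estimates = []
for _ in range(10000):
    # Observe k distinct serial numbers drawn from 1..N; record the maximum.
    m = rng.choice(np.arange(1, N + 1), size=k, replace=False).max()
    estimates.append(m + m / k - 1)
print(np.mean(estimates))   # close to the true N = 300
```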
Thu Oct 3 2019, RK Yoon, Title: Introduction to reinforcement learning
Abstract: These days, reinforcement learning (RL) is widely studied across many fields. The most popular example showing the power of reinforcement learning is AlphaGo, an AI-powered system that beat a world champion of the complex board game Go four games to one.
Unlike other machine learning algorithms, which rely on complete models or exemplary supervision, reinforcement learning focuses on goal-directed learning from the interaction between an agent and its environment, formalized through states, actions, and rewards.
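For a taste of the states/actions/rewards loop, here is a minimal tabular Q-learning sketch on a toy chain environment; the environment and hyperparameters are illustrative, not from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# States 0..4 on a chain; actions 0 = left, 1 = right; reward 1 for
# reaching the terminal state 4.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1

for episode in range(500):
    s = 0
    while s != n_states - 1:
        # Epsilon-greedy action selection.
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s_next = max(s - 1, 0) if a == 0 else s + 1
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Q-learning update: bootstrap from the best next-state value.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

# Learned greedy policy at non-terminal states: should be all 1s (go right).
print(Q.argmax(axis=1)[:-1])
```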
Thu Oct 10 2019 (Fall Break)
Thu Oct 17 2019 (department colloquium)
Thu Oct 24 2019
Thu Oct 31 2019 (department colloquium)
Thu Nov 7 2019, China Mauck, Title: TBA, Abstract: TBA
Thu Nov 14 2019, Sergazy Nurbavliyev, Title: TBA, Abstract: TBA
Thu Nov 21 2019, Ryleigh Moore, Title: TBA, Abstract: TBA
Thu Nov 28 2019, (Thanksgiving)
Thu Dec 5 2019
Webmaster: Fernando Guevara Vasquez. (Created by Yekaterina Epshteyn and FGV)