January 22. Special joint Applied Math and Stochastics Seminar. Note
Time 3pm and Place LCB 219.
Speaker: Wenjia Jing,
Department of Mathematics, The University of Chicago
Title: Imaging in random media: modeling and sifting noisy signals
Abstract: In imaging problems, waves propagating through a medium are
recorded and used to locate reflectors, sources, etc. in the medium. The
recorded signals are often incoherent due to multiple scattering by
inhomogeneities in the medium. In optics, additional incoherencies appear
due to the loss of phase information in the measurements. Imaging with such
incoherent signals is challenging. In this talk, through the examples of
passive array imaging in random waveguides and so-called "transient
opto-elastography," I will discuss how to model such noisy signals and how
to "sift" them, namely using correlation-based techniques, to extract
useful information for imaging purposes.
This talk is based mainly on joint
works with Habib Ammari and Josselin Garnier.
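A small numerical illustration of the correlation idea (a toy sketch with assumed parameters, not material from the talk): two noisy recordings of the same pulse are individually incoherent, yet the peak of their cross-correlation still recovers the relative travel time.

    import numpy as np

    # Toy correlation-based "sifting" (illustrative only): two noisy traces
    # of one pulse with an unknown relative delay; the peak of their
    # cross-correlation recovers the delay.
    rng = np.random.default_rng(0)
    n, dt, true_lag = 2048, 1e-3, 0.12            # samples, step (s), delay (s)
    t = np.arange(n) * dt
    pulse = lambda t0: np.exp(-((t - t0) / 0.02) ** 2)
    u1 = pulse(0.5) + 0.2 * rng.standard_normal(n)             # receiver 1
    u2 = pulse(0.5 + true_lag) + 0.2 * rng.standard_normal(n)  # receiver 2

    c = np.correlate(u2, u1, mode="full")          # C(k) = sum_n u2[n+k] u1[n]
    lags = (np.arange(2 * n - 1) - (n - 1)) * dt
    print("estimated lag:", lags[np.argmax(c)])    # close to 0.12 s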
January 27 (Wednesday). Special joint Statistics/Stochastics
and Applied Math Seminar.
Note
Time 3pm and Place LCB 219.
Speaker: Giovanni Motta,
Department of Statistics, Columbia University
Title: Local polynomial estimation of time-varying matrices for multivariate locally stationary processes
Abstract: In this paper we propose two novel non-parametric approaches to estimate,
respectively, the time-varying covariance and the time-varying correlation matrix
of a multivariate locally stationary process. They both improve our previous
estimators of the time-varying covariance and the time-varying correlation
matrices.
Our previous approach to estimate a time-varying covariance matrix is based
on a kernel-type smoother with the same bandwidth for all the entries of the
matrix. This estimator is positive definite by construction. However, the use of
one common bandwidth for all the entries of the covariance matrix appears to
be restrictive, as these curves are in general characterized by different degrees of
smoothness. A possible approach is to use a kernel-type estimator based on different
smoothing parameters. However, this estimator is not in general positive
definite, unless some severe restrictions are imposed on the bandwidths. Our
novel estimator adapts to the different degrees of smoothness of the entries of
the covariance matrix and, at the same time, is positive definite by construction.
Our previous approach to estimate a time-varying correlation matrix uses the same
bandwidth for both numerator (covariance) and denominator (variances). This
approach guarantees that the resulting estimator is a well defined correlation
matrix. However, the use of one common bandwidth for both numerator and
denominator does not allow the covariance and the variances to have different
degrees of smoothness. On the other hand, a kernel-type estimator based on
different smoothing parameters for numerator and denominator might deliver
unbounded correlations and the resulting estimated correlation matrix is not
necessarily positive semi-definite. Moreover, the estimated bandwidths that
are optimal for estimating the covariance and the variances are not necessarily
optimal for estimating the ratio. The estimator we propose in this paper is a
local average of the signs of the cross-products: it does not require distinguishing
between numerator and denominator and, at the same time, is positive definite
by construction.
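For context, a minimal sketch of the single-bandwidth kernel smoother described above (with assumed toy data; this is the earlier estimator the paper improves on, not the new sign-based one): positive semi-definiteness holds by construction because the estimate is a nonnegatively weighted sum of rank-one matrices X_t X_t'.

    import numpy as np

    def tv_cov(X, u, h):
        """Kernel estimate of the covariance matrix at rescaled time u in (0,1).
        X: (T, d) array of zero-mean observations; h: common bandwidth."""
        T = X.shape[0]
        times = (np.arange(T) + 0.5) / T
        w = np.exp(-0.5 * ((times - u) / h) ** 2)   # Gaussian kernel weights
        w /= w.sum()
        return (X * w[:, None]).T @ X               # sum_t w_t X_t X_t'

    # Toy check: the standard deviation ramps from 1 to 3 over time.
    rng = np.random.default_rng(1)
    T = 2000
    scale = np.linspace(1.0, 3.0, T)
    X = scale[:, None] * rng.standard_normal((T, 2))
    print(np.diag(tv_cov(X, 0.1, 0.05)))   # ~ 1.2**2 in both coordinates
    print(np.diag(tv_cov(X, 0.9, 0.05)))   # ~ 2.8**2 in both coordinates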
January 29. Commutative Algebra Seminar, 3:10pm - 4:00pm, LCB
222
Speaker: Graeme Milton,
Department of Mathematics, The University of Utah
Title: Superfunctions and the algebra of subspace collections
Abstract: A natural connection between rational functions of several real or complex variables and subspace collections is explored. A new class of functions, superfunctions, is introduced; these are the counterpart of functions at the level of subspace collections. Operations on subspace collections are found to correspond to various operations on rational functions, such as addition, multiplication and substitution. It is established that every rational matrix-valued function which is homogeneous of degree 1 can be generated from an appropriate, but not necessarily unique, subspace collection: the mapping from subspace collections to rational functions is onto, but not one-to-one. For some applications superfunctions may be more important than functions, as they incorporate more information about the physical problem, yet can be manipulated in much the same way as functions. Orthogonal subspace collections occur in many physical problems, but we'll show an example of the use of non-orthogonal ones, which, when substituted into orthogonal ones, greatly accelerates the convergence of fast Fourier transform methods.
February 5. Special Applied Math Seminar. Note
Time 3:55pm and Place LCB 225.
Speaker: Francois Monard,
Department of Mathematics, University of Michigan
Title: Geodesic X-ray transforms on surfaces and tensor tomography
Abstract: In this talk, we will study what can be reconstructed about a function (or
a tensor) on a surface,
from knowledge of its integrals along a given family of geodesic curves,
that is, its X-ray transform. The "straight-line" version of this question
was first answered by J. Radon in 1917, and its solution has formed the
theoretical backbone of computerized tomography since the 1960s. In
practice, variations of the refractive index do occur and bend photon paths
in optics-based imaging, and this requires that the problem be studied for
general curves.
In a geometric setting beyond that of the Radon transform, examples of
situations impacting the
qualitative invertibility and stability of these transforms are (i) the
case of "simple" surfaces, (ii) the
presence of conjugate points/caustics, and (iii) the presence of trapped
geodesics. We will discuss
positive and negative theoretical results occurring when one considers each
of the scenarios above, with numerical illustrations throughout.
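As a point of reference, here is a minimal sketch (my illustration, not the talk's code) of the straight-line X-ray transform: each datum is the integral of a function along one line. In the talk's setting, the lines would be replaced by solutions of the geodesic equations.

    import numpy as np

    def f(x, y):
        # Indicator of a disk of radius 0.5 centered at the origin.
        return ((x**2 + y**2) < 0.25).astype(float)

    def xray(f, a, s, nt=4001):
        """Integral of f along the line at signed distance s from the origin
        with unit normal (cos a, sin a)."""
        t = np.linspace(-1.0, 1.0, nt)               # arclength parameter
        x = s * np.cos(a) - t * np.sin(a)            # s * normal + t * tangent
        y = s * np.sin(a) + t * np.cos(a)
        return f(x, y).sum() * (t[1] - t[0])         # Riemann sum

    # Chord length of the disk at offset s is 2*sqrt(0.25 - s**2).
    print(xray(f, 0.3, 0.0))   # ~ 1.0
    print(xray(f, 0.3, 0.4))   # ~ 0.6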
February 8
Speaker: Gunilla Kreiss,
Department of Information Technology, Uppsala University
Title: Superconvergence of finite difference solutions of PDEs
Abstract: A stable finite difference method applied to a problem with a smooth solution
allows for a straightforward estimate of convergence rate based on the convergence
rate of the
local truncation error. In many high order cases the local truncation error at a few
points near boundaries
is significantly larger than at interior points. The straightforward estimate will
predict a rate determined by the
slowest converging local truncation error. Convergence in numerical computations is
often faster than this
prediction. We explore ways to improve our understanding of such superconvergence.
We will in particular
consider the second order wave equation.
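A minimal sketch (my illustration, not from the talk) of how an observed convergence rate is measured in practice: run the scheme on a sequence of grids and estimate p in err(h) ~ C h^p from successive refinements. Superconvergence is the situation where the observed p beats the rate predicted by the worst local truncation error.

    import numpy as np

    def error(n):
        """Max error of the 2nd-order central difference for u'' with u = sin."""
        x = np.linspace(0.0, np.pi, n + 1)
        h = x[1] - x[0]
        u = np.sin(x)
        d2u = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2   # interior points only
        return np.abs(d2u + np.sin(x[1:-1])).max()      # exact u'' = -sin(x)

    errs = [error(n) for n in (50, 100, 200, 400)]
    rates = [np.log2(errs[i] / errs[i + 1]) for i in range(3)]
    print(rates)   # observed order: ~ [2.0, 2.0, 2.0]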
February 10 (Wednesday). Special joint Statistics/Stochastics
and Applied Math Seminar.
Note
Time 4pm and Place LCB 219.
Speaker: Lizhen Lin,
Department of Statistics and Data Sciences, University of Texas at Austin
Title: Nonparametric Statistical Inference for non-Euclidean data
Abstract: Over the last few decades, data represented in various non-conventional forms have
become increasingly prevalent. Typical examples include diffusion matrices in
diffusion tensor imaging (DTI) of neuroimaging, and various digital image data
arising in biology, medicine, machine vision and other fields of science and
engineering. One may also encounter data that are stored in the forms of
subspaces, orthonormal frames, surfaces, and networks. Statistical analysis of
such data requires rigorous formulation and characterization of the underlying
space, and inference is heavily dependent on the geometry of the space. For a
majority of the cases considered, the underlying spaces fall into the general
category of manifolds. This talk focuses on nonparametric inference on manifolds
and other non-Euclidean spaces. Appropriate notions of means (e.g., Frechet
means) and variation are defined, and inference is based on the asymptotic
distributions of their sample counterparts. In particular, we present omnibus
central limit theorems for Frechet means for inference, from which many of the
existing CLTs follow immediately. Applications are provided to some stratified
spaces of recent interest, and to the space of symmetric positive definite
matrices arising in diffusion tensor imaging. In addition to inference for i.i.d.
data, we also consider nonparametric regression problems where predictors or
responses lie on manifolds.
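As a small illustration of an intrinsic notion of mean (a toy sketch, not code from the talk): the Frechet mean on the sphere S^2 minimizes the sum of squared geodesic distances and can be computed by gradient descent with the exponential and log maps.

    import numpy as np

    def log_map(x, y):                    # tangent vector at x pointing to y
        c = np.clip(x @ y, -1.0, 1.0)
        theta = np.arccos(c)
        v = y - c * x
        nv = np.linalg.norm(v)
        return np.zeros(3) if nv < 1e-12 else theta * v / nv

    def exp_map(x, v):                    # follow the geodesic from x along v
        nv = np.linalg.norm(v)
        return x if nv < 1e-12 else np.cos(nv) * x + np.sin(nv) * v / nv

    rng = np.random.default_rng(2)
    pole = np.array([0.0, 0.0, 1.0])
    tangents = 0.3 * rng.standard_normal((200, 3))
    tangents[:, 2] = 0.0                  # tangent space at the pole is z = 0
    ys = np.array([exp_map(pole, v) for v in tangents])   # sample on the sphere

    x = ys[0]                             # intrinsic gradient descent
    for _ in range(100):
        x = exp_map(x, np.mean([log_map(x, y) for y in ys], axis=0))
    print(x)                              # close to the pole (0, 0, 1)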
February 12. Note Time is
3:55pm and Place LCB 225.
Speaker: Mikyoung Lim,
Department of Mathematical Sciences, KAIST
Title: Spectrum of the Neumann-Poincaré operator and plasmon resonance
Abstract: In this talk we consider spectral properties of the Neumann-Poincaré (NP)
operator on planar domains with corners. Recently there has been rapidly growing
interest in the spectral properties of the NP operator due to its relation
to plasmonics and cloaking by anomalous localized resonance: plasmon
resonance occurs at eigenvalues of the NP operator, while anomalous localized
resonance occurs at the accumulation point of eigenvalues. We
show that the rate of resonance at the continuous spectrum is different from
that at eigenvalues, and then derive a method to distinguish the continuous
spectrum from eigenvalues. We analyse the spectrum of two intersecting disks,
a domain with two corners, and show computational experiments which illustrate the
spectral properties of domains with corners. For the computations we use a
modification of the Nyström method which makes it possible to construct
high-order convergent discretizations of the NP operator on domains with
corners.
February 19. Special joint Statistics/Stochastics
and Applied Math Seminar.
Note
Time 3pm and Place LCB 219.
Speaker: Shuyang Bai, Department of Mathematics and Statistics, Boston University
Title: Self-normalized resampling of long-memory time series
Abstract: Statistical inference for long-memory time series faces two challenges due
to the special behavior of the sample sum: 1) a non-standard fluctuation rate, which
is typically unknown; 2) a family of non-Gaussian scaling limits can arise, and it is
difficult to determine statistically which one applies. We introduce a procedure which
combines two strategies: self-normalization and resampling. This combination
successfully bypasses the aforementioned challenges. To establish the validity of
the procedure, a key result involving bounding the maximal correlation between two
blocks of a long-memory sequence is derived. Furthermore, the same procedure
also works under short memory or heavy tails. It thus provides a unified treatment
for various different situations.
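A minimal sketch of the self-normalization ingredient in one common (Lobato-type) form, which may differ from the statistic in the talk: numerator and denominator are built from the same partial sums, so the unknown scale and fluctuation rate cancel and need not be estimated.

    import numpy as np

    def sn_stat(x):
        """Self-normalized statistic for testing mean zero: S_n divided by a
        bridge-type functional of the same partial sums S_k."""
        n = len(x)
        s = np.cumsum(x)
        bridge = s - np.arange(1, n + 1) / n * s[-1]
        return s[-1] / np.sqrt(np.mean(bridge**2))

    rng = np.random.default_rng(3)
    x = rng.standard_normal(10_000)
    # The unknown scale cancels: both calls agree up to rounding.
    print(sn_stat(x), sn_stat(100.0 * x))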
February 22. Special joint Statistics/Stochastics
and Applied Math Seminar. Note Room is LCB 219.
Speaker: Veniamin Morgenshtern, Department of Statistics, Stanford University
Title: Super-Resolution of Positive Sources
Abstract: The resolution of all microscopes is limited by diffraction. The
observed data is a convolution of the emitted signal with a low-pass
kernel, the point-spread function (PSF) of the microscope. The frequency
cut-off of the PSF is inversely proportional to the wavelength of light.
Hence, the features of the object that are smaller than the wavelength of
light are difficult to observe. In single-molecule microscopy the emitted
signal is a collection of point sources, produced by blinking molecules.
The goal is to recover the location of these sources with precision that is
much higher than the wavelength of light. This leads to the problem of
super-resolution of positive sources in the presence of noise. I will show
that the problem can be solved by using convex optimization in a stable
fashion. The stability of reconstruction depends on Rayleigh-regularity of
the signal support, i.e., on how many point sources can occur within an
interval of one wavelength. The stability estimate is complemented by a
converse result: the performance of the convex algorithm is nearly optimal.
I will also give a brief summary of an ongoing project, developed in
collaboration with the group of Prof. W.E. Moerner, where we use these
theoretical ideas to improve data processing in modern
microscopes.
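A minimal sketch of the problem setup using a generic convex solver (nonnegative least squares from scipy; the talk's precise convex program and its guarantees are its own): positive sources blurred by a low-pass PSF are recovered on a fine grid, with positivity as the only constraint. All parameters are illustrative.

    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(4)
    grid = np.linspace(0.0, 1.0, 201)
    x_true = np.zeros(201)
    x_true[60], x_true[90] = 1.0, 0.7          # two positive point sources

    psf = lambda d: np.exp(-0.5 * (d / 0.05) ** 2)   # low-pass blur kernel
    A = psf(grid[:, None] - grid[None, :])           # blur operator on the grid
    b = A @ x_true + 0.001 * rng.standard_normal(201)

    x_hat, _ = nnls(A, b)                      # convex: min ||Ax - b||, x >= 0
    print(np.flatnonzero(x_hat > 0.1))         # indices cluster near 60 and 90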
February 26. Joint Stochastics and Applied Math Seminar. Note
Time is 3-4 pm. Room LCB 219
Speaker: Harish Bhat, Department of Applied Mathematics, University of California, Merced
Title: Density tracking by quadrature for stochastic differential equations
Abstract: We consider the problem of computing the probability density
function (pdf) for a class of stochastic differential equations. In a
nutshell, our method consists of using quadrature to solve, at each time
step, the Chapman-Kolmogorov equation associated with a time-discretization
of the stochastic differential equation. After motivating the method and
comparing it with existing techniques, we will discuss convergence
results. Our main result is that our pdf converges in L^1 to both the
exact pdf of the Markov chain (with exponential convergence rate), and to
the exact pdf of the stochastic differential equation (with linear
convergence rate). We carry out numerical tests to show that the empirical
performance of the method agrees with these theoretical convergence results.
Finally, we discuss how the method can be used to construct Metropolis
algorithms for posterior inference of parameters in stochastic differential
equation models.
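A minimal sketch of the idea as described in the abstract (with illustrative, untuned grid and step sizes): the Euler-Maruyama discretization has a Gaussian one-step transition density, and the pdf is marched forward by applying the Chapman-Kolmogorov equation with quadrature on a grid.

    import numpy as np

    f = lambda x: -x                                # Ornstein-Uhlenbeck drift
    g = lambda x: np.sqrt(2.0) * np.ones_like(x)    # constant diffusion

    h = 0.01                                 # time step
    x = np.linspace(-6.0, 6.0, 601)          # quadrature grid
    k = x[1] - x[0]
    # G[i, j] = Euler-Maruyama transition density of X_{n+1} = x_i given X_n = x_j.
    mu, sig = x + f(x) * h, g(x) * np.sqrt(h)
    G = np.exp(-0.5 * ((x[:, None] - mu[None, :]) / sig[None, :]) ** 2) \
        / (np.sqrt(2.0 * np.pi) * sig[None, :])

    p = np.exp(-0.5 * ((x - 2.0) / 0.1) ** 2) / (0.1 * np.sqrt(2.0 * np.pi))  # X_0 ~ N(2, 0.01)
    for _ in range(int(5.0 / h)):            # march the pdf to t = 5
        p = k * (G @ p)                      # Chapman-Kolmogorov by quadrature

    # The exact stationary density of this SDE is N(0, 1).
    print(np.abs(p - np.exp(-x**2 / 2.0) / np.sqrt(2.0 * np.pi)).max())  # small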
February 29.
Speaker: Davit Harutyunyan, Department of Mathematics, University of Utah
Title: Recent progress in the shell buckling theory
Abstract: It has been known that the rigidity of a shell, in particular under compression, is
closely related to the optimal Korn's constant in the nonlinear first
Korn's inequality (geometric rigidity estimate) for $W^{1,2}$ fields under
the appropriate Dirichlet-type boundary conditions arising from the nature
of the compression. In their recent work Friesecke, James and Mueller
(2002, 2006) derived a geometric rigidity estimate for plates, which gave
rise to a derivation of a hierarchy of plate theories for different
scaling regimes of the elastic energy depending on the thickness of the
plate. FJM-type theories have been derived by Gamma-convergence and rely
on compactness arguments and, of course, the underlying nonlinear Korn's
inequality. While the rigidity of plates has been understood almost
completely, the rigidity, and in particular the buckling, of shells is almost
completely open. This is due to the lack of rigidity estimates and
compactness, as understood by Grabovsky and Harutyunyan (2014) for
cylindrical shells. In the case of shells, when there is enough rigidity,
it has been understood that the linear first Korn's inequality can
actually replace the nonlinear one, Grabovsky and Truskinovsky (2007). The
important mathematical question is: what makes shells more rigid than
plates, and how can one compare the rigidity of two different shells? In
this talk we give the answer to that question by classifying shells
according to the Gaussian curvature. We derive sharp first Korn's
inequalities for shells of zero, positive and negative Gaussian curvature.
It turns out that for zero Gaussian curvature the amount of rigidity is
$h^{3/2}$, for negative curvature it is $h^{4/3}$, and for positive curvature
it is $h$, i.e. the positive Gaussian curvature shell is the most rigid one.
Here $h$ is the shell thickness. All three exponents are completely new in
Korn's inequalities.
This is partially joint work with Yury Grabovsky.
March 21
Speaker: Dima Pesin,
Department of Physics, The University of Utah
Title: Berry phase effects in electronic transport
Abstract: I will briefly review the basics of Berry phase effects in electronic
transport, focusing on one- and two-dimensional systems. I will then
generalize to the three-dimensional case, and introduce what has become
known as the Weyl semimetal: the 3D analog of graphene. I will review the
transport and optical properties of these materials, focusing on non-local
voltages generated by the so-called chiral anomaly, as well as effects of
non-locality in their electrodynamics.
March 28
Speaker: Lajos Horvath,
Department of Mathematics, The University of Utah
Title: Statistical inference based on curves
Abstract: Functional data analysis is concerned with observations that are random functions defined on a set ${\mathcal T}$. For example, $X(t)$ could denote the observation of temperature (or pollution level) at a given location at time $t$. Stock prices and exchange rates are also modeled as continuous curves in economics and finance. Many such continuous-time phenomena are studied even though they are measured only at discrete time points. We provide examples where the observations can be modeled as curves. We discuss how inference on the mean of random curves is performed. There are two popular techniques for this task: principal component analysis and the fully functional approach. Principal component analysis transforms the data into a finite-dimensional vector (dimension reduction), while the fully functional approach uses the whole sample paths directly. We compare the advantages and drawbacks of both methods.
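A toy sketch of the two routes on simulated curves (my illustration, not from the talk): the fully functional estimate of the mean is the pointwise sample mean curve, while principal component analysis reduces each curve to a few scores on estimated eigenfunctions of the covariance operator.

    import numpy as np

    rng = np.random.default_rng(5)
    t = np.linspace(0.0, 1.0, 200)
    mean_curve = np.sin(2 * np.pi * t)
    modes = np.vstack([np.cos(2 * np.pi * t), np.cos(4 * np.pi * t)])
    scores = rng.standard_normal((100, 2)) * np.array([1.0, 0.3])
    X = mean_curve + scores @ modes + 0.05 * rng.standard_normal((100, 200))

    xbar = X.mean(axis=0)              # fully functional estimate of the mean
    C = np.cov(X, rowvar=False)        # discretized covariance operator
    evals, evecs = np.linalg.eigh(C)
    pc1 = evecs[:, -1]                 # leading principal component function
    # PC1 should align with the dominant mode cos(2*pi*t).
    print(abs(pc1 @ modes[0]) / np.linalg.norm(modes[0]))   # ~ 1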
March 30. Note
Time is 4-5 pm on Wednesday. Room LCB 219
Speaker: Marc Briane,
Institut de Recherche Mathematique de Rennes, France
Title: Loss of ellipticity by homogenization in 2D elasticity
Abstract: This work, in collaboration with G. Francfort, deals with the loss of ellipticity of
two-dimensional quadratic elastic functionals by homogenization. It was shown by Geymonat,
Muller, Triantafyllidis (1993) that, in the setting of linearized elasticity, a $\Gamma$-convergence
result holds for highly oscillating sequences of elastic energies whose functional
coercivity constant in $R^N$ is zero, while the corresponding coercivity constant on
the torus remains positive. We illustrate the range of applicability of that result by finding
sufficient conditions for such a situation to occur. We thereby justify the degenerate
laminate construction of Gutierrez (1999). We also demonstrate that the predicted loss
of strict strong ellipticity resulting from the Gutierrez construction is unique within a
"laminate-like" class of microstructures.
April 11 (Student Talk)
Speaker: Todd Harry Reeb,
Department of Mathematics, The University of Utah
Title: The consistency of Dirichlet graph partitions
Abstract: The Dirichlet partition problem is to partition a domain
$\Omega$ into $k$ connected subdomains so that the sum of their first
Dirichlet-Laplacian eigenvalues is minimized; a minimizer of this
objective is called a Dirichlet partition of $\Omega$. While the original
problem has its origins in shape optimization and the theory of
Bose-Einstein condensates, it has more recently inspired a graph
partitioning algorithm for use in image processing and data analysis. In
this talk, we'll discuss both the continuum and discrete problems, and
we'll give a consistency result for the discrete problem on random
geometric graphs approximating $\Omega$, which establishes the convergence of
graph partitions to an appropriate continuum partition.
This is joint work
with Braxton Osting.
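A minimal sketch of the discrete objective (toy graph and hand-picked partitions; the hard part addressed by the algorithm is minimizing over partitions): for a partition {S_1, ..., S_k} of the vertices, sum the smallest eigenvalue of the graph Laplacian with Dirichlet conditions, i.e., with rows and columns outside S_j deleted.

    import numpy as np

    def dirichlet_energy(L, parts):
        """Sum of first Dirichlet eigenvalues over the parts of a partition."""
        total = 0.0
        for S in parts:
            idx = np.array(sorted(S))
            LS = L[np.ix_(idx, idx)]          # Dirichlet restriction to S
            total += np.linalg.eigvalsh(LS)[0]
        return total

    n = 10                                    # path graph on 10 vertices
    A = np.zeros((n, n))
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = 1.0
    L = np.diag(A.sum(axis=1)) - A

    balanced = [set(range(5)), set(range(5, 10))]
    lopsided = [set(range(2)), set(range(2, 10))]
    # The balanced partition has the smaller objective.
    print(dirichlet_energy(L, balanced), dirichlet_energy(L, lopsided))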
Past lectures: Fall 2015, Spring 2015, Fall 2014, Spring 2014, Fall 2013, Spring 2013, Fall 2012, Spring 2012, Fall 2011, Spring 2011, Fall 2010, Spring 2010, Fall 2009, Spring 2009, Fall 2008, Spring 2008, Fall 2007, Spring 2007, Fall 2006, Spring 2006, Fall 2005, Spring 2005, Fall 2004, Spring 2004, Fall 2003, Spring 2003, Fall 2002, Spring 2002, Fall 2001, Spring 2001, Fall 2000, Spring 2000, Fall 1999, Spring 1999, Fall 1998, Spring 1998, Winter 1998, Fall 1997, Spring 1997, Winter 1997, Fall 1996, Spring 1996, Winter 1996, Fall 1995.