Abstract:
Bounded sequences in the Sobolev space of a non-compact manifold $M$ converge weakly in $L^p$ only after subtraction of countably many terms supported in neighborhoods of infinity. These terms are given by functions on the manifolds-at-infinity of $M$, which are obtained by a gluing procedure, and the sum of their respective Sobolev energies is dominated by the energy of the original sequence. Existence of minimizers in isoperimetric problems involving Sobolev norms therefore depends on a comparison between the Sobolev constants of $M$ and its manifolds-at-infinity. This is joint work with Leszek Skrzypczak.
Abstract:
In this introductory talk, we describe an extension of the probabilistic model of diffusion that was introduced by K. Itô using his method of 'stochastic differential equations'. In our model, which is based on ideas from stochastic partial differential equations, the evolving system of (solute) particles is modeled by a tempered distribution. Our method can also be viewed as an extension of the 'method of characteristics' in partial differential equations.
Abstract:
Since Lovász introduced the idea of associating complexes with graphs to solve problems in graph theory, the properties that these complexes enjoy have been a topic of study. In this talk, I will discuss the complexes associated with some classes of graphs, including complete graphs, Kneser graphs, and stable Kneser graphs. The connectedness of some of the complexes associated with these graphs provides good lower bounds for their chromatic numbers.
Abstract:
Geometric invariant theory (GIT) provides a construction of quotients for projective algebraic varieties equipped with an action of a reductive algebraic group. Since its foundation by Mumford, GIT has played an important role in the construction of moduli (or parameter) spaces. More recently, its methods have been successfully applied to problems in representation theory, operator theory, computer vision, and complexity theory. In the first part of the talk, I will give a very brief introduction to GIT, and in the second part I will talk about some geometric properties of torus and finite group quotients of flag varieties.
Abstract:
A branching random walk is a system of growing particles that starts from one particle at the origin, with each particle branching and moving independently of the others after unit time. In this talk, we shall discuss how the tails of the progeny and displacement distributions determine the extremal properties of branching random walks. In particular, we have been able to verify two related conjectures of Éric Brunet and Bernard Derrida in many cases that were open before. This talk is based on joint works with Ayan Bhattacharya (PhD thesis work at the Indian Statistical Institute (ISI), Kolkata, presently at Centrum Wiskunde & Informatica, Amsterdam), Rajat Subhra Hazra (ISI Kolkata), Souvik Ray (M.Stat. dissertation work at ISI Kolkata, presently at Stanford University), and Philippe Soulier (University of Paris Nanterre).
Abstract:
During the last four decades, it has been realized that some problems in number theory and, in particular, in Diophantine approximation, can be solved using techniques from the theory of homogeneous dynamics. We undertake this as the theme of the talk. We shall first give a broad overview of the subject and demonstrate how some Diophantine problems can be reformulated in terms of orbit properties under certain flows in the space of lattices, following the ideas of G. A. Margulis, S. G. Dani, D. Y. Kleinbock and G. A. Margulis, and E. Lindenstrauss. Subsequently, we will discuss joint work of the speaker with Victor Beresnevich, Anish Ghosh and Sanju Velani on inhomogeneous dual approximation on affine subspaces.
Abstract:
Randomized experiments have long been considered a gold standard for causal inference. The classical analysis of randomized experiments was developed under simplifying assumptions such as homogeneous treatment effects and no treatment interference, leading to theoretical guarantees about the estimators of causal effects. In modern settings, where experiments are commonly run on online networks (such as Facebook) or when studying naturally networked phenomena (such as vaccine efficacy), standard randomization schemes do not exhibit the same theoretical properties. To address these issues, we develop a randomization scheme that is able to take into account violations of the no-interference and no-homophily assumptions. Under this scheme, we demonstrate the existence of unbiased estimators with bounded variance. We also provide a simplified and much more computationally tractable randomized design which leads to asymptotically consistent estimators of direct treatment effects under both dense and sparse network regimes.
Abstract:
Step-stress life testing is a popular experimental strategy which ensures efficient estimation of parameters from lifetime distributions in a relatively short period of time. In our analysis, we have used two different stress models, viz., the Cumulative Exposure and Khamis-Higgins models, with respect to the one- and two-parameter exponential distributions, and the two-parameter Weibull distributions, respectively, under various censoring schemes. Both Bayesian and frequentist (wherever possible) approaches have been applied for the estimation of parameters and the construction of their confidence/credible intervals. Under the Bayesian analysis, estimation with an order restriction on the mean lifetimes of units has been considered as well. In yet another attempt to analyze lifetime observations, we construct optimal variable acceptance sampling plans, an instrument to test the quality of manufactured items with a crucial role in the acceptance or rejection of the lot. Assuming the one-parameter exponential lifetime distribution, in the presence of Type-I and Type-I hybrid censoring, we propose decision-theoretic plans based on a new estimator of the scale parameter. The optimal plans are obtained by minimizing the Bayes risk under a well-defined loss function.
Abstract:
The Cauchy dual subnormality problem asks when the Cauchy dual operator of an m-isometry is subnormal. This problem can be considered as the non-commutative analog of the fact that the reciprocal of a Bernstein function is completely monotone. We discuss this problem, its connection with a Hausdorff moment problem, its role in a problem posed by N. Salinas dating back to 1988, and some instances in which this problem can be solved. This is a joint work with A. Anand, Z. Jablonski, and J. Stochel.
Abstract:
One of the central questions in the representation theory of finite groups is to describe the irreducible characters of finite groups of Lie type, namely matrix groups over finite fields. The theory of character sheaves, initiated by Lusztig, is a geometric approach to this problem. I will give a brief overview of this theory.
Abstract:
Consider a non-parametric regression model y = μ(x) + ε, where y is the response variable, x is the scalar covariate, ε is the error, and μ is the unknown non-parametric regression function. For this model, we propose a new graphical device to check whether the v-th (v ≥ 1) derivative of the regression function μ is positive or not, which includes checking for monotonicity and convexity as special cases. An example is also presented that demonstrates the practical utility of the graphical device. Moreover, this graphical device is employed to formulate a class of test statistics to test the aforementioned assertion. The asymptotic distributions of the test statistics are derived, and the tests are implemented on various simulated and real data.
Abstract:
Mathematical modelling of complex ecological interactions is a central goal of research in mathematical ecology. A wide variety of mathematical models have been proposed, and the relevant dynamical analysis is performed to understand the complex interactions among various trophic levels. Existing models are modified as well to remove their ecological discrepancies. The main objective of this talk is to provide an overview of current research trends in mathematical ecology and of how dynamical complexities among interacting populations can be captured and analyzed with the help of mathematical models involving ODEs, PDEs, DDEs, SDEs, and their combinations.
Abstract:
In this talk, we study the configuration of systoles (minimum length geodesics) on closed hyperbolic surfaces. The set of all systoles forms a graph on the surface, in fact a so-called fat graph, which we call the systolic graph. We study which fat graphs are systolic graphs for some surface; we call these admissible. There is a natural necessary condition on such graphs, which we call combinatorial admissibility. Our first result characterises admissibility. It follows that a sub-graph of an admissible graph is admissible. Our second major result is that there are infinitely many minimal non-admissible fat graphs (in contrast to the classical result that there are only two minimal non-planar graphs).
Abstract:
Lag windows are commonly used in the time series, steady state simulation, and Markov chain Monte Carlo (MCMC) literature to estimate the long range variances of ergodic averages. We propose a new lugsail lag window specifically designed for improved finite sample performance. We use this lag window for batch means and spectral variance estimators in MCMC simulations to obtain strongly consistent estimators that are biased from above in finite samples and asymptotically unbiased. This quality is particularly useful when calculating effective sample size and using sequential stopping rules, where they help avoid premature termination. Further, we calculate the bias and variance of lugsail estimators and demonstrate that there is little loss compared to other estimators. We also show mean square consistency of these estimators under weak conditions. Our results hold for processes that satisfy a strong invariance principle, providing a wide range of practical applications of the lag windows outside of MCMC. Finally, we study the finite sample properties of lugsail estimators in various examples.
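As an illustrative sketch (not code from the work itself), a batch means estimator and a lugsail-style combination of two batch sizes might look as follows; the AR(1) test chain, the parameters r = 3 and c = 1/2, and the batch size b ≈ √n are assumptions chosen here for concreteness:

```python
import numpy as np

def batch_means_var(chain, b):
    """Batch means estimate of the long-run variance of a scalar chain."""
    n = (len(chain) // b) * b
    means = chain[:n].reshape(-1, b).mean(axis=1)
    return b * means.var(ddof=1)

def lugsail_var(chain, b, r=3, c=0.5):
    """Lugsail combination of two batch sizes, biased upward in finite samples."""
    return (batch_means_var(chain, b)
            - c * batch_means_var(chain, max(b // r, 1))) / (1 - c)

# AR(1) chain with rho = 0.5; its true long-run variance is 1 / (1 - rho)^2 = 4.
rng = np.random.default_rng(3)
rho, n = 0.5, 200_000
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = rho * x[t - 1] + rng.standard_normal()
b = int(n ** 0.5)
estimate = lugsail_var(x, b)
```

Because the smaller-batch estimator underestimates the long-run variance more, subtracting a fraction of it pushes the combined estimate upward, matching the "biased from above in finite samples" property described above.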
Abstract:
Intractable integrals appear in many areas in statistics; for example, generalized linear mixed
models and Bayesian statistics. Inference in these models relies heavily on the estimation of said integrals. In this talk, I present Monte Carlo methods for estimating intractable integrals. I introduce the accept-reject sampler and demonstrate its use on an example. Although useful, the accept-reject sampler is not effective for estimating high-dimensional integrals. To this end, I present Markov Chain Monte Carlo (MCMC) methods, like the Metropolis-Hastings sampler, which allow estimation of high-dimensional integrals. I discuss important theoretical properties of MCMC methods and some statistical challenges in its practical implementation.
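A minimal sketch of the accept-reject sampler described above; the Beta(2, 2) target and uniform proposal are illustrative choices, not taken from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

def accept_reject(target_pdf, proposal_sampler, proposal_pdf, M, n):
    """Draw n samples from target_pdf using an envelope M * proposal_pdf."""
    samples = []
    while len(samples) < n:
        x = proposal_sampler()
        # Accept x with probability target(x) / (M * proposal(x)) <= 1.
        if rng.uniform() <= target_pdf(x) / (M * proposal_pdf(x)):
            samples.append(x)
    return np.array(samples)

# Target: Beta(2, 2) density on (0, 1); proposal: Uniform(0, 1).
target = lambda x: 6.0 * x * (1.0 - x)
M = 1.5  # sup of target/proposal, attained at x = 1/2
draws = accept_reject(target, rng.uniform, lambda x: 1.0, M, 20_000)
```

On average only one in M proposals is accepted, and a valid envelope constant M typically grows quickly with dimension, which is precisely why the method struggles with high-dimensional integrals and motivates MCMC.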
Abstract:
We talk about the mod 2 cohomology ring of the Grassmannian $\widetilde{G}_{n,3}$ of oriented 3-planes in $\mathbb{R}^n$. We first state the previously known results. Then we discuss the degrees of the indecomposable elements in the cohomology ring. We have an almost complete description of the cohomology ring. This description provides lower and upper bounds on the cup length of $\widetilde{G}_{n,3}$. This talk is based on my work with Somnath Basu.
Abstract:
How can we determine whether a mean-square continuous stochastic process is finite-dimensional, and if so, what its precise dimension is? And how can we do so at a given level of confidence? This question is central to many methods for functional data, which require low-dimensional representations, whether by functional PCA or other methods. The difficulty is that the determination is to be made on the basis of iid replications of the process observed discretely and with measurement error contamination. This adds a ridge to the empirical covariance, obfuscating the underlying dimension. We build a matrix-completion inspired test statistic that circumvents this issue by measuring the best possible least-squares fit of the empirical covariance's off-diagonal elements, optimised over covariances of given finite rank. For a fixed grid of sufficient size, we determine the statistic's asymptotic null distribution as the number of replications grows. We then use it to construct a bootstrap implementation of a stepwise testing procedure controlling the family-wise error rate corresponding to the collection of hypotheses formalising the question at hand. Under minimal regularity assumptions we prove that the procedure is consistent and that its bootstrap implementation is valid. The procedure involves no tuning parameters or pre-smoothing, is indifferent to the homoskedasticity or lack of it in the measurement errors, and does not assume a low-noise regime. An extensive study of the procedure's finite-sample accuracy demonstrates remarkable performance on both real and simulated data.
This talk is based on an ongoing work with Victor Panaretos (EPFL, Switzerland).
Abstract:
Virtual element methods (VEM) are a recently developed technology, regarded as a generalization of FEM, with firm mathematical foundations, simplicity of implementation, and efficiency and accuracy in computations. Unlike FEM, which allows only elements such as triangles and quadrilaterals, VEM permits very general polygonal elements, including smoothed Voronoi, random Voronoi, distorted, and nonconvex polygons. Basis functions in VEM are constructed virtually and can be computed from the information provided by the degrees of freedom (DoF) associated with the VEM space.
Moreover, the basis functions are solutions of certain PDEs which determine the dimension of the VEM space. Furthermore, we have two projection operators on the VEM space: the orthogonal $L^2$ projection operator $\Pi^0_{k,K}$ and the elliptic projection operator $\Pi^\nabla_{k,K}$. Both operators are defined locally, element-wise on $K \in \mathcal{T}_h$, where $\mathcal{T}_h$ and $K$ denote the mesh partition and a polygon respectively, and they project basis functions onto a computable polynomial subspace sitting inside the VEM space. Basically, in the abstract VEM formulation, we split the bilinear form into two parts: a polynomial part and a non-polynomial (stabilization) part. The polynomial part can be computed directly from the degrees of freedom, and the non-polynomial part can be approximated from the DoF while ensuring the same scaling as the polynomial part. However, the above framework does not work for non-linear problems. The primary reason is that a term involving a nonlinear function, e.g. $(f(u)\nabla u \cdot \nabla v)_K$, cannot be split into polynomial and non-polynomial parts; hence the discrete form is not computable from the DoF. In view of this difficulty, we introduce the idea of employing the orthogonal projection operator $\Pi^0_{k,K}$ to discretize the nonlinear term. Exploiting this technique, we treat semi-linear parabolic and hyperbolic problems, ensuring optimal order of convergence in the $L^2$ and $H^1$ norms. We assert that this technique can also be employed to discretize general nonlinear problems.
Abstract:
Main objects of study in model theory are definable subsets of structures. For example, the definable sets in the field of complex numbers are precisely the (boolean combinations of) varieties. The (model-theoretic) Grothendieck ring of a structure aims to classify definable sets up to definable bijections. The Grothendieck ring of varieties is central to the study of motivic integration, but its computation is a wide open problem in the area. The problem of classification is simplified to a large extent if the theory of the structure admits some form of elimination of quantifiers, i.e., the complexity of the formulas describing the definable sets is in control.
In this ``double talk'', we will begin by describing the construction of the Grothendieck ring and by giving a survey of the known Grothendieck rings. Then we will present the results and techniques used to compute the Grothendieck ring in the case of dense linear orders (joint with A. Jain) and of atomless boolean algebras. Along the way, we will also state the ``implicit function theorem'' for boolean varieties.
Abstract:
We begin by presenting a spectral characterization theorem that settles Chevreau's problem of characterizing the class of absolutely norming operators --- operators that attain their norm on every closed subspace. We next extend the concept of absolutely norming operators to several particular (symmetric) norms and characterize these sets. In particular, we single out three (families of) norms on B(H,K): the ``Ky Fan k-norm(s)", ``the weighted Ky Fan (\pi, k)-norm(s)", and the ``(p, k)-singular norm(s)", and thereafter define and characterize the set of absolutely norming operators with respect to each of these three norms.
We then restrict our attention to the algebra B(H) of operators on a separable infinite-dimensional Hilbert space H and use the theory of symmetrically normed ideals to extend the concept of norming and absolutely norming from the usual operator norm to arbitrary symmetric norms on B(H). In addition, we exhibit the analysis of these concepts and present a constructive method to produce symmetric norm(s) on B(H) with respect to each of which the identity operator does not attain its norm.
Finally, we introduce the notion of "universally symmetric norming operators" and "universally absolutely symmetric norming operators" and characterize these classes. These refer to the operators that are, respectively, norming and absolutely norming, with respect to every symmetric norm on B(H).
In effect, we show that an operator in B(H) is universally symmetric norming if and only if it is universally absolutely symmetric norming, which in turn is possible if and only if it is compact. In particular, this result provides an alternative characterization theorem for compact operators on a separable Hilbert space.
Abstract:
In this talk, we address the question of identifying the commutant, and establishing the reflexivity, of the multiplication d-tuple M_z on a reproducing kernel Hilbert space H of E-valued holomorphic functions on Ω, where E is a separable Hilbert space and Ω is a bounded domain in C^d admitting bounded approximation by polynomials. In case E is a finite-dimensional cyclic subspace for M_z, under some natural conditions on the B(E)-valued kernel associated with H, the commutant of M_z is shown to be the algebra H^∞_{B(E)}(Ω) of bounded holomorphic B(E)-valued functions on Ω, provided M_z satisfies the matrix-valued von Neumann inequality. Also, we show that a multiplication d-tuple M_z on H satisfying the von Neumann inequality is reflexive. The talk is based on joint work with Sameer Chavan and Shailesh Trivedi.
Abstract:
Although we can trace back the study of epidemics to the work of Daniel Bernoulli nearly two and a half centuries ago, the fact remains that key modeling advances followed the work of three individuals (two physicians) involved in the amelioration of the impact of disease at the population level a century or so ago: Sir Ronald Ross (1911) and Kermack and McKendrick (1927). Ross' interests were in the transmission dynamics and control of malaria, while Kermack and McKendrick's work was directly tied to the study of the dynamics of communicable diseases. In this presentation, I will deal primarily with the study of the dynamics of influenza type A, a communicable disease that does not present a “fixed” target. The study of the short-term dynamics of influenza, single epidemic outbreaks, makes use of extensions/modifications of the models first introduced by Kermack and McKendrick, while the study of its long-term dynamics requires the introduction of modeling modifications that account for the continuous emergence of novel influenza variants: strains or subtypes. Here, I will briefly review recent work on the dynamics of influenza A/H1N1, making use of single outbreak models that account for the movement of people in the transmission process over various regions within Mexico. This research has been carried out in collaboration with a large number of researchers over a couple of decades.
From a theoretical perspective, I will observe that over the past 100 years modeling epidemic processes have been based primarily on the use of the mass action law. What have we learned from this approach and what are the limitations? In this lecture, I will revisit old and “new” modeling approaches in the context of the dynamics of vector borne, sexually-transmitted and communicable diseases.
Abstract:
Let F(x, y) ∈ Z[x, y] be homogeneous and irreducible of degree 3. Consider the equation F(x, y) = h for some fixed nonzero integer h. In 1909, Thue proved that such an equation has only finitely many integral solutions. These eponymous equations have several applications. Much effort has been made to obtain upper bounds for the number of solutions of Thue equations which are independent of the size of the coefficients of F. Siegel conjectured that the number of solutions could be bounded only in terms of h and the number of non-zero coefficients of F. This was settled in the affirmative by Mueller and Schmidt. However, their bound does not have the desired shape. In this talk, we present some instances when their result can be improved. This is joint work with N. Saradha.
Abstract:
Suppose p(t) = X_0 + X_1 t + ... + X_n t^n is a polynomial where each X_i is randomly chosen to be +1 or -1. How many real roots does the polynomial have, on average? It turns out that the answer is of order \log(n). More generally, given a subset of the complex plane, how many roots lie in the given subset (on average, say)? It turns out that the roots are almost all close to the unit circle, and distributed roughly uniformly in angle. We survey basic results answering these questions. The talk is aimed to be accessible to advanced undergraduate and graduate students.
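A quick numerical check of the order-\log(n) answer, using the companion-matrix root finder in NumPy (the number of trials and the imaginary-part tolerance are arbitrary choices made here):

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_real_roots(n, trials=200, tol=1e-6):
    """Average number of real roots of p(t) = X_0 + ... + X_n t^n, X_i = +/-1."""
    total = 0
    for _ in range(trials):
        coeffs = rng.choice([-1.0, 1.0], size=n + 1)
        # np.roots returns all complex roots; count the (numerically) real ones.
        total += int(np.sum(np.abs(np.roots(coeffs).imag) < tol))
    return total / trials

# Kac-type asymptotics predict roughly (2 / pi) * log(n) real roots on average,
# which is about 2.9 for n = 100.
avg = mean_real_roots(100)
```

Plotting all the roots from such a simulation also makes the concentration near the unit circle visually apparent.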
Abstract:
Penalized regression techniques are widely used in practice to perform variable selection (also known as model selection). Variable selection is important to drop the covariates from the regression model which are irrelevant in explaining the response variable. When the number of covariates is large compared to the sample size, variable selection is indeed the most important requirement of the penalized method. Fan and Li (2001) introduced the oracle property as a measure of how good a penalized method is. A penalized method is said to have the oracle property provided it works as well as if the correct sub-model were known (like the oracle who knows everything beforehand). We categorize different penalized regression methods with respect to the oracle property and show that the bootstrap works for each category. Moreover, we show that in most situations, the inference based on the bootstrap is much more accurate than the oracle based inference.
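To make the variable-selection effect concrete, here is a minimal lasso fit via cyclic coordinate descent (a sketch only; the design, penalty level, and true coefficients are illustrative assumptions, and the plain lasso stands in for the family of penalized methods discussed):

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso via cyclic coordinate descent: min 0.5*||y - Xb||^2 + n*lam*||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ beta + X[:, j] * beta[j]  # residual ignoring coordinate j
            beta[j] = soft_threshold(X[:, j] @ r_j, n * lam) / col_sq[j]
    return beta

# Only the first two of ten covariates are relevant; the penalty zeroes out the rest.
rng = np.random.default_rng(4)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta_true = np.array([3.0, -2.0] + [0.0] * 8)
y = X @ beta_true + rng.standard_normal(n)
beta_hat = lasso_cd(X, y, lam=0.3)
```

The soft-thresholding step is what sets small coefficients exactly to zero, which is the mechanism behind variable selection in this class of methods.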
Abstract:
The Adaptive Lasso (Alasso) was proposed by Zou (2006) as a modification of the Lasso for the purpose of simultaneous variable selection and estimation of the parameters in a linear regression model. Zou (2006) established that the Alasso estimator is variable-selection consistent as well as asymptotically Normal in the indices corresponding to the nonzero regression coefficients in certain fixed-dimensional settings. Minnier et al. (2011) proposed a perturbation bootstrap method and established its distributional consistency for the Alasso estimator in the fixed-dimensional setting. In this paper, however, we show that this (naive) perturbation bootstrap fails to achieve the desired second order correctness [i.e. with uniform error rate o(n^{-1/2})] in approximating the distribution of the Alasso estimator. We propose a modification to the perturbation bootstrap objective function and show that a suitably studentized version of our modified perturbation bootstrap Alasso estimator achieves second-order correctness even when the dimension of the model is allowed to grow to infinity with the sample size. As a consequence, inferences based on the modified perturbation bootstrap are more accurate than inferences based on the oracle Normal approximation. Simulation results also justify our method in finite samples.
Abstract:
Let g be a simple finite dimensional Lie algebra. Let A be the Laurent polynomial algebra in n + 1 commuting variables. Then g ⊗ A is naturally a Lie algebra. We now consider the universal central extension g ⊗ A ⊕ ΩA/dA. Then we add derivations of A, Der(A), and consider τ = g ⊗ A ⊕ ΩA/dA ⊕ Der(A); τ is called the full toroidal Lie algebra. In this talk, we will explain the classification of irreducible integrable modules for the full toroidal Lie algebra. In the first half of the lecture, we will recall some general facts about toroidal Lie algebras, and then we will go into the technical part.
Abstract:
We briefly discuss the concept of quantum symmetry and mention how it fits into the realm of noncommutative geometry. We take a particular noncommutative topological space coming from a connected directed graph, called a graph C*-algebra, and introduce a notion of quantum symmetry of such a noncommutative space. A few concrete examples of such quantum symmetries will also be discussed.
Abstract:
Let Gq be the q-deformation of a simply connected simple compact Lie group G of type A, C or D and Oq(G) be the algebra of regular functions on Gq. In this talk, we show that the Gelfand-Kirillov dimension of Oq(G) is equal to the dimension of the underlying real manifold G. If time allows then we will discuss some applications of this result.
Abstract:
Cryptographic protocols base their security on the hardness of mathematical problems. The Discrete Logarithm Problem (DLP) is one of them. It is known to be computationally hard in the groups of cryptographic interest. Most important among these are the multiplicative group of a finite field, the group of points on an elliptic curve, and the group of divisor classes of degree 0 divisors (the Jacobian) of a hyperelliptic curve.
In this talk, I will discuss some of the index calculus algorithms for solving the discrete logarithm problem on these groups. More specifically, I will discuss the tower number field sieve algorithm (TNFS) for solving the discrete logarithm problem in the medium to large characteristic finite fields.
Abstract:
Multivariate two-sample testing is a very classical problem in statistics, and several methods are available for it. But, in the current era of big data and high-dimensional data, most of the existing methods fail to perform well, and they cannot even be used when the dimension of the data exceeds the sample size. In this talk, I will propose and investigate some methods based on inter-point distances, which can be conveniently used for data of arbitrary dimensions. I will discuss the merits and demerits of these methods using theoretical as well as numerical results.
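One concrete inter-point-distance method is the energy statistic calibrated by permutation; the sketch below is illustrative only, with the dimension d = 200 exceeding the per-group sample size n = 40 and a mean-shift alternative chosen for demonstration:

```python
import numpy as np

rng = np.random.default_rng(2)

def energy_statistic(x, y):
    """Energy distance between samples x (n x d) and y (m x d)."""
    def mean_dist(a, b):
        diff = a[:, None, :] - b[None, :, :]
        return np.sqrt((diff ** 2).sum(-1)).mean()
    return 2.0 * mean_dist(x, y) - mean_dist(x, x) - mean_dist(y, y)

def permutation_pvalue(x, y, n_perm=99):
    """Permutation p-value for the two-sample test based on the energy statistic."""
    obs = energy_statistic(x, y)
    z, n = np.vstack([x, y]), len(x)
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(z))
        count += energy_statistic(z[idx[:n]], z[idx[n:]]) >= obs
    return (count + 1) / (n_perm + 1)

# Dimension exceeds the sample size: d = 200, n = 40 per group.
d = 200
x = rng.standard_normal((40, d))
y = rng.standard_normal((40, d)) + 0.3  # shifted mean in every coordinate
p_value = permutation_pvalue(x, y)
```

Note that nothing here requires d < n: the test depends on the data only through pairwise distances, which is what makes such methods usable for data of arbitrary dimension.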
Abstract:
Let F be a non-Archimedean local field with ring of integers O and a finite residue field k of odd characteristic. In contrast to the well-understood representation theory of the finite groups of Lie type GL_n(k) or of the locally compact groups GL_n(F), representations of the groups GL_n(O) are considerably less understood. For example, the uniqueness of the Whittaker model is well known for the complex representations of both GL_n(k) and GL_n(F) but is not known for GL_n(O).
Abstract:
In this talk we will see two possible generalizations, due to A. Connes and Frohlich et al., of the de-Rham calculus on manifolds to the noncommutative geometric context. Computations of both these will be highlighted for a class of examples provided by the quantum double suspension, which helps to compare these two generalizations in a very precise sense.
Abstract:
In this presentation, we analyze a semi-discrete finite difference scheme for stochastic balance laws driven by multiplicative Lévy noise. Using a BV estimate on the approximate solutions generated by the finite difference scheme, together with the Young measure technique in the stochastic setting, we show that the approximate solutions converge to the unique BV entropy solution of the underlying problem. Moreover, we show that the expected value of the L^1-difference between the approximate solutions and the unique entropy solution converges at a rate O(√∆x), where ∆x is the spatial mesh size.
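The flavour of such a rate result can be seen already in the deterministic, noise-free case; below is a first-order upwind scheme for linear advection, where the O(∆x) rate for smooth data stands in for the O(√∆x) rate of the stochastic balance law (the equation and all parameters are illustrative assumptions):

```python
import numpy as np

def upwind_advection(u0, a, dx, dt, n_steps):
    """Explicit upwind scheme for u_t + a u_x = 0 (a > 0) with periodic boundary."""
    u = u0.copy()
    lam = a * dt / dx
    assert lam <= 1.0, "CFL condition"
    for _ in range(n_steps):
        u = u - lam * (u - np.roll(u, 1))
    return u

def l1_error(nx):
    """L^1 error at time t = 0.5 against the exact translated solution."""
    dx = 1.0 / nx
    x = np.arange(nx) * dx
    dt = 0.5 * dx
    n_steps = int(round(0.5 / dt))
    u = upwind_advection(np.sin(2 * np.pi * x), 1.0, dx, dt, n_steps)
    exact = np.sin(2 * np.pi * (x - n_steps * dt))
    return dx * np.abs(u - exact).sum()

# Halving the mesh size roughly halves the error (first-order convergence).
e1, e2 = l1_error(100), l1_error(200)
```

In the stochastic BV setting the observable quantity is instead the expected L^1-difference, and the entropy-solution framework degrades the rate from O(∆x) to O(√∆x).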
Abstract:
We will give an overview of some techniques involved in computing the mod p reduction of p-adic Galois representations associated to certain cusp forms of GL(2).
Abstract:
We study the congruences between Galois representations and their base-change along a p-adic Lie extension. We formulate a Main conjecture arising out of these Galois representations which explains how the congruences are related to values of L-functions. This formulation requires the Galois representations to satisfy some conditions and we provide some examples where the conditions can be verified.
Abstract:
The Bellman and Isaacs equations appear as the dynamic programming equations for stochastic control and differential games. This talk is concerned with error estimates for monotone numerical approximations of the viscosity solutions of such equations. I will focus on both the local and the non-local scenario. I will discuss the most general results for equations of order less than one (non-local) and for first order (local) equations. For second order equations, the convex and non-convex cases (partial results!) will be treated separately. I will explain why the methods are quite different for the convex and non-convex cases. Toward the end, I will discuss recent developments on error estimates for non-local Isaacs equations of order greater than one.
Abstract:
Transport phenomenon is of fundamental as well as practical importance in a wide spectrum of problems of different length and time scales, viz., enhanced oil recovery (EOR), carbon-capture and storage (CCS), contaminant transport in subsurface aquifers, and chromatographic separation. These transport processes in porous media feature different hydrodynamic instabilities [1, 2]. Viscous
Abstract:
This talk will present a class of tests for fitting a parametric model to the regression function in the presence of Berkson measurement error in the covariates, without specifying the measurement error density but when validation data are available. The availability of validation data makes it possible to estimate the calibrated regression function nonparametrically. The proposed tests are based on a class of minimized integrated square distances between a nonparametric estimate of the calibrated regression function and the parametric null model being fitted. The asymptotic distributions of these tests under the null hypothesis and against certain alternatives are established. Surprisingly, these asymptotic distributions are the same as in the case of known measurement error density. In comparison, the asymptotic distributions of the corresponding minimum distance estimators of the null model parameters are affected by the estimation of the calibrated regression function. A simulation study shows desirable performance of a member of the proposed class of estimators and tests. This is joint work with Pei Geng.
Abstract:
Residual empirical processes are known to play a central role in the development of statistical inference in numerous additive models. This talk will discuss some history and some recent advances in the asymptotic uniform linearity of parametric and nonparametric residual empirical processes. We shall also discuss their usefulness in developing asymptotically distribution-free goodness-of-fit tests for fitting an error distribution function in nonparametric ARCH(1) models. Part of this talk is based on joint work with Xiaoqing Zhu.
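For readers unfamiliar with the objects involved, here is a toy computation of a Kolmogorov-Smirnov-type functional of a residual empirical process. A fitted AR(1) model stands in for the nonparametric ARCH(1) setting of the talk, and the plain statistic below is not the asymptotically distribution-free version discussed there; all choices are illustrative.

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(t):
    """Standard normal CDF, applied entrywise (stdlib erf, no SciPy)."""
    return np.array([0.5 * (1.0 + erf(v / sqrt(2.0))) for v in t])

def residual_ks(x, cdf):
    """KS-type functional of the residual empirical process:
    sup_t |F_n(t) - F(t)| over the standardized residuals x."""
    x = np.sort(x)
    n = len(x)
    F = cdf(x)
    # empirical CDF takes values i/n and (i-1)/n at each order statistic
    d_plus = np.max(np.arange(1, n + 1) / n - F)
    d_minus = np.max(F - np.arange(0, n) / n)
    return max(d_plus, d_minus)

# Simulate an AR(1) series, fit it by least squares, and test the
# normality of the standardized residuals.
rng = np.random.default_rng(3)
n = 400
eps = rng.normal(size=n)
y = np.empty(n + 1)
y[0] = 0.0
for t in range(n):
    y[t + 1] = 0.5 * y[t] + eps[t]
phi_hat = np.dot(y[1:], y[:-1]) / np.dot(y[:-1], y[:-1])
res = y[1:] - phi_hat * y[:-1]
res = (res - res.mean()) / res.std()
D = residual_ks(res, norm_cdf)
```

The subtlety the talk addresses is precisely that plugging in the estimated phi_hat changes the limiting law of such statistics, which is why distribution-free constructions are valuable.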
Abstract: Passive transport models are equations of advection-diffusion type. In most of the applications involving passive transport, the advective fields are of greater magnitude compared to molecular diffusion.
This talk presents a theory developed jointly with Thomas Holding (Imperial) and Jeffrey Rauch (Michigan) to address these strong advection problems. Loosely speaking, our strategy is to recast the advection-diffusion equation in moving coordinates dictated by the flow associated with the advective field. Crucial to our analysis are the introduction of a fast time variable and of some new notions of weak convergence along flows in L^p spaces. We also use ideas from the theory of “homogenization structures” developed by Gabriel Nguetseng.
Our asymptotic results show the following dichotomy:
- If the Jacobian matrix associated with the flow satisfies certain structural conditions (loosely speaking, boundedness in the fast time variable), then the strong advection limit is a non-degenerate diffusion when seen along flows.
- On the other hand, when the Jacobian matrix associated with the flow fails to satisfy the aforementioned structural conditions, the strong advection limit is a parabolic problem with a constraint. Here we show the appearance of an initial layer where there is enhanced dissipation along flows.
Our results have close links to
- the Freidlin-Wentzell theory on perturbations of dynamical systems;
- the theory of relaxation-enhancing Lipschitz flows.
This talk will illustrate the theoretical results via various interesting examples. We address some well-known advective fields such as the Euclidean motions, the Taylor-Green cellular flows, the cat’s eye flows and some special class of the Arnold-Beltrami-Childress (ABC) flows. We will also comment on certain examples of hyperbolic or Anosov flows. Some of the results to be presented in this talk can be found in the following
Publication: T. Holding, H. Hutridurga, J. Rauch, Convergence along mean flows, SIAM J. Math. Anal., Volume 49, Issue 1, pp. 222–271 (2017).
Abstract:
In the problem of selecting a linear model to approximate the true unknown regression model, some necessary and/or sufficient conditions will be discussed for the asymptotic validity of various model selection procedures, including Akaike’s AIC, Mallows’ Cp, Schwarz’s BIC, generalized AIC, etc. We shall see that these selection procedures can be classified into three distinct classes according to their asymptotic behaviour. Under some fairly weak conditions, the selection procedures in one class are asymptotically valid if there exist fixed-dimensional correct models, while the selection procedures in another class are asymptotically valid if no fixed-dimensional correct model exists. The procedures in the third class are compromises between the procedures in the first two classes.
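The distinction between the criteria can be felt in a small simulation. The sketch below (the data-generating model, the polynomial candidates, and the Gaussian-likelihood criterion formulas are all illustrative assumptions, not from the talk) compares AIC and BIC scores when a fixed-dimensional correct model exists:

```python
import numpy as np

def aic_bic(y, X):
    """Gaussian AIC and BIC (up to an additive constant) for a linear
    model y ~ X. Illustrative helper."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    ll = n * np.log(rss / n)               # -2 x maximized log-likelihood, up to a constant
    return ll + 2 * p, ll + np.log(n) * p  # AIC penalizes 2p, BIC penalizes p log(n)

rng = np.random.default_rng(1)
n = 500
x = rng.uniform(-1.0, 1.0, n)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5, n)  # a fixed-dimensional correct model exists (p = 2)
scores = []
for p in range(1, 6):                         # candidates: polynomials of degree p - 1
    X = np.vander(x, p, increasing=True)
    scores.append(aic_bic(y, X))
best_bic = 1 + int(np.argmin([s[1] for s in scores]))
```

In this regime the heavier log(n) penalty of BIC-type procedures concentrates on the correct dimension, while AIC-type procedures retain a positive probability of overfitting, which is one face of the three-class dichotomy described above.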
Abstract: We consider the problem of computationally-efficient prediction from high-dimensional and highly correlated predictors in challenging settings where accurate variable selection is effectively impossible. Direct application of penalization or Bayesian methods implemented with Markov chain Monte Carlo can be computationally daunting and unstable. Hence, some type of dimensionality reduction prior to statistical analysis is in order. Common solutions include application of screening algorithms to reduce the regressors, or dimension reduction using projections of the design matrix. The former approach can be highly sensitive to threshold choice in finite samples, while the latter can have poor performance in very high-dimensional settings. We propose a Targeted Random Projection (TARP) approach that combines positive aspects of both strategies to boost performance. In particular, we propose to use information from independent screening to order the inclusion probabilities of the features in the projection matrix used for dimension reduction, leading to data-informed sparsity. Theoretical results on the predictive accuracy of TARP are discussed in detail, along with the rate of computational complexity. Simulated-data examples and real-data applications are given to illustrate gains relative to a variety of competitors.
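A minimal sketch of the screen-then-project idea may help fix intuition. The marginal screening statistic and the inclusion rule below are hypothetical choices for illustration; the paper's TARP construction differs in detail.

```python
import numpy as np

def tarp_projection(X, y, k, rng):
    """Sketch of a Targeted Random Projection: marginal screening scores
    order the inclusion probabilities of the features in a sparse random
    projection matrix. The inclusion rule is a hypothetical choice,
    not the authors' exact algorithm."""
    n, p = X.shape
    score = np.abs(X.T @ (y - y.mean()))   # marginal screening statistic
    probs = score / score.sum()            # data-informed inclusion probabilities
    R = np.zeros((p, k))
    for j in range(k):
        include = rng.random(p) < np.minimum(1.0, p * probs / k)
        signs = rng.choice([-1.0, 1.0], size=p)
        R[:, j] = include * signs          # sparse, sign-randomized column
    return X @ R                           # reduced design matrix, n x k

rng = np.random.default_rng(2)
n, p, k = 100, 1000, 10
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = 3.0                             # only 5 of the 1000 predictors matter
y = X @ beta + rng.normal(size=n)
Z = tarp_projection(X, y, k, rng)          # downstream: regress y on Z instead of X
```

Because features with large screening scores enter the projection with higher probability, the k-dimensional summary Z retains the predictive signal without committing to a hard screening threshold.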