10:30 |
On the quasi-uniformity properties of quasi-Monte Carlo lattice point sets and sequences
Josef Dick, University of New South Wales
A point set is called quasi-uniform if it has optimal covering radius and optimal separation. Such point sets are useful, for instance, in radial basis function approximation and experimental design. In this talk we discuss the quasi-uniformity properties of some well-known lattice point sets, such as the Fibonacci lattice, rank-1 lattice point sets, Frolov point sets and Kronecker sequences.
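For orientation, a brief sketch of the standard notions (in notation introduced here, not taken from the abstract): for a point set $P = \{x_1, \dots, x_N\} \subset [0,1)^d$, the covering radius and the separation distance are
$$h(P) = \sup_{y \in [0,1)^d} \min_{1 \le i \le N} \|y - x_i\|, \qquad q(P) = \min_{i \neq j} \|x_i - x_j\|,$$
and a family of point sets is commonly called quasi-uniform if the mesh ratio $h(P)/q(P)$ stays bounded as $N \to \infty$; in that case both quantities are of the optimal order $N^{-1/d}$.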
|
11:00 |
On the power of adaption and randomization
Erich Novak, FSU Jena
We present bounds between different widths of convex subsets of Banach spaces,
including Gelfand and Bernstein widths, and discuss implications for the
adaption problem. In particular, we obtain a bound on the maximal gain of adaptive and randomized algorithms over non-adaptive, deterministic ones for approximating linear operators on convex sets.
Joint work with David Krieg and Mario Ullrich.
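As a hedged reminder of the widths mentioned above (our notation; the definitions are the standard ones, stated here for convenience): for a set $F \subset X$ in a Banach space $X$ (typically convex and symmetric), the Gelfand and Bernstein widths are
$$c^n(F, X) = \inf_{\substack{M \subset X \\ \operatorname{codim} M \le n}} \, \sup_{x \in F \cap M} \|x\|_X, \qquad b_n(F, X) = \sup_{\substack{X_{n+1} \subset X \\ \dim X_{n+1} = n+1}} \sup \{ r \ge 0 : r\, B_X \cap X_{n+1} \subseteq F \}.$$
Roughly speaking, the Gelfand width captures the minimal worst-case error of non-adaptive algorithms using $n$ linear functionals, which is why bounds between such widths bear on the adaption problem.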
|
11:30 |
How many continuous measurements are needed to learn a vector?
David Krieg, Universität Passau
It is impossible to learn a vector in $\mathbb{R}^m$ from fewer than $m$ linear measurements, even if the measurement maps are chosen adaptively. In contrast, we show that $\mathcal{O}(\log m)$ continuous adaptive measurements suffice. This leads to an exponential speed-up of continuous information over linear information for various approximation problems. Although this rather extreme result only holds for exact measurements and not for noisy ones, continuous information can still improve considerably upon linear information in the presence of noise.
Coauthors: Erich Novak, Leszek Plaskota, and Mario Ullrich
Main reference: https://arxiv.org/abs/2412.06468
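To fix terminology (a sketch in our own notation, not the authors'): an adaptive linear measurement of $x \in \mathbb{R}^m$ has the form
$$y_i = \langle a_i, x \rangle, \qquad a_i = a_i(y_1, \dots, y_{i-1}) \in \mathbb{R}^m,$$
whereas a continuous measurement allows $y_i = f_i(x)$ for an arbitrary continuous map $f_i \colon \mathbb{R}^m \to \mathbb{R}$, again chosen adaptively; the result above says that $\mathcal{O}(\log m)$ such continuous measurements already suffice to determine $x$.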
|
12:00 |
The $L_p$-discrepancy for finite $p>1$ suffers from the curse of dimensionality
Friedrich Pillichshammer, JKU Linz
The $L_p$-discrepancy is a classical quantitative measure for the irregularity of distribution of an $N$-element point set in the $d$-dimensional unit cube. Its inverse for dimension $d$ and error threshold $\varepsilon \in (0,1)$ is the minimal number of points in $[0,1)^d$ required so that the normalized $L_p$-discrepancy is at most $\varepsilon$. It is well known that the inverse of the $L_2$-discrepancy grows exponentially fast with the dimension $d$, i.e., we have the curse of dimensionality, whereas the inverse of the $L_{\infty}$-discrepancy depends exactly linearly on $d$.
The behavior of the inverse of the $L_p$-discrepancy for general $p \notin \{2,\infty\}$ has been an open problem for many years.
In this talk we show that the $L_p$-discrepancy suffers from the curse of dimensionality for all $p \in (1,\infty)$; only the case $p=1$ remains open.
This result follows from a more general result that we show for the worst-case error
of positive quadrature formulas. The same method also applies to other interesting settings.
Joint work with Erich Novak (FSU Jena).
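For reference, a standard way to write the quantity in question (a sketch in our notation): for $P = \{x_1, \dots, x_N\} \subset [0,1)^d$,
$$L_p(P) = \left( \int_{[0,1]^d} \Big| \frac{1}{N} \sum_{i=1}^{N} \mathbf{1}_{[0,t)}(x_i) - t_1 t_2 \cdots t_d \Big|^p \, \mathrm{d}t \right)^{1/p},$$
and the inverse discussed above is $N_p(\varepsilon, d) = \min\{ N \in \mathbb{N} : \text{there is an } N\text{-point } P \subset [0,1)^d \text{ with normalized } L_p\text{-discrepancy at most } \varepsilon \}$; the curse of dimensionality means that $N_p(\varepsilon, d)$ grows exponentially in $d$ for some fixed $\varepsilon$.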
|
15:30 |
Minimal dispersion on the sphere
Mathias Sonnleitner, JKU Linz
Given a fixed number of points on the Euclidean unit sphere, how can we distribute them to minimize the area of the largest empty spherical cap? We call this minimal area the minimal dispersion, and we are interested in its behavior depending on the number of points and the dimension, as well as in optimal point configurations. We present connections to covering and approximation problems, derive asymptotic results and highlight efficient constructions in high dimensions. Perhaps (un)surprisingly, the constructions are based on random points.
This talk is based on a joint project with Alexander Litvak and Tomasz Szczepanski.
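In symbols (a sketch in our notation): for points $x_1, \dots, x_N$ on the sphere $\mathbb{S}^d$ with normalized surface measure $\sigma$, the dispersion and its minimal value are
$$\mathrm{disp}(x_1, \dots, x_N) = \sup \{ \sigma(C) : C \text{ a spherical cap},\ C \cap \{x_1, \dots, x_N\} = \emptyset \}, \qquad \mathrm{disp}^*(N, d) = \inf_{x_1, \dots, x_N \in \mathbb{S}^d} \mathrm{disp}(x_1, \dots, x_N).$$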
|
16:00 |
Tent transformed order 2 nets as QMC sample nodes with quadratic decay
Nicolas Nagel, TU Chemnitz
We return to the question of quasi-Monte Carlo integration via order 2 digital nets for (nonperiodic) functions of bounded mixed second derivative over the d-dimensional unit cube. While it is known that order 5 nets are asymptotically optimal in terms of the worst-case error, these point sets are usually only available for small dimensions. This is due to their relatively high order parameter. It is thus natural to ask whether lower-order nets can already be used to achieve (near-)optimal error decay rates. We explore this direction, using a periodization technique called the tent transform as well as a new embedding result in terms of Faber-Schauder coefficients to improve the best previously known upper bound for low-complexity point sets such as order 2 nets.
This is joint work with Bernd Käßemodel and Tino Ullrich.
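For completeness, the tent transform referred to above is the usual periodization map applied coordinatewise (standard definition, stated here for convenience rather than quoted from the talk):
$$\phi(x) = 1 - |2x - 1| \quad \text{for } x \in [0,1], \qquad \phi(x_1, \dots, x_d) = (\phi(x_1), \dots, \phi(x_d)),$$
so that applying $\phi$ to the nodes of an order 2 net yields the tent transformed point sets studied in the talk.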
|
16:30 |
Spherical cap discrepancy, sums of distances, and Welch bounds
Dmytro Bilyk, University of Minnesota
Spherical cap discrepancy, due to the so-called Stolarsky invariance principle, is closely related to the sum of distances between points. We shall present a new proof of Beck's classical result stating that this discrepancy is always at least of the order $N^{-1/2 - 1/(2d)}$. This proof is completely elementary in nature and, unlike the other proofs, avoids using Fourier analysis or spherical harmonics/Gegenbauer polynomials. The argument is also flexible enough to provide inequalities for discrepancy in terms of various other geometric quantities, estimates for discrete Riesz energies, almost sharp constants in the discrepancy bounds for various dimensions, as well as new bounds for the discrepancy of lines and general sets of Hausdorff dimension between $0$ and $d$ (rather than just point sets). Along the way we shall discuss some related topics that come up: positive definite functions, Welch bounds, frame energy, spherical designs, and energy minimization on the sphere. The talk is based on joint work with Johann Brauchart (TU Graz).
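As a pointer to the identity behind the first sentence (normalization as usually stated, not quoted from the talk): for $x_1, \dots, x_N \in \mathbb{S}^d$ with normalized surface measure $\sigma$, Stolarsky's invariance principle reads
$$\frac{1}{N^2} \sum_{i,j=1}^{N} \|x_i - x_j\| + c_d \, D_{L_2, \mathrm{cap}}(x_1, \dots, x_N)^2 = \iint_{\mathbb{S}^d \times \mathbb{S}^d} \|x - y\| \, \mathrm{d}\sigma(x) \, \mathrm{d}\sigma(y),$$
with an explicit dimensional constant $c_d > 0$, so that a small $L_2$ spherical cap discrepancy is equivalent to a large sum of distances.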
|