Welcome to the IOL Seminar and Lecture Series at the Zuse Institute Berlin. Our research seminar serves to bring together researchers presenting their latest work and to organize tutorial lectures on valuable topics not typically covered in graduate coursework.
Presentations usually take place on Wednesday afternoons in ZIB’s Seminar Room (Room 2006). Announcements for these events are sent via email. For more information, please
contact Mathieu Besançon, Kartikey Sharma, or Zev Woodstock.
We introduce a new algorithmic framework, which we call Polyform,
for fast approximate matrix multiplication through sums of
spherical convolutions (sparse polynomial multiplications). This
bilinear operation leads to several new (worst-case and
data-dependent) improvements on the speed-vs-accuracy tradeoffs
for approximate matrix multiplication. Polyform can also be viewed
as a cheap practical alternative to matrix multiplication in deep
neural networks (DNNs), which is the main bottleneck in
large-scale training and inference.
The algorithm involves unexpected connections to Additive
Combinatorics, sparse Fourier transforms, and spherical
harmonics. The core of the algorithm is optimizing the
polynomial’s coefficients, which is a low-rank SDP problem
generalizing Spherical Codes. Meanwhile, our experiments
demonstrate that, when using SGD to optimize these coefficients in
DNNs, Polyform provides a major (3×-5×) speedup on state-of-the-art DL
models with minor accuracy loss. This suggests replacing matrix
multiplication with (variants of) polynomial multiplication in
large-scale deep neural networks.
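As a heavily simplified illustration of the link between bilinear operations and polynomial multiplication (a toy example only, not the Polyform algorithm; all names in the sketch are ours), the following snippet recovers a single inner product from one coefficient of a convolution:

```python
import numpy as np

def inner_product_via_convolution(u, v):
    """Toy illustration (not the Polyform algorithm): the inner product
    <u, v> appears as one coefficient of a polynomial product.  Encode u as
    p(x) = sum_i u[i] x^i and the reversal of v as q(x) = sum_i v[n-1-i] x^i;
    then the coefficient of x^(n-1) in p*q equals sum_i u[i] v[i]."""
    n = len(u)
    coeffs = np.convolve(u, v[::-1])     # polynomial multiplication
    return coeffs[n - 1]

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])
print(inner_product_via_convolution(u, v), np.dot(u, v))  # both 32.0
```

Polyform operates on whole matrices and optimizes the polynomial's coefficients; the point here is only that convolutions can encode sums of products.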
Antoine Deza
(McMaster University)
[homepage] Coordinates:
We investigate the following question: how close to each other can two disjoint lattice polytopes contained in a fixed hypercube be? This question occurs in various contexts where this minimal distance appears in complexity bounds of optimization algorithms. We provide nearly matching lower and upper bounds on this distance and discuss its exact computation. Similar bounds are given in the case of disjoint rational polytopes whose binary encoding length is prescribed. Joint work with Shmuel Onn, Sebastian Pokutta, and Lionel Pournin.
Jens Vygen
(University of Bonn)
[homepage] Coordinates:
We survey the state of the art in VLSI routing. Interconnecting millions of sets of pins by wires is challenging because of the huge instance sizes and limited resources. We present our general approach as well as algorithms for fractional min-max resource sharing and goal-oriented shortest path search sped up by geometric distance queries. These are key components of BonnRoute, which is used for routing some of the most complex microprocessors in industry.
Carla Michini
(University of Wisconsin)
[homepage] Coordinates:
We study the pure Price of Anarchy (PoA), which measures the
inefficiency of pure Nash equilibria, of symmetric network
congestion games defined over series-parallel networks.
First, we consider affine edge delays. For arbitrary networks,
Correa and others proved a tight upper bound of 5/2 on the
PoA. On the other hand, Fotakis showed that restricting to
the class of extension-parallel networks makes the worst-case PoA
decrease to 4/3. We prove that, for the larger class of
series-parallel networks, the PoA is at most 2, and that it is at
least 27/19 in the worst case, improving both the best-known upper
bound and the best-known lower bound.
Next, we consider edge delays that are polynomial functions with
highest degree p. We construct a family of symmetric congestion
games over arbitrary networks that achieves the same worst-case
PoA as asymmetric network congestion games, as given by Aland and
others. We then establish that in games defined over
series-parallel networks the PoA cannot exceed 2ᵖ⁺¹ − 1, which
is considerably smaller than the worst-case PoA in
arbitrary networks. We also prove that the worst-case PoA, which
is sublinear in extension-parallel networks (as shown by Fotakis),
dramatically degrades to exponential in series-parallel networks.
Finally, we extend the above results to the case where the social
cost of a strategy profile is computed as the maximum of the
players' costs. In this case, the worst-case PoA is in O(4ᵖ),
which is considerably smaller than the worst-case PoA in arbitrary
networks given by Christodoulou and Koutsoupias. Moreover,
while in extension-parallel networks each pure Nash equilibrium is
also a social optimum (as shown by Epstein and others), we construct
instances of series-parallel network congestion games with
exponential PoA.
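For readers unfamiliar with the quantity being bounded, the sketch below (a toy illustration of the PoA definition in the atomic, symmetric setting; it is not an instance from the talk) enumerates the pure Nash equilibria of a two-player game on two parallel links with affine delays and reports the ratio of the worst equilibrium cost to the optimum:

```python
from itertools import product

# Toy symmetric network congestion game: two players each route one unit
# from s to t over two parallel edges with affine delays.
delays = {
    "e1": lambda x: float(x),  # d(x) = x  (load-dependent)
    "e2": lambda x: 2.0,       # d(x) = 2  (constant)
}
edges = list(delays)
players = 2

def social_cost(profile):
    load = {e: profile.count(e) for e in edges}
    return sum(delays[e](load[e]) for e in profile)

def is_pure_nash(profile):
    load = {e: profile.count(e) for e in edges}
    for e in profile:
        current = delays[e](load[e])
        for f in edges:
            if f != e and delays[f](load[f] + 1) < current:
                return False               # profitable unilateral deviation
    return True

profiles = list(product(edges, repeat=players))
equilibria = [p for p in profiles if is_pure_nash(p)]
opt = min(social_cost(p) for p in profiles)
poa = max(social_cost(p) for p in equilibria) / opt
print(equilibria, opt, poa)                # PoA = 4/3 on this toy instance
```

Parallel links are extension-parallel, and this particular instance attains the 4/3 value mentioned above.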
Sequential decision-making often requires dynamic policies, which are in general not computationally tractable. Decision rules provide approximate solutions by restricting decisions to simple functions of uncertainties. In this talk, we consider a nonparametric lifting framework where the uncertainty space is lifted to higher dimensions to obtain nonlinear decision rules. Current lifting-based approaches require pre-determined functions and are parametric. We propose two nonparametric liftings, which derive the nonlinear functions by leveraging the uncertainty set structure and problem coefficients. Both methods integrate the benefits of lifting and nonparametric approaches, and hence provide scalable decision rules with performance bounds. More specifically, the set-driven lifting is constructed by finding polyhedra within uncertainty sets, inducing piecewise-linear decision rules with performance bounds. The dynamics-driven lifting, on the other hand, is constructed by extracting geometric information and accounting for problem coefficients. This is achieved by using linear decision rules of the original problem, which also makes it possible to quantify lower bounds on the objective improvement over linear decision rules. Using numerical comparisons with competing methods, we demonstrate superior computational scalability and comparable objective performance. These observations are magnified in multistage problems with extended time horizons, suggesting the practical applicability of the proposed nonparametric liftings in large-scale dynamic robust optimization. This is joint work with Eojin Han.
Yao Xie
(Georgia Institute of Technology)
[homepage] Coordinates: @ ZIB Lecture Hall (Room 2005)
Discrete events are sequential observations that record event
time, location, and possibly “marks” with additional event
information. Such event data is ubiquitous in modern applications,
including social networks, neuronal spike trains, police reports,
medical ICU data, power networks, seismic activities, and COVID-19
data. We are particularly interested in capturing the complex
dependence of the discrete events data, such as the latent
influence — triggering or inhibiting effects of the historical
events on future events — a temporal causal relationship. I will
present my recent research on this topic, covering estimation,
uncertainty quantification, handling high-dimensional marks, and
leveraging neural network representation power. The developed
methods particularly consider computational efficiency and
statistical guarantees, leveraging the recent advances in
variational inequality for monotone operators that bypass the
difficulty posed by the original non-convex model estimation
problem. The performance of the proposed method is illustrated
using real-world data: crime, power outage, hospital ICU, and
COVID-19 data.
We study network flow interdiction problems with nonlinear and
nonconvex flow models. The resulting model is a max-min bilevel optimization
problem in which the follower’s problem is nonlinear and nonconvex. In this
game, the leader attacks a limited number of arcs with the goal of maximizing
the load shed and the follower aims at minimizing the load shed by solving a
transport problem in the interdicted network. We develop an exact algorithm
consisting of lower and upper bounding schemes that computes an optimal
interdiction under the assumption that the interdicted network remains weakly
connected. The main challenge consists of computing valid upper bounds for
the maximal load shed, whereas lower bounds can directly be derived from
the follower’s problem. To compute an upper bound, we propose solving a
specific bilevel problem, which is derived from restricting the flexibility of
the follower when adjusting the load flow. This bilevel problem still has a
nonlinear and nonconvex follower’s problem, for which we then prove necessary
and sufficient optimality conditions. Consequently, we obtain equivalent
single-level reformulations of the specific bilevel model to compute upper bounds.
Our numerical results show the applicability of this exact approach using the
example of gas networks.
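As a point of reference (and not the exact model of the talk), a generic arc-interdiction max-min problem with load shed can be sketched as follows, where x are binary interdiction decisions with budget k, f denotes flows, and σ the shed demand:

```latex
% Generic max-min interdiction sketch (not the exact model of the talk):
% the leader removes at most k arcs; the follower then minimizes the total
% load shed sigma subject to (possibly nonlinear and nonconvex) flow
% constraints on the surviving network.
\[
  \max_{x \in \{0,1\}^{A},\; \mathbf{1}^{\top} x \le k}
  \;\;
  \min_{f,\; \sigma \ge 0}
  \; \sum_{v \in V} \sigma_v
  \quad \text{s.t.} \quad
  f \in \mathcal{F}(\sigma), \qquad
  f_a = 0 \ \text{ whenever } x_a = 1,
\]
% where F(sigma) collects flow balance with demands reduced by sigma together
% with the nonlinear potential/pressure constraints (as in gas networks).
```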
Paul Breiding
(Universität Osnabrück & MPI MiS Leipzig)
[homepage] Coordinates:
I will discuss recent progress on the problem of sampling from a nonlinear real smooth algebraic variety; that is, a nonlinear smooth manifold defined as the zero set of polynomial equations.
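As a minimal (and deliberately naive) illustration of what sampling from a variety can mean, assuming nothing about the talk's actual method, the sketch below intersects the unit circle with random lines:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_circle_via_random_lines(n):
    """Naive illustration (not the method of the talk): hit the variety
    {(x, y) : x^2 + y^2 - 1 = 0} with random lines y = a*x + b and keep the
    intersection points.  The raw points are not uniformly distributed;
    principled samplers reweight for the geometry of the intersections."""
    pts = []
    while len(pts) < n:
        a, b = rng.normal(size=2)
        # Substituting y = a*x + b into x^2 + y^2 - 1 = 0 gives
        #   (1 + a^2) x^2 + 2ab x + (b^2 - 1) = 0.
        disc = (2 * a * b) ** 2 - 4 * (1 + a ** 2) * (b ** 2 - 1)
        if disc < 0:
            continue                     # the line misses the circle; redraw
        for sign in (+1.0, -1.0):
            x = (-2 * a * b + sign * np.sqrt(disc)) / (2 * (1 + a ** 2))
            pts.append((x, a * x + b))
    return np.array(pts[:n])

print(np.round(sample_circle_via_random_lines(5), 3))
```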
Kate Smith-Miles
(University of Melbourne; OPTIMA)
[homepage] Coordinates: @ ZIB Lecture Hall (Room 2005)
Instance Space Analysis (ISA) is a recently developed methodology to support objective testing of algorithms. Rather than reporting algorithm performance on average across a chosen set of test problems, as is standard practice, ISA offers a more nuanced understanding of the unique strengths and weaknesses of algorithms across different regions of the instance space that may otherwise be hidden on average. It also facilitates objective assessment of any bias in the chosen test instances, and provides guidance about the adequacy of benchmark test suites and the generation of more diverse and comprehensive test instances to span the instance space. This talk provides an overview of the ISA methodology, and the online software tools that are enabling its worldwide adoption in many disciplines. A case study comparing algorithms for university timetabling is presented to illustrate the methodology and tools, with several other applications to optimisation, machine learning, computer vision and quantum computing highlighted.
Koopman operator theory has been successfully applied to problems from various research areas such as fluid dynamics, molecular dynamics, climate science, engineering, and biology. Most applications of Koopman theory have been concerned with classical dynamical systems driven by ordinary or stochastic differential equations. In this presentation, we will first compare the ground-state transformation and Nelson’s stochastic mechanics, thereby demonstrating that data-driven methods developed for the approximation of the Koopman operator can be used to analyze quantum physics problems. Moreover, we exploit the relationship between Schrödinger operators and stochastic control problems to show that modern data-driven methods for stochastic control can be used to solve the stationary or imaginary-time Schrödinger equation. Our findings open up a new avenue towards solving Schrödinger’s equation using recently developed tools from data science.
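As background for the data-driven methods mentioned above, a minimal EDMD-style sketch (a standard construction, not necessarily the one used in the talk; all names are ours) estimates a finite-dimensional Koopman matrix from snapshot pairs by least squares:

```python
import numpy as np

def edmd(X, Y, dictionary):
    """Minimal EDMD sketch: given snapshot pairs (x_t, y_t = x_{t+1}) in the
    columns of X and Y, and a dictionary of observables psi: R^d -> R^m,
    return the least-squares Koopman matrix K with Psi(Y) ≈ K @ Psi(X)."""
    PsiX = np.column_stack([dictionary(x) for x in X.T])
    PsiY = np.column_stack([dictionary(y) for y in Y.T])
    return PsiY @ np.linalg.pinv(PsiX)   # regression in observable space

# Toy example: linear system x_{t+1} = A x_t with a small monomial dictionary.
rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1], [0.0, 0.8]])
X = rng.normal(size=(2, 500))
Y = A @ X
dictionary = lambda x: np.array([x[0], x[1], x[0] * x[1], x[0] ** 2])
K = edmd(X, Y, dictionary)
print(np.round(K, 3))   # the leading 2x2 block recovers A (approximately)
```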
A key problem in mathematical imaging, signal processing and computational statistics is the minimization of non-convex objective functions over conic domains, which are continuous but potentially non-smooth at the boundary of the feasible set. For such problems, we propose a new family of first- and second-order interior-point methods for non-convex and non-smooth conic constrained optimization problems, combining the Hessian barrier method with quadratic and cubic regularization techniques. Our approach is based on a potential-reduction mechanism and attains a suitably defined class of approximate first- or second-order KKT points with worst-case iteration complexity O(ϵ^(-2)) and O(ϵ^(-3/2)), respectively. Based on these findings, we develop a new double-loop path-following scheme attaining the same complexity, modulo adjusting constants. These complexity bounds are known to be optimal in the unconstrained case, and our work shows that they are upper bounds in the case with complicated constraints as well. A key feature of our methodology is the use of self-concordant barriers to construct strictly feasible iterates via a disciplined decomposition approach and without sacrificing the iteration complexity of the method. To the best of our knowledge, this work is the first to achieve these worst-case complexity bounds under such weak conditions for general conic constrained optimization problems. This is joint work with Pavel Dvurechensky (WIAS Berlin) and based on the paper arXiv:2111.00100 [math.OC].
Daniel Blankenburg
(University of Bonn) Coordinates: @ ZIB Lecture Hall (Room 2005)
We revisit the (block-angular) min-max resource sharing problem,
which is a well-known generalization of fractional packing and the
maximum concurrent flow problem. It consists of finding an
ℓ∞-minimal element in a Minkowski sum
X = ∑_{c∈C} X_c of non-empty closed convex sets
X_c ⊆ ℝ^R_{≥0}, where C and R are finite
sets. We assume that an oracle for approximate linear minimization
over X_c is given.
We improve on the currently fastest known FPTAS in various ways.
A major novelty of our analysis is the concept of local weak
duality, which illustrates that the algorithm optimizes (close to)
independent parts of the instance separately. Interestingly, this
implies that the computed solution is not only approximately
ℓ∞-minimal, but that, among such solutions, its
second-highest entry is also approximately minimal.
Based on a result by Klein and Young, we provide a lower bound of
𝛺(((|C|+|R|) log |R|)/𝛿²) on the number of oracle calls required for a natural
class of algorithms. Our FPTAS is optimal within this class — its
running time matches the lower bound precisely, and thus improves
on the previously best-known running time for the primal as well
as the dual problem.
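The FPTAS itself is not spelled out in the abstract; for orientation, a generic multiplicative-weights scheme for min-max resource sharing (a standard textbook flavor, not the speaker's algorithm; the toy oracles, step size, and iteration count are ours) looks roughly as follows:

```python
import numpy as np

def resource_sharing_mwu(oracles, num_resources, iters=2000, eta=0.1):
    """Generic multiplicative-weights sketch for min-max resource sharing
    (standard scheme, not the algorithm of the talk): each block oracle takes
    a price vector y >= 0 and returns some x_c in X_c (approximately)
    minimizing y . x_c; prices grow exponentially with the current load, and
    the time-averaged aggregate usage is returned."""
    load = np.zeros(num_resources)
    total = np.zeros(num_resources)
    for _ in range(iters):
        prices = np.exp(eta * (load - load.max()))   # scale-free prices
        step = sum(oracle(prices) for oracle in oracles)
        total += step
        load += step
    return total / iters

# Toy blocks: each "customer" places one unit on a single allowed resource;
# its oracle simply picks the cheapest allowed resource.
def make_oracle(allowed):
    def oracle(prices):
        x = np.zeros(len(prices))
        x[min(allowed, key=lambda j: prices[j])] = 1.0
        return x
    return oracle

oracles = [make_oracle([0, 1]), make_oracle([0, 1]), make_oracle([1, 2])]
print(np.round(resource_sharing_mwu(oracles, num_resources=3), 2))
# time-averaged resource loads, roughly balanced near [1, 1, 1]
```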
The vanishing ideal of a set of points is the set of all polynomials that vanish on those points. Any vanishing ideal can be generated by a finite set of vanishing polynomials, or generators, and the computation of approximate generators has been developed over the last decade at the intersection of computer algebra and machine learning under the name of approximate computation of vanishing ideals. In computer algebra, the developed algorithms are more deeply supported by theory, whereas in machine learning the algorithms have been developed toward applications at the cost of some theoretical properties. In this talk, I will review the development of the approximate computation of vanishing ideals in the two fields, particularly from the perspective of the spurious vanishing problem and normalization, which have recently been suggested as a new direction of development.
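As background for the "approximate generators" mentioned above, a common linear-algebraic building block (a sketch only, not a full algorithm from the talk) finds approximately vanishing polynomials of bounded degree from the SVD of a monomial evaluation matrix; small singular values correspond to coefficient vectors of polynomials that almost vanish on the data:

```python
import numpy as np

def approx_vanishing_polynomials(points, tol=1e-6):
    """Sketch: for 2-D data and total degree <= 2, build the evaluation matrix
    of the monomials [1, x, y, x^2, x*y, y^2] and return the coefficient
    vectors (right singular vectors) whose singular values fall below tol
    (relative to the largest singular value); each such vector defines a
    polynomial that approximately vanishes on all the points."""
    x, y = points[:, 0], points[:, 1]
    M = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    _, s, Vt = np.linalg.svd(M, full_matrices=False)
    return [Vt[i] for i in range(len(s)) if s[i] < tol * s[0]]

# Points near the unit circle: x^2 + y^2 - 1 should be recovered (up to scale).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2 * np.pi, 50, endpoint=False)
pts = np.column_stack([np.cos(t), np.sin(t)]) + 1e-8 * rng.normal(size=(50, 2))
for c in approx_vanishing_polynomials(pts, tol=1e-4):
    print(np.round(c, 3))   # ≈ multiple of [-1, 0, 0, 1, 0, 1]
```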
Vladimir Kolmogorov
(IST Austria) Coordinates: @ ZIB Lecture Hall (Room 2005)
I will consider the problem of minimizing functions of discrete variables represented as a sum of “tractable” subproblems. First, I will briefly review recent theoretical results characterizing the complexity classification of discrete optimization in the framework of “Valued Constraint Satisfaction Problems” (VCSPs). Then I will talk about algorithms for solving Lagrangian relaxations of such problems. I will describe an approach based on the Frank-Wolfe algorithm that achieves the best-known convergence rate. I will also talk about practical implementation, and in particular about in-face Frank-Wolfe directions for certain combinatorial subproblems. Implementing such directions for perfect matching subproblems boils down to computing a Gomory-Hu (GH) tree of a given graph. Time permitting, I will describe a new approach for computing GH trees that appears to lead to a state-of-the-art implementation.
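For orientation, the basic Frank-Wolfe iteration with a linear minimization oracle looks as follows (a textbook sketch, not the speaker's implementation); the in-face variant mentioned above roughly restricts the oracle to the minimal face containing the current iterate:

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, iters=200):
    """Textbook Frank-Wolfe sketch: at each step, call the linear minimization
    oracle (LMO) at the current gradient, then move towards the returned
    vertex with the standard step size 2 / (k + 2)."""
    x = np.array(x0, dtype=float)
    for k in range(iters):
        v = lmo(grad(x))                 # argmin over the feasible set of <grad, v>
        x += (2.0 / (k + 2.0)) * (v - x)
    return x

# Toy problem: minimize ||x - b||^2 over the probability simplex.
b = np.array([0.1, 0.7, 0.2, 0.5])
grad = lambda x: 2.0 * (x - b)

def simplex_lmo(g):
    v = np.zeros_like(g)
    v[np.argmin(g)] = 1.0                # best vertex of the simplex
    return v

x_star = frank_wolfe(grad, simplex_lmo, x0=np.ones_like(b) / len(b))
print(np.round(x_star, 3))               # ≈ Euclidean projection of b onto the simplex
```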
There has been enormous progress in branch-and-bound methods
over the past couple of decades. In particular, much effort has been
put into the so-called variable selection problem, i.e., the
problem of choosing which variable to branch on in the current
search node. Recently, many researchers have investigated the
potential of using machine learning to find good solutions to this
problem, for instance by trying to mimic what good, but
computationally costly, heuristics do. The main part of this
research has been focused on branching on so-called elementary
disjunctions, that is, branching on a single variable. Theory,
such as the results by H.W. Lenstra, Jr. and by Lovász & Scarf,
tells us that we in general need to consider branching on general
disjunctions, but, due in part to the computational challenges of
implementing such methods, much less work in this direction has been
done. Some heuristic results in this direction have been
presented.
In this talk we discuss both theoretical and heuristic results
when it comes to branching on general disjunctions with an
emphasis on lattice-based methods. A modest computational study is
also presented. In the last part of the talk we also give a short
description of results from applying machine learning to the
variable selection problem. The talk is based on joint work with
Laurence Wolsey.
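To fix terminology (a standard illustration, not specific to the talk): branching on an elementary disjunction splits on a single variable, whereas a general (split) disjunction uses an integer vector π and an integer π₀:

```latex
% Elementary disjunction (single-variable branching) on x_j with fractional
% LP value \bar{x}_j:
\[
  x_j \le \lfloor \bar{x}_j \rfloor
  \quad \vee \quad
  x_j \ge \lceil \bar{x}_j \rceil .
\]
% General (split) disjunction for an integer vector \pi and an integer \pi_0,
% valid for every integer point x:
\[
  \pi^{\top} x \le \pi_0
  \quad \vee \quad
  \pi^{\top} x \ge \pi_0 + 1 .
\]
% Lattice-based methods, in the spirit of Lenstra and Lovász–Scarf, look for
% a direction \pi along which the feasible region is thin.
```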
Vijay Vazirani
(University of California, Irvine) Coordinates:
Over the last three decades, the online bipartite matching (OBM)
problem has emerged as a central problem in the area of Online
Algorithms. Perhaps even more important is its role in the area of
Matching-Based Market Design. The resurgence of this area, with
the revolutions of the Internet and mobile computing, has opened
up novel, path-breaking applications, and OBM has emerged as its
paradigmatic algorithmic problem. In a 1990 joint paper with
Richard Karp and Umesh Vazirani, we gave an optimal algorithm,
called RANKING, for OBM, achieving a competitive ratio of (1 –
1/e); however, its analysis was difficult to comprehend. Over the
years, several researchers simplified the analysis.
We will start by presenting a “textbook quality” proof of
RANKING. Its simplicity raises the possibility of extending
RANKING all the way to a generalization of OBM called the adwords
problem. This problem is both notoriously difficult and very
significant, the latter because of its role in the AdWords
marketplace of Google. We will show how far this endeavor has gone
and what remains. We will also provide a broad overview of the
area of Matching-Based Market Design and pinpoint the role of OBM.
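A minimal sketch of RANKING (as commonly stated; the variable names and toy instance are ours): draw one random permutation of the offline vertices up front, then match each arriving online vertex to its highest-ranked unmatched neighbor.

```python
import random

def ranking(offline, arrivals, seed=0):
    """RANKING for online bipartite matching: draw one uniformly random
    permutation (rank) of the offline vertices; when an online vertex
    arrives, match it to its unmatched neighbor of best (smallest) rank."""
    rng = random.Random(seed)
    perm = list(offline)
    rng.shuffle(perm)
    rank = {u: i for i, u in enumerate(perm)}
    matched = {}                                  # offline vertex -> online vertex
    for v, neighbors in arrivals:                 # online vertices in arrival order
        free = [u for u in neighbors if u not in matched]
        if free:
            matched[min(free, key=rank.get)] = v
    return matched

# Toy instance: offline vertices a, b, c; online vertices arrive in order.
offline = ["a", "b", "c"]
arrivals = [("v1", ["a", "b"]), ("v2", ["a"]), ("v3", ["b", "c"])]
print(ranking(offline, arrivals))
```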
Sébastien Designolle
(University of Geneva)
[homepage] Coordinates: @ ZIB Lecture Hall (Room 2005)
In quantum mechanics, performing a measurement is an invasive
process which generally disturbs the system. Due to this
phenomenon, there exist incompatible quantum measurements, i.e.,
measurements that cannot be simultaneously performed on a single
copy of the system.
In this talk we will explain the robustness-based approach
generally used to quantify this incompatibility and how it can be
cast, for finite-dimensional systems, as a semidefinite
programming problem. With this formulation at hand we analytically
investigate the incompatibility properties of some
high-dimensional measurements and we tackle, for an arbitrary
fixed dimension, the question of the most incompatible pairs of
quantum measurements, showing in particular optimality of
Fourier-conjugated bases.
Lorenz T. (Larry) Biegler
(Carnegie Mellon University)
[homepage] Coordinates:
Optimization models for engineering design and operation are frequently described by complex models and black-box simulations. The integration, solution, and optimization of this ensemble of large-scale models is often difficult and computationally expensive. As a result, model reduction in the form of simplified or data-driven surrogate models is widely applied in optimization studies. While the application of machine learning and AI approaches has led to widespread optimization studies with surrogate models, less attention has been paid to validating the optimality of these results on high-fidelity, i.e., ‘truth’, models. This talk describes a surrogate-based optimization approach based on a trust-region filter (TRF) strategy. The TRF method substitutes surrogates for high-fidelity models, thus leading to simpler optimization subproblems with sampling information from truth models. Adaptation of the subproblems is guided by a trust-region method, which is globally convergent to a local optimum of the original high-fidelity problem. The approach is suitable for broad choices of surrogate models, ranging from neural networks to physics-based shortcut models. The TRF approach has been implemented on numerous optimization examples in process and energy systems with complex high-fidelity models. Three case studies will be presented, on Real-Time Optimization (RTO) for oil refineries, chemical processes, and dynamic adsorption models for CO2 capture, which demonstrate the effectiveness of this approach.
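The TRF algorithm has specific filter-based acceptance rules; the sketch below shows only the generic surrogate-plus-trust-region loop the abstract alludes to (fit a cheap local model, optimize it within the trust region, accept or shrink based on the truth model), with all function names, constants, and the toy "truth" model being ours:

```python
import numpy as np
from scipy.optimize import minimize

def truth(x):                              # expensive "truth" model (toy stand-in)
    return (x[0] - 1.0) ** 2 + 5.0 * (x[1] + 0.5) ** 2

def fit_linear_surrogate(f, x, h=1e-3):
    """Cheap local surrogate: finite-difference linear model around x."""
    f0 = f(x)
    g = np.array([(f(x + h * e) - f0) / h for e in np.eye(len(x))])
    return lambda z: f0 + g @ (z - x)

def trust_region_surrogate_opt(f, x0, radius=1.0, iters=30):
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        model = fit_linear_surrogate(f, x)
        bounds = [(xi - radius, xi + radius) for xi in x]     # box trust region
        cand = minimize(model, x, bounds=bounds).x
        actual = f(x) - f(cand)
        predicted = model(x) - model(cand)
        if predicted > 0 and actual / predicted > 0.25:       # good agreement
            x, radius = cand, min(2.0 * radius, 4.0)          # accept, expand
        else:
            radius *= 0.5                                     # reject, shrink
    return x

print(np.round(trust_region_surrogate_opt(truth, [3.0, 2.0]), 3))  # ≈ [1, -0.5]
```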
Haoxiang Yang
(CUHK-Shenzhen)
[homepage] Coordinates:
In this talk, we consider a robust optimization problem with continuous decision-dependent uncertainty (RO-CDDU), which has two new features: an uncertainty set linearly dependent on continuous decision variables and a convex piecewise-linear objective function. We prove that RO-CDDU is NP-hard in general and reformulate it into an equivalent mixed-integer nonlinear program (MINLP) with a decomposable structure to address the computational challenges. Such an MINLP model can be further transformed into a mixed-integer linear program (MILP) given the uncertainty set’s extreme points. We propose an alternating direction algorithm and a column generation algorithm for RO-CDDU. We model a robust demand response (DR) management problem in electricity markets as RO-CDDU, where electricity demand reduction from users is uncertain and depends on the DR planning decision. Extensive computational results demonstrate the promising performance of the proposed algorithms in both speed and solution quality. The results also shed light on how different magnitudes of decision-dependent uncertainty affect the demand response decision.
Vu Nguyen
(Amazon Research Australia)
[homepage] Coordinates: @ ZIB Lecture Hall (Room 2005)
Bayesian optimization (BO) has demonstrated impressive success in optimizing black-box functions. However, there are still challenges in dealing with black-boxes that include both continuous and categorical inputs. I am going to present our recent works in optimizing the mixed space of categorical and continuous variables using Bayesian optimization [B. Ru, A. Alvi, V. Nguyen, M. Osborne, and S. Roberts. “Bayesian optimisation over multiple continuous and categorical inputs.” ICML 2020] and how to scale it up to higher dimensions [X. Wan, V. Nguyen, H. Ha, B. Ru, C. Lu, and M. Osborne. “Think Global and Act Local: Bayesian Optimisation over High-Dimensional Categorical and Mixed Search Spaces.” ICML 2021] and population-based AutoRL setting [J. Parker-Holder, V. Nguyen, S. Desai, and S. Roberts. “Tuning Mixed Input Hyperparameters on the Fly for Efficient Population Based AutoRL”. NeurIPS 2021].
Jonathan Eckstein
(Rutgers Business School)
[homepage] Coordinates: @ ZIB Conference Room (Room 3028)
This talk describes the solution of convex optimization problems
that include uncertainty modeled by a finite but potentially very
large multi-stage scenario tree.
In 1991, Rockafellar and Wets proposed the progressive hedging (PH)
algorithm to solve such problems. This method has some advantages
over other standard methods such as Benders decomposition,
especially for problems with large numbers of decision stages. The
talk will open by showing that PH is an application of the
Alternating Direction Method of Multipliers (ADMM). The equivalence
of PH to the ADMM has long been known but not explicitly published.
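For reference, the standard progressive hedging updates (in textbook form, not specific to this talk) for scenarios s with probabilities p_s and penalty parameter ρ > 0 are:

```latex
% Standard progressive hedging iteration (textbook form) for scenarios s with
% probabilities p_s, penalty parameter \rho > 0, and consensus variable \bar{x}:
\begin{align*}
  x_s^{k+1} &\in \arg\min_x \; f_s(x) + \langle w_s^{k}, x \rangle
               + \tfrac{\rho}{2}\, \lVert x - \bar{x}^{k} \rVert^{2}
            && \text{(one subproblem per scenario)} \\
  \bar{x}^{k+1} &= \textstyle\sum_s p_s\, x_s^{k+1}
            && \text{(nonanticipativity average)} \\
  w_s^{k+1} &= w_s^{k} + \rho \bigl( x_s^{k+1} - \bar{x}^{k+1} \bigr)
            && \text{(dual update)}
\end{align*}
```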
The ADMM is an example of an “operator splitting” method, and in
particular of a principle called “Douglas–Rachford splitting”. I
will briefly explain what is meant by an “operator splitting
method”.
Next, the talk will apply a different, more recent operator
splitting method called “projective splitting” to the same
problem. The resulting method is called “asynchronous projective
hedging” (APH). Unlike most decomposition methods, it does not need
to solve every subproblem at every iteration; instead, each
iteration may solve just a single subproblem or a small subset of
the available subproblems.
Finally, the talk will describe work integrating the APH algorithm
into mpi-sppy, a Python package for modeling and distributed
parallel solution of stochastic programming problems. mpi-sppy
uses the Pyomo Python-based optimization modeling system. Our
experience includes using up to 2,400 processor cores to solve
2-stage and 4-stage test problem instances with as many as
1,000,000 scenarios.
Portions of the work described in this talk are joint with Patrick
Combettes (North Carolina State University), Jean-Paul Watson
(Lawrence Livermore National Laboratory, USA), and David Woodruff
(University of California, Davis).
David Steurer
(ETH Zürich)
[homepage] Coordinates:
We consider mixtures of k≥2 Gaussian components with
unknown means and unknown covariance (identical for all
components) that are well-separated, i.e., distinct components
have statistical overlap at most k^(-C) for a large enough
constant C≥1.
Previous statistical-query lower bounds
[Ilias Diakonikolas, Daniel M. Kane, and Alistair Stewart,
Statistical query lower bounds for robust estimation of
high-dimensional Gaussians and Gaussian mixtures (extended
abstract),
58th Annual IEEE Symposium on Foundations of
Computer Science—FOCS 2017, pp. 73–84]
give formal evidence that,
even for the special case of collinear means, distinguishing such
mixtures from (pure) Gaussians may be exponentially hard (in k).
We show that, surprisingly, this kind of hardness can only appear
if mixing weights are allowed to be exponentially small. For
polynomially lower bounded mixing weights, we show how to achieve
non-trivial statistical guarantees in quasi-polynomial time.
Concretely, we develop an algorithm based on the sum-of-squares
method with running time quasi-polynomial in the minimum mixing
weight. The algorithm can reliably distinguish between a mixture
of k≥2 well-separated Gaussian components and a (pure) Gaussian
distribution. As a certificate, the algorithm computes a
bipartition of the input sample that separates some pairs of
mixture components, i.e., both sides of the bipartition contain
most of the sample points of at least one component.
For the special case of collinear means, our algorithm outputs a
k-clustering of the input sample that is approximately consistent
with all components of the underlying mixture. We obtain similar
clustering guarantees also for the case that the overlap between
any two mixture components is lower bounded quasi-polynomially in
k (in addition to being upper bounded polynomially in k).
A significant challenge for our results is that they appear to be
inherently sensitive to small fractions of adversarial outliers,
unlike most previous algorithmic results for Gaussian mixtures.
The reason is that such outliers can simulate exponentially small
mixing weights even for mixtures with polynomially lower bounded
mixing weights.
A key technical ingredient of our algorithms is a characterization
of separating directions for well-separated Gaussian components in
terms of ratios of polynomials that correspond to moments of two
carefully chosen orders logarithmic in the minimum mixing weight.
Linear optimization, also known as linear programming, is a
modelling framework widely used by analytics practitioners.
The reason is that many optimization problems can easily be
described in this framework. Moreover, huge linear optimization
problems can be solved using readily available software and
computers.
However, a linear model is not always a good way to describe an
optimization problem since the problem may contain nonlinearities.
Nevertheless, such nonlinearities are often ignored or linearized
because a nonlinear model is considered cumbersome. Also, there are
issues with local versus global optima, and in general it is just
much harder to work with nonlinear functions than linear
functions.
Over the last 15 years, a new paradigm for formulating certain
nonlinear optimization problems, called conic optimization, has
appeared. The advantage of conic optimization is that it allows the
formulation of a wide variety of nonlinearities
while almost keeping the simplicity and efficiency of linear
optimization.
Therefore, in this presentation we will discuss what conic
optimization is and why it is relevant to analytics
practitioners. In particular we will discuss what can be
formulated using conic optimization, illustrated by examples. We
will also provide some computational results documenting that
large conic optimization problems can be solved efficiently in
practice. To summarize, this presentation should be interesting
for everyone interested in an important recent development in
nonlinear optimization.
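As one standard example of the kind of reformulation referred to here (illustrative, not taken from the presentation), both a Euclidean-norm bound and a hyperbolic constraint are conic-representable:

```latex
% Two standard conic reformulations (illustrative, not taken from the talk).
% (1) A Euclidean-norm bound is a second-order cone constraint:
\[
  \lVert A x - b \rVert_2 \le t
  \;\Longleftrightarrow\;
  (t,\, A x - b) \in \mathcal{Q}^{m+1}
  := \{ (t, y) : \lVert y \rVert_2 \le t \}.
\]
% (2) The hyperbolic constraint x y \ge 1 with x, y \ge 0 is a rotated
%     second-order cone constraint:
\[
  x y \ge 1,\; x \ge 0,\; y \ge 0
  \;\Longleftrightarrow\;
  (x,\, y,\, \sqrt{2}) \in \mathcal{Q}_r^{3}
  := \{ (x, y, z) : 2 x y \ge z^{2},\; x, y \ge 0 \}.
\]
```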
Proximity operators are tools which use first-order information to solve optimization problems. However, unlike gradient-based methods, algorithms involving proximity operators are guaranteed to work in nonsmooth settings. This expository talk will discuss the mathematical and numerical properties of proximity operators, how to compute them, algorithms involving them, and advice on implementation.
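As a concrete illustration of these ideas (standard material, not specific to this talk; the names and toy problem are ours), the proximity operator of the ℓ1 norm is componentwise soft-thresholding, and plugging it into a forward-backward (proximal gradient) iteration handles the nonsmooth problem min_x ½‖Ax − b‖² + λ‖x‖₁:

```python
import numpy as np

def prox_l1(v, tau):
    """Proximity operator of tau*||.||_1: componentwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient(A, b, lam, iters=500):
    """Forward-backward splitting for min_x 0.5*||Ax - b||^2 + lam*||x||_1:
    a gradient step on the smooth term followed by the prox of the nonsmooth
    term, with step size 1/L where L is the gradient's Lipschitz constant."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)               # forward (gradient) step
        x = prox_l1(x - grad / L, lam / L)     # backward (proximal) step
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]
b = A @ x_true
print(np.round(proximal_gradient(A, b, lam=0.1), 2))  # sparse, close to x_true
```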