Alexander Timans
Mail | LinkedIn | Google Scholar | GitHub | CV
Last updated: November 2024
About me
I am an ELLIS PhD candidate at the University of Amsterdam in the Machine Learning Lab, jointly supervised by Eric Nalisnick (Johns Hopkins University) and Christian Naesseth. I am part of the Delta Lab, a research collaboration with the Bosch Center for Artificial Intelligence. In that context, I am also advised by the Bosch research scientists Christoph-Nikolas Straehle and Kaspar Sakmann.
My research focuses on principled and efficient uncertainty quantification for deep learning, with the goal of facilitating model reasoning and decision-making. This includes probabilistic approaches grounded in frequentist statistics, such as conformal prediction, risk control, and e-values, as well as approaches relying on Bayesian principles. I am also interested in connections to other notions of model reliability, such as calibration and forecaster scoring, robustness and generalization, and interpretability. Applications of interest include vision, time series, and online settings.
I graduated with an MSc in Statistics from ETH Zurich, specialising in machine learning and computational statistics. My master's thesis was an interdisciplinary project with the MIE Lab on uncertainty quantification in traffic prediction (see here). Before that, I obtained a BSc in Industrial Engineering from the Karlsruhe Institute of Technology (KIT), focusing on statistics and finance.
Research
Max-Rank: Efficient Multiple Testing for Conformal Prediction
Alexander Timans, Christoph-Nikolas Straehle, Kaspar Sakmann, Eric Nalisnick
Preprint (arXiv), 2024
Links: Paper
We propose max-rank, a multiple testing correction based on rank permutations, tailored to the multiple testing issues that arise in conformal prediction.
Fast yet Safe: Early-Exiting with Risk Control
Metod Jazbec*, Alexander Timans*, Tin Hadzi Veljkovic, Kaspar Sakmann, Dan Zhang, Christian A. Naesseth, Eric Nalisnick
Neural Information Processing Systems (NeurIPS), 2024
Also in: ICML Workshops on Structured Probabilistic Inference and Generative Modelling & Efficient Systems for Foundation Models
* Equal contribution
Links: Paper | Code | Poster
We investigate how to adapt risk control frameworks to early-exit neural networks (EENNs), providing a distribution-free, post-hoc solution that tunes the EENN’s exit mechanism so that it exits early only when the output is guaranteed to satisfy user-specified performance goals.
|
|
Adaptive Bounding Box Uncertainties via Two-Step Conformal Prediction
Alexander Timans, Christoph-Nikolas Straehle, Kaspar Sakmann, Eric Nalisnick
European Conference on Computer Vision (ECCV), 2024 (Oral)
Also in: ECCV Workshop on Uncertainty Quantification for CV
Links: Paper | Code | Poster
We propose a two-step conformal approach that propagates uncertainty in predicted class labels into the uncertainty intervals of bounding boxes, broadening the validity of conformal coverage guarantees to include incorrectly classified objects. This work builds on our earlier workshop paper.
Conformal Time Series Decomposition with Component-Wise Exchangeability
Derck Prinzhorn, Thijmen Nijdam, Putri van der Linden, Alexander Timans
Conformal and Probabilistic Prediction with Applications (PMLR), 2024
Links: Paper | Code
We present a novel use of conformal prediction for time series forecasting that incorporates time series decomposition, allowing us to tailor the conformal methods employed to the different exchangeability regimes underlying each time series component.
Adaptive Bounding Box Uncertainty via Conformal Prediction
Alexander Timans, Christoph-Nikolas Straehle, Kaspar Sakmann, Eric Nalisnick
ICCV Workshop on Uncertainty Quantification for CV, 2023
Links: Paper
We quantify the uncertainty in multi-object 2D bounding box predictions via conformal prediction, producing tight prediction intervals with guaranteed per-class coverage for the bounding box coordinates.
Uncertainty Quantification for Image-based Traffic Prediction across Cities
Alexander Timans, Nina Wiedemann, Nishant Kumar, Ye Hong, Martin Raubal
Preprint (arXiv), 2023
Links: Paper | Code
We explore different uncertainty quantification methods on a large-scale image-based traffic dataset spanning multiple cities and time periods, originally featured as a NeurIPS 2021 prediction challenge. A combination of deep ensembles and patch-based deviations recovers meaningful uncertainty related to traffic dynamics. In a case study, we demonstrate how the uncertainty estimates can be employed for unsupervised outlier detection of traffic behaviour.
Other activities
- Reviewing: AISTATS 2025, ICLR 2025, NeurIPS 2024, ICCV 2023
- Teaching & Supervision: Project AI (MSc/UvA), Human-in-the-loop ML (MSc/UvA), Introduction to ML (BSc/UvA), Derck Prinzhorn (BSc Thesis/UvA, Thesis award), Deep Learning 2 (MSc/UvA), ANOVA (MSc/ETH), Econometrics (BSc/KIT)