Alexander Timans

Mail   |   GitHub   |   Google Scholar   |   LinkedIn   |   CV


I am a PhD student in Machine Learning at the University of Amsterdam supervised by Eric Nalisnick in the Amsterdam Machine Learning Lab. I am also affiliated with the Delta Lab, a research collaboration between the University of Amsterdam and the Bosch Center for Artificial Intelligence. In that context, I am co-supervised by Bosch research scientists Kaspar Sakmann and Christoph-Nikolas Straehle.

My research focuses on principled and efficient uncertainty quantification for deep learning models, with a particular emphasis on applications in computer vision. This includes probabilistic approaches relying on Bayesian principles, as well as frequentist frameworks such as conformal prediction. I am also interested in connections to other notions of reliability, such as model calibration, robustness and generalization, and interpretability, as well as alternative frameworks that provide safety assurances.

I graduated with an MSc in Statistics from ETH Zurich, specialising in machine learning and computational statistics. My master's thesis was an interdisciplinary project with the Mobility Information Engineering Lab on uncertainty quantification in traffic prediction (see here). Before that, I completed a BSc in Industrial Engineering and Management at the Karlsruhe Institute of Technology (KIT), focusing on statistics and finance.


Updates


Research


Adaptive Bounding Box Uncertainty via Conformal Prediction


Alexander Timans, Christoph-Nikolas Straehle, Kaspar Sakmann, Eric Nalisnick

ICCV 2023 Workshop on Uncertainty Quantification for Computer Vision
Links: Paper

We quantify the uncertainty in multi-object 2D bounding box predictions via conformal prediction, achieving practically useful prediction intervals with guaranteed per-class coverage for the bounding box coordinates.


Uncertainty Quantification for Image-based Traffic Prediction across Cities


Alexander Timans, Nina Wiedemann, Nishant Kumar, Ye Hong, Martin Raubal

arXiv preprint (under review), 2023
Links: Paper | Code

We explore a series of uncertainty quantification methods on a large-scale image-based traffic dataset spanning multiple cities and time periods, originally featured as a NeurIPS 2021 prediction challenge. Meaningful uncertainty estimates relating to underlying traffic dynamics are recovered by combining deep ensembles with patch-based deviation measures. In a case study, we then demonstrate how these uncertainty estimates can be employed for unsupervised detection of outlier changes in city traffic dynamics.


Source: adapted from Dharmesh Tailor's fork of Leonid Keselman's fork of John Barron's website.