September 2021: Wow – “Partial success in closing the gap between human and machine vision” was accepted as Oral at NeurIPS 2021, and “How Well do Feature Visualizations Support Causal Understanding of CNN Activations?” as a Spotlight!

July 2021: I just submitted my doctoral thesis and joined FAIR for a summer internship, where I’ll be working with Ari Morcos!

May 2021: "The developmental trajectory of object recognition robustness: comparing children, adults, and CNNs", a project of my Master's student, Lukas Huber, has been accepted as a talk at VSS 2021!

December 2020: Our paper "On the surprising similarities of supervised and self-supervised models" was selected as an Oral at the NeurIPS 2020 workshop on Shared Visual Representations in Human & Machine Intelligence, so I had the pleasure of talking about our work comparing human perception against networks trained with and without labels.

November 2020: Proud to have received a NeurIPS 2020 Outstanding Reviewer Award (top 10% of reviewers)!

November 2020: "Shortcut learning in deep neural networks" has just been published in Nature Machine Intelligence!

May 2020: I am honoured to have been selected for an Elsevier/Vision Research Travel Award to attend the 2020 virtual meeting of the Vision Sciences Society.

February 2020: Spektrum der Wissenschaft, the German edition of Scientific American (a popular science magazine), has published an article featuring our work on shape vs. texture.

July 2019: Our work has been featured in a Quanta Magazine article: "Where We See Shapes, AI Sees Textures".

July 2019: I attended the Computational Vision Summer School (CVSS) in Freudenstadt, Germany.

May 2019: I gave a talk at VSS 2019 about "Inducing a human-like shape bias leads to emergent human-level distortion robustness in CNNs".

May 2019: Hosted by Robbe Goris, I visited UT Austin's Center for Perceptual Systems for a few days and gave a talk about "Where humans still outperform Convolutional Neural Networks—and how to narrow the gap".

May 2019: I gave a talk at ICLR 2019 about "ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness".


March 2019: I was invited to give a talk at the AI Meetup Hamburg about "The (in)corrigible laziness of convolutional neural networks".