About

Hi there, I’m Robert!

I’m a Research Scientist at Google DeepMind (formerly Google Brain), located in Toronto.

During my PhD at the International Max Planck Research School for Intelligent Systems (IMPRS-IS) and the University of Tübingen, I was fortunate to work with Felix Wichmann, Matthias Bethge and Wieland Brendel. In 2021, I was a Research Intern at FAIR / Meta AI, where I worked with Ari Morcos and Surya Ganguli.

My research has been honored with the ELLIS PhD Award, an Outstanding Paper Award at NeurIPS, and a total of eight "Orals" at ICLR, NeurIPS & ICML (leading machine learning conferences) and VSS (the leading human vision conference).

Why do Deep Neural Networks see the world as they do?

I’m interested in the fascinating area that lies at the intersection of Deep Learning and Visual Perception.

I want to understand why Deep Neural Networks (DNNs) see the world as they do. Visual perception is a process of inferring (typically reasonably accurate) hypotheses about the world. But what are the hypotheses and assumptions that DNNs make? Answering this question involves understanding the limits of their abilities (when do machines fail, and why?), the biases that they incorporate (e.g. texture bias, a reliance on local features) and the underlying patterns behind some of their successes (such as shortcut learning, or "cheating").

When comparing DNNs to human perception, I develop quantitative methods to identify where DNNs still fall short of the remarkably robust, flexible and general representations of the human visual system, and I then seek to overcome these differences. Ultimately, I am convinced that understanding why DNNs see the world as they do holds the key to making them more interpretable, robust and reliable: once we have understood DNNs, we can build DNNs that truly "understand".

Latest news:

Click here for more news.