About

Hi there, I’m Robert!

I’m a Postdoctoral Researcher working in the labs of Felix Wichmann and Wieland Brendel at the University of Tübingen. I recently completed my PhD (advised by Felix Wichmann and Matthias Bethge) at the International Max Planck Research School for Intelligent Systems (IMPRS-IS) and the University of Tübingen.

During the summer of 2021, I was a Research Intern at FAIR / Meta AI, working with Ari Morcos.

Why do Deep Neural Networks see the world as they do?

I’m interested in the fascinating area that lies at the intersection of Deep Learning and Visual Perception.

I want to understand why Deep Neural Networks (DNNs) see the world as they do. Visual perception is a process of inferring (typically reasonably accurate) hypotheses about the world. But what are the hypotheses and assumptions that DNNs make? Answering this question involves understanding the limits of their abilities (when do machines fail, and why?), the biases they incorporate (e.g. texture bias, or a reliance on local features), and the underlying patterns behind some of their successes (such as shortcut learning, or “cheating”).

When comparing DNNs to human perception, I develop quantitative methods to identify where DNNs still fall short of the remarkably robust, flexible and general representations of the human visual system, and in a second step I seek to overcome these differences. Ultimately, I am convinced that understanding why DNNs see the world as they do holds the key to making them more interpretable, robust and reliable: Once we have understood DNNs, we can build DNNs that truly “understand”.
