Hi there, I’m Robert!
I’m a PhD student in deep learning and vision science, working in the labs of Felix Wichmann and Matthias Bethge at the University of Tübingen and the International Max Planck Research School for Intelligent Systems (IMPRS-IS).
Why do Deep Neural Networks see the world as they do?
I’m interested in the fascinating area that lies at the intersection of Deep Learning and Visual Perception.
I want to understand why Deep Neural Networks (DNNs) see the world as they do. Visual perception is a process of inferring hypotheses about the world, hypotheses that are typically reasonably accurate. But what are the hypotheses and assumptions that DNNs make? Answering this question involves understanding the limits of their abilities (when do machines fail, and why?), the biases they incorporate (e.g. texture bias, or a reliance on local features) and the patterns underlying their successes (such as shortcut learning, or “cheating”).
When comparing DNNs to human perception, I develop quantitative methods to identify where DNNs still fall short of the remarkably robust, flexible and general representations of the human visual system; in a second step, I seek to overcome these differences. Ultimately, I am convinced that understanding why DNNs see the world as they do holds the key to making them more interpretable, robust and reliable: once we have understood DNNs, we can build DNNs that truly “understand”.
- “Shortcut learning in deep neural networks” has just been published in Nature Machine Intelligence!
- “On the surprising similarities between supervised and self-supervised models” has been selected as an “Oral” at the NeurIPS SVRHM workshop!
- My recent NeurIPS paper “Beyond accuracy: quantifying trial-by-trial behaviour of CNNs and humans by measuring error consistency” is described in a blog post on Towards Data Science: “Are all CNNs created equal?”