Hi there, I’m Robert!
I’m a Postdoctoral Researcher working in the labs of Felix Wichmann and Wieland Brendel at the University of Tübingen. I recently completed my PhD (advised by Felix Wichmann and Matthias Bethge) at the International Max Planck Research School for Intelligent Systems (IMPRS-IS) and the University of Tübingen.
Why do Deep Neural Networks see the world as they do?
I’m interested in the fascinating area that lies at the intersection of Deep Learning and Visual Perception.
I want to understand why Deep Neural Networks (DNNs) see the world as they do. Visual perception is a process of inferring (typically reasonably accurate) hypotheses about the world. But what are the hypotheses and assumptions that DNNs make? Answering this question involves understanding the limits of their abilities (when do machines fail, and why?), the biases they incorporate (e.g. texture bias, or a reliance on local features), and the underlying patterns behind some of their successes (such as shortcut learning, or “cheating”).
When comparing DNNs to human perception, I develop quantitative methods to identify where DNNs still fall short of the remarkably robust, flexible and general representations of the human visual system; in a second step, I seek to overcome these differences. Ultimately, I am convinced that understanding why DNNs see the world as they do holds the key to making them more interpretable, robust and reliable: once we have understood DNNs, we can build DNNs that truly “understand”.
- February 2022: “The bittersweet lesson: data-rich models narrow the behavioural gap to human vision” is accepted as a talk at VSS 2022 in Florida!
- November 2021: Together with AI researcher Elisabeth André and science fiction researcher Moritz Ingwersen, I had the pleasure of participating in a panel discussion on “Artificial Stupidity? On accidents and deceptions of technical intelligence” at the DHM Dresden, moderated by Ariana Dongus. My first in-person event after a long, long time!
- September 2021: Wow – “Partial success in closing the gap between human and machine vision” was accepted as Oral at NeurIPS 2021, and “How Well do Feature Visualizations Support Causal Understanding of CNN Activations?” as a Spotlight!
Click here for more news.