My work has been featured across various forms of media and science communication (e.g. print, online, radio, podcasts, …).
Beyond neural scaling laws: beating power law scaling via data pruning:
- AI Coffee Break with Letitia: Beyond neural scaling laws – Paper Explained
Partial success in closing the gap between human and machine vision:
- Machine Learning for Science: Do machines see like humans? They are getting closer
Shortcut Learning in Deep Neural Networks:
- Frankfurter Allgemeine Zeitung: Hier irrt der Algorithmus ("Where the algorithm errs")
- The Gradient: Shortcuts: How Neural Networks Love to Cheat
- Tech Xplore: Exploring the notion of shortcut learning in deep neural networks
- Knowable Magazine: Why some artificial intelligence is smart until it’s dumb (also in The Week)
- Bloomberg Businessweek: Goodhart’s Law Rules the Modern World. Here Are Nine Examples
- Deeplearning.ai: The Batch newsletter
- Towards Data Science: Shortcut Learning, The Reason ML Models Often Fail in Practice
- Underrated ML podcast: Energy functions and shortcut learning
ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness:
- Quanta Magazine: Where We See Shapes, AI Sees Textures
- Spektrum der Wissenschaft (the German edition of Scientific American): Form versus Textur ("Shape versus Texture")
- The Register: Object-recognition AI – the dumb program’s idea of a smart program: How neural nets are really just looking at textures
- Two Minute Papers: Do Neural Networks Need To Think Like Humans?