Interested in an internship?
TL;DR: please express interest by registering your email address via this Google Form: https://forms.gle/aGfXPGKZnHNoc9si6
Longer explanation: If you’re interested in an internship, that’s great – but internships at DeepMind require hiring headcount, and things can move quickly from “I don’t know whether I’ll have headcount” to “I have confirmed headcount and am looking to hire”. I recommend expressing interest in the Google Form above, and I’ll notify you via email if and when I have a position. Additionally, please feel free to connect with or follow me on LinkedIn, where I post about open positions; please don’t message or email to inquire about internships. Typically, interns are PhD students with a demonstrated publication track record who are roughly in the last third of their PhD.
Current / former interns and students:
I’ve had the privilege to advise, mentor and learn from the following talented interns / students:
- Thaddäus Wiedemer: intern via Google DeepMind
  Currently graduate student, Max Planck Institute for Intelligent Systems
  Project: “Video models are zero-shot learners and reasoners”, featured in The Economist and by Two Minute Papers.
- Saman Motamed: intern via Google DeepMind
  Currently graduate student, INSAIT
  Project: “Do generative video models understand physical principles?”, featured in OpenAI’s Sora 2 announcement and by Two Minute Papers.
- Fanfei Li: MSc thesis via University of Tübingen
  Now PhD student, Max Planck Institute for Intelligent Systems
  Project: LAION-C: An out-of-distribution benchmark for web-scale vision models (ICML).
- Lukas Huber: MSc thesis via University of Bern
  Now PhD student, Universities of Bern & Tübingen
  Project: The developmental trajectory of object recognition robustness (Journal of Vision & VSS Oral).
- Benjamin Mitzkus: MSc thesis & researcher via University of Tübingen
  Now software developer, Salufast
  Contributed to three projects: object detection robustness benchmark (NeurIPS workshop paper), closing the gap (NeurIPS Oral & VSS Oral), and surprising similarities between supervised and self-supervised models (NeurIPS workshop Oral).
- Jannis Ahlert: BSc thesis via University of Tübingen
  Now MSc student
  Project: How aligned are different alignment metrics? (ICLR workshop paper).
- Tizian Thieringer: BSc thesis via University of Tübingen
  Now MSc student
  Project: “Benchmarking the latest machine vision developments against human categorization performance”; contributed to closing the gap (NeurIPS Oral & VSS Oral).
- Ole Jonas Wenzel: lab rotation via GTC Tübingen
  Now PhD student, Volkswagen
  Project: “Imperceptible signals in perceptible noise”.
- Shuchen Wu: lab rotation via ETH Zürich
  Now Fellow, Allen Institute & University of Washington
  Project: “An early vision-inspired visual recognition model”.
- Patricia Rubisch: lab rotation & researcher via University of Tübingen
  Now PostDoc, Medical School Berlin
  Contributed to texture vs. shape (ICLR Oral & VSS Oral) & lab vs. crowdsourcing.
Collaborations:
If you’re interested in a potential collaboration, please feel free to reach out. I may or may not have bandwidth to collaborate at any given point in time, but would love to hear from you either way.
As an industry researcher, I don’t have a “lab” in the way academic professors do: I run lots of experiments myself (and enjoy doing so) – writing code, performing analyses, training models, writing and reading papers, debugging, and testing. As a consequence, my bandwidth for collaborations can be limited, and I may need to decline collaboration requests purely for that reason.
I appreciate it if you make clear what type of input you’re looking for from me – e.g., “meet every X weeks for an hour to advise on Y” or “run experiment Z”. This helps me weigh the request against my other priorities: I do my best to be a reliable collaborator and only commit to projects where I’m confident I can meet this expectation.