I am a third-year PhD student at the University of Mannheim, supervised by Professor Markus Strohmaier. I work at the intersection of computational social science and responsible NLP, studying how social biases manifest across the model pipeline, from pre-training data and model internals to prompting strategies. I am particularly interested in how LLMs represent (and sometimes underrepresent or misrepresent) different socio-demographic groups.

Currently, I am investigating how demographic patterns and value-laden signals in the pre-training data connect to model outputs, with the goal of developing language technologies that are more diverse, equitable and socially responsible.

Previously, I obtained my M.Sc. (with distinction) in Computer Science from RWTH Aachen University, where I also worked as a research assistant.

News

Jan 2026: Excited that our paper "Do Psychometric Tests Work for Large Language Models? Evaluation of Tests on Sexism, Racism, and Morality", led by my former master’s student Jana Jung and developed from her thesis under my supervision, was accepted to EACL Main 2026!
Aug 2025: Our paper The Prompt Makes the Person(a): A Systematic Evaluation of Sociodemographic Persona Prompting for Large Language Models was accepted to EMNLP Findings 2025!
Sep 2024: Our paper Local Contrastive Editing of Gender Stereotypes was accepted to EMNLP Main 2024!
May 2024: Our symposium "Large Language Models in Psychological Research" has been accepted for the DGPS/ÖGP Congress! I’ll be presenting alongside Dirk Wulff, Rui Mata, Marcel Binz, and Zakir Hussain.
May 2024: Our paper Properties of Group Fairness Measures for Rankings has been accepted for publication in Transactions on Social Computing!
Jul 2023: My conference presentation "SensePOLAR: Word sense aware interpretability for pre-trained contextual word embeddings" was awarded one of the two best parallel talks at IC2S2 2023!