I am a PhD student at the Center for Humanities Computing at Aarhus University, where I explore the philosophical and computational implications of bias analysis.
My research delves into the intersection of machine learning and humanities domain knowledge, with a focus on gender biases. I aim to integrate computational experiments with deeper philosophical discussions on how biases manifest and persist.
I hold bachelor's degrees in both computer science and philosophy, as well as a master's degree in philosophy. My academic journey has also included studies in Science-Technology-Society (STS), which have deepened my interest in the ethical and social dimensions of technology and its impact on knowledge production [see CV].
I am particularly interested in how power dynamics are hidden and reinforced through technology use, and I am committed to examining and challenging these dynamics to create more equitable systems.
This presentation delves into the consequences of unfair technologies in knowledge production contexts. An examination of performance disparities in Danish Named Entity Recognition tools illuminates how biased technologies can systematically exclude and marginalize certain social groups, silencing their voices and experiences. While this study is conducted in the field of digital humanities, the consequences are potentially at play in all knowledge production contexts. Although a substantial part of the literature on the philosophy of algorithms deals with philosophical issues related to the opacity of data-driven systems, this work takes a distinct approach, directing attention towards practices to explore sources of epistemic injustice.
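To make the kind of disparity measurement concrete, here is a minimal sketch of how one might probe a Danish NER model for group-dependent recall on person names. Both the model choice (spaCy's da_core_news_sm pipeline) and the two name lists are illustrative assumptions, not the actual setup of the study.

```python
# Minimal sketch: probe a Danish NER model for group-dependent recall on
# person names. Requires: pip install spacy
#                         python -m spacy download da_core_news_sm
import spacy

nlp = spacy.load("da_core_news_sm")  # small Danish spaCy pipeline (assumed)

# Hypothetical name lists standing in for two demographic groups.
group_a = ["Mette Hansen", "Søren Jensen", "Anne Nielsen"]
group_b = ["Aisha Khan", "Mohammed Hussein", "Fatima Ali"]

def person_recall(names):
    """Share of names tagged as PER when placed in a fixed template sentence."""
    hits = sum(
        any(ent.label_ == "PER"
            for ent in nlp(f"{name} arbejder på universitetet.").ents)
        for name in names
    )
    return hits / len(names)

print("group A recall:", person_recall(group_a))
print("group B recall:", person_recall(group_b))
```

A systematic gap between the two recall scores would be one concrete trace of the silencing effect the presentation discusses: texts about one group are reliably indexed and retrievable, while texts about the other drop out of downstream analyses.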
In this work, I examine the political dimension of data-driven technologies through the lens of epistemic injustice (Fricker, 2007), focusing on silencing (Carter, 2006). While much of the philosophy of algorithms addresses the opacity of data-driven systems (e.g., Symons & Alvarado, 2022), my approach shifts attention to data science practices as sources of epistemic injustice. I will explore four specific practices in data science to derive three new distinctions: data silencing, algorithmic silencing, and application silencing. These distinctions will clarify how silencing occurs and is perpetuated by data technologies, posing challenges for both individuals and researchers.
In this workshop, we delve into how machine learning and large language models work, with a focus on ChatGPT. You will gain a fundamental understanding of how ChatGPT operates, how machines learn, and the philosophical and social challenges these technologies bring. Participants will also have the opportunity to experiment with the technologies and explore their practical limitations. The workshop can be tailored for both students and teachers, depending on your needs.
This workshop focuses on the critical issues that arise from the use of machine learning technologies. We explore important questions such as 'Where does the data come from, and who is represented?' and 'Is a high performance score always equivalent to a good solution?' Together, we will examine both problems and solutions, considering how the humanities can help us address the major challenges these technologies bring.
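To make the second question concrete, here is a toy calculation with assumed numbers showing how a high aggregate score can hide poor performance for a minority group; all counts are purely illustrative.

```python
# Toy numbers: 95% of test examples come from group A, 5% from group B.
n_a, n_b = 950, 50
correct_a, correct_b = 940, 10  # assumed per-group correct predictions

overall = (correct_a + correct_b) / (n_a + n_b)
print(f"overall accuracy: {overall:.2f}")          # 0.95 -- looks "good"
print(f"group A accuracy: {correct_a / n_a:.2f}")  # 0.99
print(f"group B accuracy: {correct_b / n_b:.2f}")  # 0.20 -- hidden by the aggregate
```

The headline score of 0.95 says nothing about the group that the model serves badly, which is exactly why a high performance score is not automatically a good solution.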