ida marie s. lassen

I am a PhD student at the Center for Humanities Computing at Aarhus University, where I explore the philosophical and computational implications of bias analysis.

My research sits at the intersection of machine learning and humanities domain knowledge, with a focus on gender bias. I aim to integrate computational experiments with deeper philosophical discussions of how biases manifest and persist.

I hold bachelor's degrees in both computer science and philosophy, as well as a master's degree in philosophy. My academic journey has also included studies in Science-Technology-Society (STS), which have deepened my interest in the ethical and social dimensions of technology and its impact on knowledge production [see CV].

I am particularly interested in how power dynamics are hidden and reinforced through technology use, and I am committed to examining and challenging these dynamics to create more equitable systems.

talks & workshops

[talk]

Epistemic Consequences of Unfair Tools

This presentation examines the consequences of unfair technologies in knowledge production contexts. An examination of performance disparities in Danish Named Entity Recognition tools illuminates how biased technologies can systematically exclude and marginalize certain social groups, silencing their voices and experiences. While this study is situated in digital humanities, the consequences are potentially at play in all knowledge production contexts. Whereas a substantial part of the literature on the philosophy of algorithms deals with the opacity of data-driven systems, this work takes a distinct approach by directing attention towards practices as sources of epistemic injustice.

[talk + workshop]

Data science silencing

Exploring epistemic injustices in data science practices

In this work, I examine the political dimension of data-driven technologies through the lens of epistemic injustice (Fricker, 2007), focusing on silencing (Carter, 2006). While much of the philosophy of algorithms addresses the opacity of data-driven systems (e.g., Symons & Alvarado, 2022), my approach shifts attention to data science practices as sources of epistemic injustice. I will explore four specific practices in data science to derive three new distinctions: data silencing, algorithmic silencing, and application silencing. These distinctions will clarify how silencing occurs and is perpetuated by data technologies, posing challenges for both individuals and researchers.

Talks and workshops for high schools

I offer lectures and workshops specifically tailored for high school education. Click here to learn more about how I can contribute to your teaching with exciting and relevant topics in AI and technology combined with philosophical perspectives.
[talk + workshop]

ChatGPT

An Introduction to the Technology, Its Opportunities, and Challenges

In this workshop, we delve into how machine learning and large language models work, with a focus on ChatGPT. You will gain a fundamental understanding of how ChatGPT operates, how machines learn, and the philosophical and social challenges these technologies bring. Participants will also have the opportunity to experiment with the technologies and explore their practical limitations. The workshop can be tailored for both students and teachers, depending on your needs.

[workshop]

Humanities Perspectives on Artificial Intelligence

Blind Spots and Key Questions

This workshop focuses on the critical issues that arise from the use of machine learning technologies. We explore important questions such as 'Where does the data come from, and who is represented?' and 'Is a high performance score always equivalent to a good solution?' Together, we will explore both problems and solutions, examining how the humanities can help us address the major challenges these technologies bring.

publications

Epistemic Consequences of Unfair Tools

Lassen, I. M. S., Kristensen-McLachlan, R. D., Almasi, M., Enevoldsen, K., & Nielbo, K. (2024)
In Digital Scholarship in the Humanities, 39 (1), 198-214.

Detecting intersectionality in NER models: A data-driven approach

Lassen, I. M. S., Almasi, M., Enevoldsen, K., & Kristensen-McLachlan, R. D. (2023)
In Proceedings of the 7th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature (pp. 116-127).

Persistence of Gender Asymmetries in Book Reviews Within and Across Genres

Lassen, I. M. S., Moreira, P. F., Bizzoni, Y., Thomsen, M. R., & Nielbo, K. L. (2023)
In CEUR Workshop Proceedings (Vol. 3558, p. 14).

feel free to reach out for talks, collaborations,
or any questions about my research

idamarie@cas.au.dk