Please register for the Gender and AI Symposium via the Eventbrite platform here.
Full Programme
Date: 3rd June 2026
Location: Neill Lecture Theatre, Trinity Long Room Hub, Trinity College, University of Dublin
Hosts: Dr Jenny Carla Moran (https://www.jcmoran.com/), partnered with SOHAM: the Joint TCD-TU Dublin Centre for the Sociology of Humans and Machines (https://www.tcd.ie/soham/), and the Counter Data Lab TCD (https://counterdatalab.com/)
SCHEDULE FOR THE DAY
9:00-9:30 Registration
9:30-9:40 Welcome Address
9:40-10:30 Keynote Address – Prof Louise Amoore
10:30-10:45 Coffee Break
10:45-12:00 Panel 1 – WTF is GPT?
12:00-13:15 Panel 2 – AI Apprehends Humans
13:15-14:15 Lunch and Poster Session (Posters in Ideas Space, Trinity Long Room Hub)
14:15-15:30 Panel 3 – Humans Apprehend AI
15:30-16:00 Plenary Speaker – Dr Matthew Abbey
16:00-16:30 Coffee/Access Break
16:30-17:20 Invited Conversation – Feminism Confronts the AI and ICT industries
17:20-17:30 Closing Remarks
FURTHER SESSION INFORMATION
Keynote Address
Prof Louise Amoore, Durham University
Panel 1: WTF is GPT?: Grappling with the Architecture, Data, and Symbolic Power of Generative AI
Dr Madeleine Steeds, University College Dublin
Further details TBC.
Dr Shannon Philip, University of Cambridge
Title: “Built Like a Man: Masculine Privilege and Everyday Sexism in Generative AI”
Abstract: This is a title suggested to me by ChatGPT. It was in response to my prompt for a ‘catchy’ title that brought together my long-term research interests in masculinities, patriarchal privilege, heteronormativity and men’s everyday sexisms towards women. ChatGPT also gave me options to ‘refine’ this title by changing the ‘tone’ to make it more ‘radical’ or ‘policy-oriented’ or ‘sociological’. As a queer-feminist male-presenting sociologist of gender and sexuality working on postcolonial India and South Africa, the irony of this process is not lost on me. Rather, I want to use this ChatGPT title as a point of departure, to reflect on the ways in which Generative AI ‘does’ gender, and in particular an abstracted masculinity that appears to be disembodied, displaced and without any critical reflection of power. Indeed, as has long been argued by feminist and postcolonial scholars, the power of masculinity lies in its ‘unmarked’ status, where it appears to be ‘value-neutral’ or ‘normal’, similar to ‘straightness’ and ‘whiteness’. In the face of this ‘absence of masculinity’, to me as a queer-feminist scholar of colour, the need for positionality and standpoint comes to take on a sharper analytical focus. I use this alleged ‘absence of masculinity’ to build a critique of Generative AI as both ‘built like a heteropatriarchal man’ and ‘built by men’. Gen AI attempts to create unified scripts of gender and masculinity that reassert and reproduce patriarchal power in insidious ways by visibilising and invisibilising gendered inequalities simultaneously. In so doing, I build on existing feminist scholarship on Gen AI, ‘Tech Bros’ and postcolonial digital inequalities, to think about the ways in which the masculine bias of Gen AI could be processually studied and unpacked.
Dr James Beirne, Maynooth University
Title: “A feminist sociology of knowledge account of the architecture of large language models”
Abstract: Three decades after Dorothy Smith’s groundbreaking feminist sociology of knowledge, the advent of large language models (LLMs) vindicates her theory. While many of her colleagues claimed that sociology should aim at objective knowledge, Smith showed that embodied knowers are always-already implicated in knowledge, and provided a model to demonstrate that efforts to strip knowers from knowledge would instead produce an abstracted, reified, patriarchal ideology that obscured, rather than revealed, reality. LLMs implement a software architecture which maps almost directly onto Smith’s model. Reducing massive corpora of texts into complex statistical abstractions, they crucially rely on fixed representations of language to produce apparently novel outputs. Reducing meaning to abstract ‘tokens’ and purely quantitative relationships between them, LLMs unwittingly replicate the model of ideology which Smith identified more than thirty years before. This, clearly, is a strong vindication of Smith’s theory – with stark implications. Going beyond Hicks, Humphries & Slater’s characterisation of LLMs as ‘bullshit’, this paper suggests that the architecture of large language models is intrinsically flawed. Stripping knowledge of its embodied relational context, pretending to an ontologically impossible objectivity, replacing meaning and experience with fixed numerical abstractions, and obfuscating reality with ideology, large language models are an inherently oppressive and patriarchal technology. No account of the so-called ‘advent of artificial intelligence’ can be complete without seriously reckoning with this longstanding stalwart of feminist theory.
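For readers less familiar with the architecture the abstract critiques, the minimal sketch below (an editorial illustration, not part of Dr Beirne’s paper) shows the reduction it describes: words become fixed integer tokens, and tokens become vectors whose only relations to one another are quantitative. The toy vocabulary and random vectors are invented for demonstration; real systems use learned embeddings over far larger vocabularies.

```python
# Toy illustration of the "fixed numerical abstractions" the abstract describes.
# Real tokenisers (e.g. byte-pair encoding) are more elaborate; this vocabulary
# and these vectors are invented purely for demonstration.
import numpy as np

vocab = {"the": 0, "knower": 1, "is": 2, "embodied": 3, "<unk>": 4}

def tokenise(text: str) -> list[int]:
    # Map each word to a fixed integer ID; anything unknown collapses to <unk>.
    return [vocab.get(word, vocab["<unk>"]) for word in text.lower().split()]

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocab), 8))  # one fixed vector per token

ids = tokenise("The knower is embodied")
vectors = embeddings[ids]   # meaning is now only positions in a vector space
print(ids)                  # [0, 1, 2, 3]
print(vectors.shape)        # (4, 8): only quantitative relations remain
```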
Ms Meredith Veit, Business & Human Rights Resource Centre
Title: “Gender-related harms in the AI data supply chain”
Abstract: The Business & Human Rights Resource Centre (BHRRC) systematically tracks corporate human rights harms worldwide, including in the technology sector. The BHRRC has documented allegations of labor abuses across AI Business Process Outsourcing (BPO) companies such as Scale AI, Remotasks, Majorel, Appen, Telus, Teleperformance, Accenture, Toloka, Hive Micro, Testable Minds, and PAIDERA. These BPOs supply critical data labeling services to AI companies including OpenAI, Anthropic, Google, Meta, and Stability AI, as well as firms working in computer vision, robotics, and smart devices. Despite repeated outreach, many AI companies remain unresponsive to inquiries about labor conditions, reflecting economic, legal, and reputational incentives to obscure human labor behind AI systems. This opacity reinforces exploitative business models, often affecting women and other marginalized workers in the Global South. Gender-disaggregated information is scarce, which is why more research into these corporate practices is needed. We would like to propose, in partnership with UNI Global Union, carrying out a survey targeting BPO companies and AI clients to collect more information on wages, protections, and labor conditions, including gender breakdowns, in support of gender-sensitive human rights due diligence in the AI supply chain.
Panel 2: AI Apprehends Humans: Algorithmic Classification and/as Gender-Detector
Dr Kylo Thomas, The Love Tank CIC
Title: “‘If you Exist in the Gaps, do you Even Exist at All?’: Carceral AI, Medical Surveillance, and the Clinical Consequences for Trans People in UK Healthcare”
Abstract: Forthcoming.
Mr Christoffer Koch Andersen, University of Cambridge
Title: “The Politics of Classification: Unlearning Algorithmic Classification of Binary Gender”
Abstract: Grand promises are exponentially generated surrounding the transcending and revolutionary capacities of ‘AI’, yet at violent peril to trans and gender nonconforming people whose gendered embodiments sway from and do not correspond to the binary gender classification consistently rendered and reproduced by the logic of AI technologies (Danielsson 2023; Keyes 2018; Scheuerman et al. 2019; Scheuerman et al. 2021; Sebastian 2020; Quinan and Hunt 2022). This unfolding reality not only has grave societal implications for the liveability of trans people, but it further poses entrenching epistemological consequences for whose lives we value, and, in return, whose lives become situated at a new algorithmic frontier of surveillance, exclusion and impossibility. In contrast to these violent developments, what happens if we move beyond critique towards the politics of unlearning the classification of gender as binary as presented and predicted by AI technologies? How can we understand this newfound political order of ‘AI’ beyond what it appears to be? To address these questions, in this talk, I attend to the politics of classification and consider what unveils when we uncover the trajectories and map the genealogies of binary gender classification located in AI technologies to unlearn the automatic and encoded renderings and assumptions of binary gender that disproportionately affect gendered minorities. By utilising trans bodies as the point of reference to expose the contingent structures of algorithmic classification, we can unveil the sociopolitical development of binary gender classification and work towards unlearning these binary patterns. From this critical interception, alternative forms of algorithmic orders and life emerge – forms of life that do not dismiss the possibilities of trans lives but read them alongside algorithmic technologies.
Ms Marilou Niedda, Utrecht University
Title: “To Combat Algorithmic Bias in AI Systems with Feminist Epistemology of Science”
Abstract: This paper argues that cases of algorithmic discrimination affecting marginalised groups must be theoretically tackled and empirically examined through feminist epistemology of science. Algorithmic bias is often understood as the result of flawed computational processes within machine-learning systems, producing “unfair” outcomes, particularly for women and people of colour. I contend that feminist theories of science (Haraway, 1989; Harding, 1986; and many more) provide a robust framework for analysing and addressing these issues. First, I articulate how feminist epistemology challenges positivist notions of objectivity and the presumed value-neutrality of mathematically driven disciplines. Algorithmic bias is frequently framed as a technical error that can be corrected through improved calculation, thereby granting AI practitioners epistemic authority as the primary agents capable of fixing these systems. This framing creates the illusion that technical expertise alone ensures epistemic robustness, while obscuring the social and normative dimensions embedded in algorithmic design and deployment. By critiquing this understanding of objectivity and technical epistemic authority, feminist epistemology calls for a situated and socially contextualised account of AI knowledge production. This perspective foregrounds critical questions concerning how datasets are constructed, why AI systems are deployed in specific contexts, and who designs and uses them. Algorithmic calculations must therefore be analysed within broader socio-technical systems, with attention to epistemic power relations among different knowers. I illustrate these two arguments through my qualitative empirical research conducted at a national AI research institute in the UK. I highlight the importance of including non-dominant perspectives in public-sector AI design, a context in which algorithmic bias frequently arises. However, this holds only if epistemic diversity is accompanied by (i) a rightful ethical reflexivity and (ii) resistance to a technically oriented, automated, and productivist mode of reasoning (which is extremely difficult for individual knowers to change at an institutional level).
Ms Andrea Heaney, Technological University Dublin
Title: “Addressing Gender Bias in AI Healthcare Models”
Abstract: Artificial Intelligence systems are rapidly expanding in the healthcare sector, with applications ranging from prioritising scans and drug discovery to personalised medicine and diagnostic support. However, these systems can inherit and even amplify existing social and structural biases, meaning AI can be unfairly biased against disadvantaged or underrepresented groups, including along lines of gender. Gender bias is a prominent issue in AI models: recruitment models have been shown to favour male applicants due to unbalanced datasets (Sogancioglu et al., 2023; Koivunen et al., 2019); facial recognition models show drastically higher misclassification rates for Black women than for white men (Buolamwini, 2018); and Natural Language Processing (NLP) models perpetuate gendered societal assumptions, such as ‘fight’ or ‘overpower’ being more associated with men, while ‘giving’ and ‘emotional’ are more associated with women (Caliskan et al., 2022). These biases are especially consequential in healthcare contexts. Historically, women (and those seen as women) have been underrepresented in medical research, despite differences in the prevalence and presentation of certain conditions (Merone et al., 2022; National Academies Press, 2010; Bates, 2023; Edwards et al., 2023). When healthcare models are trained on data reflecting these exclusions, and do not account for cases where differences in care are necessary, the result can be inequitable and even harmful AI models for women. Cardiovascular disease is a salient example, where the underrepresentation of women has resulted in biased cardiovascular imaging models (Assen et al., 2024). Understanding gender bias in healthcare models is a complex, multidimensional issue and requires approaches that go beyond algorithmic solutions alone. Improving data quality, understanding historical and societal contexts, and interdisciplinary collaboration between healthcare and data science are all vital in creating more equitable healthcare models for women.
Poster Session
The free-roaming poster session takes place over lunchtime in the Ideas Space (upstairs) of the Trinity Long Room Hub. All Symposium attendees are welcome to peruse the displayed posters and talk with the poster presenters at their leisure.
In no particular order, the poster presentations are as follows:
Dr Aisha Sobey, University of Cambridge
“White, topless and male: Generative AI representation of larger bodies”
Ms Alina Berry, Technological University Dublin
“Designing for Persistence: The Role and Potential of AI-supported Learning for the Retention of Women in Computing Education”
Ms Camila Contreras McKay, Trinity College, University of Dublin
“Coding Peace without Gender? A Feminist Peace Research Review of AI for Peace”
Ms Nana Mgbechikwere Nwachukwu, Trinity College, University of Dublin, and Ms Ekaterina Uetova, Technological University Dublin
“He Built It, She Questioned It: How Media Assigns AI Expertise by Gender and How It Affects Society”
Mrs Ninell Oldenburg, University of Copenhagen
“Dismantling Masculine Epistemologies in AI Alignment Research”
Dr Hao Cui, Trinity College, University of Dublin
“Gender Bias in Perception of Human Managers Extends to AI Managers”
Ms Tsitsi Prudence Humanikwa, Lupane State University
“Bridging Africa’s Digital Gender Divide for equitability in the Age of Artificial Intelligence”
Dr Patricia Gibson, Institute of Art, Design and Technology, Dún Laoghaire
“Teaching with AI: A Posthuman Perspective”
Panel 3: Humans Apprehend AI: Gender Stereotyping and Human-AI Interactions
Mrs Vittoria Benfatto, University of Valle d'Aosta, and Mrs Giulia Coppo, University of Padua
(NB: Mrs Benfatto and Mrs Coppo are presenting a co-authored study by Vittoria Benfatto, Giulia Coppo, and Claudio Riva)
Title: “When AI Speaks as a Political Subject: Gender, Anthropomorphisation, and Bias in Human–AI Interaction”
Abstract: Generative AI systems are increasingly employed in political communication and marketing practices (Battista, 2024), yet their role in shaping gendered political identities remains underexplored (Amoore, 2020). This paper presents an exploratory, work-in-progress study that investigates how gendered political subjectivities are co-constructed through human–AI interaction (Duan et al., 2025) in simulated political marketing scenarios. The study addresses the following research question: How does a generative AI model reproduce, negotiate, or reconfigure gendered dispositions when impersonating political personas across different gender attributions? More specifically, the research explores whether human–machine interactions with gendered AI political subjects reinforce traditional gendered schemas in the context of political communication. Empirically, the study involves students enrolled in a Master’s programme in Political Marketing at the University of Padua. Participants interacted with a generative AI model (e.g., ChatGPT) by prompting it to impersonate a political candidate during a simulated “first client interview”, a common practice in political marketing agencies. Two political personas were constructed: one reflecting attributes and narrative schemas typically associated with masculine-coded domains and one connected to feminine-coded domains. Each persona was instantiated in both masculine and feminine forms, while keeping constant political orientation, education, professional background, family situation, social class, geographical origin, hobbies, and reference figures. All interactions were conducted in Italian, a gender-marked language, allowing for close attention to culturally embedded assumptions in the AI’s representations of political identity. The dataset consists of the AI–user conversations (Balmer, 2023; Rama & Airoldi, 2025) and reflexive diaries in which participants critically reflected on the influence of model training data, platform affordances, and prompt construction. The symposium presentation will combine methodological reflection with preliminary findings to highlight how processes of AI anthropomorphisation mediate the reproduction of gender norms and biases in political communication (Nadeem et al., 2022). Furthermore, the paper aims to contribute to scholarship on Human-Machine Communication and the sociotechnical reproduction of biases in the underexplored context of political marketing.
Dr Ruiqing Han, Trinity College, University of Dublin
(NB: Dr Han is presenting a study co-authored by Ruiqing Han, Hao Cui, and Taha Yasseri)
Title: “Text vs. Face: Divergent Effects of Competence Signals on Gender Bias in AI Perception”
Abstract: This research investigates whether competence attribution reduces gender bias in evaluations of AI managers. Previous research identified a “gender-outrage effect” whereby female managers receive harsher evaluations than male counterparts after delivering unfavourable news. Our theoretical framework integrates Heilman’s lack of fit model and Eagly’s role congruity theory, examining how perceived incongruity between female gender roles and leadership qualities creates prejudice that can potentially be offset by competence signals. We examine how competence signals affect perceived fit between gendered AI and leadership roles. Our study employed two parallel experiments with identical 3 × 2 × 2 designs (AI gender: male/female/unspecified; competence: high/low; outcome: favourable/unfavourable) with a total of 2,505 participants. The experiments differed only in representation: one used text descriptions while the other employed visually generated AI manager faces produced through reverse correlation techniques. Key findings reveal that competence signals effectively reduced gender bias in text-based interactions, with lower-competence AI receiving more negative feedback when delivering unfavourable news regardless of gender. Interestingly, in image-based experiments, male AI faces were punished more for unfavourable results and rewarded less for favourable outcomes compared to female AI, suggesting anthropomorphism may backfire for male representations. Results demonstrate that task-relevant competence signals can effectively mitigate bias without requiring anthropomorphic features, providing insights for designing bias-resistant AI systems.
Ms Faezeh Fadaei, University College Dublin
(NB: Ms Fadaei is presenting a study co-authored by Faezeh Fadaei, Jenny Carla Moran and Taha Yasseri)
Title: “Gender in the Absence of Bodies: Gender Performance and Social Patterns in AI Systems”
Abstract: Generative artificial intelligence and large language models (LLMs) are increasingly deployed in interactive settings, yet we know little about how gendered identity performance develops when such systems interact within large-scale networks. Our study draws on data from Chirper.ai, a social media platform composed entirely of autonomous AI chatbots. The dataset comprises over 70,000 agents, approximately 140 million posts, and the evolving followership network over one year. Based on agents’ text production, weekly gender scores are assigned to each agent, enabling an analysis of how gender performance emerges and shifts over time. The findings indicate that gender performance among LLM agents is fluid rather than fixed, with individual agents shifting across the gender spectrum over time. Despite this fluidity, the network exhibits persistent gender-based homophily, as agents consistently connect to others performing gender similarly. Further analysis distinguishes between social selection, in which agents form ties with similar others, and social influence, in which agents’ gender performance shifts in response to their followees. Consistent with patterns observed in human social networks, both mechanisms contribute to the emergence of structured gender-based sorting. These findings suggest that, even in the absence of bodies, cultural entrainment of gender performance leads to gender-based sorting and allows culturally embedded norms of gendered language drawn from human training data to form durable social patterns. This shows that gender in AI is more than simply a way of writing. It becomes a meaningful social signal that shapes how agents interact with one another and with humans. It is therefore essential to consider how AI’s gender performance becomes recognisable, interpretable, and socially perceived in contemporary human–AI interaction.
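As a rough illustration of the kind of homophily comparison the abstract describes, the sketch below (an editorial illustration using simulated agents and ties, not the authors’ pipeline or the Chirper.ai data) tests whether connected agents differ less in gender score than randomly paired ones.

```python
# Minimal homophily sketch with simulated data: if connected agents are more
# similar in gender score than random pairs, the network exhibits gender-based
# homophily. Agents, scores, and ties here are all invented for demonstration.
import random

random.seed(42)
agents = {i: random.uniform(-1, 1) for i in range(200)}  # gender score per agent

# Simulate a followership network with homophily deliberately built in.
ties = [(a, b) for a in agents for b in random.sample(list(agents), 5)
        if a != b and abs(agents[a] - agents[b]) < 0.6]

def mean_gap(pairs):
    # Average absolute difference in gender score across a set of pairs.
    return sum(abs(agents[a] - agents[b]) for a, b in pairs) / len(pairs)

random_pairs = [tuple(random.sample(list(agents), 2)) for _ in range(len(ties))]
print(f"gap across ties:         {mean_gap(ties):.3f}")
print(f"gap across random pairs: {mean_gap(random_pairs):.3f}")  # larger gap => homophily
```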
Dr Sara O’Sullivan, University College Dublin
Title: “Teaching Gender Bias in AI: A Classroom Experiment”
Abstract: When ChatGPT launched in November 2022, many faculty responded with concerns about academic integrity and student learning. Rather than banning AI tools or returning to examination-based assessment, last year I included an experiment in my undergraduate Sociology of Gender module designed to highlight gender bias in LLM-generated recommendation letters for male and female students. The first step was a lecture examining how LLMs function and highlighting that they are trained on data containing gendered norms about occupations, abilities, and interests. Following this, students conducted an experiment using prompts inspired by Wan et al.’s (2023) research. They generated four reference letters using their chosen LLM: for a female student, a male student, a female sociology student, and a male engineering student. They then critically analysed the outputs for gendered language patterns. Results were striking. Students used ChatGPT, Copilot, Perplexity, and DeepSeek, with consistent patterns across platforms. While letters initially appeared similar, students identified systematic differences: male students (particularly “Seamus”) were consistently positioned as leaders and authority figures, while female students (“Roisin”) were described as empathetic and reliable. Students connected these patterns to course content on gender stereotypes and norms. Interestingly, some students suggested discipline-specific prompts reduced bias, though all generated letters still showed gendered differences. This exercise demonstrates how students can be supported to develop critical AI literacy. Students have the opportunity to test and see for themselves whether the outputs from their chosen LLM contain gender biases. By having students discover algorithmic bias firsthand, the assignment reinforces the syllabus message that Generative AI might provide biased answers, particularly problematic in a gender module! The exercise also taught them to appropriately cite AI use in line with course policy and institutional requirements. Drawing on scholarship of teaching and learning in higher education, I argue that critical engagement with AI tools, rather than prohibition, better serves pedagogical goals of developing students’ analytical capacities and awareness of structural inequalities embedded in technological systems.
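For concreteness, the four-condition prompt design might be sketched as follows. The prompt wording and the commented-out query_llm helper are editorial assumptions for illustration, not the module’s actual materials (which were adapted from Wan et al., 2023).

```python
# Hypothetical reconstruction of the classroom exercise's four-prompt matrix.
# The real prompts used in the module may differ; student names are those
# reported in the abstract. `query_llm` is a placeholder for whichever chat
# interface a student chooses (ChatGPT, Copilot, Perplexity, DeepSeek, ...).
names = {"female": "Roisin", "male": "Seamus"}

conditions = [
    ("female", None), ("male", None),                 # gender only
    ("female", "sociology"), ("male", "engineering")  # discipline-specific variants
]

def build_prompt(gender: str, discipline: str | None) -> str:
    field = f" studying {discipline}" if discipline else ""
    return (f"Write a reference letter for {names[gender]}, "
            f"a {gender} undergraduate student{field}.")

for gender, discipline in conditions:
    prompt = build_prompt(gender, discipline)
    print(prompt)
    # letter = query_llm(prompt)  # students then analyse letters for gendered language
```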
Plenary Speaker
Dr Matthew Abbey, University of Leeds
Title: “The Fantasy of Recognition: Surveillance, Data, Value, and the Human-Machine Non-Relationship”
Abstract: Personalised surveillance tools such as smart home devices and mobile apps now permeate everyday life. While feminist scholarship is increasingly exploring the role of affect in human–machine relationships, there has yet to be a sustained focus on the value attributed to the data moving between such actors. Drawing on the psychoanalytic notion of projection, this paper examines how data itself becomes invested with personal, and not merely economic, value. Data is important to consider on the level of affect because it underpins the very transfer of meaning that structures human–machine relationships. To begin, I explore how value is assigned to data by engaging Lauren Berlant’s (2011, 2022) concepts of cruel optimism and inconvenience. While cruel optimism illuminates how investments in intelligent or personalised machines may impede human flourishing, it is increasingly evident that their supposed convenience often transforms the provision of data into a necessary inconvenience. Following this, I engage Sara Ahmed’s (2006) queer phenomenology to offer a perspective on how we might differently orient ourselves not only toward these machines but also toward the data mediating our interactions with them. In a social landscape dominated by algorithms, projecting value onto data within human–machine relationships imbues the latter with the phantasmic ability to recognise the former. Nonetheless, since machines can only interpret us as data rather than as subjects, gaining any sense of recognition from them will always fail, rendering the very notion of a human–machine relationship a misnomer. To challenge the extent to which we project value onto data, I argue that our engagements with machines should become more machinic, an unavoidable inconvenience of AI that we might queer by refusing to participate in the fantasy of recognition.
Invited Conversation: Feminism Confronts the AI and ICT industries
Our final session will be hosted by Dr Fiona McDermott and Dr Laure De Tymowski of the newly launched Counter Data Lab at Trinity College, University of Dublin. Dr McDermott, Dr De Tymowski, and their invited collaborators will take a practice-based approach to understanding and contesting the grounded processes and impacts of AI/ICT developments here in Ireland. Further information on this session is forthcoming and will be added to the online programme in due course.