Researchers at Trinity’s AI Accountability Lab secure major UK grant to investigate potential risks of AI companions
Posted on: 28 November 2025
Over a decade ago, millions of cinema fans were beguiled by the movie ‘Her’, in which a man ultimately had his heart broken by an AI-powered device. Fast forward to today, and the use of AI chatbots for ‘companionship’ is surging in popularity.
Even general-purpose chatbots such as ChatGPT are increasingly presented as ‘friends’, ‘partners’, or emotional confidants to millions worldwide, potentially fostering emotional dependence and exacerbating individuals’ vulnerabilities. As these systems are designed to imitate ever more human-like interactions, they raise urgent questions about emotional safety, dependency, the monetisation of relationships, and the blurring of boundaries in accountability and responsibility.
To investigate this, the AI Accountability Lab at ADAPT in the School of Computer Science and Statistics at Trinity College Dublin has secured a major research grant, funded by the UK Government’s AI Security Institute (AISI) in the Department for Science, Innovation and Technology, to examine the design choices of AI ‘companion’ applications.

Image: Kathryn Conrad & Digit / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/
Titled An Analysis of AI Companions: Friendship without Boundaries?, the project will look at how these systems are designed to shape our feelings and behaviours, and what happens when technology begins to blur relational boundaries. The investigation into the psychological and societal risks posed by AI companion apps aims to explore three key questions:
- How do AI companion applications use deceptive user interface design?
- How do chatbot conversations escalate user engagement and foster emotional dependency?
- How do the data collection and privacy practices of these apps function in practice?
Project lead Maribeth Rauh said: “News headlines have highlighted the concerns around people relying on AI ‘partners’ for emotional closeness and the emergent risks AI chatbot use poses to mental health. This timely project will help people understand the aspects of the systems’ design which contribute to these issues, and how we can ensure they are not exploitative and are instead built with appropriate safeguards.”
AI companions raise new questions about deceptive design and language use, consent, psychological harm, and commercial incentives. The research aims to provide one of the clearest evidence-based reports to date, helping policymakers, regulators, and consumer protection bodies understand and address these issues. The report will be available in 2026.
The AI Accountability Lab is led by Professor Abeba Birhane of the School of Computer Science and Statistics at Trinity College Dublin and housed in the Government-funded Research Ireland ADAPT Centre for AI-Driven Digital Content Technology.