Offensive Content Detection

Moderate online user-generated content easily

Overview

User-generated content has revolutionised the flow of communication between companies and their customers, and companies now leverage online content to build brand awareness and create more engaging business offerings. However, moderating this content in real time to identify abusive material is difficult and expensive without a dynamic, automated solution.
The ADAPT Centre at Trinity and Microsoft identified the need for a scalable approach to moderating user-generated content. By combining their expertise, the partners are developing a detection platform that will help guide online content moderators to offensive material.

What Problem Does It Solve / Advantages

In collaboration with Microsoft, ADAPT’s language technology and content analytics experts are developing a scalable Offensive Content Detection API that will objectively annotate large volumes of offensive content.
The detection technology will use natural language parsing, machine learning and interaction analysis to process content and assign an 'offensiveness' score. It will facilitate customisation of tolerance thresholds, allowing even obscure offensive content to be flagged for human review.
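
To give a rough sense of the scoring-and-threshold idea described above, the sketch below is a hypothetical Python example, not the ADAPT/Microsoft API: a toy term-weight lexicon stands in for a trained model, each comment receives an 'offensiveness' score, and anything at or above a configurable tolerance threshold is flagged for human review.

```python
# Hypothetical sketch of threshold-based content moderation.
# The toy lexicon below stands in for a trained model's learned weights;
# it is NOT the ADAPT/Microsoft Offensive Content Detection API.

from dataclasses import dataclass
from typing import List

# Assumed example term weights (purely illustrative).
OFFENSIVE_TERMS = {"idiot": 0.6, "stupid": 0.4, "hate": 0.5}


@dataclass
class ModerationResult:
    text: str
    score: float   # 0.0 (benign) .. 1.0 (highly offensive)
    flagged: bool  # True -> route to a human moderator


def offensiveness_score(text: str) -> float:
    """Assign a naive offensiveness score from term weights."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    total = sum(OFFENSIVE_TERMS.get(tok.strip(".,!?"), 0.0) for tok in tokens)
    return min(1.0, total)


def moderate(comments: List[str], threshold: float = 0.5) -> List[ModerationResult]:
    """Score each comment and flag those meeting the tolerance threshold."""
    results = []
    for text in comments:
        score = offensiveness_score(text)
        results.append(ModerationResult(text, score, score >= threshold))
    return results


if __name__ == "__main__":
    comments = ["Great product, thanks!", "You are a stupid idiot."]
    for r in moderate(comments, threshold=0.5):
        print(f"flagged={r.flagged} score={r.score:.2f} text={r.text!r}")
```

Lowering the threshold widens the net so that even borderline or obscure content is surfaced for human review, which mirrors the customisable tolerance described above.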

Technology and Patent Status

The Offensive Content Detection technology is currently under joint development by Microsoft, ADAPT researchers Dr. Carl Vogel and Dr. Erwan Moreau from Trinity College Dublin, and the ADAPT Design and Innovation Lab team.

The Opportunity

This technology is available for licensing or collaboration.

Researchers: Dr. Carl Vogel and Dr. Erwan Moreau
