Event round-up: Algorithms and Allyship: Examining AI’s Approach to Queer Content

A quick look back at our webinar exploring how AI navigates queer content, and where allyship meets algorithms.

This event, hosted on 19 March as part of techUK's TechTogether campaign, explored the intersection of advanced artificial intelligence technologies and their impact on the queer community, shedding light on both the opportunities and challenges that arise in this dynamic space. Our panellists examined the ways in which AI systems perceive queer individuals and the biases that can emerge from these technologies. Read or watch a summary of the event below. 

Panellists included: 

  • Dr Harry Muncey, Senior Director of Data and Responsible AI, Elsevier 

  • Dr Eleanor Drage, Senior Research Fellow, Leverhulme Centre for the Future of Intelligence 

  • Dawn McAra-Hunter, Programme Manager, Scottish AI Alliance 

  • Alfie Potter, Technical Analyst (Enterprise Architecture and Generative AI), ITV 

Please note that the following is a summary of the event; readers are encouraged to watch the webinar for the full details of the discussion. 


Understanding Generative AI and its impact on representation 

The session began with a brief overview of how generative AI models work: by analysing vast datasets, recognising patterns, and then generating new content based on those patterns. 

For image generation, the AI model learns visual components such as colours, shapes, textures, and compositions. Instead of pulling pre-existing images, it synthesises a new one by piecing together learned attributes. Alfie Potter of ITV illustrated this process with the example of generating an image of a "purple cat wearing sunglasses": the AI doesn't retrieve an exact match but instead assembles elements based on its learned understanding of what a cat, purple fur, and sunglasses typically look like.
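For readers who want to see what this looks like in code, the snippet below is a minimal sketch of text-to-image generation using the open-source Hugging Face diffusers library and a public Stable Diffusion checkpoint. These are illustrative choices of ours, not tools named in the webinar.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained text-to-image model (downloads weights on first run).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # drop this line and torch_dtype on a CPU-only machine

# The model does not look up a stored photo of a purple cat; it denoises
# random noise towards an image that matches its learned associations for
# "purple", "cat", and "sunglasses".
image = pipe("a purple cat wearing sunglasses").images[0]
image.save("purple_cat.png")
```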

Alfie compared generative AI’s functioning to that of a chef or artist: both learn through exposure to many examples, allowing them to predict and create combinations of familiar elements. This analogy underscored the importance of training data, as AI models depend entirely on the data they are fed.

The impact of training data on AI bias 

Following Alfie's explanation, Dr Harry Muncey of Elsevier expanded on how training data shapes AI's representation of different communities, including queer individuals. They highlighted that AI models learn from their training datasets, which serve as their "worldview." If the training data lacks diverse representation or contains biases, the AI will inevitably replicate and amplify those biases. 

Key concerns raised included: 

  • AI reinforcing stereotypes due to biased training data 

  • A lack of diverse representation leading to AI-generated content that erases or misrepresents marginalised communities 

  • Overly aggressive AI safety filters that may incorrectly flag queer-related content as inappropriate 

This leads to AI models that either fail to represent LGBTQ+ individuals accurately or actively suppress their visibility in content generation. One practical response is to audit the training data before a model is ever built, as in the sketch below. 
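As a concrete illustration of that auditing step, here is a minimal sketch that counts how often a handful of identity terms appear in a text corpus. The file name corpus.txt and the term list are hypothetical placeholders for this example, not a dataset or method described by the panellists.

```python
from collections import Counter

# Illustrative term list; a real audit would be far broader and more careful.
IDENTITY_TERMS = ["lesbian", "gay", "bisexual", "transgender", "nonbinary", "queer"]

counts = Counter()
total_docs = 0
with open("corpus.txt", encoding="utf-8") as f:
    for line in f:  # treat each line as one document
        total_docs += 1
        text = line.lower()
        for term in IDENTITY_TERMS:
            if term in text:
                counts[term] += 1

# Very low counts relative to corpus size flag under-representation
# before any model is trained on this data.
for term, n in counts.most_common():
    print(f"{term}: {n} of {total_docs} documents ({n / total_docs:.2%})")
```

Counts like these are only a crude first signal: a thorough audit also needs to look at how communities are described, not just how often they are mentioned.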

Prevalent biases in AI-generated content 

Dawn McAra-Hunter of the Scottish AI Alliance continued the discussion by outlining common biases in AI-generated text and images. Some specific issues included: 

  • Stereotyping and Oversimplification: AI often portrays queer individuals based on outdated or clichéd representations, such as gay men being hyper-stylised with "killer abs" or lesbians being depicted with tattoos and piercings. 

  • Lack of Intersectionality: AI models tend to default to white, cisgender, and able-bodied representations, neglecting the diverse realities of queer identities. 

  • Oversexualisation and Misrepresentation: Trans women are often hypersexualised, while nonbinary identities are either ignored or inaccurately categorised. 

  • Misgendering in Text Generation: AI struggles with gender-neutral pronouns, frequently defaulting to binary gender assumptions, which contributes to further misrepresentation (see the probe sketched below). 

Dawn referenced a Wired article that highlighted the recurring stereotype of AI-generated queer individuals frequently having purple hair, reflecting a narrow and outdated view of queer aesthetics. 
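The misgendering issue Dawn raised can be observed directly by probing a pretrained model. The sketch below uses the Hugging Face transformers library and the bert-base-uncased checkpoint as illustrative choices (not models discussed in the webinar) to see which pronouns the model ranks highest, even when the context clearly signals they/them.

```python
from transformers import pipeline

# Load a fill-in-the-blank pipeline over a masked language model.
fill = pipeline("fill-mask", model="bert-base-uncased")

# The context explicitly signals they/them pronouns, yet models trained on
# skewed text often still rank binary pronouns highest.
prompt = "Sam uses they/them pronouns. Yesterday [MASK] went to the library."
for prediction in fill(prompt, top_k=5):
    print(f"{prediction['token_str']!r}: {prediction['score']:.3f}")
```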

The broader implications of AI bias 

Dr Eleanor Drage of the Leverhulme Centre for the Future of Intelligence contributed by discussing how AI systems, if poorly designed, can reinforce harmful narratives. She referenced an experiment at Stanford University that attempted to use AI to "predict" sexuality based on facial features, an approach rooted in flawed assumptions about identity. This example illustrated that AI bias isn't just a minor issue but can have deeply concerning ethical and social implications. 

Eleanor stressed that AI must not only recognise diverse identities but also respect the right of individuals not to be categorised. She emphasised the need for a participatory approach, where queer communities play a direct role in shaping AI development. 

Moving forward: creating more inclusive AI 

The discussion concluded with practical steps for improving AI inclusivity: 

  • Diverse Representation in AI Development: Ensuring LGBTQ+ individuals and other marginalised groups are directly involved in AI research and development. 

  • Better Training Data: Expanding datasets to include diverse voices and experiences while removing harmful biases. 

  • Policy and Regulation: Advocating for stronger regulations to ensure AI does not perpetuate discrimination. 

  • Ethical AI Design: Developing AI models that allow for ambiguity and fluidity rather than rigid categorisation. 

The key takeaway from the panel is that AI has the potential to amplify diverse voices, but only if the right people are at the table shaping how it's built. Our speakers reminded us that inclusive technology doesn't happen by accident; it grows from listening, good training data, and making space for the communities AI is meant to represent.

If you'd like to continue the conversation, please contact [email protected]




techUK's TechTogether campaign, taking place throughout March, is a collection of activities highlighting the UK technology sector's pursuit of a more equitable future. In 2025 we are exploring: inclusive AI, investing in diverse founders and entrepreneurs, the power of allyship and mentorship, and empowering young people. 

 


Harriet Allen

Programme Assistant, Technology and Innovation, techUK