AI, bias and experiments: how Women in News is tackling tech’s inbuilt stereotypes
Issues surrounding bias in AI are deeply rooted in the accuracy, trustworthiness, and quality of data, which, if overlooked, can significantly skew outcomes. Lyndsey Jones, an AI author and transformation coach, delves into these concerns, offering valuable insights for newsrooms on monitoring and reviewing data.

Madhumita Murgia, an AI journalist and the first artificial intelligence editor of the Financial Times, sheds light on how women, migrants, precarious workers, and minority groups are disproportionately affected by the technical limitations of Generative AI. Murgia emphasizes the lack of representation of these groups in the development process of AI technologies, highlighting the need for inclusive participation.

WAN-IFRA Women In News workshops on the Age of AI in the newsroom have brought bias effects to the forefront. Through the Digital ABCs training program, media professionals are equipped with skills to navigate the digital landscape and drive organizational change.

A newly launched module focuses on AI. Over 100 participants in Eastern Europe have taken part, and the module has now been extended to journalists in parts of Africa, the Middle East, and Southeast Asia. Instances of bias surfaced during the training, such as AI tools generating offensive avatars and misinterpreting accents.

Google CEO Sundar Pichai's acknowledgment of biased AI tools reflects ongoing concerns in the industry. Timnit Gebru's contested departure from Google after she raised concerns about bias in large language models further underscores the need for vigilance in addressing these issues.

Diverse teams in WIN's Age of AI program are experimenting with AI tools for tasks such as fact-checking, while building staff skills in AI usage. Projects under consideration for further EU funding include a video lab for content amplification and an AI avatar for journalist safety.

Media companies must ensure that diverse teams of staff collaborate when testing AI tools. Quotas for women in AI research may be needed, and cross-border partnerships could help smaller media groups compete effectively.

Journalists can improve content quality by examining their storytelling practices and ensuring diversity in sources and representation. Consistent data collection across departments and regular assessment of biases in data sets are crucial for ethical AI usage in journalism. Ultimately, AI tools should be used to enhance journalism's quality and integrity, not to generate clickbait or misinformation.