- The Data Lab has teamed up with 25 researchers from six leading UK universities and 23 partner organisations to contribute to the Responsible AI UK programme (RAi UK) project on participatory AI auditing.
- The project will ensure that people likely to be affected by decisions made by AI systems play a role in ensuring fair and reliable outputs.
- The Data Lab is supporting the dissemination of training toolkits to help people audit predictive AI use cases in health and media content, as well as generative AI use cases in cultural heritage and collaborative content generation.
The Data Lab, Scotland’s AI and data science innovation centre, has joined a consortium of 29 other organisations from across the UK to help deliver a £3.5m project designed to maximise the potential benefits of predictive and generative AI while minimising potential harms arising from bias and ‘hallucinations’, where AI tools present false or invented information as fact.
The Participatory Harm Auditing Workbenches and Methodologies (PHAWM) project is one of three new initiatives supported by RAi’s £12 million Keystone Funding to tackle emerging concerns of generative and other forms of AI currently being built and deployed across society.
Empowering people affected by AI
The project will pioneer participatory AI auditing, where non-experts, including regulators, end-users and people likely to be affected by decisions made by AI systems, will play a role in ensuring that those systems provide fair and reliable outputs.
The University of Glasgow will lead the consortium, with support from colleagues at The Data Lab at the University of Edinburgh, the universities of Sheffield, Stirling, Strathclyde and York, and King’s College London.
The project will develop new tools to support the auditing process in partnership with relevant stakeholders, focusing on four key use cases for predictive and generative AI, and create new training resources to help encourage widespread adoption of the tools.
The predictive AI use cases in the research will focus on health and media content, analysing data sets for predicting hospital readmissions and assessing child attachment for potential bias, and examining fairness in search engines and hate speech detection on social media.
In the generative AI use cases, the project will look at cultural heritage and collaborative content generation. It will explore the potential of AI to deepen understanding of historical materials without misrepresentation or bias, and how AI could be used to write accurate Wikipedia articles in under-represented languages without contributing to the spread of misinformation.
Fostering participation and adoption
The Data Lab will support the dissemination of the training toolkits which will help people to audit AI-powered technologies. From hosting training activities to organising engaging events, The Data Lab and partners will empower individuals and organisations across the UK to understand and engage with the AI auditing toolkits effectively. Together with partners, The Data Lab will spread awareness and foster widespread adoption of these toolkits, translating the academic research into actionable industry insights.
Adam Turner, Head of External Funding Services at The Data Lab, said:
“Our world is changing; AI has the potential to transform almost every part of society over the coming years. Ushered in by an explosion of computing power and advancements in AI techniques, we expect to see this technology becoming pervasive in nearly all walks of life.
“We need to be prepared for this ever-growing thirst for AI solutions by anticipating the potential harms. Working alongside an esteemed set of collaborators, we’re producing toolkits, training and certification programmes which will help people across the UK audit AI-powered technologies – allowing us to enjoy the benefits of AI whilst empowering us to hold it to account.
“The Data Lab is honoured to contribute to the Responsible AI UK programme to advance the dialogue surrounding AI ethics and governance, ensuring that the benefits of AI are equitably distributed and accessible to all.”
Adam Turner, Head of External Funding Services at The Data Lab
Dr Simone Stumpf, of the University of Glasgow’s School of Computing Science, is the project’s principal investigator. She said:
“AI is a fast-moving field, with developments often at risk of outpacing the ability of decision-makers to ensure that the technology is used in ways that minimise the risk of harms. Regulators around the world are working to strike a balance between harnessing AI’s potentially transformative benefits for society and applying the most effective level of oversight to its outputs.
“Auditing the outputs of AI can be a powerful tool to help develop more robust and reliable systems, but until now auditing has been unevenly applied and left mainly in the hands of experts. The PHAWM project will put auditing power in the hands of people who best understand the potential impact in the four fields these AI systems are operating in. That will help produce fairer and more robust outcomes for end-users and help ensure that AI technologies meet their regulatory obligations.
“By the project’s conclusion, we will have developed a robust training programme, a route towards certification of AI solutions, and a fully-featured workbench of tools to enable people without a background in artificial intelligence to participate in audits, make informed decisions, and shape the next generation of AI.”
Dr Simone Stumpf, Principal Investigator of PHAWM funded by RAi UK
Professor of Artificial Intelligence Gopal Ramchurn, from the University of Southampton and CEO of RAi UK, said the projects are multi-disciplinary and bring together computer and social scientists, alongside other specialists:
“These projects are the keystone of the Responsible AI UK programme and have been chosen because they address the most pressing challenges that society faces with the rapid advances in AI.
“The projects will deliver interdisciplinary research that looks to address the complex socio-technical challenges that already exist or are emerging with the use of generative AI and other forms of AI deployed in the real world.
“The concerns around AI are not just for governments and industry to deal with – it is important that AI experts engage with researchers and policymakers to ensure we can better anticipate the issues that will be caused by AI.”
Professor Gopal Ramchurn, CEO of RAi UK
Read more about the three AI projects or RAi UK at www.rai.ac.uk.