The Challenge of Responsible AI
Reflections from September’s RAi UK All-Hands
This blog is co-authored by Jason Du Preez.
Today, business leaders are under pressure to deliver AI and reap the benefits for their organizations and customers. However, the speed at which these technologies are evolving presents huge challenges for overseeing new AI programs. While the technology continues to develop apace, governance, regulation and policy are struggling to keep up. At the same time, leaders are primarily focused on mastering the use of this new technology within their organizations and demonstrating its business value. Consequently, the crucial questions of governance, compliance and risk management can take second place.
Regulatory Developments and Shifting Narratives
Only last year, the UK convened the Bletchley AI Summit, emphasizing the security of AI systems and issuing stark warnings about potential negative consequences. In August, the EU AI Act became law, introducing new obligations for those developing and deploying AI systems. Since then, however, the narrative has shifted from safety towards opportunity, with the mood music playing a decidedly more positive tune.
AI is increasingly seen as a key technology for delivering change, economic growth and improved services. Given the potential rewards, fear should not impede progress! However, most will agree that the deployment of AI systems needs to be responsible to ensure long-term, sustainable outcomes. This was a common thread throughout many of the conversations at the RAi UK All-Hands meeting we attended in Cardiff this month.
Developing and deploying AI systems in a responsible manner requires organizations to consider many facets. Not only must the AI deliver valid and accurate outputs, but it should also be ethical, secure and resilient, privacy-enhanced, fair, accountable and explainable. Appropriate checks and guardrails need to be applied at each stage of the AI lifecycle, including data discovery and preparation, model selection, training and fine-tuning, validation, deployment and monitoring.
Pioneering Responsible AI Practices
Responsible AI UK (RAi UK) is a major UK research programme and collaborative network that brings together universities, industry, policymakers and civil society to share ideas and conduct research on responsible AI practices.1 The programme started a year ago, so we were excited to learn more about it and its achievements so far.
The ambition of RAi UK is to deliver world-leading best practices for the design, regulation and operation of responsible AI. We learned that a strong focus so far has been on consolidating the fragmented research network in this space around central themes. Through a series of over 20 roundtable events across the UK, the programme has brought together academia, industry, NGOs and government to shape the core research challenges to be addressed.
The programme has already funded over 23 projects spanning a variety of sectors, including governance, science and technology, transport, education, health, defence and the creative industries. These include large new multi-disciplinary collaborations tackling emerging concerns around AI systems currently being built and deployed across society. Additionally, international partnerships and impact accelerators provide funding to existing mature projects to maximize their impact and rapidly realize their potential benefits.
Challenges and Innovations in AI Auditing
Despite the importance of auditing AI systems, doing it effectively presents real challenges. Individuals with the necessary domain expertise often lack the required technical skills. Moreover, in the context of generative AI, robust benchmarks for evaluating LLMs are still lacking. The projects showcased at the RAi UK All-Hands event, in particular the PHAWM and AdSoLve projects, are geared towards solving these challenges.
The Participatory Harm Auditing Workbenches and Methodologies (PHAWM) project addresses the issues in AI auditing by enabling the participation of a diverse set of stakeholders in the audit process.2 It aims to create a certification framework for democratizing trustworthy AI development. Complementing this, the AdSoLve team is working on transferable technical solutions for augmenting LLMs with temporal reasoning and situational awareness, along with a novel and robust evaluation benchmark for AI systems in the healthcare and legal sectors.
Partnership and Forward Look
At Informatica, we are excited to partner with RAi UK. We care deeply about providing our customers with the right tools to prepare their data for AI. For data to be AI-ready, it needs to be robust, responsible and relevant. RAi UK's themes and ambitions are crucial in developing novel solutions to manage and govern data for AI. We are looking forward to the programme's outputs over the coming years, and the opportunities to harness this research to bring practical solutions to our customers.
Find out more about RAi UK: https://rai.ac.uk/.
Additional Resources
To learn more about Informatica’s approach to responsible AI, check out one or more of these resources:
- Watch “Ready, set, AI: Harnessing responsible AI data governance for better business.”
- Download “Chart the Course for Responsible AI Governance.”
- Read “EU AI Act: How to Create an Effective Data Governance Strategy for Your Organization.”
1 https://rai.ac.uk/
2 https://www.dhi.ac.uk/projects/phawm