Have you noticed all the recent advancements and buzz around artificial intelligence (AI)?
Have you thought about the societal impact of these technologies?
Without a doubt, AI is rapidly developing and becoming increasingly integrated into our lives. But while AI has the potential to make our lives easier and more efficient, there are also concerns that it could be used to perpetuate bias and discrimination, among other ethical issues.
And data leaders are becoming more conscious of these considerations. Informatica recently published CDO Insights 2023: How to Empower Data-Led Business Resiliency, which revealed that around a third of the 600 data leaders surveyed around the globe identified AI governance/ethics as a top data strategy trend (AI governance/ethics was the top trend reported among U.S. data leaders). Data leaders also identified environmental, social and governance (ESG) initiatives as a leading trend in our survey. And within the “social” dimension, promoting diversity, equity and inclusion is emerging as a critical objective for many organizations. This indicates that leaders are looking for ways to foster a more inclusive approach to their operations, including their AI initiatives.
We decided it was time for a critical internal conversation about data ethics, ethical AI and bias. As our CEO Amit Walia puts it, “The ideals of inclusion, diversity, equity and belonging are embedded in Informatica’s DNA and together, we have the power to do the extraordinary and make a difference in the world and for each other.”
Commemorating Juneteenth at Informatica with AI Ethics Expert Dr. Brandeis Marshall
Earlier this month, Informatica Black Resource Group (IBRG) hosted data equity strategist and author of Data Conscience, Dr. Brandeis Marshall, for an internal event commemorating Juneteenth. We invited Dr. Marshall to lead an interactive discussion on mitigating bias and reducing discrimination in data and AI.
During the insightful session, we covered topics including:
- How bias is defined
- Real-world examples of bias in AI
- Why we all should care about mitigating bias (even those of us who are not in technical roles)
Dr. Marshall shared an immense amount of valuable information with us. While it’s challenging to distill all we learned into just a few points, here are some of the key takeaways:
- Various types of bias exist…and they co-exist
For our discussion, Dr. Marshall focused on four types of bias, which she categorized into two major groups:
Statistical bias, which she describes as “grounded in the mathematical formulation.” It includes data and algorithmic bias.
Societal bias, which she describes as “grounded in the presence and dissemination of prejudice in our processes, systems and institutions.” It encompasses human and business bias.
Although these biases live in separate categories, Dr. Marshall suggests we should consider how they interact. For instance, a societal inequity may correlate with, or even amplify, a statistical error.
- To help mitigate bias, use “The Bias Wheel”
To characterize the relationship among the different forms of bias, Dr. Marshall arranges the four types described above as spokes of a wheel, aptly called “The Bias Wheel.” Each spoke within the wheel represents one of the four types of bias.
The Bias Wheel can start rotating at any point. As the wheel turns, each spoke (type of bias) is impacted by the other three. For example, human bias can affect data creation for algorithms that ultimately inform business decisions.
When building AI tools, thinking about these relationships and potential sources of bias can help encourage discussions around solutions (policies, procedures, etc.) to help mitigate bias.
- Where to start? Ask questions!
As a practical first step toward adopting the Bias Wheel, Dr. Marshall suggests simply asking these questions about the data you’re sourcing to improve your awareness of potential bias:
Does the data we need already exist?
If so, thoroughly understand where the data originated and its original purpose.
If not, what are your data-gathering procedures? And how much harm could they be causing?
Do we own the right to do what we want with the data?
If not, you’ll need to stop and consider your next steps, such as whether you wish to purchase a license.
Whether you are technical or business-oriented, asking questions like these will help you develop the transparency you need to begin intentionally addressing bias.
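For technical readers, the questions above could even be encoded as a lightweight checklist in code. The sketch below is purely illustrative (the `DataSource` fields and function names are our own, not from Dr. Marshall’s material); it simply turns each question into a flag and surfaces the open items to resolve before using a dataset:

```python
from dataclasses import dataclass

@dataclass
class DataSource:
    """Illustrative record of what we know about a candidate dataset."""
    name: str
    already_exists: bool          # does the data we need already exist?
    origin_documented: bool       # do we know where the data originated?
    original_purpose_known: bool  # do we know why it was collected?
    gathering_reviewed: bool      # were collection procedures reviewed for harm?
    usage_rights_held: bool       # do we own the right to use it as intended?

def provenance_concerns(src: DataSource) -> list[str]:
    """Return the open questions to resolve before using this data source."""
    concerns = []
    if src.already_exists:
        if not src.origin_documented:
            concerns.append("Document where the data originated.")
        if not src.original_purpose_known:
            concerns.append("Establish the data's original purpose.")
    elif not src.gathering_reviewed:
        concerns.append("Review data-gathering procedures for potential harm.")
    if not src.usage_rights_held:
        concerns.append("Confirm usage rights (e.g., purchase a license).")
    return concerns

survey = DataSource("customer_survey", already_exists=True,
                    origin_documented=True, original_purpose_known=False,
                    gathering_reviewed=True, usage_rights_held=True)
print(provenance_concerns(survey))  # → ["Establish the data's original purpose."]
```

An empty list means the basic provenance questions are answered; anything returned is a prompt for the kind of discussion described above, not an automated verdict on bias.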
Supporting Ethical AI with Effective Data Management
While our discussion was centered on bias, there are many other aspects to consider within the realm of ethical and responsible AI. Other elements include data privacy, transparency, accountability and more. With so many components involved, you may wonder how to address each of them adequately.
Well, a comprehensive data ethics framework, underpinned by solid data management capabilities such as data cataloging, data quality, data lineage and data governance, can certainly assist with your ethical AI endeavors by helping to improve data trust and transparency.
- Discover how this investment management company is leveraging data management tools to enable ethical use of their data: Accelerating Data Governance for ESG — A Fireside Chat with Federated Hermes
- Learn about these data management solutions that can support your AI ethics efforts
- Read more about Informatica’s sustainability efforts and commitment to inclusivity, diversity, equity and belonging (IDEB).