6 Key Principles for Responsible AI
Organizations are increasingly leveraging AI to power their operations and enable applications that range from the trivial to the mission-critical. However, widespread adoption of AI is hindered by concerns over privacy, security, and ethics.
Controversial outcomes of employing AI are becoming well-publicized. Apple and Goldman Sachs were accused, and later cleared, of gender bias in setting credit card limits after historical data reflecting latent bias was used to develop and test their AI model. Facebook was fined for violating consumer privacy after Cambridge Analytica leveraged Facebook users' personal data and AI in digital political campaigning to influence voters. As a result, trust in Facebook dropped by 66%. Occurrences like these reinforce fears and concerns around AI. Can your organization withstand a similar scandal?
The significant decline in trust by Facebook users demonstrates how today’s consumers are conscientious and privacy-aware. They want to know how AI is used to make decisions that impact their lives and whether they can contest those decisions. They want answers to questions like:
Why was I denied a loan? Why isn’t my credit limit higher? Why is my insurance premium so costly?
In essence, consumers demand transparency and explanations behind AI-driven decision-making. They will choose to do business with brands that provide this transparency – brands they can trust. To empower privacy-aware consumers in this age of AI, enterprises need to move beyond trading on customer experience and engagement to trading on trust and transparency; these are the new principles on which brands differentiate.
To fulfill consumers’ trust and transparency imperatives, enterprises need to understand and respect compliance and ethics boundaries. They need to reimagine how they communicate and leverage consent. Consumers willingly share their data with brands, trusting the brands will protect their personal data and use it to deliver superior experiences.
Ensuring Trust and Transparency in AI
Market researcher Gartner® predicts, “By 2024, 60% of AI providers will include a means to mitigate possible harm as part of their technologies.”[1]
Enterprises should consider creating a responsible AI strategy for successfully leveraging AI at scale to drive business outcomes. Doing so helps build trust with customers and users, minimize risk, and maximize business potential. The following are key guiding principles for a responsible and ethical approach to leveraging data and AI:
Fair and equitable – AI initiatives should strive to incorporate fairness and minimize bias. AI-based applications and systems should be designed to avoid discrimination and the perpetuation of historical bias. To protect the integrity of AI development processes, an independent review committee should assess bias; a minimal quantitative check is sketched after this list.
Social ethics – AI initiatives should respect human dignity and values. AI-based applications should be designed to benefit a broad range of human populations. Underlying data should reflect diversity and inclusion.
Accountability and responsibility – AI initiatives should incorporate an operating model that identifies various roles and stakeholders who are accountable, responsible, provide oversight, or conduct due diligence and verification at various stages of implementing AI projects. Evaluating your AI systems, both when they perform as intended and when they don’t, is crucial to building accountable products.
Systemic transparency – AI systems should be designed to provide full visibility into the data and AI lifecycle, including assumptions, operations, changes, user consent, and more. They should also surface metrics and metadata such as data quality, bias, data drift, model drift, anomalies, the rules applied, algorithm selection, and training methods. Different stakeholders will require different levels of transparency based on their roles.
Data and AI governance – AI initiatives should include robust and fully auditable governance and compliance standards, frameworks, and structures for both data and AI. They should account for requirements to comply with laws, regulations, and company policies, and management of risks. Existing governance and risk management frameworks should be reviewed and refined to incorporate new considerations, standards, principles, and risks.
Interpretability and explainability – AI initiatives should account for the maximum feasible level of explainability. AI-based decisions and actions should be explainable to all relevant stakeholders; one common technique is sketched after this list. Interpretable and explainable AI promotes trust and drives informed decisions.
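To make the "fair and equitable" principle concrete, here is a minimal sketch of one quantitative bias check: the disparate impact ratio, which compares favorable-outcome rates across groups. The dataset, column names, and the four-fifths (0.8) alert threshold are illustrative assumptions, not a complete bias audit.

```python
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str,
                     protected: str, reference: str) -> float:
    """Ratio of favorable-outcome rates between two groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates[protected] / rates[reference]

# Hypothetical credit decisions: 1 = approved, 0 = denied.
decisions = pd.DataFrame({
    "gender":   ["F", "F", "F", "F", "M", "M", "M", "M"],
    "approved": [1, 0, 1, 0, 1, 1, 1, 0],
})

ratio = disparate_impact(decisions, "gender", "approved",
                         protected="F", reference="M")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the "four-fifths rule", an illustrative threshold
    print("Potential disparate impact; flag for independent review.")
```

An independent review committee would treat a ratio well below 0.8 as a signal to investigate the training data and model, not as a verdict on its own.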
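And as one illustration of explainability in practice, the sketch below uses scikit-learn's permutation importance to estimate how much each input feature drives a model's predictions on synthetic data. It is one technique among many; SHAP values, LIME, and counterfactual explanations are common alternatives.

```python
# Permutation importance: shuffle one feature at a time and measure how
# much the model's score degrades, revealing which inputs matter most.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a decisioning dataset (e.g., loan approvals).
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```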
Enabling Responsible AI
Your AI journey begins by acknowledging the need, establishing the vision, and identifying key functional roles and capabilities for enabling responsible and ethical AI.
Key functional roles must be anchored at the executive level. This ensures buy-in, budget, and the ability to scale responsible AI practices at the organization level. These roles will be critical in establishing, evolving, and ensuring responsible AI in your organization.
Key capabilities for enabling responsible AI include:
Data quality – Poor data quality cascades, causing negative downstream effects throughout the AI lifecycle. Consider an end-to-end data quality capability that delivers clean, trusted data to ensure that all your AI projects meet your business objectives; a minimal validation sketch follows this list.
Data and AI governance – AI governance is a continuous balancing act between driving innovation and maintaining trust. Adopt an integrated data and AI governance capability that provides full visibility into both data and model lifecycles.
Data privacy management – Customers will opt in and share their data if they are convinced that their privacy is respected. Implement a robust data privacy capability that enables you to discover, analyze, and secure all your data, as well as manage consent to reduce data privacy risks and remain compliant.
DataOps and ModelOps – Employ DataOps and ModelOps capabilities to operationalize, manage, and monitor both data and model lifecycles while delivering a clear understanding of the full process of algorithmic decision-making; a drift-monitoring sketch follows this list.
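As a minimal sketch of what an automated data quality gate might look like, the Python below profiles a hypothetical customer extract for completeness, duplicates, and a domain rule before it reaches a training pipeline. The column names and the age rule are illustrative assumptions; a production capability would cover far more.

```python
import pandas as pd

def quality_report(df: pd.DataFrame) -> dict:
    """Basic profile: volume, duplicates, and per-column completeness."""
    return {
        "row_count": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "null_rate": df.isna().mean().round(3).to_dict(),
    }

# Hypothetical customer extract with a missing value and an implausible age.
customers = pd.DataFrame({
    "customer_id": [1, 2, 3, 4],
    "age": [34, None, 29, 127],
    "email": ["a@example.com", "b@example.com", None, "d@example.com"],
})

print(quality_report(customers))

# A domain rule: ages must fall in a plausible range; violations are
# surfaced instead of silently propagating into model training.
violations = customers[(customers["age"] < 0) | (customers["age"] > 120)]
if not violations.empty:
    print(f"{len(violations)} row(s) violate the age rule; quarantine them")
```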
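And to illustrate the monitoring side of ModelOps, here is a minimal drift-detection sketch: a two-sample Kolmogorov-Smirnov test compares a feature's live distribution against its training-time baseline. The synthetic data and the alert threshold are assumptions; production monitoring would track many features, models, and metrics continuously.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=0)
# e.g., applicant income at training time vs. incomes scored this week
baseline = rng.normal(loc=50_000, scale=10_000, size=5_000)
live = rng.normal(loc=58_000, scale=10_000, size=5_000)

statistic, p_value = ks_2samp(baseline, live)
if p_value < 0.01:  # illustrative alert threshold
    print(f"Drift detected (KS={statistic:.3f}, p={p_value:.1e}); "
          "review the model and consider retraining.")
else:
    print("No significant drift detected.")
```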
Get Started with Your AI Initiatives
To learn how Informatica can power your responsible AI initiatives, visit our Intelligent Data Management Cloud page.
1. Gartner®, "Predicts 2021: Artificial Intelligence and Its Impact on People and Society," Magnus Revang et al., 31 December 2020.