Why the Biden Administration’s Executive Order on AI Matters

Last Published: Dec 02, 2024
Nathan Turajski

Senior Director, Product Marketing

Artificial intelligence (AI) is at a crossroads that will determine the next phase of growth and adoption, with organizations weighing the risks and rewards to their business today.

The potential upsides of AI include streamlining operations, uncovering new opportunities and insights, and helping to connect with customers. The potential downsides relate to the ethical use of AI in new applications, the impact on data privacy, and the trustworthiness of AI models. It is within this context that the Biden administration’s Executive Order on Artificial Intelligence1 should be framed when assessing its impact.

Expert tip: Join our webinar, “Drive Better AI with Trusted Data,” on December 5 to learn from industry experts and get your questions answered.

Where organizations head next depends largely on the guardrails put in place in the months ahead to address emerging risks and build confidence in AI. The latest executive order (EO) is a welcome step. It shines a spotlight on addressing AI’s impact through emerging governance standards while keeping AI open and relevant for business.

It should be noted that government scrutiny did not start with this latest executive order. The EU AI Act2 is already blazing a trail, focused on profiling data risks, and it serves as a good introduction to the hard work already in progress as this topic builds momentum globally.

What Does the Executive Order on Artificial Intelligence Include?

Without repeating the executive order word-for-word, let’s summarize critical points. Eight primary sections of the EO detail the scope:

  • New Standards for AI Safety and Security
  • Protecting Americans’ Privacy
  • Advancing Equity and Civil Rights
  • Standing Up for Consumers, Patients, and Students
  • Supporting Workers
  • Promoting Innovation and Competition
  • Advancing American Leadership Abroad
  • Ensuring Responsible and Effective Government Use of AI

To simplify what these mean in practice, we can break them down into three primary themes:

  • Regulating AI to drive responsible use by lowering the risk of abuses (bullet one)
  • Protecting individuals – privacy, bias, ethical use, (bullets two–five)
  • Accelerating innovation (bullets six–eight)

Regulating AI

The first point focuses on helping to ensure AI systems are “safe, secure, trustworthy” for critical applications, and on developing standards that promote AI safety and reduce risks of abuse, such as “fraud and deception.” (Think deepfakes as an example of the content authenticity challenge.) Cybersecurity impact is also considered, both in using AI to enhance security and in helping to ensure safe, ethical and effective military and intelligence use.

Protecting Individuals

The recent artist strikes demonstrate the concerns over abuse of artificially generated content, including actors’ likenesses. Protecting data privacy and shielding individuals from AI abuse are central to points two through five of the EO. “Privacy-preserving” techniques that make it harder to “extract, identify, and exploit personal data” are critical, as is guarding against the potential for AI to introduce bias and discrimination when models are not properly trained. In general, the use of confidential data should empower society rather than harm it. (Asimov’s laws of robotics should apply equally to AI.3)

Accelerating Innovation

The last few EO points can be characterized by America’s expected leadership through innovation: investing in AI research, promoting open competition and recruiting future talent, while advancing American leadership on the world stage. This includes defining “AI standards with international partners and in standards organizations, ensuring that the technology is safe, secure, trustworthy, and interoperable.”

Why the Executive Order on Artificial Intelligence Matters to Today’s Businesses

Claiming a leadership role for America in AI raises the bar, while also playing a bit of catch-up as Europe’s AI Act gains momentum. With this initial EO blueprint, organizations can begin to incorporate responsible AI use into their charters, build better AI and data literacy wherever they handle sensitive data, and develop customer experience programs that foster more trusted relationships.

Many organizations are on the fence when evaluating the risks and benefits of groundbreaking AI technology, such as generative AI. Now, with the EO, organizations can start to build confidence that the benefits of AI can potentially outweigh the risks with proper data governance and AI governance in place. So, how do organizations get started?

Drive Better AI with Trusted Data

Informatica’s data and AI governance tools on IDMC help organizations balance AI risks and rewards in their favor. We can help you get ready for AI – including your data!

Do you have the right tools? Do you understand the impact of trusted data and AI? Do you need your questions answered? We’ve got your back.

1. https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
2. https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
3. https://en.wikipedia.org/wiki/Three_Laws_of_Robotics

First Published: Nov 03, 2023