The U.S. and EU's Dance with AI Risks: A Tale of Two Approaches

By Rihonna Scoggins, Feb 08, 2023

Artificial intelligence (AI) is one of the most transformative technologies of our time, and its impact is already being felt across many industries. However, with great power comes great responsibility, and both the U.S. and the European Union are taking steps to manage the risks posed by emerging AI technologies. In this article, we will take a closer look at the U.S.'s new AI Risk Management Framework and the EU's approach to AI risk management, and examine their key differences.

The U.S.’s AI Risk Management Framework 

The U.S. National Institute of Standards and Technology (NIST) recently released its AI Risk Management Framework, which provides organizations with guidance on how to assess and manage the risks associated with AI technologies. The framework focuses on the design, development and deployment of AI systems and includes guidelines for organizations to assess risks such as data privacy, security and accountability. 

The framework offers a systematic approach for identifying and assessing the risks posed by AI technologies, along with best practices for security, transparency and accountability throughout the design, development and deployment of AI systems. It also recommends concrete mitigation measures, such as conducting regular risk assessments, implementing security controls and ensuring the transparency of AI systems.

The EU's Approach to AI Risk Management 

The EU's approach to AI risk management is broader, taking into account the ethical implications of AI as well as its technical and security risks. The European Commission is developing a comprehensive regulatory framework for AI that is expected to include requirements for transparency, accountability and data protection, along with guidelines on how organizations can ensure that AI systems are designed and used in a manner consistent with EU values and rights.

The EU's framework is also expected to provide guidance on how organizations can ensure that AI systems are fair and unbiased. Moreover, a European added value assessment conducted by the European Parliamentary Research Service (EPRS) analyzed and compared three policy options: a status quo baseline scenario, a "uniform" action plan and a "coordinated" plan that would require "joint responsibility between EU and national levels." The analysis concluded that a collective approach to ethical standards for AI could generate nearly €295 billion (over $356 billion) in additional GDP and 4.6 million additional jobs across the European Union by 2030.

Key Differences

When comparing the U.S. and EU approaches to AI risk management, the main difference lies in where each framework expects AI's impact to be greatest. The U.S. plan focuses on the technical and security risks posed by AI, while the EU's approach emphasizes the ethical and societal implications of AI across a range of business sectors. Given how untested AI deployment remains in areas such as business practices, public administration and human health, this broader focus makes the EU's approach more comprehensive and better suited to ensuring that AI systems are used in a manner consistent with human rights and values. Look no further than the emergence of ChatGPT: questions are already arising about how education and the arts may be ethically compromised and, especially, how jobs may ultimately be replaced by AI technologies.

In conclusion, both the U.S. and the EU are taking important steps to manage the risks posed by AI technologies. With no comprehensive legal framework yet in place, a range of approaches must be considered, discussed and implemented around the world to ensure the safe and responsible deployment of AI systems. As AI continues to evolve and play an increasingly important role in our lives, it is critical that we take a proactive approach to managing its risks and ensuring that it is used responsibly and ethically.