Global AI Regulation: The (Mis)Alignment Challenge – Reflections on an ILW 2023 Panel
By ABILA 2023 Student Ambassador Bohdan Krivuts, LL.M. Candidate at the University of Georgia School of Law.
This blog is part of a series of reflections on ILW 2023 by our Student Ambassadors. Each Student Ambassador engaged with a variety of panels and will be sharing their experiences over the spring and in the lead-up to ILW 2024.
“Artificial intelligence (AI) cannot be regulated like other things. You cannot regulate AI in the same way as operating a nuclear power plant or participating in other commercial activities.” This quote from Professor Thomas Streinz captures the approach lawmakers should adopt when developing AI regulations, whether global, regional, or local.
On October 21, 2023, during the third day of International Law Weekend, Professor Thomas Streinz moderated a panel titled “Global AI Regulation (Mis)Alignment Challenge.” The panelists, including Adele Barzelay, Nathalie Smuha, and Yirong Sun, discussed the rapid development of AI technologies and the current state of AI regulation in various jurisdictions, including China, the European Union, and the US. The panelists also explored the alignment of AI regulations with the needs of both AI systems and humans.
Globally, AI technologies are advancing at an incredible pace. Progress in AI, especially in generative AI (such as ChatGPT), has outpaced the development of legislation intended to regulate AI and mitigate AI-related risks. Since 2020, the EU, US, and China have been separately working to create frameworks for AI regulation, and as of this writing, only China has adopted legislation specifically targeting AI technologies. The absence of relevant AI legislation in most jurisdictions poses numerous risks for governments, companies, and citizens worldwide, including threats to data protection and safety, copyright infringement, ethical lapses, and the spread of misinformation. Furthermore, Professor Streinz reminded us that these risks are amplified by the fact that AI technologies transcend borders: anyone with internet access may be exposed.
As panelist Adele Barzelay noted, the world is progressing toward global governance for AI, including through the World Economic Forum’s AI Governance Alliance. AI governance may be implemented at two levels. The first is substantive governance, in which international organizations establish global norms, principles, recommendations, and sector-specific guidelines covering the risks of AI usage. The second is procedural governance, in which international organizations engage multiple stakeholders and allow participation from different regions worldwide while developing global AI regulations. Nevertheless, before developing further AI-related regulations, international organizations and States should carefully analyze whether the current legal framework can be adapted to regulate AI or whether entirely new regulations are necessary.
In April 2021, the European Commission introduced a comprehensive regional regulatory framework, proposing harmonized rules on AI and amendments to several existing EU legislative acts. The regulation’s primary purpose is to protect human rights, ensure data privacy, and standardize AI rules across EU member states. Nathalie Smuha explained that the regulation classifies AI systems by risk level: unacceptable-risk, high-risk, and low- and minimal-risk AI systems. Unacceptable-risk AI systems will be prohibited within EU borders; examples include systems capable of materially distorting a person’s behavior in ways that cause or are likely to cause physical or psychological harm, as well as ‘real-time’ remote biometric identification systems used in publicly accessible spaces for law enforcement purposes. The high-risk category is limited to AI systems that may significantly harm the health, safety, or fundamental rights of individuals in the EU, a restriction intended to minimize the framework’s impact on international trade; such systems will be permitted on the EU market only if they comply with specific mandatory requirements. Finally, low- and minimal-risk AI systems, such as chatbots, are considered less likely to cause harm and are therefore subject to less stringent regulatory measures.
At present, the European Commission has struggled to align the proposed draft regulation with the needs of the different member states and the EU as a whole. Although the regulatory framework has not yet been officially adopted, the use of EU users’ data is already governed by the EU’s General Data Protection Regulation (GDPR); consequently, any use of personal data in AI processing falls within the GDPR’s scope, regardless of how simple or complex the AI technology involved may be. Simultaneously, the Council of Europe is actively working on the AI Framework Convention, which aims to establish a European treaty on AI. The draft convention outlines broad principles to which AI developers will have to adhere, including privacy, personal data protection, accountability, responsibility, transparency, oversight, safety, safe innovation, and non-discrimination. The Council of Europe aims to adopt the Convention by spring 2024.
During the panel, Yirong Sun discussed China’s regulatory policies in the field of AI. Under China’s AI legislation, research entities developing AI technology in China are required to establish committees responsible for disclosing specific aspects of their research. China has also implemented regulations targeting generative AI technologies, such as deepfakes, incorporating algorithm-filing and security-assessment procedures to evaluate the safety of AI technologies intended for public use. In the context of deepfake technologies, providers bear significant obligations to justify the use of the technology and to identify the ultimate users of deepfake products. These regulatory measures underscore China’s commitment to fostering responsible and secure advancement in the rapidly evolving sphere of AI.
Overall, the uncontrolled use of AI technologies poses numerous risks to users, governments, and society at large. Subject to appropriate regulation, however, the use of AI can be beneficial. AI technologies have already permeated many aspects of our lives, much as the invention of the internet reshaped the course of human progress decades ago. Therefore, to ensure the effective and, more importantly, safe use of AI technologies, international organizations and States should collaborate in developing comprehensive AI regulations.