AI regulation is not only a technology issue 

By: Isaías López, Director of Public Policy and Government Relations, Speyside Mexico

Artificial Intelligence goes well beyond ChatGPT and similar services; it plays a crucial role across industries, including the development of vaccines against SARS-CoV-2. But its democratization also makes the discussion of its regulation more pressing. The topic now ranks high on the international agenda, where concerns beyond technology itself, such as ethics, safety, and technical standards, are under debate.

Current discussions around the globe

In July, China released “The Interim Measures for the Management of Generative Artificial Intelligence Services,” which states that authorities should formulate gradual and categorized supervision based on the characteristics of different fields and industries. Among the measures that drew attention are safety assessments and algorithm registrations for AI service providers with social mobilization capabilities, and the obligation of providers to label AI-generated content so it can be distinguished from other content.

In October, the G7 announced the development of the Guiding Principles and International Code of Conduct to guide organizations developing and using AI systems to address risks and challenges posed by these technologies, notably disinformation, invasion of privacy, and intellectual property infringement. These documents include mitigation measures to be taken by organizations.

In early November, 21 other countries (plus the G7 and the European Union) signed the Bletchley Declaration, stating that AI must be designed, developed, and used in a way that is safe and responsible. The declaration recognizes the potential for severe and even catastrophic harm and highlights the commitment of governments and companies to “work together on the safety of new AI models before they are launched.”

Going a step further, on December 8, the EU reached a regulatory agreement with a risk-based approach, including prohibitions on “unacceptable”-risk uses such as facial recognition, biometric surveillance, emotion recognition, social scoring, and behavioral manipulation systems. However, some exceptions are foreseen, such as the use of biometrics to locate criminals and victims of kidnapping or trafficking, as well as to prevent terrorist threats.

In contrast, Southeast Asian countries are taking a more cautious stance, moving away from positions of strict regulation and considering the countries’ cultural differences. This is according to a draft of the Association of Southeast Asian Nations (ASEAN) “Guidance on AI Ethics and Governance,” the final version of which could be ready by early 2024.

In Mexico, efforts to design public policies in this area have been led by the National Artificial Intelligence Alliance (ANIA), which held various working groups during 2023 to prepare a document that will serve as a roadmap and will be delivered to the presidential candidates in the first quarter of the year. In addition, in November, a collaboration between the British Embassy and the Mexican Academy of Cybersecurity and Digital Law (AMCID) was launched to innovate in the regulatory environment of AI through a “Regulatory Sandbox,” a kind of laboratory that allows testing to anticipate and understand the impact of new technologies under specific guidelines and in controlled spaces before they become widespread. If this Regulatory Sandbox materializes, Mexico could be at the forefront of efforts being made worldwide.

Recommendations from companies such as Google

It should also be noted that these efforts have not been driven by the government sector alone. Companies like Google have stressed the importance of a global approach to AI regulation. Through “The AI Opportunity Agenda,” the company recommended that governments adopt a legal framework that is proportional and based on specific risk analysis, recognizing that regulation should be tailored to particular use cases. It also recommended using “technical standards” to provide a level playing field for all companies.

What would be a good AI regulation?

The input from the global organization The Ambit provides a good starting point. Its paper “Voices from Southeast Asia: On Global AI Governance” outlines AI regulation under three broad headings: a risk-based approach, a principles-based approach, and a values-based approach.

The risk-based approach consists of identifying and evaluating potential risks and the measures that must be adopted to mitigate them. One advantage of this approach is that it makes it possible to focus efforts on avoiding harm to society in specific situations, but it leaves open questions about which risks exist and how they should be addressed. The different levels of danger imply a greater or lesser degree of regulation, ranging from prohibition to self-regulation, including supervision by a governmental body.

The principles-based approach focuses on principles that serve as the basis for decision-making. It prioritizes ethical and moral considerations and adheres to pre-established principles (such as avoiding the use of technology for discriminatory purposes). However, this approach may fail to provide specific guidance on detailed and complex issues, and applying its principles uniformly across diverse contexts can be difficult.

Finally, the values-based approach focuses on achieving and defending objectives, prioritizing values such as democracy, human rights, and sustainability. In this way, it provides a clear and comprehensive framework for decision-making based on shared values, preventing, for example, AI from being used for repressive or manipulative purposes. As with the previous approach, implementation can be complex at the regional or global level, because states may hold different values and may struggle to adapt to changing circumstances.

Any regulation will depend on the specific objectives and policy intended for AI, and the choice of approach will depend on the circumstances of each country or region seeking to regulate.

What to expect in terms of regulation

From the review of the regions discussed here, there seems to be a consensus to avoid overregulation. What is sought instead is a design that allows innovators to keep innovating while avoiding harm and threats to society, so that the benefits of AI reach the greatest possible number of people.

Based on what we saw in 2023, regulation will arrive in the not-too-distant future. The ideal would be triple-helix coordination (government, academia, and society) to align the interests of regulated entities with public interests. To this end, some have proposed establishing a global governance regime akin to the Intergovernmental Panel on Climate Change.

If coordination is not achieved at a global level, the risk is that we will end up with as many regulations as there are regions in the world, not only with dissimilar levels of law but also with measures that contradict one another from region to region.

AI regulation is not just a technology issue but one that will affect all businesses and sectors. Diverse voices from across these sectors need to get involved in the discussion, to understand what is going on and contribute their perspectives. We at Speyside are helping organizations do this and would be happy to discuss how we can help you.