
How to further improve international AI governance within the Global Digital Compact

Igor Kumagin, Senior Project Manager, Kaspersky

Dmitry Fonarev, Senior Public Affairs Manager, Kaspersky

In September 2021, the secretary-general of the United Nations issued a report titled “Our Common Agenda”, which, in particular, enshrines the international organization’s commitment to elaborating a Global Digital Compact (GDC). These proposed guidelines, which are currently under development, seek to establish general principles, clear objectives and specific actions to promote an inclusive, free and secure digital future for the world.

Although the GDC, expected to be agreed at the Summit of the Future in September 2024, is non-binding, it would be a significant step towards addressing pressing digital challenges and regulating emerging technologies on a global scale. Notably, the document, as currently drafted, sets the objective of closing digital, data and innovation divides, with a focus on improving internet connectivity and making it more affordable, enhancing digital skills, and developing appropriate capacity-building programs. In addition, the GDC places human rights in the digital environment at the center of the discussion, promoting the protection of children’s rights online and the fight against sexual and gender-based violence facilitated by new, sophisticated technologies. It also highlights the commitment to an unfragmented internet and information integrity, along with adherence to the principles of secure and trusted cross-border data flows.

A separate section of the draft document is devoted to the regulation of AI, with a specific focus on Objective 5, “Enhance international governance of emerging technologies, including artificial intelligence, for the benefit of humanity”. Drawing on its extensive expertise in addressing AI threats and risks, Kaspersky believes its recommendations could help develop a comprehensive, security-focused approach to AI governance under the GDC.

1. Enhance multistakeholder cooperation.

Given the growing interconnectivity of the modern world and the complexity of AI technologies, developing rules in this area requires close collaboration among diverse groups of stakeholders, as outlined in the GDC. In this context, Kaspersky welcomes the proposal to establish an International Scientific Panel on AI and an Annual Global Dialogue on AI Governance under the auspices of the UN. These multistakeholder platforms should prioritize building international consensus on the fundamental cybersecurity requirements essential for the development of AI-enabled systems.

2. Prioritize security at all stages.

In line with the GDC’s reasonable call for digital technology companies and developers to increase the transparency of their systems and processes, Kaspersky suggests that the document also recommend these enterprises treat safety as paramount. This means integrating cybersecurity principles into the foundation of AI system development to minimize threats to customers and the wider public. Because the vulnerability of a single element can compromise the entire chain, comprehensive assessment mechanisms aimed at identifying potential risks should be implemented throughout the entire AI system development lifecycle.

3. Ensure personal data protection.

Embracing the GDC’s commitment to the principles of responsible data collection and privacy protection, Kaspersky advocates for digital technology companies and AI developers to prioritize the implementation of an opt-out mechanism. Such a mechanism would enable data owners to explicitly prohibit the use of their data for training and fine-tuning AI through distinct markers. This approach would safeguard personal information while fostering the development of AI systems that are both effective and ethically sound.
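To make this idea concrete, the sketch below illustrates one hypothetical way a training-data pipeline could honor such a marker. The record structure and the "ai_training_opt_out" field name are assumptions made purely for illustration; neither the GDC nor Kaspersky prescribes any particular marker format.

    from dataclasses import dataclass, field
    from typing import Dict, Iterable, Iterator

    # Hypothetical record of collected content; the metadata dictionary carries
    # markers supplied by the data owner. "ai_training_opt_out" is an assumed
    # marker name used only for this illustration.
    @dataclass
    class ContentRecord:
        owner_id: str
        text: str
        metadata: Dict[str, bool] = field(default_factory=dict)

    def may_use_for_training(record: ContentRecord) -> bool:
        """Return True only if the owner has not set the opt-out marker."""
        return not record.metadata.get("ai_training_opt_out", False)

    def filter_training_data(records: Iterable[ContentRecord]) -> Iterator[ContentRecord]:
        """Yield only the records that are eligible for training or fine-tuning."""
        for record in records:
            if may_use_for_training(record):
                yield record

    if __name__ == "__main__":
        corpus = [
            ContentRecord("owner-1", "Public article", {"ai_training_opt_out": False}),
            ContentRecord("owner-2", "Private note", {"ai_training_opt_out": True}),
            ContentRecord("owner-3", "Forum comment"),  # no marker set
        ]
        eligible = list(filter_training_data(corpus))
        print(f"{len(eligible)} of {len(corpus)} records may be used for training")

Note that this sketch treats unmarked data as usable, reflecting an opt-out regime; under a stricter opt-in approach, the default in may_use_for_training would be reversed so that only explicitly permitted data passes the filter.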

4. Invest in AI model assessment.

Among other initiatives, the GDC sets the ambitious goal of launching a Global Fund for AI and Emerging Technologies for Sustainable Development under the authority of the UN secretary-general in 2025, with an initial capital of $100 million. This mechanism is expected, inter alia, to finance capacity-building training, encourage the development of quality datasets for AI systems, and foster AI-based solutions aimed at achieving the Sustainable Development Goals.

Recognizing the indisputable need for additional investment to support AI governance, Kaspersky proposes, as a crucial recommendation, that governments establish certification laboratories for high-risk AI-backed models. The aim is to ensure that such systems, particularly those applied in critical areas like national security, healthcare, and transportation, operate adequately and with acceptable levels of risk. These labs would provide an independent assessment of the safety and efficacy of AI systems before they are deployed.

In conclusion, the GDC faces the complex challenge of harmonizing the divergent views and approaches of various stakeholders in order to make the online space accessible and safe. However, once adopted, this initiative could evolve into a holistic framework for agile AI governance, incorporating international best practices in this area and laying the groundwork for the further development of regulatory standards in the digital domain.
