
2024-09-14

Building the technical cornerstone of a trusted ecosystem for generative AI

Generative artificial intelligence has garnered widespread attention across various sectors, especially since the rise of ChatGPT, and has developed rapidly in recent years. However, its formidable learning and generation capabilities have also raised significant security concerns, including malicious output, copyright infringement, and deepfakes. On October 18, 2023, in his keynote speech at the opening ceremony of the 3rd Belt and Road Forum for International Cooperation, President Xi proposed the Global Artificial Intelligence Governance Initiative. He emphasized that “AI governance is a universal challenge for all countries” and stressed the need to “collaborate in preventing risks, establishing a broad consensus on AI governance frameworks and standards, and continuously enhancing the safety, reliability, controllability, and fairness of AI technologies.”

In response, the Cyberspace Administration of China issued the Measures for the Identification of Synthetic Content Generated by Artificial Intelligence (Draft for Comment) (hereinafter referred to as the “Measures”) and the accompanying draft mandatory national standard, Network Security Technology: Identification of Synthetic Content Generated by Artificial Intelligence (Draft for Comment) (hereinafter referred to as the “Standards”). The “Measures” and “Standards” address the emerging challenges stemming from advances in artificial intelligence technology. Their aim is to identify and regulate synthetic content effectively by establishing an identification and verification technology system, safeguarding the authenticity and fairness of information, and sustaining the stable development of both the technology and the social order.


I: Standardizing content identification to prevent the risk of misunderstanding and abuse

As artificial intelligence technology rapidly advances, the realism of AI-generated synthetic content continues to improve, blurring the line between real and synthetic information. Such lifelike content not only risks misleading the public but also opens avenues for criminal exploitation, posing significant threats to personal privacy, organizational reputation, social stability, and national security. The “Measures” and “Standards” center on standardizing methods for identifying AI-generated synthetic content, outlining the technology’s application and fundamental objectives, and proposing various detection methods and supporting objectives to tackle prevalent challenges in the field of artificial intelligence.

The primary challenge lies in the confusion and misidentification of AI-generated synthetic content. At present, the public often struggles to discern AI-generated content when accessing and sharing information, leading to the inadvertent dissemination of false information. To address this, the Standard employs an explicit identification method that prominently alerts users that the content has been generated or synthesized by artificial intelligence. This measure aims to enhance the public’s ability to recognize such content, promote the authenticity and reliability of disseminated information, and offer clear operational guidelines for content creators and the platforms that test for compliance.
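To make the idea of explicit identification concrete, the sketch below stamps a visible disclosure notice onto a generated image. It is a minimal illustration only, assuming the Pillow library is available; the label text, placement, and styling are hypothetical choices for demonstration, not requirements drawn from the “Standards”.

```python
# A minimal sketch of explicit identification: overlaying a visible
# "AI-generated" notice on an image. The label wording and placement
# are illustrative assumptions, not prescribed by the Standards.
from PIL import Image, ImageDraw

def add_explicit_label(in_path: str, out_path: str,
                       label: str = "AI-generated content") -> None:
    """Overlay a visible disclosure banner along the image's bottom edge."""
    img = Image.open(in_path).convert("RGBA")
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    w, h = img.size
    # Semi-transparent banner keeps the notice legible on any background.
    draw.rectangle([(0, h - 28), (w, h)], fill=(0, 0, 0, 160))
    draw.text((8, h - 24), label, fill=(255, 255, 255, 255))
    Image.alpha_composite(img, overlay).convert("RGB").save(out_path)

add_explicit_label("generated.png", "generated_labeled.png")
```

The point of the sketch is simply that an explicit identifier is rendered where users will see it; the actual placement and wording requirements are governed by the “Standards” themselves.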

Another critical issue involves the potential misuse or abuse of AI-generated content. With current generative AI boasting exceptional performance and low entry barriers, individuals can effortlessly create substantial volumes of AI-generated content, escalating the risk of misuse or malicious exploitation. The Standard utilizes an implicit identification approach to embed traceability information within the generated content without altering its presentation. This method facilitates swift tracing of the content’s source and the identification of responsible parties in cases of abuse or malicious use, effectively curbing such misuse and upholding order in cyberspace. Additionally, this approach furnishes essential technical support for subsequent regulatory law enforcement and dispute resolution.
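As a rough illustration of this implicit route, the sketch below writes a traceability record into a PNG’s metadata text chunks, leaving the rendered pixels untouched, and reads it back for verification. The field names, file paths, and record format are hypothetical assumptions; plain metadata is also easily stripped, so production systems would pair it with robust watermarking. This is a sketch of the general idea, not the Standard’s prescribed format.

```python
# A minimal sketch of implicit identification: embedding a traceability
# record in a PNG's text chunks without altering the visible content.
# Field names ("provider", "content_id") are hypothetical placeholders.
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def embed_trace_info(in_path: str, out_path: str,
                     provider: str, content_id: str) -> None:
    """Write a provenance record into a PNG text chunk."""
    img = Image.open(in_path)
    meta = PngInfo()
    meta.add_text("ai_generated", json.dumps(
        {"provider": provider, "content_id": content_id}))
    img.save(out_path, pnginfo=meta)

def read_trace_info(path: str) -> dict:
    """Recover the embedded provenance record, if present."""
    raw = Image.open(path).text.get("ai_generated", "{}")
    return json.loads(raw)

embed_trace_info("generated.png", "generated_tagged.png",
                 provider="example-model", content_id="abc123")
print(read_trace_info("generated_tagged.png"))
```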


II: Standardizing multi-modal identification and full-cycle supervision to promote the healthy development of AI

Generative artificial intelligence has fundamentally altered human-technology interaction and revolutionized the production and dissemination of knowledge and information across diverse application modes and complex scenarios. Identification technology that is insufficient, uncoordinated, or incomplete, together with its limited application coverage, poses significant latent threats to the security of the networked information space.

The “Measures” and “Standards” comprehensively cover the typical application scenarios for AI-generated synthetic content, proposing provisions that address key technical aspects. They also prescribe suitable identification methods for a spectrum of potential risks, thereby effectively fostering the sound advancement of generative AI technology. Tailored to these scenarios, the “Standard” relies primarily on explicit identification, complemented by implicit identification, and introduces a range of measures around the relevant technical facets to reinforce regulatory oversight.

Firstly, it outlines a plan that combines explicit and implicit identification. Generated content that circulates without oversight harbors inherent risks, including difficulty in discerning authenticity, a lack of transparency, and the potential to propagate misinformation. The Standard emphasizes the seamless addition and detection of explicit identifiers, alongside the secure, tamper-resistant embedding of implicit identifiers, ensuring traceability and risk mitigation for generated content, as sketched below.
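One way to make an embedded identifier tamper-evident, purely as an illustrative sketch, is to sign the provenance record with an HMAC so that any alteration is detectable at verification time. The Standard does not prescribe this particular mechanism; the key, field names, and scheme here are assumptions for demonstration, and a real deployment would require proper key management.

```python
# A minimal sketch of tamper-evident embedding: signing the provenance
# record with an HMAC so alterations fail verification. The secret key
# is a placeholder; key management is out of scope for this sketch.
import hashlib
import hmac
import json

SECRET_KEY = b"replace-with-managed-secret"  # hypothetical placeholder

def sign_record(record: dict) -> dict:
    """Attach an HMAC-SHA256 tag computed over the canonical record."""
    payload = json.dumps(record, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"record": record, "hmac": tag}

def verify_record(signed: dict) -> bool:
    """Recompute the tag and compare in constant time."""
    payload = json.dumps(signed["record"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["hmac"])

signed = sign_record({"provider": "example-model", "content_id": "abc123"})
assert verify_record(signed)  # any edit to the record breaks verification
```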

Secondly, it aims to establish diverse governance modes. Multi-modal technology entails a distinct data representation for each modality. The “Standard” therefore proposes constraints tailored to the application characteristics of audio, video, images, and text, the primary modalities of mainstream generative AI applications, enhancing society’s capability to authenticate generated content.

Thirdly, it covers the entire artificial intelligence lifecycle. The “Standard” ensures that each stage, from production to release and download, adheres to the requisite security standards, setting out stringent technical specifications and management measures at each juncture. This guarantees that AI technology consistently meets privacy-protection and security requirements during development and comprehensively guides the compliant application of generative AI.


III: Promoting the construction of an open industry ecosystem and providing China's answer

The formulation and introduction of the “Measures” and “Standards” stand as pivotal actions for China to advance governance in the generative artificial intelligence sector, foster the healthy development of industry norms, and steer technology towards positive outcomes. As generative artificial intelligence enters a new developmental phase, delineating a reasonable safety threshold, establishing a secure, open, and fair artificial intelligence industry ecosystem conducive to national development, and promoting healthy technological growth have become significant challenges for regulators and industry practitioners.

In response to this challenge, the Standards have taken proactive measures. Firstly, they aim to enhance technology to bolster the identification and detection capabilities of AI-generated synthetic content across multi-modal full-cycle scenarios, guiding the establishment of relevant security platforms. Secondly, they seek to improve management by actively steering enterprises to understand and implement mandatory national standards correctly. This also involves aiding departments in encouraging and guiding local enterprises to comply with these standards, thereby enhancing overall regulatory efficiency. Thirdly, they emphasize an “inclusive and prudent” approach, acknowledging China’s technological development and industrial status quo, actively listening to public suggestions, and refining systems and policies.

During this rapid evolution of artificial intelligence technology, a balance must be struck between safety supervision and innovation guidance to foster healthy development, stimulate social vitality, and establish a robust industrial ecosystem. The “Measures” and “Standards” establish a secure governance environment for generative AI that gives equal weight to development and security, helps the public identify AI-generated synthetic content, ensures information security, and promotes collaborative governance and sound development.

Generative AI holds immense potential and is poised to significantly impact economic, political, social, cultural, and military realms in the future. Faced with the challenges brought about by core technology innovation, the Measures and Standards play a crucial guiding role in the development, deployment, testing, evaluation, compliance management, and security application within the artificial intelligence industry, positioning China at the forefront of the global intelligent technology wave. We anticipate the implementation of the Measures and Standards to accumulate Chinese experience for subsequent artificial intelligence legislation, contribute Chinese wisdom to global AI governance, foster societal joint governance, and establish an open and harmonious cyberspace community for a shared future.


