The official website of the China Cybersecurity Standardization Technical Committee has released a draft for comment of the standard "Cybersecurity Technology: Basic Security Requirements for Generative Artificial Intelligence Services". The standard details the security requirements for generative artificial intelligence services, covering training data security, generated content security, and model security. For training data, it requires managing and verifying data sources and increasing source diversity, and it sets out rules for the use of open-source, self-collected, and commercial training data.
For generated content, the standard requires content filtering and intellectual property management; in particular, training data containing personal information must have usage authorization and management channels. For model security, it requires corresponding security measures and management requirements covering model training, output, monitoring, updating and upgrading, and the software and hardware environment.
The release of this standard demonstrates China's emphasis on security management in the field of generative artificial intelligence and safeguards the healthy development of the field. It also continues China's series of safety management regulations in this area, providing guarantees for the secure implementation and application of artificial intelligence in real-world scenarios, while showing the importance attached to innovative technologies.
The main content of the Basic Security Requirements for generative AI services is as follows:
Data source security
The requirements for service providers are as follows.
a) Collection source management:
1) Before collecting data from a specific source, a security assessment of that source's data should be conducted; if the data contains more than 5% illegal or negative information, data from that source should not be collected;
2) After collecting data from a specific source, the collected data should be verified; if it contains more than 5% illegal or negative information, data from that source should not be used for training.
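The 5% threshold check described above can be sketched in a few lines. This is a hypothetical illustration, not part of the standard: `source_passes` and the `flagged` field are assumed names, and the upstream classifier that marks items as illegal or negative is not shown.

```python
# Hypothetical sketch of the 5% threshold rule: a source passes only if
# the share of items flagged as illegal/negative (by some upstream
# classifier, not shown here) stays at or below the ceiling.

ILLEGAL_CONTENT_CEILING = 0.05  # "more than 5%" disqualifies the source

def source_passes(samples: list[dict]) -> bool:
    """Return True if the sampled data from a source may be used."""
    if not samples:
        return False  # nothing sampled, nothing to verify
    flagged = sum(1 for s in samples if s["flagged"])
    return flagged / len(samples) <= ILLEGAL_CONTENT_CEILING

# Example: 3 flagged out of 100 sampled items -> 3%, within the ceiling
samples = [{"flagged": i < 3} for i in range(100)]
print(source_passes(samples))  # True
```

The same check applies both before collection (assessing a sample of the source) and after collection (verifying what was actually gathered).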
b) Combination of training data from different sources:
1) The diversity of training data sources should be improved. There should be multiple training data sources for each language, such as Chinese, English, etc., and each type of training data, such as text, pictures, audio, video, etc.;
2) If it is necessary to use training data from overseas sources, it should be reasonably combined with training data from domestic sources.
c) The source of training data can be traced:
1) When using open source training data, the open source license agreement or relevant authorization documents of the data source should be available;
2) When using self-collected training data, collection records should be kept, and data that others have explicitly stated may not be collected should not be collected, for example, web page data whose collection has been disallowed, or personal information whose collection the individual has refused to authorize;
3) When using commercial training data:
There should be a transaction contract, cooperation agreement, etc. with legal effect;
If the transaction party or partner cannot provide commitments and relevant certification materials on data source, quality, security, etc., the training data should not be used;
The training data, commitments and materials provided by the transaction party or partner should be reviewed.
4) When using user input information as training data, there should be a record of user authorization.
Data content security
a) Training data content filtering: for each type of training data, such as text, images, audio, and video, all training data should be filtered before it is used for training. Filtering methods include, but are not limited to, keywords, classification models, and manual sampling, to remove illegal and negative information from the data.
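Of the filtering methods the standard lists, keyword filtering is the simplest to sketch. The following is a minimal, hypothetical illustration; the keyword list is a placeholder, and a real deployment would combine this with classification models and manual sampling as the standard describes:

```python
# Minimal sketch of keyword-based pre-training filtering (one of the
# methods the standard lists). BLOCKED_KEYWORDS is a placeholder; a real
# system would maintain a curated, regularly updated list.

BLOCKED_KEYWORDS = {"example_banned_term", "another_banned_term"}

def filter_training_records(records: list[str]) -> list[str]:
    """Drop records containing any blocked keyword (case-insensitive)."""
    kept = []
    for text in records:
        lowered = text.lower()
        if not any(kw in lowered for kw in BLOCKED_KEYWORDS):
            kept.append(text)
    return kept

corpus = ["a clean sentence", "contains example_banned_term here"]
print(filter_training_records(corpus))  # ['a clean sentence']
```

Keyword matching alone is coarse (it cannot catch paraphrases), which is presumably why the standard names classification models and manual sampling alongside it.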
b) Intellectual Property:
1) There should be a training data intellectual property management strategy and the responsible person should be clearly identified;
2) Before the data is used for training, the main intellectual property infringement risks in the data should be identified. If intellectual property infringement and other issues are found, the service provider should not use the relevant data for training;
Note: If the training data contains literary, artistic, or scientific works, it is necessary to focus on identifying copyright infringement issues in the training data and generated content.
3) A complaint and reporting channel for intellectual property issues should be established;
4) In the user service agreement, users should be informed of the intellectual property risks associated with the use of generated content, and the related responsibilities and obligations should be agreed upon with users;
5) Intellectual property-related strategies should be updated in a timely manner according to national policies and third-party complaints;
6) The following intellectual property measures should be in place:
Publicize summary information on the intellectual property rights involved in the training data;
Support third parties in inquiring about the use of training data and related intellectual property issues through the complaint and reporting channels.
c) Personal information:
1) Before using training data containing personal information, the consent of the corresponding individual or other circumstances specified by laws and administrative regulations should be obtained;
2) Before using training data containing sensitive personal information, the separate consent of the corresponding individual or other circumstances stipulated by laws and administrative regulations should be obtained.
Model security requirements
The requirements for service providers are as follows.
a) Model training:
1) During the training process, the security of the generated content should be considered as one of the main considerations for evaluating the quality of the generated results;
Note: Model-generated content refers to the native content directly output by the model without any other processing.
2) Security audits should be conducted regularly on the development frameworks and codes used, with attention paid to open source framework security and vulnerability-related issues, and security vulnerabilities should be identified and fixed.
b) Model output:
1) In terms of the accuracy of generated content, technical measures should be taken to improve the extent to which generated content responds to the user's input intent, improve the conformity of the data and expressions in generated content with common scientific knowledge and mainstream understanding, and reduce erroneous content;
2) In terms of the reliability of generated content, technical measures should be taken to improve the reasonableness of the format and structure of generated content and the proportion of effective content, so as to increase its helpfulness to users;
3) Regarding refusal to answer: the service should refuse to answer questions that are obviously extreme or that obviously induce the generation of illegal or negative information, and should answer other questions normally;
4) The identification of generated content such as pictures and videos should comply with relevant national regulations and standard document requirements.
c) Model monitoring:
1) Continuously monitor the model input content to prevent malicious input attacks, such as injection attacks, backdoor attacks, data theft, adversarial attacks, etc.;
2) Regular monitoring and evaluation methods, together with emergency management measures for the model, should be established to promptly handle security issues discovered through monitoring and evaluation during service provision, and the model should be optimized through targeted instruction fine-tuning, reinforcement learning, etc.
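The input-monitoring requirement above can be illustrated with a simple pattern-based screen for prompt-injection attempts. This is an assumed, minimal sketch: the patterns are placeholders, not an exhaustive defense, and production systems would layer classifiers and anomaly detection on top.

```python
import re

# Illustrative sketch of continuous input monitoring: flag inputs that
# match known prompt-injection phrasings before they reach the model.
# INJECTION_PATTERNS is a placeholder list, not a complete defense.

INJECTION_PATTERNS = [
    re.compile(r"ignore (all |previous )*instructions", re.IGNORECASE),
    re.compile(r"reveal .*system prompt", re.IGNORECASE),
]

def is_suspicious(user_input: str) -> bool:
    """Return True if the input matches a known injection pattern."""
    return any(p.search(user_input) for p in INJECTION_PATTERNS)

print(is_suspicious("Please ignore previous instructions"))  # True
print(is_suspicious("What is the weather today?"))           # False
```

A flagged input would then be routed to the emergency-handling and evaluation processes the standard calls for, rather than silently passed to the model.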
d) Model update and upgrade:
1) A security management strategy should be formulated when the model is updated and upgraded;
2) A management mechanism should be established to organize security assessments again after major updates and upgrades to the model.
e) Software and hardware environment:
1) For computing systems used for model training and inference:
The supply chain security of chips, software, tools, computing power, etc. used in the system should be evaluated, with a focus on supply continuity and stability.
The chips used should support hardware-based secure boot, trusted boot process and security verification.
2) The model training environment and the inference environment should be isolated to avoid security incidents such as data leakage and improper access. Isolation methods include physical isolation and logical isolation.
The above is only part of the content; the full standard is very detailed. If you are interested, you can view the complete text on the official website.
China is also one of the few countries in the world to have successively issued safety management regulations in the field of generative artificial intelligence. On the one hand, this demonstrates the country's emphasis on innovative, transformative technologies; on the other, it helps ensure the secure, scenario-based implementation and application of generative artificial intelligence.