
AI ethics rules tighten scrutiny, test compliance: lawyers | China


New rules on the ethics and application of artificial intelligence (AI) technology have significantly raised the level and rigour of review, leaving startups in the AI sector exposed to compliance risks, lawyers say.

China’s Ministry of Industry and Information Technology, together with nine other departments, jointly issued the trial measures on a review of the ethics, application and use of AI technology. The measures set out provisions on the scope of ethical review, service facilitation, responsible entities, working procedures and supervision, which aim to standardise ethics governance in related fields.

Grace Wang, Zhong Lun Law Firm

Grace Wang, a partner at Zhong Lun Law Firm, said the introduction of the measures “means that AI ethics is no longer an optional ‘bonus point’ for companies, but a mandatory legal compliance baseline”. The measures specify that institutions and enterprises engaged in AI-related activities are the responsible parties for establishing and managing ethical review mechanisms.

China has stepped up governance of AI ethics in recent years in response to technological developments. As early as March 2022, the General Office of the Central Committee of the Chinese Communist Party and the General Office of the State Council issued the Opinions on Strengthening the Governance of Science and Technology Ethics.

The following year, the Ministry of Science and Technology, along with nine other departments, released the Measures for Science and Technology Ethics Review (Trial), detailing review bodies and procedures.

Zou Danli, Commerce & Finance Law Offices

Zou Danli, a partner at Commerce & Finance Law Offices, said the newly issued measures provided more specific rules for applying the aforementioned documents within the AI field.

She said startups in particular should be alert to compliance risks: “A large number of startups are emerging in the AI sector. The most immediate risk is that companies may not fully understand their compliance obligations and proceed with AI activities without conducting the ethical reviews required under the measures, thereby exposing themselves to administrative penalties.”

Zou added that the most notable institutional breakthrough lies in the establishment of clear procedures for AI ethics review, which address startups' needs.

Under the measures, relevant authorities may establish designated service centres to accept commissions from other entities, offering ethical review, re-examination, training and consulting services for AI activities.

“These arrangements help address the shortage of specialised ethics personnel in smaller AI companies and reduce the operational burden and costs associated with compliance,” she said.

Wang pointed to articles 21 to 25 of the measures, which establish an expert re-examination and ongoing review system for AI activities placed on a re-examination list, as having the most significant and far-reaching impact. Under these provisions, high-risk AI activities, after passing preliminary review by an internal ethics committee or a designated service centre, must be submitted to the competent authorities or relevant local bodies for expert re-examination.

She said the impact would be threefold: a marked elevation in the level of review, more stringent ongoing oversight requirements, and binding compliance obligations spanning the entire lifecycle of R&D, launch and operation.

“This marks a shift in AI ethics review from an ‘encouraged’ requirement to a mandatory, enforceable and accountable legal obligation,” Wang said. “Companies can no longer rely solely on internal reviews to complete their compliance loop, but have to accept independent evaluation from external experts, significantly raising the compliance threshold.”

As for high-risk areas, she noted that companies were most vulnerable at five stages: organisational set-up, prior review, high-risk procedures, dynamic management, and registration and filing.

AI research, development and applications involving highly sensitive areas, such as human dignity, life and health, public order and the ecological environment, could be deemed non-compliant if conducted without prior ethical review or without submitting complete documentation, Wang said.

She advised companies to promptly establish an ethics governance framework, strengthen ex ante review and risk assessment mechanisms, strictly implement re-examination procedures for high-risk projects, introduce dynamic monitoring and emergency review processes, and fulfil registration and filing obligations.

“Ethics compliance should be embedded throughout the entire lifecycle of AI development, testing, deployment and operation, to balance technological innovation with ethical safety,” Wang added.

