Abstract:
AI-generated synthetic content is difficult to distinguish from human intellectual creations, a situation that readily gives rise to crises of trust, data risks, and infringement disputes. Establishing labeling obligations is therefore a useful attempt to improve the AI governance system. However, the Measures for the Labeling of Artificial Intelligence Generated Synthetic Content suffer from legislative defects, including overlapping obligated parties, inconsistent performance standards, and the absence of relief rules. Difficulties also remain in legally interpreting the relationship between violations of labeling obligations and AIGC infringement. Furthermore, labeling in professional fields and in new technological applications poses practical challenges to the labeling system. Labeling obligations carry dual functional value in both public and private law. Their regulatory and governance function in public law requires that the obligations be defined clearly and unambiguously so that their fulfillment is predictable. Their function of allocating rights and responsibilities in private law requires clarifying where labeling obligations sit within the framework of tort liability, so as to guide dispute resolution. The labeling boundaries for each obligated party and the principles for handling incorrect labeling should be clarified, without excessively expanding the scope of liability attached to labeling obligations in infringement disputes. In addition, exceptions can be established in response to technological developments, balancing legal rigidity with flexibility.