
Standardizing Platform Recommendations: It’s Not That Simple

规范平台算法推荐,不那么简单

This in-depth article from the director of the Competition Law Research Center of Nankai University lays out the legal intricacies of regulating algorithm recommendations on online platforms, using an infringement case between iQIYI and ByteDance as an example.



Recently, China’s first algorithm recommendation case was adjudicated, Beijing iQIYI Technology Co., Ltd. (hereinafter “iQIYI”) v. Beijing ByteDance Technology Co., Ltd. (hereinafter “ByteDance”).

近期,中国首例算法推荐案——北京爱奇艺科技有限公司(下称“爱奇艺”)诉北京字节跳动科技有限公司(下称“字节跳动”)案宣判。

iQIYI claimed that it legally enjoys the exclusive right to the worldwide dissemination of the hit television show “Story of Yanxi Palace” over information networks. Without authorization, ByteDance used information stream recommendation technology on the Toutiao App which it operates to disseminate and recommend to the public short videos uploaded by users during the show’s broadcasting. These videos received a very high number of playbacks. Given that ByteDance knew, or should have known, that this was infringing content, it failed to perform its duty of reasonable care, there was subjective fault, and it had infringed on iQIYI’s right of information network dissemination for “Story of Yanxi Palace.”

爱奇艺诉称,其依法享有热播影视作品《延禧攻略》在全球范围内独占的信息网络传播权。字节跳动未经授权,在该剧热播期间,在其运营的今日头条App上利用信息流推荐技术,将用户上传的自行截取的短视频向公众传播并推荐,播放量极高。字节公司在应知或明知侵权内容的情况下,未尽到合理注意义务,存在主观过错,侵害了爱奇艺对《延禧攻略》享有的信息网络传播权。

To this, ByteDance responded that the short videos involved were all captured and uploaded by users, while ByteDance itself had only provided information storage space services; it had fulfilled its duty of reasonable care, there was no subjective fault, and this did not constitute infringement.

对此,字节跳动辩称:涉案的短视频均系用户自行截取上传,字节跳动仅提供信息存储空间服务,已尽到合理注意义务,不存在主观过错,不构成侵权。

The People’s Court of Haidian District, Beijing held that ByteDance had sufficient conditions, capabilities, and reasonable grounds to know that many of its Toutiao users had committed a large number of infringing behaviors, and that this constituted a case of “should know” as stipulated in law. The measures taken by ByteDance in this case had failed to reach the “necessary” level. ByteDance had not only provided information storage space services, but had also provided information stream recommendation services, and should have fulfilled a higher duty of care with respect to users’ infringements. In the end, the court held that ByteDance’s conduct in this case constituted assisted infringement, and ordered that ByteDance compensate iQIYI for 1.5 million RMB in economic losses and 500,000 RMB in reasonable litigation expenses, a total of 2 million RMB.

北京市海淀区人民法院审理认为,字节跳动具有充分的条件、能力和合理的理由,知道其众多头条号用户大量地实施了涉案侵权行为,属于法律所规定的应当知道情形。字节跳动在本案中所采取的相关措施,尚未达到“必要”程度。字节跳动不仅提供了信息存储空间服务,且同时提供了信息流推荐服务,理应对用户的侵权行为负有更高的注意义务。最终,法院认为字节跳动的涉案行为构成帮助侵权,判定其赔偿爱奇艺经济损失150万元及诉讼合理开支50万元,共计200万元。

This case prompted people to think about tort liability in the use of algorithm recommendation technology. For example, how can we identify a platform's subjective fault and the elements of its conduct when it uses algorithm recommendations? Does the platform have an obligation to review the specific content recommended by algorithms, and what is the legal basis for this? If the platform must undertake this obligation, how far does it extend? As a provider of algorithm recommendation services, how can the platform mount an effective legal defense? And so on.

该案件引发人们对算法推荐技术使用中有关侵权责任的思考。比如,如何认定平台在使用算法推荐时的主观过错及行为构件?平台对算法推荐的具体内容是否负有审查义务,且法律依据是什么?平台如需承担义务,那么该义务要达到何程度?作为算法推荐服务的提供者,平台如何提供有效抗辩?等等。

In fact, when a platform provides algorithm recommendation services, there are two difficulties in judging whether its recommendation behavior constitutes infringement.

事实上,在平台提供算法推荐服务时,判断其推荐行为是否构成侵权,存在两个难点。

First, the service model of algorithm recommendation is to obtain user authorization. The platform then pushes personalized content after analyzing user behavior through the algorithm. Although the right to decide to accept the pushed content rests with the user, the control over material actions, such as the generation and synthesis of specific content as well as the push method, time, frequency, and effect, rests with the platform. The question is whether, after accepting the user’s choice, the platform still needs to take the initiative to conduct a substantive review of the recommended content and methods, and based on this, judge whether the platform knows or should know the effect of its behavior and must bear the legal duty of care.

其一,算法推荐的服务模式是获得用户授权,平台通过算法对用户行为分析后作出的个性化推送,虽然决定接受推送的权利在用户方,但对具体内容的生成、合成以及推送方式、时间、频次及效果等实质行为的控制权则在平台。这是否意味着平台在接受用户选择后,仍需主动对推荐内容及方式方法做出实质性审查,并据此判断平台是否明知或应知其行为效果,并需对此承担法定注意义务。
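To make this division of control concrete, here is a minimal, purely illustrative sketch in Python of where the user-side choice ends and the platform-side control begins in such a service model. The names (`RecommendationService`, `PushPolicy`, and so on) are hypothetical and the logic is a placeholder; it is not a description of any actual platform's implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class UserConsent:
    # The user's side of the relationship: whether to authorize
    # personalized recommendation at all (hypothetical fields).
    user_id: str
    personalization_authorized: bool = True

@dataclass
class PushPolicy:
    # The platform's side: how, when, and how often content is pushed.
    # These knobs sit entirely with the platform operator.
    max_pushes_per_day: int = 20
    push_window_hours: tuple = (8, 23)
    ranking_model: str = "hypothetical-ranking-model-v1"

class RecommendationService:
    """Illustrative only: the platform analyzes user behavior and decides
    what to generate, synthesize, and push; the user only opts in or out."""

    def __init__(self, policy: PushPolicy):
        self.policy = policy  # controlled by the platform, not the user

    def recommend(self, consent: UserConsent, behavior_log: List[str]) -> List[str]:
        if not consent.personalization_authorized:
            return []  # the user's (only) lever: declining personalized push
        # Everything below -- candidate generation, ranking, frequency --
        # is decided by the platform's algorithm and push policy.
        candidates = self._candidates_from_behavior(behavior_log)
        return candidates[: self.policy.max_pushes_per_day]

    def _candidates_from_behavior(self, behavior_log: List[str]) -> List[str]:
        # Stand-in for behavior analysis and content generation/synthesis.
        return [f"item-related-to:{event}" for event in behavior_log]

if __name__ == "__main__":
    service = RecommendationService(PushPolicy())
    user = UserConsent(user_id="u123")
    print(service.recommend(user, ["watched:palace-drama-clip"]))
```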

Second, based on the platform's material control over the algorithm recommendation service, it should assume more obligations for the legal and regulatory compliance of the recommended content and formats. According to the controller principle widely applied in the data field, responsibility for a platform's algorithm recommendation behavior can likewise be determined with reference to that principle. That is, the platform, as the main controller of the algorithm recommendation service, bears the first and primary responsibility, and it is necessary to clarify the effectiveness and appropriateness of the necessary measures it takes. In this way, we can identify and determine what obligations the platform should undertake in practice as a user and controller of algorithm recommendation technology, as well as the legal defense rights it enjoys. At the same time, in addition to the platform's duty of care with regard to the use of algorithm recommendation technology, should it also undertake necessary review responsibilities for the content and objects involved in the technology? If so, does this mean that the platform needs, to some extent, to intervene in or restrict its users' right to choose, or even their right to self-determination, with respect to information services?

其二,基于平台对算法推荐服务所拥有的实质控制权,理应对推荐内容和形式的合法合规承担更多义务。依据数据领域广泛存在的控制者原则,对平台算法推荐行为的责任认定亦可以参照控制者原则予以处理,即平台作为算法推荐服务的主要控制者负有首责与主责,有必要明确平台采取必要措施的实效性与适度性,以此识别和判定平台作为算法推荐技术使用者与控制者在实践中应当承担何种义务,以及享有何种抗辩权利。同时,平台除了对算法推荐技术使用负有注意义务,是否还应对技术所涉及内容及对象承担必要的审核责任?如果是,是否意味着平台需要在一定程度上干预或限制用户对信息服务的选择权甚或自决权?

This once again raises the possibility of detracting from the rights and interests of users in order to regulate platforms. That is, in order to consolidate or even expand the platform's duty of care, must the platform be granted more control powers? Although this could prevent and limit the harm caused by a platform's abuse of algorithm technology and protect the legitimate rights and interests of other operators, it may also improperly detract from the legitimate rights and interests of consumers and users on the platform. If we want to balance the legitimate rights and interests of the various parties in the algorithm recommendation service process, this is an unavoidable problem. Further, at a deeper level, this involves finding a balance between regulating the use of algorithms and incentivizing algorithm innovation.

在此,又引发了为了规范平台而减损用户权益的可能,即为了轧实甚或扩大平台注意义务而不得课以其更多控制权能,虽然能防治和约束平台可能滥用算法技术侵权的危害,保护其他经营者合法权益,但是也可能不当减损平台上消费者用户的合法权益。若要平衡算法推荐服务过程中多元主体的合法权益,这是不能回避的问题。甚至从更深层次看,这还关系到规制算法使用与激励算法创新之间的平衡。

How to Identify What a Platform “Should Know”

如何认定平台“应当知道”

It is generally believed that “knows” (明知) implies that the platform is clearly aware of the existence of infringement. In practice, this is generally reflected in knowledge from a notification (通知知道) or “actual knowledge” (实际知道) of the existence of infringement gained through a channel such as contract negotiations or business cooperation. The term “should know” (应知), meanwhile, indicates that, after fulfilling its reasonable duty of care, the platform should be aware of the infringement. The former [“knows”] looks at whether the platform has actively intervened in infringing content and behavior, while the latter [“should know”] is more concerned with whether the platform has passively failed to fulfill the duty of care it owes. At the same time, the latter is more complicated than the former.

一般认为,“明知”意味着平台明确知晓侵权行为存在,在实践中通常体现为通知知道,或经由合同谈判、业务合作等途径“实际知道”侵权行为存在等情形。“应知”则意指其尽到自身合理的注意义务后,应当对侵权行为知情。前者考察平台是否对侵权内容、行为做出主动介入,后者则更为关注平台是否消极不履行自身应尽的注意义务。同时,后者较前者的情形更为复杂。

When investigating whether a platform “should know” of the existence of infringement, the judgment must take the platform’s duty of care and its performance thereof into account.

考察平台是否“应知”侵权行为的存在,则需要结合平台的注意义务及其履行情况来认定。

When interpreting “should know”, attention should be paid to the difference between “should have known” (应当知道) and “presumed to know” (推定知道). “Should have known” should be interpreted as “should have known but did not know,” and not as “presumed to know.” A judgment of “should have known but did not know” is based on the existence of a legal or agreed obligation that the entity “should know,” and so it corresponds to the subjective state of negligence on the part of the actor. However, there is no corresponding basic obligation implied by “presumption of knowledge,” and the state of an actor’s “knowing” is a conclusion drawn from the existence of certain facts or the occurrence of certain behaviors. Article 1197 of the Civil Code stipulates: “If a network service provider knows or should know that a network user uses its network services in infringement of the civil-law rights and interests of others and fails to take necessary measures, it shall be jointly and severally liable with the network user.” By interpreting “should know” as “should have known but did not know” we include the state of negligence in the scope of application of Article 1197 of the Civil Code, which gives “should know” an independent position in relation to “knows” and harmonizes the internal structure of the law. This is a fairly reasonable interpretation.

在对“应知”予以解释时,需注意“应当知道”与“推定知道”之间存在的区别。“应当知道”应被解释为“应知而未知”,而非“推定知道”。“应知而未知”判断的基础是有法定或约定的“应知”义务的存在,对应的是行为主体过失的主观状态。而判断“推定知道”则不存在对应的基础义务,行为主体“知道”的状态是通过某些事实的存在或行为的发生而得出的结论。《民法典》第1197条规定:“网络服务提供者知道或者应当知道网络用户利用其网络服务侵害他人民事权益,未采取必要措施的,与该网络用户承担连带责任。”将“应知”解释为“应知而未知”,将过失状态纳入了《民法典》第1197条的适用范围,使得“应知”获得了独立于“知道”的地位,协调了法条的内部结构,是较为合理的解释。

Let us take the iQIYI vs. ByteDance case as a specific example. The court held that although in the warning letter and lawyer’s letter sent by iQIYI to ByteDance there was no information, such as URLs of relevant short videos, which could be used to precisely locate infringing files, based on factors such as the TV drama’s popularity, the characteristics of the content pushed by the algorithm used by ByteDance, the short videos’ popularity and user feedback, it could be determined that ByteDance had the sufficient conditions, capabilities, and reasonable grounds to know about the existence of infringing behavior on the platform which it operates.

具体以爱奇艺诉字节跳动案为例。法院认为,尽管在爱奇艺向字节跳动发送的预警函及律师函中并未存在涉案短视频的URL等可以精确定位侵权文件的信息,但是依据涉案电视剧热度、字节跳动所用算法的推送特点、涉案短视频的播放热度及用户反馈等因素,认定字节跳动有充分的条件、能力和合理的理由知道其运营的平台上存在侵权行为。

Considering the vast amount of internet information, as network service providers, platform entities generally do not have the obligation to actively review content, and fulfilling the obligation of active review would increase the burden on platforms to an unreasonable degree. However, compared with the previous provisions of Article 36, paragraph 3 of the Tort Law, the amendment of Article 1197 of the Civil Code that changed “knows” to “knows or should know” actually reflects more stringent requirements: due to factors such as the high risk of infringement in some sectors, the method and nature of network service providers’ services, and their own nature and professional capabilities, even in the face of massive amounts of information, some network service providers should bear the obligation of actively reviewing content before the fact.

考虑到互联网信息海量性特点,作为网络服务提供者的平台主体一般不负有主动审查义务,履行主动审查义务会加大其不合理的负担。然而,《民法典》第1197条规定相较于其之前的《侵权责任法》第36条第3款的规定,将“知道”修改为“知道或应当知道”,实际上体现了更严格的要求:基于一些板块具有高度侵权风险、网络服务提供者提供服务的方式与性质,及其本身的性质和专业能力等因素,即使面对海量信息,部分网络服务提供者也应当承担事前主动审查的义务。

On this point, compared with ordinary network service providers, platforms have advantages with respect to data, technology, and capital scale; once a platform uses algorithm technology to commit or assist in committing infringement, the resulting damage is more serious. In practice, platforms undertake public affairs functions such as platform-centric cyberspace control and management, which is the biggest difference between platforms and ordinary network service providers. If a platform algorithm has added instructions or standards for actively screening and pushing infringing content, its infringing behavior constitutes “knowing” infringement. If the platform has the information management capabilities to intervene in and adjust push results, and other factors are present, such as alleged infringement of relatively well-known works, it can be determined that the platform acted with subjective fault, and this constitutes an instance where the platform “should know” of the infringement. Based on this, it is reasonable to require platforms to undertake a certain degree of duty of care.

为此,相较于普通网络服务提供者,平台具有数据、技术、资本规模等优势,一旦其借助算法技术实施或帮助实施侵权行为,造成的损害要更为严重。实践中,平台承担了以其为中心的网络空间控制、管理等类公共事务职能,这是其与普通网络服务提供者最大的区别。如果平台算法中加入了主动筛选和推送侵权内容的指令或标准,其侵权行为构成“明知”;如果平台具有对推送结果干预、调整的信息管理能力,还存在涉嫌侵权的作品较为知名等因素,即可认定平台具有主观过错,其对侵权行为构成“应知”。基于此,要求平台承担一定程度的注意义务具有合理性。

Of course, the understanding and interpretation of platform duty of care are not unchanging. The specific type of platform, technical conditions, degree of involvement, popularity of works, and so on are all factors that affect the interpretation of duty of care. However, in identifying the degree of duty of care borne by the platform, the basic principles and analytical framework used should remain stable. That is, it should be judged in light of the platform’s ability to use and manage information, and the relationship between the platform’s legitimate rights and corresponding obligations should be balanced. On this basis, comprehensive consideration should be given to the platform business type, scale, industry standards, business model, and other factors and individual circumstances. It should not be too mechanical.

当然,对平台注意义务的理解与解释并非一成不变,平台的具体类型、技术条件、涉事程度、作品热度等,都是影响注意义务解释的因素。然而,对平台所承担注意义务程度的识别,其基本的原则与分析框架应保持其稳定性,即应当与平台对信息的使用和管理能力相适应,应平衡平台正当权益与相应义务之间的关系。在此基准上,结合平台业务类型、规模大小、行业标准、商业模式等要素与个案情况进行综合考量,不应过于机械。

How to Identify the Platform’s “Necessary Measures”

如何认定平台的“必要措施”

In the case of iQIYI vs. ByteDance, the court held that in judging whether the platform had taken the necessary measures, it had to look not only at means and methods, but also at whether the due effect and purpose had been achieved. Determining effect could not be based solely on whether infringing content was deleted and the time of deletion. Specifically, first, after the date on which ByteDance claimed to have taken copyright management measures, a large number of infringing short videos were still appearing. Second, ByteDance did not deal with the allegedly infringing short videos in a timely manner. Therefore, judging from the actual effect of the measures taken by ByteDance, it did not meet the substantive requirements of effectively stopping and preventing obvious infringements, so its behavior could not be determined to have met the standard of “necessary measures.”

在爱奇艺诉字节跳动案中,法院认为,判断平台是否采取了必要措施,既要关注手段与方式,也要关注是否实现了应有的效果与目的。对效果认定不能仅以是否删除,以及删除时间作为判断标准。具体而言,首先,在字节跳动声称采取版权管理措施的日期之后,依然出现了大量侵权的短视频;其次,字节对涉嫌侵权短视频的处理并不及时。因此,从字节跳动采取措施的实际效果来看,并不符合有效制止、预防明显侵权行为的实质要求,不能认定其行为达到“必要措施”的标准。

If platforms are required to screen for infringing works using a content comparison method, the key is to distinguish between adaptations that require prior permission and fair use that does not. Current algorithm filtering technology can only make judgments based on objective criteria and cannot adapt to flexible and changing specific scenarios; it may fail to filter out infringing content, and may even mistakenly filter out legitimate content. In iQIYI vs. ByteDance, the court held that even if algorithm recommendation technology could not identify the specific content of short videos, the platform should still have taken measures concerning how it defined the scope of algorithm-recommended content, how it set up and optimized the specific methods of algorithm recommendation, and how it brought infringing content that had already entered the scope of recommendation into review so as to avoid its large-scale and long-term dissemination.

如果要求平台按照内容对比的方式来筛查侵权作品,则重点在于区分需要取得事前许可的改编行为与无需获取事前许可的合理使用行为。现行算法过滤技术只能依据客观标准作出判断,并无法适应灵活多变的具体场景,可能无法过滤掉侵权内容,甚至有可能错误过滤掉合法内容。在爱奇艺诉字节跳动案中,法院认为,即使算法推荐技术不能识别短视频具体内容,但对于平台如何划定算法推荐内容的范围,如何设置和优化算法推荐的具体方式,以及如何将已经进入推荐范围的侵权内容纳入复审,避免其大范围、长时间传播,平台应当采取措施。
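As a purely hypothetical illustration of the kind of measures the court describes, namely scoping what may enter recommendation and re-reviewing what has already been pushed, the sketch below matches uploads against a fingerprint list of notified works and queues everything else for later review. The names (`notified_work_fingerprints`, `rereview_queue`, `admit_to_recommendation`) are invented for this example, and real systems use far more sophisticated video fingerprinting than a hash.

```python
import hashlib
from collections import deque

# Hypothetical store of fingerprints derived from works named in warning
# letters or lawyer's letters (e.g., frames of a notified drama).
notified_work_fingerprints = {"a1b2c3", "d4e5f6"}

# Content admitted to recommendation is queued for periodic re-review,
# since matching on objective criteria alone can both miss infringement
# and wrongly flag fair use.
rereview_queue = deque()

def fingerprint(video_bytes: bytes) -> str:
    # Stand-in for real perceptual/video fingerprinting.
    return hashlib.sha256(video_bytes).hexdigest()[:6]

def admit_to_recommendation(video_bytes: bytes, video_id: str) -> bool:
    """Decide whether a short video may enter the recommendation pool."""
    if fingerprint(video_bytes) in notified_work_fingerprints:
        return False  # exclude obvious matches up front
    # No objective match, but it may still be infringing (or fair use):
    # schedule it for re-review once it starts being recommended.
    rereview_queue.append(video_id)
    return True

if __name__ == "__main__":
    print(admit_to_recommendation(b"some uploaded clip", "v001"))
    print(list(rereview_queue))
```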

In practice, algorithm technology is already widely applied in business fields such as monitoring information suspected of infringement, sending infringement notifications, and disposing of suspected infringing information. Specifically, using algorithm technology to actively monitor and intercept information suspected of infringement has become the norm; copyright authorities have already implemented requirements for online platforms to actively monitor and intercept information suspected of infringement; and copyright monitoring and rights protection services using algorithm technology have become marketized service projects of considerable scale. In view of this, platforms have the basis for taking more active measures, so the corresponding scope and requirements of their duty of care also need to be adjusted with reference to developments in algorithm technology.

实践中,算法技术已经广泛运用到监测涉嫌侵权信息、发送侵权通知、处置涉嫌侵权信息等业务领域中。具体体现在,利用算法技术主动监测、拦截涉嫌侵权信息已成为常态;版权主管部门已对网络平台主动监测、拦截涉嫌侵权信息做出要求;利用算法技术的版权监测、维权服务已成为具有相当规模的市场化服务项目。鉴于此,平台完全有采取更积极措施的基础,由此所对应的注意义务范畴与要求也需要结合算法技术的进步予以调整。

For example, a platform can block infringing content, whether actively filtered out by its algorithms or identified through a rights holder's notification, so that the same infringing content is not pushed again, which would cause repeated infringement and expanded damage. It is fair to say that raising a platform's duty of care will push it to actively improve the compliance requirements and quality of its algorithm recommendation services, to take more active and effective measures, and to incorporate the cost of preventing infringement of others' intellectual property rights into its overall operating cost considerations. This will be conducive to the stable and long-term development of the platform.

譬如,平台对利用算法主动过滤出和权利人自发通知的侵权内容采取屏蔽措施,防止相同侵权内容再次被推送,造成反复侵权和损害扩大。可以认为,注意义务的提高会促使平台主动提升算法推荐服务的合规要求和水平,采取更积极有效的措施,将不侵犯他人知识产权的预防成本纳入到总体运营成本考量范围,有利于平台行稳致远。
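The paragraph above describes a re-push prevention measure. A minimal sketch of one way such blocking might work, keyed on content fingerprints, is shown below; the data structures and function names (`blocked_fingerprints`, `handle_takedown`, `filter_push_batch`) are hypothetical and the example is not drawn from any actual platform.

```python
from typing import Dict, List, Set

# Hypothetical fingerprints of content already found infringing, whether
# flagged by the platform's own filter or named in a rights holder's notice.
blocked_fingerprints: Set[str] = set()

# Hypothetical mapping from candidate video id to its content fingerprint.
candidate_fingerprints: Dict[str, str] = {
    "v001": "a1b2c3",
    "v002": "ffee99",
}

def handle_takedown(fp: str) -> None:
    """Record a takedown so the same content is never pushed again."""
    blocked_fingerprints.add(fp)

def filter_push_batch(candidate_ids: List[str]) -> List[str]:
    """Drop any candidate whose fingerprint matches blocked content."""
    return [
        vid for vid in candidate_ids
        if candidate_fingerprints.get(vid) not in blocked_fingerprints
    ]

if __name__ == "__main__":
    handle_takedown("a1b2c3")  # e.g., after a rights holder's notice
    print(filter_push_batch(["v001", "v002"]))  # -> ['v002']
```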

In fact, with the continuous rise in the economic status and importance of platforms, the content and extent of the duty of care that should be borne by platform operators are also expanding and strengthening. Therefore, our understanding and application of their “necessary measures” must also be constantly adjusted. This involves more than the expansion of normative interpretations of legal texts. We also need to weigh up our choices about what kind of attitude and model of thinking to apply. For example, further observation is needed to understand whether a theory of strict interpretation or a pragmatic application method is more conducive to the integration of the normative value and the application value of the law, to maintain the authority, feasibility, and accessibility of law.

事实上,随着平台经济地位和重要性的不断提升,平台经营者所应承担的注意义务的内容和程度也在不断扩大和加强,故对其必要措施的理解和适用也必须不断调整。这其中不仅涉及对法律文本的规范性解释的扩容,还需要考虑选择用什么样的态度和模式思路,比如,严格教义学意义上的解释论,还是实用主义的适用方法,哪一种更有利于融合法律的规范价值与应用价值,保持法律的权威性、可行性及可及性还有待进一步观察。

However, what is certain is that faced with the need to evaluate new legal relationships and behaviors arising from the development of scientific and technological (S&T) innovation, it is necessary to adjust the goals and interpretation rules and methods in a way that is more conducive to problem solving. For infringement that may be caused by algorithm recommendation service behavior, we must properly handle the relationship between incentivizing algorithm innovation and regulating the improper use of algorithms, and carry out governance on the premise of respecting the basic laws of algorithm operation and acknowledging the bounded rationality of algorithm technology cognition. That is, it is necessary to fully recognize the advantages and benefits of platforms’ use of algorithm technology, but also to be alert to the risks and harms of their misuse of algorithm technology.

然而可以肯定的是,面对科技创新发展所引发的新型法律关系及行为的评价,必须以更有利于问题解决调整目标和解释规则及方法。具体到算法推荐服务行为可能引致的侵权而言,必须处理好激励算法创新与规制算法不当使用之间的关系,在尊重算法运行基本规律,承认对算法技术认知的有限理性前提下,展开治理,即既要充分认识到平台利用算法技术的优势与利好,也要警惕其滥用算法技术的风险与危害。

Collaborative Governance and Balancing the Rights and Interests of Multiple Entities

协同治理与多元主体权益平衡

It should be noted that, as an important factor in promoting the development of the platform economy, algorithms have significantly improved the efficiency of users’ access to information and provide an important tool for the development of internet information.

应该看到,作为推动平台经济发展的重要要素,算法大幅提升了用户获取信息的效率,为互联网信息发展提供了重要工具。

At the same time, the unreasonable use of algorithms has also given rise to a series of problems: in addition to the intellectual property infringement problem caused by algorithm recommendations, problems such as algorithm collusion, big data swindling (大数据杀熟), and false information all pose threats to the platform economy’s regulated, healthy, and sustainable development. Algorithmic governance should coordinate information security and technological development and, at the same time as doing a good job of risk regulation, further promote the continuous improvement of algorithm technology.

同时,算法的不合理运用也引发了一系列问题:除去算法推荐引发的知识产权侵权问题,算法共谋、大数据杀熟、虚假信息等问题对平台经济规范健康持续发展都产生威胁。算法治理应当统筹信息安全与技术发展,在做好风险规制的同时,进一步推动算法技术不断完善。

The Provisions on the Administration of Internet Information Service Algorithmic Recommendations (hereinafter “Provisions”), jointly issued by the Cyberspace Administration of China, the Ministry of Industry and Information Technology, the Ministry of Public Security, and the State Administration for Market Regulation, entered into force on March 1, 2022. The Provisions supply clear, targeted, and operational legal norms for addressing disorder in algorithm recommendation, and they are hugely significant for regulating the algorithm recommendation behavior of internet information services and promoting their healthy and orderly development.

国家网信办、工信部、公安部、市场监督管理总局联合发布的《互联网信息服务算法推荐管理规定》(下称《规定》)已于2022年3月1日起施行。这一《规定》为惩治算法推荐乱象提供了明确且具有针对性和操作性的法律规范,对规范互联网信息服务算法推荐行为,促进互联网信息服务健康有序发展具有重大意义。

For intellectual property infringement cases involving algorithm recommendation, the Provisions introduce important ideas on adjusting the duty of care system. First, Article 12 of the Provisions explicitly encourages algorithm recommendation service providers to improve the transparency and interpretability of the rules they use for retrieval, sorting, selection, push, and display. Article 24 establishes the algorithm filing system: it requires algorithm recommendation service providers with public opinion attributes or social mobilization capabilities to submit information such as the service provider's name, form of service, domain of application, algorithm type, algorithm self-assessment report, and content intended for public disclosure; to complete filing procedures; and to promptly complete modification procedures if the filed information changes.

对于算法推荐中的知识产权侵权案件而言,《规定》在调整注意义务体系上提出了重要思路。首先,《规定》第12条明确,鼓励算法推荐服务提供者优化检索、排序、选择、推送、展示等规则的透明度和可解释性。第24条则明确了算法备案制度,要求具有舆论属性或者社会动员能力的算法推荐服务提供者应当提交服务提供者的名称、服务形式、应用领域、算法类型、算法自评估报告、拟公示内容等信息,履行备案手续;备案信息发生变更的,还需及时办理变更手续。
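Purely for illustration, the filing items listed in Article 24 could be represented as a simple record like the sketch below. The field names are my own paraphrase of the Provisions, not an official schema, and the amendment method merely mirrors the requirement to update a filing when the information changes.

```python
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class AlgorithmFilingRecord:
    # Items Article 24 requires providers with public opinion attributes or
    # social mobilization capabilities to submit (field names paraphrased).
    provider_name: str
    service_form: str            # e.g., short-video information feed
    application_domain: str      # e.g., entertainment content
    algorithm_type: str          # e.g., personalized push
    self_assessment_report: str  # reference to the report document
    public_disclosure_content: str
    filed_on: date

    def amend(self, **changes) -> "AlgorithmFilingRecord":
        """Filed information changed: promptly produce an updated record."""
        data = asdict(self)
        data.update(changes)
        return AlgorithmFilingRecord(**data)

if __name__ == "__main__":
    record = AlgorithmFilingRecord(
        provider_name="Example Platform Co.",
        service_form="short-video information feed",
        application_domain="entertainment",
        algorithm_type="personalized push",
        self_assessment_report="self-assessment-2022Q1.pdf",
        public_disclosure_content="summary of recommendation rules",
        filed_on=date(2022, 3, 1),
    )
    print(record.amend(algorithm_type="sorting and selection"))
```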

In a narrow sense, we can think of algorithm transparency as meaning that algorithm programmers and implementers are obliged to disclose algorithm information. That is, they are required to publicly disclose, and report to the competent authorities, the code, data, decision trees, and other information on the algorithms they adopt. The establishment of an algorithm filing system and relevant rules for algorithm interpretation will help improve algorithm transparency, avoid the forming of “algorithm black boxes,” and avoid the use of algorithmic technologies suspected of infringing on intellectual property rights.

可以理解,狭义上的算法透明是指,算法程序设计者与实施者有义务公开算法信息,即要求其公开披露并向主管部门报告其采用算法的代码、数据、决策树等信息。算法备案制度与算法解释相关规则的确立,有助于提升算法透明度,避免“算法黑箱”形成,避免出现涉嫌侵犯知识产权的算法技术运用。

However, it remains to be seen how well this form of algorithmic disclosure actually works. This is because: first, even if information such as the operating principle of the algorithm is made public, it is still difficult for the public and other groups without specialized knowledge to truly understand an algorithm’s operating mechanism, much less detect whether there is a risk of infringement. Second, if the process of algorithm disclosure lacks comments, feedback, and supervision by relevant entities, the credibility of the disclosed information will also be questionable.

然而,这种形式的算法公开实际效果如何,尚待观察,原因在于:其一,即使将算法运行原理等信息公开,社会公众等不具有专业知识的群体也很难从中真正了解算法的运行机制,更难以觉察其中是否存在侵权风险。其二,算法公开的过程如果缺乏有关主体的评论、反馈、监督,则公开信息的可信度也值得怀疑。

In addition, the healthy and orderly development of the industry needs, on the one hand, legal norms and administrative supervision to form external constraints and, on the other hand, industry self-discipline. Industry self-discipline achieved through industry associations has a more direct impact on industry activities and a greater level of flexibility. Article 5 of the Provisions clearly encourages relevant industry organizations to strengthen industry self-discipline, to establish and improve industry standards, industry guidelines, and self-discipline management systems, and to urge and guide algorithm recommendation service providers to formulate and improve service standards, provide services in accordance with the law, and accept social supervision. The problem of governing intellectual property infringement in algorithm recommendation services also requires joint consultation, co-construction, and co-governance by various entities such as platforms and industry associations.

其次,行业健康有序发展,一方面需要法律规范和行政监管形成外部约束,另一方面也需要行业自律,通过行业协会开展的行业自律对行业活动影响更为直接且更加灵活。《规定》第5条明确鼓励相关行业组织加强行业自律,建立健全行业标准、行业准则和自律管理制度,督促指导算法推荐服务提供者制定完善服务规范、依法提供服务并接受社会监督。治理算法推荐服务中的知识产权侵权问题,同样需要平台、行业协会等多元主体共商、共建、共治。

Finally, Article 23 of the Provisions clarifies that the relevant departments should establish a graded and categorized security management system for algorithms. Algorithm recommendation service providers are to be placed under graded and categorized management based on factors such as the public opinion attributes or social mobilization capabilities of their services, the content categories involved, user scale, the importance of the data processed by the algorithm recommendation technology, and the degree of intervention in user behavior.

最后,《规定》第23条明确,有关部门应当建立算法分级分类安全管理制度,根据算法推荐服务的舆论属性或者社会动员能力、内容类别、用户规模、算法推荐技术处理的数据重要程度、对用户行为的干预程度等对算法推荐服务提供者实施分级分类管理。

Specifically, the Provisions divide algorithm technologies into types such as generation and synthesis, personalized push, sorting and selection, retrieval and filtering, and scheduling and decision-making. Yet risk levels, risk types, the interests that need to be balanced, the characteristics of the entities involved, and other factors differ significantly across application fields such as finance, transportation, healthcare, and news.

具体来说,《规定》将算法技术分为生成合成类、个性化推送类、排序精选类、检索过滤类、调度决策类等类型。而在金融、交通、医疗、新闻等应用领域之间的风险级别、风险类型、利益平衡需求、主体特点等要素之间存在显著差别。
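A toy sketch of how such graded and categorized management might be modeled is shown below. The algorithm categories come from the Provisions, but the scoring weights, thresholds, and tier names are invented purely for illustration and have no basis in the regulation itself.

```python
from enum import Enum

class AlgorithmCategory(Enum):
    # The categories named in the Provisions.
    GENERATION_SYNTHESIS = "generation and synthesis"
    PERSONALIZED_PUSH = "personalized push"
    SORTING_SELECTION = "sorting and selection"
    RETRIEVAL_FILTERING = "retrieval and filtering"
    SCHEDULING_DECISION = "scheduling and decision-making"

def management_tier(
    has_public_opinion_attributes: bool,
    user_scale: int,
    data_importance: int,      # hypothetical 1-5 scale
    intervention_degree: int,  # hypothetical 1-5 scale
) -> str:
    """Toy grading: the factors mirror Article 23, the weights and thresholds do not."""
    score = (
        (3 if has_public_opinion_attributes else 0)
        + (2 if user_scale > 10_000_000 else 0)
        + data_importance
        + intervention_degree
    )
    if score >= 9:
        return "high-tier supervision"
    if score >= 5:
        return "medium-tier supervision"
    return "base-tier supervision"

if __name__ == "__main__":
    print(AlgorithmCategory.PERSONALIZED_PUSH.value)
    print(management_tier(True, 50_000_000, 4, 3))  # -> high-tier supervision
```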

When it comes to the specific issue of intellectual property infringement by algorithm recommendation services, in order to prevent such services from infringing on the legitimate rights and interests of others and to avoid situations like the iQIYI vs. ByteDance case, we can focus in particular on intellectual property protection issues in sorting and selection-type, personalized push-type, and other specific types of algorithm technologies, strengthen the comparison of short videos against the copyrighted source works (权源作品), and thereby avoid losses caused by such algorithm recommendation services amplifying infringement.

具体到算法推荐服务的知识产权侵权问题上,为了避免算法推荐服务侵犯到他人合法权益,发生类似于爱奇艺诉字节跳动案中的情形,可以重点关注排序精选类、个性化推送类的算法技术中知识产权保护问题,强化短视频与权源作品的对比,避免因此类算法推荐服务扩大侵权带来的损失。

While regulating platform algorithm recommendation service behavior, user rights to self-determination, such as freedom of data dissemination, freedom of sharing, and the ability to choose, cannot be ignored. In the “user chooses to share > platform pre-review > algorithm recommendation > user receives push content” process, platform pre-review is the decisive point in determining whether user-shared content can be pushed to more users through algorithm recommendation. The platform therefore needs to formulate complete pre-review standards and make them public for all platform users to view, so that users share content only after they have read and agreed to those standards.

在规范平台算法推荐服务行为的同时,数据传播自由、分享自由、可选择性等用户自决权益也不能被忽视。在用户选择分享-平台事前审查-算法推荐-用户收到推送的流程中,用户分享是否能经过算法推荐推送至更多用户,平台事前审查是具有决定性的一环,因此平台需要在制定完善的事前审查标准,并将其公开供所有平台用户查阅,用户知晓并同意后方可进行分享。
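Here is a minimal, purely illustrative walk-through in code of the four-step flow just described, with pre-review as the gate before recommendation. The function names and the pre-review rule are hypothetical and stand in for whatever published standards a platform actually adopts.

```python
from typing import List, Optional

# A minimal, illustrative walk-through of the "user shares -> platform
# pre-review -> algorithm recommendation -> user receives push" flow
# described above. All names and rules here are hypothetical.

PUBLISHED_REVIEW_STANDARDS = (
    "Uploads must not reproduce substantial portions of works whose "
    "rights holders have not authorized their dissemination."
)

def user_shares(content: str, accepted_standards: bool) -> Optional[str]:
    # Sharing proceeds only after the user has seen and accepted the
    # publicly available pre-review standards.
    return content if accepted_standards else None

def platform_prereview(content: str) -> bool:
    # The decisive gate: only content passing pre-review can be
    # amplified through algorithm recommendation.
    return "unauthorized-clip" not in content

def algorithm_recommend(content: str) -> List[str]:
    # Stand-in for ranking and distribution to interested users.
    return [f"push:{content}"]

if __name__ == "__main__":
    print(PUBLISHED_REVIEW_STANDARDS)
    shared = user_shares("my-travel-vlog", accepted_standards=True)
    if shared and platform_prereview(shared):
        print(algorithm_recommend(shared))
```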

At this point, the platform and the user have in effect reached an agreement, which should strictly abide by the provisions on standard terms in Articles 496 and 497 of the Civil Code. Platforms cannot excessively censor user-published content in order to reduce compliance costs and lessen their responsibilities; they cannot use standard terms or similar methods to restrict or exclude a user's main rights; and they must fulfill the duty to call users' attention to restrictive terms that exceed their reasonable expectations.

此时,平台与用户实际上达成了协议,应严格遵守《民法典》第496条和497条关于格式条款的规定,平台不能为了降低合规成本、克减应有责任,而对用户发布的内容进行过度审查,不能使用格式条款等方式限制或排除用户主要权利,对超出用户合理期待的限制性条款应尽到提示义务。

In the process from algorithm recommendation to the user's receipt of pushed content, the platform's review standards should also strive for precision. For example, in the case of iQIYI vs. ByteDance, short video creators edited episodes into plot summaries, so that people could grasp the main storyline just by watching the short videos, leaving some users unwilling to go to iQIYI and pay to watch the show. This infringed on iQIYI's right of information network dissemination and caused it actual losses. At the same time, some short videos used only a small portion of the footage; to some extent this is fair use and does not constitute infringement. If the platform declines to push content that carries no infringement risk along with the infringing content, it actually detracts from users' rights and interests and, in some cases, is not conducive to the dissemination of the work, potentially harming the copyright owner's interests as well.

在算法推荐到用户收到推送这一过程中,平台的审查标准也应努力做到精准化。譬如,在爱奇艺诉字节跳动案中,短视频创作者剪辑出剧集的故事梗概,人们通过观看短视频即可了解主要剧情,使得部分用户不愿意再去爱奇艺付费观看剧目,此种情况侵犯了爱奇艺享有的信息网络传播权,给其带来实际损失。同时,另有部分短视频仅仅涉及部分画面,在一定程度上属于合理使用,尚不构成侵权。如果平台将不具有侵权风险的内容一并不予推送,则在实际上减损了用户权益,且在某些情况下也并不利于作品传播,存在损害版权方利益的可能。
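One way to make the "precision" called for here concrete is a heuristic that distinguishes clips by how much of the original work they cover. The sketch below is a toy: the coverage threshold and the idea of relying on coverage alone are my own simplifications for illustration and are not the legal test for fair use.

```python
def classify_clip(matched_seconds: float, work_runtime_seconds: float,
                  coverage_threshold: float = 0.10) -> str:
    """Toy heuristic: a clip covering a large share of a work (e.g., a
    plot-summary edit) is treated as high risk; a brief excerpt is routed
    to human review rather than blocked outright."""
    coverage = matched_seconds / work_runtime_seconds
    if coverage >= coverage_threshold:
        return "high infringement risk: withhold from recommendation"
    return "possible fair use: keep eligible, flag for human review"

if __name__ == "__main__":
    episode_runtime = 45 * 60.0
    print(classify_clip(20 * 60.0, episode_runtime))  # summary-style edit
    print(classify_clip(40.0, episode_runtime))       # brief excerpt
```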

Therefore, when regulating platforms' algorithm recommendation behavior, it is necessary both to consider the legitimate rights and interests of the relevant intellectual property rights holders and to respect users' right to informational self-determination, enabling them to receive recommendations of content that constitutes fair use. In addition, consideration must be given to the legitimate rights and interests that the various parties within the platform hold in the algorithm recommendation service business.

故此,在对平台算法推荐行为予以规制时,既要考虑到知识产权相关权利人的合法权益,又要尊重用户对于其信息自决的权利,使其能够接收到合理使用的内容推荐。此外,也要考虑到平台内各方就算法推荐服务业务所享有的正当权益。

In general, when evaluating the effect of regulating platform algorithm recommendation service behavior, we should focus on balancing and coordinating the legitimate interests of the various entities involved and, combining qualitative and quantitative analysis, approach the question from the perspective of the entire industry's innovation, development, and security, while also weighing the proportionality between the actual damage and the conduct in the specific case.

总体而言,在对平台算法推荐服务行为的规制效果予以评价时,应注重多元主体各自正当利益的平衡与协调,从整个产业创新发展与安全的角度,结合个案实际损害与行为之间的比例关系,展开定性与定量的分析。


Cite This Page

陈兵 (Chen Bing). "Standardizing Platform Recommendations: It's Not That Simple [规范平台算法推荐,不那么简单]". CSIS Interpret: China, original work published in Caijing E-Law [财经E法], March 24, 2022
