
Standardizing Platform Recommendations: It’s Not That Simple


This in-depth article from the director of the Competition Law Research Center of Nankai University lays out the legal intricacies of regulating algorithm recommendations on online platforms, using an infringement case between iQIYI and ByteDance as an example.


Recently, China’s first algorithm recommendation case was adjudicated, Beijing iQIYI Technology Co., Ltd. (hereinafter “iQIYI”) v. Beijing ByteDance Technology Co., Ltd. (hereinafter “ByteDance”).


iQIYI claimed that it legally enjoys the exclusive right to the worldwide dissemination of the hit television show “Story of Yanxi Palace” over information networks. Without authorization, ByteDance used information stream recommendation technology on the Toutiao App which it operates to disseminate and recommend to the public short videos uploaded by users during the show’s broadcasting. These videos received a very high number of playbacks. Given that ByteDance knew, or should have known, that this was infringing content, it failed to perform its duty of reasonable care, there was subjective fault, and it had infringed on iQIYI’s right of information network dissemination for “Story of Yanxi Palace.”


To this, ByteDance responded that the short videos involved were all captured and uploaded by users, while ByteDance itself had only provided information storage space services; it had fulfilled its duty of reasonable care, there was no subjective fault, and this did not constitute infringement.


The People’s Court of Haidian District, Beijing held that ByteDance had sufficient conditions, capabilities, and reasonable grounds to know that many of its Toutiao users had committed a large number of infringing behaviors, and that this constituted a case of “should know” as stipulated in law. The measures taken by ByteDance in this case had failed to reach the “necessary” level. ByteDance had not only provided information storage space services, but had also provided information stream recommendation services, and should have fulfilled a higher duty of care with respect to users’ infringements. In the end, the court held that ByteDance’s conduct in this case constituted assisted infringement, and ordered that ByteDance compensate iQIYI for 1.5 million RMB in economic losses and 500,000 RMB in reasonable litigation expenses, a total of 2 million RMB.


This case prompted people to think about tort liability in the use of algorithm recommendation technology. For example: How can we identify the subjective fault and behavioral components of a platform that uses algorithm recommendations? Does the platform have an obligation to review the specific content recommended by its algorithms, and what is the legal basis for this? If the platform must undertake this obligation, how far does it extend? And as a provider of algorithm recommendation services, how can the platform effectively defend itself legally?


In fact, when a platform provides algorithm recommendation services, there are two difficulties in judging whether its recommendation behavior constitutes infringement.


First, under the algorithm recommendation service model, the platform obtains user authorization, analyzes user behavior through the algorithm, and then pushes personalized content. Although the right to decide whether to accept the pushed content rests with the user, control over material actions, such as the generation and synthesis of specific content as well as the push method, time, frequency, and effect, rests with the platform. The question is whether, after accepting the user's choice, the platform still needs to take the initiative to conduct a substantive review of the recommended content and methods, and on that basis judge whether the platform knows or should know the effect of its behavior and must bear the legal duty of care.


Second, based on the platform’s material control over the algorithm recommendation service, it should assume more obligations for the legal and regulatory compliance of the recommended content and formats. According to the controller principle ubiquitous in the data field, the determination of responsibility for platform algorithm recommendation behavior can also be handled with reference to the controller principle. That is, the platform, as the main controller of the algorithm recommendation service, has the first and primary responsibility, and it is necessary to clarify the effectiveness and appropriateness of necessary measures taken by the platform. In this way, we can identify and determine what obligations the platform should undertake in practice as a user and controller of algorithm recommendation technology as well as the legal defense rights enjoyed by the platform. At the same time, in addition to the platform’s duty of care with regard to the use of algorithm recommendation technology, should it also undertake necessary review responsibilities for the content and objects involved in the technology? If so, does this mean that the platform needs, to some extent, to intervene or restrict its users’ right to choose or even their right to self-determination with respect to information services?


This once again raises the possibility of detracting from the rights and interests of users in order to regulate platforms. That is, for the purpose of consolidating or even expanding the platform's duty of care, should platforms be granted more control powers? Although this could prevent and limit the harm caused by a platform's abuse of algorithm technology and protect the legitimate rights and interests of other operators, it may also improperly detract from the legitimate rights and interests of consumers and users on the platform. If we want to balance the legitimate rights and interests of various subjects in the algorithm recommendation service process, this is an unavoidable problem. Further, at a deeper level, this involves finding a balance between regulating the use of algorithms and incentivizing algorithm innovation.


How to Identify What a Platform “Should Know”


It is generally believed that “knows” (明知) implies that the platform is clearly aware of the existence of infringement. In practice, this is generally reflected in knowledge from a notification (通知知道) or “actual knowledge” (实际知道) about the existence of infringement through a channel such as a contract negotiation or business cooperation. The term “should know” (应知), meanwhile, indicates that, after fulfilling its reasonable duty of care, the platform should be aware of infringement. The former [“knows”] looks at whether the platform has actively intervened in infringing content and behavior, while the latter [“should know”] is more concerned with whether the platform has passively failed to fulfill its due duty of care obligations. At the same time, the latter is more complicated than the former.


When investigating whether a platform “should know” of the existence of infringement, the judgment must take the platform’s duty of care and its performance thereof into account.


When interpreting “should know”, attention should be paid to the difference between “should have known” (应当知道) and “presumed to know” (推定知道). “Should have known” should be interpreted as “should have known but did not know,” and not as “presumed to know.” A judgment of “should have known but did not know” is based on the existence of a legal or agreed obligation that the entity “should know,” and so it corresponds to the subjective state of negligence on the part of the actor. However, there is no corresponding basic obligation implied by “presumption of knowledge,” and the state of an actor’s “knowing” is a conclusion drawn from the existence of certain facts or the occurrence of certain behaviors. Article 1197 of the Civil Code stipulates: “If a network service provider knows or should know that a network user uses its network services in infringement of the civil-law rights and interests of others and fails to take necessary measures, it shall be jointly and severally liable with the network user.” By interpreting “should know” as “should have known but did not know” we include the state of negligence in the scope of application of Article 1197 of the Civil Code, which gives “should know” an independent position in relation to “knows” and harmonizes the internal structure of the law. This is a fairly reasonable interpretation.


Let us take the iQIYI vs. ByteDance case as a specific example. The court held that although in the warning letter and lawyer’s letter sent by iQIYI to ByteDance there was no information, such as URLs of relevant short videos, which could be used to precisely locate infringing files, based on factors such as the TV drama’s popularity, the characteristics of the content pushed by the algorithm used by ByteDance, the short videos’ popularity and user feedback, it could be determined that ByteDance had the sufficient conditions, capabilities, and reasonable grounds to know about the existence of infringing behavior on the platform which it operates.


Considering the vast amount of internet information, as network service providers, platform entities generally do not have the obligation to actively review content, and fulfilling the obligation of active review would increase the burden on platforms to an unreasonable degree. However, compared with the previous provisions of Article 36, paragraph 3 of the Tort Law, the amendment of Article 1197 of the Civil Code that changed “knows” to “knows or should know” actually reflects more stringent requirements: due to factors such as the high risk of infringement in some sectors, the method and nature of network service providers’ services, and their own nature and professional capabilities, even in the face of massive amounts of information, some network service providers should bear the obligation of actively reviewing content before the fact.


On this point, compared with ordinary network service providers, platforms have advantages with respect to data, technology, and capital scale. As soon as they begin using algorithm technology to realize or assist in realizing infringing behavior, the damage caused will be more serious. In practice, platforms undertake public affairs functions such as platform-centric cyberspace control and management, which is the biggest difference between platforms and ordinary network service providers. If a platform algorithm has added instructions or standards for actively screening and pushing infringing content, its infringing behavior constitutes “knowing” infringement. If the platform has the information management capabilities to intervene and adjust the push results, and other factors are present such as alleged infringement on relatively well-known works, it can be determined that the platform acted with subjective fault, and this constitutes an instance where the platform “should know” of the infringement. Based on this, it is reasonable to require platforms to undertake a certain degree of duty of care.


Of course, the understanding and interpretation of platform duty of care are not unchanging. The specific type of platform, technical conditions, degree of involvement, popularity of works, and so on are all factors that affect the interpretation of duty of care. However, in identifying the degree of duty of care borne by the platform, the basic principles and analytical framework used should remain stable. That is, it should be judged in light of the platform’s ability to use and manage information, and the relationship between the platform’s legitimate rights and corresponding obligations should be balanced. On this basis, comprehensive consideration should be given to the platform business type, scale, industry standards, business model, and other factors and individual circumstances. It should not be too mechanical.


How to Identify the Platform’s “Necessary Measures”


In the case of iQIYI vs. ByteDance, the court held that in judging whether the platform had taken the necessary measures, it had to look not only at means and methods, but also at whether the due effect and purpose had been achieved. Determining effect could not be based solely on whether infringing content was deleted and the time of deletion. Specifically, first, after the date on which ByteDance claimed to have taken copyright management measures, a large number of infringing short videos were still appearing. Second, ByteDance did not deal with the allegedly infringing short videos in a timely manner. Therefore, judging from the actual effect of the measures taken by ByteDance, it did not meet the substantive requirements of effectively stopping and preventing obvious infringements, so its behavior could not be determined to have met the standard of “necessary measures.”


If platforms are required to screen infringing works using a content comparison method, the main point is to distinguish between adaptations that require prior permission and legitimate use that does not. Current algorithm screening technology can only make judgments based on objective criteria and cannot adapt to specific scenarios which are flexible and changeable. It may not be able to filter out infringing content, and may even mistakenly filter out legitimate content. In iQIYI vs. ByteDance, the court held that even if algorithm recommendation technology could not identify the specific content of short videos, the platform should have taken measures related to how it defined the scope of algorithm-recommended content, how it set up and optimized the specific methods of algorithm recommendation, and how it included infringing content which had already entered the scope of recommendation in reviews so as to avoid its large-scale and long-term dissemination.
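The limitation described above — that current screening technology can only apply objective criteria — can be illustrated with a minimal sketch of fingerprint-based comparison. All names here (the feature-set fingerprints, the similarity threshold) are hypothetical illustrations, not a description of any platform's actual system; note that a high overlap score alone cannot distinguish an unauthorized re-upload from legitimate use, which is precisely the problem the author identifies:

```python
# Sketch of objective, criteria-based screening (all names hypothetical).
# A real system would use perceptual hashes of video frames; here a
# "fingerprint" is modeled as a set of features, compared by overlap.

def jaccard_similarity(a: set, b: set) -> float:
    """Overlap between two feature sets, in [0, 1]."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def screen_upload(upload_features: set, reference_works: dict, threshold: float = 0.5):
    """Flag an upload if it overlaps heavily with any reference work.

    Returns the title of the matched work, or None if nothing crosses the
    threshold. The objective match cannot tell infringement from fair use.
    """
    for title, ref_features in reference_works.items():
        if jaccard_similarity(upload_features, ref_features) >= threshold:
            return title
    return None

references = {"Story of Yanxi Palace, Ep. 1": {"f1", "f2", "f3", "f4"}}
print(screen_upload({"f1", "f2", "f3"}, references))  # flagged: 0.75 overlap
print(screen_upload({"f9"}, references))              # no match -> None
```

Because the decision rests entirely on a numeric threshold, the sketch both under-filters (infringing clips below the threshold pass) and over-filters (a legitimate quotation above it is flagged), matching the two failure modes the court discussed.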


In practice, algorithm technology is already widely applied in business fields such as monitoring information suspected of infringement, sending infringement notifications, and disposing of suspected infringing information. Specifically, using algorithm technology to actively monitor and intercept information suspected of infringement has become the norm; copyright authorities have already implemented requirements for online platforms to actively monitor and intercept information suspected of infringement; and copyright monitoring and rights protection services using algorithm technology have become marketized service projects of considerable scale. In view of this, platforms have the basis for taking more active measures, so the corresponding scope and requirements of their duty of care also need to be adjusted with reference to developments in algorithm technology.


For example, platforms adopt shielding measures for infringing content that is actively filtered out by algorithms or identified through notification by the rights holder, in order to stop the same infringing content from being pushed again, which would otherwise result in repeated infringement and expanded damage. It can be expected that the increase in a platform's duty of care will push it to actively improve the compliance and quality of its algorithm recommendation services, and to take more active and effective measures that incorporate the costs of preventing infringement on the intellectual property rights of others into its overall operating cost considerations. This will be conducive to the stable and long-term development of the platform.
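One way to read the shielding measures described here: once content is identified as infringing (whether by automated filtering or by a rights holder's notice), its identifier enters a block set that is consulted before every push, so the same content cannot be recommended again. A minimal sketch under those assumptions — the class and identifiers are hypothetical illustrations:

```python
# Hypothetical sketch of a shielding (blocklist) step applied before pushing.

class PushShield:
    def __init__(self):
        self.blocked_fingerprints = set()

    def block(self, fingerprint: str) -> None:
        """Record content identified as infringing, whether flagged by
        automated filtering or by a rights holder's takedown notice."""
        self.blocked_fingerprints.add(fingerprint)

    def allow_push(self, fingerprint: str) -> bool:
        """Check a candidate recommendation against the block set, so the
        same infringing content is never pushed a second time."""
        return fingerprint not in self.blocked_fingerprints

shield = PushShield()
shield.block("clip-123")              # e.g. flagged after a takedown notice
print(shield.allow_push("clip-123"))  # False: repeat push is shielded
print(shield.allow_push("clip-456"))  # True: not on the blocklist
```

The design choice matters legally as well as technically: a gate applied before recommendation caps the "repeated infringement and expanded damage" the court weighed, whereas deletion after the fact does not.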


In fact, with the continuous rise in the economic status and importance of platforms, the content and extent of the duty of care that should be borne by platform operators are also expanding and strengthening. Therefore, our understanding and application of their “necessary measures” must also be constantly adjusted. This involves more than the expansion of normative interpretations of legal texts. We also need to weigh up our choices about what kind of attitude and model of thinking to apply. For example, further observation is needed to understand whether a theory of strict interpretation or a pragmatic application method is more conducive to the integration of the normative value and the application value of the law, to maintain the authority, feasibility, and accessibility of law.


However, what is certain is that faced with the need to evaluate new legal relationships and behaviors arising from the development of scientific and technological (S&T) innovation, it is necessary to adjust the goals and interpretation rules and methods in a way that is more conducive to problem solving. For infringement that may be caused by algorithm recommendation service behavior, we must properly handle the relationship between incentivizing algorithm innovation and regulating the improper use of algorithms, and carry out governance on the premise of respecting the basic laws of algorithm operation and acknowledging the bounded rationality of algorithm technology cognition. That is, it is necessary to fully recognize the advantages and benefits of platforms’ use of algorithm technology, but also to be alert to the risks and harms of their misuse of algorithm technology.


Collaborative Governance and Balancing the Rights and Interests of Multiple Entities


It should be noted that, as an important factor in promoting the development of the platform economy, algorithms have significantly improved the efficiency of users' access to information and provided an important tool for the development of internet information services.


At the same time, the unreasonable use of algorithms has also given rise to a series of problems: in addition to the intellectual property infringement problem caused by algorithm recommendations, problems such as algorithm collusion, big data swindling (大数据杀熟), and false information all pose threats to the platform economy’s regulated, healthy, and sustainable development. Algorithmic governance should coordinate information security and technological development and, at the same time as doing a good job of risk regulation, further promote the continuous improvement of algorithm technology.


The Provisions on the Administration of Internet Information Service Algorithmic Recommendations (hereinafter “Provisions”) jointly issued by the Cyberspace Administration of China, the Ministry of Industry and Information Technology, the Ministry of Public Security, and the State Administration for Market Regulation will enter into force on March 1, 2022. The Provisions provide clear, targeted, and operable legal norms for addressing chaotic algorithm recommendation practices, and are hugely significant for regulating the algorithm recommendation behavior of internet information services and promoting their healthy and orderly development.


For intellectual property infringement cases in algorithm recommendation, the Provisions introduce important ideas on adjusting the duty of care system. First, Article 12 of the Provisions is clear on encouraging algorithm recommendation service providers to optimize the degree of transparency and interpretability of rules used in retrieval, sorting, selection, push, and display. Article 24 clarifies the algorithm filing system. It requires algorithm recommendation service providers with public opinion attributes or social mobilization capabilities to submit information such as the service provider’s name, form of service, domain of application, algorithm type, algorithm self-assessment report, and content intended for public notification; to perform filing procedures; and to promptly complete modification procedures if the filed information changes.


In a narrow sense, we can think of algorithm transparency as meaning that algorithm programmers and implementers are obliged to disclose algorithm information. That is, they are required to publicly disclose, and report to the competent authorities, the code, data, decision trees, and other information on the algorithms they adopt. The establishment of an algorithm filing system and relevant rules for algorithm interpretation will help improve algorithm transparency, avoid the forming of “algorithm black boxes,” and avoid the use of algorithmic technologies suspected of infringing on intellectual property rights.


However, it remains to be seen how well this form of algorithmic disclosure actually works. This is because: first, even if information such as the operating principle of the algorithm is made public, it is still difficult for the public and other groups without specialized knowledge to truly understand an algorithm’s operating mechanism, much less detect whether there is a risk of infringement. Second, if the process of algorithm disclosure lacks comments, feedback, and supervision by relevant entities, the credibility of the disclosed information will also be questionable.


In addition, the healthy and orderly development of the industry needs, on the one hand, legal norms and administrative supervision to form external constraints and, on the other hand, industry self-discipline. Industry self-discipline achieved through industry associations has a more direct impact on industry activities and a greater level of flexibility. Article 5 of the Provisions clearly encourages relevant industry organizations to strengthen industry self-discipline, to establish and improve industry standards, industry guidelines, and self-discipline management systems, and to urge and guide algorithm recommendation service providers to formulate and improve service standards, provide services in accordance with the law, and accept social supervision. The problem of governing intellectual property infringement in algorithm recommendation services also requires joint consultation, co-construction, and co-governance by various entities such as platforms and industry associations.


Finally, Article 23 of the Provisions clarifies that relevant departments should establish a graded and categorized security management system for algorithms. Based on the public opinion attributes or social mobilization capabilities of algorithm recommendation services, their content type, user scale, the importance of the data processed by the algorithm recommendation technology, and the degree of intervention in user behavior, algorithm recommendation service providers are to be placed under graded and categorized management.


Specifically, the Provisions divide algorithm technologies into types such as generation and synthesis, personalized push, sorting and selection, retrieval and filtering, and scheduling and decision-making. But there are significant differences in risk level, risk type, the interests to be balanced, entity characteristics, and so on across different application fields, such as finance, transportation, healthcare, and news.


When it comes to the specific issue of intellectual property infringement by algorithm recommendation services, in order to prevent such services from infringing on the legitimate rights and interests of others, and to prevent situations like the iQIYI vs. ByteDance case from occurring, we can prioritize intellectual property protection issues in sorting and selection-type, personalized push-type, and other specific types of algorithm technologies, and strengthen the comparison of short videos against copyrighted works (权源作品), thereby avoiding the losses caused when such algorithm recommendation services expand the scope of infringement.


While regulating platform algorithm recommendation service behavior, user rights to self-determination, such as freedom of data dissemination, freedom of sharing, and ability to choose cannot be ignored. In the “user chooses to share > platform pre-review > algorithm recommendation > user receives push content” process, platform pre-review is a decisive point in determining whether user-shared content can be pushed to more users through algorithm recommendation. Therefore, the platform needs to formulate complete pre-review standards and make them public for all platform users to view so that content can be shared after users have understood and agreed.
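The process described above — user shares, platform pre-review, algorithm recommendation, user receives the push — can be sketched as a simple gate: content enters the recommendation pool only if it passes the platform's published pre-review standard. The function names and the infringement flag below are hypothetical illustrations of that flow, not any platform's actual pipeline:

```python
# Hypothetical sketch of the share -> pre-review -> recommend -> push flow.

PUBLISHED_REVIEW_STANDARDS = (
    "Standards published for all users to view before they agree to share."
)

def pre_review(content: dict) -> bool:
    """The decisive gate: only content passing the platform's published
    standards may enter algorithm recommendation at all."""
    return not content.get("suspected_infringing", False)

def recommend_pipeline(shared_content: list) -> list:
    """Return the subset of user-shared content eligible to be pushed."""
    return [c for c in shared_content if pre_review(c)]

uploads = [
    {"id": "a", "suspected_infringing": False},
    {"id": "b", "suspected_infringing": True},
]
pushed = recommend_pipeline(uploads)
print([c["id"] for c in pushed])  # ['a']: flagged content never reaches users
```

Placing the gate before recommendation is what makes pre-review "decisive" in the author's sense: everything downstream (ranking, personalization, push) only ever sees content that has already cleared the published standard.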


At this point, the platform and the user have actually reached an agreement, which should strictly abide by the provisions of Articles 496 and 497 on standard terms in the Civil Code. Platforms cannot excessively censor user-published content in order to reduce compliance costs and lessen their responsibilities. They cannot use standard terms or other such methods to restrict or exclude a user’s main rights, and they must fulfil the obligation to notify of restrictive terms that exceed a user’s reasonable expectations.


In the process from algorithm recommendation to the user's receipt of pushed content, the platform's review standards should also strive for precision. For example, in the case of iQIYI vs. ByteDance, short video creators edited episode outlines such that people could understand the main plot by watching the short video. This resulted in some users being unwilling to then go to iQIYI to pay to watch the episode. This situation infringed on iQIYI's right of information network dissemination and resulted in actual losses to the company. At the same time, some short videos only showed a part of the screen image; to some extent, this is reasonable use and does not constitute infringement. If the platform does not push content which carries no risk of infringement, it will actually detract from the rights and interests of users; in some cases this is not conducive to the dissemination of works and may even harm the interests of the copyright owner.


Therefore, when regulating platforms' algorithm recommendation behavior, it is necessary both to consider the legitimate rights and interests of the relevant intellectual property rights holders and to respect users' right to self-determination with regard to their information, enabling them to receive recommendations of reasonably used content. In addition, consideration must also be given to the legitimate rights and interests of all parties involved in the platform's algorithm recommendation service business.


In general, when evaluating the effect of regulations on platform algorithm recommendation service behavior, we should pay attention to balancing and coordinating the legitimate interests of the various entities involved. Using a combination of qualitative and quantitative analysis, we should think from the perspective of the entire industry's innovation, development, and security, while also considering proportionality with respect to the actual damage and behavior in a specific case.



Cite This Page

陈兵 (Chen Bing). "Standardizing Platform Recommendations: It's Not That Simple [规范平台算法推荐,不那么简单]". CSIS Interpret: China, original work published in Caijing E-Law [财经E法], March 24, 2022
