
Shutting Algorithms into a Legal Cage


A legal scholar from Renmin University analyses the legal aspects of China's algorithm regulations, effective March 2022, and points out the existing issues on technology platforms that the regulations seek to address.




As algorithm application contexts expand, the tentacles of algorithm technology reach into every corner of digital life: e-commerce shopping, browsing for information and knowledge, financial credit, performance management, AI-assisted sentencing, biometric identification, and monitoring and security—all of these bear the imprint of algorithms. People are perpetually entangled in a net of algorithms.


The E-Commerce Law and the Consumer Protection Law were early legislative responses to problems related to algorithms such as big data swindling, algorithmic discrimination, algorithmic black boxes, and algorithmic collusion.


The Personal Information Protection Law (“Personal Protection Law”), which was passed on August 20, 2021, introduced the concept of automated decision-making (自动化决策) and included specific provisions on its regulation. On September 17, 2021, the Cyberspace Administration of China and eight other departments issued the Guiding Opinion on Strengthening the Overall Governance of Internet Information Service Algorithms, which stated: “Within about three years, [we should] gradually establish a comprehensive algorithm security governance structure with robust governance mechanisms, a refined supervisory system, and a standardized algorithm ecosystem.” On January 4, 2022, four departments (the Cyberspace Administration of China, the Ministry of Industry and Information Technology, the Ministry of Public Security, and the State Administration for Market Regulation) jointly issued the Provisions on the Administration of Internet Information Service Algorithmic Recommendation (“Provisions”). These Provisions will go into effect on March 1, 2022.


To date, China has created an initial three-layered algorithm regulatory system, spanning sectoral laws, data laws, and joint departmental rules, each with its own form of algorithm-specific regulation. If 2021 can be called “year one” of China’s algorithm governance, owing to the transition from responding to algorithms through scattered, traditional sectoral laws to governing them at their data sources, then from 2022, with the introduction of the Provisions, China has entered the age of “algorithm law” (“算法法”), marked by dynamic, whole-of-process regulation of algorithms both before and during their use.


“The problems resulting from unreasonable applications of algorithms, such as algorithmic discrimination, big data swindling, and induced addiction, profoundly affect the normal dissemination order, market order, and social order and pose a challenge to the maintenance of ideological security, social fairness, and the lawful rights and interests of internet users. There is an urgent need to establish rules and systems for algorithmic recommendation services and strengthen the norms governing them, and to strive to enhance our ability to prevent and defuse security risks relating to algorithmic recommendations and promote the healthy, orderly development of algorithm-related industries.” This official explanation by the Cyberspace Administration of China on the Provisions reveals the thinking that motivated this effort to regulate algorithmic risk. It is also how we interpret the Provisions’ underlying logic: while automated algorithmic decision-making brings efficiency and convenience, it also contains technological risks.


The flip side to the benefits of algorithmic technology is algorithmic harm, including algorithmic labeling, algorithmic manipulation, and algorithmic discrimination. The logical starting point for laws and regulations governing algorithms is to ask how to defend against algorithmic harm and risks at the same time as sharing algorithms’ benefits.


Algorithmic Labeling: Precision Marketing and Over-Recommending


An algorithm provides big data portraits of users. It creates electronic identity labels that help algorithm service providers or network platforms to sell products or services. Once these labels are formed, they are filed and then circulated among different algorithm service providers, with long-term negative effects on individuals or society.


By attaching different labels to users, big data portraits type users so that merchants can engage in precision marketing and over-recommending. For example, a social-networking app may record keywords containing information relating to money worship and ostentation, pornography and vulgarity in users’ labels and then push similar information to them, thereby encouraging tendencies towards money worship, extravagance, and vulgarity.


Evaluations by the Nandu Internet Content Ecology Governance Research Center found that the recommendation interface on the youth-oriented homepage of the Dragonfly FM app contained advertising links to adult content or dating or to vulgar short videos. Some social-networking apps have been exposed as having pushed private pictures of minors and videos that skirt the line with pornography to users who were searching parent-child topics. It is possible that user labels using keywords involving illegal child pornography-related information were set up for these users and that relevant information was pushed to them on this basis.


To respond to inappropriate algorithmic labeling and the harm it may cause, and to prohibit the use of keywords involving illegal or harmful information to attach labels to, classify, and push information to users in a targeted way, Article 10 of the Provisions stipulates: “Algorithmic recommendation service providers shall strengthen user model and user label management and perfect rules for recording interest points in user models and rules for managing user labels. They may not enter keywords involving unlawful or harmful information into user interest points or use them as user labels and use this as a basis for pushing information.” If algorithmic recommendation service providers can strengthen management and standardize conduct at the point in algorithm design and deployment where user models and labels are managed, and if they put an end to the occurrence of keywords involving illegal or harmful information in interest point rules for user models or in user labels, they will be able to reduce user exposure to pornographic and vulgar information and to purify information network space. They will also help to maintain mainstream values and to protect minors.
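The screening obligation Article 10 describes can be pictured as a hygiene step in label management: candidate interest points are checked against a blocklist of unlawful or harmful keywords before they may be stored or used as a basis for pushing content. The Python sketch below is purely illustrative; the blocklist contents, function names, and sample labels are assumptions, not any provider's actual mechanism.

```python
# Illustrative sketch of Article 10-style label hygiene. The blocklist
# entries and all names here are hypothetical, for illustration only.
BLOCKED_KEYWORDS = {"child_pornography", "gambling", "drugs"}

def sanitize_user_labels(labels):
    """Drop any candidate label that matches a blocked keyword, so it
    never enters the user model's interest points."""
    return [label for label in labels if label not in BLOCKED_KEYWORDS]

def may_push(label):
    """A label may serve as a basis for pushing information only if it
    is not on the blocklist."""
    return label not in BLOCKED_KEYWORDS

candidate_labels = ["parenting", "cooking", "gambling"]
print(sanitize_user_labels(candidate_labels))  # ['parenting', 'cooking']
```

In a real system the blocklist would be a maintained, regulator-informed keyword corpus and the check would run at both model-update and push time; the point here is only that the rule sits inside label management, before recommendation ever happens.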


It is worth noting that, compared with the Draft for Comments [of the Provisions], the effective version deleted the stipulation that, “it is forbidden to set up discriminatory or biased user labels.” Since “discriminatory and biased” was overly broad and its meaning was fuzzy and elastic, limiting prohibited labels to those with keywords involving illegal and harmful information was clearly more lenient and more operationally practical.


In addition, Article 17 of the Provisions specifies that algorithmic recommendation service providers should provide users with an option of not having their personal characteristics targeted, or a convenient option to turn off the algorithmic recommendation service, and should provide users with a function for selecting or deleting the user labels used in algorithmic recommendation services to target their personal characteristics. This uses the approach of granting rights to users, passing them the initiative to prevent inappropriate algorithmic labels from being attached to them. However, user labels differ from personal information. How the personal information deletion rights granted under the Personal Protection Law can technically be used to support user label deletion and management rights still needs to undergo further practical testing.


Algorithmic Manipulation: Information Cocoons and Squeezing Labor


Algorithms can use collected and pooled user data to create huge information advantages. They push information at users to influence their choices and decisions and to manipulate individuals and groups. For example, news recommendations give rise to “information cocoons” and the phenomenon of catering to people’s tastes, pushing to users only the information content that they like, which may then lead users to develop bigoted viewpoints or even to become addicted. Toutiao’s algorithmic recommendations, by capturing, pooling, classifying, ranking, and extracting user-browsed content data, perform matching and push information at users in a precise way based on their interests and tastes. Not only does this involve the problems of blindly catering to users and seeking to maximize traffic; it may also infringe on user privacy and violate public morality and even laws, and Toutiao has been singled out for criticism by official media. People who have become wrapped up in information cocoons receive a narrow range of information and find it difficult to gain a comprehensive perspective or form correct values, which results in gaps between social groups and even social cleavages, as well as obstacles to reaching consensus on public issues.


On August 10, 2021, the Central Propaganda Department and four other departments jointly issued the Guiding Opinion on Strengthening Literary Criticism Work in the New Era, which emphasized “strengthening network algorithm research and guidance, carrying out comprehensive governance of network algorithmic recommendations, and not providing dissemination channels to erroneous content.” While that Opinion was just an administrative guide, the Provisions establish rules at the joint departmental regulatory level with regard to algorithmic manipulation which might result in information cocoons, such as algorithmic information blocking, over-recommending, manipulation of lists, and control of hot searches. These rules include: “Algorithmic recommendation service providers shall provide users with an option of not having their personal characteristics targeted, or provide users with a convenient option to turn off algorithmic recommendation services” (Article 17) and prohibition of “the use of algorithms to block information, over-recommend, manipulate lists or search result rankings, control hot searches or selections, et cetera, to interfere in information presentation, or to engage in behaviors influencing network public opinion or evading regulation” (Article 14). In relation to the protection of minors, algorithmic recommendation service providers are prohibited from “pushing information to minors which may adversely influence minors’ physical and mental health, such as that which may incite minors to imitate unsafe conduct or violate social morality or may lead minors towards harmful tendencies, and must not use algorithmic recommendation services to induce minors to develop internet addiction” (Article 18).


What merits commendation is that the Provisions are not limited to [stipulating on] inducing addiction, influencing public opinion, and other such applications of algorithms at the results stage, but also institute regulation and supervision earlier, at the algorithm development and design stage. They penetrate the technical layer of algorithms and make clear the principle of open and transparent algorithmic recommendation services, encouraging algorithmic recommendation service providers to comprehensively employ strategies such as content de-weighting and intervention scattering, and encouraging them to boost rule transparency and explicability, in order to establish technical guard rails against abusive phenomena arising from algorithmic manipulation, such as information cocoons and public opinion manipulation (Article 12). The Provisions introduce the perspective of AI law regulating a fusion of technology and law. They also highlight the regulatory characteristic of “algorithm law,” namely the dynamic and technical regulation of the entire process, both before and during algorithm operation.
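The “de-weighting and intervention scattering” strategies that Article 12 encourages can be sketched as a post-ranking step that breaks up a topic-concentrated feed. The toy Python below interleaves items round-robin across topics so that consecutive slots tend to come from different topics; the data structures and the interleaving rule are assumptions for illustration, not any provider's real mechanism.

```python
# Toy sketch of "intervention scattering": spread a ranked feed across
# topics so no single interest dominates consecutive slots. Hypothetical
# data shapes; real systems work on far richer item representations.
from collections import defaultdict
from itertools import zip_longest

def scatter_feed(ranked_items):
    """Round-robin interleave (topic, item) pairs across topics."""
    by_topic = defaultdict(list)
    for topic, item in ranked_items:
        by_topic[topic].append((topic, item))
    scattered = []
    # zip_longest walks the per-topic queues in lockstep, one item per
    # topic per round, padding exhausted topics with None.
    for batch in zip_longest(*by_topic.values()):
        scattered.extend(pair for pair in batch if pair is not None)
    return scattered

feed = [("politics", "a"), ("politics", "b"), ("politics", "c"), ("sports", "d")]
print(scatter_feed(feed))
# [('politics', 'a'), ('sports', 'd'), ('politics', 'b'), ('politics', 'c')]
```

De-weighting (suppressing near-duplicate content before ranking) would be a companion step upstream of this one; together they are the kind of technical guard rail the article describes.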


Aside from information cocoons and inducing addiction, algorithms manipulate people in areas such as performance management and platform algorithm dispatching. The use by food delivery platforms of distribution algorithms to squeeze and manipulate riders, violating riders’ labor rights and even threatening their lives and health by causing traffic accidents, was captured in the article “Delivery Riders, Stuck in the System” in 2020, attracting widespread attention to the relationship between algorithms and digital labor. Under tremendous pressure from public opinion, Meituan adjusted its distribution algorithm, integrated multiple time-estimation models, and replaced the original predicted order delivery “time point” with a flexible “time period.” In setting up the algorithm, it took into consideration protection of rider labor rights.
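The reported Meituan adjustment, replacing a single predicted delivery “time point” with a flexible “time period” built from multiple estimation models, can be illustrated with a minimal sketch. The numbers, function name, and slack factor below are made up for illustration; nothing here reflects Meituan's actual algorithm.

```python
# Hedged sketch: several hypothetical time-estimation models each
# predict a delivery time in minutes; instead of committing the rider
# to the tightest point estimate, report a widened window.
def delivery_window(model_estimates_min, slack=0.1):
    """Take the spread of the models' estimates and widen the upper
    bound by a slack factor, giving the rider breathing room."""
    low, high = min(model_estimates_min), max(model_estimates_min)
    margin = slack * high
    return (low, high + margin)

# Three hypothetical models estimate 28, 32, and 35 minutes:
print(delivery_window([28.0, 32.0, 35.0]))  # (28.0, 38.5)
```

The design point is the one the article makes: moving from a point estimate to an interval builds rider labor protection into the algorithm's setup rather than bolting it on afterward.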


In the age of AI algorithms, it is not only delivery riders who are stuck in the system; the work modes of couriers, online car-hailing drivers, and other occupations are all increasingly reliant on, and commanded and constrained by, algorithms. In view of this, the Provisions establish standards specifically for algorithm applications directed at workers, stipulating that “where algorithmic recommendation service providers provide work dispatch services to workers, they shall protect the lawful rights and interests of workers such as remuneration for labor, breaks, and vacations, and establish and improve algorithms relating to platform orders and allocation, remuneration composition and payment, work time, rewards and penalties, et cetera” (Article 20).


We can expect that, with the promulgation and enforcement of the Provisions, in the protection of rights and interests associated with platform economy digital labor we will bid farewell to an era wherein protection can only depend upon hot events and viral online essays. The protection of digital labor rights and interests will be implemented in an institutionalized way as an important part of platform enterprise compliance. The hope is that the above stipulations can truly play a role, guiding platforms—as algorithmic recommendation service providers—to design and apply algorithms which are more humane and compassionate, and that workers will no longer be pushed around by algorithms and reduced to serving as pawns in a heartless system.


Algorithmic Discrimination and Big Data Swindling


“Big data swindling” (大数据杀熟), which has often been denounced by consumers in the digital economy, uses algorithms to achieve price discrimination and is another example of algorithmic discrimination. The algorithm creates user portraits by mining user data and quotes different prices based on users’ preferences, so as to achieve precision marketing and profit maximization.


On November 26, 2020, the Consumer Attitudes and Perceptions Concerning Online Platform Conduct and Market Competition questionnaire survey report, published by the Nandu Anti-Monopoly Research Team, revealed that the problems that most concerned survey respondents were, in order of importance: misuse of personal data (50%), big data swindling (20%), excessively precise pushing, hard-to-discern paid search results, platform “either-or” ultimatums, internet blocking, forced tie-in sales, and platform self-preferencing. Seventy-three percent of respondents were opposed to “big data swindling.”


Previously, in August 2020, the Ministry of Culture and Tourism issued the Interim Regulations Governing Online Tourism Business Services, prohibiting online tourism businesses from using pricing algorithms to achieve big data swindling. Article 18 of the E-Commerce Law and Article 24 of the Personal Protection Law stipulate the obligation that, when information-pushing and commercial marketing algorithms are used, an option must be provided to avoid the targeting of personal characteristics. The State Council Anti-Monopoly Commission’s Antitrust Guidelines on the Platform Economy, issued on February 7, 2021, also regulate, from an antitrust law perspective, conduct by platform economy businesses in dominant market positions that implements differentiated treatment based on big data and algorithms.


The Provisions, which soon go into effect, establish targeted norms for algorithmic recommendation service providers. They stipulate that, when selling consumers products or providing them with services, providers should protect consumers’ fair trading rights and must not use algorithms based on consumer preferences, transaction habits, or other such characteristics to engage in unlawful conduct such as extending unreasonably differentiated treatment in trading conditions such as trading prices (Article 21).


Given platforms’ advantage with respect to data and algorithm technology, in order to defend against the above forms of algorithmic harm, the Provisions not only regulate algorithmic recommendation service providers’ conduct at both the results and behavioral stages, but also continue the notion of granting data rights used in the GDPR and the Personal Protection Law, “granting algorithm rights” to users while imposing corresponding obligations on platforms. For example, stipulating on algorithm explicability and safeguards for users’ right to be informed, they require that platforms conspicuously notify users of algorithms’ basic principles, objectives, and main operating mechanisms; that platforms give users the right to withdraw from personalized recommendations, by providing them with an option not to have their personal characteristics targeted and with a convenient option to turn off algorithmic recommendation services; and that users be granted the right to manage labels, providing the possibility of intervening in input data. Though the relevant technology still awaits testing to determine operability, the innovative efforts made with respect to granting algorithm rights merit approval.
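The opt-out rights described here reduce, on the serving side, to a simple gate: if the user has exercised either right, the provider must fall back to a non-personalized feed. The sketch below is a minimal illustration under assumed setting names; it is not any platform's real interface.

```python
# Minimal sketch of the Article 17-style opt-out gate. The settings
# keys and feed shapes are hypothetical, for illustration only.
def choose_feed(user_settings, personalized, non_personalized):
    """Serve the personalized feed only when the user has exercised
    neither opt-out right: turning recommendations off entirely, or
    declining targeting of personal characteristics."""
    opted_out = (user_settings.get("recommendations_off")
                 or user_settings.get("no_personal_targeting"))
    return non_personalized if opted_out else personalized
```

The harder problem the article flags, how deleting a user label propagates into trained user models, sits behind this gate and remains to be tested in practice.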


At the same time, in keeping with the principle of algorithm accountability, the Provisions require algorithmic recommendation service providers to assume responsibility as algorithm security entities and to establish and improve management systems and technical measures such as algorithm mechanism audits, S&T ethics reviews, user registration, information release audits, data security and personal information protection, countermeasures against telecommunications and online fraud, security assessment and monitoring, and security incident emergency response arrangements. Articles 31 through 34 establish the corresponding legal liability clauses, which provide a basis for holding algorithm violators accountable.


In sum, the Provisions go beyond the GDPR’s data protection thinking regarding the granting of personal rights, introducing risk-regulation thinking that sets out from a focus on algorithmic decision-making and its risks of harm, an approach that coincides with related European Union AI policy guides and regulatory approaches.


The EU’s Ethics Guidelines for Trustworthy AI point out that AI systems, while bringing substantive benefits to individuals and society, also bring risks and negative effects, and that some such risks are hard to predict, identify, and measure. They state that the development, deployment, and use of AI systems must conform to the four major ethical principles of respect for human autonomy, prevention of harm, fairness, and explicability.


The EU’s White Paper On Artificial Intelligence—A European Approach to Excellence and Trust, continuing this risk and harm-oriented thinking, points out that AI harm may be physical (safety and health of individuals, loss of life and property) or immaterial (invasion of privacy, violation of human dignity, employment discrimination, impairment of autonomy, et cetera) and that the regulatory framework needs to concentrate on lowering the various risks of potential harm, especially the most serious ones.


As such, the Provisions establish a line of reasoning that entails differentiated regulation and graded, categorized management. They require that algorithmic recommendation service providers be subjected to graded, categorized management based on the public opinion attributes or social mobilization capabilities of algorithmic recommendation services, content category, user scale, the importance of data processed by algorithmic recommendation technology, the degree of interference in user behavior, and so on, and they implement a filing management system for algorithmic recommendation services that have public opinion attributes or social mobilization capabilities.


Building on our understanding of the risks and harms of algorithm technology, establishing different regulatory and supervisory paths according to the type and severity of algorithmic harm, using targeted regulation, weighing and balancing through granting rights, and the pursuit of accountability, is an appropriate approach for law to respond to the risks and challenges of new AI technologies.



Cite This Page

王莹 (Wang Ying). "Shutting Algorithms into a Legal Cage [把算法关进法律的笼子]". CSIS Interpret: China, original work published in Caijing E-Law [财经E法], January 8, 2022
