
Shutting Algorithms into a Legal Cage

把算法关进法律的笼子

A legal scholar from Renmin University analyzes the legal aspects of the algorithm regulations that take effect in March 2022 and points out the existing problems with technology platforms that the regulations seek to address.



Building on our understanding of the risks and harms of algorithm technology, establishing different regulatory and supervisory paths according to the type and severity of algorithmic harm, using targeted regulation, weighing and balancing through granting rights, and the pursuit of accountability, is an appropriate approach for law to respond to the risks and challenges of new AI technologies.

在理解算法技术风险与侵害的基础上,依据算法侵害类型及其严重程度确立不同规制与监管路线,针对性地监管、赋权制衡、问责追责,是法律应对人工智能新兴技术风险与挑战的妥当方案。

As algorithm application contexts expand, the tentacles of algorithm technology reach into every corner of digital life: e-commerce shopping, browsing for information and knowledge, financial credit, performance management, AI-assisted sentencing, biometric identification, and monitoring and security—all of these bear the imprint of algorithms. People are perpetually entangled in a net of algorithms.

随着算法应用的场景拓展,算法技术的触角伸入数字化生活的每个角落:电商购物、信息知识浏览、金融信贷、绩效管理、AI辅助量刑、生物识别及监控安保都有算法的身影,人们无时无刻不笼罩在算法之网中。

The E-Commerce Law and the Consumer Protection Law were early legislative responses to problems related to algorithms such as big data swindling, algorithmic discrimination, algorithmic black boxes, and algorithmic collusion.

面对大数据杀熟、算法歧视、算法黑箱、算法共谋等算法问题,早前的《电子商务法》与《消费者权益保护法》相继从立法层面作出回应。

The Personal Information Protection Law (“Personal Protection Law”), which was passed on August 20, 2021, introduced the concept of automated decision-making (自动化决策) and included specific provisions on its regulation. On September 17, 2021, the Cyberspace Administration of China and eight other departments issued the Guiding Opinion on Strengthening the Overall Governance of Internet Information Service Algorithms, which stated: “Within about three years, [we should] gradually establish a comprehensive algorithm security governance structure with robust governance mechanisms, a refined supervisory system, and a standardized algorithm ecosystem.” On January 4, 2022, four departments (the Cyberspace Administration of China, the Ministry of Industry and Information Technology, the Ministry of Public Security, and the State Administration for Market Regulation) jointly issued the Provisions on the Administration of Internet Information Service Algorithmic Recommendation (“Provisions”). These Provisions will go into effect on March 1, 2022.

2021年8月20日通过的《个人信息保护法》(下称《个保法》)引入自动决策概念并设置专条进行规制。2021年9月17日,网信办等九部门发布《关于加强互联网信息服务算法综合治理的指导意见》,提出“利用三年左右时间,逐步建立治理机制健全、监管体系完善、算法生态规范的算法安全综合治理格局”。2022年1月4日,国家网信办、工信部、公安部和市场监管总局等四部门联合发布《互联网信息服务算法推荐管理规定》(下称《规定》),该规定将自2022年3月1日起施行。

To date, China has created an initial three-layered system of algorithm regulation, ranging from sectoral laws and data laws to algorithm-specific joint departmental rules. If 2021 can be called “year one” of China’s algorithm governance, marking the transition from responding to algorithms with scattered, traditional sectoral laws to governing them at the data source, then from 2022, with the introduction of the Provisions, China has entered the age of “algorithm law” (“算法法”), with dynamic, whole-of-process regulation of algorithms both before and during their operation.

至此,中国形成了从部门法、数据法到联合部门规章形式的专门算法管理规定等,三个层面的算法初步规制体系。如果说中国2021年开启算法治理元年,从零散传统部门法应对算法过渡到从数据源头端规制算法阶段,伴随《规定》的出台,中国则从2022年开始迈向对算法进行事前事中全流程、动态规制与监管的“算法法”时代。

“The problems resulting from unreasonable applications of algorithms, such as algorithmic discrimination, big data swindling, and induced addiction, profoundly affect the normal dissemination order, market order, and social order and pose a challenge to the maintenance of ideological security, social fairness, and the lawful rights and interests of internet users. There is an urgent need to establish rules and systems for algorithmic recommendation services and strengthen the norms governing them, and to strive to enhance our ability to prevent and defuse security risks relating to algorithmic recommendations and promote the healthy, orderly development of algorithm-related industries.” This official explanation by the Cyberspace Administration of China on the Provisions reveals the thinking that motivated this effort to regulate algorithmic risk. It is also how we interpret the Provisions’ underlying logic: while automated algorithmic decision-making brings efficiency and convenience, it also contains technological risks.

“算法歧视、大数据杀熟、诱导沉迷等算法不合理应用导致的问题也深刻影响着正常的传播秩序、市场秩序和社会秩序,给维护意识形态安全、社会公平公正和网民合法权益带来挑战,迫切需要对算法推荐服务建章立制、加强规范,着力提升防范化解算法推荐安全风险的能力,促进算法相关行业健康有序发展。”国家网信办的官方解读揭示了《规定》算法风险规制的出发点,也是我们解读该规定的底层逻辑:算法自动化决策带来效率、便捷的同时,也蕴含着技术风险。

The flip side of the benefits of algorithmic technology is algorithmic harm, including algorithmic labeling, algorithmic manipulation, and algorithmic discrimination. How to defend against algorithmic harms and risks while sharing in algorithms’ benefits is the logical starting point for the legal regulation of algorithms.

算法技术红利的另一面是算法侵害,包括算法标签、算法操纵、算法歧视等等。如何在分享算法红利的同时抵御算法侵害与风险,是算法法律规制的逻辑基点。

Algorithmic Labeling: Precision Marketing and Over-Recommending

算法标签:精准营销与过度推荐

An algorithm provides big data portraits of users. It creates electronic identity labels that help algorithm service providers or network platforms to sell products or services. Once these labels are formed, they are filed and then circulated among different algorithm service providers, with long-term negative effects on individuals or society.

算法对用户进行大数据画像,形成便于算法服务提供者或网络平台进行产品或服务推销的电子身份标签,该标签一旦形成便会存档,在不同算法服务提供者之间流通,对个人或社会造成长期负面影响。

By attaching different labels to users, big data portraits type users so that merchants can engage in precision marketing and over-recommending. For example, a social-networking app may record keywords containing information relating to money worship and ostentation, pornography and vulgarity in users’ labels and then push similar information to them, thereby encouraging tendencies towards money worship, extravagance, and vulgarity.

大数据画像将用户贴上不同的标签进行分型,便于商家进行精准营销与过度推荐。例如社交App可将包含拜金炫富、色情低俗类的信息关键词记入用户标签,并向其推送类似信息,助长拜金、奢靡或低俗之风。

Evaluations by the Nandu Internet Content Ecology Governance Research Center found that the recommendation interface on the youth-oriented homepage of the Dragonfly FM app contained advertising links to adult content or dating or to vulgar short videos. Some social-networking apps have been exposed as having pushed private pictures of minors and videos that skirt the line with pornography to users who were searching parent-child topics. It is possible that user labels using keywords involving illegal child pornography-related information were set up for these users and that relevant information was pushed to them on this basis.

南都网络内容生态治理研究中心测评曾发现,蜻蜓FM App青少年模式主页推荐界面存在成人内容或情趣交友、低俗短视频等广告链接。一些社交类App此前也被曝向搜索亲子话题的用户推送未成年人隐私图片、色情擦边球视频,可能是对这部分用户以未成年人色情的违法信息关键词设置用户标签,并据此推送相关信息。

To respond to inappropriate algorithmic labeling and the harm it may cause, and to prohibit the use of keywords involving illegal or harmful information to label, classify, and push information to users in a targeted way, Article 10 of the Provisions stipulates: “Algorithmic recommendation service providers shall strengthen user model and user label management and perfect rules for recording interest points in user models and rules for managing user labels. They may not enter keywords involving unlawful or harmful information into user interest points or use them as user labels and use this as a basis for pushing information.” If algorithmic recommendation service providers strengthen management and standardize conduct at the user model and label management stage of algorithm design and deployment, and put an end to keywords involving illegal or harmful information appearing in user model interest point rules or in user labels, they will be able to reduce the nuisance of pornographic and vulgar information to users and purify the online information space. This will also help maintain mainstream values and protect minors.

为应对不当算法标签及其可能带来的损害,禁止以违法和不良信息关键词对用户贴标签、归类并针对性推送信息,《规定》第10条规定,“算法推荐服务提供者应当加强用户模型和用户标签管理,完善记入用户模型的兴趣点规则和用户标签管理规则,不得将违法和不良信息关键词记入用户兴趣点或者作为用户标签并据以推送信息。”如果算法推荐服务者能够在用户模型和标签管理这一算法设计部署环节加强管理,规范行为,杜绝违法不良信息关键词出现在用户模型兴趣点规则或用户标签中,将能够减少色情、低俗信息对用户的滋扰,净化信息网络空间,也有利于主流价值观维护与未成年人保护。

It is worth noting that, compared with the Draft for Comments [of the Provisions], the effective version deleted the stipulation that, “it is forbidden to set up discriminatory or biased user labels.” Since “discriminatory and biased” was overly broad and its meaning was fuzzy and elastic, limiting prohibited labels to those with keywords involving illegal and harmful information was clearly more lenient and more operationally practical.

值得注意的是,相较于《征求意见稿》第10条,生效文本删除了“不得设置歧视性或者偏见性用户标签”的规定,鉴于“歧视性和偏见性”的范围过大、含义模糊弹性,将禁止性标签限定于违法、不良信息关键词显然更为宽松,也更具有可操作性。

In addition, Article 17 of the Provisions specifies that algorithmic recommendation service providers should provide users with an option of not having their personal characteristics targeted or a convenient option to turn off the algorithmic recommendation service, as well as a function for selecting or deleting the user labels that algorithmic recommendation services use to target their personal characteristics. This uses the approach of granting rights, passing the initiative to users so that they can avoid having inappropriate algorithmic labels attached to them. However, user labels differ from personal information; how, technically, the personal information deletion rights granted under the Personal Protection Law can be applied by analogy to support rights to delete and manage user labels remains to be tested in practice.

此外,《规定》第17条规定算法推荐服务提供者应当向用户提供不针对其个人特征的选项,或者向用户提供便捷的关闭算法推荐服务的选项,以及向用户提供选择或者删除用于算法推荐服务的针对其个人特征的用户标签的功能,通过对用户赋权的方式将主动权交给用户,避免其被贴上不当的算法标签。但用户标签不同于个人信息,如何在技术上比照《个保法》赋予的个人信息删除权支持用户标签的删除、管理权,尚待实践进一步检验。

Algorithmic Manipulation: Information Cocoons and Squeezing Labor

算法操纵:信息茧房与劳动压榨

Algorithms can use collected and pooled user data to create huge information advantages. They push information at users to influence their choices and decisions and to manipulate individuals and groups. For example, news recommendations give rise to “information cocoons” and the phenomenon of catering to people’s tastes, pushing to users only the information content that they like, which may then lead users to develop bigoted viewpoints or even to become addicted. Toutiao’s algorithmic recommendations, by capturing, pooling, classifying, ranking, and extracting user-browsed content data, perform matching and push information to users precisely based on their interests and tastes. Not only does this involve the problems of blindly catering to users and maximizing traffic, it may also infringe on user privacy and violate public morality and even the law, for which Toutiao has been singled out for criticism by official media. People wrapped up in information cocoons receive a narrow range of information and find it difficult to gain a comprehensive perspective or form correct values, which results in gaps between social groups and even social cleavages, as well as obstacles to reaching consensus on public issues.

算法能够利用搜集、汇聚的用户数据形成巨大的信息优势,向用户推送信息,影响其选择与决策,实现对个人和群体的操纵。例如,新闻推荐造成“信息茧房”、投其所好,只给用户推送其喜欢的信息内容,可能导致用户意见观点偏执甚至沉迷。今日头条的算法推荐通过对用户浏览内容数据的抓取、汇聚、分类、排序、提取,根据其兴趣爱好进行匹配并向其精准推送,不仅存在一味迎合取悦用户及追求流量最大化问题,而且可能侵犯用户隐私,违反公德甚至法律,曾被官媒点名批评。包裹在信息茧房中的人们接受信息单一,难以获得全面的观察视角、形成正确的价值观,导致社群鸿沟甚至社会撕裂、阻碍公共议题的共识达成。

On August 10, 2021, the Central Propaganda Department and four other departments jointly issued the Guiding Opinion on Strengthening Literary Criticism Work in the New Era, which emphasized “strengthening network algorithm research and guidance, carrying out comprehensive governance of network algorithmic recommendations, and not providing dissemination channels to erroneous content.” While that Opinion was just an administrative guide, the Provisions establish rules at the joint departmental regulatory level with regard to algorithmic manipulation which might result in information cocoons, such as algorithmic information blocking, over-recommending, manipulation of lists, and control of hot searches. These rules include: “Algorithmic recommendation service providers shall provide users with an option of not having their personal characteristics targeted, or provide users with a convenient option to turn off algorithmic recommendation services” (Article 17) and prohibition of “the use of algorithms to block information, over-recommend, manipulate lists or search result rankings, control hot searches or selections, et cetera, to interfere in information presentation, engaging in behaviors to influence network public opinion or evade regulation” (Article 14). In relation to the protection of minors, algorithmic recommendation service providers are prohibited from “pushing information to minors which may adversely influence minors’ physical and mental health, such as that which may incite minors to imitate unsafe conduct or violate social morality or may lead minors towards harmful tendencies, and must not use algorithmic recommendation services to induce minors to develop internet addiction” (Article 18).

2021年8月10日,中宣部等五部门联合印发了《关于加强新时代文艺评论工作的指导意见》,强调“加强网络算法研究和引导,开展网络算法推荐综合治理,不给错误内容提供传播渠道”。该意见仅是一种行政指导,而此次《规定》从部门联合规章层面对算法屏蔽信息、过度推荐、操纵榜单及控制热搜等可能造成信息茧房的算法操纵行为作出规定。包括,“算法推荐服务提供者应当向用户提供不针对其个人特征的选项,或者向用户提供便捷的关闭算法推荐服务的选项”(第17条),禁止“利用算法屏蔽信息、过度推荐、操纵榜单或者检索结果排序、控制热搜或者精选等干预信息呈现,实施影响网络舆论或者规避监督管理行为”(第14条)。在未成年人保护方面,禁止算法推荐服务提供者“向未成年人推送可能引发未成年人模仿不安全行为和违反社会公德行为、诱导未成年人不良嗜好等可能影响未成年人身心健康的信息,不得利用算法推荐服务诱导未成年人沉迷网络”(第18条)。

What merits commendation is that the Provisions are not limited to [stipulating on] results-stage applications of algorithms such as inducing addiction and influencing public opinion; they also move regulation and supervision upstream to the algorithm development and design stage. They penetrate the technical layer of algorithms and make clear the principle of open and transparent algorithmic recommendation services, encouraging algorithmic recommendation service providers to comprehensively employ strategies such as content de-weighting and intervention scattering, and encouraging them to boost rule transparency and explicability, in order to establish technical guardrails against abuses of algorithmic manipulation such as information cocoons and public opinion manipulation (Article 12). The Provisions introduce the perspective of AI law as a fusion of technology and law. They also highlight the regulatory characteristic of “algorithm law”: dynamic, technical regulation of the entire process, both before and during algorithm operation.

值得肯定的是,《规定》未局限在诱导沉迷、舆论影响等算法应用的结果端,也将规制与监管前置到算法开发、设计的行为端,深入算法技术层面,明确算法推荐服务公开透明的原则,鼓励算法推荐服务提供者综合运用内容去重、打散干预等策略、增强规则透明度和可解释性等,为信息茧房、舆论操纵等算法操纵乱象设置技术护栏(第12条)。该规定引入了人工智能法律规制技术与法律融合的视角,也彰显了“算法法”的事前事中全流程、动态性、技术性规制特征。

Aside from information cocoons and inducing addiction, algorithms manipulate people in areas such as performance management and platform algorithm dispatching. The use by food delivery platforms of distribution algorithms to squeeze and manipulate riders, violating riders’ labor rights and even threatening their lives and health by causing traffic accidents, was captured in the article “Delivery Riders, Stuck in the System” in 2020, attracting widespread attention to the relationship between algorithms and digital labor. Under tremendous pressure from public opinion, Meituan adjusted its distribution algorithm, integrated multiple time-estimation models, and replaced the original predicted order delivery “time point” with a flexible “time period.” In setting up the algorithm, it took into consideration protection of rider labor rights.

除了信息茧房与引诱沉迷,算法对人的操纵也体现在算法绩效管理、平台算法派单等方面。外卖平台利用配送算法对骑手进行压榨与操纵,侵害骑手劳动权甚至威胁骑手的生命健康,引发交通安全事故,2020年《外卖骑手,困在系统里》一文引起舆论对算法与数字劳动关系的广泛关注。在强大的舆论压力面前,美团对配送算法进行调整,整合多种估算时间模型,将原先的订单预计送达“时间点”变更为弹性的“时间段”,在算法设置中引入骑手劳动权利保护考量。

In the age of AI algorithms, it is not only delivery riders who are stuck in the system; the work modes of couriers, online car-hailing drivers, and other occupations all increasingly rely on, and are commanded and constrained by, algorithms. In view of this, the Provisions establish standards specifically for algorithm applications directed at workers, stipulating that “where algorithmic recommendation service providers provide work dispatch services to workers, they shall protect the lawful rights and interests of workers such as remuneration for labor, breaks, and vacations, and establish and improve algorithms relating to platform orders and allocation, remuneration composition and payment, work time, rewards and penalties, et cetera” (Article 20).

在人工智能算法时代,困在系统里的不仅仅是外卖骑手,快递员、网约车司机等职业工作模式都越来越多地依赖算法、受算法的指挥与制约。有鉴于此,《规定》专门为针对劳动者的算法应用设规立范,规定“算法推荐服务提供者向劳动者提供工作调度服务的,应当保护劳动者取得劳动报酬、休息休假等合法权益,建立完善平台订单分配、报酬构成及支付、工作时间、奖惩等相关算法”(第20条)。

We can expect that, with the promulgation and enforcement of the Provisions, in the protection of rights and interests associated with platform economy digital labor we will bid farewell to an era wherein protection can only depend upon hot events and viral online essays. The protection of digital labor rights and interests will be implemented in an institutionalized way as an important part of platform enterprise compliance. The hope is that the above stipulations can truly play a role, guiding platforms—as algorithmic recommendation service providers—to design and apply algorithms which are more humane and compassionate, and that workers will no longer be pushed around by algorithms and reduced to serving as pawns in a heartless system.

可以预计,随着《规定》的颁行,平台经济数字劳动权益保护将告别仅靠热点事件和网络爆文推动的时代,数字劳动权益保护将作为平台企业合规的重要内容予以制度化贯彻。期待上述规定能够真正发挥作用,引导平台作为算法推荐服务提供者设计及应用更具有人情味的、有温度的算法,劳动者也不再被算法裹挟、沦为冰冷的系统里任人摆布的棋子。

Algorithmic Discrimination and Big Data Swindling

算法歧视与大数据杀熟

“Big data swindling” (大数据杀熟), which consumers in the digital economy have often denounced, uses algorithms to achieve price discrimination and is another form of algorithmic discrimination. The algorithm creates user portraits by collecting user data and offers different prices based on users’ preferences, so as to achieve precision marketing and profit maximization.

数字经济中颇受消费者诟病的“大数据杀熟”通过算法实现“价格歧视”,也属于一种算法歧视。算法通过搜集用户数据进行用户画像,根据其偏好提供不同定价,实现精准营销和收益最大化。

On November 26, 2020, the Consumer Attitudes and Perceptions Concerning Online Platform Conduct and Market Competition questionnaire survey report published by the Nandu Anti-Monopoly Research Team revealed that the problems that most concerned survey respondents were, in order of importance: misuse of personal data (50%), big data swindling (20%), excessively precise pushing, hard-to-discern paid search results, “either-or” platform ultimatums, internet blocking, forced tie-in sales, and platform self-preferencing. Seventy-three percent of respondents opposed “big data swindling.”

2020年11月26日南都反垄断研究课题组发布的《消费者对在线平台行为与市场竞争的态度和认知》问卷调查报告显示:最受受访者重视的问题依序是个人数据被滥用(占比50%)、大数据杀熟(占比20%)、过度精准推送、难以辨别付费搜索结果、平台二选一、互联网屏蔽、强制搭售和平台自我优待。其中73%受访者对“大数据杀熟”持反对态度。

Previously, in August 2020, the Ministry of Culture and Tourism issued the Interim Regulations Governing Online Tourism Business Services, prohibiting online tourism businesses from using pricing algorithms to achieve big data “swindling.” Article 18 of the E-Commerce Law and Article 24 of the Personal Protection Law obligate information-pushing and commercial marketing algorithms to also offer an option that does not target personal characteristics. The State Council Anti-Monopoly Commission Antitrust Guidelines on the Platform Economy, issued on February 7, 2021, likewise regulate, from an antitrust law perspective, conduct by platform economy businesses with dominant market positions that implements differentiated treatment based on big data and algorithms.

此前,文旅部2020年8月发布《在线旅游经营服务管理暂行规定》,禁止在线旅游经营者利用定价算法实施大数据“杀熟”。《电子商务法》第18条与《个保法》第24条规定信息推送、商业营销算法应同时提供不针对其个人特征选项的义务。2021年2月7日发布的《国务院反垄断委员会关于平台经济领域的反垄断指南》也从反垄断法角度对具有市场支配地位的平台经济领域经营者基于大数据和算法实施差别待遇行为进行规制。

The Provisions, which soon go into effect, establish targeted norms for algorithmic recommendation service providers. They stipulate that, when selling products or providing services to consumers, providers shall protect consumers’ right to fair trading and must not use algorithms, on the basis of consumer preferences, transaction habits, or other such characteristics, to engage in unlawful conduct such as extending unreasonably differentiated treatment in trading conditions such as price (Article 21).

此次《规定》针对性地对算法推荐服务提供者设置行为规范,规定其向消费者销售商品或者提供服务的,应当保护消费者公平交易的权利,不得根据消费者的偏好、交易习惯等特征,利用算法在交易价格等交易条件上实施不合理的差别待遇等违法行为(第21条)。

Given platforms’ advantages in data and algorithm technology, to defend against the above forms of algorithmic harm the Provisions not only regulate the conduct of algorithmic recommendation service entities at both the results and behavioral stages, but also continue the notion of data rights granting found in the GDPR and the Personal Protection Law, “granting algorithm rights” to users while imposing corresponding obligations on platforms. For example, they stipulate algorithm explicability and safeguard users’ right to be informed, requiring platforms to conspicuously disclose the basic principles, objectives, and main operating mechanisms of their algorithms; they grant users the right to withdraw from personalized recommendations, requiring platforms to provide both an option that does not target personal characteristics and a convenient option to turn off algorithmic recommendation services; and they grant users the right to manage labels, along with the possibility of intervening in input data. Though the operability of the relevant technology still awaits testing, these innovative efforts at granting algorithm rights merit approval.

鉴于平台的数据与算法技术优势,要抵御上述算法侵害,除了从结果端与行为端规制算法推荐服务主体的行为,《规定》也延续了GDPR与《个保法》数据赋权的理念,对用户进行“算法赋权”,同时对平台施加相应义务。例如,规定算法可解释性、保障用户知情权,要求平台以显著方式告知算法基本原理、目的意图和主要运行机制;要求平台向用户提供不针对其个人特征的选项、提供便捷的关闭算法推荐服务选项的义务等赋予用户退出个性化推荐的权利;赋予用户对标签的管理权及提供输入数据干预的可能性,尽管相关技术可操作性尚待检验,其在算法赋权方面的创新性努力值得肯定。

At the same time, in keeping with the principle of algorithm accountability, the Provisions require algorithmic recommendation service providers to assume responsibility as algorithm security entities and establish and improve management systems and technical measures such as algorithm mechanism audits, S&T ethical reviews, user registration, information release audits, data security and personal information protections, counter measures for telecommunication and online fraud, security assessment and monitoring, and security incident emergency response arrangements. Articles 31 through 34 establish the corresponding legal liability clauses, which provide a basis for holding algorithm violators accountable.

同时,《规定》也基于算法问责性原则,要求算法推荐服务提供者落实算法安全主体责任,建立健全算法机制机理审核、科技伦理审查、用户注册、信息发布审核、数据安全和个人信息保护、反电信网络诈骗、安全评估监测、安全事件应急处置等管理制度和技术措施,并在第31条-34条设置了相应法律责任条款,为算法侵害问责及追责提供依据。

In sum, the Provisions go beyond the GDPR’s data-protection thinking of granting personal rights, introducing risk regulation thinking that takes algorithmic decision-making and its risks of harm as its starting point, an approach that coincides with related European Union AI policy guides and regulatory paths.

综上,《规定》超越数据保护的GDPR个人赋权思维,引入以算法决策及其侵害风险为出发点的风险规制思维,也与欧盟相关AI政策指引与规制路径不谋而合。

The EU’s Ethics Guidelines for Trustworthy AI point out that AI systems, while bringing substantive benefits to individuals and society, also bring risks and negative effects, and that some such risks are hard to predict, identify, and measure. They state that the development, deployment, and use of AI systems must conform to the three major ethical principles of respect for human autonomy, prevention of harm, and fairness and explicability.

欧盟《可信AI伦理指引》指出AI系统在给个人和社会带来实质性利益的同时也带来风险与负面影响,其中一些风险是很难预料、识别及测量的,并提出AI系统的开发、配置及适用需符合尊重人的自治、预防侵害、公平和可解释性的三大伦理原则。

The EU’s White Paper On Artificial Intelligence—A European Approach to Excellence and Trust, continuing this risk and harm-oriented thinking, points out that AI harm may be physical (safety and health of individuals, loss of life and property) or immaterial (invasion of privacy, violation of human dignity, employment discrimination, impairment of autonomy, et cetera) and that the regulatory framework needs to concentrate on lowering the various risks of potential harm, especially the most serious ones.

欧盟《人工智能白皮书——通往卓越与信赖的欧洲路径》延续这一以风险和侵害为导向的思维,指出AI侵害可能是物质性的(个人安全健康、生命与财产损失)和非物质性的(侵犯隐私、侵犯人类尊严、就业歧视、妨害意思自治等方面),规制框架需集中于降低不同的潜在侵害的风险,尤其是那些最为严重的侵害。

As such, the Provisions establish a line of reasoning that entails differentiated regulation and graded, categorized management. They require that algorithmic recommendation service providers be subjected to graded, categorized management based on the public opinion attributes or social mobilization capabilities of algorithmic recommendation services, content category, user scale, the importance of data processed by algorithmic recommendation technology, the degree of interference in user behavior, and so on, and they implement a filing management system for algorithmic recommendation services that have public opinion attributes or social mobilization capabilities.

因此,《规定》确立区分监管、分级分类管理的思路,要求根据算法推荐服务的舆论属性或者社会动员能力、内容类别、用户规模、算法推荐技术处理的数据重要程度、对用户行为的干预程度等对算法推荐服务提供者实施分级分类管理,并对具有舆论属性或者社会动员能力的算法推荐服务实施备案管理制度。

Building on our understanding of the risks and harms of algorithm technology, establishing different regulatory and supervisory paths according to the type and severity of algorithmic harm, using targeted regulation, weighing and balancing through granting rights, and the pursuit of accountability, is an appropriate approach for law to respond to the risks and challenges of new AI technologies.

在理解算法技术风险与侵害的基础上,依据算法侵害类型及其严重程度确立不同规制与监管路线,针对性地监管、赋权制衡、问责追责,是法律应对人工智能新兴技术风险与挑战的妥当方案。


Cite This Page

王莹 (Wang Ying). "Shutting Algorithms into a Legal Cage [把算法关进法律的笼子]". CSIS Interpret: China, original work published in Caijing E-Law [财经E法], January 8, 2022
