FTC: Businesses Need to Exercise Caution When Relying on AI Tools to Reduce Harm Online | Holland & Knight LLP

In a report released on June 16, 2022, the Federal Trade Commission (FTC) cautions against relying on artificial intelligence (AI) to combat online harms. In the report, which was issued in response to a congressional directive,1 the FTC found that the use of AI has not significantly reduced online harm overall.

While the FTC Report Acknowledges Positive Uses of AI in Combating Harmful Content, It Warns of “Overreliance”

The FTC report finds AI effective in combating harms that require little context to detect, such as illegal items sold online and child pornography, and recognizes that AI systems can be effective in preventing the spread of inadvertently harmful information. AI can be used for “interventions” or “friction” before harmful content spreads, including labeling content, adding interstitials and sending warnings. These measures, however, cannot prevent the malicious dissemination of information.2

Platforms can also use AI tools to tackle online harm by identifying the networks and actors behind it. AI tools can facilitate cross-platform mapping of communities that spread harmful content. However, these strategies can also inadvertently sweep in marginalized communities that use protected methods to communicate under authoritarian regimes.

Notwithstanding that the use of AI may be inevitable, the FTC is concerned about relying on AI to fight online harms and warns against overreliance for several reasons.

First, AI tools are inherently inaccurate. The datasets used to train AI systems are often not large, accurate or representative enough, and their classifications can be problematic. AI tools are generally poor at detecting new phenomena, and their operation is subject to platform moderation policies that can themselves be significantly flawed.

Second, AI tools are often unreliable at understanding context, so they usually cannot effectively detect fraud, fake reviews and other implicitly harmful content.

Third, the use of AI cannot solve, and can instead exacerbate, prejudice and discrimination. Inappropriate datasets and a lack of diverse perspectives among AI designers can worsen discrimination against marginalized groups. Large technology companies can influence institutions and researchers and set the agenda for government-funded AI research. And AI tools used to uncover the networks and actors behind harmful content can inadvertently suppress minority groups.

Fourth, the development of AI may incentivize invasive consumer surveillance, as improving AI systems requires the collection of large amounts of accurate and representative training data.

Fifth, bad actors can easily evade AI detection by hacking, developing their own AI technology, or simply using typos and euphemisms.

Finally, the massive volume of ordinary, ubiquitous posts that express discriminatory sentiments cannot be effectively detected by AI, even with human oversight.

The FTC Report’s Proposed Recommendations

The report identifies increasing the transparency and accountability of those deploying AI as a top priority. It highlights the importance of increasing the diversity of datasets and of AI designers and moderators to combat bias and discrimination, and it finds that human oversight is a necessity.

Transparency

The FTC report notes that to increase transparency, platforms and other entities should do the following:

  1. Make consumers sufficiently aware of their basic civil rights and how those rights are affected by AI. The report emphasized that consumers have the right not to be subjected to inaccurate and biased AI, the right not to be subjected to pervasive or discriminatory surveillance and control, and the right to meaningful redress if the use of an algorithm harms them.
  2. Give researchers access to sufficient, useful and intelligible data and algorithms so that they can properly analyze the usefulness of AI and the spread and impact of disinformation.
  3. Keep auditing and evaluation independent while protecting auditors and whistleblowers who report illegal AI use.

Accountability

The FTC report emphasized that to increase accountability, platforms and other entities should conduct regular audits and impact assessments, be held accountable for the results and impact of their AI systems, and provide appropriate redress for erroneous or unfair algorithmic decisions.

Diversity – Assess Through a Diverse Lens

The FTC report recommends increasing the diversity of datasets, AI designers, and moderators. Companies must retain people with diverse perspectives and must strive to create and sustain diverse, equitable and inclusive cultures. AI developers need to be aware of the context in which data is used and the potential discriminatory harms it could cause, and mitigate those harms in advance.

Human Oversight

The FTC emphasizes the importance of proper training and workplace protections for the human moderators and reviewers who work alongside AI tools. Training should address moderators’ implicit biases and their tendency to be overly deferential to AI decisions.

The FTC also encourages platforms and other Internet entities to use algorithmic impact assessments (AIAs) and audits, and to document the results of those assessments in a standardized manner. AIAs assess the impact of an AI system before, during or after its use; they allow companies to mitigate poor outcomes in a timely manner, and the FTC and other regulators can draw on them in investigations into deceptive and unfair trade practices. An audit, by contrast, focuses on evaluating the outputs of an AI model.

Two Commissioners Criticize the Report

Commissioner Noah Joshua Phillips issued a dissenting statement, and Commissioner Christine S. Wilson listed several disagreements with the report in a concurring statement. The two commissioners based their criticisms on three grounds.

First, the agency did not solicit enough feedback from stakeholders. The commissioners view the FTC report as a literature review of academic papers and news stories about AI. They note that the authors did not consult any Internet platform to learn how those platforms view the effectiveness of AI, and they observe that the report frequently cites the work and opinions of current FTC employees, arguing that this volume of self-citation calls the objectivity of the report into question.

Second, they believe the report’s recommendations could have the counterproductive effect of making compliant entities more vulnerable to FTC enforcement action.3

Third, they conclude that the report’s negative assessment of the use of AI in the fight against online harm lacks merit. They note that conclusions about AI’s ineffectiveness are sometimes based merely on the fact that AI tools do not eliminate harmful content completely, and they observe that the report lacks a cost-benefit analysis of whether the time and money saved by using AI tools to combat harmful content outweighs the costs of those tools missing a certain percentage of that content.

Takeaways

  1. Collect only the information necessary to provide the service or product. The FTC is not against implementing innovative AI tools to prevent fraud or fake reviews, but it encourages data minimization: companies should tailor their data collection to what they need to provide their services or products.
  2. Be transparent. The FTC may require social media platforms and other Internet entities to disclose enough information for consumers to make informed decisions about whether and how to use a given platform. The FTC may also require entities to grant researchers a degree of access to their data and algorithms.
  3. Be accountable. The FTC can hold platforms and other Internet entities accountable for the impact of their AI tools, particularly if the AI infringes the rights of marginalized groups, even when the tools are intended to combat harmful content.
  4. Improve human oversight. The FTC may encourage standardized training for AI moderators and reviewers and improved workplace protections for them.
  5. Refrain from invasive consumer surveillance. Consumers’ privacy interests outweigh gains in the accuracy and usefulness of AI tools.
  6. Beware of potential free speech conflicts when “prebunking” disinformation.
  7. The FTC could conduct further research into the use of AI to combat online harm, and its conclusions could change significantly depending on the sources it decides to consult.

Notes

1 See Statement of Commissioner Alvaro M. Bedoya Regarding the Report to Congress on Combatting Online Harms Through Innovation, FTC (June 16, 2022) (acknowledging that in the 2021 Appropriations Act, Congress directed the Commission to report on the use of AI to detect or address harmful online content, including fake reviews, opioid sales, hate crimes and election-related misinformation).

2 Commissioners Christine S. Wilson and Noah Joshua Phillips are concerned about the “prebunking” of misinformation that the report found effective. Both point out in their statements that prebunking information that is not verifiably false, but may be false, could create free speech issues. See Dissenting Statement of Commissioner Noah Joshua Phillips Regarding the Combatting Online Harms Through Innovation Report to Congress and Concurring Statement of Commissioner Christine S. Wilson Regarding the Report to Congress on Combatting Online Harms Through Innovation, FTC Public Statements (June 16, 2022).

3 In 2021, the FTC filed a lawsuit against an ad exchange company for violating the Children’s Online Privacy Protection Act (COPPA) and Section 5 of the FTC Act. The company claimed to take a unique human and technological approach to traffic quality, using human review to ensure compliance with its policies and to rank websites. The company’s human review failed, but it was that human review that provided the “actual knowledge” necessary for the Commission to obtain civil penalties under COPPA; had the company relied entirely on automated systems, it might have avoided monetary liability. United States v. OpenX Technologies, Inc., Civil Action No. 2:21-cv-09693 (C.D. Cal. 2021).

Lucas E. Kelly