For anyone who thinks they can develop, deploy, use, or market artificial intelligence with impunity, think again.
Today the Federal Trade Commission (FTC), the Department of Justice Civil Rights Division (DOJ), the Consumer Financial Protection Bureau (CFPB), and the U.S. Equal Employment Opportunity Commission (EEOC) released a joint statement (the “Joint Statement”) reminding the public that “[e]xisting legal authorities apply to the use of automated systems and innovative technologies just as they apply to other practices.” (See Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems.) Specifically, the Joint Statement warned that “automated systems marketed as ‘artificial intelligence’ or ‘AI’” have the potential to magnify problems falling under agency enforcement authority, including “the potential to perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes.” (See Joint Statement.) In other words, automated systems, innovative technologies, and artificial intelligence/AI are all subject to agency authority and rules.
The Joint Statement explained that these federal agencies are “responsible for enforcing civil rights, non-discrimination, fair competition, consumer protection, and other vitally important legal protections” and listed some of each agency’s prior advice concerning the potential negative impact automated systems labeled “artificial intelligence” or “AI” could have on those rights. (See Joint Statement.)
Regarding unlawful discrimination in automated systems, the Joint Statement noted that (1) data and data sets, (2) model opacity and access, and (3) design and use can each contribute to potential discrimination. (See Joint Statement.) These warnings, however, are not new, and the Joint Statement reminded the public of this fact by recounting previous warnings each agency has given concerning the “potentially harmful uses of automated systems,” including the following:
- The Consumer Financial Protection Bureau’s (CFPB) Circular 2022-03, “confirming that federal consumer financial laws and adverse action requirements apply **regardless of the technology being used**.” (Bold emphasis added.) (See Joint Statement.) The Joint Statement further noted that the so-called “it wasn’t me” and “black box” explanations are no defense to technology violating the consumer financial protection laws. (See Joint Statement; see also my prior article Unpacking the FTC’s Warning to “Keep Your AI Claims in Check” for a further explanation of these so-called “defenses.”)
- The Department of Justice Civil Rights Division’s Statement of Interest in Fair Housing Act Case Alleging Unlawful Algorithm-Based Tenant Screening Practices (the “Statement of Interest”), “explaining that the Fair Housing Act applies to algorithm-based tenant screening services.” (See Joint Statement.)
- The Equal Employment Opportunity Commission (EEOC) guidance The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees, “explaining how the Americans with Disabilities Act applies to the use of software, algorithms, and AI to make employment-related decisions about job applicants and employees.” (See Joint Statement.)
- The Federal Trade Commission’s (FTC) prior warning to Keep your AI claims in check, the FTC’s prior writings advising those designing, deploying, using, and marketing AI to be mindful of the potential harms AI can cause, and the consequences the FTC has previously imposed on companies that used improper data sets to train AI models. (See Joint Statement.)
If the Joint Statement were not clear enough for anyone designing, deploying, marketing, or using automated systems and artificial intelligence to understand its seriousness, the FTC’s press release announcing the Joint Statement was even clearer, stating that these federal agencies “resolved to vigorously enforce their collective authorities and to monitor the development and use of automated systems.” (See FTC Chair Khan and Officials from DOJ, CFPB and EEOC Release Joint Statement on AI.) The FTC release went on to remind the public: “There is no AI exemption to the laws on the books, and the FTC will vigorously enforce the law to combat unfair or deceptive practices or unfair methods of competition.” (See FTC Chair Khan and Officials from DOJ, CFPB and EEOC Release Joint Statement on AI.)
What Is the Takeaway?
In my prior article Beware, Quack-Quack Is In the AIr: Is Artificial Intelligence/AI Really So Smart?, I concluded: “Like the characters in 1984, today’s artificial intelligence speaks not with a brain but a larynx. It produces ‘noise uttered in unconsciousness’—at least for now. We are still the humans in the room with voices connected to our brains—at least for now. It is time to use them to guide the conversation.”
In my article Unpacking the FTC’s Warning to “Keep Your AI Claims in Check”, I concluded that “the FTC is warning developers, sellers, and marketers of AI products to be very careful about testing, labeling, and advertising AI products and to ‘Keep your AI claims in check’ or risk experiencing the serious p(ai)n of a FTC investigation or enforcement action.”
My takeaway here is that the days of the wild west for the design, development, deployment, and marketing of artificial intelligence/AI are over; the proverbial federal agency sheriffs are coming to town. And just behind them are the litigators.
In December 2021, Shannon Boettjer, Esq. successfully completed the course Artificial Intelligence: Implications for Business Strategy through the Massachusetts Institute of Technology (MIT) in conjunction with MIT Sloan School of Management and MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). Shannon has presented to the New York State Bar Association (NYSBA) Commercial and Federal Litigation Section Data Security and Technology Litigation Committee concerning the role of federal agencies in regulating the use, development, deployment, and marketing of artificial intelligence and the inherent potential benefits and risks AI poses to society.