Artificial intelligence liability: the rules are changing

I. Introduction

Artificial intelligence (“AI”) use has blossomed. The AI market was valued at $27.3 billion in 2019 and is projected to grow to $266.92 billion by 2026. Associated AI applications have grown along with it. For example, the market for facial recognition technology, much of which uses AI, was valued at $3.72 billion in 2020 and is forecast to grow to $11.62 billion by 2026. At the same time, AI has been known to misidentify faces, among other errors, when used in facial recognition technology. If you are an AI investor or entrepreneur, understanding whether and under what circumstances an AI company can be held liable in the U.S. or E.U. for malfunctioning AI is essential.

II. Applications

The benefits associated with AI applications have grown immensely. In 1996, for example, Lynn Cozart disappeared just days before he was to be sentenced by a Pennsylvania court to spend years in prison for molesting three children. For years, investigators searched for him, but the case went cold. Then, in 2015, the FBI’s team responsible for face recognition searches, the Facial Analysis, Comparison and Evaluation Services, matched Cozart’s mug shot to the face of one “David Stone,” who lived in Muskogee, Oklahoma, and worked at a local Wal-Mart. “After 19 years,” FBI program analyst Doug Sprouse says, “[Cozart] was brought to justice.” AI has also been used to flag “fake news” and “deep fakes.” Cheq, based in Tel Aviv, for example, uses a range of variables to determine the authenticity of content, including a site’s reputation and whether the source of the content is a bot. This can assist with online reputation management.

Notwithstanding these benefits, AI-powered facial recognition technology can misidentify subjects. For example, a 2012 study titled “Face Recognition Performance: Role of Demographic Information,” co-authored by the FBI, found that females were more difficult to recognize than males. It also found that the commercial algorithms tested had their lowest matching accuracy rates on subjects aged 18-30. These error rates can be strikingly high: the algorithm running the London Metropolitan Police’s facial recognition technology was reported at one time to have an error rate as high as 81%.

III. Law

The law has been relatively slow to regulate AI, though there is some case law in the United States concerning the regulation of computerized robotics. For example, in Jones v. W + M Automation, Inc., New York’s Appellate Division dismissed the plaintiff’s product-defect complaint against the manufacturer and programmer of a robotic loading system. In the court’s view, the defendants were not liable for the plaintiff’s injuries at the GM plant where he worked because they showed they “manufactured only non-defective component parts.” As long as the robot and its associated software were “reasonably safe when designed and installed,” the defendants were not liable for the plaintiff’s damages. GM, the end user, however, could still be liable for improperly modifying the hardware or software. The implication is that creators of AI software or hardware are not liable for injuries as long as their products were non-defective when made. That said, AI that is defectively made, or that is modified by a licensee and causes damages as a result, can create liability for the licensor, the licensee, or both. Whether AI is defectively made will depend, as in other product liability cases, on prevailing industry standards.

Recently, the Federal Trade Commission proposed guidelines concerning the regulation of AI. On April 8, 2020, the Commission published a blog post, “Using Artificial Intelligence and Algorithms.” In it, the FTC recommends that those who use or license AI in ways that affect consumer well-being be “transparent,” particularly regarding decisions that affect a consumer’s credit. As such, many decisions concerning the use and implementation of AI in the consumer context can be regulated under Section 5(a) of the FTC Act, which provides that “unfair or deceptive acts or practices in or affecting commerce . . . are . . . declared unlawful.” Thus, to the extent that AI companies make warranties or representations about their products that are untrue or deceptive, the FTC, along with private attorneys general, could hold those companies liable for the resulting damage. The FTC echoed many of these sentiments in a more recent April 19, 2021, post, “Aiming for truth, fairness, and equity in your company’s use of AI.”

The E.U. has also issued AI liability guidelines. In 2019, the E.U. released Liability for Artificial Intelligence and Other Emerging Digital Technologies. The document explains that some applications of AI will warrant strict liability, such as when persons operate “AI-driven robots in public places.” Manufacturers of products that incorporate emerging digital technology, including AI, should, as with other products, be “liable for damage caused by defects in their products[.]” The manufacturer can be liable “even if the defect was caused by changes made to the product [so long as it was still] under the producer’s control.” More recently, the E.U. released a White Paper on Artificial Intelligence. In it, the E.U. explains that additional compliance requirements would apply to “high-risk AI applications,” such as those in healthcare, transport, and energy. These additional requirements include, among other things, keeping records concerning the algorithms used in the AI.

IV. Conclusion

The rules of the road for AI liability in the U.S. and E.U. are still developing. One thing is clear: whether and under what circumstances a company will be liable for its AI depends on whether a defect was present when the AI was released and, in the E.U. at least, whether the application is “high-risk.”

This article originally appeared in the London School of Economics Business Review on August 16, 2021: https://blogs.lse.ac.uk/businessreview/2021/08/16/artificial-intelligenc...
