The last few months have been interesting. Two of the leading purveyors of facial recognition technology (Microsoft and Amazon Web Services) have all but admitted that the technology is not yet suitable for unsupervised automated decision-making.
Apparently, humans are still necessary after all. It seems that these companies may be sufficiently concerned about the consequences – and potential liability – that they don’t just recommend, but actually want to legislatively require, human decision-making. If Microsoft gets its way, the home state of these companies might be the first to legislate in this area.
What is going on here? In trying to drive the conversation, Microsoft and AWS are clearly attempting to lead regulators away from regulating the product and towards regulating the user.
It is an interesting gamble. These companies are shedding light on the limitations of their technologies. But behind Microsoft’s stated civil society concerns, there is also a strategic play. By focusing regulation on the user and ensuring that humans must be involved in decision-making to some extent, providers of automated-decision-making services using facial recognition are heading off liability for the decisions made or facilitated by their technology.
This strategy will be a more effective antidote to the potential regulation of the products themselves than any ethical code. Further, by sacrificing one application of machine learning to regulation, they preserve the freedom to operate unimpeded in other areas.
AWS – don’t look at us
Let’s begin with AWS, in part because AWS has taken the most tentative and narrow position in the discussion over potential regulation of facial recognition technologies. Earlier this month and following the introduction of Senate Bill 5376, AWS released its proposed guidelines in a blog post. The guidelines are vanilla and designed to ensure minimal governmental oversight of AWS’s Rekognition technology or, in most cases, its use. After reading the guidelines, one might be tempted to respond – “duh!”. Here they are:
1. Facial recognition should always be used in accordance with the law, including laws that protect civil rights.
2. When facial recognition technology is used in law enforcement, human review is a necessary component to ensure that the use of a prediction to make a decision does not violate civil rights.
3. When facial recognition technology is used by law enforcement for identification, or in a way that could threaten civil liberties, a 99% confidence score threshold is recommended.
4. Law enforcement agencies should be transparent in how they use facial recognition technology.
5. There should be notice when video surveillance and facial recognition technology are used together in public or commercial settings.
Michael Punke, Some Thoughts on Facial Recognition Legislation (February 7, 2019)
It is unfortunate that we even need to articulate that the use of a technology must comply with the law.
However, what is interesting is that all of these guidelines are directed at the user – and more particularly the law enforcement user – of AWS’s technology. AWS does not appear to have any skin in this game when it comes to ensuring the responsible use of its technology and does not seem to want to shine a light on, or believe that government oversight is really required for, other commercial applications.
Microsoft – we’re partly responsible but really it’s the humans
Microsoft’s proposed six principles were announced in a blog post by Microsoft’s President, Brad Smith. Although just as high-level and malleable as AWS’s guidelines, what is striking is that Microsoft takes ownership of and accountability for the use of its technology in a way that AWS does not. Moreover, Microsoft’s approach is much more explicitly normative and linked to fundamental values. Microsoft’s proposed principles are:
1. Fairness. We will work to develop and deploy facial recognition technology in a manner that strives to treat all people fairly.
2. Transparency. We will document and clearly communicate the capabilities and limitations of facial recognition technology.
3. Accountability. We will encourage and help our customers to deploy facial recognition technology in a manner that ensures an appropriate level of human control for uses that may affect people in consequential ways.
4. Non-discrimination. We will prohibit in our terms of service the use of facial recognition technology to engage in unlawful discrimination.
5. Notice and consent. We will encourage private sector customers to provide notice and secure consent for the deployment of facial recognition technology.
6. Lawful surveillance. We will advocate for safeguards for people’s democratic freedoms in law enforcement surveillance scenarios, and will not deploy facial recognition technology in scenarios that we believe will put these freedoms at risk.
Brad Smith, Facial recognition: It’s time for action (December 6, 2018)
However, even Microsoft is eschewing regulation of the product. Instead, Microsoft’s proposal appears to be based on a shared-accountability model. Apart from requiring transparency from developers and requiring that applications be made available for third-party testing, Microsoft’s proposed government regulation is focused on the user. Microsoft’s regulatory recommendations are to:
- Require transparency in terms of the capabilities and limitations of the technology
- Enable third-party testing and comparisons
- Ensure meaningful human review of facial recognition results before making decisions
- Avoid use of facial recognition for unlawful discrimination
- Require notice (and clarify that consumers consent when they have that notice)
- Limit ongoing government surveillance of specified individuals
(As an aside, the suggestion that “the law should specify that consumers consent to the use of facial recognition services when they enter premises or proceed to use online services that have this type of clear notice” is hardly privacy-protective – despite Microsoft’s claims.)
Washington State Senate Bill 5376
The Washington State Senate Bill generating discussion is a broad privacy law bill. However, the key provision for the purposes of this post is Section 14. Section 14 would enact Microsoft’s proposed regulatory approach.
In particular, section 14 would require entities using facial recognition “for profiling” to use “meaningful human review prior to making final decisions based on such profiling where such final decisions produce legal effects concerning consumers or similarly significant effects concerning consumers”.
For the purposes of the Senate Bill, “profiling” is defined as any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyze or predict aspects concerning that natural person’s economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.
The types of decisions in which humans must be involved include, among other things, “denial of consequential services or support, such as financial and lending services, housing, insurance, education enrollment, criminal justice, employment opportunities, and health care services.”
(As an aside, Senate Bill 5376 would also enact the deemed consent provision that Microsoft wants to see.)
Avoiding direct liability and regulation
It is encouraging to see that Microsoft and, to a lesser extent, AWS recognize the risks and potential biases in automated decision-making using facial recognition.
However, what is equally interesting is how they are approaching regulation. They must recognize that civil rights proponents will eventually challenge automated decisions for bias, and that the law may eventually require automated decisions to meet certain minimum criteria of fairness – to be justifiable, transparent and intelligible. Those fairness criteria would be squarely focused on the algorithm.
Imagine you are an employer or credit grantor sued for violations of civil rights: why would you not turn around and point to the provider of the algorithm? And if you were a regulator, would you not eventually regulate the problem at its source?
By getting into the game early and inserting a human in the decision-making process, the providers of these technologies are ensuring that the responsibility for catching and correcting bias lies with the users of their technology.