AI Assistants or Digital Despots?

Why we need an algorithmic bill of rights.
By Kartik Hosanagar


My wife and I were recently in search of a babysitter for our preschooler. Our job post attracted dozens of applicants. While mulling how best to screen them, I heard about a babysitter-scoring start-up called Predictim. Predictim scoured the social media profiles of babysitters and used AI algorithms to calculate risk scores indicating their trustworthiness. As a data scientist who designs algorithms for decision-support and a researcher who studies the impact of algorithmic decisions—and as a parent curious about whether my instincts would be borne out by science—I couldn’t resist giving it a try.

I did resist the temptation to generate risk scores for our babysitter candidates until I understood how the system worked. So first I read some of the first-person accounts shared on the blogosphere by other parents who had test-driven the website. All you needed to initiate an assessment was the applicant’s name, city of residence, and email address. The website evaluated their tweets and Facebook posts to generate risk scores on a 1–5 numerical scale across four dimensions: propensity for bullying, propensity for disrespect, tendency to use explicit content, and signs of drug abuse. Before unleashing the tool on our job candidates, I experimented with family and friends—but not for long: late last year Predictim decided to “pause” its full launch in response to criticism in the media (Facebook and Twitter also blocked the company’s access to their APIs). The website has since shut down.
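To make the mechanics concrete, here is roughly what those inputs and outputs amount to, expressed as simple data structures. The field names, the example values, and the direction of the 1–5 scale are my assumptions for illustration; none of this is drawn from Predictim’s actual system.

```python
# A sketch of the request and report described above. Field names are
# illustrative; the scale direction (5 = highest risk) is an assumption.
from dataclasses import dataclass

@dataclass
class AssessmentRequest:
    name: str   # applicant's name
    city: str   # city of residence
    email: str  # email address, used to locate social media profiles

@dataclass
class RiskReport:
    bullying: int          # propensity for bullying, 1-5
    disrespect: int        # propensity for disrespect, 1-5
    explicit_content: int  # tendency to use explicit content, 1-5
    drug_abuse: int        # signs of drug abuse, 1-5

# Example: the kind of report a parent would have seen for one candidate.
request = AssessmentRequest("Jane Doe", "Philadelphia", "jane@example.com")
report = RiskReport(bullying=1, disrespect=2, explicit_content=1, drug_abuse=1)
print(request)
print(report)
```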

The criticism centered on privacy. It is no doubt important to ask whether it is fair for employers to use this sort of data, but Predictim was hardly an outlier. Seven in 10 companies consider material posted on social media sites when evaluating job applicants, according to a recent survey by CareerBuilder, and 57 percent of those companies have found content that caused them not to hire a candidate. Predictim may simply have attracted attention for making the tool publicly accessible. (It also skirted the permissions and disclosures that, say, a credit check demands, by targeting informal employers and telling its users that the tool shouldn’t be used to “make decisions about employment.”)

Privacy is a legitimate concern with social media. But the main problem is not that companies screen candidates based on publicly accessible social media data; humans conduct this type of due diligence all the time. It is that tools like Predictim provide limited clarity about their inputs (what data the algorithm uses), their performance (how well the algorithm works), and their outputs (how the algorithm communicates its results to users). And Predictim is not alone—not in the world of HR software, nor in the expanding world of algorithmic decision-making, where machines increasingly guide decisions from recruiting to criminal sentencing, college admissions, safe driving routes, and the chances we get asked out on a date. Which is why we should be paying attention.

When I probed Predictim’s design to assess what inputs it was using, several telltale signs suggested that the firm’s algorithms read each word in a social media post. They then employed a neural network—a machine learning technique inspired by how networks of neurons in the human brain fire to record or validate knowledge—to classify each post as bullying or not, respectful or not, salacious or not, and drug-related or not. It’s hard to say how well Predictim’s algorithms distinguished between someone posting about their latest cocaine binge and someone posting about the television show Narcos. Given the current state of the art, chances are that most AI engines will struggle to make that distinction.
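For readers who want to picture what such a system might look like under the hood, here is a minimal sketch of a multi-label post classifier built on a bag-of-words representation and a small neural network. The training posts, labels, and model configuration are invented for illustration; nothing here is Predictim’s actual code or data.

```python
# A minimal multi-label text classifier: bag-of-words features feeding a
# small neural network, one binary flag per category. Training data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

LABELS = ["bullying", "disrespect", "explicit_content", "drug_related"]

train_posts = [
    "binge-watched Narcos all weekend, what a show",
    "anyone know where to score some coke this weekend",
    "can't believe how dumb my coworker is, total idiot",
    "great night out with friends, home safe",
]
train_labels = [            # one 0/1 flag per label, in LABELS order
    [0, 0, 0, 0],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
]

model = make_pipeline(
    TfidfVectorizer(),                       # word-frequency features per post
    MLPClassifier(hidden_layer_sizes=(32,),  # one small hidden layer
                  max_iter=1000, random_state=0),
)
model.fit(train_posts, train_labels)

# Classify a new post across the four categories.
flags = model.predict(["just started another episode of Narcos"])[0]
print(dict(zip(LABELS, flags)))
```

A word-level model like this has only word co-occurrence to go on, which is why telling a Narcos fan apart from a cocaine user is genuinely hard for it.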

Yet AI is improving in this area. Would my criticism fall away once AI could pass the Narcos test? Not really—because that is where performance and outputs come into the picture. For instance, although the tool might be effective at detecting swearing on social media, the company offered no evidence that the use of such words correlates with the reliability of a babysitter. As for outputs, what is Mary Poppins supposed to do when she finds out that Mr. Banks chose a different nanny because an algorithm deemed Poppins’ social media content too risqué? Is there a way for her to understand the reasoning behind her score, or to contest it? Tools like Predictim offer no explanations.
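What would such evidence look like? A minimal sketch, assuming one could obtain both the algorithm’s flags and an independent measure of each sitter’s reliability (the numbers below are entirely hypothetical):

```python
# Check whether a "uses explicit language" flag actually predicts reliability.
# Both arrays are hypothetical; in practice the reliability measure would come
# from references, attendance records, or parent ratings.
import numpy as np

flagged = np.array([1, 0, 0, 1, 0, 1, 0, 0])                      # 1 = posts flagged
reliability = np.array([4.5, 4.8, 3.9, 4.6, 4.2, 4.7, 4.1, 4.4])  # independent 1-5 rating

r = np.corrcoef(flagged, reliability)[0, 1]
print(f"correlation between the flag and reliability: {r:+.2f}")
```

A correlation near zero would mean the flag tells an employer nothing useful about the job, no matter how accurately the swearing itself is detected.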

Algorithmic transparency is technically hard. The best-performing machine-learning models are often also the most opaque. Researchers are working hard to build interpretable AI—machine prescriptions that come with explanations—but it’s unclear whether industry will embrace this approach without pressure from above. Europe’s General Data Protection Regulation (GDPR) attempts to address that: much of the attention has been on its privacy protections, but its transparency demands are just as important in settings like recruiting and credit approval. Predictably, industry leaders argue that this sort of transparency is unduly onerous. But even if explanations of individual decisions are not considered worthy of investment, companies should audit their algorithmic decisions at the aggregate level (to ask, for instance: Are we consistently rejecting job applicants who belong to a certain group? Why?). In the absence of regulation, Mary Poppins’ only chance at an explanation will be pressure from below. We, as users, need to demand better—because it’s not just babysitters on the line.
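An aggregate audit of this kind does not even require interpretable models; it requires only that decisions be logged and outcomes compared across groups. A minimal sketch, using a made-up decision log and the common four-fifths rule of thumb as the flagging threshold:

```python
# Compare approval rates across groups and flag large gaps for human review.
# The decision log and the 0.8 ("four-fifths") threshold are illustrative.
from collections import defaultdict

decisions = [                       # (applicant group, algorithm's decision)
    ("group_a", "approve"), ("group_a", "reject"), ("group_a", "approve"),
    ("group_b", "reject"), ("group_b", "reject"), ("group_b", "approve"),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    approvals[group] += decision == "approve"

rates = {g: approvals[g] / totals[g] for g in totals}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    verdict = "OK" if ratio >= 0.8 else "REVIEW: possible disparate impact"
    print(f"{group}: approval rate {rate:.0%}, "
          f"ratio to best-off group {ratio:.2f} -> {verdict}")
```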

This is why I have proposed an algorithmic bill of rights in A Human’s Guide to Machine Intelligence (Viking, 2019). The purpose of these rights is to offer consumer protection at a time when computer algorithms make so many decisions for or about us. Transparency is one of the key pillars of my proposed bill of rights. Another is user control: algorithm designers should give users some degree of control over how an algorithm makes decisions for them. That control can be as simple as Facebook giving its users the power to flag a news post as potentially false, or as dramatic as letting a passenger intervene when they are not satisfied with the choices a driverless car appears to be making. Companies should also be required to have an audit process in place that evaluates algorithms beyond their technical merits and also considers socially important factors such as the fairness of automated decisions. Together, this algorithmic bill of rights would help ensure that we can harness the efficiency and consistency of automated decisions without worrying about them violating social norms and ethics.

As for our sitter search, we decided to pass on all the candidates. I had a feeling we could find the perfect person if we looked just a bit longer. My reasoning? Just a hunch.


Kartik Hosanagar is the John C. Hower Professor at the Wharton School, where he studies technology and the digital economy. This essay is based on ideas from A Human’s Guide to Machine Intelligence by Kartik Hosanagar.
