Why algorithms are bad for you (Pew/Elon 2016)


Statue of al-Khwārizmī, the 9th-century mathematician whose name gave us “algorithm”


I’ve written a lot about the Pew Research Center. Pew does a great deal of invaluable survey research on the behaviors and attitudes we develop online (okay, “we” means American here). In a departure from the science of probability surveys, Pew teamed up with researchers at Elon University back in 2004 to launch their Imagining the Internet project.


About every two years, the team prepares a set of questions that’s sent to a list of stakeholders and experts around the world. The questions reflect current hot-button items – but ask the participants to imagine how online trends will look a decade from now. The topics have ranged from broad social concerns like privacy and hyperconnectivity, to more technology-oriented questions like cloud computing and Big Data.

The 7th version of the survey was fielded this summer; it’s my 4th shot at predicting what life will be like in 2025. (For a look at what the survey tackled in 2014, see my posts starting with one on security, liberty and privacy.)

The survey format has a one-two punch. First, a question is posed about the chosen issue, usually in a way that requires a straight-up yes or no answer. That gives the researchers heads to count; then the participants are invited to write as much as they wish to explain their one-word answers. The results make for terrific reading.

The current survey asks five questions, covering: i) the tone of public discourse; ii) education innovation for future skills; iii) the opportunities and challenges of algorithm-based everything and the Internet of Things; iv) trust in interaction; and v) the effects of ever-increasing connectedness.


As I’ve done some writing recently on algorithms, I was interested in starting with that question. (An algorithm is a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer – like returning results from a search engine.) The survey setup points directly to the dilemma:
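To make that parenthetical concrete, here's a toy sketch of an "algorithm" in the search-engine sense: a fixed set of rules that scores each document by how many query words it contains, then returns matches from best to worst. (This is a hypothetical illustration only; real search engines use vastly more sophisticated ranking.)

```python
# Toy "search engine" algorithm: score documents by keyword
# overlap with the query, then return matches in descending order.
# Hypothetical sketch -- not how any real search engine works.

def rank_documents(query, documents):
    query_words = set(query.lower().split())

    def score(doc):
        # Count how many distinct query words appear in the document.
        return len(query_words & set(doc.lower().split()))

    # Sort from best match to worst; drop documents with no match.
    scored = [(score(d), d) for d in documents]
    return [d for s, d in sorted(scored, key=lambda p: -p[0]) if s > 0]

docs = [
    "survey research on internet privacy",
    "recipes for apple pie",
    "privacy and security in online commerce",
]
print(rank_documents("internet privacy", docs))
```

Even this trivial example embodies choices (how to score, what to drop) that its users never see, which is the crux of the survey question.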

“Algorithms will continue to have increasing influence over the next decade, shaping people’s work and personal lives and the ways they interact with information, institutions (banks, health care providers, retailers, governments, education, media and entertainment) and each other. The hope is that algorithms will help people quickly and fairly execute tasks and get the information, products, and services they want. The fear is that algorithms can purposely or inadvertently create discrimination, enable social engineering and have other harmful societal impacts.

“Will the net overall effect of algorithms be positive for individuals and society or negative for individuals and society?”

We were given three options in answering: negatives outweigh positives, positives outweigh negatives, or the overall impact will be about 50-50. I went straight for the negatives. Here’s my answer…

Three factors play a big role in shaping our online lives: the endless search for convenience; widespread ignorance as to how digital technologies work; and the sacrifice of privacy and security to relentless improvements in the efficiency of e-commerce. What these factors have in common is that they all make the bad outcomes of algorithms even worse. These outcomes can be benign, like a Netflix movie list that doesn’t quite match your tastes. They can also be harmful, like criminal punishments determined by algorithms that predict recidivism with unacceptably high – and damaging – error rates.

The success of established online businesses like Facebook is premised on using increasingly sophisticated techniques to target users by predicting the content they’ll want to read and watch, along with the stuff they’ll want to buy from advertisers. In solving problems like finding “news” suitable for a user’s news feed, however, predictive algorithms create other problems. For example, ad-supported services have a voracious appetite for personal data to feed their algorithms. The resulting aggregation of personal data undermines both privacy and security in ways users can’t see or understand clearly, like making customer accounts more appealing targets for hackers. Algorithms can have even worse consequences in the hands of government agencies. With the Internet of Things settling upon us, surveillance by the state will keep getting easier and more tempting – leading to more government by algorithmic regulation. Because algorithms are neither neutral nor foolproof, and people in positions of authority crave the efficiencies they can get cheaply from a black box, the potential for abuse is high, especially in the management of perceived societal risks.

The dilemma is that algorithms do offer benefits, but in ways that are often positive for businesses and government institutions, while negative for individual end-users. End-users crave convenience online and everyone wants to sell it to them – a process that algorithms excel at. This unholy pact is especially welcome to mainstreamers when they’re using digital technologies that are hidden, unknowable and entirely beyond their control. Since most people don’t even know what they’re paying their ISP for every month, it’s going to be very difficult to help mainstreamers understand that algorithms are not neutral, and that their benefits come at a price. Disclosure policies may help, but not if they come to resemble the obscure, long-winded privacy policies that have become standard on the Internet.