“My father once told me, ‘We don’t choose the things we believe in; they choose us.’” (“Minority Report,” 2002)
What do some of the most popular sci-fi films of all time have in common? Themes of human existentialism and the ambiguous nature of technology. In films like “2001: A Space Odyssey” (1968), “Westworld” (1973), “The Terminator” (1984) and “Minority Report” (2002), the wonders of technology and artificial intelligence (AI) soon give way to human-like dissension and aggression.
At times, the Hollywood treatment does a great disservice to the actual application of AI technology, and it drives much of the common discussion about the potential for artificial intelligence to render us irrelevant in the workforce. However, popular-culture-inflected presumptions about the use of technology shouldn’t necessarily be ignored, particularly in the case of predictive policing, also known as crime forecasting.
Theoretically, predictive policing is the perfect nexus of technology, social media and community awareness. After all, today’s world has become incredibly small — thanks to the internet, everyone is connected to everything. We now have amazing visibility into events, situations and opinions all throughout the world. When trending hashtags on Twitter align with significant events, they lend themselves not just to a neighborhood watch community, but a global one.
According to Network World, an online editorial geared towards providing insight into business solutions, “Crime has patterns just like everything else humans do when we’re viewed as a large enough group. … This is the world of predictive analytics software; the scientific version of a crystal ball. Instead of peering into a glass globe you peer into (ideally) massive amounts of data and using Big Data mining techniques such as statistics, modeling, and machine learning software you look for patterns that are indicative of current or future behavior.”
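The pattern-mining idea in that description can be illustrated with a toy sketch. The incident log, place names and threshold below are all invented for illustration; real predictive analytics systems use far richer data and statistical models, but the core move — counting recurring combinations in historical records and flagging the ones that recur — looks like this:

```python
from collections import Counter

# Toy incident log of (neighborhood, hour-of-day) pairs.
# Every value here is hypothetical, purely for illustration.
incidents = [
    ("riverside", 23), ("riverside", 22), ("riverside", 23),
    ("downtown", 14), ("downtown", 15), ("riverside", 23),
    ("hillcrest", 9),
]

def hotspots(log, threshold=2):
    """Flag (place, hour) combinations seen more than `threshold` times."""
    counts = Counter(log)
    return [key for key, n in counts.items() if n > threshold]

print(hotspots(incidents))  # [('riverside', 23)]
```

A real system would replace the raw counts with statistical modeling and machine learning, as the quote notes, but the output is the same in kind: a ranked guess about where patterns of past behavior point next.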
Similarly, IBM’s Hub, a forum that provides thought leadership about big data and analytics, explained that the Internet of Things (IoT) organically leads to what they term the “public safety Internet of Things.” The public safety IoT is made up of “a plethora of connected systems, including social media … facial recognition, sensor networks … and many others.” The public safety IoT leads directly into predictive policing systems, as it combines the entire internet with a broad network of systems, including government-operated ones. The Hub explains that “Predictive policing systems are reducing crime and agency operating costs today. Their capabilities will only continue to expand for the foreseeable future, supporting a more proactive, finely tuned and cost-effective police force.”
What’s troubling is that both explanations sound like a real-life version of “Minority Report,” a world where crime, murder and general mayhem are kept at bay thanks to the PreCrime program, a mutant-centered approach towards policing. “Precogs,” children of drug addicts who have foreknowledge — the ability to glance into the future — deliver reports about murders that haven’t yet happened. Armed with those reports, the PreCrime police department in the year 2054 pre-emptively arrests the criminal-to-be, therefore reducing the murder rate to zero. However, as the main characters in the film discover, PreCrime is inherently flawed because of human beings’ proclivity for free will. Precogs don’t just have one immaculate vision per crime-to-be. They have three, and their visions must be translated by precog technicians, who analyze all three visions before determining the most likely scenario.
And that’s where issues crop up regarding the use of technology to prevent crime.
The fallout from misusing the tools that power predictive policing, however, is far too real to ignore. It goes beyond worrying about authorities overreaching and present-day life turning into “1984.” While determining the average behavior of a population sounds like a good idea, questions about personal privacy and the potential for abuse of surveillance tools inevitably, and for good reason, arise. Consider the recent controversy surrounding the location-based social media surveillance startup Geofeedia.
Frankly, Geofeedia’s technology is a fascinating application of social media monitoring: leverage public information so that the company’s clients can monitor social media posts, and the geographic data tied to those posts, for their benefit. The main defense for Geofeedia’s actions is that it was only scraping publicly available information through the APIs offered by Facebook, Twitter and Instagram. Unfortunately, Geofeedia’s seemingly innocuous platform was utilized by law enforcement agencies specifically for the discriminatory gathering and monitoring of intelligence. Geofeedia’s disconcerting relationship with police agencies came to light after the American Civil Liberties Union of Northern California requested access to records showing who provided and leveraged user data via Geofeedia.
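The mechanics of location-based monitoring are simple enough to sketch. The posts and field names below are assumptions, not the shape of any real platform’s API; the point is only that once posts carry coordinates, filtering them to a geographic area is a few lines of code:

```python
# Hypothetical geotagged posts, loosely shaped like what a social
# media API might return (field names are invented for this sketch).
posts = [
    {"text": "march downtown", "lat": 37.78, "lon": -122.41},
    {"text": "lunch break",    "lat": 40.71, "lon": -74.00},
]

def in_box(post, south, west, north, east):
    """True if the post's coordinates fall inside the bounding box."""
    return south <= post["lat"] <= north and west <= post["lon"] <= east

# Bounding box roughly around San Francisco (illustrative values).
nearby = [p["text"] for p in posts
          if in_box(p, 37.70, -122.52, 37.82, -122.35)]
print(nearby)  # ['march downtown']
```

That triviality is exactly the concern: nothing technical separates a benign use of such a filter from a discriminatory one — the difference lies entirely in who runs the query and why.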
What Have We Learned?
It is all too tempting to look at film’s representation of technology’s rapid, then deteriorating, development as an inevitable turn of events. Science-fiction movies have existed since 1902, with the release of the black-and-white silent film “A Trip to the Moon.” The ways in which film, and popular culture in general, represent how technology is used and evolves significantly impact the way we approach and understand it. While the intention of sci-fi films and TV shows is to suspend the audience’s disbelief, the more intriguing and appealing entries are the ones that specifically draw upon existing and close-to-existing technological innovations. It’s more palatable to take a little bit of “in a galaxy far, far away” with a dose of a not-so-alternate reality.
Our current climate is an incredibly tense one. We are in an “us versus them” reality right now, and tensions run high between authorities and the hoi polloi. There is general unease across the political leanings, biased beliefs and wobbling stances people publicly and privately align themselves with. It makes sense, then, that we turn to data as a (potentially) objective savior. What can reading, understanding and displaying data do to give us better visibility into a tragic event before it even happens? After all, data is stark, which seems to imply that it cannot be unethically manipulated. This is a very simplistic assumption.
Data in and of itself is not a saving grace. Data exists without context, and humans exist to translate and analyze that data both for themselves and others. And human beings are notorious for subconscious bias and a tendency to buckle under peer pressure and dominant thought in society. The best way to grapple with data is to couple those stark numbers with machine learning and the instinct of experienced human beings. Big data is a buzzword gaining traction for good reason: it does exactly that. The vast majority of collected and stored data has been sitting around, collecting dust, waiting for technology to catch up. Now that machines can handle the sheer amount of data, it is time for the developers, marketers, research specialists, operations managers — anyone who needs to touch data, wrangle it or make sense of it to better business operations — to apply their expertise. It’s time for humanity and machines to make nice. And technology has evolved to allow us to do just that.
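The coupling of machine output with human judgment described above is often called human-in-the-loop triage. A minimal sketch, with an invented confidence threshold: the system acts on its own only when a model is very sure, and routes everything uncertain to an experienced analyst:

```python
def triage(score, threshold=0.9):
    """Route a model's confidence score (0.0 to 1.0): act automatically
    only when the model is very sure; otherwise defer to a human."""
    return "auto" if score >= threshold else "human review"

print(triage(0.95))  # auto
print(triage(0.60))  # human review
```

The threshold here is arbitrary; in practice, choosing it is itself a human, policy-laden decision — which is precisely the article’s point about data never interpreting itself.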