Predictive Tools for Public Safety

By Stephen Goldsmith • August 18, 2014

This is the fifth in an article series based on Stephen Goldsmith's paper "Digital Transformations: Wiring the Responsive City."

For decades, criminal-justice officials, guided by good research, have used data to drive performance. As a district attorney in Indiana 30 years ago, I used tools developed by the Department of Justice to identify “career criminals,” assigning them a score that would affect the severity of their charge and sentence. In retrospect, this was quite crude: prosecutors and police going through old paper-based criminal-history records, assigning numerical values to certain events, and ignoring other difficult-to-obtain data sources.

Now, with diminished finances but better technology, public safety officials increasingly rely on more precisely targeted interventions to do more with less. According to a 2011 COPS Office report, layoffs and attrition reduced law enforcement headcount by 40,000 in the preceding year, with more than half of America’s police departments making cuts.

In the face of mounting challenges, local officials turn to predictive analytics to enhance public safety efforts intelligently and judiciously. From policing and probation to disaster response and performance measurement, the capacity to track, model, and predict high-risk/high-need areas and constituencies has been an invaluable tool in public-sector management.

Predictive Policing

Traditional hot-spot policing has been around for two decades. Since the advent of CompStat in New York in 1994, police departments have used statistical analysis to predict criminal patterns. Today, public safety agencies are using sophisticated data mining to produce insights and focus on underlying causes. Instead of merely responding to incidents, analysts can use new technology to identify patterns and anticipate crime before it happens. On the front lines of the effort, California public safety officials in Santa Cruz and Los Angeles are applying what they learned from predicting earthquake aftershocks to stopping crime.

In 2010, the Santa Cruz Police Department (SCPD) had 20 percent fewer staff members than in 2000 but received 30 percent more calls. Tight budgets and a higher incidence of property crime led the agency to think creatively about deploying resources effectively. The SCPD’s public information officer turned to George Mohler, an applied mathematician and assistant professor at Santa Clara University, who had developed a method for predicting future crime locations based on a mathematical formula used to predict post-earthquake tremors. Mohler theorized that “criminals want to replicate their successes, they go back to similar locations, they repeat their crimes—it’s almost identical to how aftershocks roll out after earthquakes, following predictable fault lines and timetables.” Mohler and several associates formed PredPol (short for “predictive policing”), which provides crime-prediction software to 11 municipalities in the United States and the United Kingdom.

Rather than simply highlighting high-crime areas, PredPol’s system intelligently locates possible future crime locations. In Santa Cruz, the data scientists started with eight years of verified crime incidents. The program analyzes short- and long-term patterns in property crimes and, using the aftershock algorithm, estimates the relative risk of a future crime occurring in the next few hours. So if a burglar breaks into a car along a particular stretch of road, the system evaluates that incident against its history of car break-ins and identifies high-risk areas and time periods for police supervision.
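
PredPol’s production system is proprietary, but the published research it grew out of treats crime the way seismologists treat aftershocks: each incident raises the short-term risk of another one nearby, and that boost decays over time. The Python sketch below illustrates that general idea on a grid of 150-meter cells and then ranks cells to pick a shift’s hot-spots; the grid size, decay constants, and toy data are illustrative assumptions, not PredPol’s calibrated values.

```python
import math

# Illustrative parameters (assumptions, not PredPol's calibrated values)
CELL_M = 150.0            # grid cell size in meters
MU = 0.05                 # long-term background risk per cell
K0 = 0.8                  # strength of the short-term boost from each incident
OMEGA = 1.0 / (24 * 7)    # hourly decay rate (roughly a one-week memory)

def cell_of(x_m, y_m):
    """Map metric coordinates to a 150 m x 150 m grid cell."""
    return (int(x_m // CELL_M), int(y_m // CELL_M))

def risk_surface(incidents, now_hours):
    """Score each cell: background risk plus exponentially decaying boosts
    from past incidents in that cell (a simplified self-exciting model)."""
    risk = {}
    for x_m, y_m, t_hours in incidents:
        age = now_hours - t_hours
        if age < 0:
            continue                      # ignore anything "in the future"
        c = cell_of(x_m, y_m)
        boost = K0 * OMEGA * math.exp(-OMEGA * age)
        risk[c] = risk.get(c, MU) + boost
    return risk

def top_hotspots(incidents, now_hours, n=15):
    """Return the n highest-risk cells to print on the shift's patrol map."""
    risk = risk_surface(incidents, now_hours)
    return sorted(risk.items(), key=lambda kv: kv[1], reverse=True)[:n]

# Example: three car break-ins along the same stretch of road over five days
history = [(1200.0, 450.0, 10.0), (1230.0, 470.0, 58.0), (1210.0, 440.0, 100.0)]
print(top_hotspots(history, now_hours=102.0))
```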

At each shift briefing, officers receive maps of 15 future crime hot-spots, developed using the most recent crime data, for purposes of targeted patrol. The 150x150-meter hot-spots are easy to generate: analysts can simply log on to a user-friendly web application and print maps before roll call. Hot-spots can be tailored to address specific time and area concentrations, allowing officers to generate unique predictions for day and night patrols. Officer buy-in has been a key component of SCPD’s success in implementing predictive analytics. Department officials encouraged, but did not require, patrol officers to consider hot-spot data. Officials wanted the predictions to augment—not replace—officers’ intuition and local knowledge. This bottom-up strategy allowed the city to quickly move from pilot phase to fully integrated operations. In the first six months, this strategy reduced burglaries by 14 percent and motor-vehicle theft by 4 percent.

The Los Angeles Police Department and PredPol decided to test the system in a controlled experiment. Every morning, officers in the Foothill Division (population: 300,000) received hot-spot maps. On some days, PredPol’s system created the maps; on others, the LAPD’s in-house analysts produced them. The results: PredPol’s predictions were twice as accurate at predicting crime incidents as the traditional hot-spot analysis. Property crime in the Foothill Division dropped 12 percent, while L.A. as a whole experienced a 0.4 percent increase during the four-month period after implementation. Los Angeles now uses the PredPol system in three divisions.

Predictive Probation

In 2006, violent re-offenders had made Philadelphia one of the murder capitals of the United States. Philadelphia’s Adult Probation and Parole Department (APPD) oversaw 50,000 individuals with only 295 probation officers. To manage the escalating crime, the APPD needed a systematic way of identifying the riskiest individuals and dedicating staff resources accordingly. If the APPD could accurately categorize recently paroled individuals as low, medium, or high risk of committing a violent crime, the agency could save time and money and reduce the likelihood of violent recidivism.

Enter Richard Berk, a sociologist who for 30 years was a professor of criminology and statistics at the University of California. The Wharton School of Business and the University of Pennsylvania’s Criminology Department recruited him partly to bring his modeling skills to bear on Philadelphia’s crime problem. In partnership with Ellen Kurtz, APPD’s director of research, and Geoffrey Barnes, a criminology professor at the University of Pennsylvania, Berk began experimenting with “machine learning” to find connections across probationer backgrounds and estimate the likelihood of violent re-offense.

Berk built his predictive engine based on tens of thousands of individual criminal records, with dozens of variables such as age, gender, previous zip code, number of previous crimes, and type of offense. This intelligent, machine-learning model enables the computer to find patterns and relationships across dozens of variables and constantly reassess those relationships as new data are added. Berk created several iterations of the model, but the current one relies on outcomes from 119,998 historical probation cases drawn from nationwide data sets, each with dozens of predictors, resulting in a data set with 8.74 million decision points used to forecast a new probationer’s risk of committing a violent crime in the next two years.

Machine learning not only sorts and categorizes probationers according to risk; it also carefully adjusts the forecast to avoid costly errors. For example, the consequence of assigning low-risk probation supervision to an individual who will likely commit a violent crime is very serious—the economic and social costs far outweigh the cost of the supervision. Barnes and Berk worked with the APPD to decide on an acceptable error rate: As a policy, how many people would the APPD be willing to categorize as high-risk who were actually low- or medium-risk in order to prevent miscategorization of the highest-risk cases? At a ratio of ten over-supervised offenders to one under-supervised offender, the model categorized far more people as high-risk than the APPD could ever hope to handle. The policy challenge was to refine this error rate until capacity to monitor high-risk individuals matched the model predictions. The benefit of this model construction is that prediction errors are not uniform and are tailored to account for the cost of bad forecasts.
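
The APPD’s actual model is not reproduced here, but the asymmetric-cost idea is easy to demonstrate with an off-the-shelf tree-ensemble classifier of the kind Berk’s published forecasting work draws on. In the hedged sketch below, misclassifying a truly high-risk case is weighted ten times as heavily as over-supervising a lower-risk one, so the trained model deliberately errs toward over-supervision; the features, labels, and data are synthetic stand-ins.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-ins for the real predictors (age at first offense,
# age at current offense, number of priors, and so on)
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 6))
y = rng.choice(["low", "medium", "high"], size=5000, p=[0.6, 0.3, 0.1])

# The 10:1 policy choice: an under-supervised high-risk case is treated as
# ten times as costly as an over-supervised low/medium-risk case.
cost_weights = {"low": 1.0, "medium": 1.0, "high": 10.0}

model = RandomForestClassifier(
    n_estimators=500,
    class_weight=cost_weights,   # bake the asymmetric error costs into training
    random_state=0,
)
model.fit(X, y)

# Score a new probationer's predictors pulled at intake (illustrative values)
new_case = rng.normal(size=(1, 6))
print(model.predict(new_case)[0])   # -> "low", "medium", or "high"
```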

To the surprise of many in the APPD, Berk’s model found that the original crime (violent versus nonviolent) had little effect on the risk of a probationer committing a violent crime in the future. Other compelling findings emerged, such as the predictive importance of the probationer’s age at the time of his first crime relative to his most recent one. But the real value of the model is less its research results than its practical management benefits. The model takes a very complicated decision (the level of supervision needed) and intelligently sorts through all the possible predictors to derive a sensible strategy for segmenting the probation population by risk.

Barnes spent months after the initial model construction working with the APPD on a front-end user interface and a back-end database system that could tap into court records and police filings and thus provide instantaneous forecasts. The resulting intake process streamlines and integrates siloed data across agencies in real time. When an individual registers with the APPD for probation, the program immediately pulls the predictor variables from centralized court and police records, runs the case through Berk’s model, and assigns a low-, medium-, or high-risk category, all in less than ten seconds. Probation officers are assigned accordingly, and the APPD, in partnership with Barnes and Berk, continues to perform randomized experiments to pinpoint the proper level and content of supervision for each risk category.
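
A minimal sketch of that intake flow, assuming hypothetical record-lookup functions and the classifier from the previous sketch: pull the predictor fields from court and police systems, assemble them in the order the model expects, and return a risk tier within seconds. Every field name and lookup below is invented for illustration.

```python
import time

# Hypothetical predictor order expected by the trained model
FEATURES = ["age_first_offense", "age_current", "prior_count",
            "prior_violent", "neighborhood_risk", "prior_supervisions"]

def fetch_court_record(person_id):
    """Stand-in for a query against centralized court records."""
    return {"age_first_offense": 17, "age_current": 24,
            "prior_count": 5, "prior_violent": 1}

def fetch_police_record(person_id):
    """Stand-in for a query against police filings."""
    return {"neighborhood_risk": 0.7, "prior_supervisions": 2}

def score_at_intake(person_id, model):
    """Pull predictors, run the risk model, and return a tier in seconds."""
    start = time.perf_counter()
    record = {**fetch_court_record(person_id), **fetch_police_record(person_id)}
    row = [[record[f] for f in FEATURES]]
    tier = model.predict(row)[0]          # "low", "medium", or "high"
    return tier, time.perf_counter() - start
```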

According to Barnes, a key factor in the tool’s success has been the APPD’s unwavering commitment to the model predictions. Rather than circumventing the model when staff found a particular weakness, the team adjusted it and built a better model with new or different data. This commitment came from the highest levels of the APPD, allowing the team to innovate despite obstacles.

While it is too soon to tell whether overall recidivism has decreased because of this innovation, the model helped the probation staff handle a 28 percent increase in overall caseload with a staff 15 percent smaller than before the introduction of forecasting. According to APPD’s chief probation and parole officer, this feat simply would not have been possible without the use of risk forecasting.

Situational Awareness: Smart Policing with Sensors and Social Media

Criminal-justice authorities are using new digital tools and big data to improve operations. New technology can synthesize data and make it geographically relevant to a patrol officer in innovative ways. Increasingly, the solutions to urban problems turn on how data are curated, mined, and delivered to those who can act on the information.

Leading innovators, such as New York, have invested heavily in situational awareness platforms. The New York Police Department partnered with Microsoft to develop the Domain Awareness System (DAS), a solution that aggregates and analyzes public safety data from incident and witness reports, video feeds, license-plate information, and so on, and then provides NYPD investigators and analysts with a comprehensive, real-time view of potential threats and criminal activity. The NYPD/Microsoft solution tailors the information to the specific needs of users.

Similarly, ShotSpotter works with municipalities to provide instantaneous gunfire alerts to police departments across the country. The core of ShotSpotter’s service is a wide-area acoustic surveillance system, supported by software and human ballistics experts, all focused on accurately detecting gunfire. The company mounts waterproof, watermelon-size acoustic sensors on rooftops across a city. Networked together, an array of sensors can triangulate the incident location accurately in real time. If ten sensors detect a shot, the array can determine the incident location with a two-foot margin of error.
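
ShotSpotter’s own algorithms are not public; the sketch below shows the generic technique the paragraph describes, time-difference-of-arrival multilateration. Given sensor positions and the times at which each sensor heard the same bang, a least-squares solver recovers the most likely source location. The sensor layout and timings are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # m/s at roughly 20 C

def locate_gunshot(sensor_xy, arrival_times):
    """Estimate the (x, y) source of a gunshot from arrival times at
    networked sensors via time-difference-of-arrival multilateration."""
    sensor_xy = np.asarray(sensor_xy, dtype=float)
    t = np.asarray(arrival_times, dtype=float)

    def residuals(p):
        x, y, t0 = p  # unknown source position and emission time
        dist = np.hypot(sensor_xy[:, 0] - x, sensor_xy[:, 1] - y)
        return dist - SPEED_OF_SOUND * (t - t0)

    # Start the solver at the sensor centroid and just before the first arrival
    guess = [*sensor_xy.mean(axis=0), t.min() - 1.0]
    return least_squares(residuals, guess).x[:2]

# Synthetic check: a shot at (120, 80) heard by five rooftop sensors
true_xy = np.array([120.0, 80.0])
sensors = [(0, 0), (500, 0), (0, 500), (500, 500), (250, 250)]
times = [np.hypot(sx - true_xy[0], sy - true_xy[1]) / SPEED_OF_SOUND
         for sx, sy in sensors]
print(locate_gunshot(sensors, times))  # ~ [120. 80.]
```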

In the cacophony of an urban locale, many sounds can be misinterpreted as gunshots to the untrained ear. To solve the misidentification problem, ShotSpotter relies on a centralized qualification center. Once the sensors detect an explosion, ballistics experts at ShotSpotter’s command center in California analyze the noise to weed out false positives, such as car backfires and fireworks. Results are sent back to police dispatchers, providing the police with precise location, number of shots fired, exact time of the incident, and gunfire history for the area. From detection to review, the entire process takes about 40 seconds. ShotSpotter guarantees that it can accurately detect 80 percent of gunfire in coverage areas, although actual detection rates are as high as 95 percent. The technology has been implemented in 75 cities and towns across the United States, including Washington, D.C., and Milwaukee.

Predictive policing is proactive and ShotSpotter reactive, but that does not limit the system’s efficacy. Washington, D.C., has been using it since 2005. According to an analysis by the Washington Post, the police there have detected more than 39,000 gunshots since 2006, using 300 sensors deployed across the city. D.C. police no longer need to hear a gunshot or depend on citizen reporting to know about it. The system has helped the D.C. police respond quickly to gunfire, track trends in gun violence, and establish evidence for criminal trials.

In Milwaukee, the police are using ShotSpotter to proactively respond to gunfire incidents, particularly in areas where violence has historically gone unreported because of resident intimidation. The Milwaukee Police Department calculated that, in the areas where ShotSpotter is deployed, only 14 percent of gunfire is reported to 911. Fear of retribution for reporting crime is a serious concern in many communities; ShotSpotter allows the police to circumvent this issue and respond to gun violence, equipped with detailed situational awareness.

ShotSpotter is part of a broader trend of using new technology for situational awareness. Smart policing also increasingly relies on social media, especially Twitter, to keep communication open between police and citizens, increasing situational awareness for both. When the Vancouver Canucks made it to the finals of the Stanley Cup playoffs in 2011, the Vancouver Police Department used Twitter to connect with the crowds and respond directly to their questions. The public response was overwhelmingly positive. But on June 15, the Canucks lost the final game of the playoffs, and spectators rioted in the streets of downtown Vancouver. The VPD’s Twitter feed became a critical tool for communicating with spectators and tracking developments. After the riot, the VPD used Twitter and Facebook to inform the public about how to submit riot evidence. More than 16,000 people started following the VPD in the next few days, and the department saw a 2,000 percent increase in Facebook followers. Over the following months, thousands of civilian “journalists” submitted videos, photographs, and tips to the VPD, providing an unprecedented amount of evidence on the incident.

Acoustic sensors and tools that mine social media to produce patterns from structured and unstructured data dramatically increase the effectiveness of those fieldworkers—police patrol officers and detectives, probation officers, and the like—who must allocate their time and use their discretion to protect the public.

Disasters: Rapid Integration and Dissemination of Data

Team Rubicon, a volunteer organization primarily comprising military veterans, deploys volunteers into disaster areas across the country. Relying on deep expertise in military protocol and logistics, Team Rubicon supports its volunteer responders in some of the toughest disaster situations. But allocating volunteers effectively is always a challenge, especially in unfamiliar contexts.

After Superstorm Sandy, Team Rubicon partnered with Palantir, a software firm, to aid in its recovery operations. During my time implementing data analytics in New York City, the obstacle I encountered most frequently was the conviction among agencies that their legacy data simply could not be used in conjunction with any other system. Recent data-mining breakthroughs had outpaced many public IT officials, who were more focused on keeping older data products functioning. Palantir’s work lies at the heart of bringing innovations in big data to the public sector. Local governments use Palantir’s data-integration platform, called “Gotham”—developed originally to aid the FBI’s and CIA’s counterterrorism efforts—to unify and analyze data sets traditionally siloed in disparate locations.

In the chaotic aftermath of Superstorm Sandy, Palantir put its system to the test. Within 24 hours of the storm, Palantir deployed a cadre of engineers to New York City. From a bus in the Rockaways, these engineers supported NGOs in making efficient emergency resource-allocation decisions. On site, Palantir customized Gotham, building a mobile interface that let volunteer first responders from Team Rubicon receive, send, and gather critical information. Forward-deployed engineers tailored the system to unify and distribute National Oceanic and Atmospheric Administration data, demographic data, electric-service maps, Federal Emergency Management Agency (FEMA) projections, and damage assessments to partners in the field.

Members of Team Rubicon used smartphones running Palantir software to centralize and distribute needed information. The customized mobile application brought together critical information and allowed analysts to direct Team Rubicon volunteers efficiently to the most crucial tasks. For example, the software collected building-damage assessments from volunteers through the mobile application, prioritized damaged buildings for volunteer intervention based on a vulnerability analysis, estimated the number of volunteers needed to address each building’s issues adequately, and flagged certain structures for asbestos risk.

Direct Relief, another nonprofit involved with emergency response, faced its own challenges after Sandy. As Andrew Schroeder, Direct Relief’s director of research and analysis, puts it: “In a disaster context, we are trying to hone in on need as quickly as possible and disaster areas as quickly as possible so we can understand where we need to plug in.” As an NGO with limited staff, Direct Relief had to quickly and efficiently monitor and distribute supplies to hundreds of federally qualified health providers in the flood zone.

Ten days before Sandy hit New York and New Jersey, Schroeder was presenting at a conference on the theoretical application of Palantir Gotham to public-health emergency responses after a hurricane. Within a week, he was working with engineers to apply the concept in real time to decide on strategic pre-deployment caches of medical supplies, as Sandy approached New York. Direct Relief developed a social vulnerability index through demographic and housing information, and correlated those data against the constant stream of risk-assessment models generated by FEMA. Direct Relief could forecast where the medical needs would be, even before the storm made landfall. This data-driven modeling helped Direct Relief overcome the communications challenge in the first 48–72 hours after the storm. Health providers were completely out of contact—cell service and phone lines had gone down. There was no way for Direct Relief to know which providers needed assistance. With limited contact, Direct Relief used proxies, such as the electric-grid outage maps and whether local pharmacies were down in a particular area, to predict which groups needed assistance. Direct Relief volunteers were then sent to clinics in these vulnerable areas to confirm on-the-ground needs and coordinate medical-supply delivery.
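
A minimal sketch of that prioritization logic, under loose assumptions about the inputs: combine a simple social-vulnerability score built from demographic and housing fields with a modeled hazard and with the outage proxies mentioned above, then rank areas for volunteer visits. The area names, weights, and numbers are illustrative, not Direct Relief’s actual index or FEMA’s data.

```python
import pandas as pd

# Illustrative per-area inputs (real inputs: demographic and housing data,
# FEMA risk models, electric-grid outage maps, pharmacy status)
areas = pd.DataFrame({
    "area":            ["Rockaways", "Red Hook", "Midland Beach", "Hoboken"],
    "pct_elderly":     [0.21, 0.14, 0.18, 0.11],
    "pct_low_income":  [0.32, 0.41, 0.22, 0.19],
    "fema_flood_risk": [0.90, 0.70, 0.95, 0.80],   # modeled hazard, 0-1
    "grid_outage":     [1, 1, 1, 0],               # proxy: power still out?
    "pharmacies_down": [0.8, 0.5, 0.9, 0.2],       # proxy: share of pharmacies closed
})

# Simple social-vulnerability score from demographic/housing fields
areas["vulnerability"] = 0.5 * areas["pct_elderly"] + 0.5 * areas["pct_low_income"]

# Priority: vulnerable populations x hazard, boosted where the proxies suggest
# providers are unreachable (no power, pharmacies closed)
areas["priority"] = (areas["vulnerability"] * areas["fema_flood_risk"]
                     * (1 + areas["grid_outage"] + areas["pharmacies_down"]))

print(areas.sort_values("priority", ascending=False)[["area", "priority"]])
```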

Public Health Analytics for Pest Control

West Nile virus, an ailment once rare and relatively unknown in the United States, is now an annual danger in many suburban communities. In Suffolk County, New York, a large suburban and rural county on Long Island, officials began seeing West Nile cases in the early 2000s. “We realized that we were going to be dealing with West Nile virus every year,” says Dominick Ninivaggi, superintendent of Suffolk County Vector Control. But dealing with West Nile proved more challenging than anticipated. Initial tactics focused on identifying and mitigating mosquito reproduction in areas with high rates of virus-infected mosquitoes. The effort proved ineffective, and a detailed investigation by Ilia Rochlin, an entomologist working for Suffolk County, found that high rates of infected mosquitoes did not correlate with high rates of infected humans. If infected mosquitoes were not the driver, what was pushing up human infection rates?

Vector Control agencies typically use two tools to mitigate the spread of West Nile virus: killing adult mosquitoes with pesticides and treating catch basins with larvicide to eliminate mosquito larvae. But Suffolk County, given its size and limited resources, cannot possibly treat all catch basins. When a case occurs, the county has to decide quickly whether to spray the affected area with pesticides—a decision that requires careful weighing of costs, risk of outbreak, and the community impact of intervention.

Rochlin and Ninivaggi developed a model to assess the risk of outbreak using a combination of statistical methods and geographic information systems. Through modeling, they found relationships between human West Nile cases, landscape factors, population demographics, and weather patterns. Initial results showed a complex interaction between these factors and human cases of West Nile virus.

Using this hot-spot analysis, Vector Control now targets larvicide efforts in established hot-spots and uses aerial adulticide spray only where quantitative evidence supports the use of pesticides. By being strategic in the use of analytics, the agency has saved time and money, while still providing a high level of public safety.

Chicago Rodent Control

Street rats—common to every city—are a threat to urban infrastructure, food supplies, and public health. Research studies have shown that exposure to rodents can trigger asthma attacks, particularly in young children. Chicago’s Department of Innovation and Technology (DoIT) and Carnegie Mellon University’s Event and Pattern Detection Lab (EPD Lab) partnered to use predictive analytics to solve the rodent problem. Chicago’s goal was not only to generate rodent hot-spot maps, but also to anticipate rodent outbreaks before they happen.

Tracking and identifying rodent nests is a perennial problem for Chicago. Without good data on where rats were located, proactive prevention would be impossible. To close the information gap, the team turned to Chicago’s rich data set of 4 million 311 requests, covering everything from pothole complaints to rodent-control requests. Researchers at the EPD Lab correlated the rodent-control requests with other demographic and 311 data to find leading indicators of rodent outbreaks. Their key finding: a garbage-related 311 call was typically followed, within seven days, by a spike in rodent-control requests in the same area. Using this insight along with other indicators, such as water-main breaks, the city could estimate the size of the rat population and predict the likelihood of an outbreak in a particular area.
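
A hedged sketch of the kind of lagged analysis described above: for each garbage-related 311 call, count the rodent-control calls that follow within seven days in the same area. Areas where that count consistently exceeds the baseline rate are candidates for proactive baiting. The data layout and toy records are assumptions; the EPD Lab’s actual methodology is more sophisticated.

```python
import pandas as pd

# Illustrative 311 extract: one row per service request
calls = pd.DataFrame({
    "area": ["A", "A", "A", "A", "B", "B", "B"],
    "type": ["garbage", "rodent", "rodent", "rodent", "garbage", "pothole", "rodent"],
    "date": pd.to_datetime(["2013-05-01", "2013-05-03", "2013-05-05",
                            "2013-05-06", "2013-05-10", "2013-05-11", "2013-06-20"]),
})

WINDOW = pd.Timedelta(days=7)

def rodent_calls_after_garbage(df):
    """For each garbage call, count rodent-control calls in the same area
    over the following seven days (the leading-indicator window)."""
    rows = []
    for _, g in df[df["type"] == "garbage"].iterrows():
        mask = ((df["area"] == g["area"]) & (df["type"] == "rodent")
                & (df["date"] > g["date"]) & (df["date"] <= g["date"] + WINDOW))
        rows.append({"area": g["area"], "garbage_date": g["date"],
                     "rodent_calls_next_7d": int(mask.sum())})
    return pd.DataFrame(rows)

# Flag areas where the 7-day count routinely exceeds the baseline rodent rate
print(rodent_calls_after_garbage(calls))
```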

Prior to this predictive innovation, the city’s rodent-control team worked through 311 requests chronologically, responding on a first-come, first-served basis. With this new information, the team revamped practices to focus on a proactive, location-based strategy to clear out rats in high-risk areas first, before moving to the next risky area. As a result, the city reduced its 311 rodent-control requests by 15 percent from 2012 to 2013.

Intelligent Data Analytics for Evaluating Public Safety

You cannot fix what you cannot measure. NYC government realized this fact in the summer of 2013, during a spike in fatal accidents due to slow 911 response times. Despite the city’s $88 million upgrade of its computer-aided dispatch system, the source of the delay remained elusive. Theories ranged from problems with the new software to operator error to something else entirely.

The Mayor’s Office of Data Analytics (MODA), in collaboration with the NYPD, FDNY, EMS, and Verizon, developed a method to measure every stage of the emergency-response process—from the second that a resident dials 911, to the exact moment that emergency responders arrive on the scene. The analytical model stitches together data from decades-old agency systems into a comprehensive picture of the city’s emergency response. The city can now track the speed with which 911 operators interface with emergency medical-service responders, isolating these transactions from dispatch and travel time.

Integrating these data sets across legacy platforms presented challenges: each public safety agency has a different unique ID for each incident and records these data in systems that do not communicate with one another. To connect the systems, MODA developed an analytical matching technique based on time, type, and location of call. This software script automates the process of linking together complex data sets into a comprehensible whole. Rather than overhauling each agency’s unique and functional system, the mayor’s team developed a creative work-around to link isolated information. By isolating each facet of the emergency response, city leaders can detect inefficiencies and blockages, respond to inquiries about what exactly caused each individual delay, and make strategic decisions about where to invest future resources.
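
A minimal sketch of that matching logic under assumed schemas: with no shared incident ID, an NYPD record is linked to the EMS record whose call type matches and whose timestamp and coordinates agree within small tolerances. The field names, tolerances, and sample rows are hypothetical, not MODA’s actual script.

```python
import pandas as pd

TIME_TOL = pd.Timedelta(minutes=3)   # assumed tolerance between systems' clocks
DIST_TOL = 0.002                     # roughly 200 m in degrees of lat/lon

nypd = pd.DataFrame({
    "nypd_id": ["P-1001", "P-1002"],
    "type":    ["medical", "fire"],
    "time":    pd.to_datetime(["2013-07-04 21:02", "2013-07-04 21:40"]),
    "lat":     [40.7510, 40.8301],
    "lon":     [-73.9890, -73.9420],
})
ems = pd.DataFrame({
    "ems_id": ["E-553", "E-554"],
    "type":   ["medical", "medical"],
    "time":   pd.to_datetime(["2013-07-04 21:03", "2013-07-04 22:15"]),
    "lat":    [40.7512, 40.7000],
    "lon":    [-73.9893, -74.0000],
})

def link_incidents(a, b):
    """Pair records across systems when type matches and time/location agree."""
    merged = a.merge(b, on="type", suffixes=("_a", "_b"))
    close_in_time = (merged["time_a"] - merged["time_b"]).abs() <= TIME_TOL
    close_in_space = ((merged["lat_a"] - merged["lat_b"]).abs() <= DIST_TOL) & \
                     ((merged["lon_a"] - merged["lon_b"]).abs() <= DIST_TOL)
    return merged.loc[close_in_time & close_in_space, ["nypd_id", "ems_id"]]

print(link_incidents(nypd, ems))   # -> P-1001 matched to E-553
```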

Turning to data to drive operations is not just about delivering better service; it also saves agencies time, money, and other resources. These applications of predictive analytics in public safety were all implemented during a fiscal crisis. By making upfront investments in technology, public safety organizations lower costs and increase capacity for targeted interventions.

Implementation is not easy. These cases show us that whether you are using old data in new ways, as with Chicago’s rodent-control program, or creating new data assets through ShotSpotter, successful implementation requires key institutional factors:

  • Substantial top-level support for the initiative
  • Technical partners to aid in implementation, testing, and refining
  • Supportive and enthusiastic staff
  • Coordination across siloed departments and agencies

Police agencies have long capitalized on these factors to implement data-driven policies. It is apparent, once again, that police and other emergency-response agencies are leading local government in using sophisticated data-mining tools to target resources, solve problems, and pre-position in times of crisis. The breakthroughs for these agencies will again set a pattern for the rest of government, as predictive analytics power ever more effective governance.

About the Author

Stephen Goldsmith 

Stephen Goldsmith is the Derek Bok Professor of the Practice of Urban Policy at the Harvard Kennedy School and the director of Data-Smart City Solutions at the Bloomberg Center for Cities at Harvard University. He previously served as the mayor of Indianapolis and deputy mayor of New York City.
