What Makes a Good Data Story?

By Sean Thornton • May 16, 2018

What makes for a good story about data?

I’m not necessarily referring to a story that uses interactive data visualizations to help get its message across, though there are many good examples of those in the New York Times and the Washington Post (and here on Data-Smart, too). I’m instead talking about a story about data itself—and how the technology used to understand it, distribute its insights equitably, and put it to use can help make cities better places.

Stories like that are essentially the mission of Data-Smart City Solutions. From its beginnings, Data-Smart’s goal has been to operate as a leading source of information “at the intersection of government and data”—a place where government officials, technologists, academics, community groups, and the general public all work together. Ten or even five years ago, this was a relatively niche group; by 2018, it had grown enough to require a much larger tent.

Since its start, Data-Smart has also been running stories from and conducting research in Chicago, its longest-running partner city (and my hometown). As a Chicago-based writer for Data-Smart, and later a Program Advisor for the Ash Center’s Civic Analytics Network, I’ve seen, heard, and been a part of many data stories—in Chicago and across the country. I’ve also written quite a few of them.

Through it all, it’s been an interesting ride. Here, in no particular order, are some insights I’ve gained on telling stories about data.


So, back to my original question: what makes for a good story about data?

A good story about data is essentially a good story—one that connects on a practical, personal or emotional level in some way. In my experience, that means that a good data story is generally not about the tools or the data itself. Instead, it’s about the people who create and use that new technology, the reasons why they did so, and the impact that those innovations have on real people.

In the past several years, for example, Chicago has devised lots of impressive tools and algorithms that have made a positive difference in the efficiency and effectiveness of both local government operations and transparency efforts. Relying largely on open-source software tools, a collaborative and partnership-driven model for expanding capacity for innovation, and its own open data portal, the team at Chicago’s Department of Innovation and Technology (DoIT) has improved rodent baiting operations, optimized restaurant inspection processes, and enhanced E. coli detection efforts at city beaches, among other projects.

What’s the most memorable aspect of these use cases? It’s the relatable, the tangible, the easily understandable: our (generally) shared dislike of rats, the enjoyment of spending a day at the beach, or our interest in the safety and quality of the food we eat from shops and restaurants.

Once that narrative hook is established, that’s where some other key elements of good data stories come in: the people involved, the problems addressed, and the meaningful impact achieved.

Data Stories are about People. Without data and modeling methods, none of these stories would have made it into print. Yet it’s the people who make things really happen. People like DoIT’s Tom Schenk, Gene Leynes, and Nick Lucius, whose work has produced many data-related insights, and Gerrin Butler, whose embrace of data-driven methods has enhanced operations across her Food Inspections Division, have brought many Data-Smart stories—and the data itself—to life.

Data Stories are about Real-World Problems. Those examples—rodent baiting, restaurant inspections, and beach safety—all center on improving antiquated solutions to real-world problems. Being able to draw a direct line from data to such problems illuminates, in a tangible way, the value that data can have—especially for those who may view data through a more passive or archival lens.

Data Stories are about Meaningful Impact. Impact, of course, is what completes the equation of data plus problems, and what brings readers to data stories in the first place. Yet impact can mean different things for different people. Part of the challenge of working at the intersection of government and data—that place where government officials, technologists, academics, community groups, and the general public all work together—is finding out how to best tell a story that’s relevant to a fairly wide swath of readers.

Take a project like Chicago’s OpenGrid, a situational awareness application that lets its users see city data on a map in real time. A technologist would look at OpenGrid and want to know its systems architecture, database type, and GUI layout. A policymaker may want to know about its budget, the level of capital it requires to implement, and its geographic reach among a population.

And residents—who could be technologists or policymakers as well—may just want to know what it is they’re reading about, and whether it’s useful. That brings us to perhaps the most important question for readers of data stories: what’s in it for them?


This is where the old adage “know your audience” really applies. Even given the wide range of people interested in government and data, most readers will go into an article hoping that there is some beneficial takeaway.

For every piece I wrote, I made sure that I understood what the people involved in each initiative were doing and why they were doing it. Yet I also outlined a reason for readers to care about these projects in the first place. This could include anything from the experience of eating at a contaminant-free restaurant to the opportunity to access code via GitHub and give projects a try on their own.


Granted, modern writing on civic tech has only so much in common with Hunter S. Thompson’s off-the-wall style of journalism. But in both approaches, sometimes the best way to tell a story is to experience it yourself.

One of my favorite articles I’ve written for Data-Smart was about Chicago’s Civic User Testing Group (CUTGroup), a local initiative that serves as a community of residents who get paid to test civic websites and apps. Many major civic apps and resources in Chicago, such as the city’s redesigned Data Portal, Foodborne, and the Chicago Health Atlas, have gone through CUTGroup’s process and benefitted from its participants’ insights.

Two years ago, I served as a CUTGroup proctor and wrote about my experience to better understand the program. On deck for testing was OpenGrid, the aforementioned platform that provides residents ways to interact more personally with city data.

In sitting with several CUTGroup participants for 20-minute review sessions, I was fascinated by the wide range of responses I received. Some loved OpenGrid; some viewed the app as interesting, but better-suited for a local business or an alderman than an individual. Others didn’t much care for what it offered, and viewed OpenGrid as something they would never use of their own volition.

No civic app or data initiative will appease everyone, or reach everyone. But experiences like CUTGroup provide tangible, important reminders of why such projects exist in the first place.

About the Author

Sean Thornton

Sean Thornton is a Program Advisor for the Ash Center's Civic Analytics Network and a writer for Data-Smart City Solutions. Based in Chicago and working in partnership with the city's Department of Innovation and Technology, Sean holds joint master's degrees from the University of Chicago in Public Policy and Social Service Administration. His work has spanned the city's public, philanthropic, and nonprofit sectors.