Why Benchmarking Matters for Open Data

By Stephen Goldsmith • March 20, 2015

This article originally appeared in Government Technology magazine.

Over the last four years, cities across the country have rushed to join the march toward transparency and open data. Yet those words carry many meanings and even more approaches. Cities vary greatly in what data they open, how usable the information is, and how much they work to help the public use it. For cities truly committed to open data, the ability to benchmark themselves against others matters. As a result, I’m seeing more and more benchmarking of data transparency efforts, originating both inside and outside of city hall.

One of the cities leading the way on this issue is Philadelphia. As documented in a Sunlight Foundation blog post, Philadelphia’s Open Data Team used benchmarking to determine that the city was behind its peers in releasing certain difficult, costly data sets. The team looked at the data portals of four other large U.S. cities — Baltimore, Boston, Chicago and New York City — to identify and then analyze the data sets those cities were publishing that Philadelphia was not. That helped inform Philadelphia’s Open Data Census and gave the city insight on its own open data progress.

Of course, these benchmarking efforts aren’t coming exclusively from within government; both the private and nonprofit sectors are also getting in on the action. This past October, the software company Socrata released the results of a survey of government officials on their practices and plans for open data, providing governments of all sizes with information about what their peers are doing and thinking.

And then there’s the U.S. City Open Data Census, a collaboration between Code for America, the Sunlight Foundation and the Open Knowledge Foundation. The continuously expanding tool uses crowdsourcing to rank cities based on the transparency of their data, creating an interactive matrix that allows users to quickly evaluate the openness of data sets ranging from asset disclosure to zoning.

Anyone can use the tool, and in Los Angeles city government, it has proved to be a valuable resource for decision-making.

“The census has been a really helpful guidepost in terms of what data we should be prioritizing,” said Abhi Nemani, Los Angeles’ chief data officer, who previously worked at Code for America while the census was being developed. “It’s helpful to have an external indicator of what data matters and what we should be opening up.”

Nemani said that L.A. recently published a consolidated list of city-owned properties partly in response to seeing the data set prominently highlighted on the U.S. City Open Data Census as something many other cities were releasing.

L.A. has also closely studied the data sets opened by a host of local governments — including Chicago, New York City, San Francisco, Baltimore and Montgomery County, Md. — as the city works to develop a road map for its future open data efforts.

In more and more cities, local officials are realizing that benchmarking can help them learn from the mistakes and successes of their peers as they tackle the challenges of open data. Indeed, only by looking at one another can cities gain exposure to new ideas and technologies, identify relative weaknesses and assess how well they’re doing — insights that are particularly valuable in an ambitious undertaking like an open data initiative.

Top photo: screenshot of the U.S. City Open Data Census (http://us-city.census.okfn.org/)

About the Author

Stephen Goldsmith 

Stephen Goldsmith is the Derek Bok Professor of the Practice of Urban Policy and the Director of the Innovations in American Government Program at Harvard's Kennedy School of Government. His latest book is Growing Fairly: How to Build Opportunity and Equity in Workforce Development.
