This is the first in a series on performance measurement in cities.
Performance measurement literature consistently defines the field as some combination of the approaches, metrics, and processes used to assess the efficiency and effectiveness of actions. The semantics of the definitions vary, but common to all is the idea that there is some set of quantifiable and comparable measures with the power to tell a stakeholder what they need to know about performance. Unfortunately, this is often not the case.
Especially in the public sector, assessing qualitative factors and appropriately defining quantitative ones can quickly add complexity. The calculus of measuring performance becomes even harder once integration and framing are examined more deeply. Performance management authors Bourne and Neely classify several of these challenges: the inclusion of multi-dimensional measures (combined financial and non-financial metrics), the determination of appropriate strategic frames of reference, and the integration of external data into the reporting process, including benchmarking against other organizations.
The question of why organizations have an interest in navigating such a potentially convoluted process has answers as varied as the measures used. The idea of using performance measurement to help determine strategic direction is perhaps the most intuitive. Organizations want to determine whether they are succeeding relative to their goals and how they can or should adjust priorities. However, the communication of these goals and an organization’s performance against them is equally important. Whether incorporating qualitative assessment and external data, or just issuing detailed financials, effective measurement is essential to effective reporting. Less often cited by organizations, but still present in much of the literature, is the use of this data for incentive programs. Measuring performance allows organizations to tie rewards to outcomes.
All of this gets complicated even further in the public sector. An electorate is an inherently broader and more diverse population than the typical group of shareholders or internal stakeholders. The often singular profit motive of private sector stakeholders also makes reporting easier. Governments must manage and report on financial performance, but this is often not the only, or even primary, concern of a citizenry. Citizens judge their leaders (and vote) on everything from health and safety to the level of greenery in public parks. Such diffusion of priorities can confuse strategy and communication practices in performance reporting. The question of what should be reported will vary by person and community, and it is incumbent on governments to frame their external reporting with an eye towards the needs of this heterogeneous population.
History of the Field
The notion of performance measurement and reporting as a field has its roots in financial accounting. The most zealous historians date performance assessment practices back to the systems designed by the Medici family, or even to the beginning of recorded transactions. The argument goes that the first tracking of revenues and expenses was itself performance measurement, in the sense that it allowed visibility into an organization's operations. As a purely academic exercise, this may be a valid observation. For understanding performance measurement and reporting as a twenty-first-century practice, however, a more logical starting point is the rise in popularity of institutional performance measurement techniques in the mid-twentieth century. In the private sector, this was driven by the development of cost accounting practices in the 1970s and 80s. Models such as activity-based costing allowed organizations to define their priorities, find the measures that best reflected their goals, and actively manage financial performance against expectations.
As these techniques developed in the private sector, a more or less parallel process unfolded among public agencies. Programs grew (mostly at the federal level) through the 1960s and 70s, beginning with improved process management at the Department of Defense during the Kennedy and Johnson administrations. As processes matured, agencies began pulling in functionality that linked performance to budgeting processes, adopting the cost accounting and other strategic performance measurement practices being simultaneously refined by private companies.
The private sector again laid the path for what would follow in governments with Motorola's development of Six Sigma in 1985. Six Sigma, used predominantly in manufacturing, leverages statistical methods to identify process flaws and improve productivity. The program's goal is to achieve the "six sigma process," one in which 99.99966% of outputs are free from defects. This type of data-driven performance measurement and review was translated to the public sector by the 1990s with the inception of the now ubiquitous PerformanceStat system, which will be discussed in the next piece in this series.