Presents an overview of performance measurement for government: the history of the quest to measure and improve government's performance; definitions of performance measurement, benchmarking, and performance measures; and pointers on benchmarking. INSETS: Some common types of benchmarks; Benchmarking applications; Do not be ruled by numbers; To contract or not to contract.
AN OVERVIEW OF PERFORMANCE MEASUREMENT
There is a profound feeling in the country that government has not been doing its job--not just that tax dollars are wasted, but that government cannot be counted on to help improve the nation's future or to support the current aspirations of ordinary Americans. This leads to the obvious question of whether the present government organization can operate effectively these days . . . . The public demands we do more.
--Senator John Glenn (1993), during hearings on
legislation to require performance standards in
The senator was citing the federal government. But no part of the public sector is immune from being perceived as unresponsive, gridlocked, and overly bureaucratic. Citizen surveys, for instance, consistently report that Americans believe that some 40 percent of public funds are either wasted or spent unnecessarily (Bowsher 1993). Citizens' trust in government, and in government's ability to make a difference, keeps heading south. Something must be done.
One way to get more "bang for the buck" is through a practice known as benchmarking. Local governments--always looking for better ways to do business and to cut costs--routinely tout it, as do many corporations and state governments. The feds are close behind. The need to "work smarter," recent state and federal legislation, the desire to show results for money spent, and increasing public pressure for accountability help explain some of benchmarking's recent popularity. So does the "bandwagon effect."
The June 1994 issue of Governing magazine cites the benchmarking craze: "Governments that used to pay no attention to their own performance now seem obsessed with trying to measure everything in sight." Increased productivity, lower costs, and greater responsiveness--each of these phrases has a nice ring. But while "benchmarking" is a new buzzword, only about one-quarter of those with benchmarking programs--public- and private-sector users alike--are doing it well enough to yield something useful. The rest either are getting limited results or are spinning their wheels, wasting time, and achieving nothing (Biesada 1991).
Benchmarking is not a panacea, but it is one means of improving program or service performance when properly used. Biesada (1991) describes the process as comparing the performance of your own organization with that of others with outstanding performance to find fresh approaches and new ideas:
Benchmarking [is] originally a surveying term for a point of reference. While there are many definitions . . . . they all boil down to finding and implementing best practices . . . . A best practice is the method used by a company that excels at doing a particular activity.
Through a series of performance measures--standards known as "benchmarks"--a person can identify the best in a class among those doing a particular task. Then, the best practices are analyzed and adapted for use by others wanting to improve their own way(s) of doing things. Benchmarkers hope to become more responsive to customers, and thus more competitive, by finding and using what works best. Of course, the process also identifies those below the standards as needing to improve.
While some observers privately wonder when the feeding frenzy will fade in favor of a new trend, there is no question that the practice can improve performance and efficiency. This article provides an overview of benchmarking practice. The following piece will focus on "how-to," on the methods and techniques of performance measurement and benchmarking. Subsequent articles will discuss particular aspects of the practices, including utility, applications, and case studies, and will tell how to get your own program in place to assess the performance of any or all of your public services.
A Short History of the Quest to Measure and Improve Government's Performance
Assessing service performance is not new. Back in 1938, ICMA issued Measuring Municipal Activities, suggesting various types of information that local governments might use to monitor various local services and to assess how well these services were being delivered. Less than 10 years later, Japan began benchmarking as the cornerstone of its rebuilding after World War II. There was no need to reinvent the wheel: a lot of companies were doing good things. Finding and adapting dantotsu ("best of the best") gave the Japanese a nice headstart.
In his work with Japan through the 1950s, the late "quality guru" W. Edwards Deming pitched statistics as the basic means of finding out what any system can do and then designing improvements, as indicated, to help the system become more productive (Dobyns 1990). Some 50 years ago, at the federal level, the Commissions on Organization of the Executive Branch of Government, also known as the Hoover Commissions, worked successfully to streamline a federal government grown too large and too disorganized because of the Great Depression and then World War II. Today, we have the Gore Commission, best known for its work on Reinventing Government, along with a series of legislative initiatives at the federal and state levels to streamline the respective governments to work better.
ICMA again entered the picture in 1973 by cosponsoring and providing technical assistance on survey and national measurement issues on an 18-month pilot project of the National Science Foundation that sought to assess the effectiveness of public services delivery. The resulting publications (1977, 1974) gave readers:
. . . an overview of various aspects of local government effectiveness measurement, including criteria for the selection of measures, uses for such measurement, identification of measures for [several service areas] . . . . and early findings on implementation. The [later] report detailed specific measures and data collection . . . [and was] intended to supplement, rather than supersede, the 1974 report.
Also for the National Science Foundation, ICMA (1977) issued a separate joint publication to provide a way to measure the overall performance of fire protection delivery systems. The National Fire Protection Association (1974), one of ICMA's 1977 coauthors, previously had released material on the utility of measuring fire protection productivity. (Readers interested in a more detailed history should refer to Hatry's 1989 article.)
Benchmarking traditionally has been associated with cost analysis, focusing on what competitors do and on what it costs them to do it, including machines, materials, and manpower, as well as nonproduction costs such as distribution. Performance-based budgeting became popular after World War II, with an emphasis on efficiency measures as expressed by the cost or number of hours per unit of output (Hatry 1989). In fact, most performance measurement to date in the public sector has centered on financial indicators. As benchmarking's utility has been realized, methods have been generalized easily to apply to nonfinancial services.
Today, the practice of standard setting is everywhere. The U.S. Departments of Labor and Education both have massive projects--Workforce 2000 and the National Educational Goals Panel, respectively--to establish national standards for industry and technical occupations, as well as for education that develops job competencies for use as benchmarks. Independently, the U.S. Office of Management and Budget continues to work with federal agencies to develop standard, nonfinancial benchmarks common to the work of several agencies.
National associations such as the Governmental Accounting Standards Board, the National Academy of Public Administration, and the American Society for Public Administration have passed resolutions calling for the public sector to use performance measurement and reporting systems. And several states have enacted financial performance reporting standards for state agencies. Others are sure to follow.
One reason for the push is the Government Performance and Results Act of 1993. The new law mandates creation and support of inspectors general and chief financial officers to fight waste in selected federal agencies and to improve accountability for financial and general management. Strategic plans must be set, performance goals established, and an annual report filed with Congress comparing actual performance with goals. Select federal agencies now must show results before new appropriations are made; no more automatic refundings will occur just because a program was running last year. In exchange for greater accountability, these agencies have been given more flexibility to waive administrative controls to get things done. All of these changes were made so that government could manage for results, not just cite rules and regulations as a defense against action.
What Are Performance Measurement and Benchmarking?
Where do we stand in relation to others delivering a particular program or service? Who is doing something out there better than we are? What are they doing that we are not, and how can we change to mirror their performance?
Getting quantitative answers to these questions is the essence of performance measurement--the determination of how effectively and efficiently (at the lowest cost) your jurisdiction is delivering the public service of interest. The process is designed to yield information so that decisionmakers can tell how effectively a program or service has used its allocated resources (Grifel 1993) in comparison with other service providers. If the answer is "not well enough," you now have that information, along with the basic tools to do something about it.
To measure something--IQ, height or weight, attitude, miles per gallon, personality, an employee's annual performance rating, or whatever--means to quantify it using a defined set of rules. To assess height, for instance, you compare the item with a measurement device (a ruler) to obtain height data. To assess an applicant's potential to do a job, you administer a valid ability test. Some things obviously are more difficult to measure because they cannot be observed directly. But the basic process of quantifying something via defined rules remains the same. Only the particular means for measurement will differ among different variables.
Finding the Best. Whether the category is colleges, restaurants, doctors, or local services, some are simply better than others. Certain colleges, for instance, generally are recognized as best and so attract many applicants. Many doctors who are identified as being the best must turn away new patients, while many parents move to jurisdiction Y just because it has a good school district. And often, you cannot get near a good restaurant around dinnertime.
If a service provider or institution is better, then something makes it better. Performance measurement is the process of finding out what that something is (why is college X better than college Y?). The first step involves identifying performance indicators (what underlies performance?), "operationally defining" each criterion, then quantifying the criteria through measurement. The criteria are the performance benchmarks.
Benchmarking, the next step, refers to comparing several competitors on the same benchmarks to see who is best, finding out why that one competitor is best, and then using the best practices as a means of achieving better performance in your own program or service.
A so-called operational definition is one that defines a variable for the purpose of measurement. Then, you can measure the variable according to that definition, thus ensuring a standard basis for comparison. For instance, "response time" can be defined in several ways; each respondent must know how this benchmark is being defined and must gather data only according to the operational definition that has been established. All users must "start the response-time clock" at the time that a call is received by dispatch--if that is how response time is being defined--or when dispatch relays the call to the units that will respond--if that is the definition. Operationally defining "intelligence" as "score on an intelligence test" means that someone scoring a 130 is more intelligent by definition than someone who scored a 95 on the same test.
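The effect of choosing one operational definition over another can be sketched in a few lines of code. This is a hypothetical illustration only; the timestamps and field names are invented, not drawn from any real dispatch system:

```python
# Hypothetical sketch: the same incident yields different "response times"
# depending on which operational definition starts the clock.
from datetime import datetime

def minutes_between(start, end):
    """Elapsed minutes between two timestamps."""
    return (end - start).total_seconds() / 60

incident = {
    "call_received":    datetime(2024, 1, 5, 14, 0, 0),   # call reaches dispatch
    "units_dispatched": datetime(2024, 1, 5, 14, 1, 30),  # dispatch relays call
    "units_on_scene":   datetime(2024, 1, 5, 14, 6, 30),
}

# Definition A: clock starts when the call is received by dispatch.
response_a = minutes_between(incident["call_received"], incident["units_on_scene"])

# Definition B: clock starts when dispatch relays the call to the units.
response_b = minutes_between(incident["units_dispatched"], incident["units_on_scene"])

print(response_a)  # 6.5 minutes under definition A
print(response_b)  # 5.0 minutes under definition B
```

The same incident reports a 6.5-minute or a 5.0-minute response depending solely on the definition, which is why every participant must gather data under the same one.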
Note that performance measurement and benchmarking are not exactly the same, though some people use the terms interchangeably. The initial work done to specify and gather data on the criteria that account for the performance of a program or service is known as "performance measurement." Knowing the factors that are important in effectively performing a particular service or function is the foundation of benchmarking practice. Benchmarking per se is the next step, which is taken to discover what those identified as having best practices are doing that you are not doing.
A Needed Baseline. A performance measure is thus a baseline, standard, norm, or criterion (all of these terms essentially mean the same thing) against which users can assess their own performance in a program or service. Each performance indicator, or benchmark, is one criterion underlying successful program or service performance; services can have many benchmarks on which you can make comparisons, as long as each benchmark is shown to be a valid component of performance.
No single benchmark or range of values possibly can account for the total performance of any program or service. Certainly, some indicators are more important than others. But putting together a series of valid benchmarks is necessary to gain a good idea of what is needed to improve service effectiveness.
One goal of benchmarking, naturally, is to permit comparisons among benchmarks developed through performance measurements. Although two localities may have similar populations, so many other variables exist that you must question the extent to which direct comparisons really mean anything. Such questions as the following arise: What makes jurisdictions comparable? And to what extent do comparisons with an established benchmark make sense for my own particular jurisdiction? Taking it a step further, what level of performance is considered to be significantly different from the established benchmark?
Issues like these underscore the importance of making performance measurements and of obtaining accurate data on valid benchmarks. ICMA's Performance Measurement Consortium will not simply use members' averages on particular criteria as benchmarks. A range of values will be published, not a single criterion. Physicians, for instance, have learned not to give one ideal weight for any individual but to give a range that the healthy person should fall within. This practice prevents some of the interpretation problems encountered when comparing individual measures against a single value. (The ideal weight for my height is 172, and I weigh 180. Does a difference of eight pounds mean that I must lose a few pounds or endanger my health?) Even some good, valid data are not exact.
Remember that regardless of the quality of data or validity of benchmarks: (1) no data are ever perfect; (2) direct comparisons among competitors (or comparisons of yourself against an absolute standard) should be done only to find red flags; and (3) small differences must not be taken as meaningful.
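The "red flags only" approach can be sketched simply: compare each jurisdiction's measure against a published benchmark range and flag only values falling outside it, rather than ranking small differences. All names and figures below are invented for illustration:

```python
# Hypothetical sketch: flag jurisdictions outside a benchmark *range*
# instead of comparing them against a single value.

benchmark_range = (4.5, 6.5)  # e.g., acceptable average response time, minutes

jurisdictions = {
    "Springfield": 5.2,
    "Shelbyville": 7.8,  # outside the range -> worth investigating
    "Ogdenville":  6.4,  # inside the range -> not meaningfully different
}

def red_flags(data, low, high):
    """Return jurisdictions whose measure falls outside the benchmark range."""
    return [name for name, value in data.items() if not (low <= value <= high)]

flags = red_flags(jurisdictions, *benchmark_range)
print(flags)  # ['Shelbyville']
```

Only Shelbyville is flagged; the 1.2-minute gap between Springfield and Ogdenville is treated as noise, consistent with the rule that small differences must not be taken as meaningful.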
Also, never expect to make perfect direct comparisons. Either you will come away disappointed, or you will be misled by numbers that you assume are highly accurate performance indicators of particular services in unique jurisdictions. As CORE (1993) nicely summarizes:
. . . cities are caught in a bind. On the one hand, they each have unique combinations of economic, demographic and environmental characteristics that make them difficult to compare [directly]. On the other hand, city officials and citizens want to know if they are being "efficient." To evaluate efficiency, however, requires comparisons.
Performance measurement lets you quantify whatever variables are selected as underlying the performance of a particular service. On the other hand, benchmarking per se is a general means of comparison, the starting point for finding where changes are needed or not needed. The practice is not an end in itself, nor is it an exact science. At best, both performance measurement and benchmarking are rough guides with which to begin the improvement process. What you do to improve may be more important than what you do to find where improvement is indicated.
Defining Performance Measures
Finding good performance indicators is not as simple as it seems at first. Think about what constitutes a good lawyer. "Percent of cases won" is a cloudy indicator because some attorneys take hopeless cases. Annual income is a possible benchmark, but what about those routinely working with poorer clients, providing pro bono services, or--as most agree that a good lawyer should do--trying to settle for a lower fee in exchange for not going to trial? How about the criteria that account for a reliable plumber, a good doctor, or a safe driver? Because choosing criteria is difficult, achieving agreement that each suggested criterion is important to service performance is an essential step.
It should be emphasized that while a benchmark is a standard, it is not a measure of quality. Nor is it the lowest common denominator among those delivering a service. One of my favorite moot benchmarks is the one used by the Defense Department's procurement staff as a criterion of performance: number of contracts awarded annually. The more contracts they issued, the better Department of Defense's procurement people were rated at doing their jobs. The performance benchmark of number of patients seen by a certain HMO is just as meaningless, as is the number of letters processed by the U.S. Postal Service. What if 20 percent of the mail is lost or misdelivered in a particular region? Most times, quantity measures are independent of the quality of services being provided.
Benchmarking 101. Developing performance measures--the heart of benchmarking--begins with a clear statement of the program's mission. Benchmarks flow out of objectives and mission statements, once the latter have been accepted by all parties. Ask your team, What does our program or service do; who are its customers; and what do those customers expect from the program or service? It is first necessary to get agreement on just what it is you do. Else, there can be no benchmarks with which to measure or compare performance.
A mission is the reason why the provider exists, while goals are the results that support the mission. Objectives are what must be accomplished to achieve a goal. For instance, the Bureau of Potholes within the public works department might state as part of its mission "to maintain all city roadways." A goal written in support of the mission, then, would be "to ensure that every city road is clear, passable, and free of potholes and other obstructions." An objective to attain the goal might be "to repair 90 percent of the potholes found within city limits within 72 hours."
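The pothole objective lends itself to a direct measurement sketch. The repair records below are invented for illustration; a real Bureau of Potholes would draw them from its work-order system:

```python
# Hypothetical sketch: did we meet the objective of repairing 90 percent
# of potholes within 72 hours?

repair_hours = [12, 48, 70, 80, 24, 60, 71, 36, 90, 55]  # hours to repair each pothole

within_target = sum(1 for h in repair_hours if h <= 72)
pct_on_time = 100 * within_target / len(repair_hours)

objective_met = pct_on_time >= 90
print(pct_on_time)    # 80.0 percent repaired within 72 hours
print(objective_met)  # False -> objective not yet met
```

Here 8 of 10 potholes were fixed within 72 hours, so the 90 percent objective was missed; the measure tells you where you stand, but (as the next sections stress) not yet how to improve.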
One of Edwards Deming's 14 points is "consistency of purpose"; you should decide what business you are in, what your products and/or services are, who your customers are, and how you can stay in business. Buggy-whip makers may have thought they were in the buggy-whip business when actually they were in the transportation business. Specifically, they were in the field of vehicle acceleration. Even making the best-quality buggy whips was not enough, for consistency of purpose means staying ahead of the customer to meet present needs, as well as planning for the future (Dobyns 1990).
To stay ahead and to plan, you may have to look at your program or service in a new way. IBM did this and moved from being the leader in typewriters to the major player in computing. Because it realized that its mission was communications, not typewriters or machines, it was able to change when the technology used to fulfill its mission changed. Thus, because benchmarking must be a continuous process in order to work well, benchmarks need not remain the same from year to year. They should reflect new information, a revised mission, and organizational changes, among other factors. As such, missions, goals, and objectives must change, with new criteria of success being established.
The next step is measurement, or gathering data to quantify each benchmark. As Deming said with respect to standardization, "whatever number you get depends on how you count." Einstein put it this way: "Not everything that can be measured is important, and not everything that is important can be measured." Hence the notion of validity, or the degree to which you are measuring the right things the right way. In other words, are you measuring the main criteria that really underlie service delivery and performance? If so, then improving a particular criterion will improve overall performance of the service.
Two other aspects to consider are the reliability and meaningfulness of your data. Reliability is a measure of stability or replicability. A ballplayer who gets six hits in 20 at-bats during the season has hit the magic .300, but so has someone who finishes the season going 180 for 600. Most people would prefer the latter hitter in a critical situation because more at-bats suggests a more stable batting average, which means there is more reason to expect the latter hitter to hit .300 next season. In the same sense, you should not regard small differences as significant. For instance, while an average response time of 5:25 is better than one of 5:35, you would be hard pressed to show that 10 seconds is meaningful in terms of improved performance.
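The batting-average example can be made concrete with the standard error of a proportion, which shrinks as the number of at-bats grows. This is a rough statistical illustration of why the larger sample is the more reliable measure, not a prescribed benchmarking formula:

```python
# Hypothetical sketch: two hitters with identical .300 averages but very
# different sample sizes; the standard error shows which average is stable.
import math

def batting_avg(hits, at_bats):
    return hits / at_bats

def standard_error(p, n):
    """Approximate standard error of a proportion p measured over n trials."""
    return math.sqrt(p * (1 - p) / n)

avg_small = batting_avg(6, 20)     # .300 on 20 at-bats
avg_large = batting_avg(180, 600)  # .300 on 600 at-bats

se_small = standard_error(avg_small, 20)    # ~0.102 -> very unstable
se_large = standard_error(avg_large, 600)   # ~0.019 -> quite stable
```

Both averages are exactly .300, but the 20-at-bat figure carries roughly five times the uncertainty, which is the statistical sense in which the 600-at-bat hitter is the safer bet.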
Performance Measurement Only Gathers Data
But just measuring something does not improve it. Performance measurement is a planning tool. The next step is benchmarking, which is an improvement (i.e., management) tool. At this point, you are comparing your performance with others' work on each benchmark to identify who is performing best on particular benchmarks (best practices) and who is falling behind. The next steps are to analyze what the best practitioners are doing that you are not, and to reengineer best practices for importing, once you have identified these practices. Analysis and reengineering are important because trying to import or replicate a best practice as-is from one jurisdiction to another generally will not work.
Not only can entire organizations use benchmarking to compare themselves against other, similar organizations, but also departments or services within an organization can study the methods of similar units in other places. By the same token, to find a best practice, you need not benchmark the same service across all jurisdictions--or throughout your own industry.
Realizing that services are made up of individual tasks or methods that can be generalized across different fields, Florida Power and Light visited a Japanese utility company, learning, among other things, how to protect power against failures caused by lightning strikes. Motorola routinely looks at the processes of firms that are not direct competitors but that perform tasks similar to its own; in this way, Motorola learns from these firms about order processing, billing, and other accounting practices.
Looking for better ways to handle long lines, First Chicago Bank benchmarked airlines; retailer L.L. Bean looked to Xerox for better ways of warehousing and managing materials. And Motorola benchmarked Domino's Pizza for pointers on improving service and delivery times on cellular telephones.
(Motorola had learned about benchmarking from Xerox, then applied the concept so well that it won the Baldrige Quality Award before Xerox did.) General Motors recently used the process to design new and innovative, L-shaped assembly lines so that components can be assembled closer to where they are machined. Alcoa, a benchmarking leader, routinely benchmarks other companies on benchmarking (Biesada 1991).
Benchmarking Pointers: Some Final Words
First, do not make a commitment to benchmarking, TQM, or anything similar. Make a commitment to improving. Otherwise, most likely you will become a statistic, one of the 75 percent of organizations simply going through the motions of benchmarking. Basic human nature makes you want to keep doing something in the same way you have been doing it. This may be comfortable, but it is not necessarily the best practice. Staying the same improves nothing.
For proof, visit any driving range. It is a fact that nearly two-thirds of all golf strokes are taken within 100 yards of each hole. But by far the majority of practice time spent by golfers involves shots longer than 100 yards. Most courses do have practice greens for sinking a few putts before teeing off, but when will someone introduce a "short-game range"?
Not surprisingly, research suggests that the least effective people are those who spend the most time doing tasks that they feel most comfortable doing. Golf metaphors can be stretched to cover service delivery. Long and straight driving really looks good, especially on the first hole in front of other golfers waiting to tee off, or at a range. But improving driving will have a minor effect on overall golf performance. It is thus important to focus efforts where they will do the most good. Users are continually reminded to prioritize: Benchmark first those programs or services with the highest costs, those that bring in the most revenue, or those in which you suspect performance shortfalls (e.g., those that show an excess of citizen complaints or a decrease in revenues). Benchmarking a lunchroom, mailroom, or employees' activity fund probably will not yield the best return on your investment (Swain 1993) when there are so many other choices out there.
Remember that benchmarking focuses on results and improvement, not on creating a public report card. Summarizing the delivery of respective programs or services with a single letter grade may be a quick and dirty means of assessment, but it is not what benchmarking is about. This is not an easy lesson to learn in the real world of councils, where so many managers are in transition. Managers, agency heads, and councils, however, must all understand that the goal is to improve, not to place the blame for any sub-par performances. Identifying a problem and taking steps to resolve it is certainly preferable to not knowing something is wrong, keeping your head in the proverbial sand, or maintaining an inefficient status quo.
Of course, when comparing jurisdictions on benchmarks, even those services found to be above the norm can still improve. Deming preached constant improvement: Top performers should never be content to rest on their laurels. Improving quality, he wrote, automatically increases productivity; the more quality that is built into a product or service, the less it costs to make or deliver. Quality is designed in, rather than inspected in later on (Dobyns 1990). Failing to meet requirements the first time around can take as much as 30 percent from an operating budget or can mean 30 percent in lost sales revenue.
Finally, to obtain real benefits from benchmarking, all users must be ready to accept change and to make a commitment to follow through. An organization's environment and culture must be receptive to change. Successfully implementing a quality process depends upon, and naturally causes, a change in organizational culture and in traditional ways of doing things. Employees will be doing things differently than in the past. They will be thinking differently, with emphases on responsiveness, customer satisfaction, and doing it right the first time. At Florida Power and Light, for instance, managers and supervisors fill the roles of leaders, coaches, and trainers. In 1989, FPL was the first group outside of Japan to take home the Deming Prize for outstanding achievement in quality management. As Swain (1993) concludes:
Benchmarking will only be a success if the information showing where you fit in against the best competitors can get translated into direct action . . . . That is the whole point: to change the company into one that is so efficient and so profitable that your competitors start benchmarking themselves against you.