- THE GRAVEYARD OF GOOD IDEAS
A couple of weeks ago, we wrote a B&G report titled “Is Speedy Government Good Government?” in which we made the case that the pressure for states and localities to solve problems overnight can often lead to failed efforts. “Jumping into new initiatives, without the time for adequate preparation, may be great for elected officials anxious to make quick headlines,” we wrote. “But glitter often tarnishes as time passes and efforts go through a rational progression of brainstorming, bringing in stakeholders, finding revenue streams, building a talent base, and forging through a political process that can sometimes favor inertia over momentum.”

That column seemed to strike a responsive chord among many of our readers, one of whom, Barry Van Lare, a very active fellow with the National Academy of Public Administration, commented on a related issue that we think is worthy of note. “While inadequately staffed and hasty implementation can, and often does, lead to failure,” he wrote, “there is also a problem with the frequency of change. All too often potentially successful initiatives are abandoned well before they can reasonably be expected to produce results.”

In our experience that’s absolutely true, and it’s a pity. Over the years, we’ve seen many efforts that seem to have great promise fall by the wayside before they have the opportunity to prove their value. We’ve seen this happen repeatedly in the realm of education, where new programs are put into effect that in the real world will take four or five years to show progress, but then a new superintendent comes in, is confronted by a school board impatient for results, and tries something new (but not necessarily better, or maybe not as good).

Even the best programs often take time to have any impact, even when they’ve been carefully thought out beforehand. New staff sometimes must be hired, and existing personnel have to be trained (and sometimes there’s no money to do so, but that’s a different story). Progress rarely can accrue on the back of a good idea until the people charged with implementation are up to speed. What’s more, sometimes terrific new programs have a whole host of little flaws that must be detected and dealt with before they begin to show their worth. Getting to visible solutions often requires an iterative process, with tweaks and changes necessary before results show improvement.

Delay in gathering, analyzing and disseminating data can also lead people to believe that state or local governments aren’t making things better for them. When evaluation or performance measurement data comes out a year after a new program has been inaugurated, it’s generally based on the most recent data available, which frequently comes from before the effort was even begun. Administrators are likely to understand that this is just a timing issue, but when the local press picks up on the report, it’s all too easy to ignore the existence of the initiative, and that, in turn, can make residents clamor for yet more change.

Lurching from one initiative to another, without sufficient evidence of whether the first one worked or didn’t, isn’t a productive approach. Yet an impatient populace, increasingly losing trust in government, can push leaders to forfeit the advances that might have been made, if they’d only had the time.
#StateandLocalGovernmentManagement #StateandLocalPerformanceManagement #StateandLocalPerformanceMeasurement #StateandLocalProgramEvaluation #StateandLocalGovernmentData #StateLocalPolicyImplementation #GovernmentProgramGraveyard #CityGovernmentManagement #CityGovernmentPerformance #StateandLocalHumanResources #HastyProgramImplementation #LaggingData #TrustinGovernment #StateandLocalWorkforce #StateandLocalProgramFailure #BarrettandGreeneInc #DedicatedtoStateandLocalGovernment
- IS SPEEDY GOVERNMENT GOOD GOVERNMENT?
Not to be overly blunt, but we’ve grown sick and tired of hearing people complain about how long it takes state and local governments to make dramatic improvements in the lives of their residents. Anyone who has been reading this website, or any of our other writing, will know that we’re firm believers that a band of dedicated people working in government can make important changes for the better. But those changes rarely happen overnight.

Take, for example, an article we recently wrote for Route Fifty that focused on a handful of states that are taking important steps forward to create accommodations for potential workers who deal with a variety of “non-apparent” issues, including autism. Since these kinds of issues had been ignored in the past, we thought that this was splendid news to share. But one of our readers wrote to us to say that she was “wondering why certain states are more anxious than others to begin these programs.”

We understand the importance of quick action, but we also know the complexity of culture, policy and organizational change, and understand that it can take a fair amount of time for any initiative – no matter how important – to get traction more widely. Jumping into new initiatives, without the time for adequate preparation, may be great for elected officials anxious to make quick headlines. But glitter often tarnishes as time passes and efforts go through a rational progression of brainstorming, bringing in stakeholders, finding revenue streams, building a talent base, and forging through a political process that can sometimes favor inertia over momentum.

Carrying this thought a step further, we’ve seen lots of instances when legislation actually moves too quickly to yield a successful outcome. Elected and appointed officials need to take the time and effort necessary to consider how the policy is going to be implemented. As a PwC study titled “Are public projects doomed to failure from the start?” stated, “Political decision-makers and senior civil servants often have misconceptions about the capabilities and boundaries of project management. Project deadlines are often set on the basis of political debate rather than a realistic planning effort.”

There’s an unfortunate phenomenon at work here. The more people insist that their leaders make changes overnight, the more likely it is that the new initiatives will not succeed. And the more failures accrue, the greater the pressures to get something new and exciting out the door – especially as election day approaches.

There’s an anonymous quote we like that sums up our case: “Democracy is a slow process of stumbling to the right decision instead of going straight forward to the wrong one.” #StateandLocalHumanResources #StateandLocalGovernmentManagement #StateandLocalCultureChange #StateandLocalGovernmentPerformanceManagement #StateandLocalWorkforce #DeliberativeDecisionMaking #StateandLocalPolicyImplementation #PolicyImplementation
- POTHOLES ON THE AI ROAD
We’re not scared of AI. It doesn’t take much historical memory to know that pretty much any new technology brings out fear and trepidation. In fact, according to TechRadar, “When the Stockton-Darlington Railway opened in 1825, people feared the worst: the human body, surely, wasn't made to travel at incredible speeds of 30 miles per hour. People genuinely believed that going that quickly would kill you in gruesome ways, such as your body melting.” Over a century later, many were convinced that the arrival of television was going to ruin children’s eyesight and destroy the movie business altogether. As we noted in one of the two biographies we wrote about Walt Disney, film producers regarded television as “the monster,” and viewed it “with suspicion fearing it would keep audiences out of movie theaters.”

And now here comes AI. We turned to ChatGPT for its view on this issue (going to the source seemed only fair). It told us that “As AI systems become more sophisticated, there’s a risk that they may operate beyond human control or oversight.” Still, we feel pretty confident that while AI may have some unfortunate consequences, it’s not the stuff of nightmares. In fact, we’ve tended to write about many of the ways in which AI can help governments run more efficiently and effectively. But our eyes haven’t been closed to the downsides: the hazards of AI both in government and as a research tool. Here are five of the things that are on our mind:

1) Many governments are working on policies designed to restrain improper uses of AI, and that’s a good thing. But based on years of watching how government regulations work, we worry that there’ll be insufficient oversight to make sure that the policies actually have been implemented.

2) There’s an enormous amount of misinformation available in the AI-verse. We turned to ChatGPT to ask about ourselves, and after saying a bunch of nice things, it also mentioned that “they have authored or co-authored books such as ‘Balancing Act: A Practical Approach to Business Event Based Insights.’” Having never heard of this book, we were sure we hadn’t written it, and on investigation it turns out to have been written by Kip Twitchell, who works with IBM Global Business Services.

3) It’s not only the bad information that’s the problem; it’s the information that isn’t even available through AI that can matter. Artificial intelligence, after all, can only know what it can find by foraging the Internet, and a great deal of worthwhile knowledge hasn’t been digitized at all. This can include current information from smaller governments, remote geographic areas, groups of people whose sentiments or lives tend to go undocumented, or magazines, newspapers and journals that no one has ever made available in digital form.

4) AI has the capacity to come to conclusions based on hidden biases. As an October 2023 brief from IBM pointed out, “Underrepresented data of women or minority groups can skew predictive AI algorithms. For example, computer-aided diagnosis (CAD) systems have been found to return lower accuracy results for black patients than white patients.”

5) Also alarming to us is the commonplace notion that AI can make decisions for governments. It can certainly be a valuable tool, but as smart as AI may be right now (we won’t begin to think about the future), it can only serve as an advisor, not as a decision-maker.
For years now, we’ve advised against using the term “performance-based budgeting,” because we don’t believe that performance measures can be the sole guide to good budget decisions. We’ve always made it a point to talk about “performance-informed” budgeting. The same goes for AI’s potential to help with decisions. It can inform them but shouldn’t be expected to make them. #StateandLocalGovernmentManagement #StateandLocalGovernmentPerformance #StateandLocalGovernmentData #StateLocalGovernmentAIPolicyandManagement #CityAIManagement #StateAIManagement #CountyAIManagement #StateandLocalTechnologyManagement #StateandLocalPerformanceMeasurement #StateandLocalPerformanceManagement #StateLocalArtificialIntelligenceManagement #StateLocalArtificialIntelligencePolicy #GovernmentOversight #FearOfTechnology #GreeneWaltDisneyBio #StateLocalGovernmentRegulation #BarrettandGreeneInc
- EMPLOYEE SURVEYS' TWO-EDGED SWORD
Employee surveys can provide extremely valuable information for supervisors, managers and HR officials in cities, counties and states when they reveal both the negative and positive sentiments that employees have about their workplace. They have the potential to lead to decisions that will improve workforce satisfaction and retention of valued employees. But when employee surveys are viewed as nothing more than another form to fill out, and there’s no feedback or sense that the surveys were ever seen by anyone who cares, then they can have the opposite effect.

“When you do a survey, you are implicitly making a commitment to employees that you are going to share the results and do something about it,” says Bob Lavigna, senior fellow-public sector for UKG, a firm that provides workforce and human resource management technology. “And if you don’t follow through on that commitment you are going to damage trust.”

We called Warren Kagarise, digital engagement manager for King County, Washington, to ask him about this issue, and his advice was simple: “Once the survey ends, close the loop with respondents — even if the result is not flattering to the agency.” He went on to say that “if you’re not able to close the loop and give the acknowledgment that we heard from you and this is the next step in the process, people assume that you were never listening in the first place.”

Of course, simply making the results of surveys available is only half the equation. It’s equally important to let the people who filled them out know what the organization intends to do about the problems that have surfaced. Furthermore, even if there’s nothing much that can be done, it will serve an entity well to explain why that’s the case. “If, for example, there are complaints about compensation and there aren’t enough resources to raise pay, then it can be worthwhile educating employees about the value of the benefits to their total compensation,” advises Lavigna, “as benefits like health care or pensions can add on another 30 percent or more to that total.”

At a time when many state and local governments are concerned about their turnover rates, employee surveys can help keep a satisfied workforce in place. But if those surveys are perceived as a sham exercise, everyone would be better off if they weren’t used at all. #StateandLocalHumanResourceManagement #StateandLocalHumanResources #StateandLocalPerformanceManagement #StateandLocalEmployeeSurvey #EmployeeSurveyFollowUp #StateLocalTotalCompensation #EmployeeSurveyImpact #PublicSectorWorkplace #LaborManagementRelations #RobertLavigna #UKG #KingCountyDigitalEngagement #StateEmployeeSurveyFeedback #CityEmployeeSurveyFeedback #StateLocalEmployeeRetention
- REBRANDING LOCAL GOVERNMENT
Over the years, we’ve chatted with high school and college students about the potential of careers working for state and local governments. All too often their responses are something like “Yeah, I suppose so,” in tones that make us believe that they’re not supposing anything at all, except that the whole thing sounds like a crashing bore. This is particularly dismaying for the two of us, who have devoted our careers to this corner of the world. We are pretty sure that for the most part they have little or no idea what kinds of jobs local government can offer aside from the obvious ones like police, fire and sanitation.

Over the last several years, as the workforce shortage has afflicted many governments, we’ve reported about all kinds of outreach efforts that attempt to show the life of a government employee as a secure one with potential for advancement and the opportunity to benefit the world. These efforts are paying off in some places, but we think that many cities, counties and states are missing out on a potentially more persuasive approach. Instead of trying to sell “jobs in local government” as though that phrase had much meaning on its own, do a better job of showing potential candidates all the genuinely exciting missions that local government work leads to.

In a recent conversation, John Bartle, President-Elect of the American Society for Public Administration and Dean of the College of Public Affairs and Community Service at the University of Nebraska at Omaha, expounded on precisely what we’re thinking. “My children are 25 and 23,” he told us. “And they and their friends are not excited to work for the government. However, they are excited to advance many of the goals that require government and the nonprofit sector: peace-making, disaster response, improving racial and gender equity in programs, fair taxation, labor rights, and child protection.”

According to the National Center for Education Statistics, the most common majors among students earning baccalaureate degrees are business; health professions and related programs; social sciences and history; biological and biomedical sciences; psychology; and engineering. All of these fields offer a plethora of jobs in government. And that doesn’t even count the “helping jobs,” like social work, that are appealing to many.

Okay, we admit that maybe the big dreams of many teens – to be professional athletes, movie stars or rock musicians – aren’t on the roster of job titles in government. But we do believe that outside of fantasy-land job plans, there are plenty of young people who aspire to help other people. Better educating them about the many opportunities that exist in the realm of government is a way to sell the brands of jobs that cities, counties and states can offer. #StateandLocalHumanResources #StateandLocalGovernmentManagement #StateandLocalHiring #LocalGovernmentHiring #RecruitingNewGovernmentEmployees #RecruitingYoungPublicSectorEmployees #StateandLocalRecruitment #CityRecruitmentStrategy #CountyRecruitmentStrategy #StateRecruitmentStrategy #MarketingLocalGovernmentJobs #StateandLocalWorkforceShortage #WorkforceShortage #AmericanSocietyForPublicAdministration #ASPA #JohnBartle #UniversityOfNebraskaOmaha #AttractingYoungPeopleToGovernment
- OUR RESPONSE TO RESPONSE TIMES
When it comes to many vital public services, including police, fire and EMS, one of the primary – and sometimes the only – performance measures that people use is response time. On the surface this makes a lot of sense. For emergency services, particularly, every moment can spell the difference between a minor incident and a tragedy. To the general public, fast response times are real, tangible evidence that they are getting good service. Just ask anyone who has waited for an emergency vehicle when a relative or friend might be undergoing a heart attack.

All that said, however, response times are often misunderstood. Sometimes, when they are overemphasized, they can actually lead to emergencies themselves.

It’s our guess that most people who read about response times aren’t aware that they can be measured very differently by first responders. According to Lexipol, which provides information and tech solutions to help public safety organizations, there are three different ways that response times are generally measured:

· “Turnout time – the elapsed time from when a unit is dispatched until that unit changes their status to ‘responding.’”
· “Call processing time – the elapsed time from the call being received at the (public safety answering point) to the dispatching of the first unit.”
· “Travel time – the elapsed time from when a unit begins to respond until its arrival on the scene.”

There’s a huge difference between the three – particularly from the point of view of the person who is urgently in need of help. With a shortage of EMS vehicles in many parts of the country, for example, after the 911 call is finished it can take the dispatcher valuable minutes to actually get an ambulance company to respond to the call. Once that happens, the ambulance still needs to arrive at the scene. From the perspective of the person who made the call, the response time might be 23 minutes (from call to help), not eight minutes (for the emergency vehicle to make the trip).

If response times are truly to be used as helpful performance measures, we’d argue that what really matters is the amount of time it takes from hanging up with 911 until help comes knocking on the door (or kicking it down in extreme instances). Other measures don’t really reflect the customer experience.

Yet another issue with response times is that they don’t take into account the specific situation – and that can jeopardize safety for others, including the responder. If someone thinks they’ve broken an arm, for example, and calls 911, it probably doesn’t matter much if an ambulance arrives in ten minutes or twenty minutes. But if the call is for a fire or a heart attack, then every minute counts. Yet these different scenarios are commingled in published response times. And that means that when emergency vehicles are summoned, responders who are being held accountable for their response times are racing to the scene as quickly as possible – traveling far faster than the speed limit, going through stop signs and so on. No surprise that in 2021, according to the National Safety Council, 198 people “died in crashes involving emergency vehicles. The majority of these deaths were occupants of non-emergency vehicles.”

Our recommendation is that response times, wherever possible, should be disaggregated in such a way as to differentiate between life-and-death emergencies and those that are far less serious in nature.
This would not only make the response time measures more useful – it might save other innocent lives along the way. #StateandLocalGovernmentManagement #StateandLocalPerformanceMeasurement #ResponseTime #PoliceResponseTimeManagement #FireResponseTimeManagement #EMSResponseTime #ResponseTimeManagement #PoliceManagement #PoliceData #FireManagement #FireDepartmentData #EmergencyManagementResponseTime #StateandLocalDataGovernance #NationalSafetyCouncil #ResponseTimePerformanceMeasures #Lexipol #PerformanceMeasurement #PerformanceManagement #B&GReport
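To make the arithmetic behind that recommendation concrete, here is a minimal Python sketch. The call records, timestamps and field names are entirely hypothetical (not any department’s real data schema); it simply computes the three Lexipol components plus the total time a caller actually waits, then reports the median total separately by call severity.

```python
# Minimal sketch with hypothetical call records: compute the three response time
# components described above, plus the caller-experienced total, grouped by severity.
from datetime import datetime
from statistics import median

calls = [
    {"severity": "life-threatening",
     "received": datetime(2024, 3, 1, 14, 0, 0),     # call reaches the 911 center
     "dispatched": datetime(2024, 3, 1, 14, 1, 30),  # first unit dispatched
     "responding": datetime(2024, 3, 1, 14, 2, 30),  # unit marks itself "responding"
     "on_scene": datetime(2024, 3, 1, 14, 8, 30)},   # unit arrives
    {"severity": "non-urgent",
     "received": datetime(2024, 3, 1, 15, 0, 0),
     "dispatched": datetime(2024, 3, 1, 15, 12, 0),
     "responding": datetime(2024, 3, 1, 15, 15, 0),
     "on_scene": datetime(2024, 3, 1, 15, 23, 0)},
]

totals_by_severity = {}
for c in calls:
    call_processing = c["dispatched"] - c["received"]  # received -> dispatch
    turnout = c["responding"] - c["dispatched"]        # dispatch -> responding
    travel = c["on_scene"] - c["responding"]           # responding -> arrival
    total = call_processing + turnout + travel         # what the caller experiences
    totals_by_severity.setdefault(c["severity"], []).append(total)

for severity, totals in totals_by_severity.items():
    minutes = median(t.total_seconds() for t in totals) / 60
    print(f"{severity}: median caller-experienced response time = {minutes:.1f} min")
```

With the hypothetical non-urgent record, a report built on travel time alone would show eight minutes, while the caller actually waited 23 – the gap described above – and keeping the severities separate avoids lumping broken arms in with heart attacks.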
- THE TWELVE BIG LIES ABOUT STATE AND LOCAL GOVERNMENT
There are all kinds of variations on the theme of the three big lies that people tell in the normal course of day-to-day life. One of our favorite sets consists of: 1) This is for your own good. 2) It’ll be done by 3:00. 3) It must be true; I heard it on the news. In the old days there was another one that was exceedingly popular: “The check is in the mail.” But nowadays nobody much sends checks in the mail, so we’d offer a replacement for that one: “The check is being processed.”

Those deceits, of course, are generic in nature. But over the years we’ve been collecting a series of mantras about the alleged reality of state and local government that don’t necessarily work in the real world. We’ve heard them from people at all levels of government, sometimes from established authorities and sometimes from people who just pretend they understand the way government works.

Here are our top twelve. We’d be interested in hearing additional ones from readers of this B&G Report. Of course, some of the dozen items that follow are valid sometimes. But we’ve heard them repeatedly when ample evidence demonstrates that they’re wide of the mark.

By the way, we hesitate to use the word “lies” here, as that word seems to have become widely open to interpretation these days and is frequently used just to describe something with which the accuser disagrees. So, just to be specific, what follows are explanations about the way things work that are frequently NOT the way things work. The list is based on both our own experience and the understanding of states and localities we’ve accumulated over the last thirty years.

1. “We know we are in financially sound shape because we have to pass a balanced budget.”
2. “It’s impossible to fire a public sector employee.”
3. “We’ll solve this problem by setting up a commission. Or a study group.”
4. “Our transparency website means our government is transparent.”
5. “Buying new technology will be the key.”
6. “Merit pay is pay based on merit.”
7. “The key reason we have a huge unfunded liability in our pensions is that our benefits are too rich.”
8. “You should just look at the general fund in order to analyze our city or state’s financial condition.”
9. “You can always trust our data.”
10. “Government can be run like a business.”
11. “Everything we need to know is on the Internet.”
12. “Once a piece of legislation is passed, that means that something is really going to happen.”

#StateandLocalGovernmentManagement #StateandLocalHumanResources #StateandLocalBudgeting #StateandLocalGovernmentPerformance #StateandLocalGovernmentTransparency #StateandLocalGovernmentWorkforce #StateandLocalPension #StateandLocalGovernmentTechnology
- WHY THE PHRASE "BEST PRACTICE" MAKES US JITTERY
We have to admit it. More than once we’ve referred to a policy or management approach as a “best practice.” But mostly those words were originally uttered by a source we quoted. Frankly, those two overwhelmingly common words often make us uneasy.

There may be cases in which best practices can apply from city to city and state to state. Best budgeting practices, for example – such as those developed by the Government Finance Officers Association – can certainly be useful. It’s an accepted best practice in budgeting, for example, that entities should cover current year expenditures with current year revenues – not revenues borrowed from the future. Who can argue with that?

Outside of budgeting, there are some other areas in which best practices can certainly hold up. And many of them, which may not have held true in the past, are now thankfully self-evident. In human resources, for example, it’s certainly a best practice to make every effort to avoid explicit or implicit racism in hiring or recruiting. Or consider the realm of information technology, where no one can deny that sufficient training can be fairly called a best practice.

Before we go on, it seems worthwhile for us to provide our own definition of “best practice.” Others may disagree, but it’s the way the words sound to us – and we suspect to many others. We believe that the ubiquitous phrase should be used to describe management policies that can be applied pretty much universally. Best practices, we’d argue, should be something like plug-and-play models that others can pick up and use with a reasonable assurance of success.

But that’s often not the way the words are used. For example, the latest glittery idea that seems appealing (but has only been proven worthwhile in a smattering of places) can often be dubbed as best. We see this all over the place. People writing reports for any number of significant organizations will take a study of a handful of cities or states and list approaches they’ve uncovered as “best.” Not to seem cynical, but we’ve noticed that the words “best practice” are often used by consulting firms to sell their own approaches.

For years, it was considered a best practice that states set aside exactly 5% of revenues in their rainy day funds. No more. No less. When we researched the topic, we discovered that that precise number emanated from an off-the-cuff comment in a speech given by a leader at one of the ratings agencies. As years have passed, thinking on the topic has grown more sophisticated. The Volcker Alliance, for example, has thrown that 5% figure out the window and encourages states to tie their reserve funding to the volatility of their revenues.

Here are five reasons we are concerned when a best practice is ballyhooed by a government official:

1) Ideas that work in rural areas often don’t apply well to densely populated cities.
2) Approaches for homogeneous regions may leave out elements important in places with greater diversity.
3) Things that work well in healthy economic times may need to be forgotten in the depths of a recession.
4) Changing times generally require new solutions. For example, in the depths of the pandemic it was a best practice not to shake hands. Nowadays, people even hug hello.
5) The label is too often applied before a notion has been properly evaluated and proven to be generally workable.

Fortunately, there are a number of alternative phrases that can be somewhat more accurate. We prefer "promising," "leading," or "accepted" practice.
None of these reflects a universally, unquestionably, absolutely superior way of doing government business. Ultimately, this is all just a matter of semantics. The fundamental reason we feel as we do about practices being labeled the “best” is that this phrasing may stand in the way of the evolution of thinking that’s necessary for progress in states and localities. If we know the best way to do something, then why look for a better way? And the search for better-functioning government is the core of what we do for a living. #StateandLocalGovernmentManagement #StateandLocalGovernmentPerformance #EvidenceBasedPractices #BestPracticeCynicism #ErroneousBestPracticeLabeling #AvoidingBestPracticeLabels #StateandLocalBudgeting #StateManagement #LocalManagement #PerformanceManagement #EvidenceBasedManagement #EvidenceBasedDecisionMaking #EvidenceBasedDecisionMakingShortcoming #GovernmentConsultantOverreach
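As a back-of-the-envelope illustration of why the flat 5% rule and a volatility-informed approach can point to very different reserve levels, here is a small Python sketch. The revenue figures and the two-standard-deviation multiplier are our own hypothetical choices for illustration, not the Volcker Alliance’s methodology.

```python
# Illustrative sketch (made-up numbers): contrast a flat 5%-of-revenue reserve target
# with a target scaled to how volatile a state's revenues have historically been.
from statistics import pstdev

annual_revenue = [10.0, 10.8, 9.6, 11.2, 10.4, 11.9]  # hypothetical, in $ billions

# Year-over-year growth rates, used as a rough proxy for revenue volatility.
growth_rates = [
    (curr - prev) / prev
    for prev, curr in zip(annual_revenue, annual_revenue[1:])
]
volatility = pstdev(growth_rates)           # standard deviation of growth
latest = annual_revenue[-1]

flat_rule = 0.05 * latest                   # the old one-size-fits-all 5% target
volatility_rule = 2 * volatility * latest   # e.g., enough to absorb two "typical" dips

print(f"Flat 5% rule:      ${flat_rule:.2f}B")
print(f"Volatility-scaled: ${volatility_rule:.2f}B (volatility = {volatility:.1%})")
```

The point is not the particular multiplier, which is an assumption here, but that two states with identical revenues and very different volatility would land on very different reserve targets once the one-size-fits-all label is dropped.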
- FIGHTING FRAUD: ADVICE FOR SMALLER CITIES
Whatever their size, all state and local governments must contend with the possibility that federal grant money or taxpayer revenues will be siphoned off by perpetrators of fraud. While many of the defenses against fraud are similar, there are some cautions that particularly apply to smaller communities.

In an interview that appeared on Route Fifty’s October 11, 2022 “Follow the Money” broadcast, City Manager José Madrigal talked about the effort to fight fraud in Durango, a city of 19,000 in a rural county in the southwest part of Colorado. We’re highlighting here a few of Madrigal’s comments that are particularly germane to the 19,000 cities, towns and villages with fewer than 25,000 people. The following edited comments occur toward the end of the broadcast, slightly after the 15-minute mark.

The hazards of personal connections in a smaller community: Madrigal said, “Connection can be a really great thing because, you know, with that small town feel, everybody knows each other. My kids go to school with a lot of my co-workers’ kids. They sometimes hang out. We’re very well connected and sometimes with that personal connection comes a letdown in your guard.

“You know the person. Our kids play soccer together and they play basketball together . . . You start building all of these social connections. In places where you may not be personally connected, it’s easier to be a little more suspicious.”

On learning from larger communities: Madrigal remarked that cities with smaller populations could still model themselves on bigger cities and not view size as a barrier. “Some people who have not been in a bigger city have this shield. ‘Oh, no. Can’t do that. . . They have more resources than we’ll ever have.’

“I think there’s ways where you can scale a lot of those things. I may not have a 30-member accounting department, but I have 15 and I can be able to do some things in a better way.

“I think sometimes the bravado of coming from a small town or representing a small town (makes us think) we can’t do it like bigger towns. There’s a lot of processes that are out there that I think you can definitely scale down so as not to be intimidated by the processes of bigger areas. Look at them and say, ‘What can I bring in?’” #Fraud #PublicSectorFraud #Durango #Colorado #JoseMadrigal #FightingFraud #RouteFifty #FollowTheMoney #StateandLocalGovernment #StateandLocalGovernmentFraud #CityandCountyManagement #StateandLocalGovernmentManagement #SmallCityCulture #GovernmentOversight
- GUIDELINES FOR BUILDING A POTENT PERFORMANCE MANAGEMENT SYSTEM
In early 2022, we were honored to join the late Harry Hatry, a pioneer of performance management and a distinguished fellow at the Urban Institute, and Batia Katz, also of the Urban Institute, in co-authoring one of his last papers, which was titled “Do’s and Don’ts: Tips for Strengthening Your Performance Management Systems.” Hatry, whom we'd known for decades, passed away on February 20, 2023 at the age of 92.

The paper also included contributions from a formidable group of performance experts. The list at the time included Maria Aristigueta, dean of the Biden School of Public Policy and Administration at the University of Delaware; Don Moynihan, McCourt Chair at the McCourt School of Public Policy at Georgetown University; and Kathy Newcomer, professor in the Trachtenberg School of Public Policy and Public Administration at the George Washington University.

The paper sums up a great deal of performance measurement and management knowledge that Hatry and others put together over many years. Here are a handful of our favorite items under the category of “Do’s.” They aren’t taken verbatim from the paper – we’ve edited many for length.

Collecting Performance Data

Do seek input from stakeholders as part of your process for selecting performance measures to track. Stakeholders are likely to include frontline employees (those who serve your program participants); special interest group members; elected officials; and, especially, participants in each relevant demographic group.

Do make sure that mission statements focus on expected benefits. What benefits or outcomes are sought from the program, and for whom? What negative outcomes does the program seek to avoid or alleviate? Too often, mission statements identify only how benefits will be achieved without clarifying what benefits are sought.

Do include both output and outcome indicators in performance measurement systems. Select the measurements used to track the performance of new programs at an early stage of program development. Defining and sharing these measurements will provide guidance for people in positions of authority in the program about what is expected of them.

Analyzing Performance Data

Do compare the outcome values broken out (disaggregated) by demographic characteristics (such as age group, race/ethnicity, gender, educational level, and location). This is of major importance in identifying service equity issues and identifying different service procedures that would boost the outcomes for different demographic groups. Identify and highlight in performance reports unexpected issues indicated by these breakouts.

Do compare the performance values over time. Look for trends that indicate the need for action.

Do compare performance indicators’ actual values with the targets that had been set for individual performance indicators. Targets are used both as an accountability tool and to motivate program staff.

Presenting Performance Findings

Do identify any significant ways in which the data collection has changed over time. Otherwise, users may be misled by year-to-year changes that are not attributable to real-world improvements or declines but simply to changes in the way the data have been created.

Do clearly define each performance indicator. Both the data collectors and data users should be able to understand what is being measured.
For example, fire departments can measure response time from the moment the call comes in until trucks arrive at the scene, or alternatively they can provide the same measure beginning the moment the trucks leave the station.

Do tell stories that illustrate the data’s meaning and importance. Numbers alone will only communicate effectively to readers who enter a document with curiosity. Real-world anecdotes will engage a far larger audience.

Disseminating Performance Findings

Do share findings with the press. One common complaint is that performance information only gets attention when it’s negative. That can only be counteracted with a proactive approach. One key to getting attention in the press is to provide information that runs contrary to common assumptions.

Do make the latest data on the performance indicators readily accessible to program managers throughout the year. As issues arise during the year, the latest performance data should be available to managers to use in addressing those issues.

Do provide summaries and highlights to report audiences after each performance report is produced.

Using Performance Findings

Do reply to queries about findings, even if they are critical in nature. If it turns out that a query challenges findings in a way that could raise some doubts, it’s worth acknowledging that. Trust and credibility grow when room for doubt is acknowledged.

Do periodically review the performance measurement process and update it as needed. Is it tracking the right things? Are the performance data and the data collection procedures producing data of sufficient quality? Is the information of sufficient value to justify a measurement’s added cost? Are the performance findings clearly presented? Has the information gotten to all those who can use it?

Do unleash the full power of performance data, not only through regularly published reports, but also at other opportunities throughout the year. Use the performance measurement process to help address issues as they arise. This will enable decisions to be made with the latest available performance data. It will also enable program managers to obtain performance information tailored more closely to the particular issue at hand.
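As a minimal illustration of two of the analysis tips above – comparing outcome values disaggregated by demographic group, and comparing actual values against targets – here is a short Python sketch. The participant records, group labels and 70 percent target are entirely hypothetical.

```python
# Minimal sketch with made-up program data: disaggregate an outcome rate by
# demographic group and flag groups that fall short of an illustrative target.
from collections import defaultdict

records = [
    # each record: a participant's group label and whether the desired outcome was met
    {"group": "age 18-34", "outcome_met": True},
    {"group": "age 18-34", "outcome_met": False},
    {"group": "age 35-64", "outcome_met": True},
    {"group": "age 35-64", "outcome_met": True},
    {"group": "age 65+",   "outcome_met": False},
    {"group": "age 65+",   "outcome_met": True},
]
target_rate = 0.70  # hypothetical target set for this indicator

counts = defaultdict(lambda: [0, 0])  # group -> [met, total]
for r in records:
    counts[r["group"]][1] += 1
    if r["outcome_met"]:
        counts[r["group"]][0] += 1

for group, (met, total) in sorted(counts.items()):
    rate = met / total
    flag = "on target" if rate >= target_rate else "below target: investigate"
    print(f"{group}: {rate:.0%} of participants met the outcome ({flag})")
```

The same breakout-and-compare pattern applies whatever the grouping variable is, and it is exactly the kind of routine check that surfaces the equity issues the paper warns can otherwise stay hidden in a single aggregate number.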
- MOUSETRAPS FOR FLAWED DATA
It may seem a little heavy-handed, but for years now we’ve been writing about the endless reams of bad data that are used to manage and to make policy. For the most part, we’ve pointed to issues that require careful examination of the information to determine whether it’s trustworthy or not. But, as time has passed, we’ve come across a great many signals, easily spotted and identified, that point to quicker recognition that information should be scrutinized. Here are a half dozen examples:

1) Beware comparisons between absolute figures that come from different-size cities or states. If, for example, something criminal happens to hundreds of people in California, that may not be nearly as alarming a situation as when the same thing happens to dozens of residents of Wyoming or North Dakota.

2) Sometimes reports or articles use numbers that are so precise as to be unbelievable. It seems to us that when project spending is reported as $1,436,432.15, there’s no legitimate way to figure costs out to cents, dollars or hundreds of dollars. A tight range is often more useful and believable.

3) Speaking of ranges, it’s self-evidently problematic when an expense is reported as somewhere between $100 million and $500 million. Either not enough due diligence has been done, or the estimators are living in the Land of the Wild Guess.

4) If you’re relying on data for which no assumptions are provided, dig deeper. When discount rates vary between two state pension plans, it’s entirely possible that the liability figures are not comparable.

5) Watch out for figures that are huge beyond common sense. Some years ago, there was a lot of talk about one million children being abducted each year. Living in New York City at the time, we saw news reports full of the story of just one little boy, Etan Patz, who was last seen at a bus stop in lower Manhattan. How could it be that, if such huge numbers of children were disappearing, one child was getting so very much attention? It turned out, according to the Denver Post in 1986, that the “national paranoia” raised by the one million figure wasn’t really reflective of scary men luring children into their cars with candy – but rather of children taken in custody battles. (And the often-repeated one million figure was also an exaggeration. In 2017, the Justice Department reported that the number of serious parental abductions is closer to 156,000 annually, of which about 30,500 reach the level of a police report.)

6) Information that is self-reported to the federal government by states, or by cities and counties to the states, can be questionable. A question like “Does your city use performance information?” can get “yes” answers regardless of differing definitions and degrees of use. In the past a big-city mayor told us that his community used measurements to make decisions about road conditions. When we pursued the question, it developed that the only data the city had was an accumulation of citizen complaints about potholes.

#StateandLocalDataManagement #StateDataQuality #CityDataQuality #CountyDataQuality #SuspiciousGovernmentData #FlawedGovernmentData #BadData #InaccurateGovernmentData #StateandLocalGovernmentDataHazard #StateLocalGovernmentDataHazard #AvoidingInaccurateStateGovernmentDataComparisons #AvoidingGovernmentDataErrors #HowToIdentifyBadGovernmentData #TrustAndFaultyGovernmentData #StateGovernmentDataAssumptions #StateLocalPensionPlanDiscountRate #PublicSectorDataCaution #DataStandardization #SelfReportedDataSkepticism #SpottingCommonGovernmentDataMistakes
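A few of these signals lend themselves to quick automated screens. Here is a minimal Python sketch, with made-up numbers and thresholds, of two of them: putting raw counts on a per-capita basis before comparing different-sized jurisdictions (signal 1) and flagging cost estimates whose range is too wide to be credible (signal 3).

```python
# Illustrative data-scrutiny sketch; the figures and thresholds are hypothetical.

def per_capita(count, population, per=100_000):
    """Convert a raw count into a rate so different-sized places can be compared."""
    return count / population * per

# Signal 1: raw counts mislead; rates make California and Wyoming comparable.
print(f"{per_capita(300, 39_000_000):.1f} per 100k (California-sized state)")
print(f"{per_capita(30, 580_000):.1f} per 100k (Wyoming-sized state)")

def flag_wide_range(low, high, max_ratio=2.0):
    """Flag an estimate whose high end is more than max_ratio times its low end."""
    return high / low > max_ratio

# Signal 3: a "$100 million to $500 million" estimate spans 5x -- treat with skepticism.
print(flag_wide_range(100_000_000, 500_000_000))  # True
```

Screens like these obviously can’t certify that data are good; they simply catch the suspicious cases quickly so that the slower, careful examination can be aimed where it’s needed.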
- THE ACADEMIC-GOVERNMENT COMMUNICATIONS GAP
We spend a lot of time talking with government practitioners and a lot of time talking with academic researchers. We’ve often wondered about the barriers that keep them from talking with one another as much as they should. That’s why we’ve been particularly charmed by the Office of Strategic Partnerships in North Carolina, which we wrote about in an April 16 column for Route Fifty. The effort there has been to close the gap that often exists between the multiple academic researchers in a state and the government officials who are often addressing the same topics – just in different ways.

Here’s what Jenni Owen, the director of North Carolina’s Office of Strategic Partnership, told us last month. “People in academia who say they want to have an impact on policy really mean it. And people in government who say they want evidence and data to inform their decisions also really mean it. But the way they each go about doing that is often clunky.”

Witness her experience in a recent meeting with about 100 faculty members and doctoral students. “How many of you are pursuing or have something on your research agenda that you think has implications for public policy?” she asked. Everybody raised their hand. Next question: “How many of you have talked to anybody in government ever about the topic you’re pursuing or thinking about pursuing from a research perspective?” And one person raised a hand. Cautiously.

The problem is not one-sided. Government officials often have little time to seek out good research themselves and no easy way to know what’s going on in the multiple institutions of higher learning within the borders of their state or beyond. North Carolina has set up formal ways for government departments to communicate their research needs to universities across the state. But in our conversations, Owen, who has vast experience in both academia and government, and others also cited relatively simple ways the relationship could be improved.

1. While state and local governments can certainly do a better job communicating the different initiatives that they’re working on, researchers can also do more to actively learn about their own governments. Owen and OSP often advise researchers to watch the press releases from an agency, for example; to pay attention to interim committee study groups; to learn about organizational structure; and to look at departmental goal-setting, strategic plans and areas of cross-department collaboration.

2. Advice also focuses on initiating communications at the beginning of a research project, not when it’s finished. This requires knowing about new or ongoing government initiatives that might connect with research and touching base at an early stage with a couple of sentences about the relevance of the research to the initiative.

3. That means that before communicating, it’s important to understand areas of jurisdiction, a bit about departmental organizational structure and some basics about operations. If the research is about county jails, it’s likely an error to focus attention on the state Department of Corrections, which may have limited responsibility in that area. Likewise, it sends a bad signal to inquire as to whether someone is likely to be re-elected when they’re actually appointed to their position.

A few other tips for researchers who want to see their work having impact:

Don’t wait until a journal article is published to send it to a government official and hope that they read it, without explanation as to why it would interest them.
Make sure that the journal article is relevant to the work the official is doing and include a sentence as to why you’re sending it to them.

Understand who the players are below the top level. Communications don’t have to go to the cabinet secretary or the commissioner. As Owen told us, “Don’t assume that the leader is where you need to start.”

Consider ways to collaborate. University research may overlap with a government’s own specific research needs. See if the research you’re doing can also address a government’s own specific needs in an overlapping area. Says Owen: “I dream about the day when a researcher says to a government entity, ‘I’m going to go interview new parents in rural areas. Is there even just one question you’d like me to ask?’

“These are not just small gestures of partnership, but they are also substantive. It shows that researchers are thinking about policies and programs and applications of research learning for government decision-making. This can be a game-changer for the partners.”

One more thing: The gap between professions, the difficulty in communicating and the caution with which each side approaches the other is something we run into all the time. Our book, “The Little Guide to Writing for Impact,” was published last month. It was written in collaboration with Donald F. Kettl and motivated by our mutual sense of a pervasive frustration among academics, editors and publishers that different styles of writing and communication were often standing in the way of getting important research findings across to the practitioners who could put them into action. #AcademicGovernmentCommunicationsGap #NorthCarolinaOfficeOfStrategicPartnerships #JenniOwen #StateandLocalGovernmentResearch #StateGovernment&UniversityResearchLink #GovernmentAcademicCommunication #TipsforAcademicResearchers #StateandLocalGovernmentManagement #University&StateGovernmentRelations #OptimizingUniversityResearchforGovernmentDecisionMaking #UniversityGovernmentCollaboration #ImprovingResearchPartnerships #OptimizingGovernmentResearchNeeds #StateGovernmentResearchNeeds #ApplyingUniversityResearchToPolicy #UniversityGovernmentResearchGap #LinkingAcademicResearch&StateGovernment #B&GReport #RouteFifty #LittleGuidetoWritingforImpact #DonaldFKettl