

    When it comes to many vital public services, including police, fire and EMS, one of the primary – and sometimes the only – performance measures that people use is response time. On the surface this makes a lot of sense. For emergency services particularly, every moment can spell the difference between a minor incident and a tragedy. To the general public, fast response times are tangible evidence that they are getting good service. Just ask anyone who has waited for an emergency vehicle while a relative or friend might be having a heart attack. All that said, however, response times are often misunderstood. Sometimes, when they are overemphasized, they can actually lead to emergencies themselves. It’s our guess that most people who read about response times aren’t aware that they can be measured very differently by first responders. According to Lexipol, which provides information and technology solutions to help public safety organizations, there are three ways that response times are generally measured:

· “Turnout time – the elapsed time from when a unit is dispatched until that unit changes their status to ‘responding.’”
· “Call processing time – the elapsed time from the call being received at the (public safety answering point) to the dispatching of the first unit.”
· “Travel time – the elapsed time from when a unit begins to respond until its arrival on the scene.”

There’s a huge difference between the three – particularly from the point of view of the person who is urgently in need of help. With a shortage of EMS vehicles in many parts of the country, for example, after the 911 call is finished it can take the dispatcher valuable minutes to actually get an ambulance company to respond to the call. Once that happens, the ambulance still needs to arrive at the scene.
From the perspective of the person who made the call, the response time might be 23 minutes (from call to help), not eight minutes (for the emergency vehicle to make the trip). If response times are truly to be used as helpful performance measures, we’d argue that what really matters is the amount of time it takes from hanging up with 911 until help comes knocking on the door (or kicking it down in extreme instances). Other measures don’t really reflect the customer experience. Yet another issue with response times is that they don’t take into account the specific situation – and that can jeopardize safety for others, including the responder. If someone thinks they’ve broken an arm, for example, and calls 911, it probably doesn’t matter much if an ambulance arrives in ten minutes or twenty. But if the call is for a fire or a heart attack, then every minute counts. Yet these different scenarios are commingled in published response times. And that means that when emergency vehicles are summoned, responders who are being held accountable for their response times head to the scene as quickly as possible – traveling far faster than the speed limit, going through stop signs and so on. No surprise that in 2021, according to the National Safety Council, 198 people “died in crashes involving emergency vehicles. The majority of these deaths were occupants of non-emergency vehicles.” Our recommendation is that response times, wherever possible, should be disaggregated in such a way as to differentiate between life-and-death emergencies and those that are far less serious in nature. This would not only make the response time measures more useful – it might save innocent lives along the way.
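For readers who want to see how the three measures interact, here’s a minimal sketch in Python, using made-up timestamps (not data from any real agency), of how the travel time a department might report can diverge from the total time the caller actually experiences:

```python
from datetime import datetime

# Hypothetical timestamps for one 911 call (illustrative only; no real data)
call_received   = datetime(2024, 3, 1, 14, 0)   # caller reaches the answering point
unit_dispatched = datetime(2024, 3, 1, 14, 11)  # dispatcher finally assigns a unit
unit_responding = datetime(2024, 3, 1, 14, 15)  # unit changes status to "responding"
unit_on_scene   = datetime(2024, 3, 1, 14, 23)  # help arrives at the door

call_processing = unit_dispatched - call_received    # 11 minutes
turnout         = unit_responding - unit_dispatched  # 4 minutes
travel          = unit_on_scene - unit_responding    # 8 minutes
total           = unit_on_scene - call_received      # 23 minutes

# A department reporting only the travel leg would cite 8 minutes;
# the person waiting for help experienced all 23.
print(f"travel: {travel}, total: {total}")
```

The arithmetic is trivial, which is exactly the point: the three officially defined segments always sum to the caller’s total wait, yet only one of them is commonly published.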
#StateandLocalGovernmentManagement #StateandLocalPerformanceMeasurement #ResponseTime #PoliceResponseTimeManagement #FireResponseTimeManagement #EMSResponseTime #ResponseTimeManagement #PoliceManagement #PoliceData #FireManagement #FireDepartmentData #EmergencyManagementResponseTime #StateandLocalDataGovernance #NationalSafetyCouncil #ResponseTimePerformanceMeasures #Lexipol #PerformanceMeasurement #PerformanceManagement #B&GReport


    We remember the exciting day when we bought our first IBM PC and a printer for $7,500 back in 1981. (Yes. You read that number right.) Our exciting new computer had no hard drive, and its operating system lived on a floppy disk. Years later, after a few computer upgrades, we heard about this thing called a gigabyte. That seemed like an unimaginable amount of space – probably enough to store all the information in our world. It wasn’t so much later that we had scores and then hundreds of gigabytes on our desks. These days we’re all hot and bothered about the ways we can use AI. So, before we say anything more about the various problems that come along with advances in communications technologies, let it be clear that we’re thoroughly captivated by technology and hope we always will be. But when it comes to communicating with one another, we are frustrated by the losses we’ve suffered each time something new comes along. Back in the days when fax machines were the brave new world, lots of time was saved by sending letters instantaneously all around the world. But soon afterwards, every organization had a fax machine, with fax numbers on their business cards (those were the days when people still used business cards), and all kinds of hitches began to appear. For example, mass mailings (A free trip to the Bahamas!) started to clog up fax machines. Faxes often didn’t come through. They got ignored as they piled up in a central spot awaiting someone to bring them to their rightful recipient. But that was only the beginning of a downward spiral. E-mails are another example. Soon after we adjusted to communications arriving this way, we began to miss the old-fashioned mail system. Even more, we began to miss the old-fashioned telephone, which allowed you to decipher, through the tone of someone’s voice, whether they were sincere or sarcastic. Of course, e-mails have made the world a speedier place.
People can exchange information and documents quickly – a major plus for us as researchers. But the negatives have mounted up. For one thing, e-mails have led to an unhealthy 24/7 world. E-mails pop up in the middle of the night, and they know no such thing as weekends. For a while, we worked with someone who would send out e-mails on Sunday afternoons beginning by saying “Hope you’ve had a nice weekend,” under the assumption that recipients must be ready to get back to work on Sunday. Then there’s the lack of thought that many people put into what they send by e-mail. People in a rush can sound terse and even rude in an e-mail, even when that wasn’t their intent. Most people have learned that the use of all capital letters comes across like yelling, but that’s a lesson that bears repeating. It’s surprising how little care is taken in getting names spelled properly. Or even using the right names in the first place. Our little company is called “Barrett and Greene, Inc.” You might be surprised to know how many notes we get (and these aren’t mass mailings either) addressed to “Dear Barrett.” Of real frustration is the desire to move so quickly through seemingly endless stacks of e-mail that people never read the entirety of the notes they receive, necessitating a long exchange that could have been avoided with five minutes on the phone. Following is the kind of thing we (and we suspect you) go through regularly:

Us: “Thank you for your willingness to work with us. Can we talk on March 31, and if so, what time would be good for you?”
Them: “Yes, the 31st will work.”
Us: “Terrific. Just let us know what time will work for you and the best way to reach you.”
Them: “How’s 3:30?”
Us: “That will work fine. But did you mean Central Time or Eastern Time? And how should we reach you?”
Them: “Sorry, I should have been clearer. I meant Central Time.”
Us: “That’ll work well. But could you please send us the best way to reach you?”

Then we wait for two days and write again asking for the best way to reach the other party, only to get an automatic reply saying they’re out of the office for the rest of the week. Worse yet, from the point of view of style and tone, is the growing number of people who rely on texts, which often include acronyms that require us to search the internet for their meaning. We got used to LOL a long time ago. And we picked up on IMHO, too. But the acronyms keep coming. Not long ago we got a three-letter text that just said “NVM.” Turned out it meant “never mind,” which pertained to a prior text. And if style and tone can be lost in e-mails, they entirely disappear in texts. As far back as 1546, writer John Heywood observed that “Haste maketh waste.” Some things never change. #ChangingCommunications #ChangingTimes #EmailFrustration #EmailMiscommunication #TextingFrustration #B&GReport


    In 1799, as Napoleon was bearing down on Egypt, a stone slab was discovered that came to be called the Rosetta Stone. It bore text in three forms, including Egyptian hieroglyphics, which hadn’t been understood since before the fall of the Roman Empire. The written wisdom of the ancient world had been lost for centuries, but the stone made it decipherable. We want to be the modern-day equivalent of the Rosetta Stone (a peculiar aspiration, perhaps, for people instead of rocks). Only instead of making ancient script comprehensible in the modern age, we want to unlock the mysteries of writing done by people trained in academese for the rest of the world. Toward that end, in collaboration with one of the smartest men we know, Donald F. Kettl, author of 25 books and professor emeritus and former dean of the Maryland School of Public Policy, we’ve written a new book titled “The Little Guide to Writing for Impact” (Rowman & Littlefield, 2024). The book presents a series of guidelines that will enable readers to successfully frame a policy argument; pitch it to editors; organize the work so that the ideas have real impact; support it with data and stories; find the right publisher; and follow up after publication to ensure that the argument has enduring impact. It’s aimed at people who want to write everything from short blog posts to op-eds, commentaries and policy briefs, dissertations, articles for both the popular press and academic journals, and books. Truth in Advertising: The major point of this B&G Report is to persuade you to:

· Tell others about the book if you think they can make use of it.
· Buy the book yourself.
· Use the book in your classes if you’re teaching.

In short, this is the most self-serving B&G Report we’ve ever written. But we’re just vain enough to believe that it can be of genuine use to you, your colleagues, your students, and your friends.
Here are some comments we’ve received about the book: Donna Shalala, Interim President of The New School and former secretary of the U.S. Department of Health and Human Services, commented that the book is “A little book that will have a big impact on policy. Imagine a whole generation who can clearly communicate great ideas!" Katherine Willoughby, editor-in-chief of Public Administration Review and Golembiewski Professor of Public Administration at the University of Georgia, said that “If you want to author a classic book, have your research published in a premier academic journal, complete an award-winning dissertation, or simply write better, consult The Little Guide to Writing for Impact. This quick read is chock-full of golden nuggets that, if engaged, will boost your influence on people and policy through your writing.” Chris Morrill, the Executive Director of the Government Finance Officers Association, commented that “With notes of Strunk and White’s Elements of Style, Barrett, Greene, and Kettl have gifted us a highly practical guide for communicating in a hyper-distracted world. Even with an array of new digital tools and artificial intelligence, at core communicating involves crafting a clear, concise, and compelling message. Barrett, Greene, and Kettl gives us the tools to do so.” Finally (actually there are more, but we’re running out of space), Trevor Brown, dean of the John Glenn College of Public Affairs at The Ohio State University, wrote that "If you read it carefully and take its lessons to heart, this little book can have a big impact on the quality of your writing. Useful, readable, and above all sensible, it's pitched to scholars and policy wonks who want to reach a broad audience, but it will be helpful to anyone who puts words on paper and wants them to be read, understood, and to matter." There are two ways for you to purchase this book: Go right to where you’ll find it by clicking here.
Alternatively, for readers of our website, we're providing a 30% discount on the book. To take advantage of this offer, click here and after registering to make a purchase, enter the code: WF130. #LittleGuidetoWritingforImpact #StateandLocalManagement #StateandLocalGovernment #WritingforGovernmentImpact #WritingforPolicyImpact #AcademicImpactonPolicy #CommunicatingAcademicResearch #AcademicImpactonStateGovernment #AcademicImpactonLocalGovernment #WritingforImpact #KatherineBarrett #RichardGreene #DonaldFKettl #Rowman&Littlefield #AcademicWriting #CommunicatingWithPolicyMakers #WritingGuide #Barrett&Greene #B&GReport #NewBarrettGreeneKettlWritingGuide #UniversityofMarylandSchoolofPublicPolicy


    As we recently reported in the second of a two-part series about trust in government for Route Fifty, about 45% of Americans have a less than favorable view of the trustworthiness of local governments, according to data from Polco. That’s somewhat up from 40% in 2017. And while it’s better than the federal government, it’s still a very sorry state of affairs. In that series, we recommended several ways that states and localities can help engender greater confidence in their efforts to serve residents; the one that was probably nearest and dearest to our hearts was the use of performance management. Of course, simply measuring everything in sight isn’t going to grab the public’s attention. In fact, it’s repeatedly dismayed us that governments that have robust means of measuring quality are often skeptical about sharing their findings with the public. Some seem to believe that they’ll only be hit over the head with a statistical stick when efforts don't pay off. As Marc Holzer, a well-known academic and author of Rethinking Public Administration, says, “We have a lot of data out there and a lot of performance measures. But most citizens don’t have access to that because it’s not communicated to them. And in many cases, it’s deliberately hidden by management because they don’t want to put themselves in the line of fire.” That’s a big mistake. People mistrust what they don’t understand. They’re more inclined to have faith in an institution that is candid, even if it’s open about mistakes or when “performance is proven to be poor,” says Michael Pagano, dean emeritus of the College of Urban Planning and Public Affairs at the University of Illinois Chicago. “If voters trust that the government is providing accurate information, they will continue to trust.” There’s little question that there’s a strong journalistic urge to put bad news on the front page, while better news winds up someplace on page seven.
As The Guardian reported some years back, “people’s interest in news is much more intense when there is a perceived threat to their way of life. They care much less about what happens around them when they enjoy relative peace and/or relative prosperity.” But as true as that may be, we’d like to make the argument that even though bad news trumps good news, transparency can help cultivate trust even in times when the news isn’t good. This is particularly true at the local level, where people tend to know what’s happening around them. They know when the roads are falling apart. They know when there are homeless people wrapped in newspapers on the streets. They know when their children pretend to be sick rather than attend a dangerous school. Hiding the truth doesn’t help. What does help is telling the truth – good or bad – and telling the public what’s being done to make things better. #TrustInGovernment #StateandlocalTransparency #PublicSectorTransparency #StateandLocalManagement #StateandLocalPerformanceManagement #RouteFifty #POLCO #RethinkingPublicAdministration #MarcHolzer #MichaelPagano #ReportingStateandLocalPerformance #StateandLocalMedia #StateandLocalCommunications


    Back some years ago, when we first started to evaluate management capacity in states, counties, and cities for the now-defunct Financial World magazine, we were forced by the editors and publisher to rank the entities we were evaluating from best to worst. We hated that for many reasons. As far as we could see, the difference between number 29 and number 30 wasn’t even marginally significant, and yet these comparisons were often picked up by the local press. That made the publisher happy, as he loved to get lots of attention, but it never ceased to bother us. Subsequently, when we began our work on the Government Performance Project, we took great care to make it clear that while we were evaluating and even grading the states, we weren’t ranking them. We carefully avoided ever using that word, preferring to refer to our “evaluations.” Perhaps the GPP, which utilized the skills of many highly regarded academics and a team of journalists, didn’t stir up the same kind of media frenzy as the far-far-far less rigorous Financial World work (which was entirely done by the two of us), but the leadership at Pew and Governing were more interested in contributing solid, useful information to the world of public sector management than they were in creating a stir. In the years that have passed, it seems to us that there must be some kind of gold mine in the field of publishing 1-50 rankings of the states and similar lists of best and worst cities. And we cringe when we see many of them, for a variety of reasons.
Forbes (which seems addicted to these kinds of lists) went so far about a year ago as to publish a 50-state ranking titled “States With The Most Devoted Dog Owners.” According to the article, the ranking was based on a survey of 10,000 dog owners (200 per state) and compared them across seven metrics, including “the percentage of dog owners who broke up with a significant other who didn’t like their dog.” Apparently, “6.78% of dog owners broke up with a significant other who didn’t like their dog.” Woof. Beyond the dubious nature of this kind of metric, and the value of such a list in the first place, the idea that you can get a solid sample by asking 200 people from every state, regardless of its size, has zero merit. California’s dog owners, for instance, were represented by about .0005% of the state’s population. We don’t want to get distracted by criticizing this kind of foolishness, though. That’s like shooting fish in a barrel. We’re far more concerned about rankings that are taken somewhat more seriously. For example, though we won’t be the first or the last to complain about the value of the U.S. News rankings of universities, they’re worth mentioning here. For starters, these rankings have always seemed like a dangerous exercise to us, as we see families making decisions about college selection based on these rankings instead of the value of the program to which the high school senior is applying. Beyond that, there have been plenty of criticisms of the methodology used to make these lists – particularly the ever-shifting methodology, which makes for significant changes in the rankings themselves. As Daniel Diermeier, chancellor of Vanderbilt University, wrote in Inside Higher Ed, “Does this mean those of us who’ve fallen in the rankings are objectively worse than we were a year ago? Does it mean a university that shot up the list is suddenly orders of magnitude better? Of course not.
The shifts in rankings are largely due to the changes in methodology.” This raises two questions: Was last year’s methodology wrong, and is that why there was a change? Or is it in the interest of the publication to see changes from year to year in order to make the horse race more exciting? If all this weren’t cause enough for concern about the validity of these rankings, consider the January New York Times article that pointed out that “U.S. News sells ‘badges’ to colleges so they can promote their rankings – whether they are 1st, 10th or much, much lower.” While the college rankings are probably the best known, there is also a plethora of lists of “best places to work.” We can’t begin to enumerate all the potential flaws in these lists, but the degree to which they vary from ranking to ranking isn’t a very good signal that they should be regarded as valid. For example, one list of “the best and worst states for work-life balance” indicated that New Hampshire was the best of the lot. But then there was another ranking that claimed to demonstrate that New Hampshire was the worst state in which to be a teacher. Don’t teachers care about work-life balance? We’ll bet they do. Let’s say for the sake of argument – and we don’t believe a word of it – that both lists were accurate. Teachers reading the first one could be heading as fast as they can to New Hampshire, only to find out that in their profession they’d be better off anyplace else. Finally, let’s think a bit about the “best places to live” lists. Best for whom? These are almost always blunt instruments for answering a very complicated question. Some lists use the level of home ownership as a measure of a good place to live. But by that measure Manhattan, where high costs mean that only about 24 percent of the population own their own place, is probably not for you. Yet there are clearly other reasons some people love living in Manhattan. We did for over 35 years and cherished every minute of it.
All while paying rent. One more: Let’s say that in your opinion low taxes are a wonderful way to pick your home state. Lots of lists rank the states by that criterion, and you’d be led to believe you should head for Florida, which is famous for its exceedingly low tax burden. But do you have children in school? Then it may be important that teachers are well paid, and on that measure Florida could hardly do any worse. Take things a step further and assume that you only care about low taxes and have no interest in the children of the state – but you happen to be a member of the LGBTQ community – well, we don’t need to say any more about that. #StateandLocalManagement #StateandLocalManagementRanking #FlawedStateRanking #FlawedCityRanking #FlawedBestPlacesToLiveRanking #FlawedUniversityRanking #USNews&WorldReportCollegeRanking #ForbesRanking #GovernmentPerformanceProject #FinancialWorldStateRanking #FinancialWorldCityRanking #FinancialWorldGovernmentRanking #GoverningGradingtheStates #InsideHigherEducation #SillyStateComparisons #StateRanking #CityRanking #BestPlacestoLiveRanking #CollegeRanking #B&GReport


    Recently, a relatively high-level manager in a large southern city told us about the progress her city was making in energizing a brand-new performance management system there. She told us that this was the first time her city had ever done something like this. But wait. When we first covered performance management several decades ago, this same city was known for being a leader in exactly that kind of work. We pointed this out to our source, who was interested to hear the news. This kind of thing happens all too frequently to us, and to others who have been around the world of state and local government for a while. We’re not suggesting that new employees in a city or a state need to take a course in the history of management where they’re working. But it’s really a pity when they lose the opportunity to build on old efforts – figuring out why those succeeded or failed – and then work from there, instead of starting from scratch. We were talking about this with Marc Holzer, the well-known public administration scholar who got his PhD from the University of Michigan in 1971. His take: “These people aren’t building new things. They’re re-inventing things all the time. And they make mistakes they made before that could have been prevented.” One of our favorite quotes on this topic comes not from the world of the public sector but from Vatican City, where Pope Francis has said, "The lack of historical memory is a serious shortcoming in our society. A mentality that can only say, 'Then was then, now is now', is ultimately immature. Knowing and judging past events is the only way to build a meaningful future. Memory is necessary for growth." The risks of losing track of the past can be serious. For example, consider the way many states and cities are currently dealing with their surpluses (many of which were created by extra dollars from the federal government in recent years).
Contrary to the Government Finance Officers Association’s admonition to spend one-time revenues on one-time expenditures, we see state after state cutting taxes and increasing expenditures, which is likely to leave them up against a fiscal wall. We’ve written in the past about the overuse of the word “innovation,” in part because many new programs are described that way simply because the current administration has no notion that they’ve been tried or suggested in the past. “But,” as we wrote in early 2022, “when governments overemphasize the notion that their future lies in innovating, they can miss out on another equally important concept: that there are lots of good ideas for successful government that aren’t brand new – but simply need to be implemented.” #StateandLocalGovernmentPerformanceMeasurement #PerformanceMeasurement #StateandLocalGovernmentPerformanceManagement #PerformanceManagement #StateandLocalGovernmentInnovation #PublicSectorHistory #ForgottenCityHistory #HistoricalKnowledge #StateandLocalInnovation #GovernmentFinanceOfficersAssociation #GFOA #MarcHolzer #HistoricalMemory #MissingHistory #StateandLocalGovernmentBudgeting #StateandLocalSurplus


    Typically, when we hear about a city or state choosing not to gather more potentially useful data because it’s too time consuming, we push back. But there are exceptions. A notable one is an element of New York City legislation called the “How Many Stops Act,” which would require New York police officers to report on every single police street stop and investigative encounter, including demographic information about the person stopped and the reason for the encounter. We agree that officers should be held accountable and that strong actions are needed to prevent racial profiling. But in this case, we’d argue, the legislation goes a few steps too far. It would require, for example, that following a crime, police officers fill out a form to record every time they interact with a witness or a possible witness. Let’s say, for example, a liquor store was robbed and the perpetrator ran out into a busy city street afterwards. When police arrive at the crime scene and ask dozens of people on the street whether they saw anyone running out of the store, they’d have to do the appropriate time-consuming paperwork. Without disputing the goals of gathering this information, the question is this: Regardless of the validity of a cause, aren’t there instances in which gathering mountains of data is potentially counterproductive? On January 19th, New York City Mayor Eric Adams vetoed the bill, tweeting out the following: “You know my story. I've been the victim of police abuse. And I've been a police officer. But while our administration supports efforts to make law enforcement more transparent, more just, and more accountable, this bill would take officers away from policing our streets and engaging with the community. Today, I vetoed the ‘How Many Stops Act’ because it will make our city less safe.” In the late afternoon of Tuesday, January 30, the City Council overrode the mayor’s veto, and the bill will now become law.
Jim Quinn, who was executive district attorney in the Queens District Attorney’s office and now writes for the New York Post, did a little back-of-the-envelope math in that newspaper: “There are about 30,000 uniformed police officers, detectives and sergeants. If just half of them fill out only one form a day, and it takes one minute to complete, that is 15,000 minutes — or 250 hours of police time wasted each day.” And that’s just one form a day per person! The word “wasted” is a little too strong for our tastes, as we know there are instances in which this information would be valuable. From our perspective, though, this presents the kind of question that’s easier to ask than to answer: When it comes to requiring that more data be gathered, will the benefits outweigh the costs? One element involved in considering this question (though not one that necessarily applies to the police data in New York City) is whether the managers or elected officials in an organization are really going to use the data that’s been painstakingly gathered. These are busy people, and many of their computers are jammed with gigabytes of spreadsheets and hundreds of data points. When data can be gathered from information that’s automatically being generated (like time sheets or budgets), this is less of a concern than when the data requires public sector staff time to assemble. We’d argue that this consideration should be central in the minds of people who determine what data-gathering requirements are imposed on city and state employees.
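Quinn’s back-of-the-envelope arithmetic is easy to check. Here’s a tiny sketch, using only the assumptions he stated in the Post (not independent data):

```python
# Quinn's stated assumptions from the New York Post piece
officers = 30_000        # uniformed officers, detectives and sergeants
share_filing = 0.5       # "if just half of them fill out only one form a day"
forms_per_officer = 1    # one form per filing officer per day
minutes_per_form = 1     # one minute to complete each form

minutes_per_day = officers * share_filing * forms_per_officer * minutes_per_form
hours_per_day = minutes_per_day / 60

# 15,000 minutes = 250 hours per day, matching Quinn's figures
print(f"{minutes_per_day:,.0f} minutes = {hours_per_day:,.0f} hours per day")
```

The math checks out; whether those 250 hours are “wasted” is, as noted above, the real question.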
#StateandLocalGovernmentData #StateandLocalGovernmentDataManagement #CityData #CityDataManagement #CityDataCollection #PoliceData #NewYorkCityPoliceData #NYPD #NYPDDataCollection #HowManyStopsAct #NewYorkCityPolice #StateandLocalGovernmentPerformance #StateandLocalGovernmentPerformanceMeasurement #PoliceDataCollection #DataCollectionBurden #MayorEricAdams #NewYorkCityCouncil #DataBurden #UnintendedConsequences #RacialProfiling #B&GReport #CostBenefitAnalysis #DataCostBenefitAnalysis


    Back in 2020, then-President Donald Trump proclaimed that “The murder rate in Baltimore and Detroit is higher than El Salvador, Guatemala or even Afghanistan.” That statement was misleading and part of it was outright false, but even beyond that, he left out the fact that reported homicides in Detroit were near 50-year lows. Currently, Detroit has the third highest homicide rate in the country, according to World Population Review, which is still an unfortunate state of affairs. But look at the trends and a new picture emerges. According to the city’s data, it “finished 2023 with 252 homicides, the fewest recorded since 1966.” Most experts would agree – and Detroit is a perfect illustration – that any single point of data can be misleading if it’s not put into a broader framework, often with the use of trend lines. As Ron Holifield, CEO of Strategic Government Resources, told us, “When you’re just looking at a single piece of data without context, it’s like looking through a peephole without seeing the entire room. Under the worst of circumstances that leads to a false and misleading perception.” It’s certainly easy for reporters to take a single point of data from a recent year and turn it into a headline (either positive or negative). But historical perspective changes a single piece of information into something that’s genuinely informative. Says Liz Steward, the vice president of marketing and research at Envisio, a strategy and performance management software company: “Only sharing point-in-time data can be worse than providing no data at all, because showing an individual number can minimize a very big problem or exaggerate one.” Sometimes, it’s in the interest of a reporter or an advocacy group to avoid looking further than a single digit and to use it as representative of a full story.
“If you see a number that supports your argument it might be easier to just take it, without digging deeper,” according to Sam Gallaher, head of data science at Third Line, an audit and financial management software company. “It’s definitely a challenge in doing research and being open to numbers that challenge your hypothesis. It takes some real effort to get past that.” On the flip side, digging a little deeper into statistical history can turn a bad news story into a good one. Entities that understand this and make a point of it can help the press to get the story right. We developed a deep understanding of this in the years that preceded our work on the Government Performance Project. As we’ve recalled in this space, “In the early 1990s Alabama’s leaders took a very poor grade in our evaluations of state government management capacity for the long-defunct Financial World magazine and compared it to our prior – and even worse – evaluation. The state got some very positive reports in the local press by pointing to the improvement, with promises of more to come.” It's worth noting, however, that simply showing information one or two years back can have the perverse effect of misleading people when the most recent historical data was itself abnormal. For example, comparing data from the last year or so to data from the depths of the pandemic can lead to misunderstandings. As a result, many data-wise organizations are comparing current data to that which was generated pre-pandemic. When the Pew Charitable Trusts examined employment rates last summer, for instance, it compared first quarter 2023 numbers with those from early 2020. As Mike Maciag, a policy researcher and former data analyst, told us, “I’m sure you guys have come across dashboards, where they show information compared to the prior year, which is better than nothing. 
But snapshots are snapshots, and you’re comparing things to a point in time and that can be misleading when the prior year was abnormal.” While space limitations may prevent many sources of data from featuring a table that shows ten years of prior data, there is an alternative that can help: Compare current year data to a five- or ten-year average. While there’s no control over how the press or social media outlets use data, state and local governments can help to keep the public better informed by making it easier for others to get a reasonable understanding of its meaning. “Reporters might, if they have time, go back and look at trend lines,” says Maciag. “But a lot of times that’s difficult to find.” Cities, counties and states that produce well-wrought publicly available dashboards can help overcome that challenge. Take Corona, a city of about 166,000 in Riverside County, California. Its dashboard shows point-in-time data for a number of key performance indicators, but very clearly directs users to historical data. For example, average response time to a fire there in the most recent quarter was 4 minutes and 53 seconds. Was that good? Bad? Indifferent? Taken on its own, this number lacks meaning. But at the click of a button you can see that eight quarters ago it was 5 minutes and 10 seconds, and the trend line shows that though there have been ups and downs, the fire department has been bringing that number down steadily. 
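The alternative suggested above, comparing the current figure to a multi-year average rather than to a single (possibly abnormal) prior year, can be sketched in a few lines of Python. This is only an illustration: the function name and the response-time figures below are invented, not drawn from any dashboard.

```python
# Compare the latest annual value to the average of the preceding years,
# rather than to a single prior year that may have been abnormal.
def vs_multiyear_average(history, window=5):
    """history: list of annual values, oldest first.
    Returns (latest value, average of the `window` years before it)."""
    latest = history[-1]
    baseline = history[-1 - window:-1]  # the `window` years preceding the latest
    avg = sum(baseline) / len(baseline)
    return latest, avg

# Hypothetical fire-response times in seconds; the last value is the current year.
times = [310, 305, 298, 360, 340, 300, 293]
latest, avg = vs_multiyear_average(times)
print(f"current: {latest}s, 5-yr avg: {avg:.0f}s, change: {latest - avg:+.0f}s")
# prints: current: 293s, 5-yr avg: 321s, change: -28s
```

Note how the one abnormal year (360 seconds) is smoothed out by the average instead of dominating a year-over-year comparison.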
    In the final analysis, Nate Silver, author of “The Signal and the Noise: Why So Many Predictions Fail But Some Don’t,” had it just right when he wrote “Data is useless without context.” #StateandLocalGovernmentData #StateandLocalPerformanceMeasurement #CityData #DetroitHomicideRate #StrategicGovernmentResources #RonHolifield #Envisio #CityCrimeData #CityTrendData #DataTrends #DataContext #LyingWithStatistics #CityDataWithoutContext #PublicSectorDashboards #MisleadingData #MisleadingCityData #LizSteward #ThirdLine #PewCharitableTrusts #MikeMaciag #ElizabethSteward #FinancialWorldMagazine #GovernmentPerformanceProject #StateGovernmentEvaluations #DataSnapshotsvsTrendLines #CityofCoronaCA #RiversideCounty #DashboardBestPractice #NateSilver #TheSignalandtheNoise #CityGovernmentPressCoverage #StateandLocalGovernmentManagement


    We’ve just been catching up on one of our favorite podcasts, Freakonomics Radio, and came across a wonderful conversation with Samuel West, the founder and curator of the Museum of Failure, a traveling pop-up museum with more than 150 failed products on display, including the unlamented fat-free Pringles potato chips of 1996, which had the unfortunate side effect of causing diarrhea. The conversation struck a particularly resonant chord when West said that “Maybe (it) feels better to learn from success. But I think we can learn much more from failure. It’s a natural way of learning. That’s how we learn to eat, how to walk, how to do anything is through a repeated trial and error.” He went on to say that the “more society becomes focused on success the more failure gets stigmatized.” We agree with his conclusions and think they apply in important ways to state and local government policy and management. There’s an ongoing quest to find “best practices,” a phrase that we described as making us feel jittery in a blog post we wrote for the IBM Center for the Business of Government a few years ago. But when researchers, advisors, analysts, elected leaders and writers only look for success stories, they miss out on the benefits of learning from the efforts of those that didn’t succeed. Given the number of failed efforts that riddle the past, we’ll steal some words from George Santayana, who is said to have coined the phrase, “those who do not learn history are doomed to repeat it.” One example that immediately comes to mind was the de-institutionalization of the mentally ill back in the 1960s and 1970s. The idea was to get men and women out of (frequently pretty awful) psychiatric hospitals and put them into community care programs where they could be treated with greater success and kindness. But though many institutions were either shuttered or shrunk, the money never really came through for the alternative. 
One of the results was the homelessness crisis we face today. As we point out in a column we recently wrote for Route Fifty, billions of dollars are now going back to creating more beds where psychiatric patients can receive help when it's needed. But it took a long while for the lesson of that failure to be absorbed and to be taken into account in making new plans. One of the few places in government in which failures are uncovered, considered, and analyzed is in the work of performance auditors. As Jenny Wong, Berkeley's city auditor, wrote to us in an e-mail, “Audit findings are essentially identifying a gap in a service operation, internal control, etc. In fact, one of the four elements of a finding is assessing the impact from that gap (or you can say failure). That is at the heart of why something matters --- the impact.” It's important to note that failures don’t need to be total disasters to provide a learning experience. Consider so-called near-miss analysis, which is widely used by airlines when an accident has nearly – but not actually – occurred. There’s lots to be learned when such incidents are reported, to avoid a life-taking disaster in the future. As Shayne Kavanagh, senior manager of research for the Government Finance Officers Association, pointed out to us, “catastrophic failures are relatively rare, but there might be lessons from near misses that prevent future catastrophic failures.” Another reason we believe that the de-stigmatization of failures is so important: When people live in fear of falling short of the mark, they’re likely to be reluctant to take risks. A favorite quote of ours comes from well-known engineer, physician and entrepreneur Peter Diamandis: “If someone is always to blame, if every time something goes wrong someone has to be punished, people quickly stop taking risks. Without risks, there can't be breakthroughs.” Here’s an idea we have for the future of this website. 
If we can get the funding to support such an enormous undertaking, we want to open a “Center for Failed Practices,” which would provide a repository of examples of ideas that, once born, failed to thrive – and the lessons communities should learn from them. #StateandLocalGovernmentManagement #StateandLocalPerformanceManagement #StateandLocalPerformanceAudit #StateandLocalProgramEvaluation #CityofBerkeleyAuditor #GovernmentFinanceOfficersAssociation #B&GReport #CenterforFailedPractices #ShayneKavanagh #AuditorJennyWong #LearningFromFailure #Deinstitutionalization #FreakonomicsRadio #MuseumofFailure #DedicatedtoStateandLocalGovernment #RouteFifty #IBMCenterfortheBusinessofGovernment


    This is the time of year when the days are at their shortest, the thermometer may dip below freezing, the stores are crowded with shoppers and publications are full of predictions for 2024. We thought we’d join the pack and offer up seven forecasts for the world of state and local management in the months to come. If you have any to add, please send them our way.
1)    Whatever states and local governments choose to do about remote work – including hybrid work – there’s going to be growing pushback from at least part of the staff. Settle on a requirement for three days in the office and people will want two. Cut back to two days and there’ll be complaints about the lack of socialization in an office that’s nearly empty. And whatever days you pick for staffers to come in, they’ll be inconvenient for many. Figuring out this riddle is going to be a huge task for HR offices from coast to coast.
2)    As the American Rescue Plan Act money comes closer to running out, states that decided to cut back on taxes are going to begin to regret their actions – particularly if citizens get wind of the idea that services may be diminished in months or years to come.
3)    Though AI is going to keep advancing, and no one will know the outcomes for some time, the good news about its capacities is going to begin to outweigh the terrifying specters of the way it’s going to take over planet Earth like some creature from outer space.
4)    The number of “chief officer” positions will continue to grow, following on the trend to appoint “chief sustainability officers” and “chief heat officers.” Many won’t be given enough money or staff to do their jobs.
5)    We don’t dabble in politics, but this felt worth saying: Whatever the pollsters say, most are going to be wrong.
6)    Ransomware attacks – already at peak levels – are going to accelerate even more as the bad guys get richer and cities (especially small ones) still won’t have sufficient resources to stop them.
7)    (Here’s an easy one) There’s going to be more than one natural disaster someplace, which will be followed by a resounding chorus of voices asking why the entity wasn’t prepared.


    At the end of November, a survey was released that investigated how local government finance leaders feel about their current budgeting practices and their readiness to embrace modernized approaches. It was conducted by Polco, a community engagement and civic analytics govtech company, in partnership with the Government Finance Officers Association, with collaboration from Envisio, a maker of government planning software, and Euna Solutions, a creator of budgeting software. The 285 respondents were either directly responsible for the budget (77%), part of the budget team (15%) or staff members who oversaw the budget department (12%). Respondents were asked to rate their current budget methodology based on 15 budget quality characteristics, with ratings ranging from 100 (excellent) to 0 (poor). One cautionary note from the report, which was titled “Rethinking Budgeting: Results from the Local Government Budget Survey”: “These results come from higher performing organizations based on surveying GFOA’s distinguished budget award winners. Results from local government budgeting in general would likely show less innovative practices.” While the survey found that local governments are inclined to take into account the priorities of elected officials and staff, when it comes to listening to residents, the results were somewhat bleaker. As Michelle Kobayashi, principal research strategist with Polco, told us in a conversation last week, “Traditionally, it's been difficult to incorporate stakeholder opinion in the budget process and this study confirmed that.” A few of the findings that buttress that point:
· When respondents were asked “how would you rate your current budget methodology/process on incorporating residents,” only 41 percent indicated it was excellent or good. 
· Insofar as allowing residents to help in the decision-making process, taking into account important tradeoffs, the numbers were even worse, with only 21 percent saying that was excellent or good.
· In terms of building trust with residents, 48 percent said their entities were doing a good or excellent job.
The study also examined the degree to which decisions were data driven and focused on results and outcomes. When asked “how would you rate your current budget methodology/process insofar as focusing on the outcomes or results delivered by government activity,” 56 percent said it was excellent or good. When respondents were asked how well they integrated with their organization-wide strategic plan, 61 percent said the effort was excellent or good. As Kobayashi told us, “Focusing on inputs rather than outcomes makes the budget less interesting to external stakeholders, and also makes collaboration a greater challenge. . . Again, this area did not score well in the assessment and many organizations would benefit by incorporating a stronger focus on results in their processes. This is a really good way to make budgets more actionable.” A third major takeaway from the survey, reported Kobayashi, was that there’s a lack of transparency in the budget process and document itself. “That’s another area where we scored very low,” she said. “There are mandates on public information sharing around the budget – host a meeting and let residents respond – but the organizations were weaker at disseminating information in a way that helps constituents understand how you’re spending the money and why you’re spending the money.” Armed with the information gathered in the survey, the Government Finance Officers Association will release a self-guided tool to help individual entities assess how well they’re doing in the various areas covered by the study. 
The tool was created by the same group that launched the survey, with particular support from Kobayashi and her data science team. The goal would be to use it as a guide for a gathering of staff, elected officials, residents and other stakeholders to talk about how well the budget process serves them in a variety of vital areas. Kobayashi: “They can use the tool to assess their current budget status and brainstorm things like how well we are doing in terms of, say, welcoming residents into this process and what can we do better? At the end of this assessment, the group would then decide if there were areas where they could move forward. It might be focusing on outcomes, participatory budgeting or readiness to train staff on new technology. And then GFOA could provide them resources to help them move forward in these areas.” The ten dimensions of the readiness assessment include:
· Alignment with strategic plan and/or current organizational priorities
· A results/outcome orientation
· Collaboration across departments
· Collaboration with elected officials
· Constituent engagement
· Transparency and opportunities to build trust
· Change management
· Empowered budget staff
· Dedication to human capital/staff training
· The use of integrated, agile technology
Naturally this process won’t change things overnight. “You find two or three top areas you can work on,” says Kobayashi, “and not overwhelm people with the process of change, so that overall, you’re evolving over time.”


    A few weeks ago, we wrote an item for this website about the executive orders that were pouring out of the offices of the nine most recently elected governors. One of our findings was that “New task forces, study groups and advisory bodies were a dominant theme.” That discovery led us to think about the many task forces we’ve seen established over the course of years. Some have certainly led to the kind of information necessary to implement a new policy. But all too many have been the governmental equivalent of treading water, exhausting time and resources while making no forward progress. As John Bartle, dean of the College of Public Affairs and Community Service at the University of Nebraska at Omaha, wrote to us when we reached out to him for his thoughts, “From what I have seen in state government (not universities), some task forces are created as a way to make it appear as if there is a response to a political demand, with no real intention of making any progress.” We agree with Bartle’s comment, and take note that he’s only referring to “some” task forces. We're aware of many cases in which task forces are established with only the best of intentions. As Mark Funkhouser, President of Funkhouser & Associates and former Mayor and Auditor of Kansas City, Missouri, told us, “Task forces can be useful when there is a policy question that must be answered and is outside or beyond the purview of the normal policy making process. Task forces work best when they are staffed by professionals with deep expertise in the area considered and those staff are empowered to bring well developed solutions to the problems being considered.” One task force currently operating is the Governor’s Commission on the Future of Health Care in New York State, a hugely ambitious undertaking. We contacted Patrick Orecki, director of State Studies at the Citizens Budget Commission, to see what he had to say about it, and here’s what he told us: “We certainly think the task force is a good step. 
We've been calling for a permanent such body put in law, along with vastly improved data reporting for Medicaid. The trouble with the task force, currently, is that its mandate is largely undefined, and it is an entirely administrative function. Between those two facts, it seems like it could fall short and just be window dressing like other task forces before it.” So what, then, makes for a successful task force that leads a promising policy on a clear path toward implementation? Tim Maniccia, Chief Fiscal Officer and Treasurer at the Hudson River-Black River Regulating District, had some rules of thumb for us. He believes that a successful task force should:
· Have a clear desired outcome and measures of success;
· Secure commitment from organizers to go where the evidence leads;
· Appoint a small number of knowledgeable, dedicated people;
· Be sufficiently resourced and supported;
· Be time limited, with opportunity to extend if preliminary findings yield other important questions that can be answered.
Without most of these elements in place, task forces can follow the path described in a 2019 Fast Company article headlined “The First Effort to Regulate AI Was a Spectacular Failure.” It described the efforts of the New York City Automated Decisions Task Force, and explained that “Excitingly, this was the first task force in the country to comprehensively analyze the impact of artificial intelligence on government. Looking at everything from predictive policing, to school assignments, to trash pickup, the people in this room were going to decide what role AI should play and what safeguards we should have.
“But that’s not what happened.
“Flash forward 18 months and the end of the process couldn’t be more dissimilar from its start. The nervous energy had been replaced with exhaustion. 
Our optimism that we’d be able to provide an outline for the ways that the New York City government should be using automated decision systems gave way to a fatalistic belief that we may not be able to tackle a problem this big after all.” This was certainly an extreme case, but it’s a path that any significant task force can take unless it’s carefully planned for, established and utilized. #GovernorExecutiveOrders #StateGovernmentTaskForce #CityGovernmentTaskForce #StateGovernmentStudyGroup #Funkhouser&Associates #CollegeofPublicAffairsandCommunityServiceUofNebraska #Governor'sCommissionontheFutureofHealthCare #CitizenBudgetCommission #AITaskForce #NewYorkCityAutomatedDecisionsTaskForce #TaskForceDisillusionment #StateandLocalGovernmentManagement #ArtificialIntelligenceRegulation #ArtificialIntelligenceinStateandLocalGovernment #B&GReport

bottom of page