
- HOW TO IMPROVE GOVERNMENT COVERAGE
Over the years, we’ve written a great deal about ways in which government officials can improve their relationship with the press. We know there’s a lot of frustration in state and local governments about the coverage (or lack of coverage) they receive. In our conversations with government officials, we often ask how members of the press could do a better job of covering their governments. So, based on the decades we’ve spent writing about state and local management and policy, we’ve collected a handful of ideas we’d like to share. (If you have other thoughts we should include, please send them in and we’ll add to the list in a subsequent column.)

Six tips to help journalists improve state and local government coverage:

1. Don’t expect rapid change when new policies or practices are introduced. Articles that take governments to task for the absence of results shortly after a new policy is put into place can miss the fact that it takes time to implement almost any new policy. If the results aren’t immediate, that doesn’t mean the policy is a failure.

2. Social policy issues are complex, and despite the publicly absolutist stances taken in political discussions, government practices and policies are rarely all bad or all good. They usually have some elements that are working well and others that cause problems. A flaw, or even a bunch of flaws, in a new policy may not signal the need for the policy to be abandoned. It’s like the proverbial dike with a hole: the solution isn’t to tear down the dike, but to securely plug up the opening.

3. Government officials who are trained to deal with the press (actually, just about anyone who is trained to deal with the press) have learned to skirt the questions asked so they can answer entirely different questions of their choosing. At various times we’ve had media training, and this is exactly what we’ve been told: “Don’t worry about the questions you’re asked. Just answer the question you wanted to be asked.” We try hard not to let government officials get away with this frustrating bait and switch.

4. Tamp down the cynicism. All journalists covering government have been lied to at various points in their careers. But in our experience (and we’ve had thousands of interviews covering every state and large city and county in the country), we’ve found that most government employees are diligent, hardworking and inclined to be as candid as they’re permitted to be.

5. Just because a policy or new program is passed by the legislature and signed by the governor doesn’t mean it’s actually going to happen. If a bill isn’t funded, its passage may be only symbolic. We wish more journalists would follow up on important new policies to see what’s actually happened after some legislator ballyhoos this grand accomplishment.

6. Most ideas in government have been tried before. Just check out our slide show on transparency and you’ll see all the new ideas about budget transparency that were on exhibit in 1908. Of course, that doesn’t mean there’s anything wrong with trying them again. “Whatever government tried before in performance management can be tried again, with the new technologies available,” John Kamensky, emeritus senior fellow with the IBM Center for the Business of Government, told us some years ago. It’s truer than ever now.
#StateandLocalGovernmentManagement #StateandLocalGovernmentPerformance #GovernmentPressRelations #GovernmentPressCoverage #CityPressCoverage #StateandLocalMediaRelations #StateandLocalMedia #AdviceForJournalistsCoveringStateandLocalGovernment #GovernmentTipSheetForJournalists #StateandLocalPublicAdministration #CityGovernment #CityPublicAdministration #CityPerformance #CityGovernmentManagement #CityMediaCoverage #StateMediaCoverage #CountyMediaCoverage #StateandLocalPressRelations #StateandLocalGovernmentCommunications #BudgetTransparencyHistory #Early20thCenturyBudgetExhibits #StateandLocalGovernmentBudgeting #StateandLocalGovernmentTransparency #BandGReport #BarrettandGreeneInc
- WHINING OVER A GLASS OF WHITE WINE
Generally speaking, we see the positives in the world in which we live. We’re grateful for our work, our family, and one another. Right now, we’re particularly grateful that the niche we’ve carved out for our lives’ work is focused on state and local government management and performance and has excluded writing about politics or the federal government. But frequently enough, over the dinner table, we find ourselves venting about the issues that have come up in our day’s work of researching, analyzing and writing about states, counties and cities. So, welcome to our dinner table. Pretend that you’re enjoying a nice glass of white wine, and join us as we whine about six (admittedly trivial) things. We’ll bet that you have similar issues, too. Here goes:

1) We don’t understand organizations that have dropped their full names in favor of acronyms. We’re not talking about something like “NASBO,” which is just shorthand for the National Association of State Budget Officers, and pretty much everyone in state and local finance knows that. Rather, we’re talking about groups that have pretty much obliterated any use of their full name. Take, for example, KFF, which used to be called the Kaiser Family Foundation. For reasons that probably made sense to someone there, it now goes only by its acronym. And when we write about the great work the organization does, we feel obliged to put its original name in parentheses so people will know what we’re writing about.

2) On a regular basis, we find ourselves carping about people with whom we work who don’t understand the real meaning of a deadline. The origin of the word goes back to the Civil War, when a deadline wasn’t a marker in time but a physical boundary in prisons. If someone sauntered over the line, they would be shot. Obviously, that is an archaic use, but we believe the word still has a meaning, which to us is the point in time at which something needs to be done. Not to attack some of our academic friends, but in that world it seems to us that deadlines are aspirational, unlike the way they’re used in the old-fashioned journalistic world in which we learned our craft.

3) Very, very last-minute cancellations are another ongoing frustration. Like almost everyone we know, we tend to work long days and schedule ourselves from hour to hour. So, when we’ve set up a Zoom interview, and while we’re sitting staring at ourselves on our screens, we get an e-mail or a text apologizing for the need to postpone. We can understand that sometimes public officials’ calendars change from moment to moment (when the mayor calls, everything else takes second place). All we’re really asking is that they postpone at the time we’ve arranged, not fifteen minutes later.

4) Frequently, when we send out an e-mail, we include words like “please confirm that you’ve received this” in the subject line. It’s amazing how many people don’t pay any attention to this seemingly simple request. And then, when we follow up a day or two later, apologizing for writing multiple times with the same query, it turns out that they actually got the original note; but since they couldn’t respond in full right away, they just ignored our plea to confirm that they got it.

5) A remarkable number of people we interview are quick to provide data to demonstrate a point. But when we try to understand the number or percentage that’s been cited, we realize that it’s not really data but more like a rough guess, or the repetition of a statistic that’s been echoed with no reason to know it’s true. We wish that people would be more cautious about repeating a number they once heard (or wish to be true) if they don’t know it’s backed by evidence.

6) We’ve written eight books over the years. And with remarkable frequency someone will tell us, “I saw your book.” We never know quite what to make of this. While we yearn for praise, the simple act of seeing a book doesn’t get us very far, and it makes us hesitant to say “thank you,” because we’re not clear what we’re thanking them for.

#StateandLocalGovernmentFrustrations #MissedDeadlines #FrustratingOrganizationalAcronyms #GovernmentAcronyms #UbiquitousPublicSectorAcronyms #KFF #UnverifiedData #OnlineMeetingLament #WildGuessesMaskedAsData #AuthorFrustrations #BarrettandGreeneFrustrations #BandGReport #BarrettandGreeneInc #DedicatedToStateandLocalGovernment #StateandLocalGovernmentManagement #StateandLocalGovernmentPerformance
- IN DEFENSE OF OLD TECH
Recently, we were involved in a project that required reaching out to dozens of officials in cities across the United States. As you might expect, we did this through an e-mail volley, carefully sending out the e-mails one at a time so they wouldn’t appear to be potential spam. But the responses were far fewer than we had anticipated. So we sent out another volley and got a few more. Our problem was that we had little idea whether the e-mails were reaching the desired recipients.

Our solution: We set about the time-consuming process of finding phone numbers and calling directly (and, by the way, given the concern about privacy, it’s not the easiest thing in the world to find phone numbers on websites). We reached a handful of people directly, and for others we left messages. Within a couple of days we had made contact with many of the people we had been chasing electronically.

In a day when e-mails and text messages seem to rule the world of interpersonal contact, this process reminded us of a far simpler time, when everybody used old-fashioned telephones to make contact. It certainly was more time-consuming than sending out e-mails, but it was also often more effective. For one thing, phone calls weren’t caught in spam filters. Beyond that, when an assistant, secretary or the actual source answered the phone, you could be pretty sure that you had reached the right office.

This isn’t the only instance in which we miss an earlier time, when people kept their lists of contacts on Rolodexes full of phone numbers. Another is linked to the dependence on Zoom and other video platforms for making personal contact. We always give our public sector sources the option of meeting online or on the phone, and they almost always select the former. We get it, kind of. It’s pleasant seeing a person’s face, and there’s a sense that people get to know one another better when they can see each other on their computer screens.

But how many times have you arranged a video meeting only to find yourself frustrated? If your experience is anything like ours, this happens quite a bit. For example, we’ve experienced repeated online meetings in which, for reasons nobody understands, one party or the other can’t be heard or seen. Sometimes people have just left themselves muted, or for other reasons the audio doesn’t work, or the person freezes entirely in mid-sentence. Sometimes everybody just gives up and reaches for a phone. The transition often takes time, and when we’re speaking with public officials who rarely have a minute to lose, it just abbreviates the total conversation.

Then there are the instances in which one of the parties is running a little late. If people were using telephones, the party waiting for the call could get some work done, send out e-mails, or do paperwork. But when you’re tied to a screen, awaiting the arrival of another person, pretty much all you can do effectively is stare at your own image while waiting for somebody else to pop up. When we’ve been hanging around online for more than five or ten minutes, we send an e-mail asking (far more politely than we feel) if the appointment is still on.

The answer to that question leads us to another way we think new technologies can stand in the way of progress. With great frequency, the answer to our outreach is an apology that puts the blame on the person’s lost e-mail, spam folder or digital calendar. We know this dates us, but we keep our calendar in a long, long Microsoft Word file that we double-check regularly. People tell us we’re inefficient, but we’re hanging onto something that works. And we don’t miss appointments. (Hardly ever, anyhow.)

We can’t honestly say that the use of e-mail, texts and online video conversations hasn’t been a good thing overall.
But, in the words of Joni Mitchell’s beautiful song “Both Sides Now”: “Well, something’s lost but something’s gained.” #StateandLocalSurveyResearch #NewVersusOldCommunicationTools #StateandLocalGovernmentInterpersonalContact #OnlineMeetingLament #TelephoneNostalgia #BandGReport #BarrettandGreeneInc #DedicatedToStateandLocalGovernment #StateandLocalGovernmentManagement #StateandLocalGovernmentPerformance
- THE DEATH OF HISTORY?
Last week, in a B&G Report titled “Information Can Be a Buried Treasure,” we wrote about our concerns about the use of artificial intelligence as the be-all and end-all source of information for a growing cadre of researchers. Our fundamental point was that AI can only draw upon information that’s been digitized and made available on the internet, leaving out potentially valuable information whose only flaw is that it was never scanned.

One personal example: We have many years of completed survey instruments from the Government Performance Project, which was supported by the Pew Charitable Trusts for over a decade until 2010. These surveys are unique repositories of information from all fifty states about areas like human resources, budgeting and performance management. A treasure trove, we think, but one that is currently buried in boxes in an office we rent. Someday we’ll get around to having them scanned and made available to researchers, but for now they might as well not exist.

We were chatting about this general idea of valuable historical information that’s no longer accessible with a public official at the recent American Society for Public Administration annual conference in Washington, D.C., and he called this phenomenon “The Death of History.” Pretty strong words, but the more we reflected, the more we began to realize that they’re right on target. The fact is that as time passes, we’ve become increasingly aware that many of the leaders in state and local government come up with ideas they think are brand new without digging into the files, or talking to the people who preceded them, to see whether their notions are really new or just a retread of something that’s been tried before.

Exhibit A: A few years ago we were interviewing a leader in performance management in a large southern city.
The individual told us that the city had embarked on a new and innovative performance management effort that hadn’t been tried before, one relying on outcomes, or results, and not just outputs. At the risk of seeming overly snarky (which maybe we were), we pointed out that this city was well known about 25 years ago for something not very different from what was being tried today. We had written about it back then. Honestly, we didn’t know why the previous initiative hadn’t lasted, but it seemed abundantly clear that this was information that could be very helpful now.

The cachet of describing something as an “innovation” helps feed this unfortunate fire. We speculate that it’s not easy selling the legislature on a program that existed in the past but didn’t last. As we wrote in this space a few months ago, “The very word innovation (or its cousins, ‘innovate’ and ‘innovative’) is used by elected officials as a kind of magic wand that can create better tomorrows.”

This phenomenon is nothing new. We recall several conversations with Harry Hatry, one of the great gurus of performance management, after the book “Reinventing Government” came out in 1992. It was a huge best seller (and allegedly sat on a shelf next to President Clinton’s bed), and it popularized a number of ideas, including the benefits of looking at outcomes when trying to evaluate government. Though the book had great value in popularizing concepts that continue to make sense today, Hatry was annoyed at the general sense that this was the first time such notions had been written about.
He had been preaching this gospel for years. As Shelley Metzenbaum, an American nonprofit executive, academic, and former government official specializing in public sector performance management, told us, “Harry called for increased attention to outcome measurement and management as early as the 1970s.”

We don’t want to be too harsh on the researchers, practitioners and academics who miss out on the important history that could help inform their current work, and we have to give credit, as well, to internet resources like Google Scholar, with its vast repository of academic writing that records contemporary government history. But with the speed of change today and the increasing reliance on summarized answers from AI, we can’t help worrying about what is lost. #StateandLocalGovernmentManagement #StateandLocalGovernmentHistory #GovernmentPerformanceProject #StateandLocalPerformanceManagement #StateandLocalPerformanceManagementHistory #StateandLocalGovernmentBudgeting #StateandLocalBudgetingHistory #StateandLocalHumanResourcesHistory #StateandLocalPublicAdministration #StateandLocalGovernmentInnovation #AmericanSocietyForPublicAdministration #ASPA #PewCharitableTrusts #ReinventingGovernment #HarryHatry #ShelleyMetzenbaum #BandGReport #BarrettandGreeneInc
- DATA GARBAGE IN – AI GARBAGE OUT
For years, we’ve been writing about the importance of the good data upon which states, counties and cities can make good decisions. But as a growing number of practitioners and academics rely on artificial intelligence as a source for all the information in the world (when we were children, the equivalent was the Encyclopedia Britannica), care for the accuracy and completeness of data rises even more in importance.

The well-written, if sometimes pedantic, prose that AI can produce can create the impression that the information it spits out will be accurate. That’s not the case. Consider, for example, a February BBC report that examined news summaries generated by artificial intelligence (AI) engines including OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity AI. Here are a few of the report’s alarming findings: “51% of all AI answers to questions about the news were judged to have significant issues of some form. 19% of AI answers which cited BBC content introduced factual errors – incorrect factual statements, numbers and dates.” “13% of the quotes sourced from BBC articles were either altered from the original source or not present in the article cited.”

The basic problem is this: As a growing number of people rely on AI to get the information they need, their attention is being diverted from the basics of data management. With so many government officials focused on artificial intelligence, the critical issue of data quality is getting less attention. A conversation with Rudy de Leon Dinglas, chief of staff of the Bloomberg Center for Government Excellence (GovEx) at Johns Hopkins University, revealed his concerns about this issue: “I don’t think we should turn our back on basic data management strategies, because they’re very foundational.
AI is here, but we cannot turn our backs on making sure that we’re shoring up the infrastructure about data strategy and data policy.” Added Kel Wang, manager of applied data practices at GovEx: “All around, everybody talks about AI, but it’s honestly hard to come across articles around the lines of data quality or data inventories or data governance. Data quality is not the shiny icing on the cake.”

Ultimately, this goes back to the old adage “garbage in, garbage out,” a phrase that’s been around since the early days of widespread computer use. We brought up one significant issue with the accuracy of information drawn from AI in a piece we wrote based on a roundtable discussion about the topic hosted by the IBM Center for the Business of Government. As we wrote, “AI can pick up opinions and understand them as fact, which are then used to make decisions. But the confusion between opinions and fact is a significant one, and when they are conflated, public sector leaders can be misled.”

We’re hardly alone in this concern. The MIT Sloan School of Management had this to say: “AI tools like ChatGPT, Copilot, and Gemini have been found to provide users with fabricated data that appears authentic. These inaccuracies are so common that they’ve earned their own moniker; we refer to them as ‘hallucinations.’

“For an example of how AI hallucinations can play out in the real world, consider the legal case of Mata v. Avianca. In this case, a New York attorney representing a client’s injury claim relied on ChatGPT to conduct his legal research. The federal judge overseeing the suit noted that the opinion contained internal citations and quotes that were nonexistent.
Not only did the chatbot make them up, it even stipulated they were available in major legal databases.”

In a recent e-mail conversation with Doug Robinson, executive director of the National Association of State CIOs, he underlined this point, writing that “GenAI models are trained on and use massive amounts of data to be successful. You need to feed the beast! We've identified data quality as one of the most essential elements of GenAI adoption in the states in our blueprint. If you can't trust the quality, integrity and reliability of your data, you can't trust the results of the analysis.”

Last year, NASCIO collaborated with EY on a national survey of the states about this concern and discovered that “very few states comprehensively address data quality and integrity,” wrote Robinson. Despite that unfortunate bit of news, the same report indicated that “Ninety-five percent of the respondents believe that increased adoption of AI and generative AI (GenAI) will impact the importance of data management.”

Robert Osmond, Virginia’s chief information officer, was cited in the report as reinforcing the importance of keeping close watch on data quality before allowing AI algorithms to thrust information into the world, “emphasizing the state’s commitment to ensuring that the foundational data for AI, particularly large language models (LLMs), must be reliable and of high quality. So that AI is ethical, responsible, and transparent, the foundational data used for AI must be accurate, and the results of the AI must be thoroughly tested for accuracy.”

You’d think that the advent of AI, which makes data vastly more accessible, would have encouraged governments to focus more closely than ever on cleaning up their data. Sadly, that doesn’t appear to be the case in many places.
The glittery prospects of AI are taking up a huge amount of government capacity, and it’s easy to get so lost in the prospects for the future that the basics of data management, which are essential for AI to be useful, lose traction. #StateandLocalGovernmentData #StateandLocalDataAccuracy #StateandLocalGovernmentGenerativeAIPolicyandManagement #DataAccuracyAndArtificialIntelligence #ArtificialIntelligenceDataAccuracy #DataQuality #StateLocalDataGovernance #CityData #CityDataGovernance #StateLocalGovernmentAIPolicyAndManagement #CityDataQuality #StateandLocalDataQuality #StateandLocalGovernmentManagement #GovEx #BadData #StateLocalArtificialIntelligenceManagement #CityTechnologyManagement #StateandLocalTechnologyManagement #StateandLocalPublicAdministration #CityPublicAdministration #BloombergCenterForGovernmentExcellence #RudydeLeonDinglas #KelWang #DougRobinson #NASCIO #IBMCenterBusinessOfGovernment #RobertOsmondVirginiaCIO #BandGReport #BarrettandGreeneInc
- INFORMATION CAN BE A BURIED TREASURE
A few weeks after we dropped our then-18-year-old son off at the University of North Carolina in 2005, we got a note from him, thoroughly excited about having discovered something called microfiche. For those of you for whom that technology is long forgotten, it was a way to see originals of documents on a transparent card, using a so-called microfiche reader, in order to gain access to long-forgotten archives of magazines, newspapers and other sources of information. By that time, the internet had already become ubiquitous, and Google and other search engines were increasingly turned to as the be-all, end-all source of information. But he had discovered the joy of deep-digging research, using original documents that could never be found online but represented a kind of buried treasure for a diligent researcher.

We were thinking about this the other day when we were chatting with someone about the fact that AI is rapidly becoming the current equivalent of Google as the source for all the information in the world. Increasingly, researchers in the realm of state and local government and elsewhere are turning to AI to give them the information they need to help write reports or even make important decisions.

But there’s a problem. What AI can find for you is information that has been digitized. And there’s tons of valuable material that’s never been put in digital form. That includes information about groups and cultures whose histories have been written about less and never picked up by the digitizers of the world.

We’ve become acutely aware of this when we consider the articles we wrote for the now-defunct magazine Financial World. This work includes the predecessor to the Government Performance Project (published in Governing magazine and sponsored by the Pew Charitable Trusts).
Boxloads of information accompanied those evaluations of cities, counties and states, but unless we can find the original publications (we have a bunch, but they’re hidden away in boxes), they might as well never have been written at all. Financial World went out of business in 1998.

For the purposes of this B&G Report, we checked to make sure the preceding paragraph was accurate. So we used Perplexity, one of our favorite AI tools, and searched using the prompt, “Tell me about work done evaluating cities in Financial World Magazine.” The response it spat out instantaneously began with “CEOWORLD magazine evaluates cities through its International Financial Centers Index. . . .” That had nothing to do with the years of work we put into our Financial World efforts, which, we immodestly believe, could be of huge use to anyone who wants to delve deeply into the history of topics like performance measurement. We fear that as researchers increasingly rely on AI for their work, much material that is of value will become lost to future generations.

Sometimes, the best repositories of information are in the minds of the human beings who were around at the time. There have been multiple one-on-one conversations with people who have recounted memories that were never scanned. We’ll boast a bit and give an example of the kind of research we’re talking about. Long ago, when we were writing a biography of Walt Disney, we grew intrigued with his favorite teacher, Miss Daisy A. Beck. Previous biographers had taken note of her existence and referred to some correspondence he had with her, which could be found in the Disney Archives. But that was about it. We thought that since she was such an important influence on him (she encouraged his drawing), we’d dig a little more. Fortuitously, we discovered that her aged niece and that woman’s son were still living near Kansas City, Missouri, and that they’d be willing to visit with us.
We learned that Daisy Beck was “a stylishly dressed woman in her late 30s” at the time, and that she coached the track team. Though Walt was no athlete, we were able to write that she urged him to try out for track, telling him, “Hop right out there at recess and show me what you can do.” We looked Daisy Beck up on Perplexity, and although the AI tool told us who she was, along with the usual material from the other biographies, you won’t find any of those details there.

Robert Caro, author of The Power Broker and the first four volumes of a planned five-volume biography of Lyndon Johnson, has no parallel in this kind of work, and though his diligent research is out of reach for most biographers, his digging for details shows how much treasured information is buried away. As he told The Paris Review: “The LBJ Presidential Library is just massive. The last time I was there, they had forty-four million pieces of paper. These shelves go back, like, a hundred feet. And there are four floors of these red buckram boxes. His congressional papers run 144 linear feet. Which is 349 boxes. A box can hold eight hundred pages. I was able to go through all of those, though it took a long, long time.”

Will future journalists, authors and students continue to dig? It used to be said that history is written by the winners. Now we fear it will be written by the digitizers. #StateandLocalGovernmentResearch #StateGovernmentAndArtificialIntelligence #StateLocalGovernmentGenerativeAIPolicyAndManagement #ArtificialIntelligence #ArtificialIntelligenceAccuracy #ArtificialIntelligenceAndHistory #ArtificialIntelligenceBlindSpot #WhatAIMisses #GovernmentPerformanceProject #FinancialWorldCityEvaluation #FinancialWorldStateEvaluation #ArtificialIntelligenceInStateAndLocalGovernment #WaltDisney #DaisyBeck #WhatAILeavesOut #DigitizedHistory #BandGReport #BarrettandGreene
- THE GRAVEYARD OF GOOD IDEAS
About a year ago, we wrote a B&G Report titled “Is Speedy Government Good Government?” in which we made the case that the pressure for states and localities to solve problems overnight can often lead to failed efforts. “Jumping into new initiatives without the time for adequate preparation may be great for elected officials anxious to make quick headlines,” we wrote. “But glitter often tarnishes as time passes and efforts go through a rational progression of brainstorming, bringing in stakeholders, finding revenue streams, building a talent base, and forging through a political process that can sometimes favor inertia over momentum.”

That column seemed to strike a responsive chord among many of our readers. One of them, Barry VanLare, a former employee of the National Governors Association and an active National Academy of Public Administration fellow, commented on a related issue that we think is worthy of note. “While inadequately staffed and hasty implementation can, and often does, lead to failure,” he wrote, “there is also a problem with the frequency of change. All too often potentially successful initiatives are abandoned well before they can reasonably be expected to produce results.”

In our experience that’s absolutely true, and it’s a pity. Over the years, we’ve seen many efforts that seemed to have great promise fall by the wayside before they had the opportunity to prove their value. We’ve seen this happen repeatedly in the realm of education, where new programs are started that might realistically take four or five years to show progress. But then new leaders come in and want to try something new (but not necessarily better, and maybe not as good).

Even the best programs often take time to have an impact, even when they’ve been carefully thought out beforehand. New staff may be needed, and existing personnel have to be trained (and sometimes there’s no money to do so, but that’s a different story).
Progress rarely accrues on the back of a good idea until the people charged with implementation are up to speed. What’s more, sometimes terrific new programs have a whole host of little flaws that need fixing before they begin to show their worth. Getting to visible solutions often requires an iterative process, with tweaks and changes necessary before results improve.

Delays in gathering, analyzing and disseminating data can also lead people to believe that state or local governments aren’t making things better for them. When an evaluation or performance measurement report comes out a year after a new program begins, it’s generally based on the most recent data available, which frequently comes from before the effort was even started. Administrators are likely to understand that this is just a timing issue, but when the local press picks up on the report, it’s all too easy to ignore the existence of the initiative, and that, in turn, can make residents clamor for yet more change.

Lurching from one initiative to another without sufficient evidence of whether the first one worked isn’t a productive approach. Yet impatient residents, who are increasingly losing trust in government, can push their leaders to forfeit the advances that might have been made if they’d only had the time. #StateandLocalGovernmentManagement #StateandLocalPerformanceManagement #StateandLocalPerformanceMeasurement #StateandLocalProgramEvaluation #StateandLocalGovernmentData #StateLocalPolicyImplementation #GovernmentProgramGraveyard #CityGovernmentManagement #CityGovernmentPerformance #StateandLocalHumanResources #HastyProgramImplementation #LaggingData #TrustinGovernment #StateandLocalWorkforce #StateandLocalProgramFailure #BarrettandGreeneInc #DedicatedtoStateandLocalGovernment
- INSIDE THE RESEARCH SAUSAGE
Over the years, we’ve happily written about scores (probably hundreds) of reports that are focused on broad areas of state and local government, like budgeting, human resources, performance management, infrastructure and so on. And we’ve been involved in creating some of these, too. When we read one of these documents, we always try to see how the authors have reached their conclusions. We tend to be persuaded by quotes from experts who have strong reputations, clear-cut methodologies and, most of all, persuasive data.

Here’s the unfortunate news that we hardly want to admit to ourselves: Despite the best intentions of many of these research exercises, there are often hidden flaws. Sometimes, these are just sloppy errors. Anyone can make a mistake. But we’ve come to believe that all too frequently the authors were fully aware of these issues but didn’t want to share these dirty little secrets with the world.

Often, we only become aware of this situation when we get somebody on the phone to learn more about their research and discover that they can’t answer many of the basic questions that we pose, including requests for concrete examples of the phenomenon about which they’ve written. We’re tenacious about such things, and generally we’ll ask our source to follow up and send us the missing facts or anecdotes, and in most cases they agree. Then a day or two passes and we follow up with an e-mail asking for the missing numbers or the stories that will help us to illustrate their contentions. Sometimes the answers are forthcoming with a little nudging. But often those e-mails go ignored until we contact the person a few more times and are ultimately told that they can’t get the information we’re seeking, either because they don’t have time or because the data aren’t available. On our more misanthropic days, we conclude that they knew all along that there were no answers they’d be able to find but were hoping that we’d forget and go away. 
Ultimately, this leads us to using hedge words in our write-ups because we’re not entirely confident of the findings and don’t want to hang our reputation on their absolute accuracy. Or, in more extreme cases, we just drop the example from our own published work.

Here’s a real-world example: About six months ago, we were writing an article about a city survey. Our question was simple: “How many people responded?” There was no answer immediately forthcoming, but (using the approach we’ve just written about) we waited to find this out. If there weren’t enough responses, then we didn’t want to use the findings. Finally, our source told us that this information wasn’t public. We could conceivably have used the Freedom of Information Act to dig deeper, but this was just one illustration we planned to use out of many. And we were on deadline. Our solution: We dropped mention of the survey from our piece. But had we not thought to ask the original question, we would have run the risk of using the results of a survey that was problematic.

Another issue: Reports that make a strong point and include supportive evidence, but ignore a stockpile of facts that take the other side. We’re not suggesting that reports need to say “on the one hand and on the other hand,” equating minimal evidence of one point of view with more powerful proof of the other. But when there’s sufficient research showing a contrary point, we’d argue that the researcher owes it to the audience to acknowledge that.

All this brings us back to a time, many years ago, when we were writing a documentary about Walt Disney, for which we interviewed 77 people, most of whom knew him well, while others were well-known historians. In a filmed conversation with one of the historians, he told us that Disney’s father had never had any success in life. 
We pointed out something we knew that was contrary to that, and the historian said the following: “Yes, that may be true, but it doesn’t fit into the theme.”

Years ago, an editor of ours (who we’ll not mention by name, for obvious reasons) complained that we hadn’t come to our conclusions before we did the reporting. We were somewhat younger then and didn’t have the courage to say that we thought this was ridiculous. We couldn’t conjure up any conclusions on the spot, which didn’t make him happy.

This kind of mindset was extreme, and we think it’s relatively rare. We believe most people try their hardest to be fair and honest and complete in their work. But too often that’s not the case. And that’s a problem: when other researchers rely on false published narratives, the false conclusions can be repeated until everyone believes they’re true.

#DedicatedToStateandLocalGovernment #StateandLocalGovernmentResearch #HiddenResearchFlaws #DataQuality #StateandLocalResearchFlaws #QuestionableSurveyResearch #ReportBias #MissingData #StateandLocalGovernmentManagement #CitySurveyResearch #StateandLocalSurveyResearch #StateandLocalGovernmentReports #BandGReport #BarrettandGreeneInc
- HOW DELAYS CAN DAMAGE TRUST IN GOVERNMENT
More than ten years ago, leaders in West Haven, Connecticut (where the Greene of Barrett and Greene grew up), began planning a fabulous new development called “The Haven.” It is still described on its website as an “unprecedented waterfront destination that blends an inspired outlet experience with the ambience of luxury resort.” It sounded like a dream to economic developers, and in preparation for the project, a section of the city was effectively decimated. So far, however, nothing has happened. And the city is searching for a new developer.

Residents like Vicenta Gibbons are thoroughly angry. As she told us, “People were forced to move out of that area and were deceived, as well as the rest of West Haven by promises of a high-end mall area, where sheiks would fly in to shop . . . It is (now) blighted and depressing. We could have a wonderful venue for concerts, sports, etc., (as they have in Bridgeport) since it is right off the highway and certainly is a large enough parcel. The city could have been collecting taxes for several years now.”

We’ve been thinking a lot lately about trust in government, and it strikes us that delays like this are the kind of thing that detracts from people’s faith that their governments are functioning well. Once a project has been announced and voters see that it hasn’t actually come to pass, that builds the sense that government simply doesn’t get things done. Of course, a great many efforts do come to fruition on time and on budget, but given human nature, it’s often the ones that are sadly put off that stick in the public consciousness.

This phenomenon isn’t limited to giant infrastructure projects. Consider the effort in Massachusetts to come up with proposals for a new state seal by July. Seems like the simplest of efforts, right? But it’s currently running months behind schedule. 
Worse yet, back in 2021, a 20-member panel couldn’t come to any decisions about the new seal after extending its deadline several times, prompting Secretary of State William Galvin to call the effort a “complete failure.” Debates go on, and nobody has any idea when a new seal will actually be accepted.

Then there was the effort we described a couple of weeks ago, in a Management Item, about the delays in Colorado’s efforts to cut back its owned and leased space by 1 million square feet by July 1, 2025. As we explained, the goal was overly optimistic, and difficult negotiations over long-term leases have meant that the state has pushed that goal back to July 1, 2027 (and cut back its target to 800,000 square feet).

There are a number of reasons why governments frequently don’t deliver on time. Sometimes, as in Colorado, one element of the project wasn’t taken into account at the outset. Other times, it’s a matter of underfinanced projects or politics intruding on an announced plan. There are also any number of challenging factors that weren’t predicted at the outset (like the rising cost of concrete).

In an opinion column in Governing last year, Stephen Goldsmith, professor, former mayor of Indianapolis and one-time deputy mayor of New York City, wrote that “officials should create a culture of urgency.” He cited the way former NYC Mayor Michael Bloomberg had “placed clocks on every conference room table, all set at a maximum of 30 minutes, in an attempt to render obsolete needlessly long meetings.” The column quoted Brad Keywell, author of the “Story of Time,” recommending “killing bad regulations and processes, halting obsolete approaches, which would be a refreshing, even thrilling, way of governing.”

There’s one more element at play here, and that’s the efforts that are simply unrealistic, or badly planned from day one. 
The poster child for this kind of thing is the Second Avenue Subway in New York City, which was intended to run 8.5 miles along Manhattan’s east side. It was (believe it or not) initially proposed over 100 years ago, and the first tangible progress came in 2017, when three new stations opened. There’s no guarantee that anything more will be done anytime soon. It lives large in the minds of many old-time New Yorkers as an example of how big projects don’t get done.

There are no magic bullets to guarantee that government projects will be completed on time, and there are plenty of hazards, as well, in setting highly unrealistic goals or rushing action without due consideration of potential negative consequences. As for the more common reasons for delay, we do have one piece of advice: When action slows down, and residents begin to roil with disappointment, explain, as publicly as you can, the reasons. At least then, the public is less likely to lose trust due to a lack of understanding of the exogenous factors that aren’t under the government’s control.

#StateandLocalGovernmentManagement #StateandLocalPerformance #StateandLocalPerformanceManagement #CityGovernmentManagement #CityPerformanceManagement #StateandLocalPublicAdministration #StateandLocalTransparency #CityGovernmentTransparency #StateandLocalInfrastructureManagement #TrustInGovernment #CityInfrastructureManagement #StateandLocalGovernmentProjectDelay #NewYorkCitySecondAvenueSubway #TheHavenWestHavenCT #PublicSectorGovernmentTrust #CityProjectDelay #StephenGoldsmith #BandGReport #BarrettandGreeneInc
- GOVERNMENT IS MORE THAN RED VS BLUE
We’re looking forward to participating in a panel at the upcoming annual conference of the American Society for Public Administration (ASPA) about Trust in Government. With that in mind, we’re devouring anything we can read about that topic (though we’ll be focusing on states and localities at the conference). No surprise that an article from The New York Times jumped out at us; it said early on: “Voters across the political spectrum are disillusioned with a government that has become synonymous with ‘Groundhog Day’-esque spending battles, slow public works projects and political gridlock.”

Since we’ve spent our professional careers focusing on management and policy, not on politics, it dismayed us to see that two of the three reasons the Times cited for widespread disillusionment had to do exclusively with politics. The real work of government – delivering services like health care, transportation, education and public safety – seems to have taken a back seat.

Now, we’re going to take a leap forward and theorize that to many Americans, government – and not just the federal government – has become understood as equivalent to politics. At least that’s the impression you’d get through the kind of social media upon which many Americans rely to get their news.

A few decades ago, when we were being interviewed about articles we had written about state government, reporters would sometimes ask: “And what’s the governor’s party?” Believe it or not, we often had no idea. It didn’t seem important to us at the time. Perhaps there was some naiveté attached to that approach, but we wish that we could go back to the good-old-ignorant days. Fortunately, even to this day, we’re able to have many conversations with people who work behind the scenes to make government work. It’s extremely rare for politics – and political preferences – to enter in. 
They may be Democrats or Republicans themselves, but they’re very happy to talk about the improvements they’re making in operations and services. We don’t want to make the bulk of this column a screed about our woes, but we’ll go on a little more and let it be known how frustrating it is for us when people ask what we do for a living. Our one-line answer is that we “analyze, research and write about state and local government.” If their eyes haven’t glazed over by this point, the next question they ask is almost always about the politics of their city, county or state – not how well they are managed or what exciting new programs they are developing.

The problem here is that when Americans think it’s all about the politics, they can easily lose track of all the efforts that are made on a daily basis to try to make life better for residents of cities, counties and states. What’s more, we fear that agency heads and staff in states where the political sky is murky will be reticent to boast about their accomplishments out of fear that this can make them targets of politicians who want to attack rather than to encourage.

We’re not making this stuff up. We were recently at a conference and had a great conversation with an agency head in a state where the politics are rageful, and we asked her if she’d like to write a guest column for this website about the good work she’s been doing. She said that wasn’t something she’d be able to do these days because she wanted to just go about her work, without getting publicity for it, thus keeping out of the line of fire of one of the two parties.

This kind of thinking leads us to fear that if the people in government who are doing the hard work are reticent to talk about it, then we’ll be caught in a downward spiral in which government is equivalent to politics. And the work of civil servants will recede ever further into a shadowland of stuff that only self-described wonks know or care about. 
#StateandLocalGovernmentManagement #StateandLocalGovernmentPerformance #StateandLocalPerformanceManagement #CityandCountyManagement #TrustInGovernment #StateandLocalPublicAdministration #CityPublicAdministration #CityGovernmentManagement #CountyGovernmentManagement #StateandLocalPolicyImplementation #PoliticsVsGovernment #DedicatedToStateandLocalGovernment #CityGovernmentPerformance #StateandLocalDataGovernance #CityTechnologyManagement #StateandLocalServiceDelivery #StateandLocalGovernmentHumanResources #PublicSectorHumanResources #StateandLocalWorkforce #GovernmentOversight #StateandLocalGovernmentEfficiencyandEffectiveness #StateandLocalInfrastructureManagement #AmericanSocietyForPublicAdministration #ASPA #BandGReport #BarrettandGreeneInc
- DO YOU SPEAK DATA?
Nearly every city, county or state gathers huge quantities of data for a variety of purposes. Some of it is effectively used, some is entirely ignored, and some of it is denied. But one thing is clear: Data has become the language of government. With that in mind, it’s increasingly critical that government employees are bilingual; not only must they be traditionally literate in that they can speak clearly and understand what others have said, they also need to be steeped in something that’s become widely known as data literacy. This is not intended just for people who would identify themselves as data specialists, whose jobs are primarily to help create or analyze data. Data literacy efforts must spread throughout any well-run government organization. The need for government leaders to understand data – and for data specialists to translate it into clear comprehensible English – has become especially critical for those who are using it to make important decisions or monitor the programs in which they’re involved in order to take steps to improve them. But, in the real world there’s no requirement that elected officials really understand the charts, graphs and spreadsheets that are put before them. When was the last time you heard someone running for city council or selectman boasting that “I’ll be able to do my job well because I’m data literate”? What’s more, it’s critical for staff at many levels to understand the data and its significance, even if they’re not using it to make decisions. That’s because many employees collect and input data. Multiple cities have found that data quality improves when their workforce understands the use to which the data is put, its importance to the taxpayers they serve, and its connection to the success of their departments. 
This idea was well spelled out in a 2022 Deloitte report, “Data Literacy for the Public Sector: Lessons from Early Pioneers in the U.S.” One of its most important findings was that, “In order for agencies to effectively engage in the ever-changing data landscape, organizational data literacy capacity and program models can help ensure individuals across the workforce can read, write, and communicate with data in the context of their role. Data and analytics are no longer ‘just’ for specialists, such as data engineers and data scientists; rather, data literacy is now increasingly recognized as a core workforce competency.”

The ways that cities are achieving better staff data literacy come not only from live or online classes and study-at-your-own-pace materials, but also from established communities of interest that encourage employees to meet and learn from each other. Still, not all employees are going to be willing students, as the time they spend achieving data literacy can add to an already overwhelming work week. We’ve never heard of a place that offered overtime hours for employees who were taking an online course in data literacy.

Early efforts to foster data literacy in the public sector workforce began nearly a decade ago, when President Barack Obama signed an executive order that boosted the development of open data – the notion that, absent an overriding reason, all government data should be accessible to all Americans. That idea quickly gained traction with the nation’s cities, and they began to make their data more accessible to a broader range of people both inside and outside of government. But simply providing data to people is only half the battle. It’s equally important that they are helped to have the capacity to evaluate it, understand it and ask the appropriate questions about its meaning. 
Not only is it important for people in the public sector to be able to understand data, it’s also important that they use the words of data to mean the same thing that other people take them to mean. In the first book of the Old Testament, the story is told of the Tower of Babel. At heart, the tale concerns the effort of Babylonians to create a tower that would “reach to the heavens.” But that lofty goal proved to be a failure because the people building the tower spoke many different languages. This ancient story has a lesson for cities now that data has become the language of government. It’s critical for them to speak the same language – to be data literate – or else their programs and policies risk facing the fate of the Tower of Babel.

One challenge that confronts people who are pushing data literacy programs in their cities, counties and states is the lack of resources for adequate training. Like all other government efforts, promoting data literacy isn’t free. As a result, it’s important for elected and appointed leaders to have sufficient buy-in to this process to make the necessary resources available. This is particularly worrisome, as many states and localities are confronting a “fiscal cliff,” and when expenses outpace revenues, the first things to be dropped are often training programs. Despite this tendency, governments that cut back on this kind of education are shortsighted, and their reliance on data is ultimately doomed to be less than successful. 
#StateandLocalDataGovernance #CityDataGovernance #StateandLocalTechnologyManagement #CityTechnologyManagement #StateandLocalGovernmentManagement #CityandCountyManagement #StateandLocalGovernmentPerformance #CityEmployeeDataLiteracy #CountyEmployeeDataLiteracy #StateEmployeeDataLiteracy #PublicSectorDataLiteracy #StateandLocalGovernmentHumanResources #StateandLocalWorkforce #DataLiteracyTraining #DataLiteracyAndElectedOfficials #CityDataQuality #CountyDataQuality #StateDataQuality #CityDataUse #CountyDataUse #StateDataUse #DedicatedToStateandLocalGovernment #BandGReport #BarrettandGreeneInc
- SHORTCOMINGS OF EVIDENCE-BASED PRACTICE
As anyone who has been reading our work in the past knows, we’re strong believers in making sure that there’s ample use of evidence when states and localities decide to take one policy or programmatic turn or another. Despite our faith in the value of evidence-based practices, it’s become clear to us that this is a case where, as Greek fable writer Aesop put it, “Too much of a good thing can be bad.” He was referring, by the way, to flies that were attracted to a honey jar, where they got stuck in the sticky stuff and died.

Back to our concerns about evidence-based practices (which aren’t nearly as extreme as the plight of those flies): When entities rely too much on evidence-based practices before they embark on a new program or policy, there’s a hazard that they may be finding their proofs of concept from dissimilar places. (This is one of the issues we discussed when we wrote about the hazards of the phrase “best practices” a while ago.) As the Center for Law and Social Policy argued a few years ago, “In generalizing knowledge, EBPs fail to consider the cultural relevance of practices, thereby failing to provide certain communities, especially communities of color, with solutions that respond to and understand their individual lived experiences and cultural contexts. An effective practice in one community may not be effective in all communities.”

Additionally, simply finding evidence from another jurisdiction that something works well doesn’t mean that it’s easy to replicate with fidelity, while also providing the flexibility and ability to innovate for those who want to improve upon established practices. By definition, when places want to make true innovations, there isn’t evidence from elsewhere that a new idea works. This makes us think about a conversation we had with a close friend who was then in television programming. 
We were having fun coming up with ideas for TV shows that we thought would be successful, but he shot them down, one at a time, on the premise that this kind of program had never been done, so no one was likely to do it now. We suspect (but can’t prove) that the same thing happens in government: When executive or legislative branches require that their agencies show proof from elsewhere that a new idea works – and they can’t, because it’s never been tried before – true progress can come to a grinding halt.

We’ll take this a step further and argue that risk-taking is an essential element for progress, and the safety belt of evidence can be overused to provide cover so that it doesn’t appear something new will fail. What’s more, for smaller communities, the amount of time and money that’s required to unearth evidence from elsewhere can provoke a lack of initiative, as agency heads watch every budgetary dollar they get.

A few more challenges on this front are cited in a paper titled “What are the limitations of evidence-based practice?” by the Center for Evidence Based Management, from which we’ve drawn the following excerpts:

“Sometimes the best available evidence is not available. This is particularly the case with regard to novel management techniques or the implementation of new technologies.”

“Another limitation is that the current management environment changes more rapidly than in the past, which limits the relevance and applicability of scientific and experiential evidence that was generated in an organizational context that was different than today’s.”

“Some managers see evidence-based practice as a tool to reduce staff expenses: use the best available evidence to determine the best model or technique, hire young, inexpensive practitioners and equip them with an evidence-based protocol to guide their decisions. 
This would not only be a misuse of evidence-based practice but also suggest a fundamental misunderstanding of its principles.”

#StateandLocalGovernmentManagement #StateandLocalManagement #EvidenceBasedPractices #StateandLocalPerformanceManagement #EvidenceBasedManagement #EvidenceBasedPracticeMisuse #EvidenceBasedPracticeShortcomings #GovernmentRiskTaking #StateandLocalInnovation #CityInnovation #CenterForEvidenceBasedManagement #CenterForLawAndSocialPolicy #BandGReport #BarrettandGreeneInc