
Search Results


  • POTHOLES ON THE AI ROAD

    We’re not scared of AI. It doesn’t take much historical memory to know that pretty much any new technology brings out fear and trepidation. In fact, according to Techradar, “When the Stockton-Darlington Railway opened in 1825, people feared the worst: the human body, surely, wasn't made to travel at incredible speeds of 30 miles per hour. People genuinely believed that going that quickly would kill you in gruesome ways, such as your body melting.” Over a century later, many were convinced that the arrival of television was going to ruin children’s eyesight and destroy the movie business altogether. As we noted in one of the two biographies we wrote about Walt Disney, film producers regarded television as “the monster,” and viewed it “with suspicion fearing it would keep audiences out of movie theaters.” And now here comes AI. We turned to ChatGPT for its view on this issue (going to the source seemed only fair). It told us that “As AI systems become more sophisticated, there’s a risk that they may operate beyond human control or oversight.” Still, we feel pretty confident that while AI may have some unfortunate consequences, it’s not the stuff of nightmares. In fact, we’ve tended to write about many of the ways in which AI can help governments run more efficiently and effectively. But our eyes haven’t been closed to the downsides: the hazards of AI both in government and as a research tool. Here are five of the things that are on our mind:

    1. Many governments are working on policies designed to restrain improper uses of AI, and that’s a good thing. But based on years of watching how government regulations work, we worry that there’ll be insufficient oversight to make sure that the policies actually have been implemented.

    2. There’s an enormous amount of misinformation available in the AI-verse. We turned to ChatGPT to ask about ourselves, and after saying a bunch of nice things, it also mentioned that “they have authored or co-authored books such as ‘Balancing Act: A Practical Approach to Business Event Based Insights.’” Having never heard of this book, we were sure we hadn’t written it, and on investigation it turns out to have been written by Kip Twitchell, who works with IBM Global Business Services.

    3. It’s not only the bad information that’s the problem; it’s the information that isn’t even available through AI that can matter. Artificial intelligence, after all, can only know what it can find by foraging the Internet, and a great deal of worthwhile knowledge hasn’t been digitized at all. This can include current information from smaller governments, remote geographic areas, groups of people whose sentiments or lives tend to go undocumented, or magazines, newspapers and journals that no one has ever made available in digital form.

    4. AI has the capacity to come to conclusions based on hidden biases. As an October 2023 brief from IBM pointed out, “Underrepresented data of women or minority groups can skew predictive AI algorithms. For example, computer-aided diagnosis (CAD) systems have been found to return lower accuracy results for black patients than white patients.”

    5. Also alarming to us is the commonplace notion that AI can make decisions for governments. It can certainly be a valuable tool, but as smart as AI may be right now (we won’t begin to think about the future), it can only serve as an advisor, not as a decision-maker. For years now, we’ve advised against using the term “performance-based budgeting,” because we don’t believe that performance measures can be the sole guide to good budget decisions. We’ve always made it a point to talk about “performance-informed” budgeting. The same goes for AI’s potential to help with decisions. It can inform them, but it shouldn’t be expected to make them.
#StateandLocalGovernmentManagement #StateandLocalGovernmentPerformance #StateandLocalGovernmentData #StateLocalGovernmentAIPolicyandManagement #CityAIManagement #StateAIManagement #CountyAIManagement #StateandLocalTechnologyManagement #StateandLocalPerformanceMeasurement #StateandLocalPerformanceManagement #StateLocalArtificialIntelligenceManagement #StateLocalArtificialIntelligencePolicy #GovernmentOversight #FearOfTechnology #GreeneWaltDisneyBio #StateLocalGovernmentRegulation #BarrettandGreeneInc

  • EMPLOYEE SURVEYS' TWO-EDGED SWORD

    Employee surveys can provide extremely valuable information for supervisors, managers and HR officials in cities, counties, and states when they reveal both the negative and positive sentiments that employees have about their workplace. They have the potential to lead to decisions that will improve workforce satisfaction and retention of valued employees. But when employee surveys are viewed as nothing more than another form to fill out, and there’s no feedback or sense that the surveys were ever seen by anyone who cares, then they can have the opposite effect. “When you do a survey, you are implicitly making a commitment to employees that you are going to share the results and do something about it,” says Bob Lavigna, senior fellow-public sector for UKG, a firm that provides workforce and human resource management technology. “And if you don’t follow through on that commitment you are going to damage trust.” We called Warren Kagarise, digital engagement manager for King County, Washington, to ask him about this issue, and his advice was simple: “Once the survey ends, close the loop with respondents — even if the result is not flattering to the agency.” He went on to say that “if you’re not able to close the loop and give the acknowledgment that we heard from you and this is the next step in the process, people assume that you were never listening in the first place.” Of course, simply making the results of surveys available is only half the equation. It’s equally important to let the people who filled them out know what the organization intends to do about the problems that have surfaced. Furthermore, even if there’s nothing much that can be done, it will serve an entity well to explain why that’s the case.
“If, for example, there are complaints about compensation and there aren’t enough resources to raise pay, then it can be worthwhile educating employees about the value of the benefits to their total compensation,” advises Lavigna, “as benefits like health care or pensions can add on another 30 percent or more to that total.” At a time when many state and local governments are concerned about their turnover rates, employee surveys can help keep a satisfied workforce in place. But if those surveys are perceived as a sham exercise, everyone would be better off if they weren’t used at all. #StateandLocalHumanResourceManagement #StateandLocalHumanResources #StateandLocalPerformanceManagement #StateandLocalEmployeeSurvey #EmployeeSurveyFollowUp #StateLocalTotalCompensation #EmployeeSurveyImpact #PublicSectorWorkplace #LaborManagementRelations #RobertLavigna #UKG #KingCountyDigitalEngagement #StateEmployeeSurveyFeedback #CityEmployeeSurveyFeedback #StateLocalEmployeeRetention

  • REBRANDING LOCAL GOVERNMENT

    Over the years, we’ve chatted with high school and college students about the potential of careers working for state and local governments. All too often their responses are something like “Yeah, I suppose so,” in tones that make us believe that they’re not supposing anything at all, except that the whole thing sounds like a crashing bore. This is particularly dismaying for the two of us, who have devoted our careers to this corner of the world. We are pretty sure that for the most part they have little or no idea what kinds of jobs local government can offer aside from the obvious ones like police, fire, and sanitation. Over the last several years, as the workforce shortage has afflicted many governments, we’ve reported on all kinds of outreach efforts that attempt to show the life of a government employee as a secure one with potential for advancement and the opportunity to benefit the world. These efforts are paying off in some places, but we think that many cities, counties and states are missing out on a potentially more persuasive approach. Instead of trying to sell “jobs in local government” as though that phrase had much meaning, they could do a better job of showing potential candidates all the genuinely exciting missions that local government work leads to. In a recent conversation, John Bartle, President-Elect of the American Society for Public Administration and Dean of the College of Public Affairs and Community Service at the University of Nebraska at Omaha, expounded on precisely what we’re thinking. “My children are 25 and 23,” he told us. “And they and their friends are not excited to work for the government.
However, they are excited to advance many of the goals that require government and the nonprofit sector: peace-making, disaster response, improving racial and gender equity in programs, fair taxation, labor rights, and child protection.” According to the National Center for Education Statistics, the most common majors for students earning bachelor’s degrees are business; health professions and related programs; social sciences and history; biological and biomedical sciences; psychology; and engineering. All of these fields offer a plethora of jobs in government. And that doesn’t even count the “helping jobs,” like social work, that are appealing to many. Okay, we admit that maybe the big dreams of many teens – to be professional athletes, movie stars or rock musicians – aren’t on the roster of job titles in government. But we do believe that outside of fantasy-land job plans, there are plenty of young people who aspire to help other people. Better educating them about the many opportunities that exist in the realm of government is a way to sell the brands of jobs that cities, counties and states can offer. #StateandLocalHumanResources #StateandLocalGovernmentManagement #StateandLocalHiring #LocalGovernmentHiring #RecruitingNewGovernmentEmployees #RecruitingYoungPublicSectorEmployees #StateandLocalRecruitment #CityRecruitmentStrategy #CountyRecruitmentStrategy #StateRecruitmentStrategy #MarketingLocalGovernmentJobs #StateandLocalWorkforceShortage #WorkforceShortage #AmericanSocietyForPublicAdministration #ASPA #JohnBartle #UniversityOfNebraskaOmaha #AttractingYoungPeopleToGovernment

  • OUR RESPONSE TO RESPONSE TIMES

    When it comes to many vital public services, including police, fire and EMS, one of the primary – and sometimes the only – performance measures that people use is response time. On the surface this makes a lot of sense. For emergency services, particularly, every moment can spell the difference between a minor incident and a tragedy. To the general public, fast response times are real, tangible evidence that they are getting good service. Just ask anyone who has waited for an emergency vehicle when a relative or friend might be having a heart attack. All that said, however, response times are often misunderstood. Sometimes, when they are overemphasized, they can actually lead to emergencies themselves. It’s our guess that most people who read about response times aren’t aware that they can be measured very differently by first responders. According to Lexipol, which provides information and tech solutions to help public safety organizations, there are three different ways that response times are generally measured:

    • “Turnout time – the elapsed time from when a unit is dispatched until that unit changes their status to ‘responding.’”
    • “Call processing time – the elapsed time from the call being received at the (public safety answering point) to the dispatching of the first unit.”
    • “Travel time – the elapsed time from when a unit begins to respond until its arrival on the scene.”

    There’s a huge difference between the three – particularly from the point of view of the person who is urgently in need of help. With a shortage of EMS vehicles in many parts of the country, for example, after the 911 call is finished it can take the dispatcher valuable minutes to actually get an ambulance company to respond to the call. Once that happens, the ambulance still needs to arrive at the scene.
From the perspective of the person who made the call, the response time might be 23 minutes (from call to help), not eight minutes (for the emergency vehicle to make the trip). If response times are truly to be used as helpful performance measures, we’d argue that what really matters is the amount of time it takes from hanging up with 911 until help comes knocking on the door (or kicking it down in extreme instances). Other measures don’t really reflect the customer experience. Yet another issue with response times is that they don’t take into account the specific situation – and that can jeopardize safety for others, including the responder. If someone thinks they’ve broken an arm, for example, and calls 911, it probably doesn’t matter much if an ambulance arrives in ten minutes or twenty minutes. But if the call is for a fire or a heart attack, then every minute counts. Yet these different scenarios are commingled in published response times. And that means that when emergency vehicles are summoned, responders who are being held accountable for their response times head to the scene as quickly as possible – traveling far faster than the speed limit, going through stop signs and so on. No surprise that, according to the National Safety Council, in 2021, 198 people “died in crashes involving emergency vehicles. The majority of these deaths were occupants of non-emergency vehicles.” Our recommendation is that response times, wherever possible, should be disaggregated in such a way as to differentiate between life-and-death emergencies and those that are far less serious in nature. This would not only make the response time measures more useful – it might save other innocent lives along the way.
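The gap between the three measured intervals and the time a caller actually experiences is easy to make concrete. Here is a minimal sketch using invented timestamps for one hypothetical 911 call (none of these figures come from a real dispatch system):

```python
from datetime import datetime

def minutes(start, end):
    """Elapsed minutes between two timestamps."""
    return (end - start).total_seconds() / 60

# Invented timestamps for one hypothetical 911 call.
call_received   = datetime(2024, 1, 5, 14, 0, 0)   # call reaches the answering point
unit_dispatched = datetime(2024, 1, 5, 14, 9, 0)   # dispatcher assigns a unit
unit_responding = datetime(2024, 1, 5, 14, 15, 0)  # unit marks itself "responding"
unit_on_scene   = datetime(2024, 1, 5, 14, 23, 0)  # unit arrives

call_processing   = minutes(call_received, unit_dispatched)    # dispatcher's measure
turnout           = minutes(unit_dispatched, unit_responding)  # station's measure
travel            = minutes(unit_responding, unit_on_scene)    # the oft-reported figure
caller_experience = minutes(call_received, unit_on_scene)      # what the caller lived through

print(f"travel: {travel:.0f} min, caller experience: {caller_experience:.0f} min")
```

With these made-up timestamps, publishing only travel time reports eight minutes while the caller waited twenty-three: the "23 minutes versus eight minutes" gap described above.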
#StateandLocalGovernmentManagement #StateandLocalPerformanceMeasurement #ResponseTime #PoliceResponseTimeManagement #FireResponseTimeManagement #EMSResponseTime #ResponseTimeManagement #PoliceManagement #PoliceData #FireManagement #FireDepartmentData #EmergencyManagementResponseTime #StateandLocalDataGovernance #NationalSafetyCouncil #ResponseTimePerformanceMeasures #Lexipol #PerformanceMeasurement #PerformanceManagement #B&GReport

  • THE TWELVE BIG LIES ABOUT STATE AND LOCAL GOVERNMENT

    There are all kinds of variations on the theme of the three big lies that people tell in the normal course of day-to-day life. One of our favorite sets consists of: 1) This is for your own good. 2) It’ll be done by 3:00. 3) It must be true; I heard it on the news. In the old days there was another one that was exceedingly popular: “The check is in the mail.” But nowadays nobody much sends checks in the mail, so we’d offer a replacement: “The check is being processed.” Those deceits, of course, are generic in nature. But over the years we’ve been collecting a series of mantras about the alleged reality of state and local government that don’t necessarily work in the real world. We’ve heard them from people at all levels of government, sometimes from established authorities and sometimes from people who just pretend they understand the way government works. Here are our top twelve. We’d be interested in hearing additional ones from readers of this B&G Report. Of course, some of the dozen items that follow are valid sometimes. But we’ve heard them repeatedly when ample evidence demonstrates that they’re wide of the mark. By the way, we hesitate to use the word “lies” here, as that word seems to have become widely open to interpretation these days; it’s frequently used just to describe something with which the accuser disagrees. So, just to be specific, what follows are explanations about the way things work that are frequently NOT the way things work. The list is based on both our own experience and the understanding of states and localities we’ve accumulated over the last thirty years.

    1. “We know we are in financially sound shape because we have to pass a balanced budget.”
    2. “It’s impossible to fire a public sector employee.”
    3. “We’ll solve this problem by setting up a commission. Or a study group.”
    4. “Our transparency website means our government is transparent.”
    5. “Buying new technology will be the key.”
    6. “Merit pay is pay based on merit.”
    7. “The key reason we have a huge unfunded liability in our pensions is that our benefits are too rich.”
    8. “You should just look at the general fund in order to analyze our city or state’s financial condition.”
    9. “You can always trust our data.”
    10. “Government can be run like a business.”
    11. “Everything we need to know is on the Internet.”
    12. “Once a piece of legislation is passed, that means that something is really going to happen.”

#StateandLocalGovernmentManagement #StateandLocalHumanResources #StateandLocalBudgeting #StateandLocalGovernmentPerformance #StateandLocalGovernmentTransparency #StateandLocalGovernmentWorkforce #StateandLocalPension #StateandLocalGovernmentTechnology

  • WHY THE PHRASE "BEST PRACTICE" MAKES US JITTERY

    We have to admit it. More than once we’ve referred to a policy or management approach as a “best practice.” But mostly those words were originally uttered by a source we quoted. Frankly, those two overwhelmingly common words often make us uneasy. There may be cases in which best practices can apply from city to city and state to state. Best budgeting practices, for example – such as those developed by the Government Finance Officers Association – can certainly be useful. It’s an accepted best practice in budgeting, for example, that entities should cover current-year expenditures with current-year revenues – not revenues borrowed from the future. Who can argue with that? Outside of budgeting, there are some other areas in which best practices can certainly hold up. And many of them, which may not have held true in the past, are now thankfully self-evident. In human resources, for example, it's certainly a best practice to make every effort to avoid explicit or implicit racism in hiring or recruiting. Or consider the realm of information technology, where no one can deny that sufficient training can be fairly called a best practice. Before we go on, it seems worthwhile to provide our own definition of "best practice." Others may disagree, but it's the way the words sound to us – and, we suspect, to many others. We believe the ubiquitous phrase should be used to describe management policies that can be applied pretty much universally. Best practices, we'd argue, should be something like plug-and-play models that others can pick up and use with a reasonable assurance of success. But that's often not the way the words are used. For example, the latest glittery idea that seems appealing (but has been proven worthwhile in only a smattering of places) is often dubbed best. We see this all over the place.
People writing reports for any number of significant organizations will take a study of a handful of cities or states and list the approaches they’ve uncovered as “best.” Not to seem cynical, but we've noticed that the words "best practice" are often used by consulting firms to sell their own approaches. For years, it was considered a best practice for states to set aside exactly 5% of revenues in their rainy day funds. No more. No less. When we researched the topic, we discovered that the precise number emanated from an off-the-cuff comment in a speech given by a leader at one of the ratings agencies. As years have passed, thinking on the topic has grown more sophisticated. The Volcker Alliance, for example, has thrown that 5% figure out the window and encourages states to tie their reserve funding to the volatility of revenues. Here are five reasons we are concerned when a best practice is ballyhooed by a government official:

1) Ideas that work in rural areas often don't apply well to densely populated cities.
2) Approaches for homogeneous regions may leave out elements important in places with greater diversity.
3) Things that work well in healthy economic times may need to be forgotten in the depths of a recession.
4) Changing times generally require new solutions. For example, in the depths of the pandemic it was a best practice not to shake hands. Nowadays, people even hug hello.
5) The label is too often applied before a notion has been properly evaluated and proven to be generally workable.

Fortunately, there are a number of alternative phrases that can be somewhat more accurate. We prefer "promising," "leading," or "accepted" practice. None of these implies a universally, unquestionably, absolutely superior way of doing government business. Ultimately, though, this is more than just a matter of semantics.
The fundamental reason we feel as we do about practices being labeled the “best” is that this phrasing may stand in the way of the evolution of thinking that’s necessary for progress in states and localities. If we know the best way to do something, then why look for a better way? And the search for better-functioning government is the core of what we do for a living. #StateandLocalGovernmentManagement #StateandLocalGovernmentPerformance #EvidenceBasedPractices #BestPracticeCynicism #ErroneousBestPracticeLabeling #AvoidingBestPracticeLabels #StateandLocalBudgeting #StateManagement #LocalManagement #PerformanceManagement #EvidenceBasedManagement #EvidenceBasedDecisionMaking #EvidenceBasedDecisionMakingShortcoming #GovernmentConsultantOverreach

  • FIGHTING FRAUD: ADVICE FOR SMALLER CITIES

    Whatever their size, all state and local governments must contend with the possibility that federal grant money or taxpayer revenues will be siphoned off by perpetrators of fraud. While many of the defenses against fraud are similar, there are some cautions that particularly apply to smaller communities. In an interview that appeared on Route Fifty’s October 11, 2022 “Follow the Money” broadcast, City Manager José Madrigal talked about the effort to fight fraud in Durango, a city of 19,000 in a rural county in the southwest part of Colorado. We're highlighting here a few of Madrigal’s comments that are particularly germane to the 19,000 cities, towns and villages with fewer than 25,000 people. The following edited comments occur toward the end of the broadcast, slightly after the 15-minute mark. The hazards of personal connections in a smaller community: Madrigal said, “Connection can be a really great thing because, you know, with that small-town feel, everybody knows each other. My kids go to school with a lot of my co-workers’ kids. They sometimes hang out. We’re very well connected, and sometimes with that personal connection comes a letdown in your guard. “You know the person. Our kids play soccer together and they play basketball together . . . You start building all of these social connections. In places where you may not be personally connected, it’s easier to be a little more suspicious." On learning from larger communities: Madrigal remarked that cities with smaller populations could still model themselves on bigger cities and not view size as a barrier. “Some people who have not been in a bigger city have this shield. ‘Oh, no. Can’t do that. . . They have more resources than we’ll ever have.’ “I think there’s ways where you can scale a lot of those things. I may not have a 30-member accounting department, but I have 15, and I can be able to do some things in a better way.
“I think sometimes the bravado of coming from a small town or representing a small town (makes us think) we can’t do it like bigger towns. There’s a lot of processes that are out there that I think you can definitely scale down so as not to be intimidated by the processes of bigger areas. Look at them and say, ‘What can I bring in?’” #Fraud #PublicSectorFraud #Durango #Colorado #JoseMadrigal #FightingFraud #RouteFifty #FollowTheMoney #StateandLocalGovernment #StateandLocalGovernmentFraud #CityandCountyManagement #StateandLocalGovernmentManagement #SmallCityCulture #GovernmentOversight

  • GUIDELINES FOR BUILDING A POTENT PERFORMANCE MANAGEMENT SYSTEM

    In early 2022, we were honored to join the late Harry Hatry, a pioneer of performance management and a distinguished fellow at the Urban Institute, and Batia Katz, also of the Urban Institute, in co-authoring one of his last papers, titled “Do’s and Don’ts: Tips for Strengthening Your Performance Management Systems.” Hatry, whom we'd known for decades, passed away on February 20, 2023 at the age of 92. The paper also included contributions from a formidable group of performance experts, among them Maria Aristigueta, dean of the Biden School of Public Policy and Administration at the University of Delaware; Don Moynihan, McCourt Chair at the McCourt School of Public Policy at Georgetown University; and Kathy Newcomer, professor in the Trachtenberg School of Public Policy and Public Administration at the George Washington University. The paper sums up a great deal of performance measurement and management knowledge that Hatry and others put together over many years. Here are a handful of our favorite items under the category of “Do’s.” They aren’t taken verbatim from the paper – we’ve edited many for length. Collecting Performance Data: Do seek input from stakeholders as part of your process for selecting performance measures to track. Stakeholders are likely to include frontline employees (those who serve your program participants); special interest group members; elected officials; and, especially, participants in each relevant demographic group. Do make sure that mission statements focus on expected benefits. What benefits or outcomes are sought from the program, and for whom? What negative outcomes does the program seek to avoid or alleviate? Too often, mission statements identify only how benefits will be achieved without clarifying what benefits are sought. Do include both output and outcome indicators in performance measurement systems.
Select the measurements used to track the performance of new programs at an early stage of program development. Defining and sharing these measurements will provide guidance for people in positions of authority in the program about what is expected of them. Analyzing Performance Data: Do compare outcome values broken out (disaggregated) by demographic characteristics (such as age group, race/ethnicity, gender, educational level, and location). This is of major importance in identifying service equity issues and different service procedures that would boost the outcomes for different demographic groups. Identify and highlight in performance reports unexpected issues indicated by these breakouts. Do compare performance values over time. Look for trends that indicate the need for action. Do compare performance indicators’ actual values with the targets that had been set for them. Targets are used both as an accountability tool and to motivate program staff. Presenting Performance Findings: Do identify any significant ways in which data collection has changed over time. Otherwise, users may be misled by year-to-year changes that are not attributable to real-world improvements or declines but simply to changes in the way the data have been created. Do clearly define each performance indicator. Both data collectors and data users should be able to understand what is being measured. For example, fire departments can measure response time from the moment a call comes in until trucks arrive at the scene, or, alternately, from the moment the trucks leave the station. Do tell stories that illustrate the data’s meaning and importance. Numbers alone will only communicate effectively to readers who enter a document with curiosity. Real-world anecdotes will engage a far larger audience. Disseminating Performance Findings: Do share findings with the press.
One common complaint is that performance information only gets attention when it's negative. That can only be counteracted with a proactive approach. One key to getting attention in the press is to provide information that runs contrary to common assumptions. Do make the latest data on performance indicators readily accessible to program managers throughout the year. As issues arise during the year, the latest performance data should be available to managers to use in addressing those issues. Do provide summaries and highlights to report audiences after each performance report is produced. Using Performance Findings: Do reply to queries about findings, even if they are critical in nature. If it turns out that a query challenges findings in a way that could raise some doubts, it’s worth acknowledging that. Trust and credibility grow when room for doubt is acknowledged. Do periodically review the performance measurement process and update it as needed. Is it tracking the right things? Are the performance data and the data collection procedures producing data of sufficient quality? Is the information of sufficient value to justify a measurement’s added cost? Are the performance findings clearly presented? Has the information gotten to all those who can use it? Do unleash the full power of performance data, not only through regularly published reports but also at other opportunities throughout the year. Use the performance measurement process to help address issues as they arise. This will enable decisions to be made with the latest available performance data. It will also enable program managers to obtain performance information tailored more closely to the particular issue at hand.
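The "disaggregate by demographic characteristics" advice above is simple to sketch in code. Here is a toy Python tabulation; the group names and outcome records are invented purely for illustration, not drawn from the paper:

```python
from collections import defaultdict

# Invented program records: (demographic_group, outcome_achieved).
records = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals = defaultdict(lambda: [0, 0])  # group -> [successes, participants]
for group, ok in records:
    totals[group][0] += int(ok)
    totals[group][1] += 1

# A gap between group rates is exactly the equity signal the paper says to look for.
for group, (succ, n) in sorted(totals.items()):
    print(f"{group}: {succ}/{n} participants achieved the outcome ({succ / n:.0%})")
```

An aggregate success rate of 50% would hide the difference these two made-up groups show once broken out.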

  • MOUSETRAPS FOR FLAWED DATA

    It may seem a little heavy-handed, but for years now we've been writing about the endless reams of bad data that are used to manage and to make policy. For the most part, we've pointed to issues that require careful examination of the information to determine whether it's trustworthy. But, as time has passed, we've come across a great many signals, easily spotted and identified, that indicate information should be scrutinized. Here are a half dozen examples:

1) Beware comparisons between absolute figures that come from different-size cities or states. If, for example, something criminal happens to hundreds of people in California, that may not be nearly as alarming a situation as when the same thing happens to dozens of residents of Wyoming or North Dakota.

2) Sometimes reports or articles use numbers that are so precise as to be unbelievable. It seems to us that when project spending is reported as $1,436,432.15, there's no legitimate way to figure costs out to cents, dollars or even hundreds of dollars. A tight range is often more useful and believable.

3) Speaking of ranges, it's self-evidently problematic when an expense is reported as somewhere between $100 million and $500 million. Either not enough due diligence has been done, or the estimators are living in the Land of the Wild Guess.

4) If you're relying on data for which no assumptions are provided, dig deeper. When discount rates vary between two state pension plans, it's entirely possible that their liability figures are not comparable.

5) Watch out for figures that are huge beyond common sense. Some years ago, there was a lot of talk about one million children being abducted each year. Living in New York City, we saw news reports full of the story of just one little boy, Etan Patz, who was last seen at a bus stop in lower Manhattan. How could it be that, if such huge numbers of children were disappearing, one child was getting so very much attention? It turned out, according to the Denver Post in 1986, that the "national paranoia" raised by the one million figure wasn't really reflective of scary men luring children into their cars with candy, but rather of children taken in custody battles. (And the often-repeated one million figure was also an exaggeration. In 2017, the Justice Department reported that the number of serious parental abductions is closer to 156,000 annually, of which about 30,500 reach the level of a police report.)

6) Information that is self-reported to the federal government by states, or by cities and counties to the states, can be questionable. A question like "does your city use performance information?" can get "yes" answers regardless of differing definitions and degrees of use. In the past, a big-city mayor told us that his community used measurements to make decisions about road conditions. When we pursued the question, it developed that the only data the city had was an accumulation of citizen complaints about potholes. #StateandLocalDataManagement #StateDataQuality #CityDataQuality #CountyDataQuality #SuspiciousGovernmentData #FlawedGovernmentData #BadData #InaccurateGovernmentData #StateandLocalGovernmentDataHazard #StateLocalGovernmentDataHazard #AvoidingInaccurateStateGovernmentDataComparisons #AvoidingGovernmentDataErrors #HowToIdentifyBadGovernmentData #TrustAndFaultyGovernmentData #StateGovernmentDataAssumptions #StateLocalPensionPlanDiscountRate #PublicSectorDataCaution #DataStandardization #SelfReportedDataSkepticism #SpottingCommonGovernmentDataMistakes
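The comparison problem in point 1 and the range problem in point 3 both come down to simple arithmetic that's easy to automate. Here's a minimal sketch; the function names, thresholds and the incident counts are hypothetical illustrations, not drawn from any real dataset:

```python
def per_capita_rate(incidents: int, population: int, per: int = 100_000) -> float:
    """Normalize an absolute count so different-size jurisdictions compare fairly."""
    return incidents / population * per

def range_is_suspicious(low: float, high: float, max_ratio: float = 2.0) -> bool:
    """Flag an estimate range so wide it suggests guesswork (e.g., $100M-$500M)."""
    return high / low > max_ratio

# Hundreds of incidents in California vs. dozens in Wyoming:
ca = per_capita_rate(400, 39_000_000)   # roughly 1.0 per 100,000 residents
wy = per_capita_rate(36, 580_000)       # roughly 6.2 per 100,000 residents
assert wy > ca  # the "smaller" absolute number is the more alarming rate

# A $100 million-to-$500 million estimate spans a 5x spread:
assert range_is_suspicious(100_000_000, 500_000_000)
```

The `max_ratio` cutoff is a judgment call, of course; the point is that the red flags described above can be expressed as checks rather than left to the reader's intuition.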

  • THE ACADEMIC-GOVERNMENT COMMUNICATIONS GAP

    We spend a lot of time talking with government practitioners and a lot of time talking with academic researchers. We've often wondered about the barriers that keep them from talking with one another as much as they should. That's why we've been particularly charmed by the Office of Strategic Partnerships in North Carolina, which we wrote about in an April 16 column for Route Fifty. The effort there has been to close the gap that often exists between the multiple academic researchers in a state and the government officials who are often addressing the same topics – just in different ways. Here's what Jenni Owen, the director of North Carolina's Office of Strategic Partnerships, told us last month. "People in academia who say they want to have an impact on policy really mean it. And people in government who say they want evidence and data to inform their decisions also really mean it. But the way they each go about doing that is often clunky." Witness her experience in a recent meeting with about 100 faculty members and doctoral students. "How many of you are pursuing or have something on your research agenda that you think has implications for public policy?" she asked. Everybody raised their hand. Next question: "How many of you have talked to anybody in government ever about the topic you're pursuing or thinking about pursuing from a research perspective?" And one person raised a hand. Cautiously. The problem is not one-sided. Government officials often have little time to seek out good research themselves and no easy way to know what's going on in the multiple institutions of higher learning within the borders of their state or beyond. North Carolina has set up formal ways for government departments to communicate their research needs to universities across the state. 
But Owen, who has vast experience in both academia and government, and others we spoke with also cited ways the relationship could be improved relatively simply.

1. While state and local governments can certainly do a better job communicating the different initiatives that they're working on, researchers can also do more to actively learn about their own governments. Owen and OSP often advise researchers to watch the press releases from an agency, for example; pay attention to interim committee study groups; learn about organizational structure; and look at departmental goal-setting, strategic plans and areas of cross-department collaboration.

2. Advice also focuses on initiating communications at the beginning of a research project, not when it's finished. This requires knowing about new or ongoing government initiatives that might connect with research, and touching base at an early stage with a couple of sentences about the relevance of the research to the initiative.

3. That means that before communicating, it's important to understand areas of jurisdiction, a bit about departmental organizational structure and some basics about operations. If the research is about county jails, it's likely an error to focus attention on the state Department of Corrections, which may have limited responsibility in that area. Likewise, it sends a bad signal to inquire whether someone is likely to be re-elected when they're actually appointed to their position.

A few other tips for researchers who want to see their work have impact: Don't wait until a journal article is published to send it to a government official and hope that they read it without explanation as to why it would interest them. Make sure that the journal article is relevant to the work the official is doing, and include a sentence as to why you're sending it to them. Understand who the players are below the top level. 
Communications don't have to go to the Cabinet Secretary or the Commissioner. As Owen told us, "Don't assume that the leader is where you need to start." Consider ways to collaborate. University research may overlap with a government's own specific research needs; see if the research you're doing can also address those needs. Says Owen: "I dream about the day when a researcher says to a government entity, 'I'm going to go interview new parents in rural areas. Is there even just one question you'd like me to ask?' "These are not just small gestures of partnership, but they are also substantive. It shows that researchers are thinking about policies and programs and applications of research learning for government decision-making. This can be a game-changer for the partners." One more thing: The gap between professions, the difficulty in communicating, and the caution with which each side approaches the other are things we run into all the time. Our book, "The Little Guide to Writing for Impact," was published last month. It was written in collaboration with Donald F. Kettl and motivated by our mutual sense of a pervasive frustration among academics, editors and publishers that different styles of writing and communication were often standing in the way of getting important research findings across to the practitioners who could put them into action. 
#AcademicGovernmentCommunicationsGap #NorthCarolinaOfficeOfStrategicPartnerships #JenniOwen #StateandLocalGovernmentResearch #StateGovernment&UniversityResearchLink #GovernmentAcademicCommunication #TipsforAcademicResearchers #StateandLocalGovernmentManagement #University&StateGovernmentRelations #OptimizingUniversityResearchforGovernmentDecisionMaking #UniversityGovernmentCollaboration #ImprovingResearchPartnerships #OptimizingGovernmentResearchNeeds #StateGovernmentResearchNeeds #ApplyingUniversityResearchToPolicy #UniversityGovernmentResearchGap #LinkingAcademicResearch&StateGovernment #B&GReport #RouteFifty #LittleGuidetoWritingforImpact #DonaldFKettl

  • WANT TO WRITE SO THAT OTHERS CAN USE WHAT YOU’VE WRITTEN? HERE’S YOUR CHANCE!

    In 1799, when Napoleon was bearing down on Egypt, a stone slab was discovered that came to be called the Rosetta Stone. It bore text in three forms, including Egyptian hieroglyphics, which hadn't been understood since before the fall of the Roman Empire. The written wisdom of the ancient world had been lost for centuries, but the stone made it decipherable. We want to be the modern-day equivalent of the Rosetta Stone (a peculiar aspiration, perhaps, for people instead of rocks). Only instead of making ancient script comprehensible in the modern age, we want to unlock the mysteries of the kind of writing done by people trained in academese for the rest of the world. Toward that end, in collaboration with one of the smartest men we know, Donald F. Kettl, author of 25 books and professor emeritus and former dean of the Maryland School of Public Policy, we've written a new book titled "The Little Guide to Writing for Impact" (Rowman & Littlefield, 2024). The book presents a series of guidelines that will enable readers to successfully frame a policy argument; pitch it to editors; organize the work so that the ideas have real impact; support it with data and stories; find the right publisher; and follow up after publication to ensure that the argument has enduring impact. It's aimed at people who want to write everything from short blog posts through op-eds, commentaries and policy briefs, dissertations, articles for both the popular press and academic journals, and books.

Truth in Advertising: The major point of this B&G Report is to persuade you to:

· Tell others about the book if you think they can make use of it.
· Buy the book yourself.
· Use the book in your classes if you're teaching.

In short, this is the most self-serving B&G Report we've ever written. But we're just vain enough to believe that it can be of genuine use to you, your colleagues, your students, and your friends. 
Here are some comments we’ve received about the book: Donna Shalala, Interim President of The New School and former secretary of the U.S. Department of Health and Human Services, commented that the book is “A little book that will have a big impact on policy. Imagine a whole generation who can clearly communicate great ideas!" Katherine Willoughby, editor-in-chief of Public Administration Review and Golembiewski Professor of Public Administration at the University of Georgia, said that “If you want to author a classic book, have your research published in a premier academic journal, complete an award-winning dissertation, or simply write better, consult The Little Guide to Writing for Impact. This quick read is chock-full of golden nuggets that, if engaged, will boost your influence on people and policy through your writing.” Chris Morrill, the Executive Director of the Government Finance Officers Association, commented that “With notes of Strunk and White’s Elements of Style, Barrett, Greene, and Kettl have gifted us a highly practical guide for communicating in a hyper-distracted world. Even with an array of new digital tools and artificial intelligence, at core communicating involves crafting a clear, concise, and compelling message. Barrett, Greene, and Kettl gives us the tools to do so.” Finally (actually there are more, but we’re running out of space), Trevor Brown, dean of the John Glenn College of Public Affairs at The Ohio State University, wrote that "If you read it carefully and take its lessons to heart, this little book can have a big impact on the quality of your writing. Useful, readable, and above all sensible, it's pitched to scholars and policy wonks who want to reach a broad audience, but it will be helpful to anyone who puts words on paper and wants them to be read, understood, and to matter." There are two ways for you to purchase this book: Go right to Amazon.com where you’ll find it by clicking here. 
Alternatively, for readers of our website, we're providing a 30% discount on the book. To take advantage of this offer, click here and after registering to make a purchase, enter the code: WF130. #LittleGuidetoWritingforImpact #StateandLocalManagement #StateandLocalGovernment #WritingforGovernmentImpact #WritingforPolicyImpact #AcademicImpactonPolicy #CommunicatingAcademicResearch #AcademicImpactonStateGovernment #AcademicImpactonLocalGovernment #WritingforImpact #KatherineBarrett #RichardGreene #DonaldFKettl #Rowman&Littlefield #AcademicWriting #CommunicatingWithPolicyMakers #WritingGuide #Barrett&Greene #B&GReport #NewBarrettGreeneKettlWritingGuide #UniversityofMarylandSchoolofPublicPolicy

  • THE PERILS AND PRICE OF SPEEDY COMMUNICATIONS

    We remember the exciting day when we bought our first IBM PC and a printer for $7,500 back in 1981. (Yes. You read that number right.) Our exciting new computer had no hard drive, and its operating system existed on a floppy disk. Years later, after a few computer upgrades, we heard about this thing called a gigabyte. That seemed like an unimaginable amount of space – probably enough to store all the information in our world. It wasn't so much later that we had scores and then hundreds of gigabytes on our desks. These days we're all hot and bothered about the ways we can use AI. So, before we say anything more about the various problems that come along with advances in communications technologies, let it be clear that we're thoroughly captivated by technology and hope we always will be. But when it comes to communicating with one another, we are frustrated by the losses we've suffered each time something new comes along. Back in the days when fax machines were the brave new world, lots of time was saved by sending letters instantaneously all around the world. But soon afterwards, every organization had a fax machine, with the numbers on their business cards (those were the days when people still used business cards), and all kinds of hitches began to appear. For example, mass mailings (A free trip to the Bahamas!) started to clog up fax machines. Faxes often didn't come through. They got ignored as they piled up in a central spot, awaiting someone to bring them to their rightful recipient. But that was only the beginning of a downward spiral. E-mails are another example. Soon after we adjusted to communications arriving this way, we began to miss the old-fashioned mail system. Even more, we began to miss the old-fashioned telephone, which allowed you to decipher, through the tone of someone's voice, whether they were sincere or sarcastic. Of course, e-mails have made the world a speedier place. 
People can exchange information and documents quickly – a major plus for us as researchers. But the negatives have mounted up. For one thing, e-mails have led to an unhealthy 24/7 world. E-mails pop up in the middle of the night, and they know no such thing as weekends. For a while, we worked with someone who would send out e-mails on Sunday afternoons beginning by saying "Hope you've had a nice weekend," under the assumption that recipients must be ready to get back to work on Sunday. Then there's the lack of thought that many people put into what they send by e-mail. People in a rush can sound terse and even rude in an e-mail, even when that wasn't their desire. Most people have learned that the use of all capital letters comes across like yelling, but that's a lesson that bears repeating. It's surprising how little care is taken in getting names spelled properly. Or even using the right names in the first place. Our little company is called "Barrett and Greene, Inc." You might be surprised to know how many notes we get (and these aren't mass mailings either) addressed to "Dear Barrett." Of real frustration is the desire to move so quickly through seemingly endless stacks of e-mail that people never read the entirety of notes they receive, necessitating a long exchange that would have been avoided with five minutes on the phone. Following is the kind of thing we (and we suspect you) go through regularly:

Us: "Thank you for your willingness to work with us. Can we talk on March 31, and if so, what time would be good for you?"
Them: "Yes, the 31st will work."
Us: "Terrific. Just let us know what time will work for you and the best way to reach you."
Them: "How's 3:30?"
Us: "That will work fine. But did you mean Central Time or Eastern Time? And how should we reach you?"
Them: "Sorry, I should have been clearer. I meant Central Time."
Us: "That'll work well. But could you please send us the best way to reach you?"

Then we wait for two days and write again asking for the best way to reach the other party, only to get an automatic reply saying they're out of the office for the rest of the week. Worse yet, from the point of view of style and tone, is the growing number of people who are relying on texts, which often include acronyms that require us to search the internet for their meaning. We got used to LOL a long time ago. And we picked up on IMHO, too. But the acronyms keep coming. Not long ago we got a three-letter text that just said "NVM." It turned out it meant "never mind," which pertained to a prior text. And if style and tone can be lost in e-mails, they entirely disappear in texts. As far back as 1546, writer John Heywood wrote "Haste maketh waste." Some things never change. #ChangingCommunications #ChangingTimes #EmailFrustration #EmailMiscommunication #TextingFrustration #B&GReport
