PROCURING FOR THE FUTURE.
AI HAZARDS AND GUARD RAILS
Since GenAI first appeared on the scene in late 2022, both benefits and hazards have been chronicled in multiple places, including this website. Advantages of AI play out on a daily basis, providing cities and counties with quicker results, increased staff efficiency, and improved government-resident communications.
But as generative AI use took off, media reports surfaced of fabrications delivered in response to prompts (known as hallucinations) and factual errors that were embarrassing and sometimes costly for governments and their vendors.
“If you don’t have a strategy or plan in place for how you deal with AI hazards, you’re going to get in trouble very fast,” says Brian Funderburk, an advocate for the responsible use of AI in government, and a retired city manager in Texas with 40 years of experience in local government.
The litany of problematic uses of AI seems to grow every day as its use expands. Just for starters, there have been fictitious precedents cited in legal cases. Chatbot errors have also surfaced with some frequency, notably in the much-heralded chatbot for businesses that New York City launched in the fall of 2023, which was roundly criticized the following spring for giving business callers incorrect information and sometimes advising them to engage in illegal behavior.
Multiple companies have had to deal with the consequences of AI mistakes, including Deloitte, which agreed to refund the Australian government the equivalent of $290,000 in U.S. dollars for a report “that was littered with apparent AI-generated errors,” according to an AP News report.
Although the hallucinations that AI can conjure have diminished to some extent, the continuing threat of errors requires extensive double-checking and triple-checking by the humans who bear responsibility for what’s produced. “It will be a while before we can trust AI unconditionally,” says Funderburk, who is currently Vice President and AI Safety Officer at Civic Marketplace.

Establishing Guard Rails
What’s critical is “making sure you’ve got guard rails in place,” says John Matelski, the Chief Information Officer at the National Association of Counties (NACo). “There need to be clear policies to ensure that there is human review happening.”
Those policies need to address acceptable use and prohibit entering confidential data into an AI tool. Without that, he sees the risk “of exposing sensitive data related to residents.”
Rules written into policies can ensure that no AI answer goes out without verification of sources, and that none is released without a human taking responsibility, in keeping with the commonly used phrase “keeping humans in the loop.” Entities can also help to contain the content that AI explores, for example by limiting it to a government’s own contracts, statutes and policies.
Policies can also outline a method for classifying data: whether it belongs in the public domain, should stay in internal documents, or must be treated as confidential or restricted.
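To make those rules concrete, here is a minimal sketch, in Python, of how a release gate built on such a policy might work. The classification labels, the approved-source list and the reviewed_by field are illustrative assumptions, not a description of any particular product.

```python
from dataclasses import dataclass, field

# Illustrative data classifications a policy might define.
PUBLIC, INTERNAL, CONFIDENTIAL, RESTRICTED = "public", "internal", "confidential", "restricted"

# Hypothetical allow-list: the AI may only draw on the government's own materials.
APPROVED_SOURCES = {"city_contracts", "city_statutes", "city_policies"}

@dataclass
class DraftAnswer:
    text: str
    sources: list[str] = field(default_factory=list)
    classification: str = INTERNAL
    reviewed_by: str | None = None  # the human taking responsibility for the answer

def may_release(answer: DraftAnswer) -> bool:
    """Apply the guard rails: verified sources, contained content, human sign-off."""
    if not answer.sources:
        return False  # no answer goes out without verification of sources
    if not set(answer.sources) <= APPROVED_SOURCES:
        return False  # content must stay within the government's own records
    if answer.classification in (CONFIDENTIAL, RESTRICTED):
        return False  # classified material never leaves through an AI answer
    return answer.reviewed_by is not None  # a human must sign off
```

Whatever policy an entity actually adopts, the point is the same: the checks are explicit, and a named person, not the software, releases the answer.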
The Challenge for Small Governments
While small cities and counties may lack staff, internal expertise and financial resources, that doesn’t remove the need to establish solid AI policies, according to multiple AI and data governance experts. “Smaller jurisdictions face heightened exposure due to limited IT staffing, limited legal capacity and fewer formal data governance structures,” says Micah Gaudet, a frequent speaker on AI topics as they relate to local government and the author of “Fragile Systems: An Ecological Approach to AI in Government.”
Even the smallest governments can’t simply ignore AI, though some may think they can. One of Gaudet’s biggest concerns is “shadow AI.” Although a government may tell employees it has banned AI, use doesn’t stop. Rather, “it often shifts to personal devices, home accounts or other workarounds that operate entirely outside municipal networks and monitoring controls,” he says.
Fortunately, for many of the challenging issues that occur with AI – even in smaller entities – there are also solutions. Here are some of the areas to consider.
Internal Data Quality Issues
To avoid internet misinformation, many governments limit the reach of AI to their own websites, statutes and public records. But individual government data is often problematic too, and an AI search within a single government can easily yield factual errors due to out-of-date information, misinterpretation of regulations or policy, or inconsistent or poorly structured data.
To avoid problems, local governments need to take care of their own digital environment “so that when AI systems surface answers, those answers are grounded in accurate, authoritative municipal sources,” Gaudet says.
This means attention not just to AI governance but to governance of data systems: attention to data quality, and clear policies for inventorying data, identifying source and time information, and keeping websites up to date.
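As a rough illustration of what “source and time information” can mean in practice, the sketch below uses made-up field names to flag inventory records that lack an authoritative source or haven’t been reviewed recently, so they can be fixed before an AI system is allowed to cite them.

```python
from datetime import date, timedelta

# Hypothetical inventory format: each record carries its source
# and the date it was last reviewed.
records = [
    {"id": "parking-ordinance", "source": "municipal_code", "last_reviewed": date(2025, 3, 1)},
    {"id": "permit-fees", "source": None, "last_reviewed": date(2021, 6, 15)},
]

MAX_AGE = timedelta(days=365)  # illustrative freshness rule: review annually

def needs_attention(record: dict, today: date) -> list[str]:
    """Return the data-quality problems that should block AI use of a record."""
    problems = []
    if not record["source"]:
        problems.append("no authoritative source identified")
    if today - record["last_reviewed"] > MAX_AGE:
        problems.append("not reviewed in over a year")
    return problems

for rec in records:
    issues = needs_attention(rec, date.today())
    if issues:
        print(f"{rec['id']}: {'; '.join(issues)}")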
Third-Party Risk
There are thousands of AI companies in the United States, but the marketplace can be fragile, and products and services often depend on an amalgamation of different AI models and platforms, as well as policies and practices that buyers need to understand.
Do governments – particularly small ones – sufficiently appreciate vendor risk?
“Not enough,” says Nick James, founder and principal of WhitegloveAI, a company established to provide governments with “secure, responsible and scalable AI solutions.”
He warns against relying on self-reported information from a vendor and emphasizes the importance of asking a series of critical questions in addition to testing products and references before purchase. “A lot of mistakes can be caught before you make a financial commitment if you ask the right questions,” he says, adding that he thinks “these conversations are not even happening right now.”
Some of the most important questions to ask are:
What’s the background of the founding team of the company and is the founding team still involved?
Is there a retention policy for data that the vendor has access to?
Where is the data kept?
How is it protected?
Is the data housed within the 50 states?
What AI models does the vendor rely on outside of its own application?
Avoiding Bias
Matelski tells the story of a Midwest county that was planning an agentic AI rollout, based on AI purchased from a vendor, to help make bail decisions. But the way the vendor’s AI software analyzed the county’s data lacked transparency, and the tool was suspected of biases that could favor or harm specific demographic groups.
“However they built the tool behind the scenes, it was built with some level of bias, which led the county to reevaluate the tool and how it was used,” says Matelski. The county subsequently modified its purchasing habits by adopting a set of questions for vendors designed to root out potential biases.
Even the prompts that are used to pull information out of an AI tool can be biased. In one city, the prompts a staffer used surfaced in a public records request involving its procurement management. According to an NPR report, “The City of Bellingham (Washington) is launching an independent investigation in response to public records . . . that show a city employee asking ChatGPT for help tipping the scales to ensure a preferred vendor would be awarded a large city contract last year.”
The critical lesson is that governments need to understand more about the algorithms used in an AI product, the mechanisms involved in the analysis, and the biases that may be embedded in historical data accumulated over many years.
Double- and Triple-Checking Results
Beyond those basic categories, a key to successful AI use is double-checking and triple-checking processes and results, with human staffers always bearing ultimate responsibility. Six key suggestions:
When using AI to research a question, reevaluate the prompt itself. Consider whether a lack of context or a bias in phrasing could skew the AI answer.
Use AI platforms that reveal the sources of the information provided and double-check against original sources to make sure that the AI application accurately picked up the facts it’s relaying.
Ask AI to challenge the information it has returned in answer to a prompt, potentially using several different AI platforms to do so, as in the sketch that follows this list.
Rely on humans with knowledge to double-check an AI result and make sure it passes a smell test. Be careful about troubleshooting in an area in which you do not have expertise.
Use AI output as decision support, not as the decision-maker.
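For teams that want to operationalize the cross-checking steps above, here is a minimal sketch. The ask callables stand in for whatever AI platforms an agency actually uses, and the crude agreement test is an assumption for illustration, not a prescribed workflow; a human with subject-matter knowledge still makes the final call.

```python
from typing import Callable

def cross_check(prompt: str, platforms: dict[str, Callable[[str], str]]) -> dict[str, str]:
    """Pose the same prompt to several AI platforms and collect the answers."""
    return {name: ask(prompt) for name, ask in platforms.items()}

def flag_disagreement(answers: dict[str, str]) -> bool:
    """Crude agreement test: if the answers differ, require extra human review."""
    return len({a.strip().lower() for a in answers.values()}) > 1

# Usage sketch (hypothetical callables):
# platforms = {"platform_a": ask_a, "platform_b": ask_b}
# answers = cross_check("Summarize our sign-permit rules.", platforms)
# if flag_disagreement(answers):
#     print("Answers diverge; route to staff review before any use.")
```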
This article was supported by and written in partnership with Civic Marketplace.
#StateLocalGovernmentGenerativeAIPolicyandManagement #StateandLocalArtificialIntelligenceManagement #StateandLocalArtificialIntelligenceRisk #StateandLocalArtificialIntelligenceHazards #StateandLocalArtificialIntelligenceGuardrails #StateandLocalAIGovernance #CityArtificialIntelligenceManagement #CountyArtificialIntelligenceManagement #CityGovernmentTechnologyManagement #CountyGovernmentTechnologyManagement #AIandFactualErrors #StateandLocalArtificialIntelligenceManagementGovernance #CityandCountyAIGovernance #StateandLocalTechnologyManagement #StateandLocalAI #AIandHumanResponsibility #AIandHumansInTheLoop #ThirdPartyRiskAndAI #LocalGovernmentProcurement #CityandCountyProcurement #VendorsAndStateandLocalArtificialIntelligenceRisk #AvoidingArtificialIntelligenceBias #ArtificialIntelligenceHallucinations #StateandLocalArtificialIntelligenceBias #StateandLocalHumanOversightOfAI #StateandLocalProcurementManagementAndAI #BrianFunderburk #JohnMatelski #NickJames #NACoCIOJohnMatelski #WhitegloveAI #CivicMarketplace #BarrettandGreeneInc