
B&G REPORT.


POTHOLES ON THE AI ROAD

We’re not scared of AI. It doesn’t take much historical memory to know that pretty much any new technology brings out fear and trepidation.


In fact, according to TechRadar, “When the Stockton-Darlington Railway opened in 1825, people feared the worst: the human body, surely, wasn't made to travel at incredible speeds of 30 miles per hour. People genuinely believed that going that quickly would kill you in gruesome ways, such as your body melting.”



Over a century later, many were convinced that the arrival of television was going to ruin children’s eyesight and destroy the movie business altogether. As we noted in one of the two biographies we wrote about Walt Disney, film producers regarded television as “the monster,” and viewed it “with suspicion fearing it would keep audiences out of movie theaters.”


And now here comes AI. We turned to ChatGPT for its view on this issue. (Going to the source seemed only fair.) It told us that “As AI systems become more sophisticated, there’s a risk that they may operate beyond human control or oversight.”


Anyway, we feel pretty confident that while AI may have some unfortunate consequences, it’s not the stuff of nightmares. In fact, we’ve tended to write about many of the ways in which AI can help governments run more efficiently and effectively.


But our eyes haven’t been closed to the downsides: the hazards of AI both in government and as a research tool.


Here are five of the things that are on our minds:


  • Many governments are working on policies designed to restrain improper uses of AI, and that’s a good thing. But based on years of watching how government regulations work, we worry that there’ll be insufficient oversight to make sure that the policies actually have been implemented.

  • There’s an enormous amount of misinformation available in the AI-verse. We turned to ChatGPT to ask about ourselves, and after saying a bunch of nice things, it also mentioned that “they have authored or co-authored books such as ‘Balancing Act: A Practical Approach to Business Event Based Insights.’” Having never heard of this book, we were sure we hadn’t written it, and on investigation it turns out to have been written by Kip Twitchell, who works with IBM Global Business Services.

  • It’s not only the bad information that’s the problem; it’s the information that isn’t even available through AI that can matter. Artificial intelligence, after all, can only know what it can find by foraging through the Internet, and a great deal of worthwhile knowledge hasn’t been digitized at all. This can include current information from smaller governments, remote geographic areas, groups of people whose sentiments or lives tend to go undocumented, or magazines, newspapers and journals that no one has ever made available in digital form.

  • AI has the capacity to come to conclusions based on hidden biases. As an October 2023 brief from IBM pointed out, “Underrepresented data of women or minority groups can skew predictive AI algorithms. For example, computer-aided diagnosis (CAD) systems have been found to return lower accuracy results for black patients than white patients.”

  • Also alarming to us is the commonplace notion that AI can make decisions for governments. It can certainly be a valuable tool, but as smart as AI may be right now (we won’t begin to think about the future), it can only serve as an advisor, not as a decision-maker.

 

For years now, we’ve advised against using the term “performance-based budgeting,” because we don’t believe that performance measures can be the sole guide to good budget decisions. We’ve always made it a point to talk about “performance-informed” budgeting.

 

The same goes for AI’s potential to help with decisions. It can inform them but shouldn’t be expected to make them.



