Loosey-goosey use of AI

The Editor
February 19, 2026

We need to talk about government’s loosey-goosey approach to AI, because it is starting to show, and not in a good way.

More and more official advertisements are popping up that are obviously AI-made, and not the polished, intentional kind. The rushed kind. Spelling errors, weird phrasing, graphics that look comical when the message is supposed to be serious. The end result is that government notices are starting to feel unofficial, and that is a problem government cannot afford to create for itself.

Government communication has one job: be clear, be credible, and leave no doubt. When a ministry posts something, the public should not have to pause and ask, “Is this real?” That question alone tells you something has already gone wrong. People should not be in WhatsApp groups debating whether an announcement was legitimate or a joke. Yet that is exactly where we are heading when official posts start looking like they were thrown together in five minutes.

And then there is the “don’t-carish” way the country’s coat of arms is being handled. This one really gets under the skin. The coat of arms is not clipart. It is not a decoration you slap onto a background to make something look official. It is one of the clearest symbols of legitimacy we have, and lately it is being displayed in ways that do not even look like our coat of arms. Sometimes it is placed on our red, white, and blue in a way that looks like it could belong to another country. That is what happens when AI is used without care, and when the person using it does not understand that official symbols require precision.

People might think this is a small issue, just a graphic here or a typo there. It is not small when the post is announcing a deadline, a requirement, a public tender, a registration date, an appointment process, a scholarship timeline, or a health advisory. If official notices look unserious, the public will treat them as unserious. People will scroll right past, assume it is fake, or assume it is “one of those AI things.” Somebody will miss a deadline because they thought the announcement was not real. Then what? Then government will be upset with the public for not complying, while the public will be upset with government for communicating like it did not care. If you are posting official information, you do not get to be cute.

Yes, AI can boost productivity. It can speed up drafting, help with layouts, generate concepts, and make a small team move faster. That is all true. But AI does not replace standards. It does not replace training. It does not replace supervision. If anything, it makes standards more important because it can also multiply mistakes. AI can triple output, and triple errors right along with it, if nobody is checking.

Other places have already learned this lesson, which is why responsible AI use in government is such a big issue in Europe and elsewhere. They are putting rules around how AI can be used, who is allowed to use it, what has to be disclosed, what needs human oversight, and how the public is protected from confusion and harm. It is not because they hate technology; it is because they understand that when the government speaks, it has to sound and look like the government. The public should never have to second-guess.

That is why this cannot be left to personal preference inside departments, where one office is careful and another is experimenting like it is a school project. Government needs AI standards for official communication, and those standards need to be enforced across departments. Not suggested. Enforced.

At minimum, that should include:

• No AI-generated coat of arms, ever. Use the official, approved files, every time. If the correct version is not available, fix that first.

• A simple approval process for public posts. One person creates, another person checks, a final person approves.

• A basic checklist before anything goes out: spelling, dates, times, requirements, contact info, and whether the wording is plain enough for the public.

• Clear rules on what AI can be used for: drafts and internal work, fine. Official announcements and visuals that signal legitimacy, handle with strict controls.

• Training for anyone using AI in the communications workflow. If you are going to use the tool, learn the tool, including how to prompt properly and how to verify what comes out.

And yes, hopefully the government’s Digital Transformation project takes this seriously. Digital transformation is not only about moving services online. It is also about building trust in how government communicates in a digital space. A public that doubts what it is seeing will not engage properly, will not comply consistently, and will not feel confident in the systems being rolled out. That is the opposite of what digital transformation is supposed to achieve.

Government does not need to stop using AI. It needs to stop using AI like it is harmless. If government wants the public to take its notices seriously, then those notices have to look serious, sound serious, and feel unquestionably official. Right now, with the loosey-goosey AI approach we are seeing, government is moving in the opposite direction.
