Our pledge to the people of Kansas: We don’t use artificial intelligence to write stories or columns

Posted May 11, 2026

How will we see or understand journalism if artificial intelligence serves us the news we read, wonders our columnist.

Other journalism outlets have wrangled with artificial intelligence. Kansas Reflector does not use it to produce stories or columns. (Illustration by Eric Thomas for Kansas Reflector) 

Let’s get straight to the point.

Kansas Reflector has not run and will not run stories or columns created by artificial intelligence. We help people understand the world in which they live through journalism written by other people. Different outlets may make different choices, but we want our readers to know that the words they read and the images they see come from their fellow Kansans.

I’m stating this so bluntly now because States Newsroom, our parent organization, released its official AI policy last week. From one perspective, the policy simply codifies current practices. From another, it takes a stand against the waves of meaningless slop flooding social media platforms and websites.

It says: “States Newsroom does not publish stories or commentaries generated by AI. We do not publish any images, videos or audio clips created or altered by AI. If the use of AI is the point of a story in question, AI-generated content may be used, but will be prominently labeled and explained.”

There you have it.

News outlets across the nation have debated the use of generative AI in recent months. The conflict often occurs between reporters — those who report and write stories — and the editors and executives running publications.

McClatchy, the newspaper chain that includes the Wichita Eagle and Kansas City Star, has faced pushback from journalists after deploying a “content scaling agent” meant to produce summaries and alternate versions of stories. Reporters have asked to have their bylines removed from the AI creations.

Similarly, artificial intelligence has become a sticking point in union negotiations at both the industry-leading New York Times and ProPublica.

I can understand why reporters might want to use AI tools for research. As a rule, journalists enjoy experimenting with new technology to uncover skulduggery. But that’s behind the scenes. Who on earth really wants to ingest AI-produced content?

We all know that generative artificial intelligence models can churn out text or images or video. I’ve written multiple columns about its ability to do so, alternating between amusement and skepticism. But for all the technology’s supposed potential, I haven’t yet met a person who says: “That’s just what I need. Give me more of that.”

Instead, I’ve heard the opposite.

Audiences recoil at clunky AI widgets bolted onto search engines, websites and operating systems. They accuse awkward writing and outrageous videos of being created by AI. Teenagers and preteens have even started calling fake or inauthentic statements “AI.”

States Newsroom’s policy adopts a properly cautious stance: “While AI offers many benefits, we believe it raises many concerns related to accuracy, privacy, and the misuse of copyrighted material in training AI systems. It may also be used to spread misinformation and create deceptive words, images and sounds.”

Journalists and news outlets are in the truth business. New technology should help us locate and spread truth, not hide or distort it.

After following AI closely for the past four years, I’ve come to believe that ardent advocates and outraged opponents both miss the mark. I strongly doubt claims that AI will either “transform the world for the better” or “cause human extinction.” I suspect it will become just another piece of technology.

Depending on how you define AI, it’s been part of our lives since the first automated spell checkers. It pops up in popular computer programs such as Photoshop. Radiologists have come to incorporate its insights while doing their jobs. Our cellphones employ AI shortcuts for sending text messages, taking pictures or asking for directions.

In other words, while I don’t want AI writing my columns, I’m happy for the help on spelling “conscientious” correctly.

Our policy acknowledges this truth: “We allow limited use of AI for certain aspects of the editorial process, including but not limited to transcriptions of audio interviews, analysis of data sets, routine formatting tasks, brainstorming, and review of large documents or videos of lengthy meetings, on a case-by-case basis.”

Notably, this does not absolve reporters or columnists of any responsibility in the writing process. We should verify that transcribed quotes are correct. We should double-check data analysis. We should watch the flagged sections of meetings.

Above all, human beings bear responsibility. If we get something wrong, it’s our own fault.

I’m sure this won’t be my last column about artificial intelligence. While I’ve gone on the record as calling it just another piece of technology, I could be wrong. Yet I strongly believe that Kansas Reflector’s readers want to hear from other Kansans, not output from neural networks dispersed across data centers.

We live in this state, alongside you. We sleep and wake, work and play, eat and exercise. Our words come from our own minds, transmitted by our fingers to a keyboard, and then to you. They might not be perfect, but they’re ours.

Clay Wirestone is Kansas Reflector opinion editor. Through its opinion section, Kansas Reflector works to amplify the voices of people who are affected by public policies or excluded from public debate. Find information, including how to submit your own commentary, here.
