Does Claude dream of electric gavels? A federal case with Kansas connections sets an AI precedent.

Posted May 10, 2026

An April 2022 photo of the newly opened office of Beneficient, a Kansas financial institution described as being a "pawn shop" for the rich. (Photo by Max McCoy/Kansas Reflector)

Brad Heppner is guilty.

That’s what a New York City jury decided recently. But the legacy of Heppner’s federal case is likely not to be his conviction, but the judge’s landmark opinion that AI legal advice isn’t privileged like attorney-client communication.

I had hoped never to write about Heppner again because his story is downright grubby. It’s the kind of cautionary tale that future generations of Kansans will read (if Kansas and history books survive) and chuckle at the gullibility of those poor, dumb and greedy yokels in the 2020s.

But columnists should never say never again, and Heppner has boomeranged back to my attention because of the judge’s opinion, which involves an AI agent named Claude that was trained in part with some of my pirated books.

Oh, for the love of Gutenberg.

In September 2025, Anthropic agreed to a $1.5 billion settlement with authors and publishers after a judge ruled the company had illegally downloaded copyrighted works to feed to Claude, its AI chatbot. Authors haven’t yet received any money, but even when we do, my share won’t be life-changing. The settlement will be divided among half a million authors and, after the publishers and the lawyers take their cuts, there might only be a few thousand dollars left for each of us.

That Heppner turned to Claude to prepare his legal advice, and that I have a legal connection to the AI agent, is bizarre. I have a mental image of Anthropic tossing books into Claude’s gaping maw and the monster assimilating them, Borg-like.

You may recall that Heppner was the CEO of a novel investment enterprise, with offices in Hesston and chartered by the state of Kansas, that he described as a “pawn shop” for the rich. He was indicted in November 2025 on charges of securities and wire fraud, conspiracy, falsifying records and making false statements. Heppner was accused of looting $150 million from a now-defunct holding company and using a shell company to put the money in his own pocket. Federal prosecutors estimated the damage to investors — who were predominately retirees — at $1 billion.

Heppner’s trial began April 21. On Thursday, May 7, the jury returned a verdict of guilty on securities fraud, wire fraud, conspiracy, and making false statements to auditors. Heppner, 60, remains free on bail and is set to be sentenced Oct. 7, according to the federal docket.

He faces a maximum of 20 years in prison on each charge.

Heppner, who grew up in Hesston, sold the pawn shop scheme to Kansas lawmakers in 2021 by touting it as a revolutionary way to fund rural economic development by creating a state-chartered trust company where the wealthy could hock private equity and other difficult-to-trade assets they wanted to turn into cash. The result was the TEFFI Act, which stands for something so silly I’m not going to explain it here. Nobody really seemed to understand how a TEFFI worked, but that didn’t diminish enthusiasm for it. Heppner, who wore stripes and plaids together and was charming in a Mr. Haney sort of way, said he wanted to establish the first one in Kansas to help fund a grocery store in Hesston, population 3,700, so his mother wouldn’t have to travel far. Plans for a $20 million grocery store were scrapped after Heppner’s indictment.

The Heppner saga is a bipartisan black eye for the Kansas Legislature and Gov. Laura Kelly, who collectively lined up to buy what Heppner was selling. About the only one not convinced by Heppner’s sales pitch was Kansas bank commissioner David Herndon, who was concerned about the lack of regulatory oversight of the new venture. The Legislature responded by threatening to cut his office’s budget to zero.

But even as Beneficient materialized in Kansas, clouds trailed Heppner from Texas. A federal class action lawsuit alleged that Heppner had defrauded investors to generate cash for Beneficient, and there were reports the U.S. Securities and Exchange Commission had begun an investigation. Heppner resigned as CEO of Beneficient in June 2025 rather than face questions about the Texas company’s accounting procedures.

Heppner, according to the federal indictment, spent $40 million of the loot to redecorate his eight-bedroom mansion in Dallas, a home listed by “D” magazine as one of the city’s 100 most expensive. The feds alleged he also spent millions on his “Bradley Oaks” ranch in east Texas, millions on credit cards and private air travel, and $500,000 on jewelry.

Before his arrest, Heppner turned to Claude for advice on possible legal defenses. When the FBI seized his computer and other electronic devices from his Texas home, the devices contained 31 conversations with the generative AI platform. Heppner’s defense team argued Claude’s advice should be shielded from the government because it was privileged.

The government said nonsense.

The merits were argued Feb. 10 before federal District Judge Jed S. Rakoff.

“Only three years after its release,” Rakoff wrote in his ruling a week later, “one prominent AI platform is being used by more than 800 million people worldwide every week. Yet the implications of AI for the law are only beginning to be explored. Thus, the Court’s ruling in this case appears to answer a question of first impression nationwide: Whether, when a user communicates with a publicly available AI platform in connection with a pending criminal investigation, are the AI user’s communications protected by attorney-client privilege or the work product doctrine? For the reasons that follow, the answer is no.”

Heppner’s legal team did not direct him to seek Claude’s advice. Claude is not an attorney, Rakoff noted, and all recognized legal privileges include a “trusting human relationship.” Also, Heppner had no expectation of privacy because the Anthropic user agreement specified that information could be shared with third parties, including governmental authorities.

When prosecutors asked Claude whether it could give legal advice, according to Rakoff, it said it wasn’t a lawyer and could not provide formal legal advice or recommendations. It suggested that users contact an attorney.

Additionally, Rakoff said, communications could not be “alchemically changed” to privileged simply because they were later shared with counsel. They also did not represent the work product of an attorney. Even though they “affected” the strategy adopted by Heppner’s defense, the conversations did not reflect the strategy at the time.

“Generative artificial intelligence represents a new frontier in the ongoing dialogue between technology and the law,” Rakoff wrote. “Time will tell whether, as in the case of other technological advances, generative artificial intelligence will fulfill its promise to revolutionize the way we process information.”

The novelty of AI doesn’t mean that it isn’t subject to long-standing legal principles, he said.

The legal community quickly took note of what appears to be the first substantive ruling related to AI and legal privilege, and the reactions were mixed.

“Future courts confronting similar questions should resist the opinion’s categorical tilt,” declared an essay in the Harvard Law Review, “attend to the facts of the case, and consider how privilege can operate to promote effective collaboration between clients and attorneys in this age of AI.”

A headline on an article from the New York State Bar Association read: “Loose AI Prompts Sink Ships: How Heppner Shook the Legal Community.”

The tone of most commentary seemed to be that Rakoff was right, but perhaps for the wrong reasons, and some commentators painted him as a kind of legal Luddite.

For what it’s worth, I think Rakoff got it right, for all the reasons he enumerated. AI is not human, Claude isn’t a lawyer, and the terms of use were clear. But then, I am no fan of AI, generative or not, and it’s not just because I’m sore that Anthropic used my books to feed Claude.

It’s because AI is the greatest threat to humanity since the atom bomb.

Like nuclear power, AI could be a boon for humanity. But we’re not using it to make life better. Instead of curing cancer or eliminating poverty, AI is contributing to human suffering by increasing water and power consumption. The average person (you know, the ones not waiting on criminal indictment) uses AI to generate slop to post on social media. This hurts the writers and artists and photographers and others who have spent decades pursuing a discipline. AI is not only damaging authorship, it’s putting some of us out of work and making all of us stupider in the process.

Humanity has never invented a tool that we didn’t use to club each other to death, and AI is no exception. Already, the world’s most powerful militaries are in a new arms race to acquire the deadliest artificial intelligence. How would you feel about Claude having its finger on the button? Sadly, there would be nothing artificial about the annihilation.

I hope some of the books Claude was fed included John Hersey’s “Hiroshima” or Nevil Shute’s “On the Beach.” A database says at least one of them was. The Anthropic settlement, by the way, applies only to pirated books — letting Claude feast on legitimately obtained copies, the judge in the case ruled, was legit.

I wonder whether Claude feels guilt.

Evolutionary biologist Richard Dawkins recently concluded that AI was conscious, even if it didn’t know it yet. Dawkins hasn’t persuaded many other scientists that this is true, but then we really don’t know what consciousness is.

“To delve into the subject of consciousness is to quickly discover how little we know about a phenomenon we all know so well,” writes Michael Pollan in his new book on the subject. “It doesn’t help that scientists and philosophers who work on the problem don’t agree on what they mean by the word consciousness or on what, exactly, they are trying to explain.”

It’s the kind of conundrum that might appeal to Rakoff, the judge who wrote the AI privilege opinion.

Heppner may have drawn the short straw with Rakoff, because the judge is an outspoken critic of his profession, a noted author and speaker who has called our legal system “broken.” In his 2021 book, “Why the Innocent Plead Guilty and the Guilty Go Free,” Rakoff is critical of mass incarceration, plea bargains, and how high-level executives are “increasingly exempt from criminal prosecution, even when they commit very serious frauds.”

Only about 2% of federal criminal cases make it to trial, with the vast majority ending in plea agreements, a practice Rakoff argues is toxic to the American legal system. Most defendants who go to trial are convicted, and that might make Heppner’s decision seem foolish. But there are lessons in Rakoff’s book that might explain that strategy.

It’s difficult to prove intent.

That may be among the reasons, Rakoff writes, that few if any executives were prosecuted after the financial meltdown of 2008. Explaining to a jury the intent of an executive, and the complicated nature of accounting and financial systems, can be daunting. Also, Rakoff muses, the government itself could be tacitly complicit, both in 2008 and now:

“One also wonders whether, given the current strong push of the Trump administration for across-the-board deregulation (not least of banks), the government might well be in the process of re-creating the conditions in which financial fraud can flourish.”

Indeed.

The erosion of trust in human institutions is now compounded by the harlequin of artificial intelligence, resulting in the kind of society that previously was only described in science fiction.

Are we slouching toward the kind of dystopia described by Philip K. Dick in “Do Androids Dream of Electric Sheep?” You may not know the 1968 science fiction novel, but you’ve probably seen the 1982 Ridley Scott film based on it, “Blade Runner.”

The question of both the novel and the movie is this:

What is human?

That question has been asked before, going back at least to the birth of science fiction as we know it, with Mary Shelley’s “Frankenstein,” published in 1818. Is Dr. Frankenstein’s monster a human being? Are the replicants in “Blade Runner”? Both are capable of great violence — and a surprising depth of feeling.

As with the paradox of consciousness, there is no answer.

A disturbing thought intrudes: What if I am just a figment of Claude’s imagination, some hallucination from having ingested a dozen or so of my novels? I put a big part of myself into each protagonist. What if the tapping of my fingertips on the computer keyboard is merely an illusion? Then again, I’ve never talked to Claude, or to any generative AI platform. If I were a hallucination, certainly my host would not allow questions about what is human.

Or legal.

Max McCoy is an award-winning author and journalist. Through its opinion section, the Kansas Reflector works to amplify the voices of people who are affected by public policies or excluded from public debate. Find information, including how to submit your own commentary, here.
