Andrea Colamedici invented a philosopher, presented him as an author and produced a book, secretly generated with the help of artificial intelligence, about manipulating reality in the digital age.
People were deceived. Accusations of dishonesty, bad ethics and even illegality flew.
But the man behind it, Mr. Colamedici, insists it was not a hoax; rather, he describes it as a “philosophical experiment,” saying that it helps to show how A.I. will “slowly but inevitably destroy our capacity to think.”
Mr. Colamedici is an Italian publisher who — along with two A.I. tools — generated “Hypnocracy: Trump, Musk, and the Architecture of Reality,” a buzzy text ostensibly written by Jianwei Xun, the nonexistent philosopher.
In December, Mr. Colamedici’s press printed 70 copies of an Italian edition that he supposedly translated. Still, the book quickly gained outsize attention, being covered by media outlets in Germany, Spain, Italy and France, and being cited by tech luminaries.
“Hypnocracy” describes how powerful people use technology to shape perception with “hypnotic narratives,” putting the public in a kind of collective trance that may be exacerbated by relying on A.I.
The book’s publication came as schools, businesses, governments and internet users all over the world are wrestling with how to use — and not use — A.I. tools, which tech giants and startups have made widely available. (The New York Times has sued OpenAI, the creator of ChatGPT, and its partner, Microsoft, claiming copyright infringement of news content. The two companies have denied the suit’s claims.)
Yet the book turned out to also be a demonstration of its thesis, playing out on unwitting readers.
The book, Mr. Colamedici said, was meant to show the dangers of “cognitive apathy” that could develop if thinking were delegated to machines and if people don’t cultivate their discernment.
“I tried to create a performance, an experience that is not just the book,” he said.
Mr. Colamedici teaches what he calls “the art of prompting,” or how to ask A.I. smart questions and give it actionable instructions, at the European Institute of Design in Rome. He said that he often sees two extreme, if opposite, responses to tools like ChatGPT, with many students wanting to rely on them exclusively and many teachers thinking that A.I. is inherently wrong. He instead tries to teach users how to discern fact from fabrication and how to engage with the tools productively.
The book is an extension of this effort, Mr. Colamedici argued. The A.I. tools he used helped him refine its ideas, he said, while clues about the fake author, some planted online and others in the book itself, were intended to raise doubts and prompt readers to ask questions.
The first chapter discusses fake authorship, for example, and the book contains obscure references to Italian culture that were unlikely to come from a young philosopher from Hong Kong, details that eventually led one reviewer to the true author, who had been posing as the translator.
Sabina Minardi, an editor at the Italian outlet L’Espresso, picked up on the clues, exposing Jianwei Xun as a fake early this month.
Mr. Colamedici then updated the fake author’s bio page and spoke to publications, including some deceived by his work. New editions and excerpts printed this month come with postscripts about the truth.
But some who first embraced the book now reject it and question whether Mr. Colamedici has acted unethically or broken a European Union law about the use of A.I.
The French news outlet Le Figaro wrote about “L’affaire Jianwei Xun,” explaining that the “problem” with its earlier interview of the Hong Kong philosopher was that “he doesn’t exist.”
The Spanish newspaper El País retracted a report about the book, replacing it with a note that said “the book failed to acknowledge A.I.’s involvement in the creation of the text, a violation of the new European AI Act.”
Article 50 of that law says that if someone uses an A.I. system to generate text for the purposes of “informing the public on matters of public interest,” then it must (with limited exceptions) be disclosed that generative A.I. was used, said Noah Feldman, a law professor at Harvard University who advises tech companies.
“That provision on its face seems to cover the creator of the book and perhaps anyone republishing its content,” he said. “The law does not go into effect until August 2026 but it is common in the E.U. for people and institutions to want to follow laws that seem morally good even when they don’t yet technically apply.”
Jonathan Zittrain, a law and computer science professor at Harvard, said he was more inclined to call Mr. Colamedici’s book “a piece of performance art, or simply marketing, that involved using a pen name.”
Mr. Colamedici is disappointed that some of the book’s initial champions have since decried the experiment. But he plans to keep using A.I. to demonstrate the very dangers it raises. “This is the moment,” he said. “We are risking cognition. It’s use it or lose it.”
He said he plans to have Jianwei Xun — describing it as a collective of humans and artificial intelligence — teach a course about A.I. next fall.