Limits of artificial intelligence in publishing

11/6/25

As artificial intelligence continues to evolve, its tools become more powerful by the week, with a constant stream of new features. In our previous articles, we’ve explored how AI is being used in the publishing industry.

But it’s important not to overestimate what AI can do. As impressive as it may be, artificial intelligence is no magic wand and it won’t replace editors (which is good news!).

To use it effectively, it’s essential to understand its limitations, especially in the publishing world.

In this article, we take a closer look at the boundaries of AI in publishing to help you navigate its role more clearly.

💡 Want to learn more about best practices for using AI in publishing? Take a look at our practical guide.

3 key limitations of AI models you should know about

1) A great associative memory that can be outdated

Artificial intelligence operates on a vast database it was trained on. This gives it an impressive associative memory, allowing it to quickly draw on general knowledge and generate coherent responses.

However, this memory does not update in real time. GPT-3.5, the model that originally powered ChatGPT, was trained on data with a cutoff in late 2021. It can therefore provide detailed answers on topics like publishing trends or literary awards… up to that point in time.

For example, it may confidently discuss trends in the UK publishing industry up to 2021. However, when asked about more recent developments, such as the winner of the 2025 Booker Prize, its responses may be incomplete or entirely fabricated.

Since 2023, ChatGPT has been connected to the internet via the Bing search engine, which allows it to access more up-to-date information. However, the reliability of online sources remains variable.

2) A lack of creativity

AI models lack the ability to truly think outside the box.

While they excel at recombining information they’ve been exposed to, they struggle to generate genuinely creative content on entirely novel subjects.

This limitation stems from their training: these models learn from pre-existing data, published texts, encyclopedias, forums, books, articles, and websites. As a result, it’s inherently difficult for AI to produce content or imagery that extends beyond what is already imaginable.

3) AI models can "hallucinate"

One of the key limitations of artificial intelligence models is their tendency to “hallucinate.” AI hallucination refers to the generation of false or incoherent answers that are presented as if they were correct.

This occurs when the model is faced with a question for which it lacks reliable or clear information in its training data: it fills in the gaps by “inventing” a response.

The model isn’t lying on purpose; it is attempting to produce the most plausible continuation based on the examples it has seen, even if this leads to factual inaccuracies.

Tasks AI is ill-equipped to handle in publishing

Because of these three limitations, AI models are not well-suited to carrying out the following tasks for publishing professionals:

Interpreting subtext and authorial intent

AI excels at linguistic, stylistic, structural, and narrative analysis, but it cannot perceive creativity or originality, nor grasp emotional nuance. It remains entirely dependent on pre-existing data. While an AI tool can support initial screenings or shortlisting in an editorial context, it will never replace human intuition.

Even when it generates coherent text or visually appealing images, AI does not understand the world as a human does. It identifies statistical patterns. For example, a language model predicts the next word in a sentence based on probability, not an understanding of meaning. This often leads to a loss of context: AI may get confused by lengthy texts or fail to pick up on ambiguity.
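This idea of prediction-by-probability can be made concrete with a deliberately tiny sketch. The toy "bigram model" below simply counts which word follows which in a made-up corpus and then emits the most frequent continuation; real language models use neural networks over subword tokens, but the underlying principle is the same: pick a statistically likely next token, with no grasp of meaning.

```python
from collections import Counter, defaultdict

# Tiny invented corpus for illustration only.
corpus = (
    "the editor reads the manuscript . "
    "the editor rejects the manuscript . "
    "the author revises the manuscript ."
).split()

# Count, for each word, which words follow it and how often.
follow = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed word after `word`."""
    return follow[word].most_common(1)[0][0]

# "manuscript" follows "the" more often than any other word,
# so that is what gets predicted, regardless of context or meaning.
print(predict_next("the"))  # → manuscript
```

Note how the prediction is purely frequency-driven: the model would make the same choice even in a sentence where "manuscript" makes no sense.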

Consider a line in a novel such as, “Great, another power cut…” A human editor would immediately recognize the sarcasm. An AI, lacking true comprehension, might misread it as a positive remark. Human editors grasp subtext, irony, and emotional undertones, subtle elements that AI still struggles to detect.

These limitations become critical when the task involves editing or translating literary content, where nuance is everything.

Providing answers to complex legal questions

Artificial intelligence can be a helpful tool for outlining the key points of a publishing contract or conducting a preliminary review of a legal document. It can, for instance, highlight potentially unfair or non-compliant clauses, such as an unlimited rights assignment without compensation.

However, its capabilities are limited when it comes to interpreting specific legal contexts.

Without access to up-to-date legal databases, AI cannot guarantee the reliability of its recommendations.

As such, it should not be viewed as a substitute for professional legal advice.

Producing original creative work (text or visual)

AI's performance in the realm of fiction still falls short of expectations. As Jennifer Becker, a German author and academic, stated during a panel discussion at the Frankfurt Book Fair:

" I don’t yet see the moment when we’ll entrust the act of writing entirely to AI.”

While AI excels at mimicking styles or blending existing elements, it often struggles to deliver the element of surprise. Ask it to write a love story, and it may produce a clichéd narrative filled with flowers and sunsets, common tropes in the data it has been trained on.

Finding a fresh, unexpected perspective, often the hallmark of a great human author, remains a challenge.

Current models lack true intent or creative will: they don’t write to express something, but because they’ve been prompted to, drawing solely from what they’ve “digested” from other texts.

Ensuring information accuracy

AI can be a valuable source of information, but it does not guarantee accuracy in all cases. It is ultimately the responsibility of the human editor to verify any content and ensure its reliability. This is particularly true when it comes to figures and quotations, where AI tools for publishers tend to fabricate details.

This poses a specific issue when quoting well-known individuals: the risk of invented or distorted attributions is high. Translations offered by AI are also frequently inaccurate, imprecise, or misleading.

For instance, a quote originally spoken in English by an Anglophone author can easily be misinterpreted or poorly adapted into French, resulting in a loss of meaning.

Therefore, it is crucial to avoid relying on AI-generated sources in scientific journals or academic articles, as the likelihood of error is significant.

Understanding a long text as a whole

Another limitation lies in the AI’s ability to grasp the overall structure and meaning of a long text. AI models have a limited memory in terms of tokens (units of text), which restricts how much context they can retain at once.

For example, if a novel is 300 pages long, the AI cannot hold the entire book in memory during a single processing session, unless very recent, specialized models are used, and even these face constraints. As a result, for complex editorial tasks, such as identifying inconsistencies between Chapter 2 and Chapter 18, a human remains far more effective.

AI is better suited to shorter contexts. Some models can handle the equivalent of up to 50 pages of text at once, which is already impressive, but still insufficient for comprehensive, book-length analysis.
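To see why a full novel overflows a context window, a rough back-of-the-envelope calculation helps. The sketch below assumes roughly 4 characters per token (a common rule of thumb for English) and about 1,800 characters per printed page; both figures are approximations, not tokenizer-exact counts.

```python
# Rough heuristics, not exact tokenizer output.
CHARS_PER_TOKEN = 4      # common rule of thumb for English text
CHARS_PER_PAGE = 1800    # approximate characters on a printed page

def estimated_tokens(pages: int) -> int:
    """Estimate how many tokens a manuscript of `pages` pages needs."""
    return pages * CHARS_PER_PAGE // CHARS_PER_TOKEN

def fits_in_context(pages: int, context_window_tokens: int) -> bool:
    """Check whether the estimate fits in a given context window."""
    return estimated_tokens(pages) <= context_window_tokens

# A 300-page novel against a hypothetical 32,000-token context window:
print(estimated_tokens(300))         # ~135,000 tokens
print(fits_in_context(300, 32_000))  # False: the novel does not fit
print(fits_in_context(50, 32_000))   # True: ~50 pages fits comfortably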

Navigating ethical dilemmas in editorial decisions

Artificial intelligence systems raise numerous ethical concerns for publishing professionals. Among the most pressing issues are bias, hallucinations, and the lack of transparency in how models operate—commonly referred to as the “black box” effect.

The ethics of recommendation

For instance, using AI to suggest books can enhance the reader’s experience, but it also risks confining users within algorithmic bubbles, recommending only titles similar to what they’ve already read.

👉 Example: A reader who has consumed several thrillers by male authors may be repeatedly shown similar works, without ever being exposed to female voices, alternative literary genres, or independent publishers.

This filtering effect, well-documented on streaming platforms and social media, is known as the filter bubble or algorithmic echo chamber.
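The mechanics behind such a bubble are easy to demonstrate. In this minimal sketch, each book (the titles are hypothetical) is reduced to a crude genre vector, and the recommender simply returns the unread title with the highest cosine similarity to the last book read; similarity-maximizing logic like this is what keeps a thriller reader inside the thriller shelf.

```python
import math

# Hypothetical catalog: each book is scored on (thriller, romance, literary).
books = {
    "Dark Harbor":    (1.0, 0.0, 0.1),
    "Silent Witness": (0.9, 0.1, 0.0),
    "Summer Letters": (0.0, 1.0, 0.2),
    "The Glass Room": (0.1, 0.2, 1.0),
}

def cosine(a, b):
    """Cosine similarity between two genre vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def recommend(read_title: str) -> str:
    """Return the unread book most similar to the one just read."""
    target = books[read_title]
    candidates = {t: v for t, v in books.items() if t != read_title}
    return max(candidates, key=lambda t: cosine(target, candidates[t]))

# A thriller reader is steered straight to... another thriller.
print(recommend("Dark Harbor"))  # → Silent Witness
```

Nothing in this loop ever surfaces "Summer Letters" or "The Glass Room" to a thriller reader, which is precisely the echo-chamber effect described above.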

The “Black Box” effect: opaque AI decision-making

AI decision-making is often inscrutable. In editorial settings, this becomes problematic when there's a need to justify decisions such as which books to highlight, publish, or exclude. Without transparency, it's difficult to defend or audit those choices.

Algorithmic bias

AI models are not neutral. Trained on human data, they inevitably mirror real-world inequalities and stereotypes. This can lead to biased recommendations, the erasure of certain voices or types of content, and even the reinforcement of discrimination.

As such, AI must be used cautiously, guided by ethical principles and compliant with regulations such as the GDPR.

Human connection in author relations

When it comes to empathy and intellectual value, one need only consider the author–editor relationship: an AI tool cannot replace the dialogue and probing questions a human editor brings to help an author elevate their manuscript.

AI may offer raw suggestions, such as flagging a passage as unclear due to syntactic issues—but it cannot grasp the author’s deeper intent or guide a rewrite in a nuanced and meaningful way.

Anticipating and understanding reader expectations

Artificial intelligence can analyze historical trends, detect recurring patterns in sales, and identify popular themes. However, it cannot predict which literary genre will break out in 2026, nor can it reliably identify the next bestseller from among unsolicited manuscripts.

Translating with artistic sensitivity

Machine translation has made significant strides. But when it comes to literary style, cultural nuance, or idiomatic expression, human expertise remains indispensable.

A fully AI-generated translation may feel mechanical and lack authenticity. Rhythm, musicality, and word choice, subtle yet essential aspects, still elude algorithms.

The result? A reading experience that may feel flat and fail to capture the stylistic richness readers expect. Entrusting an artistic translation solely to AI risks losing part of the text’s soul.

AI can assist the process, but it cannot replace the translator’s sensitivity and craftsmanship.

Understanding AI limitations to move forward effectively

You now have a comprehensive overview of the main limitations of artificial intelligence in the publishing sector.

Naturally, these tools are evolving rapidly, and some of these constraints may be overcome in the future.

In the meantime, however, it remains essential to adopt best practices to make the most of AI, safely and responsibly.

📘 To explore further, consult our dedicated AI guide for publishers on best practices and practical recommendations for using AI in publishing.

👋 Crealo is the best royalty accounting software for publishers. Automate royalty reporting, contracts, and social contributions all in one place. 👉 Contact us to learn more.

Posted by Apolline Perivier