Disclaimer
The following is not an anti-AI rant. I have my biases, and there are ethical and environmental concerns to consider. I would like to set those aside if possible and present an objective answer to the question.
Regarding terminology, I am using “AI” in the popular sense: I recognize that what I am talking about is not “true AI”—yet.
If you spend enough time on the bookish or writerly side of social media, you will probably start to see ads for AI-powered writing and editing tools. I was recently subjected to an AI-generated ad, voiced by AI, promoting a service in which AI will write your book, edit it and format it for publication on KDP or other platforms.
We may have a problem.
Before we ring the alarm bells, let’s drill down into a specific aspect of these emerging “writing aids”—the manuscript review.

What is a Manuscript Evaluation?
A manuscript evaluation (or review) is a professional read of your work. A reviewer will assess the work in the context of its genre, purpose and audience. The reviewer will provide a report that often contains a summary of the work, its strengths and areas for improvement, along with advice for further editing or publishing and marketing strategies. The purpose of the review is to help writers see their work from an outside perspective and to provide actionable advice for revisions.
How AI Fits into the Picture
Advanced AI tools can provide you with all the above at a fraction of the price of a professional editor and some beta readers. A key difference between a human and AI evaluator is this:
AI does it backwards.
Is that a problem? Aren’t subtraction and division the inverses of addition and multiplication? Don’t you get the same information regardless of the operation? Let’s examine that.
Methodology
I began with ChatGPT and Claude. I researched a few common prompts that people have used to get what they consider strong manuscript evaluations of their work. I fed the beast—sorry, I’m trying to be objective—I inserted a short story and a few chapters from a novel, all of which I wrote in high school. Hey, if people are going to steal content and try to sell it back to the public, I may as well do my part to quicken the degradation of AI writing, right?
I then researched some of the common programs that offer AI-generated manuscript reviews. I focused on ProWritingAid (the most egregious offender when it comes to ads) and AutoCrit. I didn’t want to give either my money, so I relied mainly on articles and videos that showcase how these programs work. I looked for positive reviews (the programs at their best) and critical reviews (the limitations). There are similarities between the results from these specialized programs and my barebones version using ChatGPT and Claude.
Findings
Strengths
It would be naïve to say these programs have no value as manuscript reviewers. Well, ChatGPT and Claude were quite useless, but that could be due to poor prompting on my end. ProWritingAid’s and AutoCrit’s manuscript analysis features, however, have value as analytic tools. I don’t think it is surprising to say that computers are better at analyzing raw data. As part of their manuscript analyses, ProWritingAid and AutoCrit will do the “general checks” (spelling, grammar, consistency). They will check for repeated words. So far, this is everything you can do with macros: no AI needed. The next layer involves the programs providing detailed summaries of your individual chapters or entire book. The reviews I’ve seen agree that these summaries are accurate. One small but useful feature is that the programs will flag anachronisms. I like to think I am skilled at spotting these, but I cannot compete with a computer’s knowledge bank. If you are writing historical fiction or fantasy grounded in a certain period, anachronism checking is very useful.
To go a layer deeper, the programs can flag major plot holes and character inconsistencies. If a character is in mortal danger and then walking around like nothing happened a few chapters later, the programs will catch that. This is important because it demonstrates a holistic understanding of the material. However, there are limits: I saw one example where ProWritingAid flagged a character inconsistency because the author described the character as being thirteen in one instance and almost fourteen in another. The program’s inability to pick up the “almost” is a major red flag.
One last analytic tool is that the programs will place your book in the context of the genre they think you are writing in. They will provide “comp titles”: successful books that share similarities with your book. Comp titles are useful when marketing your book to agents, publishers or the public. The programs seem to focus on the most popular books—but then again, so do human reviewers.
Areas for Improvement
When I say that AI does manuscript evaluation backwards, what do I mean?
When I (human, I think) do a manuscript evaluation, I start with what is on the page. I examine the characters’ arcs, the plot progression, the peaks and valleys in tension and the quality and clarity of writing. I combine this with genre conventions (when applicable) and audience considerations. I then use this information to assess the strengths and areas for improvement of the book.
AI can’t do this. Don’t get me wrong, these programs are advanced, but they are limited. This is the “stochastic parrot” aspect of LLMs and GenAI. When these programs provide feedback and actionable advice, they start from an endpoint. To simplify it a bit, the formula looks like this:
Check—protagonist’s development
Criteria—motivation, growth, flaws, etc.
The program will then determine whether the protagonist meets the criteria based on its understanding of what “growth” or “motivation” looks like. If a criterion is met, it goes in the strengths column and receives a (somewhat generic—sorry, objective) comment; if a criterion is not met, it goes in the areas for improvement column and, through a “Mad Libs” approach, details from the book are plucked out and fit into the actionable advice.
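For readers who think in code, the criteria-first formula above can be sketched as a toy script. Everything here—the check, the criteria, the feedback templates—is a hypothetical simplification for illustration, not the actual logic of ProWritingAid, AutoCrit or any real program:

```python
# Toy sketch of a criteria-first ("backwards") manuscript check.
# The check, criteria, and templates below are hypothetical simplifications.

CHECKS = {
    "protagonist_development": ["motivation", "growth", "flaws"],
}

# Templated ("Mad Libs") feedback: details from the book get slotted in.
PRAISE = "The protagonist's {criterion} comes through clearly: {detail}."
ADVICE = "Consider strengthening the protagonist's {criterion}; {detail} could be developed further."

def evaluate(check: str, met: dict, details: dict) -> dict:
    """Sort each criterion into strengths or areas for improvement."""
    report = {"strengths": [], "areas_for_improvement": []}
    for criterion in CHECKS[check]:
        detail = details.get(criterion, "an example from the text")
        if met.get(criterion):
            report["strengths"].append(
                PRAISE.format(criterion=criterion, detail=detail))
        else:
            report["areas_for_improvement"].append(
                ADVICE.format(criterion=criterion, detail=detail))
    return report

report = evaluate(
    "protagonist_development",
    met={"motivation": True, "growth": False, "flaws": True},
    details={"growth": "the unchanged outlook after the climax"},
)
```

Note that the verdict structure exists before the book is ever read: the program only decides which pre-written column each criterion falls into, which is the “backwards” part.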
I’m failing at the objective part.
But you may ask, isn’t this the same thing? I mean, don’t you have a preset notion of what “good” character development or pacing looks like and aren’t you inherently comparing the book to others in the genre?
Yes and no. I may have certain markers that I am looking out for when assessing character development, but I don’t begin with them. I first look at the protagonist’s character arc in a vacuum; my only consideration is how the audience will respond to x. After that, I apply some comparisons. This accounts for nuance in a way AI cannot. Most modern protagonists—even in fantasy—don’t strictly follow the hero’s journey. Not all arcs are perfect parabolas. As a reviewer, you need to determine when deviations are going to resonate with audiences or when they will disappoint.
More to the point, AI’s “checks” may be expanding, but they are limited to what it has been trained on. This is the problem AI will always bump up against: its inherent ceiling.
Takeaway
I didn’t set out to disparage AI manuscript evaluations. Honestly, if you have written a first draft and you know your book is not yet ready for editing, but you want some feedback, this is an affordable option to get you thinking about revisions. But if you opt for the AI route, understand what you are getting and what you are not.
And if you are interested in a human manuscript review, you can learn about my services.