The coming boom for non-fiction literature
AI is going to kill fiction, but non-fiction is about to get much more important.
Author and publisher Michael Anderle recently wrote a post in the popular “20BooksTo50k” Facebook group, which is dedicated to self-publishing success. In it, Anderle explains that while he has not published any 100% AI-written books, he is using AI tools like ChatGPT to boost the output of his publishing firm, LMBPN.
LMBPN publishes fiction, ranging from epic sci-fi to sword-and-sorcery fantasy. Anderle wants to hit 400 book releases a year and have 10,000 books on Amazon’s shelves in short order, and he intends to use AI tools to make that happen as quickly as possible. After all, the “20BooksTo50k” name comes from the idea of self-publishing 20 books a year in order to earn a salary of $50,000/year.
In his words: “AI isn’t going back into the past. Stare past your anxiety (I know I had to) and figure out how to grab the opportunities.”
He’s not wrong. You should be learning to use AI to do your work better. I don’t need to link to a bunch of sources to tell you that there’s an arms race on to get these AI tools churning out content. Marketing copy, self-published novels, blog posts: it’s all fair game. Content creators will need to evolve or die.
Speaking as a self-pub fiction author, it’s a little dismaying. How am I going to keep up with the AI deluge? And if you think it’s just hungry authors like me who should be worried, just wait until it comes for your job.
Where I haven’t seen the same discussion is in non-fiction. Given GPT’s propensity for hallucination, the wholesale invention of facts that is an artifact of how these models are trained, high-quality non-fiction work is not only going to be safe, it’s going to thrive. After all, fiction is an organized hallucination. We’ll talk more about AI hallucination in a minute.
I’m predicting two things:

1. The first AI-generated books will hit the NYT bestseller lists within five years. Not a thrilling prediction, but there it is.

2. Non-fiction will be more important than ever, as we’re going to face a deluge of shoddy information from the people pushing out whatever the AIs are producing.
Where machines can’t go
At the moment, I’m writing a non-fiction history book about mid-20th century Philadelphia. My work involves oral history, research into individuals well into the background of the historical record, archive-diving, databases behind paywalls, and the synthesis of a lot of ephemeral pieces of information. None of it is neatly indexed.
Stringing together actual history, where you have a responsibility to fidelity and where much of the record is forever lost, is not work an AI is going to do.
You can ask GPT yourself: it does not have the ability to access many books directly, and it cannot read the contents of proprietary databases like, say, the Philadelphia Tribune’s database of news articles. These sources are paywalled, and they are paywalled because publishers need to make money on the back catalogue. I submit that without access to the Tribune, the city’s largest Black newspaper, the AI doesn’t have a good grasp of what is actually going on in Philly at any time, past or present.
Further, the AI cannot establish a relationship of trust with an interview subject who witnessed something. It also has guardrails that prevent it from researching people who are not public figures.
And as I said, the AI frequently hallucinates sources. The other day, I happened to use GPT-4 to find sources for a concept I was exploring, namely racial integration of Philadelphia’s neighborhoods. GPT-4 is allegedly much less prone to hallucination of sources than 3.5. Yet it hallucinated a source for me, insisted the source was real, and then backtracked and admitted that it was wrong. This was for a single source on a single topic, let alone the 600+ references I’m deploying in this book.
When writing fiction, I could expect to pump out 1,000-2,000 words a day, regularly, getting to 75,000-100,000 words in a few months. With GPT, I could do even more hallucinating of interesting stories.
But with a non-fiction project, where fidelity to actual events has consequences, the AIs are simply not going to do a passable job churning out a ton of content. Even if they did, it would quickly become obvious what they had made up once you dug into the footnotes. I am also not sure that the AI can save you much time when writing on something truly original. I suppose you could feed all the sources into the AI, but that’s not easy yet. Where it has helped me is in method: how do I find this or that piece of information?
Maybe I put this essay together to help me deal with the fact that I’m becoming obsolete, the same way artists began to feel obsolete after the image-generation AIs arrived.
However, given that these AIs can work 24/7/365 on content creation, we’re also going to face a reality where an enormous amount of text is created instantly and non-stop. That makes quality non-fiction much more valuable. The question will be how people find your quality non-fiction.
But people want to know what happened. They’ll go to great lengths to find out. And there are places no machine can go.