Generative AI tools like ChatGPT, Claude, Llama, and many others are going to fundamentally change much of the work many of us do today. It’s early days in the LLM revolution, but there is no question that it’s going to have a profound impact.
So, to be fair, I should qualify my statement: AI will definitely replace some technical writers. But not all. Or even most.
Generative AI tools are already very good at writing, debugging, and documenting specific code modules. They are very good at summarizing text. They can identify and describe objects in images. They can search, analyze, manipulate, and translate text with high accuracy.
Generative AI tools are less good at creating novel text artifacts at high quality without significant instruction and revision. And this is my point.
A modern AI model is trained on an enormous corpus of data. When you prompt an LLM, it generates an answer from the statistical patterns it learned from that data. Generally speaking, this makes it very good at creating useful content: when it has lots of relevant context, it can simmer that context down into a response.
If it doesn’t have useful context to draw on, if it knows nothing about the subject of your prompt, the LLM can, at best, say it doesn’t know the answer. Or, at worst, make something up. Hallucinate.
It’s more complicated than that in action. There are methods for providing LLMs with additional context. Fine-tuning, embeddings, and RAG are methods for giving an existing model more information to work with. Agents and similar approaches give the LLM access to tools for finding or calculating additional information on the fly.
But the bottom line for all of these approaches is making sure the LLM has a broad context of relevant information from which it can generate an accurate, useful response.
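The retrieval step behind approaches like RAG can be sketched in a few lines. This is a toy illustration, not a real implementation: production systems rank snippets by embedding similarity against a vector store, and here plain keyword overlap stands in for that. All names (`score`, `build_prompt`, the sample corpus) are invented for the example.

```python
import re

def score(question: str, snippet: str) -> int:
    """Count how many of the question's words appear in the snippet.
    A stand-in for embedding similarity in a real RAG pipeline."""
    q_words = set(re.findall(r"[a-z]+", question.lower()))
    s_words = set(re.findall(r"[a-z]+", snippet.lower()))
    return len(q_words & s_words)

def build_prompt(question: str, corpus: list[str], top_k: int = 2) -> str:
    """Prepend the top_k most relevant snippets to the prompt so the
    model has context to draw on when it generates an answer."""
    ranked = sorted(corpus, key=lambda s: score(question, s), reverse=True)
    context = "\n".join(ranked[:top_k])
    return f"Use only this context:\n{context}\n\nQuestion: {question}"

# Hypothetical product documentation snippets.
corpus = [
    "The export button writes the report as a CSV file.",
    "Pricing tiers are described on the billing page.",
    "Reports can be filtered by date range before export.",
]

prompt = build_prompt("How do I export a report?", corpus)
print(prompt)
```

The point of the sketch is the bottleneck it exposes: retrieval can only surface snippets that someone has already written. If the corpus is empty or wrong, the most sophisticated pipeline still hands the model nothing useful.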
If I want to create documentation or marketing copy for a unique new product, an LLM is of very limited use. It knows nothing about this new product — what it does, how it works, and which steps are needed to produce a useful result for the user. It has zero context for creating actually useful content.
Here’s where technical writers come back into the picture.
An experienced technical writer works with the experts on the team to understand the customer, their needs, and their expectations. What is the product meant to do, and what are the procedures to make it operate correctly? What external knowledge, tools, and information are needed? What types of content are most appropriate for customers? How do you design guides and learning curricula with appropriate outcomes and guardrails? And so on.
For now, people are best at synthesizing this information, drawing on access to other people, scattered and often non-contextual resources, internal content you might not want to share with a commercial LLM vendor, and plain old experience.
I’m sure somewhere in the world there is a product team that writes extensive, detailed, accurate user stories and acceptance tests covering all expected use cases. Its code is clean, clear, well-documented, and accompanied by a comprehensive suite of tests. Those test suites use real-world data (including the error cases). The interface is uniquely intuitive to both people and computer use models. The customer use cases are clear, finite… and already documented.
Sure, an appropriately trained LLM could be fed that information and generate some documentation.
But the best person to feed the LLM and make sure the output is usable is an experienced technical writer.
For a novel product, it takes people to build the corpus of contextual information that an LLM would need to generate documentation, training, or marketing content.
Once you’ve built that corpus of knowledge, sure, you could feed it to an LLM to run a chatbot or draft more documentation. But it starts with experienced technical writers, and those same writers, with their deep knowledge of the product and its customers, are the best people to steer that process.
Technical writers have the skills and experience to know what to ask the LLM to produce and to confirm that it generates useful and accurate documentation. Technical writers coordinate between internal experts, sales, support, engineers, and users, making sure the right knowledge goes into the LLM and the right artifacts come out.
The job is likely to change but is perhaps more critical today than ever.
Technical writers have never been a high priority at most companies. Even less so at early-stage tech startups, which are the most likely early adopters of AI-based “help.” Executives at companies of all sizes are going to be sorely tempted by the thought of replacing writers with tools.
It would be a false economy.
An LLM needs a broad context of relevant information from which it can generate an accurate, useful response. An LLM is of very limited use if it knows nothing about your new product.
An experienced technical writer builds that broad context and then works with the LLM to expand it further.
I work mostly in developer tools and data science. For the last two years, I have extensively used LLMs, which have become a critical part of my documentation toolkit. I am all in on the capabilities these tools offer. They are a force multiplier for writers, particularly at tech companies.
But it’s not the tool that makes useful documentation. It’s my 30+ years of experience: knowing what needs to be documented and how, and knowing how to get the tool to help me create that documentation more quickly, more extensively, and more accurately.
I also love how quickly I can write my own custom tooling now.
