The way people use documentation has shifted. Not long ago, the journey was predictable: search for a term, click a link, scan a page, maybe copy a snippet. Today, it’s far more conversational. People ask questions, and AI answers them - often instantly, often without sending the user to the source site. That change fundamentally reshapes what documentation needs to be.
Traditional search engines are still part of the picture. But they’re no longer the only interface. Large language models, AI assistants, and autonomous agents are stepping in as an intermediary layer between users and content. Instead of browsing, you now more frequently ask your AI assistant things like:
“How do I authenticate with this API?”
“What’s the best way to handle errors here?”
And the answer you expect is not just a link but a generated response - assembled from multiple sources, interpreted, summarized, and delivered in a few comprehensive lines.
We recognize that. And while we’d still love for you to explore our user-friendly documentation site, we understand that the documentation we deliver is no longer just for human readers - it must also be accurately interpreted by machines.
This is where Generative Engine Optimization (GEO) comes in. GEO doesn’t yet have a well-defined rulebook or universal checklist because the patterns are being discovered in real time.
Building for AI Readiness: What We Already Support
While the standards are still forming, we are striving to make our documentation accessible, well-structured, and friendly to both users and machines - so that it can be crawled and ingested by LLMs and AI tools. We are happy to share what the Emporix Documentation Portal already supports:
At the core of our documentation is a simple but powerful choice: everything can be consumed as Markdown. You can simply append .md to any page URL on our site to see the doc’s Markdown source.
That might not sound revolutionary, but for AI systems, it makes a huge difference. Markdown strips away unnecessary complexity and presents content in a format that is structured, predictable, and easy to ingest. Headings, code blocks, lists, and examples are all clearly defined. There’s very little ambiguity, and very little noise. For LLMs, that means less guessing and more accurate understanding. For us, it means we’re providing content in a format that is already widely used in training data and AI pipelines.
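As a quick sketch, the “append .md” convention can be wrapped in a tiny helper. The page path below is made up for illustration, and the trailing-slash handling is an assumption about how the URLs are shaped:

```python
import urllib.request


def markdown_url(page_url: str) -> str:
    """Return the raw-Markdown variant of a docs page URL.

    Assumes the convention of appending ".md" to the page path;
    a trailing slash is stripped first (an assumption, not documented).
    """
    return page_url.rstrip("/") + ".md"


# Hypothetical page path, purely for illustration:
url = markdown_url("https://developer.emporix.io/some-doc-page")
print(url)  # https://developer.emporix.io/some-doc-page.md

# To actually fetch the Markdown (network access required):
# markdown = urllib.request.urlopen(url).read().decode("utf-8")
```

This is the kind of transformation an AI pipeline can apply to any crawled page URL before ingestion.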
One of the newer and more interesting developments in the GEO space is the idea of llms.txt. The file on the Emporix site contains an index of page titles, paths, and descriptions of what each page is about.
The llms.txt helps guide AI crawlers toward the most relevant parts of the documentation. It’s a way of saying to machines:
“If you’re going to learn from or retrieve our content, start here.”
This is especially important because AI systems don’t “browse” like humans. They don’t rely on navigation menus or visual cues. They need clear entry points and signals about what matters.
Support for llms.txt reduces ambiguity for AI crawlers and increases the chances that the right information is used in generated answers.
You can simply append /llms.txt to the root URL of the Emporix documentation site to see the automatically generated index of our docs:
https://developer.emporix.io/llms.txt
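To give a feel for how a tool might consume this index, here is a minimal parsing sketch. It assumes the common llms.txt convention of Markdown link lists in the form `- [Title](url): description`; the sample content below is invented, and in practice you would fetch the real file first:

```python
import re

# Matches "- [Title](url): description" (the description part is optional).
LINK_LINE = re.compile(
    r"-\s*\[(?P<title>[^\]]+)\]\((?P<url>[^)]+)\)(?::\s*(?P<desc>.*))?"
)


def parse_llms_txt(text: str) -> list[dict]:
    """Extract {title, url, description} entries from llms.txt content."""
    entries = []
    for line in text.splitlines():
        m = LINK_LINE.match(line.strip())
        if m:
            entries.append({
                "title": m.group("title"),
                "url": m.group("url"),
                "description": (m.group("desc") or "").strip(),
            })
    return entries


# Made-up sample in the assumed format:
sample = """# Docs
- [Authentication](https://developer.emporix.io/auth.md): How to authenticate
- [Carts](https://developer.emporix.io/carts.md): Working with carts
"""

for entry in parse_llms_txt(sample):
    print(entry["title"], "->", entry["url"])
```

A retrieval pipeline can use such entries as curated entry points instead of blindly crawling the whole site.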
While llms.txt points to key content, llms-full.txt takes things a step further. It provides a more complete, structured representation of the documentation - essentially offering AI systems a ready-to-consume snapshot of the content. This is particularly useful for retrieval-augmented generation (RAG) systems or agents that need broader context rather than isolated pages. Instead of forcing AI systems to crawl and assemble information piece by piece, we’re giving them a cleaner, more coherent dataset to work with.
To point your AI tools to the file, append /llms-full.txt to the root URL of our Doc Portal, like this:
https://developer.emporix.io/llms-full.txt
Note that llms-full.txt is very large, so its content is automatically split into chunks. The subsequent files are indexed with numbers: /llms-full.txt/1, /llms-full.txt/2, /llms-full.txt/3 - and the bottom of each file tells you which chunk comes next.
There’s another layer that’s becoming increasingly important: what AI can do with the information. This is where the Model Context Protocol (MCP) comes in. If you are an implementation engineer working on Emporix enablement, you can connect your AI assistant to the exposed documentation MCP servers. This lets you work with Emporix more efficiently: the agent can use our docs as a basis for executing your prompts about APIs, workflows, or business logic through the exposed tools. So instead of just answering:
“How do I create a cart?”
An AI agent can understand the process from the documentation and then execute it through the exposed tools. MCP servers make documentation no longer just a reference, but part of an ecosystem where AI can take action.
To connect the documentation MCPs to your AI tooling, copy the MCP Server URL for a given doc space from the Portal, like this:
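As a hypothetical sketch, many MCP-capable clients accept a JSON configuration with an `mcpServers` mapping. The exact shape varies by client, and the server URL below is a placeholder - copy the real MCP Server URL for your doc space from the Portal:

```python
import json

# Placeholder config; the key name "emporix-docs" and the overall shape are
# assumptions about a typical MCP client, not documented Emporix settings.
config = {
    "mcpServers": {
        "emporix-docs": {
            "url": "<MCP Server URL copied from the Portal>",
        }
    }
}

# Print the JSON you would paste into your client's config file:
print(json.dumps(config, indent=2))
```

Check your specific AI tool’s documentation for where this configuration lives and which transport options it supports.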
The key takeaway is that we’re moving into a world where documentation is no longer just read - it’s queried, processed, and acted upon. And while the rules of Generative Engine Optimization are still being written, we’re doing our best to prepare the content you need and keep an eye on what’s coming.
