Google’s John Mueller Says LLMs.txt Is as Pointless as the Keywords Meta Tag

A recent Reddit discussion has thrown cold water on the rising buzz surrounding LLMs.txt, a proposed standard aimed at making website content more accessible to large language models (LLMs) like OpenAI’s ChatGPT and Google Gemini. And the person doing the dousing? None other than Google’s John Mueller.

Mueller compared LLMs.txt to the long-defunct keywords meta tag—an outdated SEO tool that once allowed site owners to manually declare what their page was “about,” only to be discarded once search engines got smart enough to figure it out on their own. That same obsolescence, Mueller suggests, may be baked into the core of LLMs.txt.

LLMs.txt has been pitched as a helpful tool—a way to present the main content of your web pages in a clean markdown format for AI bots to consume easily. This would mean no ads, no sidebars, just pure content. But unlike robots.txt, which gives bots explicit instructions on what they can or cannot crawl, LLMs.txt does not control AI behavior—it merely suggests what content might be useful.
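For context, the proposal (documented at llmstxt.org) calls for a markdown file served at /llms.txt: an H1 title, a short blockquote summary, and sections of annotated links. A minimal sketch might look like this; the site name and URLs are placeholders, not part of any spec:

```markdown
# Example Widgets Co.

> Product docs and blog for Example Widgets Co., summarized
> as plain markdown for language models.

## Docs

- [Getting started](https://example.com/docs/start.md): installation and setup
- [API reference](https://example.com/docs/api.md): endpoints and parameters

## Optional

- [Blog archive](https://example.com/blog.md): long-form posts
```

Note that nothing in that file is a directive. A crawler that ignores it loses nothing, which is precisely the weakness at issue here.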

And therein lies the problem.

One Reddit user who implemented LLMs.txt on their blog reported no observable changes in AI bot behavior. Server logs revealed no activity. Curious, they reached out to the SEO and web development community to see if others had similar experiences.
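Anyone can replicate that check. Below is a minimal sketch, assuming a combined-format access log (the default for Nginx and Apache); the log path is a placeholder for your own. It simply counts which user agents, if any, are requesting /llms.txt:

```python
import re
from collections import Counter

# Placeholder path: point this at your own access log
# (combined log format, the default for Nginx and Apache).
LOG_PATH = "/var/log/nginx/access.log"

# Matches: "GET /llms.txt HTTP/1.1" 200 512 "referer" "user-agent"
LINE_RE = re.compile(r'"(?:GET|HEAD) /llms\.txt[^"]*" \d+ \S+ "[^"]*" "([^"]*)"')

def count_llms_txt_hits(path: str) -> Counter:
    """Count requests for /llms.txt, keyed by the requesting user agent."""
    hits = Counter()
    with open(path, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = LINE_RE.search(line)
            if match:
                hits[match.group(1)] += 1
    return hits

if __name__ == "__main__":
    report = count_llms_txt_hits(LOG_PATH)
    for agent, count in report.most_common():
        print(f"{count:6d}  {agent}")
    if not report:
        print("No requests for /llms.txt found.")
```

An empty report is exactly what the posters described: the major AI crawlers never ask for the file at all.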

The response was telling.

Another participant in the thread, who manages over 20,000 domains, confirmed that AI agents aren’t even looking at LLMs.txt files. The only user agents they noticed accessing the files came from small, niche tools, not major AI players like OpenAI or Anthropic.

John Mueller then stepped in to address the conversation head-on. He wrote:

“AFAIK, none of the AI services have said they’re using LLMs.TXT (and you can tell when you look at your server logs that they don’t even check for it). To me, it’s comparable to the keywords meta tag—this is what a site-owner claims their site is about… (Is the site really like that? Well, you can check it. At that point, why not just check the site directly?)”

Mueller’s response cuts to the heart of the issue. Even if LLMs.txt delivers clean content, AI systems will likely crawl the full page and evaluate all available content, context, and structured data. This makes an external markdown file not only redundant but possibly even misleading.

The risk of manipulation is another concern. With no established trust mechanisms in place, what’s to stop a spammer from showing high-quality, AI-friendly content in LLMs.txt while serving junk or completely different material to human users and search engine crawlers? This tactic would resemble cloaking—a long-discredited black hat SEO trick.

Simone De Palma, who initiated the Reddit conversation, took the discussion to LinkedIn, where he echoed Mueller’s sentiments and raised new concerns. He emphasized that LLMs.txt does not include links back to the source website. This means any AI-generated citations could show a wall of markdown text instead of linking to a functional webpage, undermining the goal of increasing traffic or visibility.

Others in the LinkedIn thread echoed his conclusions. One expert explained they had seen very little bot activity and no measurable benefits. Their advice? Focus on what works: properly implemented structured data, solid robots.txt configurations, and clean sitemaps.

That’s advice most site owners should probably take to heart. Despite the well-meaning intentions behind LLMs.txt, the industry seems to have spoken through both server logs and a lack of support from major AI companies.

As of now, OpenAI, Anthropic, and Google have not announced any adoption or interest in the format. No official documentation suggests that LLMs.txt will play a part in future AI training or web crawling behavior. In the absence of that support, even its strongest advocates are starting to view the effort as wasted time.

It’s also a reminder of how digital standards evolve. The keywords meta tag was once hailed as a shortcut to better SEO rankings. But it was eventually phased out because it was too easy to exploit—and too unreliable to trust. Search engines now parse actual content and structured data to understand a site’s value. AI models are doing the same.

The takeaway? If you’re building a future-facing website, you’re better off doubling down on practices with proven SEO and indexing benefits. That includes:

  • Implementing schema markup for articles, products, reviews, and other structured content (see the example after this list)

  • Using robots.txt to manage crawler access to sensitive or unnecessary pages

  • Keeping sitemaps updated and submitted to major search engines

  • Optimizing for mobile and accessibility, both of which improve user experience and crawlability
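To make the first item concrete, here is a minimal JSON-LD Article snippet of the kind search engines already document and validate; the headline, author, date, and URLs are placeholder values:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Why LLMs.txt Isn't Gaining Traction",
  "author": { "@type": "Person", "name": "Jane Doe" },
  "datePublished": "2025-06-01",
  "image": "https://example.com/images/llms-txt.jpg",
  "mainEntityOfPage": "https://example.com/blog/llms-txt"
}
</script>
```

Unlike LLMs.txt, this markup lives in the page itself, so crawlers see it alongside the actual content rather than in a separate, unverifiable file.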

At the end of the day, AI models, like search engines, reward quality content and structure. LLMs.txt, in its current form, offers little more than a wishful attempt to shape how AI understands your site, without the enforcement or adoption needed to make it meaningful.

Until the big players start recognizing and using the LLMs.txt file—or a better version of it—it may just be another forgotten footnote in the ever-evolving world of SEO and AI integration.

So for now, skip the LLMs.txt hype. Build better content instead.

Charles Poole is a versatile professional with extensive experience in digital solutions, helping businesses enhance their online presence. He combines his expertise in multiple areas to provide comprehensive and impactful strategies. Beyond his technical prowess, Charles is also a skilled writer, delivering insightful articles on diverse business topics. His commitment to excellence and client success makes him a trusted advisor for businesses aiming to thrive in the digital world.
