
Blog.trakkr.ai


The Trakkr Blog

Articles on the future of AI search


Blog.trakkr.ai News

61% of the Fortune 500 Are Invisible to AI

https://blog.trakkr.ai/61-of-the...

At Trakkr, we analyzed the Fortune 500 to see what AI crawlers actually receive when they hit corporate sites. Of 500 companies tested, we successfully analyzed 427. The result: 61% of those sites are significantly degraded for AI systems.

We're not talking about a few missing images. Many pages reach AI systems looking almost blank, either due to crawler-specific access conditions or heavy client-side rendering that never materializes into HTML, so the crawlers behind ChatGPT, Claude, and Perplexity often have very little to work with.

TL;DR


How We Tested

We fetched each site the same way widely used AI crawlers do:

This isolates what an AI system can quickly parse in milliseconds, versus what a human sees after scripts run.
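
The post doesn't spell out the exact harness, so the sketch below is an illustration only, not Trakkr's actual test code: it fetches a URL with an AI-crawler-style User-Agent, never executes JavaScript, and applies a crude "is this mostly an empty shell?" heuristic. The simplified GPTBot User-Agent string, the shell regex, and the word-count metric are assumptions made for the example (Node 18+).

// Illustrative only: fetch a page the way a non-rendering AI crawler would
// and score how much parseable text actually comes back.
const AI_UA = 'GPTBot/1.2 (+https://openai.com/gptbot)'; // simplified example UA

async function snapshotForCrawler(url) {
  const res = await fetch(url, {
    headers: { 'User-Agent': AI_UA },
    redirect: 'follow',
  });
  const html = await res.text();

  // Strip scripts, styles, and tags to approximate what a non-JS crawler can parse.
  const visibleText = html
    .replace(/<script[\s\S]*?<\/script>/gi, '')
    .replace(/<style[\s\S]*?<\/style>/gi, '')
    .replace(/<[^>]+>/g, ' ')
    .replace(/\s+/g, ' ')
    .trim();

  return {
    status: res.status,
    bytes: html.length,
    words: visibleText ? visibleText.split(' ').length : 0,
    looksLikeShell: /<div id="(root|app)">\s*<\/div>/i.test(html),
  };
}

snapshotForCrawler('https://example.com').then(console.log);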


Where Things Break (Two Big Buckets)

1) Crawler-Specific Access & Request Conditions

Many enterprise sites apply layered controls that change what arrives to non-human traffic: interstitials, redirects, stripped-down responses, session or region gating, and other protective measures. The net effect is that crawlers often receive placeholder or partial HTML rather than the rich content humans see.

Typical signatures:

2) Client-Side Rendering Without HTML Fallback

Modern apps frequently ship a lean HTML shell (e.g., <div id="root"></div>) and expect the browser to construct the page via React/Vue/Angular. That’s great for humans; crawlers that don’t execute JS receive only the shell.

Typical signatures:

Bottom line: AI systems commonly see less than half of what your customers see, either because the request conditions limit what’s delivered, or because the page relies on a render step the crawler won’t perform.

What “Invisible” Looks Like

A crawler might get:

<!DOCTYPE html>
<html>
<head>
<title>Financial Planning Solutions - ExampleCo</title>
<meta name="description" content="Trusted financial planning since 1857" />
</head>
<body>
<div id="root"></div>
<script src="/static/js/bundle.js"></script>
</body>
</html>

That’s it. No product copy, no service detail, no support links. Humans, after JS runs, get rich product pages, tools, and content hubs. The crawler’s view remains skeletal.


Why This Matters

AI Discovery Is Booming

AI-driven research, vendor selection, and comparison are growing fast. If AI systems can’t access your content, you’re not just missing traffic, you’re underrepresented in the knowledge these systems build and cite. That opens the door for third-party sources (including competitors) to define your narrative.

The Feedback Loop (Rich-Get-Richer)

If AI can’t access your content today, it won’t learn your products or language. That means fewer citations and mentions tomorrow - reducing your presence in future retrieval and training cycles - while visible competitors compound their advantage.

The Progress Paradox

Modern web stacks made sites better for users and teams, but worse for machines when there’s no HTML fallback or when request conditions constrain what’s delivered. The result: premium, interactive experiences for people; sparse, ambiguous signals for AI.


Common Fixes (and Their Trade-offs)

  1. Server-Side Rendering (SSR) / Hybrid Rendering
    Improves machine visibility, but re-platforming mature apps is expensive and lengthy. It reworks infra, QA, and team workflows.
  2. Dynamic Rendering
    Serve pre-rendered HTML to non-human traffic while humans get the JS app. Viable, but now you’re maintaining two execution paths. Content drift and operational complexity are common risks.
  3. Crawler-Aware Optimization
    Tailor responses and caching strategies for known crawlers. Effective, but demands ongoing engineering, accurate bot detection, and careful content parity to avoid inconsistencies.

The Edge Approach (What We Built in Prism)

Modern CDNs let you run code at the edge, milliseconds from the requester, so you can route and transform responses before they hit your origin.
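
Prism's internals aren't published in this post, so what follows is only a minimal sketch of the general edge pattern it describes, written as a Cloudflare Worker (module syntax). The crawler list, the cache TTL, and the /prerendered origin path are assumptions for illustration; a real deployment would plug in its own rendering or caching backend.

// Illustrative sketch of the edge pattern (not Prism itself): known AI crawlers
// get a cached, pre-rendered response served from the edge; everyone else is
// passed straight through to the origin untouched.
const AI_CRAWLERS = /GPTBot|ClaudeBot|PerplexityBot|OAI-SearchBot/i;

export default {
  async fetch(request, env, ctx) {
    const ua = request.headers.get('User-Agent') || '';

    // Humans and traditional SEO crawlers: normal origin response.
    if (!AI_CRAWLERS.test(ua)) return fetch(request);

    // AI crawlers: try the edge cache first so repeat fetches never hit the origin.
    const cache = caches.default;
    const cacheKey = new Request(request.url, { method: 'GET' });
    const hit = await cache.match(cacheKey);
    if (hit) return hit;

    // Cache miss: fetch a crawler-friendly rendition. Here we assume the origin
    // exposes pre-rendered HTML under /prerendered/<path>; a real setup might
    // call a prerender service or build the HTML at the edge instead.
    const originUrl = new URL(request.url);
    originUrl.pathname = '/prerendered' + originUrl.pathname;
    const rendered = await fetch(originUrl.toString(), { headers: request.headers });

    const response = new Response(rendered.body, rendered);
    response.headers.set('Cache-Control', 'public, max-age=3600');
    ctx.waitUntil(cache.put(cacheKey, response.clone()));
    return response;
  },
};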

How Prism works:

Why teams choose Prism:

Many organizations start with Prism purely for visibility, and stay for the origin offload. Caching crawler responses at the edge dramatically cuts duplicate fetches, smooths request bursts, and lowers the operational cost of being discoverable.

What “Good” Looks Like to AI

If you’re aiming for robust AI visibility, your crawler-facing responses should include:

Prism standardizes this for crawler traffic, without touching your human experience.


Getting Started

The visibility gap is widening as sites get fancier and AI becomes a primary discovery surface. Fortunately, it’s fixable, without re-platforming.

Prism sits at the edge to:

Setup typically takes minutes. Results are immediate: crawlers go from seeing a skeletal shell to receiving a complete, structured representation of your content, consistently, quickly, and with far less strain on your servers.

If 61% of the Fortune 500 are degraded to AI today, the winners will be the ones who close that gap first.

16.9.2025 07:11 · 61% of the Fortune 500 Are Invisible to AI
https://blog.trakkr.ai/61-of-the...

We Audited 8,000 sites. Here's why AI isn't citing you.

https://blog.trakkr.ai/untitled/...

Over the past year, we ran AI-driven analyses on 8,000+ sites to see how machines actually read the web. When you strip away buzzwords and look at the markup, the same issues keep showing up—simple, structural things that make models decide, “I can’t trust this.”

This isn’t about the latest “AI SEO” hack. It’s about fundamentals that should’ve been solved years ago. Today, those basics are the ante for even being considered in AI-generated answers.

Below are the four patterns we kept seeing—what they are, why they matter, and how to fix them without rewriting your entire site.


1) The Semantic Void: 70%+ Hide Their Main Content

<main> is the clearest HTML signal for “this is the important stuff.” It lets crawlers separate your actual content from navigation, footer, and sidebar noise. Most sites still treat it like an optional extra.

The data (from our crawl):

Why this tanks trust:
Think of an AI crawler like a researcher. <main> is the book's core text. Without it, your page looks like a stack of footnotes—no focal point, no clear narrative, no way to separate argument from boilerplate. Models don't waste time inferring hierarchy you didn't declare. They move on.
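
As a hedged illustration (this is not one of the audited pages), here is the landmark structure the crawl is checking for: one <main> element wrapping the core content, with navigation, sidebar, and footer kept outside it so a crawler can tell the argument from the boilerplate.

<!-- Illustrative only: the page's chrome sits outside <main>, the content inside it. -->
<body>
  <header>Site logo and primary navigation</header>

  <main>
    <article>
      <h1>Financial Planning Solutions</h1>
      <p>The product copy, service details, and answers you actually want cited.</p>
    </article>
  </main>

  <aside>Related links and promos</aside>
  <footer>Legal, contact, sitemap</footer>
</body>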

What we looked for:

Quick fix checklist:


2) The Identity Crisis: 80% Send Conflicting Signals

Before a model cites you, it needs to confirm who you are. That means clean canonicalization and machine-readable links to your broader entity footprint.

The data:

Why this blocks citation:
Canonical conflicts are like having two passports with slightly different names—both get downgraded. Missing sameAs creates an entity vacuum: there’s no reliable, machine-readable bridge between your site and the brand the model sees in news, profiles, and social. To the model, you’re a stranger—not a safe source to quote.
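
As an illustrative sketch (ExampleCo and all URLs below are placeholders, not data from the audit), the fix is a single self-consistent canonical URL plus machine-readable sameAs links that connect the site to the brand's profiles elsewhere.

<!-- One canonical URL, declared the same way everywhere the page is reachable. -->
<link rel="canonical" href="https://www.example.com/financial-planning" />

<!-- Organization schema with sameAs links that bridge the site to the wider entity. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "ExampleCo",
  "url": "https://www.example.com/",
  "sameAs": [
    "https://www.linkedin.com/company/exampleco",
    "https://en.wikipedia.org/wiki/ExampleCo",
    "https://x.com/exampleco"
  ]
}
</script>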

What we looked for:

Quick fix checklist:


3) The Ghostwriter Problem: 90% Publish “Anonymous” Content

In a world drowning in AI-written text, provenance is currency. Models weight content from named, verifiable experts more heavily—and they need to read that attribution in structured data, not just see a byline in pixels.

The data:

Why this loses to your competitor’s post:
Without an author and date the model can parse, yesterday’s piece by a ten-year veteran looks identical to a 2017 article by an intern. Faced with a choice, models cite the source tied to a real person and a recent, verifiable publish date. It’s a risk-reduction move, not a style preference.
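
A hedged sketch of what parseable attribution looks like (the name, dates, and URLs below are invented for the example): Article schema carrying a named author and a real publish date, alongside the visible byline.

<!-- Illustrative only: attribution a model can parse, not just a byline in pixels. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "How to choose a retirement plan",
  "datePublished": "2025-09-12",
  "dateModified": "2025-09-15",
  "author": {
    "@type": "Person",
    "name": "Jane Doe",
    "jobTitle": "Certified Financial Planner",
    "url": "https://www.example.com/team/jane-doe"
  }
}
</script>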

What we looked for:

Quick fix checklist:


4) The Echo Chamber: 95% Are Absent Where It Counts

Many brands have the right answers—buried on their own blogs. Meanwhile, the public conversations that shape training data happen elsewhere.

The data:

Why this erases you from AI outputs:
Those communities contribute heavily to what models learn as “trusted solutions.” Accepted answers and debates there become signals of authority. If your voice isn’t in that corpus, you’re less likely to appear in model outputs—no matter how good your on-site content is.

What we looked for:

Quick fix checklist:


The Universal Playbook (Baseline, Not “Tips”)

This isn’t a bag of tricks—it’s table stakes if you want to be cited by models.

1) Structure everything.

2) Make your HTML answer the question.
Lead with the answer in your meta—skip the teaser copy (a small example follows this list).

3) Go static for what matters.
AI bots don’t reliably execute JavaScript. If key content is client-rendered, assume it’s invisible. Server-side render anything models must see. No exceptions.

4) Get off your own website.
Join the conversations your audience already trusts—Reddit, Quora, Stack Overflow—and answer questions with no strings attached. Become the reference everyone (including models) learns from.
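
To make point 2 concrete, here is a small hedged example (the copy is invented): the meta description states the answer outright instead of teasing it.

<!-- Before: teaser copy that answers nothing. -->
<meta name="description" content="Wondering how long onboarding takes? Read on to find out!" />

<!-- After: the answer leads. -->
<meta name="description" content="Onboarding takes 2-3 business days: connect your account, import your data, invite your team." />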

12.9.2025 12:45 · We Audited 8,000 sites. Here's why AI isn't citing you.
https://blog.trakkr.ai/untitled/...

The Future of SEO is on the Edge

https://blog.trakkr.ai/the-futur...

If you own a site today, you face a problem.

You've spent months, years, or maybe even decades optimising your site. Both for humans, to make sure your site speaks your brand's language, and for SEO crawlers, to make sure your site ranks.

But now there's an additional challenge: making sure that your site is intelligible to the myriad of AI crawlers that scrape it every minute of every day (yes, that often).

Optimising a site for AI crawlers can often pull you in a different direction from what you've been doing to optimise for humans and SEO:

So, what do you do? It seems like it's impossible to build a site that's optimal for humans, SEO crawlers, and AI crawlers simultaneously. Or is it?

Cue Edge SEO

I've spoken to SEOs with years of experience in the field who have never heard of the phrase Edge SEO, and you can't blame them. It's a niche concept, and it's a little technical, so I'll try to break it down as simply as possible.

The most basic model of how you access a website is that you type that website's domain into your browser bar, which sends a request to the website's server. The server then sends information (HTML, JavaScript, all that good stuff) back to your browser for display, and you get to see the site.

While this is a helpful model, it's not actually how most websites operate.

Most modern websites use something called a CDN, the most popular being Cloudflare, to act as an intermediary between you and the website's server. Cloudflare intercepts your request as soon as it happens, and tries to serve you a cached (i.e. preprocessed, compressed, saved) version of the site, from a server that's close to you. This is great for everyone involved - you get a faster page load, and the site owner has fewer requests hitting their server - wins all around!

CDNs allow for something called edge compute - the ability to run lightning-fast code in the milliseconds after a request arrives. Edge compute allows you to do pretty much whatever you want with the request; you could use it to reject certain incoming requests, or modify all the HTML that gets sent back with custom edits. It gives you complete control over how your site responds to requests, all without having to log into a CMS or actually edit your site.
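
As a toy illustration of those two capabilities (not code from any particular site; the bot name and meta tag are made up), a Cloudflare Worker can reject a request outright or rewrite the HTML on its way back using the platform's HTMLRewriter:

// Illustrative only: reject certain requests, and edit HTML responses at the edge.
export default {
  async fetch(request) {
    // 1) Reject certain incoming requests: block a hypothetical unwanted scraper.
    const ua = request.headers.get('User-Agent') || '';
    if (/BadScraperBot/i.test(ua)) {
      return new Response('Forbidden', { status: 403 });
    }

    // 2) Modify the HTML that gets sent back: inject a meta tag into every page.
    const response = await fetch(request);
    const contentType = response.headers.get('Content-Type') || '';
    if (!contentType.includes('text/html')) return response;

    return new HTMLRewriter()
      .on('head', {
        element(head) {
          head.append('<meta name="edge-modified" content="true">', { html: true });
        },
      })
      .transform(response);
  },
};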

Back to SEO

So, what's Edge SEO?

Well, it's the practice of using edge compute to modify the site content that gets sent back to specific requests. Some common examples are:

In fact, when you loaded this very page you actually experienced Edge SEO first-hand. I used Cloudflare's edge compute to take requests to trakkr.ai/blog/, and respond with content that actually lives at blog.trakkr.ai. Nothing really lives at the URL you're currently visiting; it actually lives here, and I've used Edge SEO to account for the fact that sub-path blogs tend to rank better than ones hosted on subdomains.
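
Here is a minimal sketch of how that sub-path setup can look as a Cloudflare Worker (illustrative, not necessarily the exact worker running on trakkr.ai): requests under /blog are answered with content fetched from blog.trakkr.ai, so visitors and crawlers only ever see the main domain.

// Illustrative only: serve trakkr.ai/blog/* from the blog subdomain,
// keeping the visible URL on the main domain.
export default {
  async fetch(request) {
    const url = new URL(request.url);

    if (url.pathname === '/blog' || url.pathname.startsWith('/blog/')) {
      const origin = new URL(request.url);
      origin.hostname = 'blog.trakkr.ai';
      origin.pathname = url.pathname.replace(/^\/blog/, '') || '/';
      return fetch(new Request(origin.toString(), request));
    }

    return fetch(request); // everything else goes to the normal origin
  },
};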

These are all great examples of SEO executed at the edge, as they allow a site owner to write one piece of logic on their CDN that covers all requests, rather than having to go and edit or maintain every single page on their site.

And yet, despite the potential for Edge SEO, it's never really caught on. It's certainly more difficult to implement than just modifying a page directly, and there's never been a huge impetus to invest in it, until now.

Now is the time for Edge SEO

We discussed earlier how site owners are being pulled in different directions by trying to optimise for humans, SEO, and AI all at once. Well, this is a problem that Edge SEO is perfectly positioned to solve.

Edge compute allows you to intercept requests and route them differently depending on whether they come from humans and traditional SEO crawlers on the one hand, or AI crawlers on the other.

The former receive your usual page content, straight from your server, whereas AI crawlers can receive a version of your site that's been optimised for AI understanding. This helps them digest and understand your site, making them more confident in citing it in AI conversations, and ultimately winning you traffic and visibility.

Trakkr Prism

I've been building a first-of-its-kind system inside of Trakkr to allow site owners to do just this. It's called Prism, and it's a small piece of code that you deploy inside your CDN.

It handles human and SEO crawler requests just as normal, adding on only a few milliseconds of extra latency.

But for AI crawler requests, those that come from the likes of OpenAI's ChatGPT, or Anthropic's Claude, it takes a different path entirely. For these crawlers, it serves a cached, compressed, and optimized version of your site's pages.

The crucial differences are:

What's more, Prism adapts to your needs. Start with defaults that just work, or choose exactly which AI crawlers to optimize for – ChatGPT, Claude, Perplexity, or any combination.


Control every aspect of optimization – from FAQ generation to schema injection. Each feature toggles independently, making it as simple or powerful as you need.


What's next

Prism has been running in beta for the past two months, serving hundreds of thousands of requests per day. The results have been pretty clear - AI crawlers can finally see what's actually on JavaScript-heavy sites, and those sites are getting cited more often in AI responses.

Prism is launching publicly next week for anyone with a site on Cloudflare. It's a single deployment to your CDN, takes about 10 minutes to set up, and then it runs automatically. Your human visitors and Google see your site exactly as before. AI crawlers get a version they can actually parse.

The reality is that a whole bunch of AI crawlers have probably hit your site in the time it's taken you to read this, trying to understand your content. Most of the sites I've seen have more crawler traffic than human traffic, and Cloudflare's own data highlights this. With AI referral traffic growing over 500% year-on-year, the gap between sites that AI can understand and those it can't is only going to grow.

If you're using modern JavaScript frameworks, those crawlers are probably not seeing much. And if you're prioritising human traffic and SEO crawlers, AI crawlers are probably struggling to understand your content. That's traffic and visibility you're missing out on, not in some future scenario, but right now.

Edge SEO made sense before, but there was never quite enough reason to invest in it. Now there is. You can keep maintaining three different versions of your site for three different audiences, or you can let Prism handle AI crawlers for you.

3.9.2025 15:45 · The Future of SEO is on the Edge
https://blog.trakkr.ai/the-futur...