
Toolradar Research

The LLM Citation Index 2026: How Software Discovery Actually Works Now

Toolradar receives 95k pageviews/month. 30% from Bing, 24% DuckDuckGo, 16% ChatGPT. Google is 1.5%. The B2B software discovery layer is now LLM-mediated. First-party data on which page types get cited, which LLMs cite which content, and what it means.

Louis Corneloup

Founder, Toolradar & Dupple

Published May 15, 2026
12 min read
Next update Aug 13, 2026

There is no public dataset of which software-discovery sites large language models actually cite. Every "GEO" thought-leadership post is built on speculation, log-file scraping of public Cloudflare data, or controlled-prompt experiments with LLM APIs. We had a different option: read our own server logs.

Toolradar receives roughly 95,000 pageviews per month. About a third of that traffic is referred from chatbot interfaces or LLM-powered search products. That is enough volume to see the real pattern of what LLMs cite when they answer "what's the best [X] tool" questions. This report documents what those patterns look like in May 2026.

It's the inverse of the usual GEO article: not "how to get cited," but "what actually gets cited, by which model, on what page types, at what volume." Useful for vendors deciding where to invest content, for buyers trying to understand the discovery layer they're using, and for journalists writing about AI-driven search.

TL;DR

  1. Google is no longer how people find Toolradar. It was 1.5% of referred traffic in the last 30 days. The dominant discovery source is Bing (30%), followed by DuckDuckGo (24%), then ChatGPT (16%).
  2. ChatGPT is the largest LLM-direct referrer by far. 4,406 visits in 30 days, more than ten times the volume of Claude or Copilot.
  3. LLMs prefer list-style content. "/best/free/[category]" pages get 45 visits per URL on average; "/blog/[post]" gets 34; "/tools/[slug]" detail pages get 1.6. Curation pays.
  4. Claude cites differently than ChatGPT. ChatGPT volume is concentrated in "/best/free/[X]" pages. Claude spreads across niche guides (MCP servers, password managers, AI animation).
  5. The "Google moment" for LLM search has happened. For B2B software discovery, the dominant channel is no longer Google. It is the LLM + non-Google search engine stack.

Methodology

We track every pageview on toolradar.com in our self-hosted Umami instance. For each pageview we record the referrer (when present), the destination URL path, and the user session. We then classify referrers by domain into discovery channels: ChatGPT, Claude, Perplexity, Copilot, and Gemini for direct LLM interfaces; Bing, DuckDuckGo, Yahoo, Brave, Kagi, Ecosia, and Qwant for non-Google search; and Google for Google search.

We exclude internal navigation (referrer host = toolradar.com), authentication callbacks (accounts.google.com), and known bot signatures. The reported numbers are 30-day totals as of 2026-05-15.
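
For readers who want to replicate the channel mapping, here is a minimal sketch of the kind of classification described above. It is illustrative only: the domain list, exclusion rules, and field names are assumptions made for the example, not our production classifier or Umami's actual schema.

```python
from urllib.parse import urlparse

# Illustrative referrer-domain -> channel map (not Toolradar's actual list).
CHANNELS = {
    "chatgpt.com": "ChatGPT",
    "chat.openai.com": "ChatGPT",
    "claude.ai": "Claude",
    "perplexity.ai": "Perplexity",
    "copilot.microsoft.com": "Microsoft Copilot",
    "gemini.google.com": "Gemini",
    "bing.com": "Bing",
    "duckduckgo.com": "DuckDuckGo",
    "search.yahoo.com": "Yahoo",
    "brave.com": "Privacy search",
    "ecosia.org": "Privacy search",
    "qwant.com": "Privacy search",
    "kagi.com": "Kagi",
    "google.com": "Google",
}

# Internal navigation and auth callbacks are dropped before counting.
EXCLUDED_HOSTS = {"toolradar.com", "accounts.google.com"}


def classify_referrer(referrer: str | None) -> str | None:
    """Map a raw referrer URL to a discovery channel, or None if excluded/absent."""
    if not referrer:
        return None
    host = (urlparse(referrer).hostname or "").removeprefix("www.")
    if host in EXCLUDED_HOSTS:
        return None
    for domain, channel in CHANNELS.items():
        if host == domain or host.endswith("." + domain):
            return channel
    return "Other"
```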

The classification of Bing as "LLM-adjacent" is deliberate but disputable. Microsoft has been deeply integrating Copilot into Bing's main search experience since 2024, and many "Bing" referrals are actually Copilot-grounded answers that route through bing.com. We track both bing.com and copilot.microsoft.com separately so the data lets you draw the line where you want.

Full methodology at toolradar.com/how-we-rate.

Where Toolradar's traffic actually comes from

This is the headline finding of the report: discovery sources for the last 30 days, ordered by visits.

Toolradar referred traffic by discovery source

Pageviews / sessions per source, 30 days through May 15, 2026.

Bing: 8,003 pageviews / 7,187 sessions
DuckDuckGo: 6,388 / 5,756
ChatGPT: 4,406 / 3,828
Yahoo: 893 / 813
Privacy search (Brave + Ecosia + Qwant): 629 / 568
Google: 401 / 356
Claude: 369 / 302
Microsoft Copilot: 302 / 255
Kagi: 102 / 95
Gemini: 33 / 25
Perplexity: 13 / 12
Google search accounts for 1.5% of referred traffic. The Bing + LLM stack is the dominant channel.

A few things to absorb:

Google is fifth. Below Bing, DuckDuckGo, ChatGPT, and Yahoo. Above only the smaller LLM interfaces and privacy-search engines. This is not a story about Toolradar specifically; peers in the software-discovery vertical report a similar shape. Google's share of discovery for B2B software queries is collapsing into Bing + LLMs.

ChatGPT is the third-largest discovery channel for our content overall. 4,406 referrals in 30 days, which is roughly 5% of all our pageviews and 16% of all externally referred traffic.

Claude and Perplexity are small. Together about 380 visits. The "LLM that cites you" is overwhelmingly ChatGPT.

Yahoo is a stealth contender. 893 visits across multiple regional variants. Probably driven by people whose default search remains Yahoo because of regional defaults or older device setups.

ChatGPT's citation pattern

When ChatGPT cites Toolradar in answering a software-discovery question, what does it link to?

Top 10 Toolradar URLs cited by ChatGPT

30-day visit counts from chatgpt.com referrer

  1. /best/free/ai-video-generation: 1,154 (top citation)
  2. /best/free/presentation-design: 359
  3. /best/free/transcription: 126
  4. /blog/doodle-poll-alternative: 125
  5. /best/free/screen-recording: 95
  6. /best/free/logo-design: 95
  7. /guides/best-ai-diagram-tools: 80
  8. /best/free/animation: 75
  9. /best/free/video-ai: 71
  10. /blog/best-free-project-management: 69
ChatGPT favors '/best/free/[category]' pages. List-style curation with explicit pricing wins.
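
The table above is a straightforward group-by. Here is a minimal sketch of the aggregation, assuming pageview records shaped like {"referrer_host": ..., "path": ...}; the field names are illustrative, not Umami's actual schema.

```python
from collections import Counter


def top_cited_urls(pageviews: list[dict], referrer_host: str, n: int = 10) -> list[tuple[str, int]]:
    """Count visits per destination path for a single referrer host, descending."""
    counts = Counter(
        pv["path"] for pv in pageviews if pv["referrer_host"] == referrer_host
    )
    return counts.most_common(n)


# The ChatGPT table above corresponds to: top_cited_urls(pageviews, "chatgpt.com")
# The Claude table below corresponds to:  top_cited_urls(pageviews, "claude.ai")
```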

The pattern is brutally consistent. ChatGPT cites:

  • /best/free/[category] pages overwhelmingly. 7 of the top 10 cited URLs match this pattern.
  • Specific alternative comparisons ("doodle poll alternative") when the user query implicitly names an incumbent.
  • Guide-style "best AI [thing]" when the user query asks for a category-wide recommendation.

What ChatGPT does NOT cite from our catalog in any meaningful volume:

  • Individual tool detail pages (/tools/[slug])
  • Tool comparison pages (/compare/...)
  • Category index pages (/categories/[slug])
  • Pricing-specific pages (/tools/[slug]/pricing)

The lesson: when a user asks ChatGPT "best free [X] tool", ChatGPT prefers to cite a single page that already curated the answer over linking to multiple individual products. Curation is the moat.

Claude's citation pattern

Claude (claude.ai + platform.claude.com) cites differently. Smaller volume, but the URL mix is more varied.

Top 10 Toolradar URLs cited by Claude

30-day visit counts from claude.ai

  1. /blog/claude-desktop-mcp-server-setup: 36 (tied for #1)
  2. /blog/best-free-note-taking-apps: 36 (tied for #1)
  3. /blog/best-mcp-servers-claude-code: 18
  4. /guides/best-ai-animation-tools: 17
  5. /guides/best-ai-text-to-speech: 14
  6. /blog/best-password-managers-2026: 12
  7. /blog/best-mcp-servers-vscode: 11
  8. /blog/best-claude-code-skills-2026: 10
  9. /guides/best-mcp-servers: 9
  10. /blog/free-mcp-servers: 8
Claude favors niche technical guides, especially MCP-related content. Self-promotion bias possible.

Two distinctive patterns:

MCP is overweight. 5 of Claude's top 10 cited URLs are about MCP (Model Context Protocol — Anthropic's own standard for AI tool integration). Claude favoring content about Claude's ecosystem is logical. Whether it's intentional ranking weight or just frequent query co-occurrence is unclear from our data.

Niche over general. Claude cites things like "best note-taking apps" and "best password managers" — concrete, evaluative content — but does not push the same volume toward "best free [X]" the way ChatGPT does. Claude's user base may have a more technical default mode.

Page type productivity

Across all LLM referrals (ChatGPT + Claude + Copilot + Gemini + Perplexity, plus Bing where we treat it as LLM-grounded), here is which content type is most productive on a per-URL basis:

Visits per cited URL by page type

LLM referrals last 30 days, normalized by URL count

/best/free/[X]: 45.3 visits/URL (54 URLs cited, 2,447 visits)
/blog/[post]: 33.8 visits/URL (104 URLs, 3,511 visits)
/guides/[topic]: 15.3 visits/URL (249 URLs, 3,800 visits)
/best/[category]: 3.6 visits/URL (63 URLs, 225 visits)
/compare/...: 1.9 visits/URL (239 URLs, 454 visits)
/tools/[slug]: 1.6 visits/URL (995 URLs, 1,642 visits)
/best/free/[X] is the most efficient surface. Per-tool pages get cited often but each carries less weight.
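
The metric is total LLM-referred visits to a page type divided by the number of distinct URLs of that type that received at least one LLM referral. A sketch of the calculation follows; the path patterns approximate our URL structure but the regexes themselves are assumptions, not Toolradar's routing rules.

```python
import re
from collections import defaultdict

# Page-type patterns, ordered so /best/free/[X] is matched before /best/[category].
PAGE_TYPES = [
    ("/best/free/[X]", re.compile(r"^/best/free/[^/]+$")),
    ("/best/[category]", re.compile(r"^/best/[^/]+$")),
    ("/blog/[post]", re.compile(r"^/blog/[^/]+$")),
    ("/guides/[topic]", re.compile(r"^/guides/[^/]+$")),
    ("/compare/...", re.compile(r"^/compare/")),
    ("/tools/[slug]", re.compile(r"^/tools/[^/]+$")),
]


def visits_per_url(paths: list[str]) -> dict[str, float]:
    """Average LLM-referred visits per distinct cited URL, grouped by page type."""
    visits = defaultdict(int)   # total visits per page type
    urls = defaultdict(set)     # distinct cited URLs per page type
    for path in paths:
        for label, pattern in PAGE_TYPES:
            if pattern.match(path):
                visits[label] += 1
                urls[label].add(path)
                break
    return {label: visits[label] / len(urls[label]) for label in visits}
```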

Interpreting this:

The "/best/free/[category]" surface is the citation efficiency king. Only 54 URLs total are cited at all, but they pull 2,447 visits — an average of 45.3 visits per cited URL. ChatGPT clearly leans on this format heavily.

Blog posts have the second-highest visits-per-URL. A well-written /blog/best-free-X-apps is the durable LLM citation surface, especially when paired with explicit pricing data, named picks, and methodology.

Guides have lower density but a bigger surface. 249 distinct guide URLs picked up citations, bringing 3,800 visits. Claude in particular seems to spread citations across more guide URLs than ChatGPT does on /best/free.

Tool detail pages get cited at scale, but per-URL volume is low. 995 different /tools/[slug] pages picked up a citation, but each averaged only 1.6 visits. LLMs cite tools individually when answering a "what does X do" question, but they don't drive much traffic per page.

Compare pages underperform. 239 different comparison URLs were cited, but at 1.9 visits per URL the channel is weak. LLMs answer "X vs Y" questions inline using their training data; they don't tend to link to comparison aggregators.

What this means for vendors

Four concrete implications:

Invest in curated list content. "Best free [category]" and "best [tool type] in 2026" are the highest-ROI surfaces. They get cited, they convert, and they compound. A single well-built list piece can drive thousands of LLM-referred visits per month for years.

Tool detail pages are necessary but not the channel. They're table stakes. Don't expect them to drive volume from LLM referrals directly. They convert visitors who already arrived via list content.

Comparison pages have weak LLM ROI. Don't over-invest. LLMs answer "X vs Y" questions inline. Comparison pages are useful for direct organic search, not for LLM grounding.

The MCP angle is interesting if you're an AI tool. Claude over-indexes on MCP content. If your product is in the AI tooling space, publishing thoughtful MCP integration content opens a Claude-specific channel.

What this means for buyers

Two implications:

If you're researching software, your default discovery layer is the LLM stack. It's worth knowing that the LLM citing the directory likely pulled the directory data from a training snapshot or crawl that is months old. The pricing or feature claim might be stale. Verify on the vendor's site.

The single "best [X]" article is now load-bearing. When ChatGPT recommends 5 tools for video generation, it probably pulled from one or two curated lists. Quality of those lists matters. If you find one with deep methodology and named editorial picks, weight it more than the generic SEO listicle.

What this means for AI search products

Not actionable for our reader segments, but for completeness:

Citation transparency is a competitive moat. ChatGPT shows source links inline; Claude shows fewer; Perplexity shows the most. Users will eventually demand traceable sources for product recommendations. The product that surfaces up-to-date pricing with traceable citations will win the trust game.

Citation update latency matters. When a tool's pricing changes, the LLM's grounding takes weeks to refresh. The directories that get cited need to be the most recently updated, not the highest-traffic.

FAQ

Are these numbers representative beyond Toolradar?

Probably yes, with caveats. Other software-discovery and "best-of" directories report similar Bing+DuckDuckGo dominance and ChatGPT being the largest LLM-direct referrer. The exact ratios will vary by content niche. AI tools, dev tools, and SaaS discovery are particularly LLM-cited. Local services and consumer e-commerce probably look different.

Why is Google so low for you specifically?

Three contributing factors: (1) Toolradar's DR is 29 — too low to rank competitively for high-volume Google terms; (2) the March 2026 core update hit aggregators; (3) the categories we serve (software discovery, AI tools, SaaS) have moved heavily to LLM-mediated discovery faster than other categories. Sites with stronger Google footprint pre-2026 still see Google as a large channel.

How did you classify Bing as "LLM-adjacent"?

Bing's main results page now embeds Copilot-grounded answers prominently. Many bing.com referrals are people who clicked a citation inside a Copilot answer, even though the referrer is bing.com not copilot.microsoft.com. We list both separately in the data so readers can draw the line where they prefer.

Could the LLM referrers be bots?

We exclude known bot user agents and don't count crawler hits as pageviews in Umami. Some manual verification (sampling sessions, checking time-on-page, scroll depth) confirms these are real users.
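
For the curious, the check amounts to a user-agent filter plus a dwell-time sanity pass. A rough sketch, with illustrative field names, substrings, and thresholds rather than our actual rules:

```python
# Assumed session record shape: {"user_agent": ..., "duration_seconds": ..., "pageviews": ...}
BOT_SIGNATURES = ("bot", "crawler", "spider", "headless")


def looks_like_real_user(session: dict) -> bool:
    """Drop obvious crawlers; treat very short single-hit sessions as suspect."""
    ua = session.get("user_agent", "").lower()
    if any(sig in ua for sig in BOT_SIGNATURES):
        return False
    return session.get("duration_seconds", 0) >= 5 or session.get("pageviews", 0) > 1
```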

How often does this update?

Quarterly. Next refresh August 2026, covering the period 2026-05-15 to 2026-08-15. Subscribe at toolradar.com/reports.

Closing

Most discussion of "AI search" adopts either a utopian framing (LLMs replace Google, traffic vanishes) or a pessimistic one (LLMs scrape and don't cite). The data is more mundane. For B2B software discovery, the channel split is now Bing + LLMs + non-Google search engines, with Google reduced to a footnote. ChatGPT drives meaningful, identifiable traffic. Claude drives less but is real. The total LLM-citation share for our category is one in three visits and probably growing.

The optimization implication is straightforward. If you build software for a B2B audience and you want to be discoverable in 2026, you optimize for the channels that account for 98.5% of your discovery, not the channel that accounts for 1.5%.

Browse the 9,024 tools in our catalog at toolradar.com/tools, or read our other first-party reports on B2B software trends.

Cite this report

Use the data, credit the source.

Released under Creative Commons BY 4.0. You may quote, link, and reuse the data with attribution.

Toolradar Research (2026). The LLM Citation Index 2026: How Software Discovery Actually Works Now. Toolradar. https://toolradar.com/reports/llm-citation-index-2026