Build a Software Recommendation Bot in 10 Minutes (LangChain + Toolradar MCP)
A step-by-step tutorial for building an AI agent that answers "What is the best X for Y?" using live data from 8,600+ tools. Three approaches: Claude Code MCP, LangChain agent, and direct REST API.
LLMs are bad at software recommendations. They hallucinate pricing, miss tools launched after their training cutoff, and present guesses with total confidence. But connecting an LLM to a structured software database fixes all three problems.
This tutorial builds a bot that answers questions like "What is the best CRM for a 10-person startup under $20/user/month?" using verified data from Toolradar's API — 8,600+ tools with real pricing, editorial scores, and community reviews.
Three options, from simplest to most flexible:
- Option A: Claude Code + Toolradar MCP (2 minutes, zero code)
- Option B: LangChain + MCP adapters (Python agent with tool calling)
- Option C: Direct REST API (works with any framework or language)
Prerequisites
You need two things:
- A Toolradar API key — Free, 100 calls/day, no credit card. Get one at toolradar.com/dashboard/api-keys.
- Node.js 18+ (for Options A and C) or Python 3.10+ (for Option B).
Your API key starts with tr_live_. Keep it out of version control.
What the bot can do
The Toolradar API exposes six endpoints. Your bot will use them as tools:
| Tool | What it does |
|---|---|
| search_tools | Search by keyword, category, pricing model. Returns ranked results. |
| get_tool | Full details on one tool — pricing tiers, pros/cons, features, ratings. |
| compare_tools | Side-by-side comparison of 2-4 tools with a "best overall" pick. |
| get_alternatives | Competitors for a given tool, sorted by editorial score. |
| get_pricing | Detailed pricing breakdown with tiers, hidden costs, and verdict. |
| list_categories | All categories with tool counts. Useful for discovery. |
Each returns structured JSON — no HTML parsing, no scraping.
Option A: Claude Code + Toolradar MCP (2 minutes)
This is the fastest path. Claude Code supports MCP servers natively — you add the server, and Claude gains access to all six Toolradar tools with zero custom code.
Step 1: Set your API key
export TOOLRADAR_API_KEY=tr_live_your_key_here
Step 2: Add the MCP server
claude mcp add toolradar -e TOOLRADAR_API_KEY=$TOOLRADAR_API_KEY -- npx -y toolradar-mcp
That is it. Claude Code now has the Toolradar tools available.
Step 3: Ask questions
Open Claude Code and try these:
Compare the top 3 CRM tools for a 10-person startup under $20/user/month
What are the best free alternatives to Jira for a small dev team?
Get me detailed pricing for Linear, Asana, and Monday.com — include hidden costs
Claude will call search_tools, compare_tools, get_pricing, and get_alternatives as needed, then synthesize the results into a coherent answer with real pricing data and editorial scores.
How it works under the hood
When you add the MCP server, Claude Code starts the toolradar-mcp process via npx. The server exposes six MCP tools over stdio. Claude sees the tool descriptions and schemas, decides which to call based on your question, and chains multiple calls when needed.
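For illustration, a tool definition in the server's tools/list response looks roughly like this (the description text and schema fields below are assumptions, not the server's actual output; the inputSchema key itself is standard MCP):

```json
{
  "name": "search_tools",
  "description": "Search Toolradar by keyword, category, or pricing model.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "query": { "type": "string" },
      "pricing": { "type": "string" },
      "sort": { "type": "string" },
      "limit": { "type": "integer" }
    },
    "required": ["query"]
  }
}
```

The model never sees your API key or the HTTP layer — only these names, descriptions, and schemas, which is all it needs to decide what to call.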
For example, "Compare the top 3 project management tools" triggers:
- search_tools({ query: "project management", sort: "score", limit: 3 }) — find the top tools
- compare_tools({ slugs: ["monday-com", "asana", "clickup"] }) — get the structured comparison
- Maybe get_pricing({ slug: "monday-com" }) — if the user asked about pricing details
All calls use your API key, authenticated via TOOLRADAR_API_KEY.
Claude Desktop setup
For Claude Desktop instead of Claude Code, add this to your claude_desktop_config.json:
{
"mcpServers": {
"toolradar": {
"command": "npx",
"args": ["-y", "toolradar-mcp"],
"env": {
"TOOLRADAR_API_KEY": "tr_live_your_key_here"
}
}
}
}
Restart Claude Desktop. The Toolradar tools appear in the tool picker.
Option B: LangChain + MCP client (Python agent)
If you are building a custom agent — a Slack bot, an API endpoint, a CLI tool — LangChain's MCP adapters let you connect to the Toolradar MCP server and use it as a tool provider inside a LangGraph agent.
Step 1: Install dependencies
pip install langchain-mcp-adapters langgraph
You also need an LLM. This example uses OpenAI, but any LangChain-compatible model works:
pip install "langchain[openai]"
export OPENAI_API_KEY=sk-your-openai-key
export TOOLRADAR_API_KEY=tr_live_your_key_here
Step 2: Build the agent
import asyncio
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain.agents import create_agent
async def main():
async with MultiServerMCPClient(
{
"toolradar": {
"command": "npx",
"args": ["-y", "toolradar-mcp"],
"transport": "stdio",
"env": {
"TOOLRADAR_API_KEY": "tr_live_your_key_here",
},
}
}
) as client:
tools = await client.get_tools()
agent = create_agent("openai:gpt-4.1", tools)
# Ask a question
        response = await agent.ainvoke(
            {"messages": [{"role": "user", "content": "What are the best free project management tools?"}]}
        )
print(response["messages"][-1].content)
asyncio.run(main())
What happens when you run it
$ python bot.py
Based on Toolradar's data, here are the top free project management tools:
1. **Linear** (Score: 92/100) — Free for up to 250 issues.
Clean interface, built for dev teams. No bloat.
2. **ClickUp** (Score: 85/100) — Free Forever plan with
unlimited tasks. Feature-dense but has a learning curve.
3. **Plane** (Score: 80/100) — Open-source Jira alternative.
Self-host for free, or use their cloud free tier.
All three offer free tiers suitable for small teams. Linear has
the highest editorial score but limits issue count. ClickUp is
the most feature-rich. Plane wins on cost if you self-host.
The agent called search_tools with { query: "project management", pricing: "free", sort: "score" }, then used the returned scores, pricing, and descriptions to compose its answer.
Step 3: Add multi-turn conversation
Wrap the agent in a loop to handle follow-up questions:
async def chat():
async with MultiServerMCPClient(
{
"toolradar": {
"command": "npx",
"args": ["-y", "toolradar-mcp"],
"transport": "stdio",
"env": {
"TOOLRADAR_API_KEY": "tr_live_your_key_here",
},
}
}
) as client:
tools = await client.get_tools()
agent = create_agent("openai:gpt-4.1", tools)
messages = []
while True:
user_input = input("\nYou: ")
if user_input.lower() in ("quit", "exit"):
break
messages.append({"role": "user", "content": user_input})
response = await agent.ainvoke({"messages": messages})
assistant_msg = response["messages"][-1].content
messages = response["messages"]
print(f"\nBot: {assistant_msg}")
This gives you a conversational loop where the agent remembers context: "Compare those first two" works after an initial search.
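One caveat: the messages list grows on every turn, and the whole history is resent to the model each time. A minimal sketch of one way to cap it, assuming plain role/content dicts as in the loop above (the cutoff of 20 is an arbitrary choice; LangChain's trim_messages utility is the token-aware alternative):

```python
def trim_history(messages, max_messages=20):
    """Keep the most recent messages so token usage stays bounded.

    Preserves a leading system prompt if present, then keeps the tail
    of the conversation.
    """
    if len(messages) <= max_messages:
        return list(messages)
    head = []
    rest = list(messages)
    if rest and rest[0].get("role") == "system":
        head = [rest[0]]
        rest = rest[1:]
    tail = max(max_messages - len(head), 0)
    return head + (rest[-tail:] if tail else [])
```

Call messages = trim_history(messages) before each ainvoke to keep follow-up turns cheap.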
Option C: Direct REST API (any language)
No MCP, no SDK dependencies. Just HTTP requests. This works with any language, framework, or agent system.
Authentication
Every request needs your API key in the Authorization header:
Authorization: Bearer tr_live_your_key_here
Search tools
curl -s "https://toolradar.com/api/v1/search?q=email+marketing&pricing=freemium&sort=score&limit=5" \
-H "Authorization: Bearer tr_live_your_key_here"
Response:
{
"tools": [
{
"name": "Mailchimp",
"slug": "mailchimp",
"description": "Email marketing platform with automation...",
"pricing": "freemium",
"editorialScore": 88,
"categories": [
{ "name": "Email Marketing", "slug": "email-marketing" }
],
"url": "https://toolradar.com/tools/mailchimp"
}
],
"total": 142,
"limit": 5,
"offset": 0
}
Compare tools
curl -s "https://toolradar.com/api/v1/compare?slugs=notion,clickup,asana" \
-H "Authorization: Bearer tr_live_your_key_here"
Returns a structured comparison with per-tool scores, pricing, pros/cons, and computed insights like "best overall" and "best value."
Get detailed pricing
curl -s "https://toolradar.com/api/v1/pricing/figma" \
-H "Authorization: Bearer tr_live_your_key_here"
Returns pricing tiers, free trial info, hidden costs, and an expert verdict.
Get alternatives
curl -s "https://toolradar.com/api/v1/alternatives/jira" \
-H "Authorization: Bearer tr_live_your_key_here"
Returns up to 10 direct competitors sorted by editorial score.
List categories
curl -s "https://toolradar.com/api/v1/categories" \
-H "Authorization: Bearer tr_live_your_key_here"
Returns all categories with tool counts. Useful for building a category picker in your UI.
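If you want to route free-text questions to a category before searching, a naive word-overlap matcher over that payload is enough to start with (the sample data below is illustrative, not real API output):

```python
def match_category(question, categories):
    """Pick the category whose name shares the most words with the question.

    categories: list of {"name": ..., "slug": ...} dicts, as returned by
    the /categories endpoint. Returns the best slug, or None on no overlap.
    """
    words = set(question.lower().split())
    best_slug, best_overlap = None, 0
    for cat in categories:
        overlap = len(words & set(cat["name"].lower().split()))
        if overlap > best_overlap:
            best_slug, best_overlap = cat["slug"], overlap
    return best_slug

# Illustrative data — not real API output
cats = [
    {"name": "Email Marketing", "slug": "email-marketing"},
    {"name": "Project Management", "slug": "project-management"},
]
print(match_category("best project management tool for startups", cats))  # project-management
```

Feed the winning slug into search_tools as a category filter, or fall back to plain keyword search when it returns None.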
Python example without MCP
import requests
import os
API_KEY = os.environ["TOOLRADAR_API_KEY"]
BASE = "https://toolradar.com/api/v1"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}
def search_tools(query, pricing=None, limit=10):
params = {"q": query, "limit": limit, "sort": "score"}
if pricing:
params["pricing"] = pricing
r = requests.get(f"{BASE}/search", headers=HEADERS, params=params)
r.raise_for_status()
return r.json()["tools"]
def compare_tools(slugs):
r = requests.get(
f"{BASE}/compare",
headers=HEADERS,
params={"slugs": ",".join(slugs)},
)
r.raise_for_status()
return r.json()
def get_pricing(slug):
r = requests.get(f"{BASE}/pricing/{slug}", headers=HEADERS)
r.raise_for_status()
return r.json()
# Find the top 3 CRMs, then compare them
crms = search_tools("CRM", limit=3)
slugs = [t["slug"] for t in crms]
comparison = compare_tools(slugs)
print(f"Comparing: {', '.join(t['name'] for t in crms)}")
print(f"Best overall: {comparison['insights']['bestOverall']}")
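Once you exhaust the daily quota, requests start failing (a 429 status is the conventional response, though the exact code Toolradar returns is an assumption here). A small generic retry wrapper keeps transient failures from crashing the script:

```python
import time

def with_retry(fn, retries=3, backoff=1.0, retry_on=(Exception,)):
    """Call fn(), retrying on the given exceptions with exponential backoff.

    Re-raises the last exception once retries are exhausted.
    """
    delay = backoff
    for attempt in range(retries + 1):
        try:
            return fn()
        except retry_on:
            if attempt == retries:
                raise
            time.sleep(delay)
            delay *= 2

# Example, wrapping the search_tools helper defined above:
# crms = with_retry(lambda: search_tools("CRM", limit=3))
```

In practice you would pass retry_on=(requests.HTTPError,) so genuine bugs still surface immediately.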
Node.js / TypeScript example
const API_KEY = process.env.TOOLRADAR_API_KEY;
const BASE = "https://toolradar.com/api/v1";
async function searchTools(query: string, options: Record<string, string> = {}) {
const params = new URLSearchParams({ q: query, sort: "score", ...options });
const res = await fetch(`${BASE}/search?${params}`, {
headers: { Authorization: `Bearer ${API_KEY}` },
});
if (!res.ok) throw new Error(`API error: ${res.status}`);
return res.json();
}
async function compareTools(slugs: string[]) {
const res = await fetch(`${BASE}/compare?slugs=${slugs.join(",")}`, {
headers: { Authorization: `Bearer ${API_KEY}` },
});
if (!res.ok) throw new Error(`API error: ${res.status}`);
return res.json();
}
// Usage
const { tools } = await searchTools("project management", { pricing: "free", limit: "3" });
const comparison = await compareTools(tools.map((t: any) => t.slug));
console.log(comparison);
Deploying as a Slack bot
Once you have a working agent (Option B or C), wrapping it in a Slack bot takes minimal code. Here is the skeleton using Slack Bolt:
import asyncio
from slack_bolt.async_app import AsyncApp
from slack_bolt.adapter.socket_mode.async_handler import AsyncSocketModeHandler
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain.agents import create_agent
app = AsyncApp(token="xoxb-your-bot-token")
# Initialize agent once
client = None
agent = None
async def init_agent():
global client, agent
client = MultiServerMCPClient(
{
"toolradar": {
"command": "npx",
"args": ["-y", "toolradar-mcp"],
"transport": "stdio",
"env": {"TOOLRADAR_API_KEY": "tr_live_your_key_here"},
}
}
)
    await client.__aenter__()  # enter the context manually so the MCP session outlives this function
tools = await client.get_tools()
agent = create_agent("openai:gpt-4.1", tools)
@app.event("app_mention")
async def handle_mention(event, say):
question = event["text"]
    response = await agent.ainvoke({"messages": [{"role": "user", "content": question}]})
await say(response["messages"][-1].content)
async def main():
await init_agent()
handler = AsyncSocketModeHandler(app, "xapp-your-app-token")
await handler.start_async()
asyncio.run(main())
Tag the bot in Slack with @SoftwareBot What is the best free alternative to Figma? and it calls Toolradar's API, processes the results, and responds with a recommendation grounded in real data.
Deploying as an API endpoint
For a REST API that other services can call, use FastAPI:
from fastapi import FastAPI
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain.agents import create_agent
from contextlib import asynccontextmanager
agent = None
@asynccontextmanager
async def lifespan(app):
global agent
async with MultiServerMCPClient(
{
"toolradar": {
"command": "npx",
"args": ["-y", "toolradar-mcp"],
"transport": "stdio",
"env": {"TOOLRADAR_API_KEY": "tr_live_your_key_here"},
}
}
) as client:
tools = await client.get_tools()
agent = create_agent("openai:gpt-4.1", tools)
yield
app = FastAPI(lifespan=lifespan)
@app.post("/recommend")
async def recommend(query: str):
    response = await agent.ainvoke({"messages": [{"role": "user", "content": query}]})
return {"answer": response["messages"][-1].content}
uvicorn server:app --host 0.0.0.0 --port 8000
curl -X POST "http://localhost:8000/recommend?query=best+CI+CD+tools+for+startups"
OpenAI Agents SDK alternative
If you prefer OpenAI's agent framework over LangChain, the Agents SDK also supports MCP natively:
pip install openai-agents
export OPENAI_API_KEY=sk-your-key
export TOOLRADAR_API_KEY=tr_live_your_key_here
import asyncio
from agents import Agent, Runner
from agents.mcp import MCPServerStdio
async def main():
async with MCPServerStdio(
name="Toolradar",
params={
"command": "npx",
"args": ["-y", "toolradar-mcp"],
"env": {"TOOLRADAR_API_KEY": "tr_live_your_key_here"},
},
) as server:
agent = Agent(
name="Software Advisor",
instructions="You help users find the right software tools. "
"Use the Toolradar tools to search, compare, and get pricing. "
"Always cite editorial scores and verified pricing.",
mcp_servers=[server],
)
result = await Runner.run(
agent, "Compare the top 3 design tools for a startup"
)
print(result.final_output)
asyncio.run(main())
Rate limits and best practices
The free tier allows 100 API calls per day (resets at midnight UTC). Typical usage:
| Workflow | Calls per question |
|---|---|
| Simple search | 1 |
| Search + compare | 2 |
| Search + compare + pricing details | 4-6 |
| Full research (search, compare, alternatives, pricing) | 6-10 |
At 100 calls/day, you can handle 10-100 questions depending on complexity.
Tips to stay within limits:
- Cache responses for repeated queries. The data changes daily, not per-minute.
- Use limit parameters to reduce payload size.
- Combine search_tools with sort: "score" to get the best results first.
- Use compare_tools with the exact slugs from search — no extra lookup needed.
Rate limit headers are included in every response:
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 87
X-RateLimit-Reset: 2026-03-27T00:00:00Z
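A bot that runs unattended can read those headers and pause before hitting the wall. A sketch using the header names shown above (the date parsing assumes the ISO 8601 format in the example; the reserve threshold of 5 is an arbitrary choice):

```python
from datetime import datetime, timezone

def seconds_until_reset(headers):
    """Seconds left until the quota resets, from X-RateLimit-Reset."""
    # .replace handles the trailing "Z" for Python 3.10 compatibility
    reset = datetime.fromisoformat(headers["X-RateLimit-Reset"].replace("Z", "+00:00"))
    return max((reset - datetime.now(timezone.utc)).total_seconds(), 0)

def should_pause(headers, reserve=5):
    """True when fewer than `reserve` calls remain in the current window."""
    return int(headers["X-RateLimit-Remaining"]) < reserve
```

After each response, check should_pause(resp.headers); if it fires, sleep for seconds_until_reset(resp.headers) or serve cached results until the window rolls over.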
What's next
You now have a software recommendation bot backed by real data. A few directions to take it:
- Add memory — Store past recommendations in a database so the bot learns user preferences over time.
- Filter by team context — Inject team size, budget, and tech stack as system prompt context so every recommendation is pre-filtered.
- Build a comparison dashboard — Use the compare and pricing endpoints to auto-generate comparison pages for your internal wiki.
- Chain with other MCP servers — Combine Toolradar with GitHub MCP (for repo analysis) or a browser MCP (for live demos) to create a full evaluation agent.
Full API documentation: toolradar.com/docs
MCP server landing page: toolradar.com/for-agents
Get your free API key: toolradar.com/dashboard/api-keys
npm package: toolradar-mcp on npm
Related Articles
The AI Agent Stack for Software Procurement: Automate Tool Selection
Software evaluation drains 20+ hours per purchase. An AI agent stack built on Claude and the Toolradar MCP server compresses that to under 30 minutes — with better coverage and structured output your team can actually use.
Every MCP Server Needs a Data Moat: Lessons from Building Toolradar MCP
We built an MCP server that connects AI agents to 8,500+ software tools. Here are five hard-won lessons about data quality, tool design, token efficiency, distribution, and pricing — for anyone considering building their own.
How AI Agents Choose Software (And Why They Get It Wrong)
AI assistants confidently recommend software with wrong prices, discontinued products, and missing alternatives. Here is why it happens and how MCP fixes it.