Expert Guide · Updated March 2026

Best MCP Servers in 2026

The 8 MCP servers worth installing — tested across Claude, Cursor, and VS Code


TL;DR

GitHub MCP is the most essential server for developers — full repo management with a zero-setup remote endpoint. Context7 solves stale documentation hallucinations with a single keyword trigger. Figma MCP bridges design-to-code with first-party access to Figma's node tree. Brave Search gives agents live web access through an independent index. Toolradar MCP provides verified software intelligence for tool recommendations. Playwright MCP handles browser automation via accessibility snapshots. Sentry MCP connects error monitoring with AI-powered root cause analysis. Supabase MCP manages your entire backend through natural language.

Over 20,000 MCP servers exist on public registries. The MCP SDK hit 97 million monthly downloads in March 2026. But most servers are toys, abandoned experiments, or security risks — 66% of scanned servers had security findings in a recent audit.

This guide cuts through the noise. After testing dozens of MCP servers across real development workflows in Claude Code, Cursor, and VS Code Copilot, these are the 8 that deliver genuine productivity gains. Each pick was evaluated on three criteria: does it save measurable time, is it actively maintained, and is it safe to install on a production machine.

What Are MCP Servers?

An MCP (Model Context Protocol) server is a program that gives your AI assistant capabilities it cannot access alone — searching the web, querying databases, managing code repositories, reading design files. Instead of copy-pasting data into Claude or Cursor, the AI calls the MCP server directly and gets structured data back.

MCP is now governed by the Agentic AI Foundation (AAIF) under the Linux Foundation, with 146 member organizations including Anthropic, OpenAI, Google, Microsoft, and AWS. The protocol has gone through four spec versions since its November 2024 launch, adding OAuth 2.1 authentication, remote server support via Streamable HTTP, and structured tool output.

The practical result: install an MCP server, restart your AI client, and the assistant gains new abilities. No API integration code. No middleware. The AI reads the server's tool descriptions and figures out when and how to call them.
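In practice that means one JSON entry per server. Here is a minimal Claude Desktop example using the reference Filesystem server (the `@modelcontextprotocol/server-filesystem` package is real; the directory path is a placeholder you scope to what the AI should see):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/path/to/allowed/dir"
      ]
    }
  }
}
```

Restart the client after editing the file; most hosts only read the config at startup.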

Why MCP Servers Matter for Developers

AI coding assistants have a fundamental limitation: their training data has a cutoff, typically 6-18 months behind reality. They hallucinate library APIs, quote outdated pricing, and miss tools that launched after the cutoff. MCP servers fix this by giving the AI access to live, external data at call time.

The productivity impact is concrete. GitHub MCP eliminates context-switching between your editor and the GitHub web UI — PRs, issues, code search, all inline. Context7 stops the AI from generating code against last year's Next.js API. Figma MCP turns design handoffs from screenshot-guessing into structured code generation against real layout constraints.

The compound effect matters most. A developer with 3-4 well-chosen MCP servers spends less time on mechanical tasks (looking up docs, switching tabs, copy-pasting error traces) and more time on judgment calls that require human expertise.

Key Features to Look For

Repository Management (Essential)

Create branches, open PRs, search code across repos, manage issues and Actions workflows — all from your AI conversation.

Live Documentation Lookup (Essential)

Fetch current, version-specific library docs at call time so the AI generates code against today's APIs.

Web Search (Essential)

Give your AI real-time web search for current information, error messages, and documentation not in its training set.

Design-to-Code

Read Figma design structure and generate code that matches the actual design rather than a screenshot interpretation.

Error Monitoring

Pull stack traces and breadcrumbs from Sentry directly into the AI conversation for faster debugging.

Browser Automation

Navigate pages, fill forms, take screenshots, and run E2E tests through natural language.

Software Discovery

Search, compare, and get verified pricing for 8,500+ tools so the AI recommends based on data.

Backend Management

Design tables, write migrations, deploy Edge Functions — full Supabase management through natural language.

Evaluation Checklist

Does the server solve a problem you hit at least weekly?
Is it maintained by the vendor or a trusted organization (AAIF, Microsoft)?
Can you verify the npm package against the official source?
Does it require write access? Can you scope permissions?
Have you checked the GitHub repo for recent commits and security advisories?
Does the free tier cover your actual usage?

Pricing Comparison

| Server | Cost | Free Tier | Best For |
| --- | --- | --- | --- |
| GitHub MCP | Free | Full access (GitHub rate limits) | Repository management |
| Context7 | Free / $10/seat/mo | 1,000 calls/month | Live documentation |
| Figma MCP | Requires paid Figma | 6 calls/month on Starter | Design-to-code |
| Brave Search | Free / $5/1K req | $5/month credits | Web search |
| Toolradar MCP | Free | 100 calls/day | Software discovery |
| Playwright MCP | Free (OSS) | Unlimited | Browser automation |
| Sentry MCP | Free (Sentry limits) | 5K errors/month | Error monitoring |
| Supabase MCP | Free (Supabase limits) | 2 projects, 500MB | Backend management |

Most MCP servers are free. The real cost is context window tokens — each tool definition a server exposes consumes roughly 500-1,000 tokens.

Top Picks

Based on features, user feedback, and value for money.

1. GitHub MCP

Best for: any developer using AI coding assistants who manages code on GitHub.

+ Three deployment modes: remote hosted (zero setup), Docker, and stdio binary
+ OAuth scope filtering hides tools the token lacks permission for, reducing context noise
+ Supports GitHub Enterprise Server and Cloud, not just github.com
− Large toolset consumes significant context window even with filtering
− Configuration syntax differs across MCP hosts

2. Context7

Best for: developers working with fast-moving frameworks like Next.js, React, Prisma, or Tailwind.

+ Add 'use context7' to any prompt to get current docs
+ Automatic version matching pulls docs for the specific version you use
+ Free tier: 1,000 API calls/month plus 20 bonus calls/day at the limit
− Library coverage is community-contributed with no completeness guarantee
− Backend is proprietary

3. Figma MCP

Best for: frontend developers implementing Figma designs in code.

+ Reads full design context: node tree, layout constraints, variant info, design tokens
+ Code Connect maps Figma components to your actual imports and prop interfaces
+ Write-to-canvas lets agents create and modify designs (free during beta)
− Requires a Dev or Full seat on a paid Figma plan ($12-$90/mo); Starter gets 6 calls/month
− No image/asset support yet

4. Brave Search MCP

Best for: any developer who needs their AI to access current web information.

+ Six tools: web, local, news, images, video, and AI summarizer
+ Independent search index
+ Free tier: $5/month in credits (roughly 1,000 searches)
− Max 400 characters per query; pagination capped at ~200 results
− Local search detail requires the Pro plan

5. Toolradar MCP

Best for: developers choosing tools, consultants evaluating vendors, and agents recommending software.

+ Six tools: search, details, compare, alternatives, pricing, categories
+ Pricing verified weekly; tool data updated daily across 400+ categories
+ Aggregates ratings from G2, Capterra, and Trustpilot into editorial scores
− Read-only
− 100 calls/day may limit automated workflows

6. Playwright MCP

Best for: frontend developers and QA engineers automating web interaction.

+ Drives pages via the accessibility tree instead of screenshots
+ Supports Chromium, Firefox, and WebKit
+ Chrome Extension mode connects to your logged-in browser session, reusing existing cookies
− ~114,000 tokens per typical task (the CLI alternative uses ~27,000)
− Misses visual-only elements like canvas-drawn interfaces

7. Sentry MCP

Best for: teams using Sentry who want AI-powered debugging directly in their workflow.

+ 16+ tools: issues, errors, projects, releases, performance, custom queries
+ Seer AI provides root cause analysis and automated fix suggestions
+ Fully remote with OAuth
− Requires an active Sentry account; the free plan covers 5K errors/month for 1 user
− Useless without Sentry as your error tracker

8. Supabase MCP

Best for: developers building on Supabase who want backend management via natural language.

+ 35+ tools across 8 groups: DB, Edge Functions, storage, branching, debugging, dev, docs, account
+ Remote server with OAuth
+ Read-only mode for production safety
− Branching requires a paid Supabase plan (Pro, $25/mo+)
− Supabase itself warns: connect only to dev environments due to prompt injection risks
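Supabase's own caution pairs naturally with its read-only mode. A typical local entry might look like this (package name and flags follow Supabase's MCP docs at the time of writing — verify before use; the project ref and token are placeholders):

```json
{
  "mcpServers": {
    "supabase": {
      "command": "npx",
      "args": [
        "-y",
        "@supabase/mcp-server-supabase@latest",
        "--read-only",
        "--project-ref=<your-project-ref>"
      ],
      "env": {
        "SUPABASE_ACCESS_TOKEN": "<personal-access-token>"
      }
    }
  }
}
```

The `--read-only` flag limits blast radius if a prompt injection ever reaches the agent.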

Mistakes to Avoid

  • Installing 10+ servers and wondering why the AI is slow — each adds tool definitions that consume context tokens

  • Installing the wrong npm package (a common mistake: the @anthropic-ai/ scope doesn't exist; the reference servers live under @modelcontextprotocol/)

  • Giving database MCP servers production credentials instead of a read-only user

  • Assuming all servers are maintained — many are abandoned without notice

  • Forgetting to restart Claude Desktop after adding a server to the config
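To sidestep the scope mix-up above: the official reference servers are all published under the @modelcontextprotocol npm scope. A correct entry looks like this (server-memory is a real reference server; exact config syntax varies by host):

```json
{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-memory"]
    }
  }
}
```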

Expert Tips

  • Start with 3 servers max: GitHub + Context7 + one domain-specific. Add more only when you feel the absence.

  • Use Claude Code's Tool Search to lazy-load MCP tools and reduce context usage by up to 95%

  • For remote servers (Figma, Sentry, Linear, Supabase), prefer hosted endpoints over local Docker

  • Keep a read-only database user specifically for MCP servers

  • Check Smithery (smithery.ai) for vetted servers before installing random npm packages

Red Flags to Watch For

  • Package under the @anthropic-ai/ scope — this scope does not exist on npm
  • Server requires full filesystem access or sudo for basic functionality
  • No GitHub repo, no docs, or last commit older than 6 months
  • Server bundles more tools than needed — each consumes 500-1,000 tokens
  • Asks for credentials beyond what its functionality requires

The Bottom Line

Install GitHub MCP first — it eliminates the most context-switching. Add Context7 for fast-moving frameworks. Add Figma MCP if you implement designs. Fill gaps with Brave Search, Toolradar MCP, or Sentry MCP. Three servers is the sweet spot. Five is the max before token overhead hurts. The best setup is the smallest one covering your actual weekly workflow.

Frequently Asked Questions

How many MCP servers should I install?

Three to five. Each tool definition adds 500-1,000 tokens to your context window. Five servers with 15 tools each can consume 37,500-75,000 tokens before you ask anything. Install only servers that solve weekly problems.
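The arithmetic behind that estimate can be sketched in a few lines (the per-tool token cost is this guide's rough range, not a measured constant):

```python
def tool_overhead(servers: int, tools_per_server: int,
                  tokens_per_tool: int) -> int:
    """Total tokens consumed by tool definitions before any prompt is sent."""
    return servers * tools_per_server * tokens_per_tool

# Five servers, 15 tools each, at the guide's 500-1,000 tokens per tool:
low = tool_overhead(5, 15, 500)
high = tool_overhead(5, 15, 1000)
print(low, high)  # 37500 75000
```

Against a 200K-token context window that overhead is tolerable; against smaller windows it crowds out your actual code and conversation.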

Are MCP servers safe?

Not universally. In a recent audit, 66% of scanned servers had security findings, and more than 30 CVEs were filed in January-February 2026 alone. Stick to vendor-maintained servers (GitHub, Figma, Sentry, Microsoft) or AAIF reference servers. Verify npm packages against official docs.

Which AI clients support MCP?

Claude Desktop, Claude Code, ChatGPT (Developer Mode), Cursor, VS Code + Copilot, Windsurf, Zed, Cline, Continue.dev, Replit. Claude Code is the most capable — supports Tool Search and acts as both client and server.

Do MCP servers work with ChatGPT?

Yes, since September 2025 via Developer Mode for Plus/Pro/Team/Enterprise. Only remote servers (Streamable HTTP) — not local stdio servers like Filesystem or Docker.

MCP vs function calling?

Function calling defines tools per-request in vendor-specific schemas. MCP defines tools in a persistent, cross-vendor server. Function calling is for your app. MCP is for distributing tools across the AI ecosystem.
