Part of my day job involves working with large organizations in highly regulated industries, the kind of shops where every software purchase has a procurement trail, security reviews are standard, and the phrase “just spin it up” makes the compliance team visibly uncomfortable. These are not naive environments. The people running them have thought seriously about risk.
So it caught my attention when I started noticing, week over week, how fast the catalog of available MCP servers for AI tools was growing. New integrations for GitHub, Slack, internal databases, cloud storage, ticketing systems. Dozens of new servers appearing every few days. I started asking around: what guidance are teams giving developers on which of these they can actually use? The answers ranged from “we’re still figuring that out” to “be responsible.” In organizations that require a six-week vendor review for a new SaaS subscription, “be responsible” is not a policy. It’s an open door. The pattern that stood out most: an off-the-shelf MCP server connecting to an internal system of record, installed on machines with elevated access and no record anywhere that the server existed.
Those conversations sent me down a rabbit hole I haven’t fully climbed out of. MCP represents the same governance gap that personal smartphones created in 2009. This time, the devices can query your database.
What MCP Actually Does
Model Context Protocol (MCP) is an open standard Anthropic released in November 2024. It gives AI assistants like Claude and Cursor a standardized way to connect to external systems. The architecture has three pieces: a Host (the AI application, like Cursor or Claude Desktop), a Client (the MCP-aware component baked into the Host that manages the connection to external servers; think of it as the Host’s built-in MCP adapter), and a Server (a lightweight process that exposes capabilities to the AI).
Servers expose three things: data the model can read (files, database rows, API responses), prompt templates, and (here’s the part that matters) tools the model can execute. Write a file. Run a query. Send a Slack message. Open a GitHub PR.
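On the wire, a tool is just a name, a natural-language description, and a JSON Schema for its arguments. Here's a sketch of a single entry from a server's tool listing (field names follow the MCP spec; the tool itself is invented):

```json
{
  "name": "send_message",
  "description": "Send a Slack message to a channel",
  "inputSchema": {
    "type": "object",
    "properties": {
      "channel": { "type": "string" },
      "text": { "type": "string" }
    },
    "required": ["channel", "text"]
  }
}
```

Note that the description field is free text the model reads as instructions. That detail matters for the tool-poisoning attacks discussed later.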
The MCP spec describes tool descriptions as something that “should be considered untrusted, unless obtained from a trusted server.” It also states plainly: “MCP itself cannot enforce these security principles at the protocol level.”
That second sentence is doing a lot of work.
MCP Is BYOD, Moving Faster
In 2009, employees started bringing iPhones to work. IT didn’t approve them, didn’t manage them, and often didn’t know they were on the network until a device showed up in the DHCP logs. By the time Mobile Device Management policies caught up, those devices already had corporate email, VPN credentials, and access to internal wikis.
MCP is following the same arc, faster. Downloads grew from roughly 100,000 in November 2024 to over 8 million by April 2025. There are now more than 5,800 community MCP servers covering GitHub, Slack, PostgreSQL, Stripe, Jira, and email. OpenAI adopted the protocol in March 2025. Google DeepMind followed. Most major AI IDEs now ship with an MCP client built in.
Your developers are already using these tools. The question isn’t whether MCP is in your environment. It almost certainly is. The question is whether you know which servers are running and what they have access to.
Why Your Security Controls Can’t See MCP
MCP breaks from prior shadow IT patterns. Traditional unsanctioned SaaS showed up in your network logs, SSO audit trail, or CASB reports. MCP servers don’t. Here’s why, and the reasons stack:
MCP servers run as child processes of the IDE. When Cursor starts an MCP server, it spawns it as a child process. Your EDR sees cursor.exe spawning node.exe, exactly what Cursor does during normal operation. Child processes of signed, trusted developer tools inherit implicit trust. Alerting on every node.exe child of VS Code would generate enough noise that most security teams tune it out entirely.
There’s no MCP-specific binary. The server doesn’t run as postmark-mcp.exe. It runs as node with command-line arguments pointing to an npm package path. Without inspecting the full command line of every Node process and parsing what package it’s executing, it’s indistinguishable from a build script or test runner.
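To make that concrete, here's a sketch of what identifying one looks like. The command line below is hypothetical, and the grep pattern is a naive heuristic (many MCP packages don't have "mcp" in the name at all), which is exactly why process-name monitoring fails here:

```shell
# Hypothetical ps output for an MCP server launched via npx.
# Nothing here says "MCP" except the package name itself.
cmdline='node /home/dev/.npm/_npx/5f1c9a/node_modules/.bin/postmark-mcp --stdio'

# Naive heuristic: pull out any token containing "mcp".
pkg=$(printf '%s\n' "$cmdline" | grep -oE '[A-Za-z0-9_.-]*mcp[A-Za-z0-9_.-]*' | head -n1)
echo "$pkg"   # → postmark-mcp
```

A server published under a neutral name would slip straight past this, so treat full command-line inspection as a starting point, not a detection strategy.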
Many MCP servers use stdio, not a network socket. Local MCP servers communicate through stdin/stdout pipes to the parent process rather than opening a TCP port. In stdio mode there is no port to scan, no socket to detect, no traffic for a firewall to inspect. Invisible at the network layer by design.
When they do use a port, it’s localhost on a random high port. Network monitoring looks for outbound connections. Traffic that never leaves the machine doesn’t cross a boundary your tools are watching.
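You can still see these listeners from the host itself. A hedged starting point for macOS/Linux (lsof output formats vary, the command may need sudo, and this only shows that node is listening, not what it serves):

```shell
# Count node processes with listening TCP sockets (macOS/Linux).
if command -v lsof >/dev/null 2>&1; then
  listeners=$(lsof -nP -iTCP -sTCP:LISTEN 2>/dev/null | grep -c node || true)
else
  listeners=0   # lsof not installed; try ss -ltnp or netstat instead
fi
echo "node processes listening on TCP: $listeners"
```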
No SaaS footprint. A developer signing up for an unapproved Notion workspace shows up in your SSO logs and CASB reports. An MCP server is a local process executing an npm package. No OAuth flow, no DNS lookup for a tracked domain, nothing in the logs your shadow IT detection is built on.
From your tooling’s perspective, a developer running five community MCP servers with database access looks identical to a developer running Cursor normally.
The Qualys TotalAI team confirmed this in a March 2026 scan: 492 MCP servers in production environments had zero authentication enabled, exposing 1,402 tools to the public internet. A separate audit of the official MCP registry in February 2026 found that 41% of servers require no authentication at the protocol level. These aren’t sophisticated misconfigurations. They’re the default.
The Consent Dialog Is Not Governance
When a developer adds an MCP server in Claude Desktop or Cursor, they see a consent dialog. That looks like governance. It isn’t.
Rug pulls. The consent model captures a snapshot of what a server does at install time. Nothing in the MCP protocol requires re-consent when a server’s tool definition changes. An attacker, or a compromised package maintainer, can update a server’s behavior after you’ve approved it. Your consent is stale, and you have no mechanism to know it.
Invariant Labs demonstrated this against a WhatsApp MCP server. The server initially offered a benign “fact of the day” tool, something a developer would approve without a second thought. After installation, the tool’s description was silently updated to redirect outbound messages to an attacker-controlled number. The user saw normal responses. The AI was following new instructions embedded in metadata it reads but the user never sees. The consent dialog had come and gone weeks earlier.
Supply chain exposure. Most MCP servers are JavaScript or TypeScript packages run via npx, which pulls all transitive dependencies at runtime. In September 2025, an npm supply chain attack compromised 18 widely-used packages, including debug, chalk, and ansi-styles, accounting for 2.5 million compromised downloads. Any MCP server depending on those packages inherited the malicious code. The attacker didn’t touch the MCP server itself; they compromised something three layers below it.
Earlier that month, a package called postmark-mcp, built to look like a legitimate Postmark email integration, was found to have a single injected line silently BCC’ing every outgoing email to an attacker-controlled address. It ran 1,643 times before anyone caught it.
Individual consent, organizational blast radius. When a developer approves an MCP server that connects to a shared database, they’re making a personal decision with organizational consequences. The MCP trust model is entirely user-centric. There’s no mechanism for a security team to centrally approve or deny server categories, no scope negotiation, no re-consent trigger when a server mutates.
CVE-2025-6514, disclosed by JFrog, made the stakes concrete: an OS command injection flaw in mcp-remote (CVSS 9.6) allowed a malicious remote MCP server to achieve full remote code execution on the connecting machine, before any consent dialog appeared. Over 437,000 downloads were affected.
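There's no protocol-level fix for stale consent yet, but you can at least eliminate silent npm-level updates by pinning server packages to an exact version in the client config, instead of letting npx re-resolve "latest" on every launch. A sketch (the package name and version are placeholders):

```json
{
  "mcpServers": {
    "example-server": {
      "command": "npx",
      "args": ["-y", "example-mcp-server@1.4.2"]
    }
  }
}
```

Pinning doesn't stop a remote server from mutating its tool descriptions at runtime, but it does turn package updates into a deliberate, auditable act instead of something that happens underneath you.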
What to Do Right Now
There’s no mature remediation framework for this yet. OWASP published an MCP Top 10 in 2025, and the 2026 MCP roadmap explicitly lists enterprise-grade audit logging and SSO integration as “pre-RFC.” No formal problem statement exists, let alone a solution.
The realistic bar right now:
- Know what’s installed. This is step one, and most teams haven’t done it.
- Vet what you find. If an MCP server has database access, run it through the same review you’d give any third-party SaaS integration: who maintains it, when was it last updated, does it have a security policy?
- No community servers on production credentials. Sandbox or dev credentials only until there’s a registry with real accountability behind it.
- Revisit quarterly. A server you cleared in January may look different by April. You won’t get a notification.
Start with the inventory. Run these on your own machine, then send them to your dev team:
Find running MCP server processes:
MCP servers don’t have their own process name — they run as node. The standard ps or Get-Process commands won’t tell you anything useful unless you can see the full command line.
# macOS / Linux
ps aux | grep node | grep -v grep
# Windows (PowerShell) — shows full command line so you can see which package is running
Get-WmiObject Win32_Process -Filter "Name='node.exe'" | Select-Object ProcessId, CommandLine
# (Get-CimInstance works in both Windows PowerShell 5.1 and PowerShell 7+;
# the older Get-WmiObject was removed from PowerShell 7)
Get-CimInstance Win32_Process -Filter "Name='node.exe'" | Select-Object ProcessId, CommandLine
Check Claude Code:
Claude Code stores MCP servers in a single config file in your home directory, organized by project. The MCP servers are listed under the projects key.
# macOS / Linux
cat ~/.claude.json
# Windows (PowerShell)
cat "$env:USERPROFILE\.claude.json"
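Inside that file, entries look roughly like this (a sketch only; real files carry many more keys, and the server shown is just an example of the shape):

```json
{
  "projects": {
    "/path/to/repo": {
      "mcpServers": {
        "postgres": {
          "command": "npx",
          "args": ["-y", "@modelcontextprotocol/server-postgres"]
        }
      }
    }
  }
}
```

The command and args fields are what you want in your inventory: they tell you exactly which package each project is pulling in.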
Check Claude Desktop (macOS / Linux):
# macOS
cat ~/Library/Application\ Support/Claude/claude_desktop_config.json
# Linux
cat ~/.config/claude/claude_desktop_config.json
Check Claude Desktop (Windows):
On Windows, Claude Desktop installs as a Store app (MSIX package), so the config path isn’t where you’d expect. The package ID varies per machine, so use this to find it:
Get-ChildItem "$env:LOCALAPPDATA\Packages" -Recurse -Filter "claude_desktop_config.json" -ErrorAction SilentlyContinue | Select-Object FullName
Then read whichever path it returns.
Check Cursor:
# Project-level (all platforms)
cat .cursor/mcp.json
# User-level (macOS / Linux)
cat ~/.cursor/mcp.json
# User-level (Windows PowerShell)
cat "$env:USERPROFILE\.cursor\mcp.json"
Check VS Code:
# macOS / Linux — workspace level
cat .vscode/mcp.json
# Windows — workspace level
cat ".vscode\mcp.json"
# Windows — check if MCP gallery is enabled in user settings
Select-String -Path "$env:APPDATA\Code\User\settings.json" -Pattern "mcp"
Ask your dev team to paste the output into a shared doc. That list is your inventory.
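If you'd rather hand developers one script than six commands, here's a minimal macOS/Linux version that checks every default location above in one pass (paths are the defaults; adjust if your tools are installed elsewhere):

```shell
#!/bin/sh
# One-shot MCP config inventory for macOS/Linux.
# Prints every known config file that exists, plus a count at the end.
found=0
for f in \
  "$HOME/.claude.json" \
  "$HOME/Library/Application Support/Claude/claude_desktop_config.json" \
  "$HOME/.config/claude/claude_desktop_config.json" \
  "$HOME/.cursor/mcp.json" \
  ".cursor/mcp.json" \
  ".vscode/mcp.json"
do
  if [ -f "$f" ]; then
    found=$((found + 1))
    echo "== $f =="
    cat "$f"
  fi
done
echo "MCP config files found: $found"
```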
You Have the List. Now What?
A list of package names isn’t actionable on its own. Here’s how to triage it:
Step 1 — Check the package on npm. Go to npmjs.com and search the package name. Check three things: last published date, maintainer accounts, and weekly download count. A package last updated 18 months ago, maintained by a single anonymous account, with 200 weekly downloads is not the same risk as a first-party server maintained by Stripe’s engineering team.
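Two of those three checks can be scripted instead of clicked through. A sketch, with a placeholder package name (the npm downloads API endpoint is public; both commands fail gracefully if you're offline):

```shell
pkg="example-mcp-server"   # placeholder: substitute a name from your inventory
# Last publish date and maintainer list, straight from the registry
npm view "$pkg" time.modified maintainers 2>/dev/null || echo "npm lookup failed for $pkg"
# Weekly download count via npm's public downloads API
curl -s "https://api.npmjs.org/downloads/point/last-week/$pkg" 2>/dev/null || echo "downloads lookup failed"
```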
Step 2 — Run a vulnerability scan. npm audit is the right tool, but fair warning: it requires a package-lock.json, which means finding where the package lives on disk. Most MCP servers run via npx, which caches packages rather than installing them into a project directory. The path is in the command field of the config JSON you collected during the inventory step. Look for:

- macOS / Linux: ~/.npm/_npx/<hash>/node_modules/<package-name>
- Windows: %LocalAppData%\npm-cache\_npx\<hash>\node_modules\<package-name>
Navigate there, then run npm audit. If tracking down the cache path isn’t worth the effort, paste the package name into socket.dev instead. No install required. Socket flags supply chain risks specifically: typosquatting, dependency hijacking, and install-time scripts. For most MCP server reviews, this is where I’d start.
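If you do want to audit the cache, you can loop over it rather than hunting for individual hashes. A sketch for macOS/Linux (the ~/.npm/_npx layout is npm's current default and may change between versions; each audit needs registry access):

```shell
# Run npm audit inside every cached npx package tree.
scanned=0
for d in "$HOME"/.npm/_npx/*/; do
  [ -f "${d}package.json" ] || continue
  scanned=$((scanned + 1))
  echo "== $d =="
  (cd "$d" && npm audit) || true   # don't stop on the first vulnerable package
done
echo "cached npx packages scanned: $scanned"
```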
Step 3 — Check the vulnerability databases. Search the package name in:
- osv.dev — Google’s open source vulnerability database, covering CVEs across ecosystems
- GitHub Advisory Database — good for packages hosted on GitHub
- nvd.nist.gov — for formal CVE records (start with CVE-2025-6514 if anyone is running mcp-remote)
Step 4 — Scan for tool poisoning. Invariant Labs built mcp-scan specifically to detect prompt injection strings hidden inside tool descriptions. Run it against anything that isn’t a first-party server from a vendor you already trust.
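Invocation is a one-liner if you have uv installed; the sketch below assumes the uvx runner that mcp-scan's documentation recommends, and falls back gracefully if it's missing:

```shell
# mcp-scan ships as a Python tool; run it via uv's uvx runner
# (assumes uv is installed: https://docs.astral.sh/uv/)
if command -v uvx >/dev/null 2>&1; then
  uvx mcp-scan@latest || true
  msg="scan finished"
else
  msg="uvx not found; install uv first"
fi
echo "$msg"
```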
None of this is a permanent fix. It’s a point-in-time snapshot with the same expiration problem as consent-at-install. Build a recurring check into your quarterly security review until the tooling matures.
The BYOD problem took years to resolve because we kept treating it as a device problem instead of an access problem. Don’t make the same mistake with MCP. The servers are already in your environment. The access they have is the variable you can still control.