Analyst Overview

This section provides detailed technical documentation for security analysts reviewing MCP-Scan results or evaluating its detection capabilities.

Detection Philosophy

MCP-Scan is designed specifically for Model Context Protocol (MCP) server security analysis. Unlike general-purpose SAST tools, it focuses on:

  1. MCP-Specific Attack Vectors - Tool poisoning, prompt injection, cross-tool data flow
  2. Deterministic Results - Same input always produces same output
  3. Evidence-Based Reporting - Full taint traces and code snippets
  4. Confidence Levels - Clear indication of detection reliability

Analysis Modes

Fast Mode (Intra-Procedural)

  • Analyzes each function in isolation
  • Pattern-based detection using regex and AST
  • Taint tracking within function boundaries
  • Typical runtime: 2-5 seconds

Best for: CI/CD pipelines, quick developer feedback

Deep Mode (Inter-Procedural)

  • Follows data flow across function calls
  • Tracks taint through function returns and parameters
  • Enables additional rule categories (H, I, J, K)
  • Uses function summaries for efficiency
  • Typical runtime: 10-60 seconds

Best for: Security audits, thorough analysis
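The difference between the two modes can be illustrated with a hypothetical handler (all names here are illustrative, not taken from MCP-Scan itself):

```python
import subprocess

def run(cmd: str):
    # Fast mode: if cmd is already known-tainted inside this function,
    # the intra-procedural analysis connects it to the sink directly.
    subprocess.run(cmd, shell=True)  # dangerous sink

def get_user_input() -> str:
    return input("command: ")  # untrusted source

def handler():
    cmd = get_user_input()
    # Deep mode: the taint originates in get_user_input() and reaches
    # the sink only through a call chain, so inter-procedural analysis
    # (function summaries) is required to connect source and sink.
    run(cmd)
```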

Detection Components

1. MCP Surface Extraction

Identifies MCP-specific elements:

┌─────────────────────────────────────────┐
│         MCP Surface Extractor           │
├─────────────────────────────────────────┤
│  • Transport: stdio, HTTP, WebSocket    │
│  • Tools: name, description, handler    │
│  • Resources: declared resources        │
│  • Auth Signals: JWT, OAuth, cookies    │
└─────────────────────────────────────────┘
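As a sketch of the idea (not MCP-Scan's actual implementation), tool declarations in a Python MCP server can be located by walking the AST for decorated handler functions; the `@mcp.tool()` decorator shape is an assumption about the target SDK:

```python
import ast

SOURCE = '''
@mcp.tool()
def execute_cmd(command: str):
    """Run a shell command."""
'''

def extract_tools(source: str):
    """Collect (tool_name, description) pairs from decorated handlers."""
    tools = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            for dec in node.decorator_list:
                # Match @mcp.tool() / @mcp.tool style decorators.
                target = dec.func if isinstance(dec, ast.Call) else dec
                if isinstance(target, ast.Attribute) and target.attr == "tool":
                    tools.append((node.name, ast.get_docstring(node)))
    return tools

print(extract_tools(SOURCE))  # [('execute_cmd', 'Run a shell command.')]
```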

2. Taint Analysis Engine

Tracks data flow from sources to sinks:

Sources (untrusted input)
            │
            ▼
┌─────────────────────────────────────────┐
│           Taint Propagation             │
│  • Assignments: a = b (taint flows)     │
│  • Concatenation: a + b (taint merges)  │
│  • Function calls: f(tainted) → output  │
│  • Returns: return tainted_value        │
└─────────────────────────────────────────┘
            │
            ▼
Sinks (dangerous operations)
            │
            ▼
Finding (if no sanitizer)
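A minimal sketch of the propagation step over assignments and concatenations (illustrative only; the real engine works on ASTs, not variable-name pairs):

```python
def propagate(statements, sources):
    """Iterate assignment facts to a fixed point.

    statements: list of (lhs, rhs_vars) pairs, e.g. ("a", ["b", "c"])
    sources: variable names that start tainted.
    """
    tainted = set(sources)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in statements:
            # a = b     -> taint flows if b is tainted
            # a = b + c -> taint merges from either operand
            if any(v in tainted for v in rhs) and lhs not in tainted:
                tainted.add(lhs)
                changed = True
    return tainted

stmts = [("cmd", ["prefix", "user_input"]), ("full", ["cmd"])]
print(propagate(stmts, {"user_input"}))  # cmd and full become tainted
```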

3. Pattern Engine

Applies vulnerability rules:

┌─────────────────────────────────────────┐
│            Pattern Engine               │
├─────────────────────────────────────────┤
│  For each file:                         │
│    For each rule:                       │
│      detector.Detect(file, surface)     │
│        → Pattern matching               │
│        → AST analysis                   │
│        → Taint checking                 │
│      If match:                          │
│        Generate finding with evidence   │
└─────────────────────────────────────────┘
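The loop above can be sketched in Python; the rule and detector shapes here are assumptions for illustration, not MCP-Scan's real API:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    rule_id: str
    file: str
    evidence: str

def run_rules(files: dict, rules: list) -> list:
    """files: {path: source}; rules: [(rule_id, detector)] where a
    detector returns an evidence snippet or None on no match."""
    findings = []
    for path, source in files.items():
        for rule_id, detect in rules:
            evidence = detect(source)
            if evidence is not None:
                findings.append(Finding(rule_id, path, evidence))
    return findings

# Trivial pattern-based detector for demonstration.
def os_system_detector(source: str):
    return "os.system(" if "os.system(" in source else None

files = {"handler.py": "os.system(cmd)"}
print(run_rules(files, [("MCP-A003", os_system_detector)]))
```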

Vulnerability Classes

MCP-Scan organizes findings into 14 classes (A-N):

Class  Category              Fast Mode  Deep Mode
A      RCE                   ✓          ✓
B      Filesystem            ✓          ✓
C      SSRF                  ✓          ✓
D      SQL Injection         ✓          ✓
E      Secrets               ✓          ✓
F      Auth                  ✓          ✓
G      Tool Poisoning        ✓          ✓
H      Prompt Injection      -          ✓
I      Privilege Escalation  -          ✓
J      Cross-Tool            -          ✓
K      Auth Bypass           -          ✓
L      Lifecycle             ✓          ✓
M      Hidden Network        ✓          ✓
N      Supply Chain          ✓          ✓

Classes H-K require the additional inter-procedural rule categories and are only enabled in deep mode.

See Vulnerability Classes for detailed descriptions.

Severity and Confidence

Severity Levels

Level     Impact                                            Examples
critical  Remote code execution, full system compromise     Shell injection, eval with user input
high      Significant data exposure or system access        Path traversal, SSRF, SQL injection
medium    Limited exposure or requires specific conditions  Insecure cookies, weak JWT verification
low       Minor issues, defense-in-depth concerns           Missing lockfile, info disclosure
info      Observations, potential issues                    Suspicious patterns, style issues

Confidence Levels

Level   Meaning                               False Positive Rate
high    Strong evidence, clear vulnerability  < 5%
medium  Likely issue, some uncertainty        5-20%
low     Possible issue, needs manual review   > 20%

Finding Structure

Each finding includes:

{
  "id": "stable_hash_id",        // Deterministic, based on location + rule
  "rule_id": "MCP-A003",         // Rule identifier
  "severity": "critical",        // Impact level
  "confidence": "high",          // Detection reliability
  "location": {                  // Exact code location
    "file": "src/handler.py",
    "start_line": 42,
    "start_col": 5
  },
  "mcp_context": {               // MCP-specific context
    "tool_name": "execute_cmd",
    "handler_name": "handle_execute"
  },
  "trace": {                     // Data flow trace
    "source": {...},
    "sink": {...},
    "steps": [...]
  },
  "evidence": {                  // Code evidence
    "snippet": "os.system(cmd)",
    "snippet_hash": "sha256:..."
  },
  "description": "...",          // What was found
  "remediation": "..."           // How to fix
}
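The deterministic `id` can be understood as a stable hash over location and rule. This sketch shows the principle; the actual hash inputs MCP-Scan uses are not documented here and the field choice below is an assumption:

```python
import hashlib
import json

def stable_id(rule_id: str, file: str, start_line: int, snippet: str) -> str:
    """Derive a deterministic finding id: identical input always yields
    the same id, so re-scans and baselines can match findings across runs."""
    payload = json.dumps(
        {"rule": rule_id, "file": file, "line": start_line, "snippet": snippet},
        sort_keys=True,  # canonical ordering keeps the hash stable
    )
    return hashlib.sha256(payload.encode()).hexdigest()[:16]

a = stable_id("MCP-A003", "src/handler.py", 42, "os.system(cmd)")
b = stable_id("MCP-A003", "src/handler.py", 42, "os.system(cmd)")
print(a == b)  # True: deterministic across runs
```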

MCP Context

When a finding occurs within an MCP tool handler, additional context is provided:

  • tool_name: The MCP tool where the vulnerability exists
  • handler_name: The function handling the tool call
  • transport: How the MCP server communicates (stdio, HTTP, WebSocket)

This context is crucial for assessing real-world exploitability:

  • A vulnerability in a tool handler is directly accessible via MCP protocol
  • The tool's description may indicate user-controllable input
  • Transport type affects attack surface (e.g., HTTP vs stdio)

MSSS Compliance

MCP-Scan evaluates compliance with the MCP Server Security Standard (MSSS):

Level  Score Required        Findings Allowed
0      < 60 or any critical  Not compliant
1      ≥ 60                  ≤ 3 high, no critical
2      ≥ 80                  No high or critical
3      ≥ 90                  No high or critical, comprehensive analysis
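Per the table above, the level assignment can be expressed as a simple cascade; this is a sketch, and treating "comprehensive analysis" as a boolean flag is an assumption:

```python
def msss_level(score: float, criticals: int, highs: int,
               comprehensive: bool) -> int:
    """Map a score and finding counts to an MSSS compliance level."""
    if criticals > 0 or score < 60:
        return 0  # any critical finding, or score below 60, fails outright
    if highs == 0 and score >= 90 and comprehensive:
        return 3
    if highs == 0 and score >= 80:
        return 2
    if highs <= 3:
        return 1  # score >= 60 already established above
    return 0  # more than 3 high findings

print(msss_level(92, 0, 0, True))   # 3
print(msss_level(85, 0, 0, False))  # 2
print(msss_level(70, 0, 2, False))  # 1
print(msss_level(95, 1, 0, True))   # 0 (any critical fails)
```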

See MSSS Scoring for calculation details.

Reviewing Results

Prioritization Strategy

  1. Critical + High Confidence - Immediate attention, likely exploitable
  2. Critical + Medium Confidence - Review within 24 hours
  3. High + High Confidence - Address in current sprint
  4. Medium + Any Confidence - Add to backlog
  5. Low + Low Confidence - Evaluate for false positives
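This triage order amounts to sorting findings by (severity, confidence); a sketch:

```python
SEVERITY = {"critical": 0, "high": 1, "medium": 2, "low": 3, "info": 4}
CONFIDENCE = {"high": 0, "medium": 1, "low": 2}

def triage(findings):
    """Order findings so critical, high-confidence items surface first."""
    return sorted(findings, key=lambda f: (SEVERITY[f["severity"]],
                                           CONFIDENCE[f["confidence"]]))

queue = triage([
    {"id": "b", "severity": "high", "confidence": "high"},
    {"id": "a", "severity": "critical", "confidence": "medium"},
    {"id": "c", "severity": "critical", "confidence": "high"},
])
print([f["id"] for f in queue])  # ['c', 'a', 'b']
```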

False Positive Assessment

Check these factors:

  1. Is the source actually user-controlled?
     • Environment variables in production may be trusted
     • Config files may be read-only in deployment

  2. Is there sanitization not detected?
     • Custom validation functions
     • Framework-level protection

  3. Is the sink actually dangerous in context?
     • Logging to secure systems
     • Database with parameterized queries at ORM level

  4. Is the code reachable?
     • Dead code paths
     • Test-only code

Using Baselines

For accepted findings (false positives or accepted risks):

# Generate baseline
mcp-scan baseline generate . --reason "Reviewed 2024-01-15" --accepted-by "security@company.com"

# Scan with baseline
mcp-scan scan . --baseline .mcp-scan-baseline.json
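Conceptually, the baseline is a set of accepted finding ids that are filtered out of subsequent scan results; a minimal sketch (the real baseline file format may differ):

```python
def apply_baseline(findings, baseline_ids):
    """Suppress findings whose stable id was previously accepted.
    Because ids are deterministic, an accepted finding matches the
    same code location on every re-scan."""
    return [f for f in findings if f["id"] not in baseline_ids]

baseline = {"abc123"}  # accepted on review
findings = [{"id": "abc123", "rule_id": "MCP-E001"},
            {"id": "def456", "rule_id": "MCP-A003"}]
print(apply_baseline(findings, baseline))  # only the new finding remains
```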

Integration with Security Workflow

┌─────────────────────────────────────────────────────────────┐
│                  Security Workflow                          │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  1. Development ──► Fast Mode Scan ──► Quick Feedback      │
│                                                             │
│  2. Pull Request ──► Fast Mode + Baseline ──► CI Gate      │
│                                                             │
│  3. Main Branch ──► Deep Mode ──► SARIF to GitHub          │
│                                                             │
│  4. Release ──► Evidence Bundle ──► Security Review        │
│                                                             │
│  5. Audit ──► Deep Mode + Full Report ──► Compliance       │
│                                                             │
└─────────────────────────────────────────────────────────────┘

Document               Description
Analysis Pipeline      NEW: How the surface command relates to scan: the complete analysis pipeline flow
Taint Analysis         Detailed explanation of the taint analysis engine: algorithm, sources, sinks, sanitizers
MCP Surface Detection  MCP surface extraction: tools, resources, prompts, SDK detection
Vulnerability Classes  The 14 vulnerability classes (A-N): descriptions, examples, severity
Rules Reference        Complete reference for all detection rules
MSSS Scoring           MSSS scoring system: category weights, penalty calculation, level mapping