9 MCP Tools That Make AI Understand Your Design System
By Conan McNicholl

Yesterday we announced Fragments and the idea behind it: if AI is writing your UI, it needs real, queryable context about your components. Not training data. Not a rules file. A live query layer.
Today we're going to walk through the 9 MCP tools that make that happen.
What is MCP?
Model Context Protocol (MCP) is an open standard that lets AI assistants call tools in external systems. Think of it as a function-call API that works across Claude Desktop, Cursor, VS Code, Claude Code, and any client that supports the protocol.
Instead of relying on training data that might be six months old or completely generic, AI can query YOUR design system in real-time. Every component, every prop, every token, every accessibility rule. All available on demand.
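Under the hood, every MCP tool invocation is a JSON-RPC 2.0 message with the method `tools/call`. A minimal sketch of that request shape (the method and params structure follow the MCP specification; the specific tool name and arguments are just illustrative):

```typescript
// Sketch of the JSON-RPC 2.0 request an MCP client sends to invoke a tool.
// "tools/call" and the { name, arguments } params shape come from the MCP
// spec; the tool and arguments below are illustrative examples.
interface ToolCallRequest {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: {
    name: string;
    arguments: Record<string, unknown>;
  };
}

function makeToolCall(
  id: number,
  name: string,
  args: Record<string, unknown>
): ToolCallRequest {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name, arguments: args },
  };
}

const req = makeToolCall(1, "fragments_discover", {
  useCase: "user authentication form",
});
```

The client library (Claude Desktop, Cursor, etc.) builds these messages for you; the point is that a tool call is just structured data the model can emit, not magic.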
Fragments ships with a standalone MCP server (@fragments-sdk/mcp) that exposes 9 tools. Here's the full picture:
| Tool | Purpose | Requires Dev Server |
|---|---|---|
fragments_discover | Find components by keyword, category, or use case | No |
fragments_inspect | Deep dive into one component's props, usage, and examples | No |
fragments_implement | One-shot helper: describe what you want, get everything back | No |
fragments_render | Render a component in a real browser, return a screenshot | Yes |
fragments_fix | Generate patches to replace hardcoded CSS with design tokens | Yes |
fragments_a11y | Run axe-core accessibility audit, return WCAG compliance score | Yes |
fragments_graph | Query the component relationship graph | No |
fragments_blocks | Search pre-built composition patterns | No |
fragments_tokens | Browse CSS design tokens by category | No |
Six tools work out of the box with just your fragments.json file. Three require a running dev server for browser-based operations. Let's go through each one.
The Discovery Workflow: discover, inspect, implement
These three tools form the core loop. They're designed to be called in sequence, or collapsed into a single call when speed matters.
fragments_discover
The starting point. AI uses this to find components by keyword, filter by category or status, or ask for AI-powered suggestions based on a use case description.
```
fragments_discover(useCase: "user authentication form")
```

Returns ranked suggestions with confidence scores, descriptions, and usage guidance. But notice the blockHint in the response:

```json
{
  "useCase": "user authentication form",
  "suggestions": [
    { "component": "Input", "confidence": "high", "category": "forms" },
    { "component": "Button", "confidence": "high", "category": "forms" },
    { "component": "Card", "confidence": "medium", "category": "layout" }
  ],
  "recommendation": "Best match: Input (high confidence)",
  "blockHint": "Related blocks: Login Form. Use fragments_blocks(search: \"authentication\") for ready-to-use patterns."
}
```

The tool doesn't just find individual components. It cross-references against composition blocks, so AI knows there's already a ready-made Login Form pattern available. No need to assemble one from scratch.
You can also filter by category (forms, layout, feedback, navigation, etc.), status (stable, beta, deprecated), or use compact mode for token-efficient responses.
fragments_inspect
Once AI knows which component it wants, fragments_inspect delivers the full picture: props with types and defaults, usage guidelines, accessibility rules, code examples, and relationships to other components.
```
fragments_inspect(component: "Input", fields: ["props", "guidelines", "examples"])
```

The fields parameter is key here. Instead of dumping the entire fragment (which might be hundreds of lines), AI can request only what it needs. Need just the props? Ask for ["props"]. Want usage rules and accessibility notes? Ask for ["guidelines.when", "guidelines.accessibility"]. This keeps token usage tight.
A full inspect response includes:
- meta: name, category, status, tags, description
- props: every prop with type, required/optional, default value, constraints, and description
- guidelines: when to use, when NOT to use, general guidelines, accessibility rules, and alternative components
- examples: import statement, variant code examples with descriptions
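The field selection above is easy to picture. A rough sketch of dotted-path picking over a fragment object (the pick function and the fragment shape here are illustrative, not the actual server implementation):

```typescript
// Illustrative only: how a fields filter like ["props", "guidelines.when"]
// could narrow a fragment object before it is sent back to the model.
type Json = Record<string, unknown>;

function pick(fragment: Json, fields: string[]): Json {
  const out: Json = {};
  for (const path of fields) {
    const keys = path.split(".");
    // Walk down the source object following the dotted path.
    let src: unknown = fragment;
    for (const k of keys) {
      if (src === null || typeof src !== "object") {
        src = undefined;
        break;
      }
      src = (src as Json)[k];
    }
    if (src === undefined) continue;
    // Rebuild the same nesting in the output.
    let dst = out;
    for (const k of keys.slice(0, -1)) {
      dst[k] = (dst[k] as Json) ?? {};
      dst = dst[k] as Json;
    }
    dst[keys[keys.length - 1]] = src;
  }
  return out;
}
```

Calling `pick(inputFragment, ["props", "guidelines.when"])` returns only those two branches, which is why a targeted inspect costs a fraction of the tokens of a full dump.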
The "when not to use" guidance is particularly valuable. When AI is deciding between Button and Link, or between Switch and Checkbox, these rules give it the same judgment call a senior engineer would make.
fragments_implement
The one-shot helper. Describe what you want to build and get back everything in a single call. No need to chain discover, inspect, blocks, and tokens separately.
```
fragments_implement(useCase: "login form with email and password")
```

Under the hood, this runs parallel searches across components, blocks, and tokens and fuses the results. The response includes:
- Top matching components with import statements, prop summaries, and code examples
- Matching composition blocks with ready-to-use code
- Relevant CSS tokens grouped by category
For that login form query, you'd get back the Input, Button, Card, Stack, Text, and Link components, plus the Login Form block with this exact code:
```jsx
<Card variant="elevated">
  <Card.Header>
    <Card.Title>Sign In</Card.Title>
    <Card.Description>Welcome back! Please enter your details.</Card.Description>
  </Card.Header>
  <Card.Body>
    <Stack gap="md">
      <Input label="Email" type="email" placeholder="Enter your email" />
      <Input label="Password" type="password" placeholder="Enter your password" />
      <Link href="#" variant="subtle"><Text size="sm">Forgot password?</Text></Link>
      <Button variant="primary" fullWidth>Sign In</Button>
    </Stack>
  </Card.Body>
  <Card.Footer>
    <Text size="sm" color="tertiary">
      Don't have an account? <Link href="#">Sign up</Link>
    </Text>
  </Card.Footer>
</Card>
```

That's your actual components with your actual API. Not a generic HTML form. Not a guess. AI starts from a real, tested pattern and adapts it.
Visual Verification: render
Here's where things get interesting. fragments_render renders a component in a real browser (via a running Fragments dev server) and returns a screenshot. AI can literally see what it's building.
```
fragments_render(component: "Button", variant: "Primary", props: { children: "Submit" })
```

Returns a PNG screenshot of the rendered component. You can customize the viewport size, pass any props, and select specific variants.
But the real power is the Figma comparison mode. Pass a figmaUrl and the tool will:
- Render your component in the browser
- Fetch the Figma design frame
- Run a pixel-level diff between the two
- Return the diff percentage and a highlighted overlay showing exactly where they diverge
```
fragments_render(
  component: "Card",
  variant: "Elevated",
  figmaUrl: "https://figma.com/file/abc123?node-id=45:678",
  threshold: 1.0
)
```

If the diff exceeds the threshold, AI knows the implementation doesn't match the design. It can inspect the component, identify what's off, and self-correct. This closes the loop between "AI generates code" and "code matches the design."
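The diff math itself is simple. A minimal sketch of a per-pixel comparison over two same-sized RGBA buffers (real diffing libraries such as pixelmatch add anti-aliasing detection and perceptual color distance on top of this):

```typescript
// Naive pixel diff: percentage of pixels whose RGBA channels differ by
// more than `tolerance`. This only illustrates where a diff percentage
// comes from; production tools are more sophisticated.
function diffPercent(
  a: Uint8ClampedArray,
  b: Uint8ClampedArray,
  tolerance = 8
): number {
  if (a.length !== b.length) throw new Error("images must be the same size");
  const pixels = a.length / 4; // 4 bytes (RGBA) per pixel
  let mismatched = 0;
  for (let i = 0; i < a.length; i += 4) {
    const differs =
      Math.abs(a[i] - b[i]) > tolerance ||         // R
      Math.abs(a[i + 1] - b[i + 1]) > tolerance || // G
      Math.abs(a[i + 2] - b[i + 2]) > tolerance || // B
      Math.abs(a[i + 3] - b[i + 3]) > tolerance;   // A
    if (differs) mismatched++;
  }
  return (mismatched / pixels) * 100;
}
```

A threshold of 1.0 then reads as "flag anything where more than 1% of pixels differ."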
Quality Enforcement: fix + a11y
Two tools that catch problems before they ship.
fragments_fix
Token drift is the silent killer of design systems. Someone hardcodes color: #6366f1 instead of using var(--fui-brand). Works today, breaks tomorrow when the brand color changes.
fragments_fix scans rendered styles for hardcoded values and generates unified diff patches:
```
fragments_fix(component: "StatsCard", fixType: "token")
```

Returns patches like:

```diff
- color: #6366f1;
+ color: var(--fui-brand);
- padding: 16px;
+ padding: var(--fui-space-4);
```

AI can apply these automatically or present them for review. Either way, drift gets caught and fixed at the tool level, not in a code review three weeks later.
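A rough sketch of how such a scanner could work: look each hardcoded value up in a value-to-token map built from your token definitions, and emit diff lines for every hit. (The map and patch format here are illustrative; the actual fragments_fix logic inspects rendered styles, not raw source.)

```typescript
// Illustrative token-drift scanner: replaces known hardcoded CSS values
// with their design-token equivalents and emits unified-diff-style lines.
// In a real setup this map would be generated from your token set.
const VALUE_TO_TOKEN: Record<string, string> = {
  "#6366f1": "var(--fui-brand)",
  "16px": "var(--fui-space-4)",
};

function tokenPatches(css: string): string[] {
  const patches: string[] = [];
  for (const line of css.split("\n")) {
    let fixed = line;
    for (const [value, token] of Object.entries(VALUE_TO_TOKEN)) {
      fixed = fixed.split(value).join(token);
    }
    if (fixed !== line) {
      patches.push(`- ${line.trim()}`, `+ ${fixed.trim()}`);
    }
  }
  return patches;
}
```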
fragments_a11y
Runs an axe-core accessibility audit on rendered components and returns WCAG compliance results. Checks against AA by default, with optional AAA mode for enhanced compliance.
```
fragments_a11y(component: "Input", standard: "AA", includeFixPatches: true)
```

Returns:
- Total violations count
- Per-variant pass/fail results
- AA and AAA compliance percentages
- Violation details with impact level and description
- Optional auto-fix suggestions when includeFixPatches is true
AI can check its own work. Build a component, audit it, fix the violations, re-audit. No human in the loop for catching missing aria-labels or insufficient color contrast.
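The per-variant results roll up into the compliance percentages. A sketch of that rollup, assuming the audit yields a violation count per rendered variant (the shapes here are assumptions, not the tool's real output format):

```typescript
// Illustrative rollup: percentage of audited variants with zero
// axe-core violations. The input shape is an assumption for the sketch.
function compliancePercent(violationsByVariant: Record<string, number>): number {
  const variants = Object.keys(violationsByVariant);
  if (variants.length === 0) return 100;
  const passing = variants.filter((v) => violationsByVariant[v] === 0).length;
  return Math.round((passing / variants.length) * 100);
}
```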
Architecture Awareness: graph
fragments_graph queries the component relationship graph that Fragments builds during compilation. This gives AI structural understanding of how your design system fits together.
Eight query modes:
| Mode | What it answers |
|---|---|
dependencies | What does this component depend on? |
dependents | What depends on this component? |
impact | What breaks if I change Button? (transitive) |
path | What's the shortest path from Card to Dialog? |
composition | Show the compound component tree (e.g., Card -> Card.Header, Card.Body, Card.Footer) |
alternatives | What can I use instead of this component? |
islands | Are there disconnected component groups? |
health | Overall graph metrics: density, orphans, clusters |
The impact mode is particularly useful. Before AI refactors a foundational component, it can check the blast radius:
```
fragments_graph(mode: "impact", component: "Button", maxDepth: 3)
```

Returns every component, block, and pattern affected by a Button change, up to 3 levels deep. That's architectural awareness most developers don't have without manually tracing dependencies.
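Conceptually, an impact query is a breadth-first walk over the reverse dependency edges, capped at maxDepth. A sketch under that assumption (the graph shape is illustrative; the real graph also spans blocks and patterns):

```typescript
// Illustrative transitive-impact query: walk "X is used by Y" edges
// breadth-first from a component, up to maxDepth levels.
type Graph = Record<string, string[]>; // component -> components that use it

function impact(graph: Graph, start: string, maxDepth: number): string[] {
  const seen = new Set<string>();
  let frontier = [start];
  for (let depth = 0; depth < maxDepth && frontier.length > 0; depth++) {
    const next: string[] = [];
    for (const node of frontier) {
      for (const dependent of graph[node] ?? []) {
        if (!seen.has(dependent) && dependent !== start) {
          seen.add(dependent);
          next.push(dependent);
        }
      }
    }
    frontier = next;
  }
  return [...seen];
}
```

With a toy graph where Button is used by Card and Dialog, and Card by StatsCard, `impact(graph, "Button", 2)` surfaces all three, while depth 1 stops at the direct dependents.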
Composition at Scale: blocks + tokens
fragments_blocks
Pre-built composition patterns across categories including:
- Authentication: Login Form, Registration Form
- Marketing: Hero Section, Feature Grid, Pricing Comparison, FAQ Section, Contact Form
- Dashboard: Dashboard Layout, Dashboard Page, Stats Card, Data Table, Activity Feed, Empty State, Paginated Table
- Settings: Settings Panel, Account Settings, Settings Drawer
- E-commerce: Product Card, Shopping Cart, Checkout Form
- AI: Chat Interface, Chat Messages, Thinking States
- Navigation: Command Palette
Each block includes the component list, import statements, and ready-to-use JSX code. AI uses these instead of inventing structure from scratch.
```
fragments_blocks(category: "authentication")
```

Returns the Login Form and Registration Form blocks with full code. AI can then adapt the pattern to the specific requirements instead of starting from a blank file.
fragments_tokens
Browse all CSS design tokens by category. Colors, spacing, typography, surfaces, shadows, radius, borders, text, focus, layout, code, component-sizing. All queryable.
```
fragments_tokens(category: "colors")
```

Returns the actual token names (--fui-brand, --fui-brand-hover, --fui-surface-1, etc.) with descriptions. AI gets the real custom property names from your system, not made-up class names from training data. These tokens are all generated from just 4 seed values, so they stay consistent across your entire application.
Setup
A few lines in your MCP config:
```json
{
  "mcpServers": {
    "fragments": {
      "command": "npx",
      "args": ["-y", "@fragments-sdk/mcp@latest"]
    }
  }
}
```

Drop that into your .cursor/mcp.json, .vscode/mcp.json, or Claude Desktop config. The server auto-discovers your fragments.json from the project root.
For the three tools that require a browser (render, fix, a11y), start the dev server and pass the viewer URL:
```json
{
  "mcpServers": {
    "fragments": {
      "command": "npx",
      "args": ["-y", "@fragments-sdk/mcp@latest", "--viewer-url", "http://localhost:6006"]
    }
  }
}
```

Works with Claude Desktop, Cursor, VS Code, Windsurf, and Claude Code.
The Bigger Picture
A design system without a query layer is just a collection of files that AI has to guess about. Training data goes stale. Rules files get ignored. Documentation drifts.
These 9 tools turn a passive component library into an active intelligence layer. AI doesn't guess anymore. It discovers, inspects, implements from tested patterns, renders to verify, audits for accessibility, and self-corrects when things drift.
That's not a prompt hack. That's infrastructure.
Tomorrow we'll look at how Fragments compares to other approaches for keeping AI consistent with your design system. Stay tuned.
Browse the components, try the MCP server, or read the announcement post.
