The Three Primitives Explained: When to Use MCP Tools, Resources, and Prompts
Master the core concepts of MCP with a crystal-clear framework for understanding Tools, Resources, and Prompts - know exactly when to use each one.
What You'll Learn
- The fundamental differences between Tools, Resources, and Prompts
- A simple framework for choosing the right primitive
Time & Difficulty
Time: 15 minutes
Level: Beginner
What You'll Need
- No technical setup required
- Basic understanding of client-server architecture
Prerequisites
- Basic understanding of MCP concepts
- Familiarity with JSON
The Three Primitives Explained: When to Use MCP Tools, Resources, and Prompts
When you’re starting with the Model Context Protocol (MCP), the three core primitives - Tools, Resources, and Prompts - can seem confusingly similar. They all enable interactions between clients and servers, but when and how you use each one makes all the difference.
New to MCP? Start with our beginner-friendly overview: What is MCP in Plain English? Unpacking the ‘USB-C for AI’ Analogy.
This guide provides a crystal-clear framework to eliminate the confusion forever.
The Simple Framework
Think of the three primitives as answers to three fundamental questions:
| Primitive | Key Question | Control | Example |
|---|---|---|---|
| Resources | “What does the AI need to know?” | Application-controlled | File contents, database schemas |
| Tools | “What does the AI need to do?” | Model-controlled | API calls, file modifications |
| Prompts | “How can I guide the user or AI?” | User-controlled | Workflow templates, slash commands |
The most important distinction is who controls when they’re used.
Resources: “What does the AI need to know?”
The Concept
Resources are read-only data that provide context to AI models. Think of them as a library of information that the AI can reference but not modify.
When to Use Resources
- ✅ Providing reference data (documentation, schemas, logs)
- ✅ Giving AI access to file contents
- ✅ Sharing configuration or status information
- ✅ When the client decides what information to include
Control Model: Application-Controlled
The client application (like VS Code or Claude Desktop) is in charge of deciding which resources to load. While a user might explicitly attach a file, the application can also use its own logic - or even hints from the AI model - to automatically fetch relevant resources. The key is that the AI model itself doesn’t directly invoke a “read resource” command in the same way it invokes a tool; the client acts as the intermediary.
Real-World Example
{
  "uri": "file:///project/config.json",
  "name": "Project Configuration",
  "mimeType": "application/json",
  "description": "Current project settings and API keys"
}
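If you expose data like this from your own server, a minimal sketch using the official Python SDK's FastMCP class might look like the snippet below. The server name, URI, and config path are illustrative placeholders, not anything defined by the protocol.

```python
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("project-context")

# A read-only resource the client can attach as context for the model.
# The URI and file path are placeholders for your own project data.
@mcp.resource(
    "file:///project/config.json",
    name="Project Configuration",
    mime_type="application/json",
)
def read_project_config() -> str:
    """Current project settings."""
    return Path("/project/config.json").read_text()

if __name__ == "__main__":
    mcp.run()
```

Note that the function only returns data; there is no side effect, which is exactly what distinguishes a Resource from a Tool.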
Business Impact: Resources improve AI accuracy and reduce costs. By providing comprehensive context upfront (like a full file or database schema), the AI can make better decisions with fewer follow-up questions, leading to faster, more accurate results and lower token usage.
Tools: “What does the AI need to do?”
The Concept
Tools are executable functions that allow AI models to take actions and interact with external systems. They’re the “hands” of the AI.
When to Use Tools
- ✅ Performing actions (create, update, delete)
- ✅ Making API calls to external services
- ✅ Running calculations or data processing
- ✅ When the AI should automatically decide to use them
Control Model: Model-Controlled
The AI model decides when to invoke tools based on the conversation context. The model analyzes what needs to be done and automatically calls the appropriate tool (typically with human approval in the client).
Real-World Example
{
  "name": "create_database_table",
  "description": "Create a new table in the database",
  "inputSchema": {
    "type": "object",
    "properties": {
      "table_name": { "type": "string" },
      "columns": { "type": "array" }
    },
    "required": ["table_name", "columns"]
  }
}
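For comparison, a hedged sketch of the same tool with the Python SDK's FastMCP is shown below; the input schema is generated from the type hints, and the function body is a placeholder rather than a real database call.

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("database-admin")

# The model may choose to call this tool on its own. The input schema is
# derived from the type hints, and the docstring becomes the description.
@mcp.tool()
def create_database_table(table_name: str, columns: list[str]) -> str:
    """Create a new table in the database."""
    # Placeholder: execute the real CREATE TABLE statement here.
    return f"Created table {table_name} with {len(columns)} columns"
```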
Business Impact: Tools enable automation and reduce manual work. They move AI from being an assistant to being an autonomous agent that can execute multi-step workflows, such as creating a JIRA ticket, running a database query, and sending a Slack notification, all from a single user prompt. A single conversation can trigger multiple automated actions, dramatically improving productivity.
To understand the broader implications of this shift toward autonomous AI agents, read: The MCP Ripple Effect: How One Protocol is Reshaping AI Development.
Prompts: “How can I guide the user or AI?”
The Concept
Prompts are reusable templates that create standardized workflows. They’re like “shortcuts” that users can trigger to start specific processes.
When to Use Prompts
- ✅ Creating standardized workflows
- ✅ Providing guided templates for common tasks
- ✅ When users should explicitly choose what to do
- ✅ Making complex processes easily discoverable
Control Model: User-Controlled
Users explicitly invoke prompts through UI elements like slash commands or menu options. They’re never automatically triggered.
Real-World Example
{
  "name": "code-review",
  "description": "Review code changes for best practices",
  "arguments": [{
    "name": "git_diff",
    "description": "Git diff output to review",
    "required": true
  }]
}
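As a rough FastMCP equivalent, a prompt is a decorated function whose parameters become the prompt's arguments; the wording of the returned template below is purely illustrative.

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("review-helpers")

# Users invoke this explicitly (for example via a slash command in the
# client); it is never triggered automatically by the model.
@mcp.prompt()
def code_review(git_diff: str) -> str:
    """Review code changes for best practices."""
    return (
        "Review the following git diff for best practices, naming, "
        f"and potential bugs:\n\n{git_diff}"
    )
```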
Business Impact: Prompts democratize expertise. They let you encode expert-level workflows into simple, reusable commands, which standardizes best practices across organizations, reduces onboarding time for new team members, and ensures consistent quality in automated tasks.
The Decision Matrix
Use this matrix when choosing between primitives:
Does the AI need information to make decisions?
→ Use Resources
- File contents for code analysis
- Database schemas for query generation
- Log files for troubleshooting
Should the AI automatically take action?
→ Use Tools
- Send emails based on conversation
- Create database entries
- Make API calls to external services
Should users explicitly trigger workflows?
→ Use Prompts
- Code review templates
- Documentation generation
- Debugging workflows
Common Mistakes to Avoid
❌ Wrong: Using Tools for Read-Only Operations
{
  "name": "get_file_contents",
  "description": "Read a file"
}
Why it’s wrong: Tools should perform actions, not just retrieve data. Better: Use a Resource with the file URI.
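One way to apply that fix is a parameterized resource template, sketched here with FastMCP; the project:// scheme, the filename parameter, and the root directory are all made up for illustration.

```python
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("file-context")

PROJECT_ROOT = Path("/project")  # illustrative root directory

# A resource template: the client fills in {filename} and loads the file
# as read-only context, instead of the model calling a "read file" tool.
@mcp.resource("project://{filename}")
def read_project_file(filename: str) -> str:
    """Contents of a file in the project directory."""
    return (PROJECT_ROOT / filename).read_text()
```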
❌ Wrong: Using Resources for Dynamic Actions
{
  "uri": "action://send-email",
  "name": "Send Email Function"
}
Why it’s wrong: Resources are for data, not actions. Better: Create a Tool for sending emails.
❌ Wrong: Using Prompts for Automatic Actions
{
  "name": "auto-backup",
  "description": "Automatically backup files"
}
Why it’s wrong: Prompts require user initiation. Better: Use a Tool that the AI can call automatically.
Implementation Best Practices
For Resources
- Use descriptive URIs: file:///docs/api-spec.json is better than file:///temp/1.json
- Set proper MIME types: Helps clients understand content format
- Subscribe to updates: For frequently changing resources
- Consider size limits: Large resources should be selectively loaded
For Tools
- Detailed schemas: Provide complete JSON Schema definitions
- Clear descriptions: Help the AI understand when to use each tool
- Error handling: Return meaningful error messages
- Atomic operations: Keep each tool focused on one specific task
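A short sketch tying those points together with FastMCP is shown below; the send_invoice tool and its checks are invented for illustration, and raising an exception is one way FastMCP can surface a meaningful error message back to the client.

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("billing")

@mcp.tool()
def send_invoice(customer_email: str, amount_cents: int) -> str:
    """Send a single invoice email to one customer (atomic: one invoice only)."""
    # Validate early and raise a meaningful error the model can act on.
    if "@" not in customer_email:
        raise ValueError(f"Invalid email address: {customer_email!r}")
    if amount_cents <= 0:
        raise ValueError("amount_cents must be a positive integer")
    # Placeholder: call your real invoicing/email service here.
    return f"Invoice for {amount_cents} cents sent to {customer_email}"
```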
For Prompts
- Intuitive names: Users should understand what the prompt does
- Required vs optional: Clearly mark which arguments are needed
- Validation: Check argument formats before processing
- Documentation: Provide examples of expected inputs
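The same guidelines apply to prompts, again shown as a hedged FastMCP sketch; the generate_docs name, the optional audience argument, and the validation rule are illustrative.

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("doc-helpers")

@mcp.prompt()
def generate_docs(source_code: str, audience: str = "developers") -> str:
    """Generate documentation for a piece of source code.

    source_code is required; audience is optional and defaults to "developers".
    """
    # Check argument formats before building the prompt template.
    if not source_code.strip():
        raise ValueError("source_code must not be empty")
    return (
        f"Write {audience}-facing documentation for this code, including "
        f"a short usage example:\n\n{source_code}"
    )
```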
The Control Hierarchy in Practice
Understanding who controls what is crucial for building intuitive MCP integrations:
[ START: User Interaction ]
|
v
+-----------------------------+
| 1. USER invokes a PROMPT | <-- User-Controlled
| (e.g., /code-review) | (The "What should we do?" trigger)
+-----------------------------+
|
v
+-----------------------------+
| 2. APP loads RESOURCES | <-- Application-Controlled
| (e.g., attaches files) | (The "What do we need to know?" context)
+-----------------------------+
|
v
+-----------------------------+
| 3. AI model calls TOOLS | <-- Model-Controlled
| (e.g., runs linter) | (The "How do we do it?" action)
+-----------------------------+
|
v
[ END: Task Complete ]
- A User Initiates a Workflow (Prompt): The user triggers the process by invoking a /code-review Prompt. This is the user-controlled entry point.
- The Application Gathers Context (Resources): Based on the user’s open files, the client application automatically attaches the relevant source code files as Resources. This is the application-controlled context-setting step.
- The AI Takes Action (Tools): The AI model analyzes the code provided in the resources. It decides to call a lint-file Tool to check for syntax errors and a post-to-github Tool to leave a comment on the pull request. This is the model-controlled action step, where tools interact with external systems (see the sketch below).
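To make the hierarchy concrete, here is a compact sketch of a single FastMCP server exposing all three primitives for this code-review flow. The lint_file and post_to_github tools are stand-ins; a real server would wrap an actual linter and the GitHub API.

```python
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("code-review")

# 1. User-controlled: the user invokes this prompt (e.g., /code-review).
@mcp.prompt()
def code_review(git_diff: str) -> str:
    """Kick off a code review of the given diff."""
    return f"Review this diff using the available tools:\n\n{git_diff}"

# 2. Application-controlled: the client decides when to attach this context.
@mcp.resource("repo://{filename}")
def read_source_file(filename: str) -> str:
    """Source file contents for review context."""
    return Path(filename).read_text()

# 3. Model-controlled: the AI calls these as it works through the review.
@mcp.tool()
def lint_file(filename: str) -> str:
    """Run the linter on one file and report any issues."""
    return f"Lint results for {filename}: no issues found"  # placeholder

@mcp.tool()
def post_to_github(pull_request: int, comment: str) -> str:
    """Leave a review comment on a pull request."""
    return f"Posted comment on PR #{pull_request}: {comment[:40]}"  # placeholder

if __name__ == "__main__":
    mcp.run()
```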
Real-World Business Scenario
Let’s see all three primitives working together in a customer support scenario:
The Setup
A customer submits a bug report about slow database queries.
The Flow
- User selects the /debug-performance Prompt
- Client loads database logs as Resources
- AI analyzes the logs and calls a Tool to run performance diagnostics
- AI uses another Tool to create a support ticket with findings
The Result
- Resources provided context about system state
- Tools performed analysis and took action
- Prompts made the complex workflow accessible with one command
Next Steps
Now that you understand the three primitives:
- Audit your current MCP implementations: Are you using the right primitive for each use case?
- Start simple: Implement one primitive at a time in new projects
- Think about control: Always ask “who should control when this happens?”
- User experience: Design your primitives from the user’s perspective
Ready to implement? Put these concepts into practice with our hands-on tutorial: Building Your First MCP Server with Python.
Remember: The best MCP implementations feel natural because they use the right primitive for each interaction. Master this framework, and your MCP integrations will be intuitive and powerful.
Key Takeaways
- Resources = Context (application-controlled)
- Tools = Actions (model-controlled)
- Prompts = Workflows (user-controlled)
- Control determines when and how primitives are used
- Choose based on who should initiate the interaction
- All three often work together in complete workflows
Related Guides
A Developer's Guide to MCP Security: Beyond the Basics
Centralize your understanding of MCP security with this comprehensive guide. Learn practical steps for authenticating servers, preventing prompt injection, validating URIs, and managing secrets.
Building Your First MCP Server with Python
A step-by-step tutorial on how to create and run a basic Model Context Protocol (MCP) server using the Python SDK, FastMCP.
Connect Claude to Your Business Files with MCP
Step-by-step guide to setting up Claude AI to read, analyze, and work with your business documents and spreadsheets automatically.