Difficulty: 3/5
Published: 6/26/2025
By: UnlockMCP Team

Beyond Claude: How to Use MCP with Any LLM

MCP is an open, model-agnostic protocol. This guide shows you how to connect your MCP servers to OpenAI, local Ollama models, and the wider AI ecosystem by building your own client.

What You'll Learn

  • Why MCP is a model-agnostic open protocol
  • How to build a basic Python MCP client from scratch

Time & Difficulty

Time: 20 minutes

Level: Intermediate

What You'll Need

  • A running MCP server to test against (e.g., the weather server from the official quickstart)
  • A Python environment with `mcp`, `python-dotenv`, and `openai` or `ollama` packages installed

Prerequisites

  • Python 3.10+ installed
  • Basic knowledge of MCP servers
  • API key for OpenAI or access to a local Ollama instance
Tags: mcp, openai, ollama, sdk, client, open-protocol

Beyond Claude: How to Use MCP with Any LLM

A common perception is that the Model Context Protocol (MCP) is tightly coupled with Anthropic’s Claude models. While many examples feature Claude, this overlooks the most powerful aspect of the protocol: MCP is completely open and model-agnostic.

New to MCP? Start with our foundational guide: What is MCP in Plain English? Unpacking the ‘USB-C for AI’ Analogy.

Think of MCP as the “USB-C port for AI.” The port doesn’t care if you plug in a Samsung phone, an Apple iPad, or a third-party accessory. Its job is to provide a standard connection. Similarly, an MCP server provides a standard way to expose tools and data. The client application is free to connect that server to any LLM it chooses—be it from OpenAI, Google, a local Ollama instance, or any other provider.

This guide will demystify the process by showing you exactly how to build your own simple Python client to connect your MCP servers to the wider AI ecosystem.

Why MCP is Truly Open: The Client-Server Separation

The magic of MCP lies in its architecture. Your MCP Server has one job: to define and expose its capabilities (Tools, Resources, Prompts) according to the protocol. It has no knowledge of, and no dependence on, which LLM will eventually use it.
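
To make that concrete, here is a minimal, illustrative server sketch using the FastMCP helper from the official mcp Python SDK. The echo tool is a made-up example; the point is that nothing in this file references any particular LLM.

# server.py - an illustrative MCP server; note that no LLM appears anywhere
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def echo(text: str) -> str:
    """Echo the provided text back to the caller."""
    return f"Echo: {text}"

if __name__ == "__main__":
    # Serve over stdio so any MCP client can launch this script and talk to it
    mcp.run(transport="stdio")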

The MCP Client is the bridge. It is responsible for:

  1. Connecting to your MCP server.
  2. Discovering the available tools.
  3. Communicating with an LLM of your choice.
  4. Translating the LLM’s request to use a tool into an MCP tools/call request.
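
Under the hood, these steps travel as JSON-RPC 2.0 messages. As a rough illustration, a tools/call exchange has approximately the shape below; the tool name, arguments, and id are invented for the example, and the mcp SDK builds and parses these messages for you.

# Approximate shape of a tools/call exchange, written as Python dicts (illustrative values only)
tools_call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_weather",                # hypothetical tool name
        "arguments": {"city": "Sacramento"},  # arguments matching the tool's input schema
    },
}

tools_call_result = {
    "jsonrpc": "2.0",
    "id": 2,
    "result": {
        "content": [{"type": "text", "text": "Sunny and 24°C"}],
        "isError": False,
    },
}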

This decoupling is the key to avoiding vendor lock-in. Your investment in building robust MCP servers is future-proof, as you can always swap the LLM on the client-side without changing your servers.

Practical Guide: Building a Python Client for OpenAI

Let’s build a command-line chat client that connects to a local MCP server (like the weather-server from the official quickstart) and uses OpenAI’s API to reason and call tools.

Step 1: Project Setup

First, set up your environment. We recommend using uv.

# Create and enter your project directory
uv init mcp-openai-client
cd mcp-openai-client

# Create and activate a virtual environment
uv venv
source .venv/bin/activate

# Install required packages
uv add "mcp[cli]" openai python-dotenv

Next, create a `.env` file to securely store your OpenAI API key.

# .env
OPENAI_API_KEY="sk-..."

Finally, create your client script file: client.py.
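
If you have followed along, your project should now look roughly like this (uv init may also generate a few extra files, such as a README):

mcp-openai-client/
├── .env            # holds OPENAI_API_KEY
├── .venv/          # virtual environment created by uv
├── client.py       # the client we write in Step 2
└── pyproject.toml  # created by uv init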

Step 2: The Python Client Code

This script will contain the full logic for connecting to an MCP server and an LLM.

# client.py
import asyncio
import json
import os
import sys
from dotenv import load_dotenv
from openai import AsyncOpenAI

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from mcp.types import Tool

# Load API key from .env file
load_dotenv()

# Initialize the OpenAI client
llm_client = AsyncOpenAI(api_key=os.getenv("OPENAI_API_KEY"))

async def main(server_path: str):
    """
    Main function to run the MCP client and interact with an LLM.
    """
    print("--- MCP Client for OpenAI ---")
    print(f"Connecting to server: {server_path}")

    # Define how to start the local MCP server
    server_params = StdioServerParameters(command="python", args=[server_path])

    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # 1. Discover tools from the MCP server
            list_tools_result = await session.list_tools()
            available_tools: list[Tool] = list_tools_result.tools
            
            # Format tools for the OpenAI API
            openai_tools = [
                {
                    "type": "function",
                    "function": {
                        "name": tool.name,
                        "description": tool.description or "",
                        "parameters": tool.inputSchema,
                    },
                }
                for tool in available_tools
            ]

            print(f"✅ Server connected. Found tools: {[tool.name for tool in available_tools]}")
            print("Type your query or 'quit' to exit.")

            messages = []
            while True:
                query = input("> ")
                if query.lower() == 'quit':
                    break
                
                messages.append({"role": "user", "content": query})

                # 2. First call to the LLM with the user prompt and available tools
                response = await llm_client.chat.completions.create(
                    model="gpt-4o",
                    messages=messages,
                    tools=openai_tools,
                    tool_choice="auto",
                )
                
                response_message = response.choices[0].message
                tool_calls = response_message.tool_calls

                # 3. Check if the LLM wants to call any tools
                if tool_calls:
                    messages.append(response_message)  # Add assistant's turn to history
                    
                    for tool_call in tool_calls:
                        tool_name = tool_call.function.name
                        tool_args = json.loads(tool_call.function.arguments)

                        print(f"🤖 Calling tool: {tool_name} with args: {tool_args}")
                        
                        # 4. Execute the tool call via the MCP server
                        tool_result = await session.call_tool(tool_name, tool_args)
                        # Extract only the text content blocks from the tool result
                        tool_output_text = "".join(c.text for c in tool_result.content if c.type == "text")

                        # 5. Append tool result to message history
                        messages.append(
                            {
                                "tool_call_id": tool_call.id,
                                "role": "tool",
                                "name": tool_name,
                                "content": tool_output_text,
                            }
                        )
                    
                    # 6. Second call to the LLM with the tool results to get a final answer
                    second_response = await llm_client.chat.completions.create(
                        model="gpt-4o",
                        messages=messages,
                    )
                    final_answer = second_response.choices[0].message.content
                    print(f"💬 {final_answer}")
                    messages.append({"role": "assistant", "content": final_answer})
                else:
                    # No tool calls, just a direct answer
                    answer = response_message.content
                    print(f"💬 {answer}")
                    messages.append({"role": "assistant", "content": answer})

if __name__ == "__main__":
    if len(sys.argv) < 2:
        print("Usage: python client.py <path_to_mcp_server_script>")
        sys.exit(1)
    
    asyncio.run(main(sys.argv[1]))

How to Run It

Have your MCP server script ready (e.g., weather.py). Need to build an MCP server? Follow our guide: Building Your First MCP Server with Python.

Run the client from your terminal, passing the path to your server script:

python client.py /path/to/your/weather.py

Start asking questions! Try “What’s the weather like in Sacramento?”

Adapting the Client for Local Models (Ollama)

The beauty of this architecture is its flexibility. Want to use a local model with Ollama instead? The core MCP logic remains identical. You only need to change the LLM client initialization and the API call.

First, install the Ollama package:

uv add ollama

Then, modify these few lines in client.py:

# client.py (Ollama version)

# Replace the OpenAI import and client
# from openai import AsyncOpenAI
from ollama import AsyncClient

# ...

# Initialize the Ollama client instead
# llm_client = AsyncOpenAI(...)
llm_client = AsyncClient()

async def main(server_path: str):
    # ... (all MCP connection logic is the same) ...

            # ... (the chat loop) ...
            
                # The API call is slightly different but serves the same purpose
                response = await llm_client.chat(
                    model="llama3", # Or your preferred Ollama model
                    messages=messages,
                    tools=openai_tools, # Ollama uses the OpenAI tool format
                )
                
                response_message = response['message']
                tool_calls = response_message.get('tool_calls')

                # ... (the tool handling is nearly the same; see the note on Ollama tool arguments below) ...

That’s it! By changing just a few lines of code, you’ve swapped a powerful cloud-based LLM for a private, local one, all while your MCP server remains untouched.
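
One caveat: the Ollama Python library returns tool-call arguments as an already-parsed dict rather than a JSON string, so the json.loads step from the OpenAI version is not needed. Below is a minimal sketch of the adapted inner loop, assuming ollama 0.4+ (attribute-style access to the response) and a tool-capable model such as llama3.1; exact field names can vary between library versions.

# Sketch of the tool-handling loop adapted for the ollama library (illustrative)
response = await llm_client.chat(
    model="llama3.1",  # any tool-capable model you have pulled locally
    messages=messages,
    tools=openai_tools,
)
response_message = response.message
tool_calls = response_message.tool_calls

if tool_calls:
    messages.append(response_message)  # add the assistant turn to history
    for tool_call in tool_calls:
        tool_name = tool_call.function.name
        tool_args = dict(tool_call.function.arguments)  # already a dict; no json.loads needed

        tool_result = await session.call_tool(tool_name, tool_args)
        tool_output_text = "".join(c.text for c in tool_result.content if c.type == "text")

        # Depending on your ollama version, this field may be "name" or "tool_name"
        messages.append({"role": "tool", "name": tool_name, "content": tool_output_text})

    second_response = await llm_client.chat(model="llama3.1", messages=messages)
    print(f"💬 {second_response.message.content}")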

Don’t Want to Build? Use a Community Client

If building a client from scratch isn’t for you, the MCP ecosystem is growing rapidly. You can find community-built clients in our directory, such as the Any Chat Completions MCP client. These range from full-featured IDE integrations like VS Code to terminal tools and no-code platforms, and many let you connect to any LLM provider.

Want to see MCP working with Claude for comparison? Check out our guide: Getting Started with MCP: Your First Integration.

Conclusion

MCP’s true strength is its role as a universal standard. It empowers you, the developer, to build reusable, powerful tools and data sources without tying yourself to a single AI provider. By decoupling your servers from the LLM, you gain the freedom to choose the best model for the job, whether it’s for performance, cost, privacy, or features.

So next time you build an MCP server, remember that you’re not just building for one AI - you’re building for all of them.
