So far, you've seen how to create a server and a client. The client has been able to call the server explicitly to list its tools, resources, and prompts. However, that's not a very practical approach. Your users live in the agentic era and expect to use prompts and communicate with an LLM to do so. They don't care whether you use MCP to store your capabilities, but they do expect to interact using natural language. So how do we solve this? The solution is to add an LLM to the client.
In this lesson, we focus on adding an LLM to your client and show how this provides a much better experience for your users.
By the end of this lesson, you will be able to:
- Create a client with an LLM.
- Seamlessly interact with an MCP server using an LLM.
- Provide a better end user experience on the client side.
Let's try to understand the approach we need to take. Adding an LLM sounds simple, but how will we actually do this?
Here's how the client will interact with the server:
- Establish a connection with the server.
- List the server's capabilities (prompts, resources, and tools) and save their schemas.
- Add an LLM and pass the saved capabilities and their schemas in a format the LLM understands.
- Handle a user prompt by passing it to the LLM together with the tools listed by the client.
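To make these steps concrete before we dive in, here's a rough TypeScript sketch of the flow. The helper `openAiToolAdapter` and the model name are placeholders that we'll build out during the exercise, so treat this as pseudocode rather than finished client code:

```typescript
// 1. Establish a connection with the server
await mcpClient.connect(transport);

// 2. List tools and keep their schemas
const { tools } = await mcpClient.listTools();

// 3. Convert the saved schemas into the LLM's tool format
const llmTools = tools.map(openAiToolAdapter);

// 4. Hand the user prompt plus the tools to the LLM
const response = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "What is the sum of 2 and 3?" }],
  tools: llmTools,
});

// 5. If the LLM asked for a tool, call it on the MCP server
for (const call of response.choices[0].message.tool_calls ?? []) {
  await mcpClient.callTool({
    name: call.function.name,
    arguments: JSON.parse(call.function.arguments),
  });
}
```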
Great, now that we understand the approach at a high level, let's try it out in the exercise below.
In this exercise, we will learn to add an LLM to our client.
Creating a GitHub token is a straightforward process. Here’s how you can do it:
- Go to GitHub Settings – Click on your profile picture in the top right corner and select Settings.
- Navigate to Developer Settings – Scroll down and click on Developer Settings.
- Select Personal Access Tokens – Click on Personal access tokens and then Generate new token.
- Configure Your Token – Add a note for reference, set an expiration date, and select the necessary scopes (permissions).
- Generate and Copy the Token – Click Generate token, and make sure to copy it immediately, as you won’t be able to see it again.
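The client examples below read this token from the `GITHUB_TOKEN` environment variable. As a small, optional sanity check (not part of the exercise code), you can fail fast if it isn't set:

```typescript
// Optional: verify the GitHub Models token is available before creating the LLM client
if (!process.env.GITHUB_TOKEN) {
  throw new Error("GITHUB_TOKEN is not set; export your personal access token first.");
}
```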
Let's create our client first:
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
import { Transport } from "@modelcontextprotocol/sdk/shared/transport.js";
import OpenAI from "openai";
import { z } from "zod"; // Import zod for schema validation
class MCPClient {
private openai: OpenAI;
private client: Client;
constructor(){
this.openai = new OpenAI({
baseURL: "https://models.inference.ai.azure.com",
apiKey: process.env.GITHUB_TOKEN,
});
this.client = new Client(
{
name: "example-client",
version: "1.0.0"
},
{
capabilities: {
prompts: {},
resources: {},
tools: {}
}
}
);
}
}
In the preceding code we've:
- Imported the needed libraries
- Created a class with two members, `client` and `openai`, that will help us manage an MCP client and interact with an LLM respectively.
- Configured our LLM instance to use GitHub Models by setting `baseURL` to point to the inference API.
from mcp import ClientSession, StdioServerParameters, types
from mcp.client.stdio import stdio_client
# Create server parameters for stdio connection
server_params = StdioServerParameters(
command="mcp", # Executable
args=["run", "server.py"], # Optional command line arguments
env=None, # Optional environment variables
)
async def run():
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(
            read, write
        ) as session:
            # Initialize the connection
            await session.initialize()

if __name__ == "__main__":
    import asyncio
    asyncio.run(run())
In the preceding code we've:
- Imported the needed libraries for MCP
- Created a client
using Azure;
using Azure.AI.Inference;
using Azure.Identity;
using System.Text.Json;
using ModelContextProtocol.Client;
using ModelContextProtocol.Protocol.Transport;
using System.Text.Json;
var clientTransport = new StdioClientTransport(new()
{
Name = "Demo Server",
Command = "/workspaces/mcp-for-beginners/03-GettingStarted/02-client/solution/server/bin/Debug/net8.0/server",
Arguments = [],
});
await using var mcpClient = await McpClientFactory.CreateAsync(clientTransport);
First, you'll need to add the LangChain4j dependencies to your `pom.xml` file. Add these dependencies to enable MCP integration and GitHub Models support:
<properties>
<langchain4j.version>1.0.0-beta3</langchain4j.version>
</properties>
<dependencies>
<!-- LangChain4j MCP Integration -->
<dependency>
<groupId>dev.langchain4j</groupId>
<artifactId>langchain4j-mcp</artifactId>
<version>${langchain4j.version}</version>
</dependency>
<!-- OpenAI Official API Client -->
<dependency>
<groupId>dev.langchain4j</groupId>
<artifactId>langchain4j-open-ai-official</artifactId>
<version>${langchain4j.version}</version>
</dependency>
<!-- GitHub Models Support -->
<dependency>
<groupId>dev.langchain4j</groupId>
<artifactId>langchain4j-github-models</artifactId>
<version>${langchain4j.version}</version>
</dependency>
<!-- Spring Boot Starter (optional, for production apps) -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
</dependencies>
Then create your Java client class:
import dev.langchain4j.mcp.McpToolProvider;
import dev.langchain4j.mcp.client.DefaultMcpClient;
import dev.langchain4j.mcp.client.McpClient;
import dev.langchain4j.mcp.client.transport.McpTransport;
import dev.langchain4j.mcp.client.transport.http.HttpMcpTransport;
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.openaiofficial.OpenAiOfficialChatModel;
import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.tool.ToolProvider;
import java.time.Duration;
import java.util.List;
public class LangChain4jClient {
public static void main(String[] args) throws Exception {
// Configure the LLM to use GitHub Models
ChatLanguageModel model = OpenAiOfficialChatModel.builder()
.isGitHubModels(true)
.apiKey(System.getenv("GITHUB_TOKEN"))
.timeout(Duration.ofSeconds(60))
.modelName("gpt-4.1-nano")
.build();
// Create MCP transport for connecting to server
McpTransport transport = new HttpMcpTransport.Builder()
.sseUrl("http://localhost:8080/sse")
.timeout(Duration.ofSeconds(60))
.logRequests(true)
.logResponses(true)
.build();
// Create MCP client
McpClient mcpClient = new DefaultMcpClient.Builder()
.transport(transport)
.build();
}
}
In the preceding code we've:
- Added LangChain4j dependencies: Required for MCP integration, OpenAI official client, and GitHub Models support
- Imported the LangChain4j libraries: For MCP integration and OpenAI chat model functionality
- Created a `ChatLanguageModel`: Configured to use GitHub Models with your GitHub token
- Set up HTTP transport: Using Server-Sent Events (SSE) to connect to the MCP server
- Created an MCP client: That will handle communication with the server
- Used LangChain4j's built-in MCP support: Which simplifies integration between LLMs and MCP servers
Great, for our next step, let's list the capabilities on the server.
Now we will connect to the server and ask for its capabilities:
In the same class, add the following methods:
async connectToServer(transport: Transport) {
await this.client.connect(transport);
this.run();
console.error("MCPClient started on stdin/stdout");
}
async run() {
console.log("Asking server for available tools");
// listing tools
const toolsResult = await this.client.listTools();
}
In the preceding code we've:
- Added code for connecting to the server, `connectToServer`.
- Created a `run` method responsible for handling our app flow. So far it only lists the tools, but we will add more to it shortly.
# List available resources
resources = await session.list_resources()
print("LISTING RESOURCES")
for resource in resources:
    print("Resource: ", resource)

# List available tools
tools = await session.list_tools()
print("LISTING TOOLS")
for tool in tools.tools:
    print("Tool: ", tool.name)
    print("Tool", tool.inputSchema["properties"])
Here's what we added:
- Listed the available resources and tools and printed them. For tools, we also printed the `inputSchema`, which we use later.
async Task<List<ChatCompletionsToolDefinition>> GetMcpTools()
{
Console.WriteLine("Listing tools");
var tools = await mcpClient.ListToolsAsync();
List<ChatCompletionsToolDefinition> toolDefinitions = new List<ChatCompletionsToolDefinition>();
foreach (var tool in tools)
{
Console.WriteLine($"Connected to server with tools: {tool.Name}");
Console.WriteLine($"Tool description: {tool.Description}");
Console.WriteLine($"Tool parameters: {tool.JsonSchema}");
// TODO: convert tool definition from MCP tool to LLM tool
}
return toolDefinitions;
}
In the preceding code we've:
- Listed the tools available on the MCP Server
- For each tool, listed its name, description, and schema. The latter is something we will use to call the tools shortly.
// Create a tool provider that automatically discovers MCP tools
ToolProvider toolProvider = McpToolProvider.builder()
.mcpClients(List.of(mcpClient))
.build();
// The MCP tool provider automatically handles:
// - Listing available tools from the MCP server
// - Converting MCP tool schemas to LangChain4j format
// - Managing tool execution and responses
In the preceding code we've:
- Created a `McpToolProvider` that automatically discovers and registers all tools from the MCP server
- The tool provider handles the conversion between MCP tool schemas and LangChain4j's tool format internally
- This approach abstracts away the manual tool listing and conversion process
Next step after listing server capabilities is to convert them into a format that the LLM understands. Once we do that, we can provide these capabilities as tools to our LLM.
- Add the following code to convert the response from the MCP server into a tool-definition format the LLM can use:

  ```typescript
  openAiToolAdapter(tool: {
    name: string;
    description?: string;
    input_schema: any;
  }) {
    // Create a zod schema based on the input_schema
    const schema = z.object(tool.input_schema);

    return {
      type: "function" as const, // Explicitly set type to "function"
      function: {
        name: tool.name,
        description: tool.description,
        parameters: {
          type: "object",
          properties: tool.input_schema.properties,
          required: tool.input_schema.required,
        },
      },
    };
  }
  ```

  The code above takes a response from the MCP server and converts it into a tool definition the LLM can understand.

- Next, let's update the `run` method to list server capabilities:

  ```typescript
  async run() {
    console.log("Asking server for available tools");
    const toolsResult = await this.client.listTools();
    const tools = toolsResult.tools.map((tool) => {
      return this.openAiToolAdapter({
        name: tool.name,
        description: tool.description,
        input_schema: tool.inputSchema,
      });
    });
  }
  ```

  In the preceding code, we've updated the `run` method to map over the result and call `openAiToolAdapter` for each entry.
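To make the conversion concrete, this is roughly what `openAiToolAdapter` would produce for a hypothetical `add` tool whose input schema has two numeric properties (the tool name and schema here are invented for illustration):

```typescript
// Hypothetical adapter output for an "add" tool exposed by the server
const addToolForLlm = {
  type: "function" as const,
  function: {
    name: "add",
    description: "Adds two numbers",
    parameters: {
      type: "object",
      properties: {
        a: { type: "number" },
        b: { type: "number" },
      },
      required: ["a", "b"],
    },
  },
};
```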
- First, let's create the following converter function:

  ```python
  def convert_to_llm_tool(tool):
      tool_schema = {
          "type": "function",
          "function": {
              "name": tool.name,
              "description": tool.description,
              "parameters": {
                  "type": "object",
                  "properties": tool.inputSchema["properties"]
              }
          }
      }

      return tool_schema
  ```

  In the function above, `convert_to_llm_tool`, we take an MCP tool response and convert it to a format that the LLM can understand.

- Next, let's update our client code to leverage this function like so:

  ```python
  for tool in tools.tools:
      print("Tool: ", tool.name)
      print("Tool", tool.inputSchema["properties"])
      functions.append(convert_to_llm_tool(tool))
  ```

  Here, we're adding a call to `convert_to_llm_tool` to convert the MCP tool response to something we can feed to the LLM later.
- Let's add code to convert the MCP tool response to something the LLM can understand
ChatCompletionsToolDefinition ConvertFrom(string name, string description, JsonElement jsonElement)
{
// convert the tool to a function definition
FunctionDefinition functionDefinition = new FunctionDefinition(name)
{
Description = description,
Parameters = BinaryData.FromObjectAsJson(new
{
Type = "object",
Properties = jsonElement
},
new JsonSerializerOptions() { PropertyNamingPolicy = JsonNamingPolicy.CamelCase })
};
// create a tool definition
ChatCompletionsToolDefinition toolDefinition = new ChatCompletionsToolDefinition(functionDefinition);
return toolDefinition;
}
In the preceding code we've:
- Created a function `ConvertFrom` that takes a name, description, and input schema.
- Defined functionality that creates a `FunctionDefinition`, which is then wrapped in a `ChatCompletionsToolDefinition`. The latter is something the LLM can understand.
- Let's see how we can update some existing code to take advantage of the function above:

  ```csharp
  async Task<List<ChatCompletionsToolDefinition>> GetMcpTools()
  {
      Console.WriteLine("Listing tools");
      var tools = await mcpClient.ListToolsAsync();
      List<ChatCompletionsToolDefinition> toolDefinitions = new List<ChatCompletionsToolDefinition>();

      foreach (var tool in tools)
      {
          Console.WriteLine($"Connected to server with tools: {tool.Name}");
          Console.WriteLine($"Tool description: {tool.Description}");
          Console.WriteLine($"Tool parameters: {tool.JsonSchema}");

          JsonElement propertiesElement;
          tool.JsonSchema.TryGetProperty("properties", out propertiesElement);

          var def = ConvertFrom(tool.Name, tool.Description, propertiesElement);
          Console.WriteLine($"Tool definition: {def}");
          toolDefinitions.Add(def);

          Console.WriteLine($"Properties: {propertiesElement}");
      }

      return toolDefinitions;
  }
  ```

  In the preceding code, we've updated the function to convert the MCP tool response to an LLM tool. Let's highlight the code we added:

  ```csharp
  JsonElement propertiesElement;
  tool.JsonSchema.TryGetProperty("properties", out propertiesElement);

  var def = ConvertFrom(tool.Name, tool.Description, propertiesElement);
  Console.WriteLine($"Tool definition: {def}");
  toolDefinitions.Add(def);
  ```

  The input schema is part of the tool response, but it lives on the "properties" attribute, so we need to extract it. We then call `ConvertFrom` with the tool details.

Now that we've done the heavy lifting, let's see how it all comes together as we handle a user prompt next.
// Create a Bot interface for natural language interaction
public interface Bot {
String chat(String prompt);
}
// Configure the AI service with LLM and MCP tools
Bot bot = AiServices.builder(Bot.class)
.chatLanguageModel(model)
.toolProvider(toolProvider)
.build();
In the preceding code we've:
- Defined a simple `Bot` interface for natural language interactions
- Used LangChain4j's `AiServices` to automatically bind the LLM with the MCP tool provider
- The framework automatically handles tool schema conversion and function calling behind the scenes
- This approach eliminates manual tool conversion: LangChain4j handles all the complexity of converting MCP tools to an LLM-compatible format
Great, we're now set up to handle user requests, so let's tackle that next.
In this part of the code, we will handle user requests.
- Add a method that will be used to call the tools that the LLM requests:

  ```typescript
  async callTools(
    tool_calls: OpenAI.Chat.Completions.ChatCompletionMessageToolCall[],
    toolResults: any[]
  ) {
    for (const tool_call of tool_calls) {
      const toolName = tool_call.function.name;
      const args = tool_call.function.arguments;

      console.log(`Calling tool ${toolName} with args ${JSON.stringify(args)}`);

      // 2. Call the server's tool
      const toolResult = await this.client.callTool({
        name: toolName,
        arguments: JSON.parse(args),
      });

      console.log("Tool result: ", toolResult);

      // 3. Do something with the result
      // TODO
    }
  }
  ```

  In the preceding code we:

  - Added a method `callTools`.
  - The method takes the tool calls from an LLM response and checks which tools should be called, if any:

    ```typescript
    for (const tool_call of tool_calls) {
      const toolName = tool_call.function.name;
      const args = tool_call.function.arguments;

      console.log(`Calling tool ${toolName} with args ${JSON.stringify(args)}`);
      // call tool
    }
    ```

  - Calls a tool, if the LLM indicates it should be called:

    ```typescript
    // 2. Call the server's tool
    const toolResult = await this.client.callTool({
      name: toolName,
      arguments: JSON.parse(args),
    });

    console.log("Tool result: ", toolResult);

    // 3. Do something with the result
    // TODO
    ```

- Update the `run` method to include calls to the LLM and to `callTools`:

  ```typescript
  // 1. Create messages that are the input for the LLM
  const prompt = "What is the sum of 2 and 3?"

  const messages: OpenAI.Chat.Completions.ChatCompletionMessageParam[] = [
    {
      role: "user",
      content: prompt,
    },
  ];

  console.log("Querying LLM: ", messages[0].content);

  // 2. Call the LLM
  let response = this.openai.chat.completions.create({
    model: "gpt-4o-mini",
    max_tokens: 1000,
    messages,
    tools: tools,
  });

  let results: any[] = [];

  // 3. Go through the LLM response; for each choice, check if it has tool calls
  (await response).choices.map(async (choice: { message: any; }) => {
    const message = choice.message;
    if (message.tool_calls) {
      console.log("Making tool call")
      await this.callTools(message.tool_calls, results);
    }
  });
  ```
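The `callTools` method above still has a TODO where the tool result should be used. A common pattern, shown here only as a sketch (it assumes `messages`, the assistant `message` containing the tool calls, and the current `tool_call` are in scope, and it isn't part of the exercise solution), is to append the result as a `tool` message and query the LLM again for a final answer:

```typescript
// Sketch: feed the tool result back to the LLM so it can phrase a final answer
messages.push(message); // the assistant message that requested the tool calls
messages.push({
  role: "tool",
  tool_call_id: tool_call.id,
  content: JSON.stringify(toolResult.content), // serialize the MCP tool result
});

const followUp = await this.openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages,
});

console.log("Final answer: ", followUp.choices[0].message.content);
```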
Great, let's list the code in full:
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
import { Transport } from "@modelcontextprotocol/sdk/shared/transport.js";
import OpenAI from "openai";
import { z } from "zod"; // Import zod for schema validation
class MyClient {
private openai: OpenAI;
private client: Client;
constructor(){
this.openai = new OpenAI({
baseURL: "https://models.inference.ai.azure.com", // might need to change to this url in the future: https://models.github.ai/inference
apiKey: process.env.GITHUB_TOKEN,
});
this.client = new Client(
{
name: "example-client",
version: "1.0.0"
},
{
capabilities: {
prompts: {},
resources: {},
tools: {}
}
}
);
}
async connectToServer(transport: Transport) {
await this.client.connect(transport);
this.run();
console.error("MCPClient started on stdin/stdout");
}
openAiToolAdapter(tool: {
name: string;
description?: string;
input_schema: any;
}) {
// Create a zod schema based on the input_schema
const schema = z.object(tool.input_schema);
return {
type: "function" as const, // Explicitly set type to "function"
function: {
name: tool.name,
description: tool.description,
parameters: {
type: "object",
properties: tool.input_schema.properties,
required: tool.input_schema.required,
},
},
};
}
async callTools(
tool_calls: OpenAI.Chat.Completions.ChatCompletionMessageToolCall[],
toolResults: any[]
) {
for (const tool_call of tool_calls) {
const toolName = tool_call.function.name;
const args = tool_call.function.arguments;
console.log(`Calling tool ${toolName} with args ${JSON.stringify(args)}`);
// 2. Call the server's tool
const toolResult = await this.client.callTool({
name: toolName,
arguments: JSON.parse(args),
});
console.log("Tool result: ", toolResult);
// 3. Do something with the result
// TODO
}
}
async run() {
console.log("Asking server for available tools");
const toolsResult = await this.client.listTools();
const tools = toolsResult.tools.map((tool) => {
return this.openAiToolAdapter({
name: tool.name,
description: tool.description,
input_schema: tool.inputSchema,
});
});
const prompt = "What is the sum of 2 and 3?";
const messages: OpenAI.Chat.Completions.ChatCompletionMessageParam[] = [
{
role: "user",
content: prompt,
},
];
console.log("Querying LLM: ", messages[0].content);
let response = this.openai.chat.completions.create({
model: "gpt-4o-mini",
max_tokens: 1000,
messages,
tools: tools,
});
let results: any[] = [];
// 1. Go through the LLM response,for each choice, check if it has tool calls
(await response).choices.map(async (choice: { message: any; }) => {
const message = choice.message;
if (message.tool_calls) {
console.log("Making tool call")
await this.callTools(message.tool_calls, results);
}
});
}
}
let client = new MyClient();
const transport = new StdioClientTransport({
command: "node",
args: ["./build/index.js"]
});
client.connectToServer(transport);
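Note that the prompt in `run` is hard-coded for this exercise. If you want to experiment with other prompts without editing the code, one simple, optional tweak is to read it from the command line:

```typescript
// Optional tweak: take the prompt from the command line, falling back to the default
const prompt = process.argv[2] ?? "What is the sum of 2 and 3?";
```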
- Let's add some imports needed to call an LLM:

  ```python
  # llm
  import os
  from azure.ai.inference import ChatCompletionsClient
  from azure.ai.inference.models import SystemMessage, UserMessage
  from azure.core.credentials import AzureKeyCredential
  import json
  ```

- Next, let's add the function that will call the LLM:

  ```python
  # llm
  def call_llm(prompt, functions):
      token = os.environ["GITHUB_TOKEN"]
      endpoint = "https://models.inference.ai.azure.com"

      model_name = "gpt-4o"

      client = ChatCompletionsClient(
          endpoint=endpoint,
          credential=AzureKeyCredential(token),
      )

      print("CALLING LLM")
      response = client.complete(
          messages=[
              {
                  "role": "system",
                  "content": "You are a helpful assistant.",
              },
              {
                  "role": "user",
                  "content": prompt,
              },
          ],
          model=model_name,
          tools=functions,
          # Optional parameters
          temperature=1.,
          max_tokens=1000,
          top_p=1.
      )

      response_message = response.choices[0].message

      functions_to_call = []

      if response_message.tool_calls:
          for tool_call in response_message.tool_calls:
              print("TOOL: ", tool_call)
              name = tool_call.function.name
              args = json.loads(tool_call.function.arguments)
              functions_to_call.append({ "name": name, "args": args })

      return functions_to_call
  ```
In the preceding code we've:
- Passed our functions, that we found on the MCP server and converted, to the LLM.
- Then we called the LLM with said functions.
- Then, we're inspecting the result to see what functions we should call, if any.
- Finally, we return an array of functions to call.
- As a final step, let's update our main code:

  ```python
  prompt = "Add 2 to 20"

  # ask LLM what tools to call, if any
  functions_to_call = call_llm(prompt, functions)

  # call suggested functions
  for f in functions_to_call:
      result = await session.call_tool(f["name"], arguments=f["args"])
      print("TOOLS result: ", result.content)
  ```

  That was the final step. In the code above we're:

  - Calling an MCP tool via `call_tool` using a function that the LLM thought we should call based on our prompt.
  - Printing the result of the tool call to the MCP server.
- Let's show some code for making an LLM prompt request:

  ```csharp
  var tools = await GetMcpTools();

  for (int i = 0; i < tools.Count; i++)
  {
      var tool = tools[i];
      Console.WriteLine($"MCP Tools def: {i}: {tool}");
  }

  // 1. Define the chat history and the user message
  var userMessage = "add 2 and 4";

  chatHistory.Add(new ChatRequestUserMessage(userMessage));

  // 2. Define options, including the tools
  var options = new ChatCompletionsOptions(chatHistory)
  {
      Model = "gpt-4o-mini",
      Tools = { tools[0] }
  };

  // 3. Call the model
  ChatCompletions? response = await client.CompleteAsync(options);
  var content = response.Content;
  ```

  In the preceding code we've:

  - Fetched tools from the MCP server, `var tools = await GetMcpTools()`.
  - Defined a user prompt, `userMessage`.
  - Constructed an options object specifying the model and tools.
  - Made a request to the LLM.
- One last step, let's see if the LLM thinks we should call a function:

  ```csharp
  // 4. Check if the response contains a function call
  ChatCompletionsToolCall? calls = response.ToolCalls.FirstOrDefault();
  for (int i = 0; i < response.ToolCalls.Count; i++)
  {
      var call = response.ToolCalls[i];
      Console.WriteLine($"Tool call {i}: {call.Name} with arguments {call.Arguments}");
      // Tool call 0: add with arguments {"a":2,"b":4}

      var dict = JsonSerializer.Deserialize<Dictionary<string, object>>(call.Arguments);

      var result = await mcpClient.CallToolAsync(
          call.Name,
          dict!,
          cancellationToken: CancellationToken.None
      );

      Console.WriteLine(result.Content.First(c => c.Type == "text").Text);
  }
  ```
In the preceding code we've:
- Looped through a list of function calls.
- For each tool call, parsed out the name and arguments and called the tool on the MCP server using the MCP client. Finally, we printed the results.
Here's the code in full:
using Azure;
using Azure.AI.Inference;
using Azure.Identity;
using System.Text.Json;
using ModelContextProtocol.Client;
using ModelContextProtocol.Protocol.Transport;
using System.Text.Json;
var endpoint = "https://models.inference.ai.azure.com";
var token = Environment.GetEnvironmentVariable("GITHUB_TOKEN"); // Your GitHub Access Token
var client = new ChatCompletionsClient(new Uri(endpoint), new AzureKeyCredential(token));
var chatHistory = new List<ChatRequestMessage>
{
new ChatRequestSystemMessage("You are a helpful assistant that knows about AI")
};
var clientTransport = new StdioClientTransport(new()
{
Name = "Demo Server",
Command = "/workspaces/mcp-for-beginners/03-GettingStarted/02-client/solution/server/bin/Debug/net8.0/server",
Arguments = [],
});
Console.WriteLine("Setting up stdio transport");
await using var mcpClient = await McpClientFactory.CreateAsync(clientTransport);
ChatCompletionsToolDefinition ConvertFrom(string name, string description, JsonElement jsonElement)
{
// convert the tool to a function definition
FunctionDefinition functionDefinition = new FunctionDefinition(name)
{
Description = description,
Parameters = BinaryData.FromObjectAsJson(new
{
Type = "object",
Properties = jsonElement
},
new JsonSerializerOptions() { PropertyNamingPolicy = JsonNamingPolicy.CamelCase })
};
// create a tool definition
ChatCompletionsToolDefinition toolDefinition = new ChatCompletionsToolDefinition(functionDefinition);
return toolDefinition;
}
async Task<List<ChatCompletionsToolDefinition>> GetMcpTools()
{
Console.WriteLine("Listing tools");
var tools = await mcpClient.ListToolsAsync();
List<ChatCompletionsToolDefinition> toolDefinitions = new List<ChatCompletionsToolDefinition>();
foreach (var tool in tools)
{
Console.WriteLine($"Connected to server with tools: {tool.Name}");
Console.WriteLine($"Tool description: {tool.Description}");
Console.WriteLine($"Tool parameters: {tool.JsonSchema}");
JsonElement propertiesElement;
tool.JsonSchema.TryGetProperty("properties", out propertiesElement);
var def = ConvertFrom(tool.Name, tool.Description, propertiesElement);
Console.WriteLine($"Tool definition: {def}");
toolDefinitions.Add(def);
Console.WriteLine($"Properties: {propertiesElement}");
}
return toolDefinitions;
}
// 1. List tools on mcp server
var tools = await GetMcpTools();
for (int i = 0; i < tools.Count; i++)
{
var tool = tools[i];
Console.WriteLine($"MCP Tools def: {i}: {tool}");
}
// 2. Define the chat history and the user message
var userMessage = "add 2 and 4";
chatHistory.Add(new ChatRequestUserMessage(userMessage));
// 3. Define options, including the tools
var options = new ChatCompletionsOptions(chatHistory)
{
Model = "gpt-4o-mini",
Tools = { tools[0] }
};
// 4. Call the model
ChatCompletions? response = await client.CompleteAsync(options);
var content = response.Content;
// 5. Check if the response contains a function call
ChatCompletionsToolCall? calls = response.ToolCalls.FirstOrDefault();
for (int i = 0; i < response.ToolCalls.Count; i++)
{
var call = response.ToolCalls[i];
Console.WriteLine($"Tool call {i}: {call.Name} with arguments {call.Arguments}");
//Tool call 0: add with arguments {"a":2,"b":4}
var dict = JsonSerializer.Deserialize<Dictionary<string, object>>(call.Arguments);
var result = await mcpClient.CallToolAsync(
call.Name,
dict!,
cancellationToken: CancellationToken.None
);
Console.WriteLine(result.Content.First(c => c.Type == "text").Text);
}
// 6. Print the generic response
Console.WriteLine($"Assistant response: {content}");
try {
// Execute natural language requests that automatically use MCP tools
String response = bot.chat("Calculate the sum of 24.5 and 17.3 using the calculator service");
System.out.println(response);
response = bot.chat("What's the square root of 144?");
System.out.println(response);
response = bot.chat("Show me the help for the calculator service");
System.out.println(response);
} finally {
mcpClient.close();
}
In the preceding code we've:
- Used simple natural language prompts to interact with the MCP server tools
- The LangChain4j framework automatically handles:
  - Converting user prompts to tool calls when needed
  - Calling the appropriate MCP tools based on the LLM's decision
  - Managing the conversation flow between the LLM and MCP server
- The `bot.chat()` method returns natural language responses that may include results from MCP tool executions
- This approach provides a seamless user experience where users don't need to know about the underlying MCP implementation
Complete code example:
public class LangChain4jClient {
public static void main(String[] args) throws Exception {
ChatLanguageModel model = OpenAiOfficialChatModel.builder()
.isGitHubModels(true)
.apiKey(System.getenv("GITHUB_TOKEN"))
.timeout(Duration.ofSeconds(60))
.modelName("gpt-4.1-nano")
.build();
McpTransport transport = new HttpMcpTransport.Builder()
.sseUrl("http://localhost:8080/sse")
.timeout(Duration.ofSeconds(60))
.logRequests(true)
.logResponses(true)
.build();
McpClient mcpClient = new DefaultMcpClient.Builder()
.transport(transport)
.build();
ToolProvider toolProvider = McpToolProvider.builder()
.mcpClients(List.of(mcpClient))
.build();
Bot bot = AiServices.builder(Bot.class)
.chatLanguageModel(model)
.toolProvider(toolProvider)
.build();
try {
String response = bot.chat("Calculate the sum of 24.5 and 17.3 using the calculator service");
System.out.println(response);
response = bot.chat("What's the square root of 144?");
System.out.println(response);
response = bot.chat("Show me the help for the calculator service");
System.out.println(response);
} finally {
mcpClient.close();
}
}
}
Great, you did it!
Take the code from the exercise and build out the server with some more tools. Then create a client with an LLM, like in the exercise, and test it out with different prompts to make sure all your server tools get called dynamically. Building a client this way means the end user has a great experience: they can use natural-language prompts instead of exact client commands, and they never need to know that an MCP server is being called.
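For example, assuming your server is built with `McpServer` from the TypeScript SDK as in the earlier lessons, an extra tool for the assignment could look roughly like this sketch (the tool name and logic are placeholders):

```typescript
// Hypothetical extra tool to register on your server for the assignment
server.tool(
  "multiply",
  { a: z.number(), b: z.number() },
  async ({ a, b }) => ({
    content: [{ type: "text", text: String(a * b) }],
  })
);
```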
- Adding an LLM to your client provides a better way for users to interact with MCP Servers.
- You need to convert the MCP Server response to something the LLM can understand.