Tools are a fundamental part of agentic AI.
While language models excel at understanding and generating text, tools extend their abilities by letting them interact with the real world: searching the web for current information, executing code for calculations, accessing databases, reading files, or connecting to external services through APIs. Think of tools as the hands and eyes of an AI agent. They transform a conversational system into an agent that can accomplish tasks by bridging the gap between reasoning and action. When an agent needs to check the weather, analyze a spreadsheet, or send an email, it invokes the appropriate tool, receives the result, and incorporates that information into its response. This moves AI beyond pure text generation toward practical, real-world problem solving.
Interested in how agents retain and use context over time? Explore our deep dive on agent memory.
Building AI agents that can actually do things locally has been surprisingly hard.
Until now, most agentic frameworks forced a choice: powerful cloud-based agents with latency and privacy concerns, or limited local models without proper tool support. Today, that changes.
With LM-Kit's new tool calling capabilities, your local agents can:
- **Ground answers in real data.** No more hallucinated weather forecasts or exchange rates. Agents fetch actual API responses and can cite sources.
- **Chain complex workflows.** For example: check the weather, convert temperature to the user's preferred units, then suggest activities. All in one conversational turn.
- **Maintain full privacy.** Everything runs on-device. Your users' queries, tool arguments, and results never leave their machines.
- **Stay deterministic and safe.** Typed schemas, validated inputs, policy controls, and approval hooks prevent agents from going rogue.
- **Scale with your domain.** Add business APIs, internal databases, or external MCP catalogs as tools. The model learns to use them from descriptions and schemas alone.
Tools are defined by implementing the `ITool` interface or by annotating methods with `[LMFunction]`.
Here's a complete working example in under 20 lines:
```csharp
using LMKit.Model;
using LMKit.TextGeneration;
using LMKit.Agents.Tools;
using System.Text.Json;

// 1) Load a local model from the catalog
var model = LM.LoadFromModelID("gptoss:20b"); // OpenAI GPT-OSS 20B

// Optional: confirm tool-calling capability
if (!model.HasToolCalls) { /* choose a different model or fallback */ }

// 2) Create a multi-turn conversation
using var chat = new MultiTurnConversation(model);

// 3) Register tools (see three options below)
chat.Tools.Register(new WeatherTool());

// 4) Shape the behavior per turn
chat.ToolPolicy.Choice = ToolChoice.Auto; // let the model decide
chat.ToolPolicy.MaxCallsPerTurn = 3;      // guard against loops

// 5) Ask a question
var reply = chat.Submit("Plan my weekend and check the weather in Toulouse.");
Console.WriteLine(reply.Content);
```
The model catalog includes GPT-OSS and many other families. `LM.LoadFromModelID` lets you pull a named card like `gptoss:20b`. You can also check `HasToolCalls` before you rely on tools. See the Model Catalog documentation for details.
A production-ready console sample demonstrates multi-turn chat with tool calling (currency, weather, unit conversion), per-turn policies, progress feedback, and special commands. Jump to:
Create Multi-Turn Chatbot with Tools in .NET Applications
Option 1: implement `ITool`. Best when you need clear contracts and custom validation.
This snippet demonstrates implementing the `ITool` interface so an LLM can call your tool directly. It declares the tool contract (`Name`, `Description`, `InputSchema`), parses JSON args, runs your logic, and returns structured JSON to the model.
```csharp
public sealed class WeatherTool : ITool
{
    public string Name => "get_weather";

    public string Description =>
        "Get current weather for a city. Returns temperature, conditions, and optional hourly forecast.";

    // JSON Schema defines expected arguments
    public string InputSchema => """
    {
      "type": "object",
      "properties": {
        "city": {"type": "string", "description": "City name (e.g., 'Paris' or 'New York')"}
      },
      "required": ["city"]
    }
    """;

    public async Task<string> InvokeAsync(string arguments, CancellationToken ct = default)
    {
        // Parse the model's JSON arguments
        var city = JsonDocument.Parse(arguments).RootElement.GetProperty("city").GetString();

        // Call your weather API
        var weatherData = await FetchWeatherAsync(city);

        // Return structured JSON the model can understand
        var result = new { city, temp_c = weatherData.Temp, conditions = weatherData.Conditions };
        return JsonSerializer.Serialize(result);
    }
}

// Register it
chat.Tools.Register(new WeatherTool());
```
Why use `ITool`? Complete control over validation, async execution, error handling, and result formatting.
Option 2: annotate with `[LMFunction]`. Best for rapid prototyping and simple synchronous tools.
What it does: Add `[LMFunction(name, description)]` to public instance methods. LM-Kit discovers them and exposes each as an `ITool`, generating a JSON schema from method parameters.
How it's wired: Reflect and bind with `LMFunctionToolBinder.FromType<MyDomainTools>()` (or `FromInstance`/`FromAssembly`), then register the resulting tools via `chat.Tools.Register(...)`.
```csharp
public sealed class MyDomainTools
{
    [LMFunction("search_docs", "Search internal documentation by keyword. Returns top 5 matches.")]
    public string SearchDocs(string query)
    {
        var results = _documentIndex.Search(query).Take(5);
        return JsonSerializer.Serialize(new { hits = results });
    }

    [LMFunction("get_user_info", "Retrieve user profile and preferences.")]
    public string GetUserInfo(int userId)
    {
        var user = _database.GetUser(userId);
        return JsonSerializer.Serialize(user);
    }
}

// Automatically scan and register all annotated methods
var tools = LMFunctionToolBinder.FromType<MyDomainTools>();
chat.Tools.Register(tools);
```
Why use `[LMFunction]`? Less boilerplate. The binder generates schemas from parameter types and registers everything in one line.
Option 3: import from an MCP server. Best for connecting to third-party tool ecosystems via the Model Context Protocol.
What it does: Uses `McpClient` to establish a JSON-RPC session with an MCP server, fetch its tool catalog, and adapt those tools so your agent can call them.

How it's wired: Create `new McpClient(uri, httpClient)` (optionally set a bearer token), then call `chat.Tools.Register(mcp, overwrite: false)` to import the catalog; LM-Kit manages `tools/list`, `tools/call`, retries, and session persistence.
```csharp
using LMKit.Mcp.Client;

// Connect to an MCP server
var mcp = new McpClient(
    new Uri("https://mcp.example.com/api"),
    new HttpClient()
);

// Import all available tools from the server
int toolCount = chat.Tools.Register(mcp, overwrite: false);
Console.WriteLine($"Imported {toolCount} tools from MCP server");
```
Why use MCP? Instant access to curated tool catalogs. The server handles `tools/list` and `tools/call` over JSON-RPC; LM-Kit validates schemas locally.
See McpClient documentation.
Choose the right policy for each conversational turn:
One tool, one answer.
```csharp
chat.ToolPolicy.MaxCallsPerTurn = 1;
chat.ToolPolicy.Choice = ToolChoice.Required; // force at least one call
```
Example: "What is the weather in Tokyo?" calls `get_weather` once and answers.
Chain tools sequentially.
```csharp
chat.ToolPolicy.MaxCallsPerTurn = 5;
chat.ToolPolicy.Choice = ToolChoice.Auto;
```
Example: "Convert 75°F to Celsius, then tell me if I need a jacket."
- Calls `convert_temperature(75, "F", "C")` and gets 23.9°C
- Calls `get_weather("current_location")` and gets conditions

Execute multiple tools concurrently.
```csharp
chat.ToolPolicy.AllowParallelCalls = true;
chat.ToolPolicy.MaxCallsPerTurn = 10;
```
Example: "Compare weather in Paris, London, and Berlin."
Calls `get_weather("Paris")`, `get_weather("London")`, and `get_weather("Berlin")` simultaneously.

Only enable this if your tools are idempotent and thread-safe.
Combine chaining and parallelism.
Example: "Check weather in 3 cities, convert all temps to Fahrenheit, and recommend which to visit."
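Combining the two modes only requires allowing parallel calls while keeping a generous per-turn budget. A minimal sketch using the policy options shown earlier; the exact values are illustrative, not recommendations:

```csharp
// Illustrative policy for chained + parallel tool use: the model may
// fan out independent calls (e.g., three get_weather calls) and still
// chain follow-ups (e.g., convert_temperature) within the same turn.
chat.ToolPolicy = new ToolCallPolicy
{
    Choice = ToolChoice.Auto,   // let the model decide which tools to use
    AllowParallelCalls = true,  // only with idempotent, thread-safe tools
    MaxCallsPerTurn = 10        // budget for fan-out plus follow-up calls
};
```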
See the ToolCallPolicy documentation for all options, including `ToolChoice.Specific` and `ForcedToolName`. Defaults are conservative: parallel off, max calls capped.
Configure safe defaults and per-turn limits. See ToolCallPolicy documentation.
```csharp
chat.ToolPolicy = new ToolCallPolicy
{
    Choice = ToolChoice.Auto,      // let model decide
    MaxCallsPerTurn = 3,           // prevent runaway loops
    AllowParallelCalls = false,    // safe default: sequential only

    // Optional: force a specific tool first
    // Choice = ToolChoice.Specific,
    // ForcedToolName = "authenticate_user"
};
```
Review, approve, or block tool execution. Hooks: `BeforeToolInvocation`, `AfterToolInvocation`, `BeforeTokenSampling`, `MemoryRecall`.
```csharp
// Approve tool calls before execution
chat.BeforeToolInvocation += (sender, e) =>
{
    Console.WriteLine($"About to call: {e.ToolCall.Name}");
    Console.WriteLine($"  Arguments: {e.ToolCall.ArgumentsJson}");

    // Block sensitive operations
    if (e.ToolCall.Name == "delete_user" && !UserHasApproved())
    {
        e.Cancel = true;
        Console.WriteLine("  Blocked by policy");
    }
};

// Audit results after execution
chat.AfterToolInvocation += (sender, e) =>
{
    var result = e.ToolCallResult;
    Console.WriteLine($"{result.ToolName} completed");
    Console.WriteLine($"  Status: {result.Type}");
    Console.WriteLine($"  Result: {result.ResultJson}");
    _telemetry.LogToolCall(result); // send to monitoring
};

// Override token sampling in real time
chat.BeforeTokenSampling += (sender, e) =>
{
    if (_needsDeterministicOutput)
        e.Sampling.Temperature = 0.1f;
};

// Control memory injection
chat.MemoryRecall += (sender, e) =>
{
    Console.WriteLine($"Injecting memory: {e.Text.Substring(0, 50)}...");
    // e.Cancel = true; // optionally cancel
};
```
Every call flows through a typed pipeline for reproducibility and clear logs.
- `ToolCall` with stable `Id` and `ArgumentsJson`
- `ToolCallResult` with `ToolCallId`, `ToolName`, `ResultJson`, and `Type` (Success or Error)

Purpose: Demonstrates LM-Kit.NET's agentic tool calling: during a conversation, the model can decide to call one or multiple tools to fetch data or run computations, pass JSON arguments that match each tool's `InputSchema`, and use each tool's JSON result to produce a grounded reply while preserving full multi-turn context. Tools implement `ITool` and are managed by a registry; per-turn behavior is shaped via `ToolChoice`.
Why tools in chatbots? Use `ITool` for domain logic; keep code auditable.

Target audience: Product and platform teams; DevOps and internal tools; B2B apps; educators and demos.
Problem solved: Actionable answers, deterministic conversions/quotes, multi-turn memory, easy extensibility.
Sample app: a console chatbot demonstrating multi-turn chat with tool calling (currency, weather, unit conversion), per-turn policies, and progress feedback.

Key features: special commands `/reset`, `/continue`, `/regenerate`.
| Tool name | Purpose | Online? | Notes |
|----|----|----|----|
| `convert_currency` | ECB rates via Frankfurter (latest or historical) plus optional trend | Yes | No API key; business days; rounding and date support |
| `get_weather` | Open-Meteo current weather plus optional short hourly forecast | Yes | No API key; geocoding plus metric/us/si |
| `convert_units` | Offline conversions (length, mass, temperature, speed, etc.) | No | Temperature is non-linear; can list supported units |
Tools implement `ITool`: `Name`, `Description`, `InputSchema` (JSON Schema), and `InvokeAsync(string json)` returning JSON.
Extend with your own tool:
```csharp
chat.Tools.Register(new MyCustomTool()); // implements ITool
```
Use unique, stable, lowercase `snake_case` names.
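If you want to enforce that naming convention programmatically, a trivial guard could look like the following; `IsValidToolName` is an illustrative helper, not an LM-Kit API:

```csharp
using System;
using System.Text.RegularExpressions;

// Illustrative helper: accept only lowercase snake_case tool names,
// e.g. "get_weather" or "convert_units", rejecting PascalCase and
// leading/trailing underscores.
static bool IsValidToolName(string name) =>
    Regex.IsMatch(name, @"^[a-z][a-z0-9]*(_[a-z0-9]+)*$");

Console.WriteLine(IsValidToolName("get_weather")); // True
Console.WriteLine(IsValidToolName("GetWeather"));  // False
```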
- `/reset` - clear conversation
- `/continue` - continue last assistant message
- `/regenerate` - new answer for last user input

The model decides when to call tools (`ToolChoice.Auto`). You can Require / Forbid / Force a specific tool per turn. A precise `InputSchema` plus a concise `Description` improve argument construction.

Prerequisites: .NET Framework 4.6.2 or .NET 6.0
Download:
```bash
git clone https://github.com/LM-Kit/lm-kit-net-samples.git
cd lm-kit-net-samples/console_net/multi_turn_chat_with_tools
```
Run:
```bash
dotnet build
dotnet run
```
Then pick a model or paste a custom URI. Chat naturally; the assistant will call one or multiple tools as needed. Use `/reset`, `/continue`, `/regenerate` anytime.
Project link: GitHub Repository
```csharp
// Load a capable local model
var model = LM.LoadFromModelID("gptoss:20b");
using var chat = new MultiTurnConversation(model);

// 1) ITool implementation
chat.Tools.Register(new WeatherTool());

// 2) LMFunctionAttribute methods
var tools = LMFunctionToolBinder.FromType<MyDomainTools>();
chat.Tools.Register(tools);

// 3) MCP import
var mcp = new McpClient(new Uri("https://mcp.example/api"), new HttpClient());
chat.Tools.Register(mcp);

// Safety and behavior
chat.ToolPolicy = new ToolCallPolicy
{
    Choice = ToolChoice.Auto,
    MaxCallsPerTurn = 3,
    // AllowParallelCalls = true // enable only if tools are idempotent
};

// Human-in-the-loop
chat.BeforeToolInvocation += (_, e) => { /* approve or cancel */ };
chat.AfterToolInvocation += (_, e) => { /* log results */ };

// Run
var answer = chat.Submit(
    "Find 3 relevant docs for 'safety policy' and summarize.");
Console.WriteLine(answer.Content);
```
- Pick a model that supports tool calls (check `HasToolCalls`).
- Clone the sample:

```bash
git clone https://github.com/LM-Kit/lm-kit-net-samples.git
cd lm-kit-net-samples/console_net/multi_turn_chat_with_tools
```

- Register tools via `ITool`, `[LMFunction]`, or `McpClient`.
- Start with `MaxCallsPerTurn = 1`; scale up to `MaxCallsPerTurn = 10` with approval hooks.

Start building agentic workflows that respect user privacy, run anywhere, and stay under your control.