Function Calling Mechanisms
Introduction
Function calling is the structured interface through which LLMs interact with the external world. The major model providers (OpenAI, Anthropic, Google) all offer native function calling support: the model describes which function it wants to call and with what arguments in a structured (typically JSON) form, and the application executes the call and returns the result.
OpenAI Function Calling
Basic Workflow
User message → LLM decides whether to call a function → Outputs function name and arguments (JSON) → Application executes function → Result returned to LLM → LLM generates final answer
Tool Definition
Tools are described using JSON Schema:
```python
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather information for a specified city. Call when the user asks about weather.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "City name, e.g., 'Beijing', 'Shanghai'"
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "Temperature unit, defaults to Celsius"
                    }
                },
                "required": ["city"]
            }
        }
    }
]
```
Complete Call Example
```python
from openai import OpenAI
import json

client = OpenAI()

def get_weather(city, unit="celsius"):
    """Actual weather API call"""
    # Simulated API response
    return {"city": city, "temperature": 22, "unit": unit, "condition": "sunny"}

def run_conversation(user_message):
    messages = [{"role": "user", "content": user_message}]

    # Round 1: LLM decides whether to call a function
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
        tools=tools,
        tool_choice="auto",  # "auto" / "none" / "required" / a specific function
    )
    message = response.choices[0].message

    if message.tool_calls:
        # Execute function calls
        messages.append(message)
        for tool_call in message.tool_calls:
            func_name = tool_call.function.name
            func_args = json.loads(tool_call.function.arguments)

            # Execute the corresponding function
            if func_name == "get_weather":
                result = get_weather(**func_args)
            else:
                result = {"error": f"Unknown tool: {func_name}"}

            # Return the result to the LLM
            messages.append({
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": json.dumps(result, ensure_ascii=False)
            })

        # Round 2: LLM generates answer based on function results
        final_response = client.chat.completions.create(
            model="gpt-4o",
            messages=messages,
        )
        return final_response.choices[0].message.content

    return message.content

# Usage
print(run_conversation("What's the weather like in Beijing today?"))
```
Parallel Function Calling
GPT-4o supports calling multiple functions in a single response:
```python
# The LLM may return multiple tool_calls in a single response,
# e.g., "What's the weather in Beijing and Shanghai respectively?"
# tool_calls: [
#   {"function": {"name": "get_weather", "arguments": '{"city": "Beijing"}'}},
#   {"function": {"name": "get_weather", "arguments": '{"city": "Shanghai"}'}},
# ]
```
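Because the calls in a single response are independent of one another, the application can execute them concurrently before replying. A minimal sketch using a thread pool (dispatch is simplified to the single `get_weather` tool from above; real code would dispatch on `tool_call.function.name`):

```python
import json
from concurrent.futures import ThreadPoolExecutor

def execute_parallel_tool_calls(tool_calls):
    """Run independent tool calls concurrently; return one 'tool' message per call."""
    def run_one(tool_call):
        args = json.loads(tool_call.function.arguments)
        result = get_weather(**args)  # single-tool dispatch, simplified for brevity
        return {
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": json.dumps(result, ensure_ascii=False),
        }

    with ThreadPoolExecutor(max_workers=max(len(tool_calls), 1)) as pool:
        return list(pool.map(run_one, tool_calls))

# messages.extend(execute_parallel_tool_calls(message.tool_calls))
```

However the calls are executed, every `tool_call_id` must receive a matching `tool` message before the next model turn.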
Structured Outputs
Ensures function parameters strictly adhere to JSON Schema:
```python
tools = [
    {
        "type": "function",
        "function": {
            "name": "create_task",
            "description": "Create a new task",
            "parameters": {
                "type": "object",
                "properties": {
                    "title": {"type": "string"},
                    "priority": {"type": "string", "enum": ["high", "medium", "low"]},
                    # Strict mode requires every property to appear in "required";
                    # optional fields are expressed as nullable types instead.
                    "due_date": {"type": ["string", "null"], "format": "date"},
                },
                "required": ["title", "priority", "due_date"],
                "additionalProperties": False,  # Mandatory in strict mode
            },
            "strict": True  # Enable Structured Outputs
        }
    }
]
```
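Rather than writing strict schemas by hand, the OpenAI Python SDK can derive a strict tool definition from a Pydantic model via `openai.pydantic_function_tool`. A sketch (the `Priority` and `CreateTask` names are illustrative, mirroring the example above):

```python
from enum import Enum
from typing import Optional

import openai
from pydantic import BaseModel

class Priority(str, Enum):
    high = "high"
    medium = "medium"
    low = "low"

class CreateTask(BaseModel):
    """Create a new task."""
    title: str
    priority: Priority
    due_date: Optional[str] = None  # emitted as a nullable field in the strict schema

# Produces a {"type": "function", ..., "strict": true} tool definition
tools = [openai.pydantic_function_tool(CreateTask, name="create_task")]
```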
Anthropic Tool Use
Tool Definition
```python
tools = [
    {
        "name": "get_weather",
        "description": "Get current weather information for a specified city",
        "input_schema": {
            "type": "object",
            "properties": {
                "city": {
                    "type": "string",
                    "description": "City name"
                },
                "unit": {
                    "type": "string",
                    "enum": ["celsius", "fahrenheit"],
                    "description": "Temperature unit"
                }
            },
            "required": ["city"]
        }
    }
]
```
Call Example
```python
import json
import anthropic

client = anthropic.Anthropic()

def execute_tool(name, arguments):
    """Dispatch a tool call to the local implementation."""
    if name == "get_weather":
        return get_weather(**arguments)
    return {"error": f"Unknown tool: {name}"}

def run_claude_tool_use(user_message):
    messages = [{"role": "user", "content": user_message}]
    response = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        tools=tools,
        messages=messages,
    )

    # Claude's response may contain both text and tool calls
    tool_calls = [block for block in response.content if block.type == "tool_use"]
    if tool_calls:
        # Execute tool calls
        messages.append({"role": "assistant", "content": response.content})
        tool_results = []
        for tc in tool_calls:
            result = execute_tool(tc.name, tc.input)
            tool_results.append({
                "type": "tool_result",
                "tool_use_id": tc.id,
                "content": json.dumps(result, ensure_ascii=False)
            })
        messages.append({"role": "user", "content": tool_results})

        # Get final answer
        final = client.messages.create(
            model="claude-sonnet-4-20250514",
            max_tokens=1024,
            tools=tools,
            messages=messages,
        )
        return final.content[0].text

    return response.content[0].text
```
Claude's Special Capabilities
- Thinking process: Uses extended thinking to reveal the reasoning behind tool selection
- Multi-turn tool use: Automatically performs multiple rounds of tool calls until sufficient information is obtained (see the driver-loop sketch below)
- Interleaved text and tools: Mixes text and tool calls within a single response
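The multi-turn behavior can be driven by looping while the response's `stop_reason` is `"tool_use"`. A minimal sketch, reusing the `client`, `tools`, and `execute_tool` helper from the example above:

```python
def claude_tool_loop(user_message, max_turns=10):
    """Keep calling Claude until it answers without requesting another tool."""
    messages = [{"role": "user", "content": user_message}]
    for _ in range(max_turns):
        response = client.messages.create(
            model="claude-sonnet-4-20250514",
            max_tokens=1024,
            tools=tools,
            messages=messages,
        )
        if response.stop_reason != "tool_use":
            # Final answer: concatenate the text blocks
            return "".join(b.text for b in response.content if b.type == "text")

        messages.append({"role": "assistant", "content": response.content})
        tool_results = [
            {
                "type": "tool_result",
                "tool_use_id": block.id,
                "content": json.dumps(execute_tool(block.name, block.input), ensure_ascii=False),
            }
            for block in response.content
            if block.type == "tool_use"
        ]
        messages.append({"role": "user", "content": tool_results})
    return "Maximum turns reached."
```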
Google Gemini Function Calling
Tool Definition
```python
import google.generativeai as genai

# Assumes the API key has already been configured, e.g. via genai.configure(api_key=...)

def get_weather(city: str, unit: str = "celsius") -> dict:
    """Get weather information for a specified city.

    Args:
        city: City name
        unit: Temperature unit (celsius or fahrenheit)

    Returns:
        Weather information dictionary
    """
    return {"city": city, "temperature": 22, "unit": unit}

# Gemini can automatically generate tool declarations from Python function signatures and docstrings
model = genai.GenerativeModel(
    model_name="gemini-1.5-pro",
    tools=[get_weather]
)

chat = model.start_chat()
response = chat.send_message("What's the weather in Tokyo?")

# Handle function calls manually
for part in response.parts:
    if part.function_call:
        result = get_weather(**part.function_call.args)
        response = chat.send_message(
            genai.protos.Content(
                parts=[genai.protos.Part(
                    function_response=genai.protos.FunctionResponse(
                        name=part.function_call.name,
                        response={"result": result}
                    )
                )]
            )
        )
```
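The SDK can also run this request/execute/respond loop itself: with `enable_automatic_function_calling=True`, the chat session calls the Python function and sends the `FunctionResponse` back automatically, so only the final text is returned. A sketch reusing the `model` defined above:

```python
# Let the SDK execute get_weather and return only the final answer
auto_chat = model.start_chat(enable_automatic_function_calling=True)
auto_response = auto_chat.send_message("What's the weather in Tokyo?")
print(auto_response.text)  # Tool round-trips are handled internally
```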
Platform Comparison
| Feature | OpenAI | Anthropic | Google Gemini |
|---|---|---|---|
| Tool definition format | JSON Schema | JSON Schema | Python function / JSON |
| Parallel calls | Supported | Supported | Supported |
| Forced call | `tool_choice: required` | `tool_choice: any` | `tool_config` |
| Specify tool | `tool_choice: {name}` | `tool_choice: {name}` | `allowed_function_names` |
| Structured outputs | `strict: true` | Not supported | Not supported |
| Streaming tool calls | Supported | Supported | Supported |
| Max tools | 128 | Hundreds | Hundreds |
| Nested objects | Supported | Supported | Supported |
Advanced Patterns
Tool Selection Strategies
```python
# 1. Automatic selection (default)
tool_choice = "auto"  # LLM decides

# 2. Force tool use
tool_choice = "required"  # Must call at least one tool

# 3. Specify a tool
tool_choice = {"type": "function", "function": {"name": "get_weather"}}

# 4. Disable tools
tool_choice = "none"  # Do not call any tools
```
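The other providers express the same strategies with different field names. A sketch following the documented formats (`get_weather` is the example tool from earlier):

```python
# Anthropic: tool_choice is an object rather than a string
anthropic_auto = {"type": "auto"}                             # model decides (default)
anthropic_required = {"type": "any"}                          # must call at least one tool
anthropic_specific = {"type": "tool", "name": "get_weather"}  # must call this specific tool

# Gemini: tool_config controls the function-calling mode
gemini_tool_config = {
    "function_calling_config": {
        "mode": "ANY",                              # AUTO / ANY / NONE
        "allowed_function_names": ["get_weather"],  # optionally restrict the candidates
    }
}
```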
Error Handling
```python
def safe_tool_execution(tool_name, arguments, timeout=30):
    """Safe tool execution with error handling.

    execute_with_timeout and ValidationError are application-provided: a timeout
    wrapper around the tool dispatcher and a schema-validation error type.
    """
    try:
        result = execute_with_timeout(tool_name, arguments, timeout)
        return {"status": "success", "data": result}
    except TimeoutError:
        return {"status": "error", "message": f"Tool {tool_name} timed out ({timeout}s)"}
    except ValidationError as e:
        return {"status": "error", "message": f"Parameter validation failed: {e}"}
    except Exception as e:
        return {"status": "error", "message": f"Execution failed: {str(e)}"}
```
Formatting Tool Results
```python
def format_tool_result(result, max_length=4000):
    """Format tool results to prevent excessive length"""
    result_str = json.dumps(result, ensure_ascii=False, indent=2)
    if len(result_str) > max_length:
        # Truncate and add a notice
        truncated = result_str[:max_length]
        return truncated + f"\n... [Result truncated, original length: {len(result_str)} chars]"
    return result_str
```
Multi-Turn Tool Calling Loop
```python
def agent_tool_loop(query, tools, llm, max_iterations=10):
    """Complete multi-turn tool calling loop (llm is a thin client wrapper)."""
    messages = [{"role": "user", "content": query}]

    for _ in range(max_iterations):
        response = llm.chat(messages, tools=tools)

        if not response.tool_calls:
            return response.content  # Final answer

        messages.append(response.message)
        for tool_call in response.tool_calls:
            result = safe_tool_execution(
                tool_call.function.name,
                json.loads(tool_call.function.arguments)
            )
            messages.append({
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": format_tool_result(result)
            })

    return "Maximum iterations reached; task incomplete."
```
Practical Recommendations
Tool Design Checklist
- [ ] Tool name is clear and unambiguous
- [ ] Description details when to use and when not to use
- [ ] Parameters have clear types and descriptions
- [ ] Use enums to constrain parameter ranges
- [ ] Required fields are marked
- [ ] Tool returns structured results
- [ ] Timeout and error handling are implemented
Common Pitfalls
- Vague tool descriptions: LLM cannot determine when to use the tool
- Overly complex parameters: Deep nesting leads to parameter construction errors
- Ignoring error handling: Agent cannot recover when tools fail
- Too many tools: Tool selection becomes difficult; keep to 10-20 tools
- Excessively long results: Untruncated large results waste context space
References
- OpenAI. "Function Calling" Documentation
- Anthropic. "Tool Use" Documentation
- Google. "Gemini Function Calling" Documentation
- Patil, S. G., et al. (2023). "Gorilla: Large Language Model Connected with Massive APIs"