Introduction
Large Language Models (LLMs) were initially used for text generation: answering questions, generating summaries, and translating text. But they have now evolved into agents that use external tools (APIs, functions, databases, calculators) to perform complex reasoning and take actions.
What is Function Calling?
With function calling, an LLM generates structured output that calls an external tool or API. The LLM returns a JSON object with arguments that can be passed to a real function.
This JSON object contains the name of the function and the parameters the function needs to execute.
I am using Python, so I will use Python functions as examples.
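As a minimal sketch of the idea, here is how such a tool-call JSON can be parsed and dispatched to a real function by hand. The get_weather tool and its arguments are hypothetical, purely for illustration, and are not part of the LangChain example below:

```python
import json

# Hypothetical tool-call JSON as an LLM might emit it
# (get_weather and its arguments are illustrative, not from this article's code)
raw = '{"name": "get_weather", "arguments": {"city": "Paris", "unit": "celsius"}}'

def get_weather(city: str, unit: str = "celsius") -> str:
    """Stub implementation standing in for a real weather API call."""
    return f"22 degrees {unit} in {city}"

tools = {"get_weather": get_weather}

call = json.loads(raw)                              # parse the structured output
result = tools[call["name"]](**call["arguments"])   # dispatch to the real function
print(result)  # 22 degrees celsius in Paris
```

Frameworks like LangChain automate exactly this parse-and-dispatch step.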
Example Code
Setup
```shell
pip install langchain langchain[groq]
```

Environment Variable

Set up your GROQ API key:

```shell
export GROQ_API_KEY="your-key"
```

```python
from langchain.chat_models import init_chat_model
from langchain.tools import tool

# Define tools the LLM can call
@tool
def get_date(_: str) -> str:
    """Returns current Date"""
    from datetime import datetime
    return datetime.now().strftime("%Y-%m-%d")

@tool
def get_time(_: str = "") -> str:
    """Returns current Time"""
    from datetime import datetime
    return datetime.now().strftime("%H:%M:%S")

# Set up the LLM
llm = init_chat_model("gemma2-9b-it", model_provider="groq")
llm_tools = llm.bind_tools([get_date, get_time])

# Invoke the model with a user message
response = llm_tools.invoke([{"role": "user", "content": "What is the current time?"}])

# The response contains the tool call
print(response)

messages = []
tools_dict = {"get_date": get_date, "get_time": get_time}

# Execute the function chosen by the LLM's tool call
for tool_call in response.tool_calls:
    selected_tool = tools_dict[tool_call["name"].lower()]
    tool_msg = selected_tool.invoke(tool_call)
    messages.append(tool_msg)
    print(tool_msg)
```

What’s Happening?
- LLM gets the prompt.
- It decides whether get_date or get_time should be called.
```
content='' additional_kwargs={} response_metadata={'model': 'llama3.1', 'created_at': '2025-04-25T16:52:08.0778404Z', 'done': True, 'done_reason': 'stop', 'total_duration': 4427471300, 'load_duration': 81652000, 'prompt_eval_count': 182, 'prompt_eval_duration': 295000000, 'eval_count': 13, 'eval_duration': 4049000000, 'model_name': 'llama3.1'} id='run-ba491667-285d-4c11-885b-e89ef0f02b31-0' tool_calls=[{'name': 'get_time', 'args': {}, 'id': '03b3bdb0-729b-4568-ba00-0539c3272f81', 'type': 'tool_call'}] usage_metadata={'input_tokens': 182, 'output_tokens': 13, 'total_tokens': 195}
```

- It generates a JSON that LangChain converts to a function call.
- The result of the function is shown to the user.
```
content='21:52:08' name='get_time' tool_call_id='03b3bdb0-729b-4568-ba00-0539c3272f81'
```

Let’s break it down clearly:
🔧 Tool 1: get_date
```python
@tool
def get_date(_: str) -> str:
    """Returns current Date"""
    from datetime import datetime
    return datetime.now().strftime("%Y-%m-%d")
```

- Tool name: get_date
- Docstring: "Returns current Date"
- This description is used by the LLM to decide: “If the user asks something related to today’s date, I should call this tool.”
- Even if the tool doesn’t need input, it must accept one argument (_: str) for compatibility with LangChain’s tool signature.
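For contrast, here is a sketch of how a tool that does take input receives its argument. The days_from_now function and the hand-written tool_call dict are hypothetical, shown without LangChain so the mechanics stay visible:

```python
from datetime import datetime, timedelta

def days_from_now(days: int) -> str:
    """Returns the date a given number of days in the future (YYYY-MM-DD)."""
    return (datetime.now() + timedelta(days=days)).strftime("%Y-%m-%d")

# Parsed from the LLM's output, the tool call carries the argument:
tool_call = {"name": "days_from_now", "args": {"days": 7}}

# The args dict maps directly onto the function's parameters
print(days_from_now(**tool_call["args"]))
```

This is why parameter names and type hints matter: the LLM fills the args dict using exactly those names.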
⏰ Tool 2: get_time
```python
@tool
def get_time(_: str = "") -> str:
    """Returns current Time"""
    from datetime import datetime
    return datetime.now().strftime("%H:%M:%S")
```

- Tool name: get_time
- Docstring: "Returns current Time"
- So, when the user asks “What’s the time now?”, the LLM will read the tool’s description and decide: “Sounds like I should call get_time.”
🧠Why Are These Descriptions So Crucial?
LangChain sends a structured schema of tools to the LLM like:
```json
{
  "name": "get_time",
  "description": "Returns current Time",
  "parameters": {...}
}
```

So your tool’s docstring is the only thing the LLM has to understand the tool’s purpose.
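To see why the docstring carries so much weight, here is a rough standard-library sketch of how a framework could derive that schema from a plain function. LangChain’s real implementation is more elaborate, so treat this only as an illustration of the idea:

```python
import inspect

def get_time(_: str = "") -> str:
    """Returns current Time"""
    from datetime import datetime
    return datetime.now().strftime("%H:%M:%S")

def tool_schema(fn):
    # The docstring becomes the description the LLM sees
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "parameters": list(inspect.signature(fn).parameters),
    }

print(tool_schema(get_time))
# {'name': 'get_time', 'description': 'Returns current Time', 'parameters': ['_']}
```

Nothing of the function body reaches the model, only the name, docstring, and parameters.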
If the docstring is too vague (e.g., "Returns a value"), the LLM won’t know when to use it. If it's too specific or inaccurate, it might mislead the LLM.
✅ Best Practice for Writing Tool Docstrings
- Describe clearly what the tool does.
- Include input/output expectations if needed.
- Use natural language — LLMs read this like any text.
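As a sketch of these practices, here is a hypothetical convert_currency tool (plain function with a stub rate table) whose docstring states the purpose, a trigger phrasing, the inputs, and the output:

```python
def convert_currency(amount: float, from_code: str, to_code: str) -> float:
    """Convert an amount of money between two currencies.

    Use this when the user asks to convert money, e.g. "How much is 100 USD in EUR?".
    Input: amount (a number), from_code and to_code (ISO 4217 codes such as "USD").
    Output: the converted amount as a number.
    """
    rates = {("USD", "EUR"): 0.9}  # stub rate table standing in for a live FX API
    return amount * rates[(from_code, to_code)]

print(convert_currency(100, "USD", "EUR"))  # 90.0
```

A docstring like this gives the LLM both the “when to call” signal and the argument format it must produce.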
🛠Use Cases
- Medical AI: Use tools for diagnosis and explanation.
- Finance Bots: Access real-time stock data and generate reports.
- Customer Service: Query product APIs and take actions.
- DevOps Copilot: Execute shell commands or query Grafana dashboards.
🧠Summary
- LLMs are no longer just language models — they’re tool-using reasoning machines.
- LangChain orchestrates tools, memory, and LLMs.