I. Introduction
Large Language Models (LLMs) have transformed the way humans interact with technology. Once limited to generating text for human consumption, these models are now stepping into a much more powerful role: they can interact directly with software systems, making decisions, retrieving information, and even taking actions autonomously.
Imagine asking an AI to fetch real-time weather data, calculate a financial projection, or pull specific entries from a database, all without writing traditional code for each request. This is the new frontier opened by function calling, a capability introduced by OpenAI. By enabling LLMs to output structured data such as JSON, developers can allow AI to invoke functions or APIs intelligently, bridging the gap between natural language understanding and actionable software execution.
Enter LangChain
This is where LangChain shines. LangChain is an open-source library designed to connect traditional software infrastructure with LLMs. Whether you want to integrate multiple language models, connect with vector databases, or build interactive agents, LangChain offers over 500 integrations to simplify the process.
Key features of LangChain include:
Support for multiple LLMs: Use different models in the same workflow without complex code changes.
Memory chains: Enable context-aware conversations or multi-step reasoning.
Agents and tools: Allow LLMs to decide which actions or tools to use dynamically.
What You Will Learn
This course focuses on the two major advancements that are redefining what developers can do with LLMs:
Function Calling Made Simple
Function calling empowers LLMs to invoke external programs as subroutines. This not only simplifies building tools for LLMs but also makes interactions more reliable and structured. For example, an LLM can now fetch, parse, and return data in JSON format without manual intervention.
Tagging and Data Extraction
Traditionally, LLMs struggled with structured or tabular data. With function calling integrated into LangChain, these models can now extract information efficiently. You’ll learn to tag data, parse entities, and handle structured outputs—skills critical for real-world AI applications.
Building a Conversational Agent
By combining function calling, tools, and LangChain agents, you’ll be able to create sophisticated conversational agents. These agents are capable of:
Understanding user queries.
Deciding which tool or function to invoke.
Maintaining context across multi-turn interactions.
This approach opens doors to building intelligent systems that do more than just chat—they act, compute, and retrieve information dynamically.
Final Thoughts
The integration of function calling with LangChain represents a paradigm shift in AI development. Instead of treating LLMs solely as text generators, we can now leverage them as intelligent orchestrators of software systems. By mastering these tools, you’ll not only understand the future of AI-powered applications but also gain hands-on experience building solutions that are smarter, faster, and more interactive.
-----------------------------------------------------------------------------------------------------------------------------------------------
II. Harnessing OpenAI Function Calling: A Beginner’s Guide
Large Language Models (LLMs) like OpenAI’s GPT series have transformed how we interact with computers, enabling human-like text generation. But what if these models could do more than just chat? What if they could call functions in your software, retrieve data, and act on it automatically? Enter OpenAI Function Calling—a game-changing feature recently added to the API.
What is OpenAI Function Calling?
Traditionally, LLMs generate text responses based on user input. Function calling changes the game by allowing the model to output structured data—usually JSON—indicating which function to call and with what arguments. This lets your AI interface directly with your software or external APIs, enabling actions like fetching real-time weather, querying databases, or triggering automated workflows.
How Does It Work?
OpenAI has fine-tuned some of its latest models to recognize when calling a function is relevant. You define your functions in a structured way, specifying:
Function name: The identifier used to call the function.
Description: Explains what the function does, helping the model decide when to use it.
Parameters: Defines what arguments the function requires, including types, descriptions, and optional enums (e.g., Celsius or Fahrenheit for temperature).
Required fields: Indicate which parameters must always be provided.
Once your function is defined, you pass it to the LLM through the functions parameter in the API call.
A Practical Example: Getting the Current Weather
Let’s consider a simple example: getCurrentWeather(location, unit). Since the LLM cannot access live weather data on its own, we provide this function.
Define the function: Include a JSON object describing the function, parameters, and descriptions.
Send messages to the LLM: Include user prompts along with the function definitions.
Model decides whether to call the function:
By default, the model operates in auto mode, choosing whether to call the function.
You can also force the function call or prevent it entirely using the function_call parameter.
Receive structured function call: The response includes the function name and a JSON dictionary of arguments.
⚠️ Important: The model doesn’t execute the function. It only tells you which function to call and with what arguments. You handle the actual execution in your code.
Handling Function Outputs
Once the function executes, you can pass its output back to the LLM to generate a final, natural language response. This enables conversational agents that not only understand queries but also act on them and explain the results in plain language. For example:
“The current weather in Boston is 72°F with a sunny and windy forecast.”
Tips and Best Practices
Token usage: Function definitions and descriptions count toward the model’s token limit, so keep them concise.
Parameter validation: Although the model is trained to return JSON, it isn’t strictly enforced. Always validate the output.
Function chaining: You can combine multiple function calls, passing outputs from one as inputs to another, creating more complex workflows.
What’s Next?
While you can use function calling directly through the OpenAI SDK, integrating it with LangChain primitives makes building multi-step, tool-using agents easier and faster. LangChain provides ready-made abstractions for chaining functions, routing queries, and maintaining conversational memory.
By leveraging OpenAI function calling, developers can turn LLMs from passive text generators into active participants in software workflows, unlocking a world of possibilities for automation, real-time data access, and intelligent agent design.
II. Exploring OpenAI Function Calling with GPT-3.5-Turbo
Large Language Models (LLMs) like GPT have become remarkably adept at understanding and generating natural language. But their capabilities extend far beyond just chatting—they can now interact with software functions, enabling dynamic responses based on live data. This is made possible through OpenAI Function Calling, a feature that lets the model suggest which functions to call and with what arguments.
In this tutorial, we’ll explore how to implement OpenAI Function Calling using the OpenAI SDK, with practical Python examples.
Why Function Calling Matters
Traditionally, GPT models generate text for humans. But real-world applications often require more than just text—they need structured data, action triggers, or API calls.
Function Calling bridges this gap by allowing the LLM to output a JSON object specifying:
The function name it wants to call.
The arguments to pass to that function.
This approach enables intelligent software orchestration: the model decides what action is required, and your backend executes it.
Setting Up the Environment
Before getting started, ensure you have your OpenAI API key stored in a .env file:
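A common setup sketch, assuming the python-dotenv package and the older openai 0.x SDK that was current when this material was written:

```python
# .env file contents (not Python):  OPENAI_API_KEY=sk-...

import os
import openai
from dotenv import load_dotenv, find_dotenv

_ = load_dotenv(find_dotenv())                  # read variables from the .env file
openai.api_key = os.environ["OPENAI_API_KEY"]   # openai 0.x style; the 1.x client differs
```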
Defining a Function
For demonstration, we’ll use a dummy function get_current_weather that returns the current weather. In production, this could be replaced with a real API call:
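A minimal dummy implementation (the hard-coded values are placeholders):

```python
import json

def get_current_weather(location, unit="fahrenheit"):
    """Return fake weather data; swap in a real weather API call in production."""
    weather_info = {
        "location": location,
        "temperature": "72",
        "unit": unit,
        "forecast": ["sunny", "windy"],
    }
    return json.dumps(weather_info)
```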
Passing Functions to the LLM
We define the function metadata for OpenAI using a JSON-like structure:
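A sketch of that metadata list (the schema format follows OpenAI's functions specification; the example values are ours):

```python
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA",
                },
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    }
]
```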
The description and parameters help the model decide when and how to call this function.
Calling the Model
Here’s a basic example where the model decides whether to use the function:
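A basic call, again in the older openai 0.x SDK style used here (the newer 1.x client uses client.chat.completions.create and a tools parameter instead):

```python
messages = [{"role": "user", "content": "What's the weather like in Boston?"}]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",   # a snapshot that supports function calling
    messages=messages,
    functions=functions,          # the definitions from above
)

response_message = response["choices"][0]["message"]
print(response_message)
```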
The output contains a function_call object with:
name: the function to call (get_current_weather).
arguments: a JSON blob containing the required inputs.
You can then convert the arguments to a Python dictionary:
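For example:

```python
import json

args = json.loads(response_message["function_call"]["arguments"])
# e.g. {'location': 'Boston, MA'}
observation = get_current_weather(**args)
```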
Handling Different Function Call Modes
OpenAI provides three modes for function calling:
Auto (default): Model decides if a function call is necessary.
None: Prevents the model from calling any function.
Force call: Ensures the function is always called.
Example: Forcing a function call:
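A sketch with the 0.x SDK; function_call also accepts the string "none" to disable calls, or "auto" for the default behavior:

```python
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "Hi there!"}],
    functions=functions,
    function_call={"name": "get_current_weather"},  # force this specific function
)
```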
Integrating Function Responses Back into the Model
After executing the function, you can pass the results back to the LLM to generate a final response:
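A sketch of the full round trip; the message sequence (assistant function call, then a "function" role message with the result) follows the OpenAI pattern, while the variable names are ours:

```python
messages = [{"role": "user", "content": "What's the weather like in Boston?"}]
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613", messages=messages, functions=functions
)
assistant_msg = response["choices"][0]["message"]

# Run the function ourselves with the suggested arguments
args = json.loads(assistant_msg["function_call"]["arguments"])
observation = get_current_weather(**args)

# Append the assistant's function call and the function result, then ask again
messages.append(assistant_msg)
messages.append({"role": "function", "name": "get_current_weather", "content": observation})

final = openai.ChatCompletion.create(model="gpt-3.5-turbo-0613", messages=messages)
print(final["choices"][0]["message"]["content"])
```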
Output:
"The current weather in Boston is 72°F with a sunny and windy forecast."
Tips and Best Practices
Token usage: Function definitions count toward your token limit. Keep descriptions concise.
Validation: The model suggests arguments but does not execute functions. Always validate before using.
Multi-step workflows: You can chain multiple function calls and feed outputs back into the model for complex logic.
Conclusion
OpenAI Function Calling transforms GPT from a passive text generator into an active decision-making agent that can interface with your applications. By combining function definitions, argument parsing, and model responses, you can create intelligent systems capable of real-world interactions.
Next, you can explore LangChain integration, which simplifies chaining multiple functions, routing queries, and maintaining conversation context for advanced applications.
III. Unlocking the Power of LangChain Expression Language (LCEL)
When working with large language models (LLMs), building robust applications often requires composing multiple components—like prompt templates, models, retrievers, and parsers—into coherent pipelines. This is where LangChain Expression Language (LCEL) comes in, providing a new syntax and framework for constructing these pipelines more transparently and efficiently.
What is LCEL?
LCEL is a syntax and interface introduced in LangChain that standardizes how components—called runnables—are combined and executed. Runnables can be anything from a language model call to a prompt template or an output parser. By using LCEL, developers can define inputs, outputs, and execution logic clearly, with built-in support for streaming, batching, and asynchronous execution.
Key Features of LCEL
Standardized Runnable Interface
Every runnable exposes common methods:
invoke(): Executes the runnable on a single input.
stream(): Executes the runnable on a single input and streams the response.
batch(): Executes the runnable on a list of inputs.
Each of these also has a corresponding asynchronous version.
Input and Output Schemas
Runnables define schemas for inputs and outputs, making pipelines predictable and debuggable.
Fallback Mechanisms
LLMs can be unpredictable. LCEL allows attaching fallbacks not only to individual components but also to entire chains, improving reliability.
Parallel Execution
LLM calls can take time. LCEL makes it easy to run components in parallel, reducing latency in complex pipelines.
Built-in Logging
As pipelines grow, tracking inputs, outputs, and steps becomes crucial. LCEL natively logs this information, helping with debugging and monitoring.
Creating a Simple LCEL Chain
Let’s build a basic pipeline that generates a short joke:
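A minimal sketch using the LangChain 0.0.x imports from the era this was written (paths differ in later releases):

```python
from langchain.prompts import ChatPromptTemplate
from langchain.chat_models import ChatOpenAI
from langchain.schema.output_parser import StrOutputParser

prompt = ChatPromptTemplate.from_template("Tell me a short joke about {topic}")
model = ChatOpenAI()
output_parser = StrOutputParser()

chain = prompt | model | output_parser   # LCEL pipe syntax
chain.invoke({"topic": "bears"})
```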
Example output:
"Why don't bears ever get caught in traffic? Because they always take the beariest best routes."
Building a Retrieval-Augmented Chain
LCEL also supports retrieval-augmented generation (RAG). You can fetch relevant documents, then pass them into a language model:
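A sketch using an in-memory vector store (requires the docarray package; the two example sentences and the question are placeholders):

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser
from langchain.schema.runnable import RunnableMap
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import DocArrayInMemorySearch

vectorstore = DocArrayInMemorySearch.from_texts(
    ["harrison worked at kensho", "bears like to eat honey"],
    embedding=OpenAIEmbeddings(),
)
retriever = vectorstore.as_retriever()

prompt = ChatPromptTemplate.from_template(
    "Answer the question based only on the following context:\n{context}\n\nQuestion: {question}"
)
model = ChatOpenAI(temperature=0)

chain = RunnableMap({
    "context": lambda x: retriever.get_relevant_documents(x["question"]),
    "question": lambda x: x["question"],
}) | prompt | model | StrOutputParser()

chain.invoke({"question": "Where did Harrison work?"})
```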
This allows you to dynamically fetch context and feed it into your prompts.
Parameter Binding and Updating Components
LCEL makes it easy to bind or override parameters in a chain:
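For instance, .bind() returns a new runnable with extra keyword arguments (here a stop sequence) baked into every invocation; a small sketch:

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("Repeat after me: {phrase}")
model = ChatOpenAI(temperature=0)

# The bound stop sequence is passed to the model on every call
chain = prompt | model.bind(stop=["\n"])
chain.invoke({"phrase": "hello"})
```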
You can also attach fallbacks if a model fails to produce valid output:
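A sketch of with_fallbacks(), assuming a second, more capable chain is available to fall back to:

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser

prompt = ChatPromptTemplate.from_template("Write a one-line poem about {topic}")

simple_chain = prompt | ChatOpenAI(model="gpt-3.5-turbo") | StrOutputParser()
better_chain = prompt | ChatOpenAI(model="gpt-4") | StrOutputParser()

# If the first chain raises an error, LCEL retries the same input on the fallback
final_chain = simple_chain.with_fallbacks([better_chain])
final_chain.invoke({"topic": "bears"})
```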
This ensures that your pipeline continues to work even when a component fails.
Advanced Execution Patterns
Batch Execution: Call the chain on multiple inputs simultaneously.
Streaming Responses: Get partial results as they are generated.
Asynchronous Execution: Run multiple chains in parallel to speed up processing.
LCEL pipelines can scale to hundreds of components, enabling sophisticated multi-step reasoning and RAG workflows.
Why LCEL is a Game Changer
Transparency: Clear structure of inputs, outputs, and transformations.
Flexibility: Easy to swap components, bind parameters, or attach fallbacks.
Performance: Built-in support for parallelism and streaming.
Observability: Native logging of each step in the chain.
LCEL empowers developers to build robust, complex LLM applications with minimal boilerplate and maximum clarity.
Next Steps
In the next lesson, LCEL is combined with OpenAI Function Calling, enabling pipelines that not only generate text but also decide when to call external functions, creating fully interactive and intelligent agents.
------------------------------------------------------------------------------------------------------------------------------------------------
III. Mastering LangChain Expression Language (LCEL) – Hands-On Tutorial
LangChain Expression Language (LCEL) is a modern syntax for building pipelines and chains with LLMs, making it easier to compose prompts, models, retrievers, and output parsers in a transparent and flexible way. Let’s walk through it step by step.
1. Setting Up the Environment
Before diving in, we set up the environment and initialize the OpenAI API:
Tip: Always store your API key in .env to avoid hardcoding credentials.
2. Creating a Simple Chain
LCEL allows you to chain components together. Let’s start by creating a simple joke generator:
Output example:
"Why don't bears ever get caught in traffic? Because they always take the beariest best routes."
This simple chain demonstrates prompt → model → parser flow.
3. Building a More Complex Chain with a Retriever
Often, LLMs need context from external data. LCEL supports retrieval-augmented pipelines:
Next, we use a RunnableMap to fetch context dynamically:
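A focused sketch of that mapping step, assuming the retriever, prompt, model, and output parser were created in the previous step:

```python
from langchain.schema.runnable import RunnableMap

inputs = RunnableMap({
    "context": lambda x: retriever.get_relevant_documents(x["question"]),
    "question": lambda x: x["question"],
})
# Inspect what the map produces before wiring it into the full chain
inputs.invoke({"question": "Where did Harrison work?"})

chain = inputs | prompt | model | output_parser
chain.invoke({"question": "Where did Harrison work?"})
```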
The pipeline now fetches relevant documents and feeds them into the LLM automatically.
4. Using OpenAI Function Calls
LCEL can integrate OpenAI Function Calling, allowing your model to decide when to call external functions:
You can bind multiple functions and let the model pick which to call:
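A sketch; the weather_search and sports_search schemas are illustrative OpenAI function definitions:

```python
from langchain.chat_models import ChatOpenAI

functions = [
    {
        "name": "weather_search",
        "description": "Search for weather given an airport code",
        "parameters": {
            "type": "object",
            "properties": {"airport_code": {"type": "string", "description": "Airport code"}},
            "required": ["airport_code"],
        },
    },
    {
        "name": "sports_search",
        "description": "Search for news of recent sports events",
        "parameters": {
            "type": "object",
            "properties": {"team_name": {"type": "string", "description": "Sports team to search for"}},
            "required": ["team_name"],
        },
    },
]

model_with_functions = ChatOpenAI(temperature=0).bind(functions=functions)
model_with_functions.invoke("How did the Patriots do yesterday?")
# -> AIMessage whose additional_kwargs contain a function_call for sports_search
```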
5. Handling Fallbacks
LLMs aren’t perfect. LCEL allows you to attach fallbacks to chains or components:
Fallbacks ensure that even if the model fails to return valid JSON, the chain continues to work reliably.
6. Exploring the Runnable Interface
LCEL chains expose several powerful methods:
invoke() → single synchronous input
batch() → list of inputs, run in parallel
stream() → stream responses as they are generated
ainvoke() → asynchronous invocation
Example:
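For example, with the joke chain from earlier:

```python
chain.invoke({"topic": "bears"})                         # single input
chain.batch([{"topic": "bears"}, {"topic": "frogs"}])    # list of inputs, in parallel
for chunk in chain.stream({"topic": "bears"}):           # partial results as they arrive
    print(chunk, end="", flush=True)

# Async variant (run inside an async function / event loop)
# await chain.ainvoke({"topic": "bears"})
```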
These methods give you flexibility and performance for complex LLM applications.
Conclusion
With LCEL, you can:
Compose simple to complex chains
Integrate retrieval-augmented workflows
Bind OpenAI functions and allow dynamic function calling
Add fallbacks for reliability
Handle batch, streaming, and async execution
LCEL makes it easier than ever to build robust, maintainable, and transparent LLM applications.
------------------------------------------------------------------------------------------------------------------------------------------------
IV. OpenAI Function Calling in LangChain with Pydantic
LangChain Expression Language (LCEL) supports OpenAI Function Calls, and Pydantic makes defining those functions easier.
1. What is Pydantic?
Pydantic is a Python data validation library.
It lets you define schemas with type hints and automatically validates data.
Pydantic objects can be exported to JSON, making it easy to generate OpenAI function definitions.
Normal Python class vs Pydantic model:
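A quick comparison sketch:

```python
from pydantic import BaseModel

# Plain Python class: no validation, anything goes
class UserPlain:
    def __init__(self, name: str, age: int, email: str):
        self.name = name
        self.age = age
        self.email = email

# Pydantic model: types are validated on construction
class User(BaseModel):
    name: str
    age: int
    email: str

UserPlain(name="Jane", age="bar", email="jane@example.com")  # silently accepted
User(name="Jane", age="bar", email="jane@example.com")       # raises a ValidationError
```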
Pydantic validates types automatically. Passing age="bar" would fail with a Pydantic model but not in a normal class.
2. Converting Pydantic Models to OpenAI Functions
You can use Pydantic models to define OpenAI functions without manually writing the JSON schema.
Resulting JSON function schema:
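A sketch using the conversion helper from LangChain 0.0.x (the import path may differ in newer releases); the WeatherSearch model is an example:

```python
from pydantic import BaseModel, Field
from langchain.utils.openai_functions import convert_pydantic_to_openai_function

class WeatherSearch(BaseModel):
    """Call this with an airport code to get the weather at that airport."""
    airport_code: str = Field(description="airport code to get weather for")

weather_function = convert_pydantic_to_openai_function(WeatherSearch)
# Roughly:
# {
#   "name": "WeatherSearch",
#   "description": "Call this with an airport code to get the weather at that airport.",
#   "parameters": {
#     "type": "object",
#     "properties": {
#       "airport_code": {"type": "string", "description": "airport code to get weather for"}
#     },
#     "required": ["airport_code"],
#   },
# }
```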
The docstring becomes the function description, and Field descriptions map to parameter descriptions.
3. Using OpenAI Functions with LCEL
Once you have a function schema, you can pass it to a model and invoke it.
Output example:
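A sketch, continuing with the weather_function defined above (the output shown is approximate):

```python
from langchain.chat_models import ChatOpenAI

model = ChatOpenAI(temperature=0)
model.invoke("what is the weather in SF today?", functions=[weather_function])
# AIMessage(content='', additional_kwargs={'function_call':
#   {'name': 'WeatherSearch', 'arguments': '{"airport_code": "SFO"}'}})
```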
4. Binding Functions to Models
You can bind functions to the model so you don’t need to pass them every time:
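For example:

```python
model_with_function = model.bind(functions=[weather_function])
model_with_function.invoke("what is the weather in SF?")
```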
Binding is useful when passing models around in chains or pipelines.
5. Using Multiple Functions
You can pass a list of functions, letting the LLM decide which function to call:
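A sketch with a second, hypothetical ArtistSearch model alongside WeatherSearch:

```python
class ArtistSearch(BaseModel):
    """Call this to get the names of songs by a particular artist."""
    artist_name: str = Field(description="name of artist to look up")
    n: int = Field(description="number of results")

functions = [
    convert_pydantic_to_openai_function(WeatherSearch),
    convert_pydantic_to_openai_function(ArtistSearch),
]
model_with_functions = model.bind(functions=functions)

model_with_functions.invoke("what is the weather in SF?")              # picks WeatherSearch
model_with_functions.invoke("what are three songs by Taylor Swift?")   # picks ArtistSearch
model_with_functions.invoke("hi!")                                     # plain reply, no function call
```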
6. Summary
Using Pydantic + LCEL + OpenAI functions gives you:
Type safety and validation of inputs.
Automatic JSON function schemas from Python classes.
Dynamic function calling in chains.
Ability to bind functions to models for cleaner code.
This setup is ideal for structured extraction, tagging, and function orchestration in LangChain pipelines.
IV. 1. Pydantic Recap
Pydantic allows you to create validated data models in Python, providing type checking and structured data:
Nested structures are supported:
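A sketch of nesting one model inside another:

```python
from typing import List
from pydantic import BaseModel

class pUser(BaseModel):
    name: str
    age: int
    email: str

class Class(BaseModel):
    students: List[pUser]

obj = Class(students=[pUser(name="Jane", age=32, email="jane@example.com")])
```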
2. Pydantic → OpenAI Function
Use Pydantic models to define OpenAI function parameters.
Add docstrings for function descriptions.
Use Field for argument descriptions.
Important: Pydantic model must have a docstring for LangChain to generate a function; otherwise, it will fail:
3. Using Functions with ChatOpenAI
You can invoke a model with functions:
Bind a function to a model for repeated use:
Force a model to always call a specific function:
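A sketch, reusing the model and the WeatherSearch function definition from earlier:

```python
model_with_forced_function = model.bind(
    functions=[weather_function],
    function_call={"name": "WeatherSearch"},  # always call this function
)
model_with_forced_function.invoke("hi!")      # still returns a WeatherSearch call
```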
4. Using Functions in a Chain
This allows you to treat a function-bound model like a normal LCEL chain component.
5. Multiple Functions (Dynamic Selection)
Let the LLM choose which function to call based on input:
✅ Key Takeaways
Pydantic models = structured function arguments with validation.
Docstrings are mandatory to describe the function for OpenAI.
convert_pydantic_to_openai_function transforms models into OpenAI-compatible JSON functions.
Binding functions to models simplifies reuse in chains and LCEL pipelines.
Multiple functions let the model dynamically choose which function to call.
V. Tagging and Extraction
1. Concept Overview
Tagging:
Input: unstructured text + structured description (schema)
Goal: LLM outputs a single structured object based on the schema.
Example: Extract sentiment (pos, neg, neutral) and language (ISO 639-1) from text.
Extraction:
Input: unstructured text + structured description
Goal: LLM extracts multiple items/entities into a structured format.
Example: Extract a list of people mentioned in a text or list of papers cited in an article.
2. Setting Up Tagging
Define Pydantic model for tagging:
Convert Pydantic model to OpenAI function:
Set up model and chain:
Invoke tagging chain:
Use JSON output parser for convenience:
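Pulling the steps above together, a minimal sketch (LangChain 0.0.x imports; the Tagging model mirrors the sentiment/language example, and the Italian test sentence is illustrative):

```python
from pydantic import BaseModel, Field
from langchain.utils.openai_functions import convert_pydantic_to_openai_function
from langchain.prompts import ChatPromptTemplate
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers.openai_functions import JsonOutputFunctionsParser

class Tagging(BaseModel):
    """Tag the piece of text with particular info."""
    sentiment: str = Field(description="sentiment of text, should be `pos`, `neg`, or `neutral`")
    language: str = Field(description="language of text (should be ISO 639-1 code)")

tagging_functions = [convert_pydantic_to_openai_function(Tagging)]

prompt = ChatPromptTemplate.from_messages([
    ("system", "Think carefully, and then tag the text as instructed"),
    ("user", "{input}"),
])
model = ChatOpenAI(temperature=0).bind(
    functions=tagging_functions,
    function_call={"name": "Tagging"},   # force the tagging function every time
)

tagging_chain = prompt | model | JsonOutputFunctionsParser()
tagging_chain.invoke({"input": "non mi piace questo cibo"})
# -> {'sentiment': 'neg', 'language': 'it'}
```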
3. Setting Up Extraction
Define schemas for extraction:
Convert to OpenAI function:
Optional: Prompt to improve reasoning (e.g., don’t guess missing values):
Extract entities:
Use JSON key parser to extract specific field:
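And a matching extraction sketch (imports continue from the tagging sketch above; the Person/Information schemas and the test sentence are illustrative):

```python
from typing import List, Optional
from langchain.output_parsers.openai_functions import JsonKeyOutputFunctionsParser

class Person(BaseModel):
    """Information about a person."""
    name: str = Field(description="person's name")
    age: Optional[int] = Field(description="person's age, if mentioned")

class Information(BaseModel):
    """Information to extract."""
    people: List[Person] = Field(description="list of people mentioned in the text")

extraction_functions = [convert_pydantic_to_openai_function(Information)]

extraction_prompt = ChatPromptTemplate.from_messages([
    ("system", "Extract the relevant information. If an attribute is not present, do not guess."),
    ("user", "{input}"),
])
extraction_model = ChatOpenAI(temperature=0).bind(
    functions=extraction_functions,
    function_call={"name": "Information"},
)

extraction_chain = extraction_prompt | extraction_model | JsonKeyOutputFunctionsParser(key_name="people")
extraction_chain.invoke({"input": "Joe is 30. His mom is Martha."})
# -> [{'name': 'Joe', 'age': 30}, {'name': 'Martha'}]
```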
4. Real-World Example: Articles & Papers
Load article:
Define tagging for overview:
Define extraction for papers:
Chain with text splitting (for long articles):
Invoke extraction chain on all splits:
System prompt for accuracy:
"Extract only papers mentioned in the article. Do not guess or include the article itself. Return empty list if none."
✅ Key Takeaways
Tagging vs Extraction:
Tagging → structured object from text
Extraction → list of objects/entities from text
Pydantic models define the schema for the LLM to return.
Function binding allows deterministic, reusable function calls.
JSON output parsers simplify handling the structured data returned.
Text splitting + chaining handles long documents efficiently.
V. 1. Tagging
Schema Definition: Using Pydantic models to define the structure of the tags.
Function Conversion: Convert Pydantic model to OpenAI function.
Chain Setup: Combine prompt → model bound with functions → output parser.
Invoke:
2. Extraction
Schemas:
Function Conversion & Binding:
Prompt & Chain:
Invoke:
3. Real-World Example: Article Tagging & Extraction
Load Article:
Overview Tagging Schema:
Paper Extraction Schema:
Set Up Chains:
System Prompt for Accurate Extraction:
4. Handling Large Texts
Text Splitting:
Flatten Function:
Runnable Lambda for Preprocessing:
Chain Mapping & Flattening:
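A consolidated sketch of these four steps, assuming the extraction_chain defined earlier and a document already loaded into doc:

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.schema.runnable import RunnableLambda

text_splitter = RecursiveCharacterTextSplitter(chunk_overlap=0)

def flatten(matrix):
    """Flatten a list of per-chunk result lists into a single list."""
    return [item for row in matrix for item in row]

# Turn the raw document text into a list of {"input": chunk} dicts
prep = RunnableLambda(
    lambda x: [{"input": chunk} for chunk in text_splitter.split_text(x)]
)

# Map the extraction chain over every chunk, then flatten the per-chunk lists
full_chain = prep | extraction_chain.map() | flatten

papers = full_chain.invoke(doc.page_content)  # doc loaded earlier, e.g. with WebBaseLoader
```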
✅ Key Points
Tagging → single structured object.
Extraction → multiple structured objects.
JSON parsers simplify access to nested results.
For long texts → split → map extraction chain → flatten results.
Always provide a clear system prompt to avoid LLM hallucinations.
VI. LangChain Tools and Routing
1. Concept of a Tool
Tools in LangChain are Python functions that a language model can choose to call.
Two components:
LLM decides which tool to use and with what inputs.
The tool is actually invoked with those inputs.
LangChain comes with built-in tools: search, math, SQL, etc., but creating custom tools is often necessary.
2. Creating a Custom Tool
Define input schema using Pydantic to make it explicit for the LLM:
Define the function and decorate it:
Check args and description:
Convert to OpenAI function definition:
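A consolidated sketch of the steps above (LangChain 0.0.x imports; the search tool body is a stub and the printed metadata shown in comments is approximate):

```python
from pydantic import BaseModel, Field
from langchain.agents import tool
from langchain.tools.render import format_tool_to_openai_function

class SearchInput(BaseModel):
    query: str = Field(description="Thing to search for")

@tool(args_schema=SearchInput)
def search(query: str) -> str:
    """Search for the weather online."""
    return "42f"   # stub result

search.name         # 'search'
search.description  # roughly: 'search(query: str) -> str - Search for the weather online.'
search.args         # JSON-schema style description of the query argument

search_function = format_tool_to_openai_function(search)
```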
3. Example of Another Tool
Wikipedia search tool:
Can also convert to OpenAI function definition similarly.
4. Using OpenAPI Specs
APIs often have OpenAPI specs.
LangChain can convert OpenAPI specs to OpenAI function definitions and callables:
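A sketch based on helpers available in LangChain 0.0.x (exact import paths may vary by version); spec_text is assumed to hold an OpenAPI JSON/YAML document such as the pet-store example:

```python
from langchain.chat_models import ChatOpenAI
from langchain.utilities.openapi import OpenAPISpec
from langchain.chains.openai_functions.openapi import openapi_spec_to_openai_fn

spec = OpenAPISpec.from_text(spec_text)              # parse the raw OpenAPI spec
openai_functions, callables = openapi_spec_to_openai_fn(spec)

model = ChatOpenAI(temperature=0).bind(functions=openai_functions)
model.invoke("what are three pets names")            # model picks the matching endpoint
```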
This gives a list of function definitions usable by the LLM and actual callables for execution.
5. Routing: Choosing Which Tool to Call
Bind multiple tools to an LLM:
LLM decides which function to call based on input:
Can add a prompt to customize LLM behavior:
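A sketch covering the three steps above, assuming the two tools from this lesson (get_current_temperature and search_wikipedia) are already defined:

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.tools.render import format_tool_to_openai_function

functions = [
    format_tool_to_openai_function(t)
    for t in [search_wikipedia, get_current_temperature]
]
model = ChatOpenAI(temperature=0).bind(functions=functions)

# The model alone already routes between the two tools:
model.invoke("what is the weather in SF right now?")   # -> get_current_temperature call
model.invoke("what is langchain?")                     # -> search_wikipedia call

# Adding a prompt to shape behaviour:
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful but sassy assistant"),
    ("user", "{input}"),
])
chain = prompt | model
chain.invoke({"input": "what is the weather in SF right now?"})
```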
6. Output Handling
Two main types of outputs:
Agent Action: LLM decides to call a tool.
Agent Finish: LLM returns a normal response.
Use OpenAI Functions Agent Output Parser to parse outputs into:
result.tool → which tool to call
result.tool_input → dictionary of arguments
result.return_values → LLM output when no tool is called
7. Routing Function
Defines what to do after LLM output:
Allows automatic execution of the selected tool.
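A sketch of that routing step, assuming the prompt, model, and tools from the previous sketch:

```python
from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser
from langchain.schema.agent import AgentFinish

def route(result):
    """Either return the plain LLM answer or execute the chosen tool."""
    if isinstance(result, AgentFinish):
        return result.return_values["output"]
    tools = {
        "search_wikipedia": search_wikipedia,
        "get_current_temperature": get_current_temperature,
    }
    return tools[result.tool].run(result.tool_input)

full_chain = prompt | model | OpenAIFunctionsAgentOutputParser() | route
full_chain.invoke({"input": "what is the weather in SF right now?"})
```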
8. Workflow Summary
Define tools (custom or from OpenAPI).
Convert tools to OpenAI function definitions.
Bind multiple tools to an LLM.
LLM receives user input → decides tool or response.
Parse LLM output using output parser.
Route the result → execute tool if needed.
Return final output.
9. Example Use
This is the core of LangChain routing: using the LLM to choose tools intelligently and execute them while handling outputs cleanly.
VI. 1. Setting Up Tools
A. Basic Search Tool
The @tool decorator wraps the function so it can be used by the LLM.
You can access its metadata:
B. Search Tool with Pydantic Input
Pydantic schema gives the LLM a structured input description.
C. Weather Tool (Open-Meteo API)
Decorated as a tool.
Converts to an OpenAI function definition with format_tool_to_openai_function(get_current_temperature).
Run it with:
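A sketch of such a tool; the Open-Meteo endpoint and parameters follow its public docs, but treat the details as illustrative:

```python
import datetime
import requests
from pydantic import BaseModel, Field
from langchain.agents import tool

class OpenMeteoInput(BaseModel):
    latitude: float = Field(description="Latitude of the location to fetch weather data for")
    longitude: float = Field(description="Longitude of the location to fetch weather data for")

@tool(args_schema=OpenMeteoInput)
def get_current_temperature(latitude: float, longitude: float) -> str:
    """Fetch current temperature for given coordinates."""
    url = "https://api.open-meteo.com/v1/forecast"
    params = {
        "latitude": latitude,
        "longitude": longitude,
        "hourly": "temperature_2m",
        "forecast_days": 1,
    }
    response = requests.get(url, params=params)
    response.raise_for_status()
    results = response.json()

    # Pick the hourly reading closest to "now"
    now = datetime.datetime.utcnow()
    times = [datetime.datetime.fromisoformat(t) for t in results["hourly"]["time"]]
    temps = results["hourly"]["temperature_2m"]
    closest = min(range(len(times)), key=lambda i: abs(times[i] - now))
    return f"The current temperature is {temps[closest]}°C"

# Run it directly:
get_current_temperature.run({"latitude": 13, "longitude": 14})
```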
D. Wikipedia Tool
Returns top 3 Wikipedia page summaries.
Also can be converted to OpenAI function format.
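A sketch using the wikipedia package:

```python
import wikipedia
from langchain.agents import tool

@tool
def search_wikipedia(query: str) -> str:
    """Run a Wikipedia search and get page summaries."""
    titles = wikipedia.search(query)
    summaries = []
    for title in titles[:3]:
        try:
            page = wikipedia.page(title, auto_suggest=False)
            summaries.append(f"Page: {title}\nSummary: {page.summary}")
        except (wikipedia.exceptions.PageError, wikipedia.exceptions.DisambiguationError):
            continue
    return "\n\n".join(summaries) if summaries else "No good Wikipedia search result was found"
```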
2. OpenAPI Spec to OpenAI Functions
Load OpenAPI JSON spec:
Bind functions to a model:
Automatically calls correct function based on input.
3. Routing Between Tools
Bind your custom tools:
LLM selects tool based on input:
4. Prompt + Routing Chain
Prompt defines assistant behavior:
Output parser interprets LLM decisions:
result can be:
AgentAction → tool to call + input
AgentFinish → regular response
5. Route Function
Automatically executes the chosen tool or returns a normal LLM response.
6. Final Chain
Returns actual weather, Wikipedia summaries, or simple greetings.
✅ Key Takeaways:
@tool + Pydantic schema → structured, callable functions for the LLM.
OpenAPI spec → automatically generate functions + callables.
Output parser → detects whether LLM wants to call a tool or just respond.
Route function → executes tool or returns LLM output.
Full workflow allows dynamic tool selection + execution based on LLM input.
VII. Building a Conversational Agent with LangChain
1. Basics of an Agent
Agents = Language Model + Code (tools).
LM role: Decide which steps/tools to take and provide input arguments.
Agent loop:
LM decides which tool to call.
Tool executes and returns observation.
Observation fed back to LM.
Repeat until stopping criteria.
Stopping criteria examples:
LM decides to stop (AgentFinish)
Maximum iterations
Custom rules
2. Tools
You can define multiple tools to expand agent capabilities:
Example Tools
Tools can be wrapped with format_tool_to_openai_function() to make them OpenAI function-compatible.
Additional tools can be added, e.g., a simple string reversal:
3. Prompt Template with Agent Scratchpad
Include placeholders for:
messages (chat history)
agent_scratchpad (intermediate tool outputs)
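A sketch; the placeholder names chat_history and agent_scratchpad follow the convention used later in this lesson:

```python
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant"),
    MessagesPlaceholder(variable_name="chat_history"),      # prior turns
    ("user", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),  # intermediate tool calls/observations
])
```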
4. Running the Agent (Loop)
Create a loop that:
Calls the LM with input + scratchpad.
Gets tool + tool_input.
Executes the tool.
Updates scratchpad with observations.
Repeat until stopping criteria.
You can wrap this in a reusable agent_executor to handle:
JSON parsing errors
Tool execution errors
Multi-step actions
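A sketch of wiring this loop up with LangChain's helpers rather than writing it by hand (0.0.x imports; the prompt comes from the previous sketch and tools is assumed to be your list of tool objects):

```python
from langchain.agents import AgentExecutor
from langchain.agents.format_scratchpad import format_to_openai_functions
from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser
from langchain.chat_models import ChatOpenAI
from langchain.schema.runnable import RunnablePassthrough
from langchain.tools.render import format_tool_to_openai_function

functions = [format_tool_to_openai_function(t) for t in tools]
model = ChatOpenAI(temperature=0).bind(functions=functions)

agent_chain = RunnablePassthrough.assign(
    agent_scratchpad=lambda x: format_to_openai_functions(x["intermediate_steps"])
) | prompt | model | OpenAIFunctionsAgentOutputParser()

agent_executor = AgentExecutor(agent=agent_chain, tools=tools, verbose=True)
# Without memory attached, chat_history must be supplied explicitly:
agent_executor.invoke({"input": "what is the weather in SF?", "chat_history": []})
```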
5. Adding Chat Memory
Memory lets the agent remember previous interactions.
Create a messages placeholder in the prompt to maintain chat history:
Pass memory to agent_executor so it can recall previous messages:
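A sketch, assuming the agent_chain and tools from the loop above:

```python
from langchain.agents import AgentExecutor
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(
    return_messages=True,        # return chat messages rather than a single string
    memory_key="chat_history",   # must match the prompt placeholder name
)

agent_executor = AgentExecutor(
    agent=agent_chain, tools=tools, memory=memory, verbose=True
)
```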
6. Multi-Tool and Multi-Hop Capabilities
Agent can:
Decide between multiple tools automatically
Chain multiple tools for multi-step reasoning
Example:
Agent:
Calls get_current_temperature
Calls search_wikipedia
Returns combined results
7. Creating a Chatbot UI
Use Panel or Gradio for a simple dashboard:
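A minimal sketch with Panel (widget names and layout are illustrative; agent_executor is the memory-enabled executor from above):

```python
import panel as pn
pn.extension()

def respond(query):
    if not query:
        return ""
    result = agent_executor.invoke({"input": query})
    return pn.Row(pn.pane.Markdown(f"**ChatBot:** {result['output']}"))

inp = pn.widgets.TextInput(placeholder="Ask me anything…")
conversation = pn.bind(respond, inp)

dashboard = pn.Column(inp, pn.panel(conversation, loading_indicator=True))
dashboard.servable()
```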
User can chat with agent, and agent will:
Maintain conversation history
Use tools
Return responses dynamically
✅ Resulting Features
Conversational agent with multi-step reasoning.
Tool execution (Wikipedia, Weather, custom tools).
Chat memory (remembers previous messages).
Multi-hop actions.
Optionally, a dashboard UI for real-time interaction.
VII. Final Conversational Agent Architecture
1. Tools
You have three tools:
Weather Tool – get_current_temperature
Uses the Open-Meteo API to fetch the current temperature given latitude & longitude.
Wikipedia Tool – search_wikipedia
Searches Wikipedia and returns summaries of the top 3 pages.
Custom Tool – create_your_own
Currently reverses a string, but you can replace it with any custom function.
2. Chat Model & Function Formatting
You wrap the tools as OpenAI functions using:
Then you bind them to a ChatOpenAI model:
3. Prompt Template
Includes:
system message → sets assistant personality.
chat_history → maintains conversation memory.
agent_scratchpad → keeps track of intermediate tool calls.
4. Agent Chain
Uses RunnablePassthrough to pre-format intermediate steps for the scratchpad:
5. Memory
ConversationBufferMemory stores previous messages to allow multi-turn chat:
6. Agent Executor
Combines the agent chain, tools, memory, and error handling.
Handles tool calls, multi-step reasoning, and returns the response:
Example usage:
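A short sketch, assuming the executor assembled above:

```python
agent_executor.invoke({"input": "my name is Bob"})
agent_executor.invoke({"input": "what's my name?"})             # memory supplies the earlier turn
agent_executor.invoke({"input": "what's the weather in SF?"})   # triggers the weather tool
```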
7. Chatbot GUI with Panel
Panel UI binds user input to the agent executor.
Displays conversation as rows of User and ChatBot messages.
Your cbfs class orchestrates:
Calling the agent executor
Updating memory
Updating the GUI
8. Features
Multi-turn conversation with memory.
Tool usage (weather, Wikipedia, custom tools).
Multi-step reasoning (can call multiple tools in sequence).
Interactive dashboard UI.
Extendable with new tools or custom functionality.
VIII. Conclusion
OpenAI Function Calling – You learned how to let the language model decide when and how to call functions, enabling structured interactions with external tools or APIs.
LangChain Expression Language (LCEL) – You saw how to compose prompts, models, and output parsers into pipelines that transform, extract, and handle structured data programmatically.
Tool Usage & Selection – You explored how to define tools (like weather or Wikipedia queries) and have the agent dynamically select which to use based on user input.
Building a Conversational Agent – All the concepts came together to create a ChatGPT-like agent that can:
Maintain multi-turn conversations using memory
Call tools as needed
Display results in a GUI
With these skills, you can now build intelligent, tool-using chatbots, automation agents, or data extraction pipelines tailored to your own use cases.
It’s an exciting time in AI—these building blocks let you turn language models into actionable agents, not just conversational interfaces.