# Panel
[Panel](https://panel.holoviz.org/) is an open-source Python library that streamlines the development of robust tools, dashboards, and complex applications entirely in Python. It integrates seamlessly with the PyData ecosystem, offering powerful, interactive data tables, visualizations, and much more, so you can visualize, share, and collaborate on your data in efficient workflows.
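For a taste of Panel on its own, here is a minimal sketch that uses only Panel's documented `pn.extension`, widget, and `pn.bind` APIs (unrelated to LangChain):

```python
import panel as pn

pn.extension()

# A slider bound to a function: moving the slider re-renders the output.
slider = pn.widgets.IntSlider(name="n", value=3, start=1, end=10)

def repeat(n):
    return "⭐" * n

pn.Column(slider, pn.bind(repeat, slider)).servable()
```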
In this guide, we will go over how to set up the `PanelCallbackHandler`. The `PanelCallbackHandler` is useful for rendering and streaming the chain of thought from LangChain objects like Tools, Agents, and Chains. It inherits from LangChain's `BaseCallbackHandler`.
Check out the panel-chat-examples docs for more examples of how to use `PanelCallbackHandler`. If you have an example to demo, we'd love to add it to the panel-chat-examples gallery!
## Installation and Setup
```bash
pip install langchain panel
```
See full instructions in Panel's Getting started documentation.
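The examples below use `ChatOpenAI`, which reads your key from the `OPENAI_API_KEY` environment variable. One way to set it from Python (a sketch; the `"sk-..."` value is a placeholder, and in practice you should load keys from a secrets store rather than hard-code them):

```python
import os

# Placeholder key; never commit a real key to source control.
os.environ["OPENAI_API_KEY"] = "sk-..."
```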
## Basic chat with an LLM
To get started:

- Define a chat callback, like `respond` below.
- Pass an instance of a `ChatFeed` or `ChatInterface` to `PanelCallbackHandler`.
- Pass the callback handler as a list into `callbacks` when constructing or using LangChain objects like `ChatOpenAI`.
```python
import panel as pn
from langchain_community.callbacks import PanelCallbackHandler
from langchain_openai import ChatOpenAI

pn.extension()


def respond(contents):
    # The callback handler streams the response into the chat interface,
    # so there is no need to return anything here.
    llm.invoke(contents)


chat_interface = pn.chat.ChatInterface(callback=respond)
callback = PanelCallbackHandler(chat_interface)
llm = ChatOpenAI(model_name="gpt-4o-mini", streaming=True, callbacks=[callback])
chat_interface
```
This example renders only the response from the LLM; an LLM by itself does not produce any chain of thought. Later, we will build an agent that uses tools, and its chain of thought will be rendered as well.
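The snippets in this guide display the chat interface when run in a notebook. To serve one as a standalone web app instead (a sketch of Panel's standard `panel serve` workflow; `app.py` is a hypothetical filename), mark the interface servable:

```python
# In app.py, replace the bare `chat_interface` at the end of the snippet with:
chat_interface.servable()
```

Then launch it from the command line with `panel serve app.py`.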
## Async chat with an LLM
Using `async` prevents blocking the main thread, enabling concurrent interactions with the app and improving responsiveness and the user experience.
To do so:

- Prefix the function definition with `async`.
- Prefix the call with `await`.
- Use `ainvoke` instead of `invoke`.
```python
import panel as pn
from langchain_community.callbacks import PanelCallbackHandler
from langchain_openai import ChatOpenAI

pn.extension()


async def respond(contents):
    # `ainvoke` runs the request without blocking Panel's event loop.
    await llm.ainvoke(contents)


chat_interface = pn.chat.ChatInterface(callback=respond)
callback = PanelCallbackHandler(chat_interface)
llm = ChatOpenAI(model_name="gpt-4o-mini", streaming=True, callbacks=[callback])
chat_interface
```
## Agents with Tools
Agents and tools can also be used; simply pass `callback` to the `AgentExecutor` and to its `invoke` method. Note that the `ddg-search` tool below requires the `duckduckgo-search` package (`pip install duckduckgo-search`).
```python
import panel as pn
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent, load_tools
from langchain_community.callbacks import PanelCallbackHandler
from langchain_openai import ChatOpenAI

pn.extension()


def respond(contents):
    # Pass the callback in the run config so tool and agent steps are rendered too.
    agent_executor.invoke({"input": contents}, {"callbacks": [callback]})


chat_interface = pn.chat.ChatInterface(callback=respond)
callback = PanelCallbackHandler(chat_interface)
llm = ChatOpenAI(model_name="gpt-4o-mini", streaming=True, callbacks=[callback])
tools = load_tools(["ddg-search"])
prompt = hub.pull("hwchase17/react")
agent = create_react_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, callbacks=[callback])
chat_interface
```
## Chain with Retrievers
RAG is also possible; simply pass `callback` again. Then ask the app what the secret number is!
```python
from uuid import uuid4

import panel as pn
from langchain.prompts import ChatPromptTemplate
from langchain.schema.runnable import RunnablePassthrough
from langchain.text_splitter import CharacterTextSplitter
from langchain_community.callbacks import PanelCallbackHandler
from langchain_community.vectorstores import Chroma
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

TEXT = "The secret number is 888."

TEMPLATE = """Answer the question based only on the following context:

{context}

Question: {question}
"""

pn.extension(design="material")


@pn.cache
def get_vector_store():
    # Split the source text into chunks, embed them, and index them in Chroma.
    text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
    texts = text_splitter.split_text(TEXT)
    embeddings = OpenAIEmbeddings()
    db = Chroma.from_texts(texts, embeddings)
    return db


def get_chain(callbacks):
    retriever = db.as_retriever(callbacks=callbacks)
    model = ChatOpenAI(callbacks=callbacks, streaming=True)

    def format_docs(docs):
        text = "\n\n".join([d.page_content for d in docs])
        return text

    def hack(docs):
        # Manually fire on_retriever_end so the retrieved documents are rendered;
        # see https://github.com/langchain-ai/langchain/issues/7290
        for callback in callbacks:
            callback.on_retriever_end(docs, run_id=uuid4())
        return docs

    return (
        {"context": retriever | hack | format_docs, "question": RunnablePassthrough()}
        | prompt
        | model
    )


async def respond(contents):
    chain = get_chain(callbacks=[callback])
    await chain.ainvoke(contents)


db = get_vector_store()
prompt = ChatPromptTemplate.from_template(TEMPLATE)
chat_interface = pn.chat.ChatInterface(callback=respond)
callback = PanelCallbackHandler(chat_interface)
chat_interface
```