Learn how to invoke a LangChain agent within a sandbox environment
In this tutorial, you’ll learn how to invoke an agent within a sandbox environment. To do this, you will start a sandbox with the appropriate environment variables, install the necessary dependencies, and run a Python script that creates and invokes an agent using the LangChain library.
This tutorial uses Claude as the language model for the agent, which requires an Anthropic API key. You can use any model that is compatible with LangChain; if you choose a different model, adjust the code and environment variables accordingly.
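Whichever model you pick, the sandbox must be started with the matching API key in its environment. As an illustration of the "adjust the environment variables" step, here is a small standalone sketch; the helper name and provider-to-variable mapping are assumptions for this example, not part of the tutorial's code (though ANTHROPIC_API_KEY and OPENAI_API_KEY are the variables those integrations conventionally read):

```python
import os

# Hypothetical mapping from provider name to the environment variable its
# LangChain integration conventionally reads.
PROVIDER_ENV_VARS = {
    "anthropic": "ANTHROPIC_API_KEY",
    "openai": "OPENAI_API_KEY",
}

def check_api_key(provider: str) -> str:
    """Return the API key for `provider`, failing fast if it is unset."""
    var = PROVIDER_ENV_VARS[provider]
    key = os.getenv(var)
    if not key:
        raise ValueError(f"{var} environment variable not set")
    return key
```

Running a check like this before creating the agent surfaces a missing key immediately, instead of failing later on the first model call inside the sandbox.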
Copy and paste the following code into a file named demo.py in the same directory as this tutorial, then run it to see how a LangChain agent is invoked within a sandbox environment.
demo.py
# Taken from https://docs.langchain.com/oss/python/langchain/quickstart
import os
from dataclasses import dataclass

from langchain.agents import create_agent
from langchain.chat_models import init_chat_model
from langchain.tools import tool, ToolRuntime
from langgraph.checkpoint.memory import InMemorySaver
from langchain.agents.structured_output import ToolStrategy

# Get API key from environment
api_key = os.getenv("ANTHROPIC_API_KEY")
if not api_key:
    raise ValueError("ANTHROPIC_API_KEY environment variable not set")

# Define system prompt
SYSTEM_PROMPT = """You are an expert weather forecaster, who speaks in puns.

You have access to two tools:

- get_weather_for_location: use this to get the weather for a specific location
- get_user_location: use this to get the user's location

If a user asks you for the weather, make sure you know the location. If you can tell from the question that they mean wherever they are, use the get_user_location tool to find their location."""

# Define context schema
@dataclass
class Context:
    """Custom runtime context schema."""
    user_id: str

# Define tools
@tool
def get_weather_for_location(city: str) -> str:
    """Get weather for a given city."""
    return f"It's always sunny in {city}!"

@tool
def get_user_location(runtime: ToolRuntime[Context]) -> str:
    """Retrieve user information based on user ID."""
    user_id = runtime.context.user_id
    return "Florida" if user_id == "1" else "SF"

# Configure model
model = init_chat_model(
    "claude-sonnet-4-6",
    temperature=0
)

# Define response format
@dataclass
class ResponseFormat:
    """Response schema for the agent."""
    # A punny response (always required)
    punny_response: str
    # Any interesting information about the weather if available
    weather_conditions: str | None = None

# Set up memory
checkpointer = InMemorySaver()

# Create agent
agent = create_agent(
    model=model,
    system_prompt=SYSTEM_PROMPT,
    tools=[get_user_location, get_weather_for_location],
    context_schema=Context,
    response_format=ToolStrategy(ResponseFormat),
    checkpointer=checkpointer
)

# Run agent
config = {"configurable": {"thread_id": "1"}}
response = agent.invoke(
    {"messages": [{"role": "user", "content": "what is the weather outside?"}]},
    config=config,
    context=Context(user_id="1")
)
print(response['structured_response'])

# Note that we can continue the conversation using the same `thread_id`.
response = agent.invoke(
    {"messages": [{"role": "user", "content": "thank you!"}]},
    config=config,
    context=Context(user_id="1")
)
print(response['structured_response'])