Foundation patterns for building robust agentic systems
Prompt Chaining is a pattern that breaks down complex tasks into a sequence of smaller, focused prompts. Each prompt in the chain builds upon the results of previous prompts, creating a structured flow of information and reasoning. This pattern is particularly useful for complex tasks that require multiple steps of processing or reasoning.
There are several approaches to implementing prompt chaining, each with its own trade-offs. The simplest is a sequence of direct API calls, where each call consumes the previous call's output:
from openai import OpenAI

# Initialize client
client = OpenAI()

def analyze_sales_call(transcript: str) -> dict:
    """Analyze a sales call transcript using prompt chaining."""
    # Step 1: Summarize the call
    summary = client.chat.completions.create(
        model="gpt-4-turbo-preview",
        messages=[{"role": "user", "content": f"Summarize this sales call:\n{transcript}"}]
    ).choices[0].message.content

    # Step 2: Extract key insights from the summary
    insights = client.chat.completions.create(
        model="gpt-4-turbo-preview",
        messages=[{"role": "user", "content": f"Extract key insights from this summary:\n{summary}"}]
    ).choices[0].message.content

    # Step 3: Generate next steps from the insights
    next_steps = client.chat.completions.create(
        model="gpt-4-turbo-preview",
        messages=[{"role": "user", "content": f"Based on these insights, suggest next steps:\n{insights}"}]
    ).choices[0].message.content

    return {
        "summary": summary,
        "insights": insights,
        "next_steps": next_steps
    }

# Example usage
if __name__ == "__main__":
    # This would typically come from an audio transcription
    sample_transcript = """
    [Sales Rep] Thanks for joining us today. How can we help you?
    [Customer] We're looking to improve our team's productivity...
    """
    result = analyze_sales_call(sample_transcript)
    print(f"Summary: {result['summary']}")
    print(f"\nKey Insights: {result['insights']}")
    print(f"\nNext Steps: {result['next_steps']}")
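The same pattern can be factored into a reusable helper that threads each step's output into the next. The sketch below uses stand-in functions in place of live API calls; in practice each step would wrap an LLM request like the ones above.

```python
from typing import Callable, Dict

def run_chain(steps: Dict[str, Callable[[str], str]], initial_input: str) -> Dict[str, str]:
    """Run named steps in order, feeding each step's output into the next."""
    results: Dict[str, str] = {}
    current = initial_input
    for name, step in steps.items():
        current = step(current)
        results[name] = current
    return results

# Stand-in steps; each would be an LLM call in a real chain
steps = {
    "summary": lambda text: f"Summary of: {text}",
    "insights": lambda summary: f"Insights from: {summary}",
    "next_steps": lambda insights: f"Actions based on: {insights}",
}

result = run_chain(steps, "sales call transcript")
```

Because the steps are just callables, individual stages can be swapped, logged, or retried without touching the rest of the chain.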
Routing is a pattern that directs inputs to different specialized agents or chains based on their content or purpose. It's like a traffic controller for AI agents, ensuring each query reaches the most appropriate handler. This pattern is particularly useful when you have multiple specialized agents and need to determine which one should handle a particular request.
There are several approaches to implementing routing:
# 1. Define agent functions
def support_agent(state):
    return {"messages": [{"role": "assistant", "content": "I'll help with your support request."}]}

def order_agent(state):
    return {"messages": [{"role": "assistant", "content": "Let me check your order status..."}]}

def product_agent(state):
    return {"messages": [{"role": "assistant", "content": "Here's information about the product..."}]}

# 2. Router function
def router(state):
    """Routes messages to the appropriate agent based on keywords"""
    last_message = state["messages"][-1]["content"]
    if "order" in last_message.lower():
        return "order_agent"
    elif "product" in last_message.lower():
        return "product_agent"
    return "support_agent"

# 3. Build a minimal workflow: route the state, then dispatch to that agent
def create_workflow():
    agents = {
        "support_agent": support_agent,
        "order_agent": order_agent,
        "product_agent": product_agent,
    }
    class Workflow:
        def invoke(self, state):
            return agents[router(state)](state)
    return Workflow()

# 4. Example usage
app = create_workflow()
result = app.invoke({
    "messages": [{"role": "user", "content": "What's my order status?"}],
    "next": ""
})
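Keyword matching is brittle; a common alternative is to let a model classify the intent and dispatch on the label. The sketch below stubs the classifier with a keyword-scoring heuristic, so the handler names, labels, and scoring logic are illustrative assumptions, not a fixed API.

```python
from typing import Callable, Dict

# Hypothetical handlers, one per intent label
def handle_order(msg: str) -> str:
    return f"Checking order for: {msg}"

def handle_product(msg: str) -> str:
    return f"Product info for: {msg}"

def handle_support(msg: str) -> str:
    return f"Support reply for: {msg}"

HANDLERS: Dict[str, Callable[[str], str]] = {
    "order": handle_order,
    "product": handle_product,
    "support": handle_support,
}

def classify_intent(message: str) -> str:
    """Stand-in for an LLM classifier: score each label by keyword overlap."""
    keywords = {
        "order": ["order", "shipping", "delivery", "refund"],
        "product": ["product", "feature", "spec", "price"],
    }
    scores = {label: sum(w in message.lower() for w in words)
              for label, words in keywords.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "support"  # fall back to support

def route(message: str) -> str:
    """Classify the message, then dispatch to the matching handler."""
    return HANDLERS[classify_intent(message)](message)
```

Replacing `classify_intent` with an actual model call (returning one of the known labels) upgrades this to LLM-based routing without changing the dispatch table.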
Parallelization is a pattern that enables concurrent execution of multiple tasks or agents, significantly improving performance and throughput. This pattern is particularly useful for tasks that can be executed independently or when dealing with multiple data sources or processing streams.
There are several approaches to implementing parallelization:
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableParallel
from langchain_openai import ChatOpenAI

model = ChatOpenAI()

# 1. Define processing chains
summary_chain = ChatPromptTemplate.from_template(
    "Summarize this: {text}"
) | model

sentiment_chain = ChatPromptTemplate.from_template(
    "Analyze sentiment: {text}"
) | model

# 2. Combine them so they run in parallel
parallel_chain = RunnableParallel(
    summary=summary_chain,
    sentiment=sentiment_chain
)

# 3. Execute with a single input
results = parallel_chain.invoke({"text": "Sample text"})
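Without a framework, the same effect can be had with asyncio: launch independent calls concurrently and gather their results. The async functions below are stand-ins for real LLM calls; their names and return values are assumptions for illustration.

```python
import asyncio

async def summarize(text: str) -> str:
    """Stand-in for an async LLM call."""
    await asyncio.sleep(0)  # simulates awaiting network I/O
    return f"Summary: {text}"

async def analyze_sentiment(text: str) -> str:
    """Stand-in for a second, independent LLM call."""
    await asyncio.sleep(0)
    return "Sentiment: positive"

async def analyze(text: str) -> dict:
    # gather() runs both coroutines concurrently and preserves order
    summary, sentiment = await asyncio.gather(
        summarize(text), analyze_sentiment(text)
    )
    return {"summary": summary, "sentiment": sentiment}

results = asyncio.run(analyze("Sample text"))
```

Since LLM calls are I/O-bound, concurrency like this cuts total latency to roughly that of the slowest single call.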
Reflection is a pattern that enables AI agents to evaluate their own performance, learn from past experiences, and improve their future actions. This pattern is crucial for building self-improving systems that can adapt to new situations and optimize their behavior over time.
There are several approaches to implementing reflection:
# Pseudocode for a reflection loop
def answer_with_reflection(user_query, max_reflections=3):
    response = generate_answer(user_query)
    for _ in range(max_reflections):
        critique = reflect_on(response)  # could be the same or a different LLM
        if critique.suggests_improvement:
            response = improve_answer(response, critique)
        else:
            break  # no further improvements suggested
    return response
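This loop can be made concrete with stub generator, critic, and improver functions standing in for LLM calls. All the function bodies below are illustrative assumptions; only the loop structure is the pattern itself.

```python
from dataclasses import dataclass

@dataclass
class Critique:
    suggests_improvement: bool
    note: str = ""

def generate_answer(query: str) -> str:
    """Stand-in for an initial LLM draft."""
    return f"Draft answer to: {query}"

def reflect_on(response: str) -> Critique:
    """Stand-in critic: flag responses still marked as drafts."""
    if response.startswith("Draft"):
        return Critique(True, "Replace the draft with a final answer.")
    return Critique(False)

def improve_answer(response: str, critique: Critique) -> str:
    """Stand-in improver: apply the critique to the response."""
    return response.replace("Draft answer", "Final answer", 1)

def refine_with_reflection(query: str, max_reflections: int = 3) -> str:
    """Generate, critique, and improve until the critic is satisfied."""
    response = generate_answer(query)
    for _ in range(max_reflections):
        critique = reflect_on(response)
        if not critique.suggests_improvement:
            break
        response = improve_answer(response, critique)
    return response
```

The `max_reflections` cap matters in practice: without it, a critic that always finds fault would loop forever and burn tokens.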
# Pseudocode for a Memory System with Reflection
# 1. Basic memory storage
memory = [
    {
        'input': "How to reset password?",
        'output': "Click 'Forgot Password'",
        'feedback': "Worked!"
    },
    {
        'input': "Help me login",
        'output': "Enter username/password",
        'feedback': "Too vague"
    }
]

def find_similar(query):
    """Find the most relevant memory for the query"""
    # In practice, use vector similarity or an LLM
    return memory[0]  # Simplified for this example

def reflect(query):
    """Analyze past interactions to improve responses"""
    similar = find_similar(query)
    return f"""
Based on similar past interaction:
Input: {similar['input']}
Response: {similar['output']}
Feedback: {similar['feedback']}
Suggestion: Be more specific in responses
"""

# Example usage
user_query = "I forgot my password"
print("Similar response:", find_similar(user_query)['output'])
print("\nReflection:", reflect(user_query))
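In practice `find_similar` would use vector embeddings, but even a dependency-free token-overlap (Jaccard) score ranks memories more sensibly than always returning the first entry. A minimal sketch, with the scoring choice as an assumption:

```python
import re

def tokens(s: str) -> set:
    """Lowercased word tokens, punctuation stripped."""
    return set(re.findall(r"\w+", s.lower()))

def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two strings (0.0 to 1.0)."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

memory = [
    {'input': "How to reset password?", 'output': "Click 'Forgot Password'"},
    {'input': "Help me login", 'output': "Enter username/password"},
]

def find_similar(query: str) -> dict:
    """Return the memory entry whose input best matches the query."""
    return max(memory, key=lambda m: jaccard(query, m['input']))

best = find_similar("I forgot my password")
```

The query "I forgot my password" shares the token "password" with the first entry and nothing with the second, so the password-reset memory wins.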
Planning is a pattern that enables AI agents to create and execute sequences of actions to achieve specific goals. By breaking complex tasks into manageable steps and accounting for dependencies and constraints, it supports systematic problem-solving and goal achievement.
There are several approaches to implementing planning systems:
# Simple LLM client mock; a real implementation would call a model API
class LLM:
    def generate(self, prompt: str) -> str:
        if prompt.startswith("analyze_state:"):
            return "Work backwards from the goal through each action's prerequisites."
        # Stand-in for model reasoning: suggest the first action whose
        # effects are not yet reflected in the prompt's current state
        state_part = prompt.split("Goal conditions:")[0]
        for action in actions:
            if any(g not in state_part for g in action['gives']):
                return action['name']
        return ""

llm = LLM()

# Define actions: what they need and what they produce
actions = [
    {
        'name': 'research_market',
        'needs': [],
        'gives': ['market_knowledge'],
        'description': 'Research market conditions and opportunities'
    },
    {
        'name': 'develop_strategy',
        'needs': ['market_knowledge'],
        'gives': ['business_strategy'],
        'description': 'Develop business strategy based on market research'
    }
]

def get_action(name):
    """Helper to get an action by name"""
    return next((a for a in actions if a['name'] == name), None)

def suggest_next_action(state, goal_conditions):
    """Use the LLM to suggest the next best action"""
    prompt = f"""Current state: {state}
Goal conditions: {goal_conditions}
Available actions: {[a['name'] for a in actions]}
Suggest the most appropriate next action:"""
    action_name = llm.generate(f"suggest_action:{prompt}")
    return get_action(action_name)

def plan_actions(goal_conditions, actions, initial_state):
    plan = []
    state = set(initial_state)

    # Let the LLM analyze the goal first
    analysis = llm.generate(f"analyze_state:Goal is {goal_conditions}. Current state is {state}")
    print(f"LLM Analysis: {analysis}")

    while not all(cond in state for cond in goal_conditions):
        # Get the LLM's suggestion for the next action
        suggested_action = suggest_next_action(state, goal_conditions)
        if not suggested_action:
            print("No valid action found")
            break

        # Check prerequisites
        prereqs_met = all(p in state for p in suggested_action['needs'])
        if prereqs_met:
            plan.append(suggested_action['name'])
            state.update(suggested_action['gives'])
            print(f"✅ Added action: {suggested_action['name']}")
        else:
            # If prerequisites are not met, plan those first
            print(f"🔄 Planning prerequisites for: {suggested_action['name']}")
            for prereq in suggested_action['needs']:
                if prereq not in state:
                    # Find an action that provides this prerequisite
                    for action in actions:
                        if prereq in action['gives']:
                            subplan = plan_actions([prereq], actions, state)
                            plan.extend(subplan)
                            for name in subplan:
                                sub = get_action(name)
                                if sub:
                                    state.update(sub['gives'])
    return plan

# Example usage
initial_state = []
goal_conditions = ['business_strategy']

print("Starting planning process...")
plan = plan_actions(goal_conditions, actions, initial_state)
print("\nFinal Plan:", plan)
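When the action model is fully specified, LLM guidance is optional: a deterministic planner can work backwards from each goal through the actions' prerequisites. A minimal sketch over the same `needs`/`gives` action format:

```python
def plan_backwards(goal_conditions, actions, initial_state):
    """Resolve each goal by recursively planning the action that provides it."""
    plan = []
    state = set(initial_state)

    def achieve(condition):
        if condition in state:
            return  # already satisfied
        # Pick the first action that produces this condition
        provider = next(a for a in actions if condition in a['gives'])
        for prereq in provider['needs']:
            achieve(prereq)  # recurse: satisfy prerequisites first
        plan.append(provider['name'])
        state.update(provider['gives'])

    for goal in goal_conditions:
        achieve(goal)
    return plan

actions = [
    {'name': 'research_market', 'needs': [], 'gives': ['market_knowledge']},
    {'name': 'develop_strategy', 'needs': ['market_knowledge'],
     'gives': ['business_strategy']},
]

plan = plan_backwards(['business_strategy'], actions, [])
```

Backward chaining guarantees prerequisites are ordered before the actions that need them; an LLM-guided loop trades that guarantee for flexibility when the action set is open-ended or underspecified.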