Agentic Core Patterns

Foundation patterns for building robust agentic systems

Prompt Chaining

Description

Prompt Chaining is a pattern that breaks down complex tasks into a sequence of smaller, focused prompts. Each prompt in the chain builds upon the results of previous prompts, creating a structured flow of information and reasoning. This pattern is particularly useful for complex tasks that require multiple steps of processing or reasoning.
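Stripped to its essentials, a chain is just a loop that feeds each step's output into the next step's prompt template. A minimal sketch (the `fake_llm` stub and the `{input}` template convention are illustrative assumptions, not part of any library):

```python
def run_chain(call_llm, templates, initial_input):
    """Run a sequence of prompt templates, feeding each output into the next."""
    output = initial_input
    for template in templates:
        # Each template references the previous step's output as {input}
        output = call_llm(template.format(input=output))
    return output

# Stub LLM for illustration; swap in a real API call in practice
def fake_llm(prompt):
    return f"[response to: {prompt}]"

result = run_chain(
    fake_llm,
    ["Summarize this: {input}", "Extract insights from: {input}"],
    "Quarterly sales grew 12%...",
)
```

Each real implementation below is a specialization of this loop, differing mainly in how prompts are managed and optimized.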

There are several approaches to implementing prompt chaining, each with its own trade-offs:

  • Basic Python Functions: The most straightforward approach, chaining functions that call an LLM API. This method is easy to understand and implement, but lacks the advanced features of specialized libraries, such as automatic optimization and state management.
  • DSPy: A framework from Stanford that provides a more declarative way to define and optimize prompt chains. DSPy separates the logic of the program (the modules) from the parameters (the prompts and model configurations), which allows for automatic optimization of the prompts.
  • LangChain: A popular library for building applications with LLMs. LangChain provides a comprehensive set of tools for building complex chains, including state management, memory, and integrations with other services.

Basic Python Implementation

from openai import OpenAI

# Initialize client
client = OpenAI()

def analyze_sales_call(transcript: str) -> dict:
    """Analyze a sales call transcript using prompt chaining."""
    # Step 1: Summarize the call
    summary = client.chat.completions.create(
        model="gpt-4-turbo-preview",
        messages=[{"role": "user", "content": f"Summarize this sales call:\n{transcript}"}]
    ).choices[0].message.content
    
    # Step 2: Extract key insights
    insights = client.chat.completions.create(
        model="gpt-4-turbo-preview",
        messages=[{"role": "user", "content": f"Extract key insights from this summary:\n{summary}"}]
    ).choices[0].message.content
    
    # Step 3: Generate next steps
    next_steps = client.chat.completions.create(
        model="gpt-4-turbo-preview",
        messages=[{"role": "user", "content": f"Based on these insights, suggest next steps:\n{insights}"}]
    ).choices[0].message.content
    
    return {
        "summary": summary,
        "insights": insights,
        "next_steps": next_steps
    }

# Example usage
if __name__ == "__main__":
    # This would typically come from an audio file
    sample_transcript = """
    [Sales Rep] Thanks for joining us today. How can we help you?
    [Customer] We're looking to improve our team's productivity...
    """
    
    result = analyze_sales_call(sample_transcript)
    print(f"Summary: {result['summary']}")
    print(f"\nKey Insights: {result['insights']}")
    print(f"\nNext Steps: {result['next_steps']}")
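Each step in the chain above is a separate network call that can fail independently, so production chains usually wrap each call in retry logic. A minimal sketch, with arbitrary backoff values chosen for illustration:

```python
import time

def with_retry(fn, attempts=3, base_delay=1.0):
    """Call fn(), retrying with exponential backoff on failure."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries; surface the error
            time.sleep(base_delay * (2 ** attempt))

# Demo: a call that fails once, then succeeds
calls = {"n": 0}
def flaky_call():
    calls["n"] += 1
    if calls["n"] < 2:
        raise RuntimeError("transient network error")
    return "ok"

result = with_retry(flaky_call, attempts=3, base_delay=0)
```

In the chain above, each `client.chat.completions.create(...)` call could be wrapped as `with_retry(lambda: ...)`.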

Routing

Description

Routing is a pattern that directs inputs to different specialized agents or chains based on their content or purpose. It's like a traffic controller for AI agents, ensuring each query reaches the most appropriate handler. This pattern is particularly useful when you have multiple specialized agents and need to determine which one should handle a particular request.

There are several approaches to implementing routing:

  • Rule-based Routing: Uses predefined rules or conditions to direct inputs to specific handlers.
  • LLM-based Classification: Uses a language model to analyze the input and determine the appropriate handler.
  • OpenAI Function Calling: While primarily intended for tool use, the underlying mechanism of choosing which function to call based on a prompt is itself a form of routing.
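LLM-based classification reduces to asking the model to pick one label from a fixed set, then dispatching on that label. In this sketch the classifier is stubbed with keywords so the example runs without an API key; the route names and handlers are illustrative assumptions:

```python
ROUTES = {
    "order_status": lambda m: "Checking your order...",
    "product_info": lambda m: "Here are the product details...",
    "general_support": lambda m: "How can I help?",
}

def classify(message):
    """Stand-in for an LLM classifier: a real version would prompt the model
    to return exactly one label from ROUTES."""
    text = message.lower()
    if "order" in text:
        return "order_status"
    if "product" in text:
        return "product_info"
    return "general_support"

def route(message):
    # Fall back to general support if the classifier returns an unknown label
    handler = ROUTES.get(classify(message), ROUTES["general_support"])
    return handler(message)
```

The fallback lookup matters in the LLM-backed version: models occasionally return labels outside the allowed set, and the router should degrade gracefully rather than crash.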

Implementation using LangGraph

from typing import TypedDict

from langgraph.graph import StateGraph, END

class AgentState(TypedDict):
    messages: list
    next: str

# 1. Define agent functions
def support_agent(state):
    return {"messages": [{"role": "assistant", "content": "I'll help with your support request."}]}

def order_agent(state):
    return {"messages": [{"role": "assistant", "content": "Let me check your order status..."}]}

def product_agent(state):
    return {"messages": [{"role": "assistant", "content": "Here's information about the product..."}]}

# 2. Router function
def router(state):
    """Routes messages to the appropriate agent"""
    last_message = state["messages"][-1]["content"]
    if "order" in last_message.lower():
        return "order_agent"
    elif "product" in last_message.lower():
        return "product_agent"
    return "support_agent"

# 3. Build the graph: route the incoming message to one agent, then end
def create_workflow():
    workflow = StateGraph(AgentState)
    workflow.add_node("support_agent", support_agent)
    workflow.add_node("order_agent", order_agent)
    workflow.add_node("product_agent", product_agent)
    workflow.set_conditional_entry_point(router)
    workflow.add_edge("support_agent", END)
    workflow.add_edge("order_agent", END)
    workflow.add_edge("product_agent", END)
    return workflow.compile()

# 4. Example usage
app = create_workflow()
result = app.invoke({
    "messages": [{"role": "user", "content": "What's my order status?"}],
    "next": ""
})

Parallelization

Description

Parallelization is a pattern that enables concurrent execution of multiple tasks or agents, significantly improving performance and throughput. This pattern is particularly useful for tasks that can be executed independently or when dealing with multiple data sources or processing streams.

There are several approaches to implementing parallelization:

  • LangChain's RunnableParallel: Simplifies running multiple chains in parallel and combining their outputs, ideal for independent AI operations.
  • Async/Await: Python's asyncio enables concurrent execution of I/O-bound tasks, such as multiple in-flight API calls.
  • Threading and Multiprocessing: Threads suit concurrent I/O-bound work, while the multiprocessing module is needed for CPU-bound work, since the GIL prevents Python threads from executing bytecode in parallel.
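The asyncio approach amounts to launching all calls with `asyncio.gather` and awaiting them together. Here `fake_completion` stands in for a real async LLM client call so the sketch runs offline:

```python
import asyncio

async def fake_completion(prompt):
    """Stand-in for an async LLM call (e.g., an HTTP request to an API)."""
    await asyncio.sleep(0.01)  # simulate network latency
    return f"result for: {prompt}"

async def analyze(text):
    # Both calls are in flight at once; total time ~ the slowest call,
    # not the sum of both
    summary, sentiment = await asyncio.gather(
        fake_completion(f"Summarize: {text}"),
        fake_completion(f"Sentiment: {text}"),
    )
    return {"summary": summary, "sentiment": sentiment}

results = asyncio.run(analyze("Sample text"))
```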

Implementation using LangChain's RunnableParallel

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableParallel
from langchain_openai import ChatOpenAI

model = ChatOpenAI()

# 1. Define processing chains
summary_chain = ChatPromptTemplate.from_template(
    "Summarize this: {text}"
) | model

sentiment_chain = ChatPromptTemplate.from_template(
    "Analyze sentiment: {text}"
) | model

# 2. Run in parallel
parallel_chain = RunnableParallel(
    summary=summary_chain,
    sentiment=sentiment_chain
)

# 3. Execute with input
results = parallel_chain.invoke({"text": "Sample text"})

Reflection

Description

Reflection is a pattern that enables AI agents to evaluate their own performance, learn from past experiences, and improve their future actions. This pattern is crucial for building self-improving systems that can adapt to new situations and optimize their behavior over time.

There are several approaches to implementing reflection:

  • Self-Evaluation: Agents analyze their own outputs and decisions for quality and correctness.
  • Feedback Integration: Incorporate external feedback to improve future responses.
  • Memory-Based Learning: Store and learn from past interactions and outcomes.
  • Meta-Cognitive Analysis: Evaluate the reasoning process and decision-making strategies.

Implementation Pseudocode

# Pseudocode for a reflection loop
def answer_with_reflection(user_query, max_reflections=3):
    response = generate_answer(user_query)

    for _ in range(max_reflections):
        critique = reflect_on(response)  # could be the same or a different LLM
        if critique.suggests_improvement:
            response = improve_answer(response, critique)
        else:
            break  # no further improvement suggested

    return response
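The loop above can be made concrete with stub functions. Here the "critic" simply checks answer length, standing in for an LLM judge; all the function bodies are illustrative assumptions:

```python
from types import SimpleNamespace

def generate_answer(query):
    return "Paris"  # stub first draft; a real version calls an LLM

def reflect_on(response):
    """Stand-in critic: flags answers that are too terse.
    A real critic would prompt an LLM to grade the response."""
    return SimpleNamespace(suggests_improvement=len(response.split()) < 3)

def improve_answer(response, critique):
    return response + " is the capital of France."  # stub revision

def answer_with_reflection(user_query, max_reflections=3):
    response = generate_answer(user_query)
    for _ in range(max_reflections):
        critique = reflect_on(response)
        if critique.suggests_improvement:
            response = improve_answer(response, critique)
        else:
            break
    return response

answer = answer_with_reflection("What is the capital of France?")
```

The `max_reflections` cap is the important design choice: without it, a critic that always finds fault would loop forever and multiply API costs.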

Memory-Based Learning Example

# Pseudocode for Memory System with Reflection

# 1. Basic Memory Storage
memory = [
    {
        'input': "How to reset password?",
        'output': "Click 'Forgot Password'",
        'feedback': "Worked!"
    },
    {
        'input': "Help me login",
        'output': "Enter username/password",
        'feedback': "Too vague"
    }
]

def find_similar(query):
    """Find most relevant memory for the query"""
    # In practice, use vector similarity or LLM
    return memory[0]  # Simplified for example

def reflect(query):
    """Analyze past interactions to improve responses"""
    similar = find_similar(query)
    return f"""
    Based on similar past interaction:
    Input: {similar['input']}
    Response: {similar['output']}
    Feedback: {similar['feedback']}
    
    Suggestion: Be more specific in responses
    """

# Example Usage
user_query = "I forgot my password"
print("Similar response:", find_similar(user_query)['output'])
print("\nReflection:", reflect(user_query))

Planning

Description

Planning is a pattern that enables AI agents to create and execute sequences of actions to achieve specific goals. By breaking down complex tasks into manageable steps and considering dependencies and constraints, this pattern enables systematic problem-solving and goal achievement.

There are several approaches to implementing planning systems:

  • Hierarchical Planning: Breaking down goals into sub-goals and creating hierarchical action plans.
  • Reactive Planning: Adapting plans based on real-time feedback and changing conditions.
  • Constraint-Based Planning: Considering various constraints and dependencies while creating plans.
  • Goal-Oriented Planning: Focusing on achieving specific goals through systematic action sequences.
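Constraint-based planning without an LLM reduces to dependency resolution: repeatedly apply any action whose prerequisites are already satisfied until the goal holds. A minimal deterministic sketch (the action schema mirrors the pseudocode in the next section; the greedy planner itself is an assumption for illustration):

```python
def plan(goal, actions, state=()):
    """Greedy planner: apply any unused action whose needs are met."""
    state = set(state)
    steps = []
    while goal not in state:
        ready = [a for a in actions
                 if a['name'] not in steps and set(a['needs']) <= state]
        if not ready:
            # No applicable action can make progress toward the goal
            raise ValueError(f"cannot reach goal {goal!r} from {state}")
        action = ready[0]
        steps.append(action['name'])
        state.update(action['gives'])
    return steps

actions = [
    {'name': 'research_market', 'needs': [], 'gives': ['market_knowledge']},
    {'name': 'develop_strategy', 'needs': ['market_knowledge'],
     'gives': ['business_strategy']},
]

plan_steps = plan('business_strategy', actions)
```

An LLM-driven planner, like the one below, replaces the `ready[0]` choice with a model's judgment about which applicable action is most promising.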

Basic Planning Pseudocode

# Simple LLM client mock (the LLM class is assumed to wrap a real model API)
llm = LLM()

# Define actions: what they need and what they produce
actions = [
    {
        'name': 'research_market',
        'needs': [],
        'gives': ['market_knowledge'],
        'description': 'Research market conditions and opportunities'
    },
    {
        'name': 'develop_strategy',
        'needs': ['market_knowledge'],
        'gives': ['business_strategy'],
        'description': 'Develop business strategy based on market research'
    }
]

def get_action(name):
    """Helper to get action by name"""
    return next((a for a in actions if a['name'] == name), None)

def suggest_next_action(state, goal_conditions):
    """Use LLM to suggest the next best action"""
    prompt = f"""Current state: {state}
    Goal conditions: {goal_conditions}
    Available actions: {[a['name'] for a in actions]}
    Suggest the most appropriate next action:"""
    
    action_name = llm.generate(f"suggest_action:{prompt}")
    return get_action(action_name)

def plan_actions(goal_conditions, actions, initial_state, max_steps=10):
    plan = []
    state = set(initial_state)
    
    # Let LLM analyze the goal first
    analysis = llm.generate(f"analyze_state:Goal is {goal_conditions}. Current state is {state}")
    print(f"LLM Analysis: {analysis}")

    # Cap iterations so an LLM that keeps suggesting unhelpful actions
    # cannot loop forever
    steps = 0
    while not all(cond in state for cond in goal_conditions) and steps < max_steps:
        steps += 1
        # Get LLM suggestion for next action
        suggested_action = suggest_next_action(state, goal_conditions)
        if not suggested_action:
            print("No valid action found")
            break
            
        # Check prerequisites
        prereqs_met = all(p in state for p in suggested_action['needs'])
        
        if prereqs_met:
            plan.append(suggested_action['name'])
            state.update(suggested_action['gives'])
            print(f"✅ Added action: {suggested_action['name']}")
        else:
            # If prerequisites are not met, plan those first
            print(f"🔍 Planning prerequisites for: {suggested_action['name']}")
            for prereq in suggested_action['needs']:
                if prereq not in state:
                    # Find an action that provides this prerequisite
                    for action in actions:
                        if prereq in action['gives']:
                            subplan = plan_actions([prereq], actions, state)
                            plan.extend(subplan)
                            # Apply the effects of each planned sub-action
                            for name in subplan:
                                sub_action = get_action(name)
                                if sub_action:
                                    state.update(sub_action['gives'])
                            break
    
    return plan

# Example usage
initial_state = []
goal_conditions = ['business_strategy']

print("Starting planning process...")
plan = plan_actions(goal_conditions, actions, initial_state)
print("\nFinal Plan:", plan)