Decision patterns for agentic systems: enabling LLM agents to make robust, adaptive, and explainable choices for complex problem-solving.
Rule Engine is a pattern that implements structured decision-making through a collection of explicit rules and conditions. In LLM-based agentic systems, a rule engine can be used after the LLM extracts facts or structured data, enabling deterministic business logic or compliance checks. Each rule consists of conditions that, when met, trigger specific actions or decisions. This pattern is particularly useful for implementing complex business logic and deterministic decision-making processes in agentic systems.
There are several approaches to implementing rule engines. A simple one is to encode each rule as an explicit conditional check over the facts the LLM has extracted, as in the pseudocode below; the same rules can also be represented as data, as sketched after it.
# Python Pseudocode for Rule Engine
class LoanApplication:
    def __init__(self):
        self.status = None
        self.reason = None

    def approve(self, reason):
        self.status = "APPROVED"
        self.reason = reason

    def reject(self, reason):
        self.status = "REJECTED"
        self.reason = reason

def rule_engine(facts, application):
    # Each rule is a condition that, when met, triggers a decision
    if facts["creditScore"] < 600:
        application.reject("Low credit score")
    elif facts["creditScore"] >= 700 and facts["amount"] <= 100000:
        application.approve("Good credit score and reasonable amount")
    # If neither rule matches, the application stays undecided (status None),
    # e.g. to be routed to manual review

facts = llm_agent.extract_facts(user_input)  # LLM agent extracts structured facts
application = LoanApplication()
rule_engine(facts, application)
print(application.status, application.reason)
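The hard-coded conditionals above can also be expressed as data, which makes rules easier to audit and extend without touching control flow. The sketch below is a minimal illustration of that idea under the same facts and LoanApplication class; the rule list credit_rules and the run_rules helper are hypothetical names introduced here, not part of the example above.

# Python Pseudocode for a data-driven variant of the rule engine (illustrative sketch)
# Each rule pairs a condition over the extracted facts with a decision and a reason.
credit_rules = [
    (lambda f: f["creditScore"] < 600, "reject", "Low credit score"),
    (lambda f: f["creditScore"] >= 700 and f["amount"] <= 100000,
     "approve", "Good credit score and reasonable amount"),
]

def run_rules(rules, facts, application):
    # Apply the first rule whose condition matches the extracted facts
    for condition, decision, reason in rules:
        if condition(facts):
            getattr(application, decision)(reason)
            return
    # No rule matched: leave the application undecided for manual review

run_rules(credit_rules, facts, application)

Keeping the rules as data also lets domain or compliance reviewers inspect and change them without reading the engine's code.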
Reinforcement Learning is a pattern that enables agents—including LLM-based agents—to learn optimal decision-making strategies through interaction with their environment or tool use. The agent receives feedback in the form of rewards or penalties, which it uses to improve its decision-making over time. This pattern is particularly valuable for learning complex behaviors and optimizing long-term outcomes in agentic systems, such as tool selection or dialogue strategies.
There are several approaches to implementing reinforcement learning. A common structure is an agent that selects actions, observes rewards from the environment, and updates its policy, as sketched below; a concrete update rule follows after it.
# Python Pseudocode for Reinforcement Learning
class Agent:
    def select_action(self, state):
        # LLM agent could use RL to select next tool or response
        pass

    def learn(self, state, action, reward, next_state, done):
        # Update policy based on feedback
        pass

    def save(self, path):
        # Save model
        pass

env = create_environment()
agent = Agent()
num_episodes = 100

for episode in range(num_episodes):
    state = env.reset()
    done = False
    while not done:
        action = agent.select_action(state)
        next_state, reward, done, info = env.step(action)
        agent.learn(state, action, reward, next_state, done)
        state = next_state

agent.save('model_path')
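To make the learn step concrete, here is a minimal sketch of one possible update rule, tabular Q-learning. The learning rate, discount factor, epsilon-greedy exploration, and the dictionary-based Q-table are illustrative assumptions, not requirements of the pattern.

# Python Pseudocode for a tabular Q-learning agent (one possible way to fill in learn())
import random
from collections import defaultdict

class QLearningAgent:
    def __init__(self, actions, alpha=0.1, gamma=0.99, epsilon=0.1):
        self.actions = actions          # e.g. the tools the agent can call
        self.alpha = alpha              # learning rate
        self.gamma = gamma              # discount factor for future rewards
        self.epsilon = epsilon          # exploration probability
        self.q = defaultdict(float)     # Q-values keyed by (state, action)

    def select_action(self, state):
        # Explore occasionally, otherwise pick the highest-valued action
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def learn(self, state, action, reward, next_state, done):
        # Q-learning update: move toward the observed reward plus discounted future value
        best_next = 0.0 if done else max(self.q[(next_state, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

This sketch assumes states are hashable; for an LLM agent, a state might be a compact summary of the conversation or task so far. For large or continuous state spaces, the table would typically be replaced by a learned value function.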
Expert System is a pattern that emulates human decision-making by using a knowledge base of expert rules and facts. In LLM-based agentic systems, an expert system can be queried by the agent to provide grounded, explainable decisions or to supplement generative reasoning with symbolic inference. The system uses an inference engine to apply these rules to specific situations and reach conclusions. This pattern is particularly useful for implementing domain-specific decision-making capabilities in agentic systems.
There are several approaches to implementing expert systems. One is case-based reasoning, which matches the current situation against stored past cases, as in the pseudocode below; another is rule-based inference, sketched after it.
# Python Pseudocode for Case-Based Reasoning in an Expert System
# 1. Store past cases (problems and solutions)
past_cases = [
    {"symptoms": ["fever", "cough"], "diagnosis": "flu"},
    {"symptoms": ["cough"], "diagnosis": "cold"}
]

# 2. LLM agent extracts current symptoms from user input
current_symptoms = llm_agent.extract_facts(user_input)

# 3. Find the most similar past case (here: the largest symptom overlap)
def find_best_match(cases, symptoms):
    scored = [(len(set(case["symptoms"]) & set(symptoms)), case) for case in cases]
    score, case = max(scored, key=lambda pair: pair[0])
    return case if score > 0 else None

best_match = find_best_match(past_cases, current_symptoms)

# 4. Suggest diagnosis from the best matching case
if best_match:
    print(f"Suggested diagnosis: {best_match['diagnosis']}")
else:
    print("No similar case found.")