🟢 TECHNICIAN TRACK • BEGINNER
Tutorial 4 of 10

Tutorial #4: Agent Memory and State Persistence

Building Context Across Multiple Observations

✅ CORE MISSION OF THIS TUTORIAL

By the end of this tutorial, the reader will be able to:

  • ✅ Understand the difference between stateless LLM calls and stateful agents
  • ✅ Implement simple conversation memory to maintain context
  • ✅ See how agents build understanding over multiple cycles
  • ✅ Recognize why memory is critical for industrial diagnostics
  • ✅ Prepare for tool-using agents that need context

This tutorial establishes the foundation for agents that learn from observation over time.

⚠️ SAFETY BOUNDARY REMINDER

This tutorial uses simulation only.

It must never be connected to:

  • Live PLCs
  • Production deployment pipelines
  • Safety-rated controllers
  • Motion or power systems

> All outputs are advisory-only and always require explicit human approval before any real-world action.

๐ŸŒ VENDOR-AGNOSTIC ENGINEERING NOTE

This tutorial uses:

  • ▸ Generic alarm/event scenarios
  • ▸ Standard Python with OpenAI
  • ▸ No PLC connections or vendor SDKs

These patterns apply to any industrial system with recurring events or alarms.

1๏ธโƒฃ THE PROBLEM WITH STATELESS LLM CALLS

In Tutorial #3, we learned that state is one of the three things that make something an agent (goals, state, reasoning).

But what does "state" actually mean in practice?

A stateless system has no memory of what happened before.
An agent with state remembers previous observations and builds context over time.

Why This Matters for Industrial Systems

Consider a technician diagnosing a conveyor jam:

  • ▸ First alarm: "Conveyor motor current high"
  • ▸ Second alarm (5 min later): "Proximity sensor timeout"
  • ▸ Third alarm (2 min later): "Emergency stop triggered"

A human technician sees the pattern: high current → timeout → E-stop = likely jam.

But a stateless LLM analyzing each alarm independently cannot connect them.

2๏ธโƒฃ REFERENCE ARCHITECTURE โ€” STATELESS VS STATEFUL

Stateless LLM Call

Observation 1 → Analysis → Forget
Observation 2 → Analysis → Forget
Observation 3 → Analysis → Forget

No connection between observations

Stateful Agent with Memory

Observation 1 → Analysis → Remember
Observation 2 + History → Analysis → Remember
Observation 3 + History → Pattern Detected

Builds context over time

How Memory Works in Practice

  • ✓ Conversation history: Each LLM call includes previous messages
  • ✓ Context window: LLMs can "see" thousands of tokens of history
  • ✓ Pattern recognition: Agent builds understanding across observations
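
To get a feel for the context-window point above, here is a minimal sizing sketch. It uses a rough heuristic of about 4 characters per token (an assumption for illustration only, not a real tokenizer):

Python

# Rough sizing sketch for conversation memory (heuristic only, no real tokenizer).
def estimate_tokens(messages):
    """Very rough token estimate: assume ~4 characters per token on average."""
    total_chars = sum(len(m["content"]) for m in messages)
    return total_chars // 4

history = [
    {"role": "system", "content": "You are an industrial diagnostic agent."},
    {"role": "user", "content": "ALARM 1: Motor current 15.2A (threshold 12A)"},
    {"role": "assistant", "content": "Motor drawing ~27% over threshold. Possible jam forming."},
]

print(f"Estimated tokens in history: {estimate_tokens(history)}")
# Even with a 100k+ token context window, an unbounded history eventually needs trimming.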

3๏ธโƒฃ CLEAN EDUCATIONAL SCENARIO

We'll simulate a simple conveyor monitoring scenario:

  • ▸ Scenario: Three sequential alarms from a conveyor system
  • ▸ Goal: Detect the pattern and identify likely root cause
  • ▸ Comparison: Stateless vs stateful analysis

We will demonstrate:

  1. Stateless approach: Each alarm analyzed independently
  2. Stateful approach: Memory builds context across alarms

4๏ธโƒฃ PRACTICAL EXPERIMENTS

🧪 Experiment 1: Stateless LLM Analysis (No Memory)

Objective

Demonstrate how analyzing alarms independently prevents pattern detection.

Python Code

Python
from openai import OpenAI

client = OpenAI()

# Three sequential alarms from a conveyor
alarms = [
    "ALARM 1: Motor current 15.2A (threshold 12A)",
    "ALARM 2: Proximity sensor timeout after 8 seconds",
    "ALARM 3: Emergency stop button pressed by operator"
]

def analyze_alarm_stateless(alarm_text):
    """Analyze a single alarm without any context"""
    try:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            temperature=0,  # Deterministic output
            messages=[
                {"role": "system", "content": "You are an industrial alarm analyst. Analyze this alarm and suggest a likely cause."},
                {"role": "user", "content": alarm_text}
            ]
        )
        return response.choices[0].message.content
    except Exception as e:
        return f"Error analyzing alarm: {str(e)}"

# Analyze each alarm independently (no memory)
for i, alarm in enumerate(alarms, 1):
    print(f"\n=== ALARM {i} ===")
    print(f"Input: {alarm}")
    analysis = analyze_alarm_stateless(alarm)
    print(f"Analysis: {analysis}")
    print("-" * 60)

Expected Output

Example output (actual results will vary):

=== ALARM 1 ===
Input: ALARM 1: Motor current 15.2A (threshold 12A)
Analysis: High motor current suggests the motor is working harder than normal. Possible causes: mechanical binding, overload, or worn bearings. Recommend checking for obstructions and inspecting motor condition.
------------------------------------------------------------

=== ALARM 2 ===
Input: ALARM 2: Proximity sensor timeout after 8 seconds
Analysis: Sensor timeout indicates an object is not reaching the expected position within the time limit. Possible causes: slow conveyor speed, sensor misalignment, or missing object. Check sensor positioning and conveyor operation.
------------------------------------------------------------

=== ALARM 3 ===
Input: ALARM 3: Emergency stop button pressed by operator
Analysis: Operator initiated E-stop suggests they observed an unsafe condition. This is a manual safety intervention. Determine what condition prompted the operator to stop the system.
------------------------------------------------------------

Interpretation

  • โ–ธ โŒ Each alarm analyzed in isolation
  • โ–ธ โŒ No connection made between events
  • โ–ธ โŒ Pattern not recognized
  • โ–ธ โŒ Root cause unclear
  • โ–ธ Cost: Varies by model/pricing | Runtime: ~3-5 seconds

🧪 Experiment 2: Stateful Agent Analysis (With Memory)

Objective

Demonstrate how maintaining conversation history enables pattern recognition across sequential observations.

Python Code

Python
from openai import OpenAI

client = OpenAI()

# Same three alarms
alarms = [
    "ALARM 1: Motor current 15.2A (threshold 12A)",
    "ALARM 2: Proximity sensor timeout after 8 seconds",
    "ALARM 3: Emergency stop button pressed by operator"
]

def analyze_with_memory(alarm_text, history):
    """
    Analyze alarm while maintaining conversation history.

    Args:
        alarm_text: New alarm to analyze
        history: Conversation history (list of messages)

    Returns:
        tuple: (analysis_text, updated_history)
    """
    # Add new observation to history
    history.append({
        "role": "user",
        "content": f"New alarm observed: {alarm_text}\n\nProvide your analysis and note any patterns with previous alarms."
    })

    try:
        # Get analysis with full conversation context
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            temperature=0,  # Deterministic output
            messages=history
        )

        # Extract and store assistant's response
        assistant_message = response.choices[0].message.content
        history.append({
            "role": "assistant",
            "content": assistant_message
        })

        return assistant_message, history

    except Exception as e:
        return f"Error analyzing alarm: {str(e)}", history

# Initialize conversation memory (state boundary is explicit)
conversation_history = [
    {"role": "system", "content": "You are an industrial diagnostic agent. Analyze alarms sequentially and build your understanding over time. Look for patterns across multiple alarms."}
]

# Analyze each alarm with memory
for i, alarm in enumerate(alarms, 1):
    print(f"\n=== ALARM {i} (with context) ===")
    print(f"Input: {alarm}")
    analysis, conversation_history = analyze_with_memory(alarm, conversation_history)
    print(f"Analysis: {analysis}")
    print("-" * 60)

print(f"\n[Memory Status: {len(conversation_history)} messages in history]")

Expected Output

Example output (actual results will vary):

=== ALARM 1 (with context) ===
Input: ALARM 1: Motor current 15.2A (threshold 12A)
Analysis: Motor is drawing 27% more current than normal. This indicates increased mechanical resistance. Possible jam forming. Monitoring for follow-up alarms.
------------------------------------------------------------

=== ALARM 2 (with context) ===
Input: ALARM 2: Proximity sensor timeout after 8 seconds
Analysis: PATTERN EMERGING: High motor current (Alarm 1) + sensor timeout suggests material blockage. The motor is working harder AND material isn't moving through. Classic jam signature. Expecting operator intervention.
------------------------------------------------------------

=== ALARM 3 (with context) ===
Input: ALARM 3: Emergency stop button pressed by operator
Analysis: PATTERN CONFIRMED: This is a conveyor jam event sequence:
1. Motor overload (jam forming)
2. Material flow stopped (jam confirmed)
3. Operator E-stop (safety response)

ROOT CAUSE: Material jam in conveyor. Recommend clearing obstruction, inspecting for foreign objects, and checking belt tension.
------------------------------------------------------------

[Memory Status: 7 messages in history]

Interpretation

  • ▸ ✅ Context maintained across observations
  • ▸ ✅ Pattern recognized after 2nd alarm
  • ▸ ✅ Root cause identified by 3rd alarm
  • ▸ ✅ Diagnostic quality improved dramatically
  • ▸ Cost: Varies by model/history length | Runtime: ~4-6 seconds

๐Ÿ” THE CRITICAL DIFFERENCE

Experiment 1 (Stateless)

  • × 3 independent analyses
  • × No pattern recognition
  • × Vague, generic conclusions

Experiment 2 (Stateful)

  • ✓ Builds context incrementally
  • ✓ Recognizes jam pattern
  • ✓ Specific root cause identified

5๏ธโƒฃ HOW MEMORY WORKS IN PRACTICE

The key difference in Experiment 2 is the conversation_history list:

conversation_history = [
    {"role": "system", "content": "You are an agent..."},
    {"role": "user", "content": "ALARM 1..."},
    {"role": "assistant", "content": "Motor current high..."},
    {"role": "user", "content": "ALARM 2..."},
    {"role": "assistant", "content": "PATTERN EMERGING..."},
    {"role": "user", "content": "ALARM 3..."},
    {"role": "assistant", "content": "ROOT CAUSE: jam"}
]

Every LLM call in Experiment 2 receives this full conversation history.

  • ▸ First alarm: LLM sees only system prompt + Alarm 1
  • ▸ Second alarm: LLM sees system prompt + Alarm 1 + Response 1 + Alarm 2
  • ▸ Third alarm: LLM sees entire conversation thread
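
You can see this growth without making any API calls. The sketch below reuses the same append pattern as Experiment 2, with placeholder assistant replies standing in for real LLM responses:

Python

# Illustration of how the message list grows each cycle (no API calls made).
history = [{"role": "system", "content": "You are an industrial diagnostic agent."}]

for i in range(1, 4):
    history.append({"role": "user", "content": f"ALARM {i}: ..."})
    print(f"Call {i}: the LLM would receive {len(history)} messages")
    # In Experiment 2, the real assistant reply is appended here.
    history.append({"role": "assistant", "content": f"Analysis of alarm {i}"})

print(f"Final history length: {len(history)} messages")  # 7, matching Experiment 2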

โš ๏ธ Memory Management in Production

Conversation history grows with every exchange. In production, you'll need strategies such as summarization/compression, sliding windows, or vector-based memory; a minimal sliding-window sketch follows below. We'll cover these approaches in depth in the Developer track.
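
As one possible approach, a sliding window keeps the system prompt plus only the most recent messages. The window size of 6 messages below is an arbitrary choice for illustration:

Python

def trim_history(history, max_messages=6):
    """Sliding-window memory: keep the system prompt plus only the most recent messages."""
    system_prompt = history[0]             # always preserve the system message
    recent = history[1:][-max_messages:]   # drop the oldest exchanges beyond the window
    return [system_prompt] + recent

# Usage: trim before each LLM call so the request size stays bounded.
# conversation_history = trim_history(conversation_history)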

6๏ธโƒฃ INDUSTRIAL APPLICATIONS OF STATEFUL AGENTS

Memory enables agents to:

Pattern Recognition

  • → Detect recurring fault sequences
  • → Identify degradation trends
  • → Correlate alarms across time

Context Building

  • → Track technician conversations
  • → Remember previous diagnoses
  • → Build equipment knowledge

Without memory, you have a smart tool.
With memory, you have an agent that learns.
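
One simple way to carry "previous diagnoses" across sessions is to persist the conversation history to disk. This is a minimal sketch using a local JSON file; the filename and helper functions are illustrative, not part of any standard library or vendor API:

Python

import json
from pathlib import Path

MEMORY_FILE = Path("conveyor_agent_memory.json")  # illustrative filename

def load_history(system_prompt):
    """Load saved conversation history, or start fresh with the system prompt."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return [{"role": "system", "content": system_prompt}]

def save_history(history):
    """Persist the conversation history so the next session keeps the context."""
    MEMORY_FILE.write_text(json.dumps(history, indent=2))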

✅ KEY TAKEAWAYS

  • ✅ Stateless LLM calls analyze each input independently with no memory
  • ✅ Stateful agents maintain conversation history to build context over time
  • ✅ Memory enables pattern recognition that is impossible with stateless calls
  • ✅ Conversation history is the simplest form of agent memory
  • ✅ For industrial diagnostics, memory is not optional; it's essential

🧭 ENGINEERING POSTURE

This tutorial reinforced:

  • ▸ Memory transforms an LLM from a one-shot analyzer into an agent that builds understanding incrementally
  • ▸ Stateless calls suit independent analyses; stateful agents suit sequential diagnostics
  • ▸ Context must be preserved across observations for pattern recognition
  • ▸ Memory is managed and scoped, not infinite or unbounded

🔜 NEXT TUTORIAL

T5: Prompt Engineering for IEC 61131-3 ST Code Generation

Learn how to control AI code generation through precise prompts. Generate reviewable, deterministic PLC code drafts safely.