Tutorial #3: Autonomous Agents vs Rule-Based Systems
Understanding Goals, State, and Reasoning
✅ CORE MISSION OF THIS TUTORIAL
By the end of this tutorial, the reader will be able to:
- ✅ Distinguish rule-based automation from autonomous agent behavior
- ✅ Understand how goals, state, and reasoning differentiate agents from PLC logic
- ✅ Simulate a PLC scan cycle as a deterministic rule system
- ✅ Simulate an agent decision cycle as a goal-driven advisory system
- ✅ Prepare conceptually for Agent Memory and State Persistence in Tutorial #4
This tutorial establishes the conceptual definition of an autonomous agent before any tools or I/O are introduced.
⚠️ SAFETY BOUNDARY REMINDER
This tutorial uses simulation only.
It must never be connected to:
- Live PLCs
- Production deployment pipelines
- Safety-rated controllers
- Motion or power systems
> All outputs are advisory-only and always require explicit human approval before any real-world action.
🌐 VENDOR-AGNOSTIC ENGINEERING NOTE
This tutorial uses:
- ▸ Generic control logic
- ▸ Generic scan-cycle simulation
- ▸ Generic goal-driven reasoning
- ▸ Standard Python only
No vendor PLC API, runtime, or hardware is required. The concepts apply equally to TwinCAT, TIA Portal, CODESYS, Rockwell, and IEC-based systems.
1️⃣ CONCEPT OVERVIEW – RULES VS AGENTS
Rule-Based System (PLC-style)
A rule-based system:
- ▸ Executes predefined logic
- ▸ Has no goals
- ▸ Has no self-evaluation
- ▸ Has no planning
This is why PLC programs are:
- ✅ Deterministic
- ✅ Verifiable
- ✅ Stable
- ✅ Predictable
But also:
- ❌ Non-adaptive
- ❌ Non-explanatory
- ❌ Non-goal-seeking
Autonomous Agent
An autonomous agent:
- ▸ Has an explicit goal
- ▸ Observes the current state
- ▸ Reasons about how to achieve the goal
- ▸ Re-evaluates when conditions change
A PLC executes what it is told.
An agent reasons about what it should do next.
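This contrast can be made concrete in a few lines of Python. The sketch below is illustrative only; `rule` and `agent_recommend` are names invented for this example, not part of any standard API:

```python
# Rule-based: a fixed mapping from inputs to an output. No goal, no context.
def rule(start_pressed: bool, fault: bool) -> bool:
    return start_pressed and not fault

# Goal-driven: the agent compares the observed state against its goal
# and produces a justified recommendation, never an actuation.
def agent_recommend(state: dict) -> str:
    # Goal (assumed for this sketch): keep the motor running safely.
    if state["fault"]:
        return "DO NOT START: active fault conflicts with the safety goal"
    if not state["motor_running"]:
        return "RECOMMEND START: goal unmet and no fault present"
    return "NO ACTION: goal already satisfied"

print(rule(start_pressed=True, fault=True))   # the rule just evaluates: False
print(agent_recommend({"fault": True, "motor_running": False}))
```

The rule silently returns `False`; the agent produces the same safe outcome but can also state why, which is what makes its output reviewable by a human.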
2️⃣ REFERENCE ARCHITECTURE – TWO DECISION MODELS
PLC Rule-Based Flow

```mermaid
graph LR
    A[Read Inputs] --> B{IF-THEN Rules}
    B --> C[Write Outputs]
    C --> D[Next Scan]
    D --> A
    style A fill:#1a1a1e,stroke:#04d9ff,stroke-width:2px,color:#fff
    style B fill:#1a1a1e,stroke:#1f75ff,stroke-width:2px,color:#fff
    style C fill:#1a1a1e,stroke:#04d9ff,stroke-width:2px,color:#fff
    style D fill:#1a1a1e,stroke:#04d9ff,stroke-width:2px,color:#fff
```

Deterministic, repeatable, no goals or reasoning.
Agent Autonomous Flow

```mermaid
graph LR
    A[Perceive State] --> B[Check Goal]
    B --> C{Reasoning}
    C --> D[Evaluate Options]
    D --> E[Recommend Action]
    E --> A
    style A fill:#1a1a1e,stroke:#9e4aff,stroke-width:2px,color:#fff
    style B fill:#1a1a1e,stroke:#9e4aff,stroke-width:2px,color:#fff
    style C fill:#1a1a1e,stroke:#9e4aff,stroke-width:2px,color:#fff
    style D fill:#1a1a1e,stroke:#9e4aff,stroke-width:2px,color:#fff
    style E fill:#1a1a1e,stroke:#00ff7f,stroke-width:2px,color:#00ff7f
```

Goal-driven, adaptive reasoning, advisory only.
Boundaries in This Tutorial
- ✅ PLC = deterministic simulation only
- ✅ Agent = advisory reasoning only
- ✅ Human = final authority
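The agent flow in the diagram can be sketched as a simple advisory loop. The structure and function names below are assumptions of this sketch, not a fixed API:

```python
# Perceive -> Check Goal -> Reason -> Evaluate Options -> Recommend.
# The loop never actuates anything; it only prints advice for a human.

def perceive(raw: dict) -> dict:
    """Snapshot the observed state (in a real system: read-only telemetry)."""
    return dict(raw)

def goal_satisfied(state: dict) -> bool:
    """Goal: motor running with no active fault."""
    return state["motor_running"] and not state["fault"]

def recommend(state: dict) -> str:
    if goal_satisfied(state):
        return "NO ACTION: goal satisfied"
    if state["fault"]:
        return "ADVISE: investigate and clear fault (human decision required)"
    return "ADVISE: motor may be started (human decision required)"

observations = [
    {"motor_running": False, "fault": True},
    {"motor_running": False, "fault": False},
    {"motor_running": True, "fault": False},
]
for raw in observations:
    print(recommend(perceive(raw)))
```

Note that the loop's output is always a string of advice, never a write to an output: the "Recommend Action" node in the diagram is the boundary where human authority takes over.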
3️⃣ CLEAN EDUCATIONAL SCENARIO
We simulate a very simple industrial situation:
- ▸ Motor states: OFF, RUNNING, FAULTED
- ▸ Inputs: Start button, Stop button, Fault signal
We will now implement this twice:
- As a rule-based PLC scan
- As a goal-driven advisory agent
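Before the experiments, the three motor states can be captured as an explicit state machine. This is a sketch: the event names and transition table are assumptions for illustration, and Experiment 1 below deliberately simplifies all of this to a single boolean:

```python
from enum import Enum

class MotorState(Enum):
    OFF = "OFF"
    RUNNING = "RUNNING"
    FAULTED = "FAULTED"

# Allowed transitions: (current state, event) -> next state.
TRANSITIONS = {
    (MotorState.OFF, "start"): MotorState.RUNNING,
    (MotorState.RUNNING, "stop"): MotorState.OFF,
    (MotorState.OFF, "fault"): MotorState.FAULTED,
    (MotorState.RUNNING, "fault"): MotorState.FAULTED,
    (MotorState.FAULTED, "reset"): MotorState.OFF,
}

def step(state: MotorState, event: str) -> MotorState:
    """Apply an event; disallowed events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

s = MotorState.OFF
s = step(s, "start")   # -> RUNNING
s = step(s, "fault")   # -> FAULTED
s = step(s, "start")   # ignored: a faulted motor cannot be started
print(s.value)         # FAULTED
```

Making the FAULTED state unreachable by a start event is the same safety property the rule-based and agent-based implementations below each have to enforce in their own way.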
4️⃣ PRACTICAL EXPERIMENTS
🧪 Experiment 1: Deterministic PLC Scan Simulation (Rule-Based)
Objective
Demonstrate how a scan-based rule system reacts without goals or reasoning.
Python Code

```python
# Simulated PLC scan cycle (rule-based)
def plc_scan(motor_running, start_button, stop_button, fault):
    """Apply PLC logic rules to determine the motor state."""
    if fault:
        motor_running = False
    elif stop_button:
        motor_running = False
    elif start_button:
        motor_running = True
    return motor_running

# Track motor state across scan cycles
motor_running = False

# Simulated input sequence
inputs = [
    {"start": True, "stop": False, "fault": False},
    {"start": False, "stop": False, "fault": False},
    {"start": False, "stop": False, "fault": True},
]

for cycle, i in enumerate(inputs, 1):
    motor_running = plc_scan(motor_running, i["start"], i["stop"], i["fault"])
    print(f"Scan {cycle}: MotorRunning = {motor_running}")
```

Expected Output

```
Scan 1: MotorRunning = True
Scan 2: MotorRunning = True
Scan 3: MotorRunning = False
```
Interpretation
- ✅ Pure simulation
- ✅ No goals
- ✅ No reasoning
- ✅ Fully deterministic
- ▸ Cost: $0.00 | Runtime: <1 second
🧪 Experiment 2: Goal-Driven Advisory Agent (Autonomous Reasoning)
Objective
Demonstrate how an agent reasons about a goal, instead of executing fixed rules.
Python Code

```python
from openai import OpenAI
import json

client = OpenAI()

current_state = {
    "motor_running": False,
    "fault": True,
    "start_button_pressed": True
}

goal = "Keep the motor running safely without violating fault conditions."

prompt = f"""
You are an autonomous industrial reasoning assistant.

You MUST:
- Output REASONING first
- Then output DECISION
- Then output EXPLANATION
- You are NOT allowed to control hardware
- You are NOT allowed to issue commands

Current state:
{json.dumps(current_state, indent=2)}

Goal:
{goal}
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": prompt}
    ]
)

print(response.choices[0].message.content)
```

Expected Output

```
REASONING: The motor is currently not running and a fault is active. Running the motor while a fault is present would violate safety.

DECISION: Do not recommend starting the motor.

EXPLANATION: Although the start button is pressed, the active fault prevents safe operation. The goal of safe operation overrides the start request.
```
Interpretation
- ✅ Advisory only
- ✅ No actuation
- ✅ No PLC write-back
- ✅ Human-only execution authority
- ▸ Cost: ~$0.01-$0.02 | Runtime: <1 second
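For offline testing without an API key, the same REASONING/DECISION/EXPLANATION pattern can be mimicked deterministically. The `advise` helper below is an assumption of this sketch (not a library function), and it hard-codes the reasoning an LLM would generate:

```python
import json

def advise(state: dict, goal: str) -> dict:
    """Produce REASONING / DECISION / EXPLANATION without any LLM call."""
    if state["fault"]:
        return {
            "REASONING": "A fault is active; starting the motor would violate safety.",
            "DECISION": "Do not recommend starting the motor.",
            "EXPLANATION": f"The goal ('{goal}') makes safety override the start request.",
        }
    if state["start_button_pressed"] and not state["motor_running"]:
        return {
            "REASONING": "Start requested, no fault present, motor is stopped.",
            "DECISION": "Recommend starting the motor (human approval required).",
            "EXPLANATION": "Starting pursues the goal without violating any fault condition.",
        }
    return {
        "REASONING": "No state change is needed.",
        "DECISION": "No action recommended.",
        "EXPLANATION": "The current state is consistent with the goal.",
    }

state = {"motor_running": False, "fault": True, "start_button_pressed": True}
goal = "Keep the motor running safely without violating fault conditions."
print(json.dumps(advise(state, goal), indent=2))
```

This stub is useful as a regression test for downstream code that parses the agent's advisory output, since its responses are deterministic where the LLM's are not.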
🔒 EXPLICIT OPERATIONAL PROHIBITIONS
- ❌ Writing PLC memory
- ❌ Generating control logic
- ❌ Issuing start/stop commands
- ❌ Overriding safety faults
- ❌ Auto-acknowledging alarms
- ❌ Connecting to live equipment
✅ KEY TAKEAWAYS
- ✅ A PLC executes rules without goals or reflection
- ✅ An agent reasons about goals and states
- ✅ Agents can explain why a decision is recommended
- ✅ Rule-based systems cannot self-justify
- ✅ Autonomy begins with goals, not tools
📘 NEXT TUTORIAL
T4 – Agent Memory and State Persistence
Learn how conversation memory enables agents to detect patterns across observations and maintain context for industrial diagnostics.
🧭 ENGINEERING POSTURE
This tutorial enforced:
- ▸ Conceptual clarity over premature tooling
- ▸ Deterministic rules vs goal-driven reasoning
- ▸ Advisory intelligence over control authority
- ▸ Human responsibility over machine autonomy
✅ END OF TUTORIAL #3