๐ŸŸข Technician Track
Tutorial 3 of 10
๐ŸŸข TECHNICIAN TRACK โ€ข BEGINNER

Tutorial #3: Autonomous Agents vs Rule-Based Systems

Understanding Goals, State, and Reasoning

โœ… CORE MISSION OF THIS TUTORIAL

By the end of this tutorial, the reader will be able to:

  • โœ… Distinguish rule-based automation from autonomous agent behavior
  • โœ… Understand how goals, state, and reasoning differentiate agents from PLC logic
  • โœ… Simulate a PLC scan cycle as a deterministic rule system
  • โœ… Simulate an agent decision cycle as a goal-driven advisory system
  • โœ… Prepare conceptually for Agent Memory and State Persistence in Tutorial #4

This tutorial establishes the conceptual definition of an autonomous agent, before any tools or I/O are introduced.

โš ๏ธ

โš ๏ธ SAFETY BOUNDARY REMINDER

This tutorial uses simulation only.

It must never be connected to:

  • Live PLCs
  • Production deployment pipelines
  • Safety-rated controllers
  • Motion or power systems

> All outputs are advisory-only and always require explicit human approval before any real-world action.

๐ŸŒ VENDOR-AGNOSTIC ENGINEERING NOTE

This tutorial uses:

  • โ–ธ Generic control logic
  • โ–ธ Generic scan-cycle simulation
  • โ–ธ Generic goal-driven reasoning
  • โ–ธ Standard Python only

No vendor PLC API, runtime, or hardware is required. The concepts apply equally to TwinCAT, TIA Portal, CODESYS, Rockwell, and IEC-based systems.

1๏ธโƒฃ CONCEPT OVERVIEW โ€” RULES VS AGENTS

Rule-Based System (PLC-style)

A rule-based system:

  • โ–ธ Executes predefined logic
  • โ–ธ Has no goals
  • โ–ธ Has no self-evaluation
  • โ–ธ Has no planning
Current Inputs โ†’ Current Logic โ†’ Current Outputs

This is why PLC programs are:

  • โœ… Deterministic
  • โœ… Verifiable
  • โœ… Stable
  • โœ… Predictable

But also:

  • โŒ Non-adaptive
  • โŒ Non-explanatory
  • โŒ Non-goal-seeking

Autonomous Agent

An autonomous agent:

  • โ–ธ Has an explicit goal
  • โ–ธ Observes the current state
  • โ–ธ Reasons about how to achieve the goal
  • โ–ธ Re-evaluates when conditions change
Observe โ†’ Reason โ†’ Decide โ†’ Observe Again

A PLC executes what it is told.
An agent reasons about what it should do next.
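The contrast can be sketched in a few lines of Python (a hypothetical illustration with made-up function names, not production logic): a rule system is a fixed mapping from inputs to outputs, while an agent evaluates candidate actions against an explicit goal.

```python
# Rule-based: a fixed mapping from inputs to outputs, no goal involved.
def rule_system(inputs: dict) -> bool:
    if inputs["fault"]:
        return False          # rule: a fault always stops the motor
    if inputs["stop"]:
        return False          # rule: stop overrides start
    return inputs["start"]    # rule: start turns the motor on

# Goal-driven: evaluate each candidate action against the goal,
# rejecting actions that would violate it.
def agent_decide(state: dict, goal: str) -> str:
    options = ["start_motor", "stop_motor", "wait"]
    for action in options:
        if action == "start_motor" and state["fault"]:
            continue  # violates the safety goal, reject and keep reasoning
        return f"RECOMMEND {action} (goal: {goal})"
    return "RECOMMEND wait"

print(rule_system({"start": True, "stop": False, "fault": True}))   # False
print(agent_decide({"fault": True}, "run motor safely"))
```

The rule system has no notion of *why* it returns False; the agent can trace its recommendation back to the goal it was given.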

2๏ธโƒฃ REFERENCE ARCHITECTURE โ€” TWO DECISION MODELS

Rule System (PLC)

Inputs โ†’ Logic โ†’ Outputs โ†’ Next Scan

Agent System

State โ†’ Goal โ†’ Reasoning โ†’ Advisory Decision โ†’ Re-Evaluate

PLC Rule-Based Flow

graph LR
    A[Read Inputs] --> B{IF-THEN Rules}
    B --> C[Write Outputs]
    C --> D[Next Scan]
    D --> A

    style A fill:#1a1a1e,stroke:#04d9ff,stroke-width:2px,color:#fff
    style B fill:#1a1a1e,stroke:#1f75ff,stroke-width:2px,color:#fff
    style C fill:#1a1a1e,stroke:#04d9ff,stroke-width:2px,color:#fff
    style D fill:#1a1a1e,stroke:#04d9ff,stroke-width:2px,color:#fff

Deterministic, repeatable, no goals or reasoning

Agent Autonomous Flow

graph LR
    A[Perceive State] --> B[Check Goal]
    B --> C{Reasoning}
    C --> D[Evaluate Options]
    D --> E[Recommend Action]
    E --> A

    style A fill:#1a1a1e,stroke:#9e4aff,stroke-width:2px,color:#fff
    style B fill:#1a1a1e,stroke:#9e4aff,stroke-width:2px,color:#fff
    style C fill:#1a1a1e,stroke:#9e4aff,stroke-width:2px,color:#fff
    style D fill:#1a1a1e,stroke:#9e4aff,stroke-width:2px,color:#fff
    style E fill:#1a1a1e,stroke:#00ff7f,stroke-width:2px,color:#00ff7f

Goal-driven, adaptive reasoning, advisory only

Boundaries in This Tutorial

  • โœ… PLC = deterministic simulation only
  • โœ… Agent = advisory reasoning only
  • โœ… Human = final authority

3๏ธโƒฃ CLEAN EDUCATIONAL SCENARIO

We simulate a very simple industrial situation:

  • โ–ธ
    Motor states: OFF, RUNNING, FAULTED
  • โ–ธ
    Inputs: Start button, Stop button, Fault signal

We will now implement this twice:

  1. As a rule-based PLC scan
  2. As a goal-driven advisory agent

4๏ธโƒฃ PRACTICAL EXPERIMENTS

๐Ÿงช Experiment 1: Deterministic PLC Scan Simulation (Rule-Based)

Objective

Demonstrate how a scan-based rule system reacts without goals or reasoning.

Python Code

Python
# Simulated PLC Scan Cycle (Rule-Based)

def plc_scan(motor_running, start_button, stop_button, fault):
    """Apply PLC logic rules to determine motor state."""
    if fault:
        motor_running = False
    elif stop_button:
        motor_running = False
    elif start_button:
        motor_running = True
    return motor_running

# Track motor state across scan cycles
motor_running = False

# Simulated input sequence
inputs = [
    {"start": True,  "stop": False, "fault": False},
    {"start": False, "stop": False, "fault": False},
    {"start": False, "stop": False, "fault": True},
]

for cycle, i in enumerate(inputs, 1):
    motor_running = plc_scan(motor_running, i["start"], i["stop"], i["fault"])
    print(f"Scan {cycle}: MotorRunning = {motor_running}")

Expected Output

Scan 1: MotorRunning = True
Scan 2: MotorRunning = True
Scan 3: MotorRunning = False

Interpretation

  • โ–ธ โœ… Pure simulation
  • โ–ธ โœ… No goals
  • โ–ธ โœ… No reasoning
  • โ–ธ โœ… Fully deterministic
  • โ–ธ Cost: $0.00 | Runtime: <1 second

๐Ÿงช Experiment 2: Goal-Driven Advisory Agent (Autonomous Reasoning)

Objective

Demonstrate how an agent reasons about a goal, instead of executing fixed rules.

Python Code

Python
from openai import OpenAI
import json

client = OpenAI()

current_state = {
    "motor_running": False,
    "fault": True,
    "start_button_pressed": True
}

goal = "Keep the motor running safely without violating fault conditions."

prompt = f"""
You are an autonomous industrial reasoning assistant.

You MUST:
- Output REASONING first
- Then output DECISION
- Then output EXPLANATION
- You are NOT allowed to control hardware
- You are NOT allowed to issue commands

Current state:
{json.dumps(current_state, indent=2)}

Goal:
{goal}
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": prompt}
    ]
)

print(response.choices[0].message.content)

Expected Output

REASONING:
The motor is currently not running and a fault is active.
Running the motor while a fault is present would violate safety.

DECISION:
Do not recommend starting the motor.

EXPLANATION:
Although the start button is pressed, the active fault prevents safe operation. The goal of safe operation overrides the start request.

Interpretation

  • โ–ธ โœ… Advisory only
  • โ–ธ โœ… No actuation
  • โ–ธ โœ… No PLC write-back
  • โ–ธ โœ… Human-only execution authority
  • โ–ธ Cost: ~$0.01-$0.02 | Runtime: <1 second

๐Ÿ”’ EXPLICIT OPERATIONAL PROHIBITIONS

  • โŒ Writing PLC memory
  • โŒ Generating control logic
  • โŒ Issuing start/stop commands
  • โŒ Overriding safety faults
  • โŒ Auto-acknowledging alarms
  • โŒ Connecting to live equipment

โœ… KEY TAKEAWAYS

  • โœ… A PLC executes rules without goals or reflection
  • โœ… An agent reasons about goals and states
  • โœ… Agents can explain why a decision is recommended
  • โœ… Rule-based systems cannot self-justify
  • โœ… Autonomy begins with goals, not tools

๐Ÿ”œ NEXT TUTORIAL

T4 โ€” Agent Memory and State Persistence

Learn how conversation memory enables agents to detect patterns across observations and maintain context for industrial diagnostics.

๐Ÿงญ ENGINEERING POSTURE

This tutorial enforced:

  • โ–ธ Conceptual clarity over premature tooling
  • โ–ธ Deterministic rules vs goal-driven reasoning
  • โ–ธ Advisory intelligence over control authority
  • โ–ธ Human responsibility over machine autonomy

โœ… END OF TUTORIAL #3