Tutorial #1: Agentic AI vs Traditional Automation
From Execution to Reasoning
✅ CORE MISSION OF THIS TUTORIAL
By the end of this tutorial, the reader will be able to:
- ✅ Clearly distinguish traditional automation from agentic AI
- ✅ Understand the difference between execution and reasoning
- ✅ Identify where LLMs stop and where agents begin
- ✅ Recognize why uncertainty handling is the core limitation of PLC logic
- ✅ Safely simulate both models using advisory-only Python examples
This tutorial is the conceptual foundation of the entire Technician Track.
⚠️ SAFETY BOUNDARY REMINDER
This tutorial uses simulation only.
It must never be connected to:
- Live PLCs
- Production deployment pipelines
- Safety-rated controllers
- Motion or power systems
> All outputs are advisory-only and always require explicit human approval before any real-world action.
🌐 VENDOR-AGNOSTIC ENGINEERING NOTE
This tutorial uses:
- ▸ Generic automation concepts
- ▸ Generic agent concepts
- ▸ Standard Python only
No PLC runtime, vendor SDK, hardware interface, or fieldbus integration is used. The concepts apply to all IEC 61131-3 environments.
1️⃣ CONCEPT OVERVIEW – WHAT IS TRADITIONAL AUTOMATION?
Traditional automation is built on:
- ▸ Deterministic logic
- ▸ Predefined rules
- ▸ Full predictability
- ▸ Full test coverage (in principle)
The core loop of automation is:

```mermaid
graph LR
    A[Read Inputs] --> B{Execute Rigid Logic}
    B --> C[Write Outputs]
    C --> A
```

In PLC terms (a Python sketch follows the list below):
- ▸ Read Inputs
- ▸ Execute Code
- ▸ Write Outputs
- ▸ Next Scan
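Here is a minimal, advisory-only Python sketch of that scan cycle. It is purely illustrative: the function names (`read_inputs`, `execute_logic`, `write_outputs`) are assumptions for teaching and nothing touches real I/O or a PLC runtime.

```python
# Minimal sketch of a deterministic PLC-style scan cycle (simulation only).
# Function names are illustrative; no real I/O or PLC runtime is involved.
def read_inputs():
    # A real controller would sample physical inputs here.
    return {"start_button": True, "fault": False}

def execute_logic(inputs, outputs):
    # Rigid, predefined rules: no interpretation, no uncertainty handling.
    if inputs["fault"]:
        outputs["motor_running"] = False
    elif inputs["start_button"]:
        outputs["motor_running"] = True
    return outputs

def write_outputs(outputs):
    # A real controller would drive physical outputs here.
    print("Scan result:", outputs)

outputs = {"motor_running": False}
for _ in range(3):  # three simulated scan cycles
    inputs = read_inputs()
    outputs = execute_logic(inputs, outputs)
    write_outputs(outputs)
```

Every scan produces the same result for the same inputs, which is exactly the point: the logic is fixed before runtime.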
This model is:
- ✅ Fast
- ✅ Deterministic
- ✅ Verifiable
- ✅ Stable
But it has a critical limitation:
It cannot reason about situations it was not explicitly programmed for.
2️⃣ CONCEPT OVERVIEW – WHAT IS AGENTIC AI?
Agentic AI introduces:
- ▸ Goals
- ▸ State awareness
- ▸ Interpretation
- ▸ Reasoning
- ▸ Self-evaluation
Its core loop is:

```mermaid
graph LR
    A[Observe State] --> B{Reasoning Engine}
    B --> G[Goal]
    B --> C[Formulate Plan]
    C --> D[Suggest Action]
    D --> A
```

Key Differences
| Automation | Agent |
|---|---|
| Executes rules | Reasons about goals |
| No interpretation | Interprets context |
| No uncertainty handling | Handles ambiguity |
| No self-reflection | Can justify decisions |
An agent does not replace automation. It augments it with cognition.
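A minimal Python sketch of the Observe → Reason → Suggest loop follows. It is purely illustrative: the simulated states and the simple rule-of-thumb reasoning are assumptions for teaching, not a real agent.

```python
# Minimal sketch of the Observe -> Reason -> Suggest loop (advisory only).
# Simulated states and simple reasoning rules are illustrative assumptions.
goal = "Ensure safe motor operation at all times."

def observe(step):
    # Simulated plant states for three loop iterations.
    states = [
        {"start_button": True, "fault": False},
        {"start_button": True, "fault": True},
        {"start_button": False, "fault": False},
    ]
    return states[step]

def reason_and_suggest(state, goal):
    # The agent evaluates the state against the goal and explains itself.
    if state["fault"]:
        return "ADVISE STOP: a fault is present, so running the motor would violate the goal."
    if state["start_button"]:
        return "ADVISE RUN: start requested and no fault detected; the goal can be satisfied."
    return "ADVISE HOLD: no start request; no advisory action is needed."

for step in range(3):
    state = observe(step)
    print(f"State {step}: {state}")
    print("  " + reason_and_suggest(state, goal))
```

Note that the loop only prints advisories; the motor is never commanded from this code.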
3️⃣ LLM VS AGENT – CRITICAL DISTINCTION
An LLM by itself (see the sketch after this list):
- ▸ Predicts the next token
- ▸ Has no goals
- ▸ Has no memory (beyond context)
- ▸ Has no control loop
- ▸ Cannot act in the world
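For contrast, a bare model call looks like this (illustrative only; the prompt is arbitrary, and the call does nothing except map text to text):

```python
# Minimal sketch of a bare LLM call: no goals, no memory beyond the prompt,
# no loop, no ability to act. Model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Describe what a motor interlock does."}]
)

# The model only returns text; nothing observes state, pursues a goal, or acts.
print(response.choices[0].message.content)
```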
An Agent is:
LLM + Goals + State + Reasoning Loop + Safety Boundaries
```mermaid
graph TB
    LLM[LLM Core<br/>Language Model]
    Goals[Goals<br/>Objectives]
    State[State<br/>Memory & Context]
    Loop[Reasoning Loop<br/>Think-Act-Observe]
    Safety[Safety Boundaries<br/>Constraints]
    LLM --> Agent[AGENT<br/>Cognitive System]
    Goals --> Agent
    State --> Agent
    Loop --> Agent
    Safety --> Agent
    style LLM fill:#1a1a1e,stroke:#04d9ff,stroke-width:2px,color:#fff
    style Goals fill:#1a1a1e,stroke:#9e4aff,stroke-width:2px,color:#fff
    style State fill:#1a1a1e,stroke:#9e4aff,stroke-width:2px,color:#fff
    style Loop fill:#1a1a1e,stroke:#9e4aff,stroke-width:2px,color:#fff
    style Safety fill:#1a1a1e,stroke:#ff4fd8,stroke-width:2px,color:#fff
    style Agent fill:#1a1a1e,stroke:#00ff7f,stroke-width:3px,color:#00ff7f,font-weight:bold
```

LLM = Brain tissue
Agent = Cognitive system
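The same composition can be sketched in Python. This is a structural illustration only: `stub_llm` is a stand-in rather than a real model, and the class name `AdvisoryAgent` is hypothetical.

```python
# Minimal sketch of "LLM + Goals + State + Reasoning Loop + Safety = Agent".
from dataclasses import dataclass, field

def stub_llm(prompt: str) -> str:
    # Stand-in for a language model: by itself it only maps text to text.
    return f"(model output for: {prompt[:40]}...)"

@dataclass
class AdvisoryAgent:
    llm: callable                                # the language "brain tissue"
    goal: str                                    # objectives the loop works toward
    state: dict = field(default_factory=dict)    # memory and context
    safety_note: str = "Advisory only; a human must approve any real-world action."

    def step(self, observation: dict) -> str:
        # One pass of the reasoning loop: update state, think, suggest.
        self.state.update(observation)
        prompt = f"Goal: {self.goal}\nState: {self.state}\nSuggest an advisory action."
        suggestion = self.llm(prompt)
        return f"{suggestion}\n{self.safety_note}"

agent = AdvisoryAgent(llm=stub_llm, goal="Ensure safe motor operation at all times.")
print(agent.step({"start_button": True, "fault": False}))
```

Remove any one of the components and you are back to either a bare model or a rigid rule set.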
4️⃣ CLEAN EDUCATIONAL SCENARIO
We simulate a simple motor situation with uncertainty:
- ▸ Start button is pressed
- ▸ A fault might be present
- ▸ The system must decide what should happen
We will compare:
- A rule-based automation response
- An agentic advisory response
5️⃣ PRACTICAL EXPERIMENTS
🧪 Experiment 1: Traditional Rule-Based Automation
Objective
Demonstrate how deterministic logic works without reasoning.
Python Code
```python
start_button = True
fault = False
motor_running = False

if fault:
    motor_running = False
elif start_button:
    motor_running = True

print("Motor running:", motor_running)
```

Expected Output
Motor running: True
Interpretation
- ▸ Does not question conditions
- ▸ Does not evaluate risk
- ▸ Does not explain itself
- ▸ Simply executes
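To make the limitation concrete, here is a variation of Experiment 1 (an assumption added for illustration, not part of the original experiment): the fault signal is unknown, for example because a sensor is disconnected, and the rigid rule silently treats that unknown as "no fault".

```python
# Variation of Experiment 1 (illustrative): the fault signal is uncertain.
start_button = True
fault = None          # unknown status, e.g. a disconnected sensor
motor_running = False

if fault:             # None is falsy, so the rule treats "unknown" as "no fault"
    motor_running = False
elif start_button:
    motor_running = True

print("Motor running:", motor_running)  # prints True despite the unknown fault status
```

The rule set never questions the quality of its inputs; an advisory agent, by contrast, can flag the uncertainty before anything is recommended.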
🧪 Experiment 2: Agentic Advisory Reasoning (No Control Authority)
Objective
Demonstrate how an agent reasons about goals and uncertainty without acting.
Python Code
```python
from openai import OpenAI
import json

client = OpenAI()

state = {
    "start_button": True,
    "fault": False,
    "motor_running": False
}

goal = "Ensure safe motor operation at all times."

prompt = f'''
You are an industrial advisory agent.

You MUST:
- Output REASONING first
- Then output ADVISORY_DECISION
- Then output JUSTIFICATION
- You must NOT issue commands
- You must NOT generate PLC code

Current state:
{json.dumps(state, indent=2)}

Goal:
{goal}
'''

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": prompt}
    ]
)

print(response.choices[0].message.content)
```

Expected Output
REASONING: The start button is pressed and no fault is present.
ADVISORY_DECISION: It is safe to allow the motor to run.
JUSTIFICATION: No fault conditions prevent safe operation, and the operational goal can be satisfied.
Interpretation
- ▸ Automation executes
- ▸ Agent evaluates and explains
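Because every advisory requires explicit human approval, a sketch of an approval gate is shown below. It is illustrative only: `advisory_text` stands in for the agent output above, and nothing is actuated either way.

```python
# Minimal sketch of a human approval gate (illustrative only; nothing is actuated).
# advisory_text stands in for the agent output from Experiment 2.
advisory_text = "ADVISORY_DECISION: It is safe to allow the motor to run."

print(advisory_text)
decision = input("Type APPROVE to accept this advisory, anything else to reject: ")

if decision.strip().upper() == "APPROVE":
    print("Advisory approved by a human. Any real-world action still happens outside this script.")
else:
    print("Advisory rejected. No action recorded.")
```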
🛑 EXPLICIT OPERATIONAL PROHIBITIONS
- ❌ Using agents for safety-rated control
- ❌ Allowing agents to actuate hardware
- ❌ Bypassing PLC logic with AI
- ❌ Letting AI override emergency systems
- ❌ Running AI inside hard real-time loops
✅ KEY TAKEAWAYS
- ✅ Automation = execution
- ✅ Agents = reasoning
- ✅ LLMs alone are not agents
- ✅ Agents must always remain advisory at this stage
- ✅ Cognition and control must stay separated by design
📘 NEXT TUTORIAL
#2 – The ReAct Pattern for PLC Code Analysis
You will now formalize how agents structure reasoning safely.
🧭 ENGINEERING POSTURE
This tutorial enforced:
- ▸ Cognition before execution
- ▸ Reasoning before tooling
- ▸ Human authority before autonomy
- ▸ Safety before convenience
✅ END OF TUTORIAL #1