🟢 Technician Track (Beginner)
Tutorial 1 of 10

Tutorial #1: Agentic AI vs Traditional Automation

From Execution to Reasoning

✅ CORE MISSION OF THIS TUTORIAL

By the end of this tutorial, the reader will be able to:

  • Clearly distinguish traditional automation from agentic AI
  • Understand the difference between execution and reasoning
  • Identify where LLMs stop and where agents begin
  • Recognize why uncertainty handling is the core limitation of PLC logic
  • Safely simulate both models using advisory-only Python examples

This tutorial is the conceptual foundation of the entire Technician Track.

⚠️ SAFETY BOUNDARY REMINDER

This tutorial uses simulation only.

It must never be connected to:

  • Live PLCs
  • Production deployment pipelines
  • Safety-rated controllers
  • Motion or power systems

> All outputs are advisory-only and always require explicit human approval before any real-world action.

🌍 VENDOR-AGNOSTIC ENGINEERING NOTE

This tutorial uses:

  • Generic automation concepts
  • Generic agent concepts
  • Standard Python only

No PLC runtime, vendor SDK, hardware interface, or fieldbus integration is used. The concepts apply to all IEC 61131-3 environments.

1️⃣ CONCEPT OVERVIEW — WHAT IS TRADITIONAL AUTOMATION?

Traditional automation is built on:

  • Deterministic logic
  • Predefined rules
  • Full predictability
  • Full test coverage (in principle)

The core loop of automation is:

graph LR
    A[Read Inputs] --> B{Execute Rigid Logic}
    B --> C[Write Outputs]
    C --> A

In PLC terms:

  • Read Inputs
  • Execute Code
  • Write Outputs
  • Next Scan

This model is:

  • Fast
  • Deterministic
  • Verifiable
  • Stable

But it has a critical limitation:

It cannot reason about situations it was not explicitly programmed for.
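The scan cycle above can be sketched in plain Python (a minimal advisory simulation, not PLC code; the input values and rung logic are invented for this example):

```python
# Minimal sketch of a deterministic PLC-style scan cycle.
# Inputs, logic, and timing are illustrative only -- simulation, never live control.

def read_inputs(scan):
    # Simulated input image: values are predetermined per scan.
    return {"start_button": scan >= 1, "fault": scan == 3}

def execute_logic(inputs, outputs):
    # Rigid rung logic: the fault condition always wins, then the start command.
    if inputs["fault"]:
        outputs["motor_running"] = False
    elif inputs["start_button"]:
        outputs["motor_running"] = True
    return outputs

outputs = {"motor_running": False}
for scan in range(5):                         # a few simulated scans
    inputs = read_inputs(scan)                # 1. read inputs
    outputs = execute_logic(inputs, outputs)  # 2. execute code
    print(scan, inputs, outputs)              # 3. "write" outputs, then next scan
```

Every scan produces the same outputs for the same inputs; nothing outside the programmed rungs can ever influence the result.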

2️⃣ CONCEPT OVERVIEW — WHAT IS AGENTIC AI?

Agentic AI introduces:

  • Goals
  • State awareness
  • Interpretation
  • Reasoning
  • Self-evaluation

Its core loop is:

graph LR
    A[Observe State] --> B{Reasoning Engine}
    G[Goal] --> B
    B --> C[Formulate Plan]
    C --> D[Suggest Action]
    D --> A

Key Differences

| Automation | Agent |
| --- | --- |
| Executes rules | Reasons about goals |
| No interpretation | Interprets context |
| No uncertainty handling | Handles ambiguity |
| No self-reflection | Can justify decisions |

An agent does not replace automation. It augments it with cognition.
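The observe → reason → suggest loop can be sketched without any LLM at all (a hand-rolled reasoning stub; the state values and goal text are invented for illustration):

```python
# Minimal advisory agent loop: observe -> reason -> suggest.
# It never actuates anything; it only returns a recommendation string.

goal = "Ensure safe motor operation at all times."

def observe():
    # Simulated plant state; in reality this would be read-only telemetry.
    return {"start_button": True, "fault": None}  # fault status unknown

def reason(state, goal):
    # Interpret the state against the goal, including uncertainty.
    if state["fault"] is None:
        return "fault status unknown -- cannot confirm safe operation"
    if state["fault"]:
        return "fault present -- operation unsafe"
    return "no fault and start requested -- operation appears safe"

def suggest(assessment):
    # Advisory only: a human must approve before any real-world action.
    return f"ADVISORY: {assessment}; awaiting human approval"

state = observe()
print(suggest(reason(state, goal)))
```

Note what the rule-based loop could never do: the agent recognizes that the fault status is *unknown* and says so, instead of silently treating missing data as "no fault".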

3️⃣ LLM VS AGENT — CRITICAL DISTINCTION

An LLM by itself:

  • Predicts the next token
  • Has no goals
  • Has no memory (beyond context)
  • Has no control loop
  • Cannot act in the world

An Agent is:

LLM + Goals + State + Reasoning Loop + Safety Boundaries

graph TB
    LLM[LLM Core<br/>Language Model]
    Goals[Goals<br/>Objectives]
    State[State<br/>Memory & Context]
    Loop[Reasoning Loop<br/>Think-Act-Observe]
    Safety[Safety Boundaries<br/>Constraints]

    LLM --> Agent[AGENT<br/>Cognitive System]
    Goals --> Agent
    State --> Agent
    Loop --> Agent
    Safety --> Agent

    style LLM fill:#1a1a1e,stroke:#04d9ff,stroke-width:2px,color:#fff
    style Goals fill:#1a1a1e,stroke:#9e4aff,stroke-width:2px,color:#fff
    style State fill:#1a1a1e,stroke:#9e4aff,stroke-width:2px,color:#fff
    style Loop fill:#1a1a1e,stroke:#9e4aff,stroke-width:2px,color:#fff
    style Safety fill:#1a1a1e,stroke:#ff4fd8,stroke-width:2px,color:#fff
    style Agent fill:#1a1a1e,stroke:#00ff7f,stroke-width:3px,color:#00ff7f,font-weight:bold

LLM = Brain tissue

Agent = Cognitive system
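The composition in the diagram can be expressed as a plain data structure (the class and field names here are illustrative, not a real agent framework):

```python
from dataclasses import dataclass, field

@dataclass
class AdvisoryAgent:
    # Each field mirrors one box in the diagram above.
    llm: str                    # language model identifier (the "brain tissue")
    goals: list                 # objectives the agent reasons toward
    state: dict = field(default_factory=dict)          # memory & context
    safety: tuple = ("advisory-only", "no actuation")  # hard constraints

    def step(self, observation: dict) -> str:
        """One Think-Act-Observe iteration -- advisory output only."""
        self.state.update(observation)
        return f"[{self.llm}] goal={self.goals[0]!r} state={self.state} (advisory)"

agent = AdvisoryAgent(llm="generic-llm", goals=["safe motor operation"])
print(agent.step({"fault": False}))
```

Remove any one field and the system degrades: without `goals` there is nothing to reason toward, and without `safety` there is no boundary on what the loop may suggest.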

4️⃣ CLEAN EDUCATIONAL SCENARIO

We simulate a simple motor situation with uncertainty:

  • Start button is pressed
  • A fault might be present
  • The system must decide what should happen

We will compare:

  1. A rule-based automation response
  2. An agentic advisory response

5️⃣ PRACTICAL EXPERIMENTS

🧪 Experiment 1: Traditional Rule-Based Automation

Objective

Demonstrate how deterministic logic works without reasoning.

Python Code

Python
start_button = True
fault = False

motor_running = False

# Rigid rung logic: the fault condition always wins over the start command.
if fault:
    motor_running = False
elif start_button:
    motor_running = True

print("Motor running:", motor_running)

Expected Output

Motor running: True

Interpretation

  • Does not question conditions
  • Does not evaluate risk
  • Does not explain itself
  • Simply executes
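To see the limitation concretely, feed the same logic a state it was never programmed for: an unavailable fault reading, simulated here as `None` (an invented edge case, not part of the experiment above). The rules still fire, but they silently misclassify the situation.

```python
# Same rigid logic, but the fault sensor reading is unavailable (None).
start_button = True
fault = None        # unknown -- sensor offline, value not yet read, etc.

motor_running = False

if fault:             # None is falsy, so this branch is skipped...
    motor_running = False
elif start_button:    # ...and the motor is "started" anyway.
    motor_running = True

print("Motor running:", motor_running)   # Motor running: True
```

The logic executes confidently even though the fault status is unknown. This is exactly the gap an advisory agent is meant to surface.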

🧪 Experiment 2: Agentic Advisory Reasoning (No Control Authority)

Objective

Demonstrate how an agent reasons about goals and uncertainty without acting.

Python Code

Python
from openai import OpenAI
import json

# The client reads the OPENAI_API_KEY environment variable.
client = OpenAI()

state = {
    "start_button": True,
    "fault": False,
    "motor_running": False
}

goal = "Ensure safe motor operation at all times."

prompt = f'''
You are an industrial advisory agent.

You MUST:
- Output REASONING first
- Then output ADVISORY_DECISION
- Then output JUSTIFICATION
- You must NOT issue commands
- You must NOT generate PLC code

Current state:
{json.dumps(state, indent=2)}

Goal:
{goal}
'''

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": prompt}
    ]
)

print(response.choices[0].message.content)

Expected Output (example; the exact LLM wording will vary between runs)

REASONING:
The start button is pressed and no fault is present.

ADVISORY_DECISION:
It is safe to allow the motor to run.

JUSTIFICATION:
No fault conditions prevent safe operation, and the operational goal can be satisfied.

Interpretation

  • Automation executes
  • Agent evaluates and explains
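If you want to post-process the advisory text, for example to log the decision separately from the reasoning, the three labeled sections can be split with a small parser. A minimal sketch: the labels follow the prompt above, and the sample text reuses the expected output shown earlier.

```python
import re

sample = """REASONING:
The start button is pressed and no fault is present.

ADVISORY_DECISION:
It is safe to allow the motor to run.

JUSTIFICATION:
No fault conditions prevent safe operation, and the operational goal can be satisfied."""

def parse_advisory(text):
    # Split on the three labels the prompt requires, one per line.
    parts = re.split(r"^(REASONING|ADVISORY_DECISION|JUSTIFICATION):\s*$",
                     text, flags=re.MULTILINE)
    # re.split yields: [prefix, label, body, label, body, ...]
    return {label: body.strip() for label, body in zip(parts[1::2], parts[2::2])}

print(parse_advisory(sample)["ADVISORY_DECISION"])
# prints: It is safe to allow the motor to run.
```

Parsing the output keeps the human-approval step auditable: the decision and its justification can be stored side by side for review.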

🔒 EXPLICIT OPERATIONAL PROHIBITIONS

  • Using agents for safety-rated control
  • Allowing agents to actuate hardware
  • Bypassing PLC logic with AI
  • Letting AI override emergency systems
  • Running AI inside hard real-time loops

✅ KEY TAKEAWAYS

  • Automation = execution
  • Agents = reasoning
  • LLMs alone are not agents
  • Agents must always remain advisory at this stage
  • Cognition and control must stay separated by design

🔜 NEXT TUTORIAL

#2 — The ReAct Pattern for PLC Code Analysis

You will now formalize how agents structure reasoning safely.

🧭 ENGINEERING POSTURE

This tutorial enforced:

  • Cognition before execution
  • Reasoning before tooling
  • Human authority before autonomy
  • Safety before convenience

✅ END OF TUTORIAL #1