🟢 TECHNICIAN TRACK • BEGINNER
Tutorial 1 of 10

Tutorial #1: Agentic AI vs Traditional Automation

From Execution to Reasoning

✅ CORE MISSION OF THIS TUTORIAL

By the end of this tutorial, the reader will be able to:

  • ✅ Clearly distinguish traditional automation from agentic AI
  • ✅ Understand the difference between execution and reasoning
  • ✅ Identify where LLMs stop and where agents begin
  • ✅ Recognize why uncertainty handling is the core limitation of PLC logic
  • ✅ Safely simulate both models using advisory-only Python examples

This tutorial is the conceptual foundation of the entire Technician Track.

โš ๏ธ SAFETY BOUNDARY REMINDER

This tutorial uses simulation only.

It must never be connected to:

  • Live PLCs
  • Production deployment pipelines
  • Safety-rated controllers
  • Motion or power systems

> All outputs are advisory-only and always require explicit human approval before any real-world action.
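To make that boundary concrete, the snippet below sketches a minimal human approval gate. It is illustrative only; the function name require_human_approval and the prompt wording are assumptions of this tutorial, not part of any library or standard.

```python
# Illustrative sketch of the advisory-only boundary. The function name and
# prompt wording are tutorial assumptions, not a library API.
def require_human_approval(advisory: str) -> bool:
    """Show the advisory and block until a human explicitly approves it."""
    print("ADVISORY:", advisory)
    answer = input("Approve for real-world action? [y/N]: ")
    return answer.strip().lower() == "y"

if require_human_approval("It is safe to allow the motor to run."):
    print("Operator approved: hand off to existing, verified control logic.")
else:
    print("Not approved: advisory discarded, nothing happens.")
```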

๐ŸŒ VENDOR-AGNOSTIC ENGINEERING NOTE

This tutorial uses:

  • ▸ Generic automation concepts
  • ▸ Generic agent concepts
  • ▸ Standard Python only

No PLC runtime, vendor SDK, hardware interface, or fieldbus integration is used. The concepts apply to all IEC 61131-3 environments.

1๏ธโƒฃ CONCEPT OVERVIEW โ€” WHAT IS TRADITIONAL AUTOMATION?

Traditional automation is built on:

  • ▸ Deterministic logic
  • ▸ Predefined rules
  • ▸ Full predictability
  • ▸ Full test coverage (in principle)

The core loop of automation is:

```mermaid
graph LR
    A[Read Inputs] --> B{Execute Rigid Logic}
    B --> C[Write Outputs]
    C --> A
```

In PLC terms:

  • ▸ Read Inputs
  • ▸ Execute Code
  • ▸ Write Outputs
  • ▸ Next Scan

This model is:

  • ✅ Fast
  • ✅ Deterministic
  • ✅ Verifiable
  • ✅ Stable

But it has a critical limitation:

It cannot reason about situations it was not explicitly programmed for.
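To see the scan cycle in code, here is a minimal advisory-only Python sketch of Read Inputs → Execute Code → Write Outputs → Next Scan. The dictionaries and helper functions are illustrative assumptions of this tutorial; no real I/O is involved.

```python
import time

# Simulated process image (illustrative only; no real I/O is touched).
inputs = {"start_button": True, "fault": False}
outputs = {"motor_running": False}

def read_inputs() -> dict:
    """Stand-in for sampling physical inputs into the process image."""
    return dict(inputs)

def execute_logic(image: dict) -> dict:
    """Rigid, predefined rules: no interpretation, no reasoning."""
    motor = not image["fault"] and image["start_button"]
    return {"motor_running": motor}

def write_outputs(result: dict) -> None:
    """Stand-in for writing the output image back to hardware."""
    outputs.update(result)
    print("Scan complete:", outputs)

for scan in range(3):              # three demonstration scans
    image = read_inputs()          # 1. Read Inputs
    result = execute_logic(image)  # 2. Execute Code
    write_outputs(result)          # 3. Write Outputs
    time.sleep(0.1)                # 4. Next Scan (fixed cycle time)
```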

2๏ธโƒฃ CONCEPT OVERVIEW โ€” WHAT IS AGENTIC AI?

Agentic AI introduces:

  • ▸ Goals
  • ▸ State awareness
  • ▸ Interpretation
  • ▸ Reasoning
  • ▸ Self-evaluation

Its core loop is:

```mermaid
graph LR
    A[Observe State] --> B{Reasoning Engine}
    G[Goal] --> B
    B --> C[Formulate Plan]
    C --> D[Suggest Action]
    D --> A
```

Key Differences

| Automation | Agent |
| --- | --- |
| Executes rules | Reasons about goals |
| No interpretation | Interprets context |
| No uncertainty handling | Handles ambiguity |
| No self-reflection | Can justify decisions |

An agent does not replace automation. It augments it with cognition.
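A minimal sketch of that Observe → Reason → Plan → Suggest loop follows. Here reason() is a deliberately trivial stand-in for the LLM call used later in Experiment 2, and every name is an assumption of this tutorial.

```python
# Advisory-only agent loop skeleton. reason() is a placeholder for the
# LLM call shown in Experiment 2; all names are tutorial assumptions.
goal = "Ensure safe motor operation at all times."

def observe_state() -> dict:
    """Gather the current (simulated) plant state."""
    return {"start_button": True, "fault": False, "motor_running": False}

def reason(state: dict, goal: str) -> str:
    """Interpret the state against the goal (stand-in for an LLM)."""
    if state["fault"]:
        return "A fault is present; running the motor conflicts with the goal."
    return "No fault is present; the goal permits motor operation."

def formulate_plan(analysis: str) -> str:
    """Turn the analysis into a suggestion, never a command."""
    return "Advise that the motor may run, pending human approval."

state = observe_state()             # Observe State
analysis = reason(state, goal)      # Reasoning Engine, guided by the Goal
plan = formulate_plan(analysis)     # Formulate Plan
print("SUGGESTED ACTION:", plan)    # Suggest Action (advisory only)
```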

3๏ธโƒฃ LLM VS AGENT โ€” CRITICAL DISTINCTION

An LLM by itself:

  • ▸ Predicts the next token
  • ▸ Has no goals
  • ▸ Has no memory (beyond context)
  • ▸ Has no control loop
  • ▸ Cannot act in the world

An Agent is:

LLM + Goals + State + Reasoning Loop + Safety Boundaries

```mermaid
graph TB
    LLM[LLM Core<br/>Language Model]
    Goals[Goals<br/>Objectives]
    State[State<br/>Memory & Context]
    Loop[Reasoning Loop<br/>Think-Act-Observe]
    Safety[Safety Boundaries<br/>Constraints]

    LLM --> Agent[AGENT<br/>Cognitive System]
    Goals --> Agent
    State --> Agent
    Loop --> Agent
    Safety --> Agent

    style LLM fill:#1a1a1e,stroke:#04d9ff,stroke-width:2px,color:#fff
    style Goals fill:#1a1a1e,stroke:#9e4aff,stroke-width:2px,color:#fff
    style State fill:#1a1a1e,stroke:#9e4aff,stroke-width:2px,color:#fff
    style Loop fill:#1a1a1e,stroke:#9e4aff,stroke-width:2px,color:#fff
    style Safety fill:#1a1a1e,stroke:#ff4fd8,stroke-width:2px,color:#fff
    style Agent fill:#1a1a1e,stroke:#00ff7f,stroke-width:3px,color:#00ff7f,font-weight:bold
```

LLM = Brain tissue

Agent = Cognitive system
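That composition can be written down literally. The dataclass below is a conceptual sketch of LLM + Goals + State + Reasoning Loop + Safety Boundaries; none of these fields correspond to a real agent framework.

```python
from dataclasses import dataclass, field
from typing import Callable

# Conceptual sketch only: field names mirror the diagram above and are not
# part of any real agent framework.
@dataclass
class AdvisoryAgent:
    llm: Callable[[str], str]                   # language model core: text in, text out
    goals: list                                 # objectives the agent reasons toward
    state: dict = field(default_factory=dict)   # memory and context
    safety_boundaries: list = field(default_factory=list)  # hard constraints

    def step(self, observation: dict) -> str:
        """One Think-Act-Observe cycle, reduced to an advisory suggestion."""
        self.state.update(observation)
        prompt = (f"Goals: {self.goals}\n"
                  f"State: {self.state}\n"
                  f"Boundaries: {self.safety_boundaries}")
        return self.llm(prompt)                 # advisory text, never a command

# Usage with a trivial stand-in for the LLM:
agent = AdvisoryAgent(
    llm=lambda prompt: "ADVISORY: conditions look safe to run the motor.",
    goals=["Ensure safe motor operation at all times."],
    safety_boundaries=["advisory-only", "no actuation"],
)
print(agent.step({"start_button": True, "fault": False}))
```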

4๏ธโƒฃ CLEAN EDUCATIONAL SCENARIO

We simulate a simple motor situation with uncertainty:

  • ▸ Start button is pressed
  • ▸ A fault might be present
  • ▸ The system must decide what should happen

We will compare:

  1. A rule-based automation response
  2. An agentic advisory response

5๏ธโƒฃ PRACTICAL EXPERIMENTS

🧪 Experiment 1: Traditional Rule-Based Automation

Objective

Demonstrate how deterministic logic works without reasoning.

Python Code

```python
# Deterministic rule-based logic: the inputs fully determine the output.
start_button = True
fault = False

motor_running = False

# Fault interlock takes priority over the start command.
if fault:
    motor_running = False
elif start_button:
    motor_running = True

print("Motor running:", motor_running)
```

Expected Output

Motor running: True

Interpretation

  • ▸ Does not question conditions
  • ▸ Does not evaluate risk
  • ▸ Does not explain itself
  • ▸ Simply executes

🧪 Experiment 2: Agentic Advisory Reasoning (No Control Authority)

Objective

Demonstrate how an agent reasons about goals and uncertainty without acting.

Python Code

```python
from openai import OpenAI
import json

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Simulated plant state: advisory input only, never connected to hardware.
state = {
    "start_button": True,
    "fault": False,
    "motor_running": False
}

goal = "Ensure safe motor operation at all times."

prompt = f'''
You are an industrial advisory agent.

You MUST:
- Output REASONING first
- Then output ADVISORY_DECISION
- Then output JUSTIFICATION
- You must NOT issue commands
- You must NOT generate PLC code

Current state:
{json.dumps(state, indent=2)}

Goal:
{goal}
'''

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": prompt}
    ]
)

print(response.choices[0].message.content)
```

Expected Output (representative; exact wording will vary between runs)

REASONING:
The start button is pressed and no fault is present.

ADVISORY_DECISION:
It is safe to allow the motor to run.

JUSTIFICATION:
No fault conditions prevent safe operation, and the operational goal can be satisfied.

Interpretation

  • ▸ Automation executes
  • ▸ Agent evaluates and explains
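Because the reply follows the fixed REASONING / ADVISORY_DECISION / JUSTIFICATION layout requested in the prompt, it can be split into named fields before a human reviews it. A minimal parsing sketch, assuming the model honoured that layout (real replies may deviate and should be validated):

```python
import re

# Sample reply in the requested layout; a real response may deviate and
# should be validated before use.
reply = """REASONING:
The start button is pressed and no fault is present.

ADVISORY_DECISION:
It is safe to allow the motor to run.

JUSTIFICATION:
No fault conditions prevent safe operation."""

# Split the reply into its three labelled sections.
sections = {
    name: body.strip()
    for name, body in re.findall(
        r"(REASONING|ADVISORY_DECISION|JUSTIFICATION):\s*(.*?)(?=\n[A-Z_]+:|\Z)",
        reply,
        flags=re.S,
    )
}

print(sections["ADVISORY_DECISION"])  # shown to the human for approval
```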

🔒 EXPLICIT OPERATIONAL PROHIBITIONS

  • โŒ Using agents for safety-rated control
  • โŒ Allowing agents to actuate hardware
  • โŒ Bypassing PLC logic with AI
  • โŒ Letting AI override emergency systems
  • โŒ Running AI inside hard real-time loops

✅ KEY TAKEAWAYS

  • ✅ Automation = execution
  • ✅ Agents = reasoning
  • ✅ LLMs alone are not agents
  • ✅ Agents must always remain advisory at this stage
  • ✅ Cognition and control must stay separated by design

🔜 NEXT TUTORIAL

#2: The ReAct Pattern for PLC Code Analysis

You will now formalize how agents structure reasoning safely.

🧭 ENGINEERING POSTURE

This tutorial enforced:

  • ▸ Cognition before execution
  • ▸ Reasoning before tooling
  • ▸ Human authority before autonomy
  • ▸ Safety before convenience

✅ END OF TUTORIAL #1