🟒 Technician Track β€’ Beginner
Tutorial 7 of 10

Tutorial #7: Chain-of-Thought for Logic Review

Transparent Step-by-Step Reasoning

βœ… CORE MISSION OF THIS TUTORIAL

By the end of this tutorial, the reader will be able to:

  • βœ… Understand what chain-of-thought (CoT) means in practical engineering terms
  • βœ… Ask AI to explain its reasoning, not just give a verdict
  • βœ… Use structured reasoning formats for PLC logic review
  • βœ… Spot subtle logic issues more easily by making assumptions explicit
  • βœ… Prepare for future agent behavior that is transparent and auditable

This tutorial focuses on logic review, not safety certification.

⚠️

⚠️ SAFETY BOUNDARY REMINDER

This tutorial performs analysis only.

It must never be connected to:

  • Live PLCs
  • Production deployment pipelines
  • Safety-rated controllers
  • Motion or power systems

> All outputs are advisory-only and always require explicit human approval before any real-world action.

🌍 VENDOR-AGNOSTIC ENGINEERING NOTE

This tutorial uses generic IEC 61131-3 Structured Text (ST). The same approach applies to:

  • β–Έ TwinCAT
  • β–Έ Siemens TIA Portal
  • β–Έ CODESYS
  • β–Έ Allen-Bradley ST
  • β–Έ Any IEC-based runtime

No vendor-specific libraries, no runtime access, no PLC connections.

1️⃣ WHAT IS CHAIN-OF-THOUGHT IN ENGINEERING TERMS?

In this context:

Practical β€œchain-of-thought” = a structured rationale (assumptions + checks) that explains how a conclusion was reached.

❌ Without Chain-of-Thought

"Yes, the logic is correct."

Not auditable, hard to verify

βœ… With Chain-of-Thought

Step 1: When StartButton is TRUE...
Step 2: When StopButton is TRUE...
Step 3: Therefore the logic behaves as specified.

Transparent, inspectable reasoning

Chain-of-thought does not make the model smarter by itself.
It makes parts of the model's rationale inspectable by you.

Benefits for PLC Engineers

πŸ”Ή See Where AI Might Be Wrong

Inspect each reasoning step

πŸ”Ή Challenge Assumptions

Correct flawed logic immediately

πŸ”Ή Reuse Reasoning Patterns

Apply across similar reviews

πŸ”Ή Build Trust Through Transparency

Not blind faith

2️⃣ REFERENCE SCENARIO β€” REVIEWING SIMPLE MOTOR LOGIC

We reuse a familiar IEC ST snippet:

Structured Text
IF StartButton THEN
    MotorRunning := TRUE;
END_IF;

IF StopButton THEN
    MotorRunning := FALSE;
END_IF;

Specification (Informal)

  • β–Έ When StartButton is TRUE β†’ MotorRunning should become TRUE
  • β–Έ When StopButton is TRUE β†’ MotorRunning should become FALSE
  • β–Έ When neither button is pressed β†’ MotorRunning should keep its last state
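
This spec can also be sanity-checked deterministically with a small Python model of the scan (a hypothetical `scan` helper, not part of any PLC runtime; one call approximates one PLC cycle):

```python
def scan(start_button: bool, stop_button: bool, motor_running: bool) -> bool:
    """Model one PLC scan of the two independent IF statements."""
    if start_button:           # IF StartButton THEN MotorRunning := TRUE
        motor_running = True
    if stop_button:            # IF StopButton THEN MotorRunning := FALSE
        motor_running = False  # this IF runs last, so StopButton wins a tie
    return motor_running

# Spec check across the interesting input combinations
assert scan(True, False, False) is True    # Start pressed -> runs
assert scan(False, True, True) is False    # Stop pressed -> stops
assert scan(False, False, True) is True    # neither -> holds last state
assert scan(True, True, False) is False    # both -> Stop wins (second IF)
print("All spec checks passed")
```

Note that with two independent IF statements, StopButton wins when both buttons are pressed, because its assignment executes last within the scan.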

We want the AI to:

  • β–Έ Explain whether the logic matches the spec
  • β–Έ Show intermediate reasoning steps
  • β–Έ Output a clear VERDICT

3️⃣ CONCEPT: VERDICT-ONLY VS CHAIN-OF-THOUGHT OUTPUT

Two modes of AI behavior:

❌ Verdict-Only Mode

graph LR
    A[Code + Spec] --> B[AI Analysis]
    B --> C[Verdict Only]

    style A fill:#1a1a1e,stroke:#04d9ff,stroke-width:2px,color:#fff
    style B fill:#1a1a1e,stroke:#9e4aff,stroke-width:2px,color:#fff
    style C fill:#1a1a1e,stroke:#ff4fd8,stroke-width:2px,color:#ff4fd8

  • ❌ Hard to verify
  • ❌ Hard to challenge
  • ❌ Black box decision

βœ… Chain-of-Thought Mode

graph LR
    A[Code + Spec] --> B[Assumptions]
    B --> C[Step-by-Step]
    C --> D[Verdict]

    style A fill:#1a1a1e,stroke:#04d9ff,stroke-width:2px,color:#fff
    style B fill:#1a1a1e,stroke:#9e4aff,stroke-width:2px,color:#fff
    style C fill:#1a1a1e,stroke:#9e4aff,stroke-width:2px,color:#fff
    style D fill:#1a1a1e,stroke:#00ff7f,stroke-width:2px,color:#00ff7f

  • βœ… Lists assumptions
  • βœ… Walks through conditions
  • βœ… Compares to spec
  • βœ… Clear verdict

Prefer requesting the structured rationale mode for logic review (but still validate with testing/simulation).

4️⃣ PRACTICAL EXPERIMENTS

πŸ§ͺ Experiment 1: Verdict-Only vs Explicit Chain-of-Thought

Objective

See the difference between a shallow answer and a structured reasoning trace.

Python Code

Python
from openai import OpenAI

client = OpenAI()

iec_code = """
IF StartButton THEN
    MotorRunning := TRUE;
END_IF;

IF StopButton THEN
    MotorRunning := FALSE;
END_IF;
"""

spec = """
Specification:
- When StartButton is TRUE, MotorRunning should become TRUE.
- When StopButton is TRUE, MotorRunning should become FALSE.
- When neither button is pressed, MotorRunning should retain its last state.
"""

# --- Part A: Verdict-only style prompt ---

prompt_verdict_only = f"""
You are a PLC logic reviewer.

Here is the code (IEC 61131-3 ST):

{iec_code}

{spec}

Question:
Does this logic match the specification? Answer briefly with YES or NO and one short sentence.
"""

response_verdict = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": prompt_verdict_only}
    ]
)

print("=== VERDICT-ONLY ===")
print(response_verdict.choices[0].message.content)

# --- Part B: Structured rationale prompt ---

prompt_rationale = f"""
You are a PLC logic reviewer.

Here is the code (IEC 61131-3 ST):

{iec_code}

{spec}

Follow this format exactly:

ASSUMPTIONS:
- ...

CHECKS (brief, human-auditable):
1. ...
2. ...
3. ...

VERDICT:
- PASS or FAIL, with one short justification.

Do not provide hidden internal chain-of-thought. Provide only a concise rationale and checks that a human can audit.
"""

response_rationale = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": prompt_rationale}
    ]
)

print("\n=== STRUCTURED RATIONALE ===")
print(response_rationale.choices[0].message.content)

Expected Output

=== VERDICT-ONLY ===
YES. The logic matches the specification.

=== STRUCTURED RATIONALE ===
ASSUMPTIONS:
- MotorRunning keeps its previous value unless explicitly set.
- StartButton and StopButton are momentary signals.

CHECKS (brief, human-auditable):
1. When StartButton is TRUE, MotorRunning is set to TRUE.
2. When StopButton is TRUE, MotorRunning is set to FALSE.
3. When both are FALSE, MotorRunning is not reassigned and keeps its last state.

VERDICT:
- PASS. The logic behaves according to the given specification.

Interpretation

  • β–Έ ❌ Verdict-only is not auditable
  • β–Έ βœ… Structured rationale exposes assumptions and checks
  • β–Έ βœ… You can now agree or disagree with concrete steps
  • β–Έ Cost/runtime vary by model, pricing, and system load

πŸ§ͺ Experiment 2: Chain-of-Thought for Detecting a Logic Bug

Objective

Use structured reasoning to catch a subtle behavior issue.

Python Code

Python
from openai import OpenAI

client = OpenAI()

# Deliberately faulty code with ELSIF
iec_faulty = """
IF StartButton THEN
    MotorRunning := TRUE;
ELSIF StopButton THEN
    MotorRunning := FALSE;
END_IF;
"""

spec = """
Specification:
- When StartButton is TRUE, MotorRunning should become TRUE.
- When StopButton is TRUE, MotorRunning should become FALSE.
- If both StartButton and StopButton are TRUE at the same time,
  StopButton must take priority and MotorRunning should become FALSE.
- When neither button is pressed, MotorRunning should retain its last state.
"""

prompt_faulty_cot = f"""
You are a PLC logic reviewer.

Here is the code (IEC 61131-3 ST):

{iec_faulty}

{spec}

Follow this format exactly:

ASSUMPTIONS:
- ...

STEP_BY_STEP_REASONING:
1. ...
2. ...
3. ...

EDGE_CASE_ANALYSIS:
- Describe what happens when both StartButton and StopButton are TRUE.
- Compare this to the specification.

VERDICT:
- PASS or FAIL, with a short justification.

Do not provide hidden internal chain-of-thought. Provide only a concise rationale and checks that a human can audit.
"""

response_faulty = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": prompt_faulty_cot}
    ]
)

print(response_faulty.choices[0].message.content)

Expected Output

ASSUMPTIONS:
- MotorRunning keeps its previous value unless assigned.
- StartButton and StopButton can be TRUE at the same time.

STEP_BY_STEP_REASONING:
1. When StartButton is TRUE and StopButton is FALSE, MotorRunning is set to TRUE.
2. When StartButton is FALSE and StopButton is TRUE, MotorRunning is set to FALSE.
3. When both are FALSE, MotorRunning holds its last state.

EDGE_CASE_ANALYSIS:
- When both StartButton and StopButton are TRUE, only the StartButton branch executes due to ELSIF.
- This sets MotorRunning to TRUE, while the specification requires StopButton to take priority and force MotorRunning to FALSE.

VERDICT:
- FAIL. The ELSIF structure prioritizes StartButton over StopButton, which violates the specification.

Interpretation

  • β–Έ βœ… Surfaces edge case behavior clearly
  • β–Έ βœ… Maps behavior back to the written specification
  • β–Έ βœ… Provides a clear, auditable FAIL verdict
  • β–Έ βœ… ELSIF bug caught through explicit reasoning
  • β–Έ Cost/runtime vary by model, pricing, and system load
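
The AI's FAIL verdict can be cross-checked deterministically with a small Python model of the faulty IF/ELSIF scan (a hypothetical `scan_faulty` helper, for illustration only):

```python
def scan_faulty(start_button: bool, stop_button: bool, motor_running: bool) -> bool:
    """Model one scan of the faulty IF/ELSIF version."""
    if start_button:             # IF StartButton THEN ...
        motor_running = True
    elif stop_button:            # ELSIF only runs when StartButton is FALSE
        motor_running = False
    return motor_running

# Edge case from the spec: both buttons TRUE -> StopButton must win (FALSE)
result = scan_faulty(True, True, False)
print(f"Both buttons pressed -> MotorRunning = {result}")  # True: spec violation
assert result is True  # confirms the bug the structured rationale reported
```

A two-line simulation like this is exactly the kind of independent check that should accompany any AI verdict before it is trusted.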

⚠️ THE INTEGRATION CHALLENGE

In production: reasoning traces need to be aggregated, searched, and checked for completeness. Free-form CoT text makes this hard to do reliably at scale.

Chain-of-thought provides transparency, but not processability:

Example: Programmatically checking if edge cases were analyzed

Python
cot_output = """
ASSUMPTIONS:
- MotorRunning keeps its previous value unless assigned.
- StartButton and StopButton can be TRUE at the same time.

STEP_BY_STEP_REASONING:
1. When StartButton is TRUE...
2. When StartButton is FALSE and StopButton is TRUE...
3. When both are FALSE...

EDGE_CASE_ANALYSIS:
- When both buttons TRUE, only StartButton branch executes...

VERDICT:
- FAIL. The ELSIF structure prioritizes StartButton...
"""

# How do you automatically detect if edge case analysis was performed?
# How do you extract which specific edge cases were considered?
# How do you validate that all required reasoning steps are present?

# String parsing is fragile:
has_edge_analysis = "EDGE_CASE_ANALYSIS:" in cot_output  # Brittle
verdict = "FAIL" if "FAIL" in cot_output else "PASS"    # Unreliable

# Problem: Can't reliably build automation on top of prose

βœ… What CoT Provides

  • βœ… Human-readable reasoning
  • βœ… Auditable step-by-step logic
  • βœ… Transparent assumptions

⚠️ What CoT Doesn't Provide

  • ⚠️ Machine-parseable fields
  • ⚠️ Reliable extraction of verdicts
  • ⚠️ Automated validation of completeness
  • ⚠️ Multi-agent interoperability

Tutorial #9 covers schema-first design + field-level validation + audit logging β€” converting rationale into structured JSON with explicit fields for assumptions, edge cases, and verdicts. This makes reasoning checkable: you can validate completeness, extract specific findings, and aggregate across multiple analyses.
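
As a preview of that schema-first idea, here is a minimal sketch using a plain dataclass (field names are hypothetical; Tutorial #9 develops the real schema and validation):

```python
from dataclasses import dataclass

@dataclass
class LogicReview:
    """Hypothetical structured review record (illustrative fields only)."""
    assumptions: list[str]
    edge_cases: list[str]
    verdict: str  # "PASS" or "FAIL"

    def validate(self) -> list[str]:
        """Completeness checks that are impossible on free-form prose."""
        problems = []
        if self.verdict not in ("PASS", "FAIL"):
            problems.append(f"invalid verdict: {self.verdict!r}")
        if not self.assumptions:
            problems.append("no assumptions recorded")
        if not self.edge_cases:
            problems.append("no edge cases analyzed")
        return problems

review = LogicReview(
    assumptions=["MotorRunning holds its value unless assigned"],
    edge_cases=["both buttons TRUE -> StartButton branch wins"],
    verdict="FAIL",
)
print(review.validate())  # [] -> structurally complete
```

With fields like these, completeness can be checked, verdicts extracted, and findings aggregated across many analyses without any string parsing.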

Important: Structured outputs solve format problems, not correctness problems. An agent can return perfectly valid JSON with wrong reasoning. Structured outputs make reasoning checkable (constraints, cross-checks, validators), not correct.

Note: In production, prefer brief rationale summaries over full reasoning transcripts. The audit trail is the structured output + evidence + validation results, not verbatim internal reasoning.

πŸ”’ EXPLICIT OPERATIONAL PROHIBITIONS

❌ Never Use Chain-of-Thought For:

  • ❌ Treating chain-of-thought outputs as formal verification
  • ❌ Using AI reasoning as a replacement for testing or simulation
  • ❌ Letting AI approve or merge PLC code changes automatically
  • ❌ Using this process for safety certification or compliance

Chain-of-thought is a review aid, not a formal method.

βœ… KEY TAKEAWAYS

  • βœ… Chain-of-thought = showing the reasoning, not just verdicts
  • βœ… It makes AI outputs auditable, challengeable, and reusable
  • βœ… You can catch subtle logic bugs by forcing edge case analysis
  • βœ… The engineer remains the final decision-maker
  • βœ… This pattern will later power transparent agent behavior in higher tracks

πŸ”œ NEXT TUTORIAL

#8 β€” Building Your First Tool-Using Agent

Extend your skills by connecting reasoning to a single, strictly read-only tool in a controlled way.

🧭 ENGINEERING POSTURE

This tutorial enforced:

  • β–Έ Transparency over black-box answers
  • β–Έ Structured reasoning over intuition
  • β–Έ Human authority over all conclusions
  • β–Έ Advisory tooling over autonomous control