Tutorial #6: Few-Shot Learning for PLC Validation
Teaching AI Patterns, Not Rules
🎯 CORE MISSION OF THIS TUTORIAL
By the end of this tutorial, the reader will be able to:
- ✅ Understand what few-shot learning is in practical engineering terms
- ✅ Provide high-quality PLC examples that guide the model toward consistency
- ✅ Validate PLC logic using controlled, auditable prompts
- ✅ Prevent "creative" or unsafe reinterpretations of logic
- ✅ Prepare the foundation for structured, advisory logic reviews (Tutorial #7)
Few-shot learning is how you teach AI patterns, not rules – safely.
⚠️ SAFETY BOUNDARY REMINDER
This tutorial performs analysis only.
It must never be connected to:
- Live PLCs
- Production deployment pipelines
- Safety-rated controllers
- Motion or power systems
> All outputs are advisory-only and always require explicit human approval before any real-world action.
📝 VENDOR-AGNOSTIC ENGINEERING NOTE
This tutorial uses generic IEC 61131-3 Structured Text (ST). The examples apply unchanged to:
- TwinCAT
- Siemens TIA Portal
- CODESYS
- Allen-Bradley ST
- Any IEC-based runtime

No vendor APIs or libraries are required. No online PLC interactions.
1️⃣ WHAT IS FEW-SHOT LEARNING?
In engineering terms:
Few-shot learning is pattern teaching through examples, not parameter training.
We are not fine-tuning the model.
We are not training weights.
Instead:
- We show the LLM what a good validation looks like
- It imitates the format, structure, and tone
- Its output becomes far more predictable
Few-shot learning directly increases:
- 🔹 Consistency: same format every time
- 🔹 Accuracy: better pattern matching
- 🔹 Traceability: predictable outputs
- 🔹 Engineer trust: reliable behavior
2️⃣ REFERENCE SCENARIO – VALIDATING SIMPLE MOTOR LOGIC

We define a "golden example" for validating a minimal motor control block:

```iecst
IF StartButton THEN
    MotorRunning := TRUE;
END_IF;

IF StopButton THEN
    MotorRunning := FALSE;
END_IF;
```

Our goal: build an AI validator that analyzes logic against expected behavior without producing code.
3️⃣ CONSTRUCTING A PROPER FEW-SHOT PROMPT
A few-shot validator prompt contains:
Prompt Components
1. Rules: clear validation guidelines
2. Examples: one or more golden examples
3. Code to Validate: the target PLC logic
4. Format Template: the expected output structure
Few-Shot Flow

```mermaid
graph LR
    A[Rules] --> B[Examples]
    B --> C[Code Input]
    C --> D[Validation Output]
    style A fill:#1a1a1e,stroke:#9e4aff,stroke-width:2px,color:#fff
    style B fill:#1a1a1e,stroke:#9e4aff,stroke-width:2px,color:#fff
    style C fill:#1a1a1e,stroke:#04d9ff,stroke-width:2px,color:#fff
    style D fill:#1a1a1e,stroke:#00ff7f,stroke-width:2px,color:#00ff7f
```

Examples guide the model toward a consistent format.
This tutorial includes two experiments demonstrating how each layer improves reliability.
4️⃣ PRACTICAL EXPERIMENTS
🧪 Experiment 1: Single-Example Few-Shot Validator
Objective
Show how even one good example stabilizes the validator output.
Python Code

```python
from openai import OpenAI

client = OpenAI()

example_validation = """
### EXAMPLE_VALIDATION
INPUT_CODE:
IF StartButton THEN
    MotorRunning := TRUE;
END_IF;
IF StopButton THEN
    MotorRunning := FALSE;
END_IF;

VALIDATION_RESULT:
- Logic correctly sets MotorRunning TRUE on StartButton.
- Logic correctly resets MotorRunning on StopButton.
- No latches, timers, or safety logic present.
- Behavior matches expected specification.
### END_EXAMPLE
"""

code_to_validate = """
IF StartButton THEN
    MotorRunning := TRUE;
END_IF;
IF StopButton THEN
    MotorRunning := FALSE;
END_IF;
"""

prompt = f"""
You are a PLC logic validator. Follow the example exactly.

{example_validation}

### VALIDATE_THIS
INPUT_CODE:
{code_to_validate}

VALIDATION_RESULT:
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",
    temperature=0,  # minimize run-to-run variation
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Expected Output

- Logic correctly sets MotorRunning TRUE on StartButton.
- Logic correctly resets MotorRunning on StopButton.
- No latches, timers, or unexpected constructs detected.
- Logic matches the expected specification.
Interpretation
- ✅ The AI begins to imitate the pattern
- ✅ Output becomes auditable and stable
- ✅ No hallucinated features
- Cost: ~$0.01 | Runtime: <1 second
🧪 Experiment 2: Multi-Example Few-Shot Validator
Objective
Improve robustness by supplying two examples, including one with a mistake.
Python Code

```python
from openai import OpenAI

client = OpenAI()

multi_examples = """
### EXAMPLE_GOOD
INPUT_CODE:
IF StartButton THEN
    MotorRunning := TRUE;
END_IF;
IF StopButton THEN
    MotorRunning := FALSE;
END_IF;

VALIDATION_RESULT:
- Behavior follows the expected start/stop specification.
- No timers, latches, or unexpected constructs detected.
### END_EXAMPLE

### EXAMPLE_FAULTY
INPUT_CODE:
MotorRunning := StartButton AND NOT StopButton;

VALIDATION_RESULT:
- Logic incorrectly ties output directly to the input expression.
- MotorRunning loses state when both inputs are FALSE.
- Expected behavior: MotorRunning should latch TRUE until StopButton.
### END_EXAMPLE
"""

code_to_validate = """
MotorRunning := StartButton AND NOT StopButton;
"""

prompt = f"""
You are a PLC logic validator. Use the examples as patterns.
Be concise, deterministic, and avoid suggesting code.

{multi_examples}

### VALIDATE_THIS
INPUT_CODE:
{code_to_validate}

VALIDATION_RESULT:
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",
    temperature=0,  # minimize run-to-run variation
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Expected Output

- Logic directly binds MotorRunning to the input expression.
- This causes loss of state when both inputs are FALSE.
- Expected behavior requires preserving MotorRunning until StopButton activates.
- Pattern matches the faulty example case.
Interpretation
- ✅ More accurate with two examples
- ✅ More specific error detection
- ✅ More robust to variations
- ✅ Better at identifying faulty patterns
- Cost: ~$0.02 | Runtime: <1 second
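The state-loss fault flagged above is easy to demonstrate outside a PLC. A minimal Python simulation (an illustration only; variable names mirror the ST example, and `seal_in` shows the conventional latch pattern the faulty code should have used):

```python
def direct_binding(start: bool, stop: bool, running: bool) -> bool:
    """Faulty pattern: output tied directly to the input expression.
    The 'running' state is ignored, so the output has no memory."""
    return start and not stop

def seal_in(start: bool, stop: bool, running: bool) -> bool:
    """Conventional seal-in latch: holds state until StopButton."""
    return (start or running) and not stop

# Scan sequence: operator taps Start, then releases it.
scans = [(True, False), (False, False), (False, False)]

direct_state = latched_state = False
for start, stop in scans:
    direct_state = direct_binding(start, stop, direct_state)
    latched_state = seal_in(start, stop, latched_state)

print(direct_state)   # False -> motor stopped the moment Start was released
print(latched_state)  # True  -> motor keeps running until Stop is pressed
```

This is exactly the difference the faulty example teaches the validator to recognize: the direct binding loses state when both inputs are FALSE, while the latch preserves it.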
⚠️ THE INTEGRATION CHALLENGE
In production, validation outputs need to feed dashboards, trigger alerts, and drive automated workflows. Free-form text breaks all of these.
Few-shot learning improves consistency, but the output is still free-form text:
✅ What Few-Shot Solves
- ✅ More consistent tone
- ✅ More predictable structure
- ✅ Better pattern matching

⚠️ What Few-Shot Doesn't Solve
- ⚠️ Output is still free-form prose
- ⚠️ Hard to parse programmatically
- ⚠️ Can't reliably extract fields
- ⚠️ Breaks downstream automation
Example: trying to programmatically check whether "safety logic is missing":

```python
# Run 1:
validation_output_1 = """
- Logic correctly sets MotorRunning TRUE on StartButton.
- Logic correctly resets MotorRunning on StopButton.
- No latches, timers, or safety logic present.
- Behavior matches expected specification.
"""

# Run 2 (same logic, slight rewording):
validation_output_2 = """
- StartButton correctly sets MotorRunning to TRUE.
- StopButton correctly resets MotorRunning to FALSE.
- Safety interlocks: none detected.
- Specification compliance: confirmed.
"""

# Run 3 (extra commentary):
validation_output_3 = """
Analysis complete. The logic behaves as expected:
- MotorRunning becomes TRUE when StartButton activates
- MotorRunning becomes FALSE when StopButton activates
No safety logic or fault handling was found in this block.
"""

# Now try to parse them:
def has_safety_logic(text):
    if "no latches, timers, or safety logic" in text.lower():
        return False  # works for Run 1
    if "safety interlocks: none" in text.lower():
        return False  # works for Run 2
    if "no safety logic or fault handling" in text.lower():
        return False  # works for Run 3
    return None  # can't determine

# Change one word → parser breaks
# Reorder bullets → parser breaks
# Add commentary → parser breaks
# Whitespace changes → parser breaks
```
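The fragility is easy to demonstrate: a fourth, equally valid phrasing defeats every check. This sketch condenses the three substring checks into one self-contained function for illustration:

```python
def has_safety_logic(text: str):
    """Same substring checks as the parser above (None = undetermined)."""
    t = text.lower()
    if "no latches, timers, or safety logic" in t:
        return False
    if "safety interlocks: none" in t:
        return False
    if "no safety logic or fault handling" in t:
        return False
    return None

# Run 4: a fourth valid phrasing of the exact same finding.
validation_output_4 = """
- Start/stop behavior verified.
- The block contains no interlock or safety-related logic.
"""
print(has_safety_logic(validation_output_4))  # None -> parser cannot decide
```

Every new phrasing forces another hard-coded check, and the list never stops growing.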
Problem: free text requires fragile, unmaintainable string parsing.

Tutorial #9 will teach you schema-first design + validation + retry loops, forcing AI outputs into machine-readable JSON that can be reliably parsed, validated against constraints, and logged. This makes correctness checkable (not guaranteed, but testable).
You'll combine few-shot learning (T6) + structured outputs (T9) to get the best of both: consistent patterns AND reliable parsing.
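As a preview of that combination, here is the same kind of finding expressed as JSON and consumed with exact field access instead of substring matching (the field names here are illustrative, not the schema Tutorial #9 will define):

```python
import json

# The same validation result, emitted as JSON instead of prose.
raw = '{"safety_logic_present": false, "findings": ["start/stop behavior verified"]}'

result = json.loads(raw)  # fails loudly if the output is malformed

# Field access is exact: no substring matching, no phrasing sensitivity.
print(result["safety_logic_present"])  # False
```

Rewording, reordering, or extra commentary cannot break a field lookup the way it breaks a substring check.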
🚫 EXPLICIT OPERATIONAL PROHIBITIONS
❌ Never Use Few-Shot Validation For:
- ❌ Treating AI validation as final approval
- ❌ Signing off safety-rated logic
- ❌ Automatically fixing or rewriting PLC code
- ❌ Passing AI validation off as compliance documentation
Few-shot validation is guidance, not authorization.
✅ KEY TAKEAWAYS
- ✅ Few-shot learning teaches the AI by example, not by training
- ✅ Examples drastically improve consistency and trust
- ✅ Multiple examples strengthen error detection
- ✅ Validation stays advisory-only
- ✅ This forms the basis for Tutorial #7 – Chain-of-Thought Safety Review
➡️ NEXT TUTORIAL
#7 – Chain-of-Thought for Logic Review (Advisory Only)
Learn how to perform structured, transparent reasoning on PLC logic.
🧭 ENGINEERING POSTURE
This tutorial enforced:
- Pattern-guided validation
- Controlled, predictable output
- Human authority over all decisions
- Safety-first educational design