Tutorial D3: LangChain Essentials for Industrial Control Systems
From raw API calls to composable, swappable, production-ready LLM chains.
✅ CORE MISSION OF THIS TUTORIAL
By the end of this tutorial, the reader will be able to:
- ✅ Understand why LangChain exists and when it adds value over raw OpenAI calls.
- ✅ Use ChatOpenAI/ChatAnthropic as vendor-agnostic model wrappers.
- ✅ Build reusable ChatPromptTemplate objects for industrial scenarios.
- ✅ Parse LLM responses into typed Python objects with PydanticOutputParser.
- ✅ Compose multi-step workflows with RunnableSequence, the building block that LangGraph nodes use internally.
This tutorial gives you the LangChain vocabulary that every later Developer Track tutorial assumes, especially LangGraph (D4), where each chain step becomes a graph node.
📋 VENDOR-AGNOSTIC ENGINEERING NOTE
This tutorial uses:
- ▸ OpenAI-compatible APIs (gpt-4o-mini shown; the provider wrapper swaps while chain logic stays the same)
- ▸ Generic IEC 61131-3 alarm codes and tag patterns
- ▸ Simulated PLC data only; no live connections required
- ▸ All code tested with langchain-openai 0.1.x and langchain-core 0.2.x
You can swap ChatOpenAI for ChatAnthropic, ChatGroq, or another LangChain-compatible model while keeping the same prompt, parser, and chain structure. Provider setup still changes: package install, API key, and occasionally model-specific kwargs.
1️⃣ CONCEPT OVERVIEW: WHY NOT JUST CALL THE API DIRECTLY?
In the Technician track you called openai.chat.completions.create() directly. That works perfectly for single-shot prompts. But when you start building multi-step agent workflows (parse the alarm, fetch the context, classify severity, format the recommendation), raw API calls become brittle glue code.
Consider what changes when you move from a demo to a real deployment:
- ▸ Your plant standardizes on Anthropic instead of OpenAI → you rewrite every API call.
- ▸ You need to test the classifier with different prompts → they are buried inside functions, not separated from logic.
- ▸ The output parser breaks when the model adds an unexpected sentence → you have ad-hoc string parsing scattered everywhere.
- ▸ You want to add a fallback model for when the primary is rate-limited → now you are rebuilding retry logic from scratch.
Key Principle: LangChain gives you standardized interfaces, not magic.
Think of it like IEC 61131-3 function blocks: TON, CTU, and MOVE don't make your ladder logic smarter β they make it composable and vendor-portable. LangChain does the same for LLM calls.
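To make the retry bullet above concrete, here is a dependency-free sketch of the exponential-backoff-with-jitter pattern you would otherwise hand-roll, and that ChatOpenAI's max_retries delegates to the library. call_with_backoff and flaky_api are hypothetical names for illustration, not LangChain APIs:

```python
import random
import time

def call_with_backoff(fn, max_retries=3, base_delay=1.0):
    """Retry fn with exponential backoff plus jitter."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except RuntimeError:  # stand-in for a rate-limit / timeout error
            if attempt == max_retries:
                raise
            # Delay doubles each attempt: base, 2*base, 4*base, ...
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))

attempts = {"n": 0}

def flaky_api():
    """Hypothetical rate-limited endpoint: fails twice, then succeeds."""
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("429 Too Many Requests")
    return "ok"

print(call_with_backoff(flaky_api, base_delay=0.01))  # ok
```

With LangChain you delete this boilerplate and set max_retries=3 on the model wrapper instead.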
ARCHITECTURE COMPARISON
graph LR
subgraph Raw["Raw API calls"]
direction LR
R1[openai.create<br/>prompt hardcoded]:::pink --> R2[string.split<br/>brittle parsing]:::pink --> R3[if 'ERROR' in<br/>ad-hoc logic]:::pink
end
subgraph LC["LangChain Runnables"]
direction LR
L1[ChatPromptTemplate<br/>reusable & testable]:::cyan --> L2[ChatOpenAI<br/>swappable model]:::purple --> L3[PydanticOutputParser<br/>typed & validated]:::green
end
classDef cyan fill:#1a1a1e,stroke:#04d9ff,stroke-width:2px,color:#04d9ff;
classDef purple fill:#1a1a1e,stroke:#9e4aff,stroke-width:2px,color:#9e4aff;
classDef green fill:#1a1a1e,stroke:#00ff7f,stroke-width:2px,color:#00ff7f;
classDef pink fill:#1a1a1e,stroke:#ff4fd8,stroke-width:2px,color:#ff4fd8;
Each LangChain component implements the Runnable interface, which means each can be composed with the pipe operator (|), wrapped with fallbacks, batched, and streamed. This composability is what D4 (LangGraph) relies on: every node in a StateGraph is a Runnable.
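To demystify the pipe operator, here is a minimal plain-Python sketch of the idea behind the Runnable interface. Step, render, fake_llm, and parse are hypothetical stand-ins for illustration, not LangChain classes:

```python
class Step:
    """Minimal stand-in for a Runnable: a callable supporting
    | composition, invoke, and batch."""
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, x):
        return self.fn(x)

    def batch(self, xs):
        # Composability gives you batching for free
        return [self.invoke(x) for x in xs]

    def __or__(self, other):
        # a | b builds a new Step that runs a, then feeds b
        return Step(lambda x: other.invoke(self.invoke(x)))

render = Step(lambda d: f"Classify alarm {d['alarm_code']}")
fake_llm = Step(lambda prompt: f"{prompt} -> HIGH")      # pretend model
parse = Step(lambda text: text.rsplit("-> ", 1)[-1])     # pretend parser

chain = render | fake_llm | parse
print(chain.invoke({"alarm_code": "E-421"}))  # HIGH
print(chain.batch([{"alarm_code": "E-419"}, {"alarm_code": "E-421"}]))
```

Real Runnables add streaming, fallbacks, and async on top of this same composition idea.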
2️⃣ CHATMODEL: THE UNIVERSAL LLM INTERFACE
BaseChatModel is the interface that every LangChain chat model implements. You configure a model once (temperature, timeout, retry policy) and swap providers without touching your chain logic.
Notice the connection to D2: LangChain's model wrappers include built-in retry with exponential backoff. The max_retries parameter uses the same strategy you built manually in D2, so you are not giving up control; you are delegating a known pattern to a tested library.
SETUP CELL
ChatModel setup: vendor-agnostic model wrapper
Initialise a ChatOpenAI model with production-appropriate settings and verify it responds to a basic industrial prompt.
# Install: pip install langchain-openai langchain-core
import os
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage
# --- Model configuration ---
# max_retries: built-in exponential backoff (same pattern as D2, handled for you)
# request_timeout: fail fast if the API hangs; don't block the worker loop
# temperature=0: deterministic outputs for industrial classification tasks
llm = ChatOpenAI(
model="gpt-4o-mini",
temperature=0,
max_retries=3,
request_timeout=30,
api_key=os.environ["OPENAI_API_KEY"],
)
# --- Basic invocation with structured messages ---
messages = [
SystemMessage(content=(
"You are an industrial AI assistant. "
"Respond concisely and factually. "
"Advisory only: never recommend direct PLC writes."
)),
HumanMessage(content=(
"Alarm E-421 on Filling Line 3: motor overtemperature. "
"What is the most likely root cause in one sentence?"
)),
]
response = llm.invoke(messages)
print(response.content)
# usage_metadata is provider/model dependent, so guard it in a portability example
total_tokens = (response.usage_metadata or {}).get("total_tokens")
if total_tokens is not None:
print(f"\nTokens used: {total_tokens}")
# --- Swapping providers keeps the same message and invoke flow. ---
# from langchain_anthropic import ChatAnthropic
# llm = ChatAnthropic(
# model="claude-3-5-haiku-20241022",
# temperature=0,
# max_retries=3,
# api_key=os.environ["ANTHROPIC_API_KEY"],
# )

Expected output
Likely root cause: sustained overload or blocked motor ventilation, causing heat buildup beyond the thermal protection threshold.

Tokens used: 87
3️⃣ PROMPTTEMPLATE: REUSABLE, TESTABLE PROMPTS
In the Technician track (T5), you learned that well-structured prompts produce better outputs than ad-hoc strings. ChatPromptTemplate takes that discipline further: prompts become parameterised objects that you can test independently, version-control, and reuse across multiple chains.
Think of a prompt template like a function block with input parameters in IEC 61131-3: you define the interface once, then call it with different arguments without rewriting the body.
CONCEPT CELL
ChatPromptTemplate: parameterised alarm triage prompt
Build a reusable alarm triage prompt that accepts plant-specific context as variables, then render and inspect it before attaching a model.
from langchain_core.prompts import ChatPromptTemplate
# --- Define the prompt template ---
# Variables in {curly_braces} are injected at call time
# System message sets the agent's role and boundaries
# Human message carries the dynamic alarm context
alarm_triage_prompt = ChatPromptTemplate.from_messages([
("system", """You are an industrial fault-triage assistant.
Rules:
- Advisory only. Never recommend direct PLC writes.
- Classify severity as: CRITICAL | HIGH | MEDIUM | LOW
- Cite the specific tag or alarm code that supports your classification.
- If data is insufficient, state that explicitly."""),
("human", """Alarm report:
Alarm code : {alarm_code}
PLC tags : {plc_tags}
Shift : {shift}
Prior alarms (last 1h): {recent_alarms}
Classify severity and give one-sentence root cause."""),
])
# --- Inspect the rendered prompt (no LLM call yet) ---
# This is useful for unit-testing prompts without spending tokens
rendered = alarm_triage_prompt.invoke({
"alarm_code": "E-421",
"plc_tags": "MotorTemp=92°C, AmbientTemp=28°C, LoadCurrent=18A (rated 15A)",
"shift": "Night shift, 02:15",
"recent_alarms": "E-419 (overcurrent, 01:50), E-421 (overtemp, 01:55)",
})
# Print each message to see the fully rendered prompt
for msg in rendered.messages:
print(f"[{msg.type.upper()}]\n{msg.content}\n") Expected output
[SYSTEM] You are an industrial fault-triage assistant. Rules: - Advisory only. Never recommend direct PLC writes. - Classify severity as: CRITICAL | HIGH | MEDIUM | LOW - Cite the specific tag or alarm code that supports your classification. - If data is insufficient, state that explicitly. [HUMAN] Alarm report: Alarm code : E-421 PLC tags : MotorTemp=92Β°C, AmbientTemp=28Β°C, LoadCurrent=18A (rated 15A) Shift : Night shift, 02:15 Prior alarms (last 1h): E-419 (overcurrent, 01:50), E-421 (overtemp, 01:55) Classify severity and give one-sentence root cause.
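Because rendering costs no tokens, prompt templates can be covered by ordinary unit tests. Here is a minimal sketch of the idea, using a plain format string as a stand-in for the ChatPromptTemplate above; the same assertions work against the content of the rendered messages. HUMAN_TEMPLATE and render_human are hypothetical names for illustration:

```python
# Hypothetical template text mirroring the human message above
HUMAN_TEMPLATE = (
    "Alarm report:\n"
    "Alarm code : {alarm_code}\n"
    "Shift      : {shift}\n"
    "Classify severity and give one-sentence root cause."
)

def render_human(values: dict) -> str:
    """Render the template; analogous to prompt.invoke(values)."""
    return HUMAN_TEMPLATE.format(**values)

def test_prompt_contains_alarm_context():
    rendered = render_human({"alarm_code": "E-421", "shift": "Night shift, 02:15"})
    # Assertions a CI pipeline can run without any LLM call
    assert "E-421" in rendered
    assert "Night shift" in rendered
    assert "severity" in rendered.lower()

test_prompt_contains_alarm_context()
print("prompt render test passed")
```

Version-controlling the template and its tests together is what keeps prompts from silently drifting in production.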
4️⃣ OUTPUT PARSERS: FROM FREE TEXT TO TYPED DATA
Raw LLM responses are strings. Industrial workflows need typed, validated data, the same way a PLC function block outputs a structured type, not a raw byte stream. PydanticOutputParser bridges the gap: it injects format instructions into your prompt and validates the response against a Pydantic schema.
⚡ INSTRUCTOR vs PYDANTIC OUTPUT PARSER
In Technician T9, you used Instructor, which forces JSON mode at the API level: the model is constrained to emit valid JSON by the API itself. PydanticOutputParser works differently: it adds format instructions to the prompt and parses the model's text output.
- ▸ Use Instructor when: the model supports JSON mode and schema compliance is non-negotiable.
- ▸ Use PydanticOutputParser when: you need to compose parsing inside a LangChain chain, or the model doesn't support JSON mode.
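To see why prompt-based parsing needs validation, here is a dependency-free sketch of the two jobs a schema parser performs: extracting structured data from prose-wrapped model output, and rejecting out-of-schema values. extract_diagnosis is a hypothetical helper for illustration, not a LangChain API:

```python
import json

ALLOWED_SEVERITIES = {"CRITICAL", "HIGH", "MEDIUM", "LOW"}

def extract_diagnosis(model_text: str) -> dict:
    """Pull the first JSON object out of surrounding prose and
    validate the severity field against the allowed enum."""
    start = model_text.find("{")
    end = model_text.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in model output")
    data = json.loads(model_text[start : end + 1])
    if data.get("severity") not in ALLOWED_SEVERITIES:
        raise ValueError(f"invalid severity: {data.get('severity')!r}")
    return data

# Models often wrap JSON in chatty prose; the parser must tolerate it
reply = (
    "Sure! Here is the diagnosis:\n"
    '{"alarm_code": "E-421", "severity": "HIGH"}\n'
    "Let me know if you need more detail."
)
print(extract_diagnosis(reply))
```

PydanticOutputParser does this against your full FaultDiagnosis schema, including type coercion and the ge/le bounds on confidence.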
CONCEPT CELL
PydanticOutputParser: typed fault diagnosis from LLM response
Define a FaultDiagnosis schema, inject format instructions into the prompt, and validate the LLM output against the schema.
from enum import Enum
from pydantic import BaseModel, Field
from langchain_core.output_parsers import PydanticOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
import os
# --- Define the output schema ---
class Severity(str, Enum):
CRITICAL = "CRITICAL"
HIGH = "HIGH"
MEDIUM = "MEDIUM"
LOW = "LOW"
class FaultDiagnosis(BaseModel):
alarm_code: str = Field(description="The alarm code being analysed")
severity: Severity = Field(description="Classified severity level")
root_cause: str = Field(description="One-sentence root cause explanation")
affected_tags: list[str] = Field(description="PLC tags that support the diagnosis")
recommended_action: str = Field(description="Advisory action for the engineer (no PLC writes)")
confidence: float = Field(description="Confidence score 0.0-1.0", ge=0.0, le=1.0)
# --- Wire up the parser ---
parser = PydanticOutputParser(pydantic_object=FaultDiagnosis)
# The parser generates format instructions automatically
# Inject them into your prompt with {format_instructions}
prompt = ChatPromptTemplate.from_messages([
("system", "You are an industrial fault-triage assistant. Advisory only.\n{format_instructions}"),
("human", (
"Alarm E-421 on Filling Line 3.\n"
"Tags: MotorTemp=92°C, LoadCurrent=18A (rated 15A), AmbientTemp=28°C.\n"
"Prior alarms last hour: E-419 overcurrent at 01:50, E-421 overtemp at 01:55."
)),
]).partial(format_instructions=parser.get_format_instructions())
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0, api_key=os.environ["OPENAI_API_KEY"])
# --- Run the chain ---
raw_response = llm.invoke(prompt.invoke({}))
diagnosis = parser.invoke(raw_response)
print(f"Alarm : {diagnosis.alarm_code}")
print(f"Severity : {diagnosis.severity.value}")
print(f"Root cause: {diagnosis.root_cause}")
print(f"Tags : {', '.join(diagnosis.affected_tags)}")
print(f"Action : {diagnosis.recommended_action}")
print(f"Confidence: {diagnosis.confidence:.0%}")

Expected output
Alarm : E-421
Severity : HIGH
Root cause: Sustained overcurrent (E-419) caused motor overtemperature
(E-421) within 5 minutes, indicating persistent overload
or cooling failure.
Tags : MotorTemp, LoadCurrent
Action : Inspect motor load and ventilation; reduce throughput or
stop line until motor cools below 70Β°C before restarting.
Confidence: 85%

5️⃣ RUNNABLESEQUENCE: COMPOSING STEPS WITH THE PIPE OPERATOR
In Cells 1-3 you manually wired steps together: render the prompt, call the model, invoke the parser. RunnableSequence collapses that into a single composable object using the | pipe operator.
This is the most important concept in this tutorial.
Every step in a LangChain chain maps directly to a node in a LangGraph StateGraph. In D4, you will take this exact chain and replace the linear pipe with a graph that can branch, loop, and recover from failures. The chain is the foundation; the graph is the structure around it.
EXPERIMENT CELL
RunnableSequence: full alarm triage chain with a custom transform step
Compose prompt β model β parser into a single chain, then add a RunnableLambda transform step to normalise PLC tag data before the LLM call.
from enum import Enum
from pydantic import BaseModel, Field
from langchain_core.output_parsers import PydanticOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableLambda
from langchain_openai import ChatOpenAI
import os
# --- Schema (same as Cell 3) ---
class Severity(str, Enum):
CRITICAL = "CRITICAL"
HIGH = "HIGH"
MEDIUM = "MEDIUM"
LOW = "LOW"
class FaultDiagnosis(BaseModel):
alarm_code: str = Field(description="The alarm code being analysed")
severity: Severity = Field(description="Classified severity level")
root_cause: str = Field(description="One-sentence root cause explanation")
affected_tags: list[str] = Field(description="PLC tags that support the diagnosis")
recommended_action: str = Field(description="Advisory action; no PLC writes")
confidence: float = Field(description="Confidence score 0.0-1.0", ge=0.0, le=1.0)
parser = PydanticOutputParser(pydantic_object=FaultDiagnosis)
# --- Custom transform: normalise raw tag dict into a readable string ---
# This is a RunnableLambda: a plain Python function wrapped as a Runnable
# In LangGraph D4, this becomes its own node in the graph
def normalise_tags(inputs: dict) -> dict:
"""Convert raw tag dict to a formatted string for the prompt."""
tags = inputs.get("raw_tags", {})
tag_str = ", ".join(f"{k}={v}" for k, v in tags.items())
return {**inputs, "plc_tags": tag_str}
normalise = RunnableLambda(normalise_tags)
# --- Prompt ---
prompt = ChatPromptTemplate.from_messages([
("system", "You are an industrial fault-triage assistant. Advisory only.\n{format_instructions}"),
("human", (
"Alarm {alarm_code} on {line}.\n"
"Tags: {plc_tags}.\n"
"Prior alarms (last 1h): {recent_alarms}"
)),
]).partial(format_instructions=parser.get_format_instructions())
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0, api_key=os.environ["OPENAI_API_KEY"])
# --- Build the chain with | ---
# normalise -> prompt -> llm -> parser
# Each step receives the output of the previous step
chain = normalise | prompt | llm | parser
# --- Invoke with raw inputs ---
alarm_context = {
"alarm_code": "E-421",
"line": "Filling Line 3",
"raw_tags": {
"MotorTemp": "92°C",
"LoadCurrent": "18A (rated 15A)",
"AmbientTemp": "28°C",
"RunHours": "4,821h",
},
"recent_alarms": "E-419 overcurrent 01:50, E-421 overtemp 01:55",
}
result = chain.invoke(alarm_context)
print(f"Severity : {result.severity.value}")
print(f"Root cause: {result.root_cause}")
print(f"Action : {result.recommended_action}")
print(f"Confidence: {result.confidence:.0%}")

Expected output
Severity : HIGH
Root cause: Sustained overcurrent followed by overtemperature on
Filling Line 3 motor suggests overload or cooling
degradation after 4,821 operating hours.
Action : Reduce line throughput, inspect motor ventilation and
load, and schedule bearing/cooling inspection before
next shift start.
Confidence: 88%

CHAIN STRUCTURE: EACH STEP IS A RUNNABLE
graph LR
IN[Raw alarm<br/>inputs dict]:::cyan
N[normalise_tags<br/>RunnableLambda]:::purple
P[ChatPromptTemplate<br/>renders messages]:::purple
M[ChatOpenAI<br/>model call]:::purple
PA[PydanticOutputParser<br/>validates schema]:::green
OUT[FaultDiagnosis<br/>typed object]:::green
IN --> N --> P --> M --> PA --> OUT
classDef cyan fill:#1a1a1e,stroke:#04d9ff,stroke-width:2px,color:#04d9ff;
classDef purple fill:#1a1a1e,stroke:#9e4aff,stroke-width:2px,color:#9e4aff;
classDef green fill:#1a1a1e,stroke:#00ff7f,stroke-width:2px,color:#00ff7f;
Every box is a Runnable. In LangGraph (D4), each box becomes a node, and you add conditional edges between them.
6️⃣ FALLBACKS AND THE LIMITS OF CHAINS
LangChain chains can handle one class of failure gracefully: model-level fallbacks. If your primary model is unavailable, with_fallbacks() routes to a backup automatically.
CONCEPT CELL
with_fallbacks(): automatic model-level failover
Wrap a complete chain with a fallback so a rate-limited or unavailable primary model automatically retries with a backup.
from enum import Enum
from pydantic import BaseModel, Field
from langchain_core.output_parsers import PydanticOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableLambda
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic # pip install langchain-anthropic
import os
# --- Minimal self-contained setup reused from earlier cells ---
class Severity(str, Enum):
CRITICAL = "CRITICAL"
HIGH = "HIGH"
MEDIUM = "MEDIUM"
LOW = "LOW"
class FaultDiagnosis(BaseModel):
alarm_code: str = Field(description="The alarm code being analysed")
severity: Severity = Field(description="Classified severity level")
root_cause: str = Field(description="One-sentence root cause explanation")
affected_tags: list[str] = Field(description="PLC tags that support the diagnosis")
recommended_action: str = Field(description="Advisory action; no PLC writes")
confidence: float = Field(description="Confidence score 0.0-1.0", ge=0.0, le=1.0)
parser = PydanticOutputParser(pydantic_object=FaultDiagnosis)
def normalise_tags(inputs: dict) -> dict:
tags = inputs.get("raw_tags", {})
tag_str = ", ".join(f"{k}={v}" for k, v in tags.items())
return {**inputs, "plc_tags": tag_str}
normalise = RunnableLambda(normalise_tags)
prompt = ChatPromptTemplate.from_messages([
("system", "You are an industrial fault-triage assistant. Advisory only.\n{format_instructions}"),
("human", (
"Alarm {alarm_code} on {line}.\n"
"Tags: {plc_tags}.\n"
"Prior alarms (last 1h): {recent_alarms}"
)),
]).partial(format_instructions=parser.get_format_instructions())
# Primary model (production-grade, more expensive)
primary_llm = ChatOpenAI(
model="gpt-4o",
temperature=0,
request_timeout=20,
api_key=os.environ["OPENAI_API_KEY"],
)
# Fallback model (different provider; kicks in if the primary fails)
fallback_llm = ChatAnthropic(
model="claude-sonnet-4-20250514",
temperature=0,
timeout=10,
api_key=os.environ["ANTHROPIC_API_KEY"],
)
# Chain with automatic failover
# By default, any exception from the primary (rate limit, timeout, outage)
# triggers the fallback; narrow this with exceptions_to_handle if needed
llm_with_fallback = primary_llm.with_fallbacks([fallback_llm])
# The rest of the chain is unchanged β swap llm for llm_with_fallback
chain = normalise | prompt | llm_with_fallback | parser
alarm_context = {
"alarm_code": "E-421",
"line": "Filling Line 3",
"raw_tags": {
"MotorTemp": "92°C",
"LoadCurrent": "18A (rated 15A)",
"AmbientTemp": "28°C",
},
"recent_alarms": "E-419 overcurrent 01:50, E-421 overtemp 01:55",
}
# --- Where chains stop working ---
# Fallbacks handle: model unavailable, rate limit, timeout
# Fallbacks do NOT handle:
# - "if alarm is CRITICAL, run a deeper analysis" (conditional routing)
# - "retry the diagnosis with more tag context" (cycle back to a previous step)
# - "all three agents agreed on HIGH β proceed" (shared consensus state)
#
# For those patterns, you need LangGraph, coming in D4.
print("Chain with fallback configured. Primary: gpt-4o (OpenAI), Fallback: claude-sonnet-4-20250514 (Anthropic)")
print("Invoke it the same way: chain.invoke(alarm_context)")

Expected output
Chain with fallback configured. Primary: gpt-4o (OpenAI), Fallback: claude-sonnet-4-20250514 (Anthropic)
Invoke it the same way: chain.invoke(alarm_context)
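The failover behaviour itself is easy to picture in plain Python. Here is a minimal sketch of the pattern with_fallbacks() implements, with hypothetical stand-in models (rate_limited, backup) for illustration:

```python
def with_model_fallbacks(primary, fallbacks):
    """Try the primary model; on any exception, try each fallback
    in order. Raise only if every model fails."""
    def invoke(x):
        errors = []
        for model in (primary, *fallbacks):
            try:
                return model(x)
            except Exception as exc:
                errors.append(exc)
        raise RuntimeError(f"all models failed: {errors}")
    return invoke

def rate_limited(_prompt):
    # Stand-in primary that is currently unavailable
    raise RuntimeError("429 rate limited")

def backup(prompt):
    # Stand-in fallback from a different provider
    return f"[backup] {prompt} -> HIGH"

triage = with_model_fallbacks(rate_limited, [backup])
print(triage("Alarm E-421"))  # [backup] Alarm E-421 -> HIGH
```

The real with_fallbacks() adds the same behaviour to any Runnable, so the rest of the chain (prompt, parser) never notices the swap.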
Three things chains cannot do, and why that matters for industrial workflows
① CONDITIONAL ROUTING
Chains always run every step in order. But industrial alarms need different analysis paths: a CRITICAL alarm should trigger a deep multi-tag analysis, while a LOW alarm needs only a one-line summary.
→ LangGraph solves this with conditional edges

② CYCLES
A chain runs once, left to right. But agents sometimes need to gather more data and loop: "I need the last 24h trend before I can classify this." Chains can't loop back to an earlier step.
→ LangGraph solves this with cycle edges

③ SHARED STATE
Multi-agent workflows need a shared scratchpad: one agent reads tags, another classifies severity, a third formats the report. Chains pass outputs serially; they have no concept of shared mutable state.
→ LangGraph solves this with TypedDict state
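As a preview of that third point, here is a dependency-free sketch of a shared TypedDict state passed through a sequence of node functions. AlarmState and the node functions are illustrative stand-ins, not the D4 API:

```python
from typing import TypedDict

class AlarmState(TypedDict, total=False):
    # Shared scratchpad that every node can read and update
    alarm_code: str
    plc_tags: dict
    severity: str
    report: str

def read_tags(state: AlarmState) -> AlarmState:
    # Node 1: fetch simulated PLC data into the shared state
    return {**state, "plc_tags": {"MotorTemp": "92C", "LoadCurrent": "18A"}}

def classify(state: AlarmState) -> AlarmState:
    # Node 2: classify severity from tags another node fetched
    temp = int(state["plc_tags"]["MotorTemp"].rstrip("C"))
    return {**state, "severity": "HIGH" if temp > 85 else "MEDIUM"}

def report(state: AlarmState) -> AlarmState:
    # Node 3: format a report from the accumulated state
    return {**state, "report": f"{state['alarm_code']}: {state['severity']}"}

state: AlarmState = {"alarm_code": "E-421"}
for node in (read_tags, classify, report):
    state = node(state)
print(state["report"])  # E-421: HIGH
```

D4's StateGraph formalises exactly this: each function becomes a node, and the graph decides which node runs next instead of a fixed for-loop.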
CHAIN (D3): linear, no branching
graph LR
A[normalise]:::purple --> B[prompt]:::purple --> C[llm]:::purple --> D[parser]:::green
classDef purple fill:#1a1a1e,stroke:#9e4aff,stroke-width:2px,color:#9e4aff;
classDef green fill:#1a1a1e,stroke:#00ff7f,stroke-width:2px,color:#00ff7f;
Always runs every step in order. Crashes if any step fails.
STATEGRAPH (D4): conditional, recoverable
graph LR
A[normalise<br/>node]:::purple --> B[triage<br/>node]:::purple
B -->|CRITICAL| C[deep analysis<br/>node]:::cyan
B -->|LOW| D[summary<br/>node]:::green
C --> E[report<br/>node]:::green
D --> E
classDef purple fill:#1a1a1e,stroke:#9e4aff,stroke-width:2px,color:#9e4aff;
classDef cyan fill:#1a1a1e,stroke:#04d9ff,stroke-width:2px,color:#04d9ff;
classDef green fill:#1a1a1e,stroke:#00ff7f,stroke-width:2px,color:#00ff7f;
Routes by severity. Each box is the same Runnable you built in D3.
✅ KEY TAKEAWAYS
- ✅ LangChain provides standardised interfaces (ChatModel, PromptTemplate, OutputParser): swap vendors without rewriting chains.
- ✅ ChatPromptTemplate separates prompt logic from business logic: test prompts cheaply without LLM calls.
- ✅ PydanticOutputParser bridges LLM text to typed Python objects: same goal as Instructor (T9), but composable inside chains.
- ✅ The | pipe operator composes Runnables into chains: each step is independently testable and replaceable.
- ✅ Chains are linear. For branching, cycles, and shared state (the three things industrial workflows need) you graduate to LangGraph in D4.
🔜 NEXT TUTORIAL
#4: LangGraph Fundamentals (StateGraph)
Take the chain you built here and restructure it as a fault-tolerant StateGraph with conditional edges, shared state, and cycle support.