Tutorial #8: Building Your First Tool-Using Agent
Read-Only, Industrially Safe
🎯 CORE MISSION OF THIS TUTORIAL
By the end of this tutorial, the reader will be able to:
- ✅ Understand what makes an agent tool-using
- ✅ Distinguish reasoning from tool execution
- ✅ Design a single, read-only tool boundary
- ✅ Safely connect AI reasoning to external data
- ✅ Prevent accidental control or actuation
This tutorial introduces controlled capability expansion.
⚠️ SAFETY BOUNDARY REMINDER
This tutorial performs analysis only.
It must never be connected to:
- Live PLCs
- Production deployment pipelines
- Safety-rated controllers
- Motion or power systems
> All outputs are advisory-only and always require explicit human approval before any real-world action.
🌐 VENDOR-AGNOSTIC ENGINEERING NOTE
This tutorial uses:
- ▸ Generic IEC 61131-3 Structured Text (ST)
- ▸ Python-based tooling
- ▸ No PLC runtimes
- ▸ No vendor SDKs
Patterns apply to all industrial environments.
1️⃣ WHAT IS A TOOL-USING AGENT?
A tool-using agent is an agent that can invoke external functions to gather information, then reason about the results.
Key Distinction
❌ Tool ≠ Control
Tools are not for commanding, writing to, or actuating systems
✅ Tool = Observation
Tools are for reading, parsing, and gathering information
In Industrial Contexts
✅ Tools Read Logs
Access historical data
✅ Tools Parse Code
Extract structured information
✅ Tools Fetch Specifications
Retrieve documentation
❌ Tools Never Actuate
No control authority
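To make the "parse code" category concrete: a read-only tool can extract structured information from ST source without ever touching a controller. A minimal sketch, assuming simple `Var := Value;` assignments (the function name and regex are illustrative, not from any standard library):

```python
import re

def extract_assignments(st_code: str) -> list[tuple[str, str]]:
    """Read-only parsing: pull simple `Var := Value;` assignments from ST code."""
    return re.findall(r"(\w+)\s*:=\s*([^;]+?)\s*;", st_code)

st_code = """
IF StartButton THEN
    MotorRunning := TRUE;
END_IF;
"""

print(extract_assignments(st_code))  # [('MotorRunning', 'TRUE')]
```

The tool observes and summarizes; it has no way to change the logic it reads.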
2️⃣ REFERENCE SCENARIO: READING PLC LOGIC FROM A FILE
We simulate a very common scenario:
- ▸ PLC logic exists as a text file
- ✅ The agent may read and analyze it
- ❌ The agent may not modify it
This is a perfect first tool boundary.
3️⃣ DESIGNING A READ-ONLY TOOL
Our tool will:
Tool Design Pattern
- 1. Accept a filename
Input parameter
- 2. Return its contents
Pure function output
- 3. Perform no side effects
Read-only guarantee
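The read-only guarantee in step 3 can be strengthened by confining reads to a known directory, so a mistaken or malicious path can never escape it. A minimal sketch, assuming a `plc_projects` sandbox directory (the directory and function name are illustrative):

```python
from pathlib import Path

# Assumed sandbox directory; in practice, point this at your project folder.
ALLOWED_DIR = Path("plc_projects")

def read_file_confined(name: str) -> str:
    """Read-only tool with a path check: refuses anything outside ALLOWED_DIR."""
    root = ALLOWED_DIR.resolve()
    target = (root / name).resolve()
    if root != target and root not in target.parents:
        raise PermissionError(f"refusing to read outside {root}: {name}")
    return target.read_text()

# Example usage
ALLOWED_DIR.mkdir(exist_ok=True)
(ALLOWED_DIR / "demo.st").write_text("IF StartButton THEN\n  MotorRunning := TRUE;\nEND_IF;\n")
print(read_file_confined("demo.st"))
```

Resolving the path before the check means `../`-style traversal is caught even when it is hidden inside a longer filename.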
Tool-Using Agent Flow
graph LR
A[User Request] --> B[Agent Reasoning]
B --> C[Call Tool]
C --> D[Read File]
D --> E[Return Data]
E --> B
B --> F[Analysis Result]
style A fill:#1a1a1e,stroke:#04d9ff,stroke-width:2px,color:#fff
style B fill:#1a1a1e,stroke:#9e4aff,stroke-width:2px,color:#fff
style C fill:#1a1a1e,stroke:#9e4aff,stroke-width:2px,color:#fff
style D fill:#1a1a1e,stroke:#04d9ff,stroke-width:2px,color:#fff
style E fill:#1a1a1e,stroke:#04d9ff,stroke-width:2px,color:#fff
style F fill:#1a1a1e,stroke:#00ff7f,stroke-width:2px,color:#00ff7f
Agent decides when to use tool, reasons about results
This boundary is deliberate and critical for safety.
4️⃣ PRACTICAL EXPERIMENTS
🧪 Experiment 1: Creating a Read-Only Tool
Objective
Define and test a safe, read-only tool function.
Python Code
def read_plc_code(file_path: str) -> str:
    """
    Read-only tool.
    Returns the contents of a PLC logic file.
    """
    with open(file_path, "r") as f:
        return f.read()

# Example usage
example_code = """
IF StartButton THEN
    MotorRunning := TRUE;
END_IF;
"""

with open("plc_logic.st", "w") as f:
    f.write(example_code)

print(read_plc_code("plc_logic.st"))

Expected Output
IF StartButton THEN
    MotorRunning := TRUE;
END_IF;

Interpretation
- ▸ ✅ The tool only reads
- ▸ ✅ No writes, no execution, no control
- ▸ ✅ This boundary is enforceable and auditable
- ▸ Cost: $0.00 | Runtime: <1 second
🧪 Experiment 2: Agent Reasoning + Tool Usage
Objective
Allow the agent to decide when to call the tool and how to use its output.
Python Code
from openai import OpenAI
import json

client = OpenAI()

system_prompt = """
You are an industrial analysis agent.

Rules:
- You may use the tool read_plc_code ONLY to read files.
- You must never suggest code changes.
- You must never issue commands.
- You must explain when and why you use the tool.
"""

user_prompt = """
Analyze the PLC logic in file plc_logic.st.
Explain what the logic does.
"""

tools = [
    {
        "type": "function",
        "function": {
            "name": "read_plc_code",
            "description": "Read PLC logic from a file",
            "parameters": {
                "type": "object",
                "properties": {
                    "file_path": {"type": "string"}
                },
                "required": ["file_path"]
            }
        }
    }
]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt}
    ],
    tools=tools
)

# Handle tool call if present
if response.choices[0].message.tool_calls:
    tool_call = response.choices[0].message.tool_calls[0]
    if tool_call.function.name == "read_plc_code":
        # Execute the tool
        args = json.loads(tool_call.function.arguments)
        result = read_plc_code(args["file_path"])

        # Send the tool result back to the agent
        response2 = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": user_prompt},
                response.choices[0].message,
                {
                    "role": "tool",
                    "tool_call_id": tool_call.id,
                    "content": result
                }
            ],
            tools=tools
        )
        print(response2.choices[0].message.content)
else:
    print(response.choices[0].message.content)

Expected Output
I will read the PLC logic file to understand its behavior.

[Tool Call: read_plc_code with file_path="plc_logic.st"]

The logic sets MotorRunning to TRUE when StartButton is pressed. No stop or fault logic is present.
Interpretation
- ▸ ✅ The agent decides when to use the tool
- ▸ ✅ The tool provides data only
- ▸ ✅ Reasoning remains fully visible
- ▸ ✅ Human can audit tool calls
- ▸ Cost: ~$0.03 | Runtime: 2-3 seconds
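The audit point above can be backed by a thin wrapper that records every tool invocation before it executes. A minimal sketch (the decorator and in-memory `AUDIT_LOG` are illustrative; a production system would write to an append-only store):

```python
import json
import time

AUDIT_LOG = []  # illustrative in-memory log; use durable storage in production

def audited(tool_fn):
    """Wrap a tool so every invocation is recorded before it runs."""
    def wrapper(**kwargs):
        AUDIT_LOG.append({
            "tool": tool_fn.__name__,
            "args": kwargs,
            "timestamp": time.time(),
        })
        return tool_fn(**kwargs)
    return wrapper

@audited
def read_plc_code(file_path: str) -> str:
    with open(file_path, "r") as f:
        return f.read()

# Example usage
with open("plc_logic.st", "w") as f:
    f.write("IF StartButton THEN\n  MotorRunning := TRUE;\nEND_IF;\n")

read_plc_code(file_path="plc_logic.st")
print(json.dumps(AUDIT_LOG, indent=2))
```

Because every call passes through the wrapper, a human reviewer can reconstruct exactly which files the agent observed and when.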
⚠️ THE INTEGRATION CHALLENGE
In production: mixed outputs break metrics collection, alert routing, incident debugging, and multi-agent coordination. You need uniform structure end-to-end.
Notice the inconsistency in this tutorial:
✅ Tool Calls: Structured

{
    "tool_call_id": "call_abc123",
    "function": {
        "name": "read_plc_code",
        "arguments": {
            "file_path": "plc_logic.st"
        }
    }
}

⚠️ Agent Analysis: Prose

The logic sets MotorRunning to TRUE when StartButton is pressed. No stop or fault logic is present.

Hard to parse, varies in format
Why This Matters for System Integration
If you want to build a pipeline like:
graph LR
A[Agent 1: Read Code] --> B[Agent 2: Analyze]
B --> C[Agent 3: Validate]
C --> D[Dashboard]
style A fill:#1a1a1e,stroke:#00ff7f,stroke-width:2px,color:#fff
style B fill:#1a1a1e,stroke:#fec20b,stroke-width:2px,color:#fff
style C fill:#1a1a1e,stroke:#fec20b,stroke-width:2px,color:#fff
style D fill:#1a1a1e,stroke:#04d9ff,stroke-width:2px,color:#fff
Agent 1's tool calls are structured ✅
But Agent 1's analysis output is prose ⚠️
Agent 2 can't reliably parse Agent 1's findings ❌
Operational Impact
- ⚠️ Metrics: Can't aggregate "issues found" counts from prose
- ⚠️ Routing: Can't trigger alerts based on severity if severity is buried in text
- ⚠️ Debugging: Can't search/filter by specific findings across multiple runs
- ⚠️ Multi-agent: Agent 2 can't reliably consume Agent 1's outputs
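The fragility is easy to demonstrate: extracting a finding from prose requires pattern-matching that breaks on any rewording, while a structured record parses identically every time. A sketch (the JSON finding format is hypothetical, anticipating Tutorial #9):

```python
import json
import re

prose = ("The logic sets MotorRunning to TRUE when StartButton is pressed. "
         "No stop or fault logic is present.")

# Brittle: a regex tuned to one phrasing fails the moment the wording shifts.
pattern = r"No (\w+) or (\w+) logic is present"
print(re.search(pattern, prose) is not None)                    # True for this phrasing
print(re.search(pattern, "Stop logic is absent.") is not None)  # False: same finding, reworded

# Robust: a structured finding parses the same way regardless of phrasing.
finding = json.loads('{"finding": "missing_stop_logic", "severity": "high"}')
print(finding["severity"])  # high
```

The downstream consumer of the structured record needs no knowledge of how the agent happened to phrase its analysis.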
Tutorial #9 will teach you schema-first design + validation + logging โ enforcing structured outputs not just for tool calls, but for all agent reasoning and analysis. This enables reliable multi-agent pipelines, automated validation, metrics collection, and incident response.
Important: Structured outputs ensure format consistency, not logical correctness. An agent can return perfectly structured JSON with wrong analysis. Structure makes correctness checkable (validators, constraints, cross-checks), not guaranteed.
🚫 EXPLICIT OPERATIONAL PROHIBITIONS
❌ Never Allow:
- ❌ Tools that write to or modify files or systems
- ❌ Agents chaining tools autonomously without oversight
- ❌ Exposure of hardware or network APIs
- ❌ Skipping the explanation of tool usage
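These prohibitions are enforceable in code: route every model-requested tool call through a guard that executes only allowlisted, read-only tools. A minimal sketch (the guard, registry, and tool names are illustrative):

```python
# Hypothetical guard: the only path from a model's tool request to execution.
ALLOWED_TOOLS = {"read_plc_code"}  # read-only tools only

def dispatch(tool_name: str, registry: dict, **kwargs):
    """Execute a requested tool only if it is explicitly allowlisted."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool '{tool_name}' is not allowlisted")
    return registry[tool_name](**kwargs)

registry = {
    "read_plc_code": lambda file_path: open(file_path).read(),
    "write_plc_code": lambda file_path, code: None,  # exists, but never allowlisted
}

# A write request is rejected before any side effect can occur.
try:
    dispatch("write_plc_code", registry, file_path="plc_logic.st", code="...")
except PermissionError as exc:
    print(exc)
```

The allowlist is a deny-by-default boundary: new tools gain no authority until a human deliberately adds them.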
✅ KEY TAKEAWAYS
- ✅ Tools expand perception, not authority
- ✅ Read-only tools are the safest starting point
- ✅ Tool usage must be explicit and explainable
- ✅ This is the foundation for supervised agents
📘 NEXT TUTORIAL
#9: Structured Outputs for PLC Data Extraction
Learn how to force agents to return machine-parseable results.
🧭 ENGINEERING POSTURE
This tutorial enforced:
- ▸ Capability before autonomy
- ▸ Observation before action
- ▸ Explicit boundaries
- ▸ Human-in-the-loop by design