Comprehensive Guide: Securing OT Water Pressure Sensors in a Critical Water Infrastructure



Executive Summary
Water supply systems are formally designated as Critical Information Infrastructure (CII) in most jurisdictions. Their disruption does not merely cause inconvenience — it can result in cascading public health crises, economic paralysis, and in extreme cases, loss of life. This guide applies defense-in-depth security to Operational Technology (OT) monitoring devices, recognizing that a compromised pressure sensor could disrupt water distribution operations or feed falsified telemetry into control decisions.
Scenario
Design and deploy a secure OT monitoring system for a Water Pressure Sensor (WPS) inside a Critical Information Infrastructure (CII) pump room supporting a life-dependent water source.
Key Considerations in this Guide
This guide is grounded in internationally recognized standards and real-world operational experience. It prioritises:
- Safety of Personnel
- Deterministic Control
- System Availability
- Integrity of physical process data
Threat Model for the Pump Room
Examples:
Threat 1 – Sensor spoofing
Threat 2 – Manipulated pressure telemetry
Threat 3 – OT network lateral movement
Threat 4 – Wireless gateway compromise
Threat 5 – Malicious firmware insertion
Threat 6 – Insider engineering workstation compromise
Phase 1: Planning & Risk Assessment
The goal of this phase is to assess the overall operational and security landscape — covering policies, roles, procurement requirements, and risk management. This will require defining the security requirements using recognized frameworks and local regulatory requirements.
As this pump room supports a life-dependent water source, uninterrupted operation is critical. It is therefore important to conduct a thorough risk assessment and identify credible threat scenarios and their operational consequences for the OT infrastructure (e.g., attackers remotely manipulating pressure readings to cause a water supply disruption).
Rationale
You cannot secure what you don't understand. OT environments differ fundamentally from IT:
- Safety and availability take precedence over confidentiality
- Legacy systems with decades-long lifespans
- Real-time requirements that preclude standard security updates
- Potential for physical consequences to human life
Actions
- Conduct an OT-specific risk assessment using the ISA/IEC 62443 framework
- Define security zones and conduits per the ISA-95 Purdue Model
- Establish a cross-functional team including:
- Process engineers
- OT technicians
- Cybersecurity specialists
- Physical security personnel
- Compliance/legal representatives
Outcome
A Security Design Document mapping:
- Asset inventory with criticality ratings
- Data flows between Levels 0-3 of Purdue Model
- Safety Instrumented System (SIS) boundaries
- Regulatory requirements (e.g., NERC-CIP, WaterISAC guidelines)
Phase 2: Sensor Selection & Hardening
Rationale
The sensor is the first link in the chain. Compromised hardware provides false data or becomes an entry point to the wider OT network.
Actions
A. Selection Criteria
- Choose sensors with native security features:
- Encryption at rest for configuration data
- Secure boot with cryptographic verification
- Hardware security modules (HSM) or TPM chips
- No default credentials (require unique credentials per device)
- Compliance with recognized standards:
- IEC 62443-4-2 – Technical security requirements for industrial automation components (identification & authentication, encryption, integrity).
- UL 2900 – Software cybersecurity for network‑connectable products (focus on vulnerabilities, known malware).
- NIST SP 800-82 – Guide to Operational Technology (OT) security, providing comprehensive security controls for industrial control systems.
- Common Criteria (ISO/IEC 15408) – International standard for evaluating security functionality and assurance; certification (e.g., EAL2+, EAL4+) provides independent verification of vendor claims.
Relationship between these standards:
| Standard | Focus | Role in Sensor Selection |
|---|---|---|
| IEC 62443‑4‑2 | Technical security requirements for industrial automation components | Core product specification – defines what security capabilities the sensor must have. |
| UL 2900 | Software cybersecurity for network‑connectable products | Vulnerability testing – ensures firmware is free from known flaws. |
| NIST SP 800‑82 | Comprehensive OT security guidance | Reference framework – aligns sensor security with overall system security architecture. |
| Common Criteria (ISO/IEC 15408) | Generic IT security evaluation methodology | Certification – verifies that the vendor’s claims (including those from IEC 62443) are actually implemented. |
- Operational requirements:
- Intrinsically safe for pump room environments
- Analog backup capability (4-20mA output alongside digital)
- Local display/indicator for manual verification
- Physical tamper evidence (seals, breakable housings)
B. Hardening Process
- Before deployment:
- Update to latest vendor-approved firmware (never IT patches)
- Remove/uninstall all unnecessary services (web servers, FTP, Telnet)
- Disable unused ports and protocols
- Change all default credentials using secure password vault
- Generate unique cryptographic keys per device
- Physical installation:
- Install in locked enclosures with tamper switches
- Use conduit for all cabling to prevent tampering
- Implement red/black separation where power (red) and data (black) are physically separated
C. Hardening SCADA Servers and Workstations (Purdue Level 2)
While the primary focus is sensors, the SCADA servers and engineering workstations that interact with them also require hardening. CIS Benchmarks are relevant here, but must be applied with caution:
- Applicability: SCADA servers typically run Windows Server or Linux; workstations run Windows 10/11. For these, CIS Benchmarks (e.g., CIS Microsoft Windows Server Benchmark, CIS Red Hat Enterprise Linux Benchmark) provide detailed configuration guidance.
- Adaptation required: Before applying any CIS recommendation, assess its impact on real‑time operations and safety functions. For example:
- Do not disable necessary services like DCOM or OPC.
- Test all changes in a non‑production replica environment.
- Document deviations from the benchmark with risk acceptance.
- Alternative standards: Some organizations prefer ISA‑62443‑3‑3 (system security requirements) or vendor‑specific hardening guides (e.g., Rockwell Automation’s “Secure Deployment Guide”). However, CIS Benchmarks remain a valuable starting point for the underlying OS.
- Outcome: A hardened SCADA server/workstation that meets both security best practices and operational reliability.
Phase 3: Network Architecture & Segmentation
The network must be designed so that the pump-room sensor system is isolated from non-OT networks. Place the sensor and its controller (PLC or gateway) in a restricted OT VLAN/segment. Do not put these devices directly on the corporate LAN or the Internet. Use firewalls or industrial routers to create a demilitarized zone (DMZ) for any necessary IT/Internet connections.
For example, all remote access (engineer laptops, cloud SCADA links) should go through a secured jump host or VPN gateway in a DMZ. The Purdue reference model explicitly recommends multiple layers (enterprise IT, DMZ, control zone) to contain breaches.
Rationale
A flat OT network allows lateral movement. Segmentation contains breaches and preserves critical safety functions.
Actions
A. Implement Purdue Model Architecture
Level 3: Operations Management (DMZ)
↓
Level 2: Supervisory Control (Historian, HMI)
↓
Level 1: Basic Control (PLC, RTU)
↓
Level 0: Process (Your pressure sensors)
B. Critical Design Principles
- One‑way data diodes
- What: Hardware devices or software‑enforced unidirectional gateways that physically allow data to travel in only one direction.
- Purpose: If a sensor network is compromised, attackers cannot send commands back into the sensor or control system.
- Outcome: Even with full control of a sensor, an adversary cannot manipulate the PLC/HMI because the return path does not exist.
- Use case: Between Level 1 (PLCs) and Level 2 (Historian/HMI). Sensors usually send telemetry to PLCs, but for extra protection, a diode could be placed between the sensor subnet and the PLC subnet – however, this requires careful analysis of any bidirectional communication needed (e.g., configuration).
- Industrial DMZ (iDMZ)
- What: A buffer network that sits between the IT network (corporate) and the OT network (control).
- Purpose: IT and OT cannot communicate directly; all cross‑traffic terminates in the iDMZ.
- Design: Typically contains servers that replicate data from OT to IT (e.g., historians, patch servers, authentication servers). No direct IT‑to‑OT device communication is allowed.
- Outcome: A breach in IT does not immediately reach OT; a breach in OT does not easily pivot into IT.
- Dedicated OT firewalls
- What: Firewalls specifically designed for industrial environments, often supporting industrial protocols.
- Capabilities beyond standard IT firewalls:
- Deep packet inspection (DPI) for Modbus/TCP, DNP3, PROFINET, etc. – can validate register ranges, function codes, and detect malformed packets.
- Whitelisting – only permit explicitly allowed source‑destination‑protocol combinations.
- Protocol conformance – reject packets that violate the specification (e.g., a Modbus read request with illegal length).
- Why not a standard IT firewall? IT firewalls cannot parse industrial payloads; they see only IP/TCP headers. An OT firewall must understand the application layer to enforce meaningful policy.
- VLAN segmentation
- What: Virtual LANs logically separate devices on the same physical switch.
- Purpose: Contain lateral movement. If an attacker compromises a pressure sensor, they cannot directly ARP‑spoof or broadcast to the PLC or another sensor unless a router (firewall) routes between VLANs.
- Implementation details: Each functional group (pressure sensors, flow meters, safety PLCs, environment) in its own VLAN. Routing between VLANs is forced through an OT firewall with strict rules.
- Outcome: A single compromised device can only attack its immediate VLAN; the firewall blocks attempts to reach other zones.
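The DPI and whitelisting behaviour of an OT firewall, described above, can be sketched in miniature. The flow table, addresses, and policy below are illustrative assumptions, not a real firewall's API; only the 125-register limit comes from the Modbus specification:

```python
# Hypothetical sketch of Modbus/TCP whitelisting as an OT firewall performs it.
# Flow table and addresses are invented for illustration.

ALLOWED_FLOWS = {
    # (src, dst): set of permitted Modbus function codes
    ("10.10.20.5", "10.10.10.2"): {3, 4},   # HMI may only read from PLC
}
READ_HOLDING = 3          # Modbus "Read Holding Registers"
MAX_REGISTERS = 125       # spec limit for a single read request

def inspect(src: str, dst: str, function_code: int, quantity: int) -> bool:
    """Return True if the request conforms to policy and the protocol spec."""
    allowed = ALLOWED_FLOWS.get((src, dst))
    if allowed is None or function_code not in allowed:
        return False                       # flow or function not whitelisted
    if function_code == READ_HOLDING and not (1 <= quantity <= MAX_REGISTERS):
        return False                       # malformed / out-of-spec request
    return True

# A read of 10 registers from the HMI passes; a write (code 16) is dropped.
print(inspect("10.10.20.5", "10.10.10.2", 3, 10))   # True
print(inspect("10.10.20.5", "10.10.10.2", 16, 10))  # False
```

Note how a request can be dropped for two distinct reasons: the flow/function is not whitelisted, or the payload violates the protocol specification.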
C. iDMZ vs. Air‑gapped Network – Demystification
They are NOT the same thing.
| Concept | Definition | Role in Critical Design Principles |
|---|---|---|
| Air‑gapped network | A network that is physically isolated – no network connections to any other network, including the internet. Data transfer must be manual (e.g., sneakernet). | Extreme isolation. Used for the most critical safety systems (e.g., nuclear reactor protection). Not practical for most water SCADA systems that require remote monitoring or business integration. |
| iDMZ | A controlled bridge between two networks (IT and OT). Connections are tightly controlled, often via proxy servers or replication services. | Practical security – enables necessary data exchange while reducing risk. It is not air‑gapped; it is a demilitarised zone. |
Air‑gapping as a Critical Design Principle?
Only for systems that truly need total isolation and can independently operate without external connectivity.
In a water pump room, you might air‑gap the Safety Instrumented System (SIS) while the main SCADA network uses an iDMZ for remote operations.
D. Sensor Network Specifics
- Industrial‑grade switches – why they matter and how to harden them
- Port security – Bind a specific MAC address to a physical port; if a different device connects, the port shuts down or alarms. Prevents an attacker from unplugging the sensor and plugging in a rogue laptop.
- Storm control (broadcast/multicast) – Limits the rate of broadcast/multicast traffic per port. OT networks often have chatty protocols (e.g., PROFINET). A misconfigured or malicious device could flood the network with broadcasts, causing denial of service. Storm control caps this.
- Spanning Tree Protocol (STP) hardening – STP prevents loops in redundant topologies. Attackers can inject BPDU packets to cause topology changes and denial of service. Hardening includes enabling BPDU Guard (shuts down a port if a BPDU is received from an end device) and Root Guard (prevents rogue switches from becoming root bridge). Outcome: The switch is resilient against layer‑2 attacks.
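As a concrete illustration, a hardened sensor-facing access port might look like the following hypothetical Cisco IOS-style fragment. Interface names, VLAN IDs, and thresholds are assumptions to adapt to your switch platform:

```text
! Hypothetical IOS-style hardening for a sensor-facing access port.
interface GigabitEthernet0/1
 description pressure-sensor-01
 switchport mode access
 switchport access vlan 110
 switchport port-security
 switchport port-security maximum 1
 switchport port-security mac-address sticky
 switchport port-security violation shutdown
 storm-control broadcast level 1.00
 spanning-tree bpduguard enable
!
! Uplinks toward other switches use Root Guard instead of BPDU Guard.
interface GigabitEthernet0/24
 spanning-tree guard root
```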
- Implement network monitoring at the sensor VLAN – why TAPs, Wireshark, and anomaly detection?
- Passive traffic monitoring with Wireshark/Moloch – Why passive? Active scanning (nmap, Nessus) can crash fragile OT devices or disrupt deterministic timing. Passive monitoring listens to existing traffic without injecting probes. Wireshark for deep packet inspection, Moloch (now Arkime) for indexed, searchable full‑packet capture at scale.
- Network TAPs (Test Access Points) – Hardware devices that mirror traffic without introducing a point of failure and without the switch’s CPU processing overhead. Why not SPAN ports? SPAN ports can drop packets under load, are attackable via switch misconfiguration, and may fail to capture certain errors. TAPs are fail‑open, ensuring monitoring never disrupts the live network. Outcome: Reliable, forensic‑grade packet capture.
- Anomaly detection tuned for OT – Algorithms that learn the normal communication patterns of the sensor VLAN. Examples of anomalies: sensor suddenly communicates with a new IP address; Modbus read requests become write requests; traffic volume spikes or drops unexpectedly; new protocols appear. Why OT‑specific? Standard IT IDS will flag legitimate industrial traffic (e.g., unusual payloads) as suspicious. OT‑tuned systems reduce false positives.
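A toy version of such OT-tuned anomaly detection, assuming invented flows and a 5-second telemetry cadence, might look like:

```python
# Minimal sketch of OT-tuned anomaly detection for the sensor VLAN.
# A real deployment would learn from TAP captures; flows below are invented.
from statistics import mean, pstdev

class SensorBaseline:
    """Learns which peers a sensor talks to and its normal send interval."""
    def __init__(self, intervals, peers):
        self.peers = set(peers)
        self.mu = mean(intervals)
        self.sigma = pstdev(intervals)

    def is_anomalous(self, peer, interval, k=3.0):
        if peer not in self.peers:
            return True                       # sensor talking to a new IP
        return abs(interval - self.mu) > k * max(self.sigma, 0.1)

# Baseline: telemetry to the PLC roughly every 5 seconds.
baseline = SensorBaseline(
    intervals=[5.0, 5.1, 4.9, 5.0, 5.2],
    peers={"10.10.10.2"},
)
print(baseline.is_anomalous("10.10.10.2", 5.05))   # False: normal traffic
print(baseline.is_anomalous("203.0.113.7", 5.0))   # True: unknown peer
print(baseline.is_anomalous("10.10.10.2", 60.0))   # True: timing anomaly
```

Because the baseline is learned passively from mirrored traffic, this approach never injects packets into the fragile OT network.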
Phase 4.1: Secure Communication - Wired
Rationale
To secure sensor communications in life-dependent OT environments, two complementary security layers must be implemented: the network layer (Layer 3) and the application layer (Layer 7).
Actions
A. Two Complementary Approaches for Securing Sensor Communications
In a real OT environment, you will have a mix of modern as well as legacy sensor devices, bearing in mind that hardware is often custom-built by OEM vendors and may remain in service for decades. Therefore, two approaches are needed:
1️⃣ Application-Layer Identity Security (Layer 7)
If the sensor supports modern protocols with built‑in security, enable the encryption and authentication within the application layer.
- These protocol options are end‑to‑end, independent of the underlying network, and allow fine‑grained identity management via certificates:
- MQTT with TLS
- OPC UA (with security profiles)
- Secure DNP3 (IEEE 1815‑2012) or IEC 62351‑secured versions
This ensures:
- Cryptographic device identity
- End-to-end payload integrity
- Command-level authorization
2️⃣ Network-Layer Transport Protection (Layer 3)
If the sensor only speaks plain‑text Modbus, serial RS‑485, or other legacy protocols, you cannot apply mTLS directly.
- These approaches wrap the entire communication channel in network‑layer security, creating a secure tunnel:
- Serial‑to‑Ethernet converters with built‑in IPsec,
- VPN tunnels (IPsec, OpenVPN, WireGuard) between the sensor subnet and the collector.
- Enforce strict firewall rules and one-way flows where possible
This shields traffic from interception but does not replace device identity controls.
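As an illustration of the tunnel approach, a minimal WireGuard configuration for a hypothetical converter-side gateway might look like this. Addresses, the hostname, and the key placeholders are assumptions; real keys are generated with `wg genkey`:

```ini
# Hypothetical WireGuard tunnel from a serial-to-Ethernet converter's
# gateway to the data collector.
[Interface]
Address = 10.99.0.2/32
PrivateKey = <gateway-private-key>
ListenPort = 51820

[Peer]
PublicKey = <collector-public-key>
Endpoint = collector.ot.example:51820
AllowedIPs = 10.99.0.1/32        # tunnel only sensor telemetry, nothing else
PersistentKeepalive = 25
```

Restricting `AllowedIPs` to the single collector address supports the one-way-flow principle above: the tunnel carries telemetry to one destination and nothing else.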
B. mTLS Implementation
- Certificate Authority Hierarchy:

Root CA (offline, air-gapped)
│
└── OT Intermediate CA (online, protected)
    │
    ├── Device CA (for sensors)
    ├── User CA (for engineers)
    └── Server CA (for applications)

- Sensor Certificate Deployment:

# Example: Generating sensor certificate
openssl req -new -key sensor.key -out sensor.csr \
    -subj "/C=US/O=WaterUtility/OU=PumpRoom/CN=pressure-sensor-01"

# Sign with Device CA (template for automation)
openssl x509 -req -in sensor.csr -CA device-ca.crt \
    -CAkey device-ca.key -CAcreateserial -out sensor.crt \
    -days 365 -sha256 -extfile sensor_ext.cnf

- mTLS Configuration for Sensor Gateway:

# Example: Nginx as TLS termination proxy
server {
    listen 8883 ssl;

    # Server certificates
    ssl_certificate /etc/ssl/server.crt;
    ssl_certificate_key /etc/ssl/server.key;

    # Client certificate verification
    ssl_client_certificate /etc/ssl/device-ca.crt;
    ssl_verify_client on;
    ssl_verify_depth 2;

    # Strong cipher suites only
    ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384';
    ssl_prefer_server_ciphers on;

    # Protocol restrictions
    ssl_protocols TLSv1.2 TLSv1.3;
}

- Certificate Management for OT:
- Automated provisioning using SCEP or EST protocols
- OCSP stapling for revocation checking without network flood
- Short lifetimes (30-90 days) with automated renewal
- CRL distribution points accessible to OT systems
C. Key Management Specifics
- HSM integration for private key storage
- Regular key rotation synchronized with maintenance windows
- Emergency revocation procedures for compromised sensors
Phase 4.2: Secure Communication - Wireless
Rationale
While traditional wired fieldbus protocols (Modbus, PROFIBUS) or industrial Ethernet are common in pump rooms, LoRaWAN (Long Range Wide Area Network) offers significant advantages for specific CII water monitoring use cases for pump rooms where wireless deployment is feasible:
- Retrofit Flexibility: Installing new wired pressure sensors in existing, operational pump rooms can be disruptive and expensive. LoRaWAN enables wireless retrofitting.
- Remote Monitoring: It can monitor pressure at distal points in the water network (e.g., far ends of pipes, reservoirs) where power and network infrastructure are unavailable.
- Battery Longevity: LoRaWAN end devices are designed for ultra-low power consumption, allowing years of operation on battery power, which is critical for maintaining monitoring during grid outages.
However, integrating a wireless, low-power technology into a CII environment introduces a distinct threat model. The communication must be secured end-to-end, respecting the device's energy constraints while preventing it from becoming a wireless bridge into the OT network.
Actions
A. LoRaWAN Architecture Components
A secure LoRaWAN deployment consists of several layers, each of which must be secured:
- End Device (The Pressure Sensor): A LoRaWAN-enabled pressure sensor (e.g., Dragino PS-LB, RAKwireless Sensor Hub with a 4-20mA probe). This device reads the physical pressure and transmits the data via LoRa RF.
- Gateway: A LoRaWAN gateway receives RF messages from many end devices within range and relays them to a Network Server via a backhaul connection (e.g., Ethernet, Cellular). The gateway is a critical bridge; it typically does not decrypt the sensor payload but forwards encrypted packets.
- Network Server (NS): The central brain of the LoRaWAN network. It handles device activation, manages duplicate packet elimination, and routes decrypted application data to the appropriate Application Server. In a CII context, this server should reside within your OT network or a secure, private cloud.
- Application Server (AS): The final destination where pressure data is processed, visualized, and acted upon (e.g., SCADA, historian).
B. Security Best Practices for CII LoRaWAN Deployment
- Prioritise Over-the-Air Activation (OTAA): LoRaWAN supports two activation methods: OTAA and ABP (Activation by Personalization).
- Problem with ABP: ABP uses static, pre-configured session keys. If a device is compromised and its keys extracted, an attacker can impersonate the device indefinitely, and the network has no built-in mechanism to re-key it. It also lacks proper replay-attack protection.
- Solution (OTAA): Always use OTAA, a join procedure in which the device and network dynamically derive new, unique session keys (NwkSKey, AppSKey) for each power cycle or session. This ensures forward secrecy and simplifies secure key management over the device's lifetime.
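To make the OTAA derivation concrete, the sketch below builds the 16-byte input block that LoRaWAN 1.0.x encrypts under the AppKey to produce a session key (prefix 0x01 yields NwkSKey, 0x02 yields AppSKey). The AES step itself is omitted to keep the sketch dependency-free, and the nonce values are invented:

```python
# Sketch of how LoRaWAN 1.0.x derives session keys during an OTAA join.
# Only the 16-byte input block is built here; the final step encrypts it
# with AES-128 under the AppKey (e.g., via a crypto library on the device).

def session_key_block(prefix: int, app_nonce: bytes, net_id: bytes,
                      dev_nonce: bytes) -> bytes:
    """Block encrypted under AppKey: prefix 0x01 -> NwkSKey, 0x02 -> AppSKey."""
    assert len(app_nonce) == 3 and len(net_id) == 3 and len(dev_nonce) == 2
    block = bytes([prefix]) + app_nonce + net_id + dev_nonce
    return block + b"\x00" * (16 - len(block))   # zero-pad to AES block size

# Fresh nonces on each join mean fresh session keys on each join.
blk = session_key_block(0x01, b"\xaa\xbb\xcc", b"\x00\x00\x13", b"\x12\x34")
print(len(blk), blk[0])   # 16 1
```

Because the server-chosen AppNonce and device-chosen DevNonce change on every join, an attacker who records one session's traffic learns nothing about the next session's keys.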
- Understand and Protect the LoRaWAN Root Keys:
- LoRaWAN OTAA security relies on a 128-bit AES root key and unique identifiers provisioned on the device:
- AppKey: The root key used during the OTAA join procedure to derive the session keys (LoRaWAN 1.1 adds a second root key, NwkKey).
- DevEUI/JoinEUI: Unique device and join-server identifiers. These are not secret keys, but they must be accurately inventoried.
- Action: The root key must be injected into the device in a secure, audited environment before deployment. It should be stored in a Hardware Security Module (HSM) or a secure element on the device itself where possible. The corresponding keys must be stored securely in your Network Server's database, never in plaintext.
- Enforce End-to-End Encryption at the Application Layer:
- LoRaWAN provides two layers of 128-bit AES encryption:
- Network Layer (NwkSKey): Ensures message integrity and authenticity between the device and the Network Server.
- Application Layer (AppSKey): Ensures end-to-end confidentiality between the device and the Application Server. The Network Server cannot read the application payload (the actual pressure reading).
- Action: Verify that your LoRaWAN infrastructure strictly separates these keys. The Network Server should only possess the NwkSKey, while the Application Server (your SCADA front-end) holds the AppSKey. This prevents a compromised Network Server from revealing sensitive pressure data.
- Secure the Gateway as a Critical Edge Asset:
- A gateway placed in or near a pump room is a physically accessible and high-value target.
- Action:
- Harden the Gateway OS: Change default passwords, disable unnecessary services (e.g., Telnet, FTP), and apply security patches.
- Secure the Backhaul: Encrypt all traffic from the gateway to the Network Server using a VPN (e.g., IPsec, WireGuard) or TLS. The gateway must authenticate to the network.
- Physical Security: Mount the gateway in a locked enclosure with tamper detection. Control physical access rigorously.
- Implement Robust Key and Firmware Lifecycle Management:
- Key Rotation: Session keys derived via OTAA are temporary. Plan for and test the process of devices rejoining the network to refresh these keys. For long-term root keys (AppKey), have a documented, secure procedure for rotation, perhaps tied to a major maintenance cycle or security incident.
- Secure OTA Firmware Updates: The sensor itself will inevitably need firmware updates.
- Action: Ensure the device supports cryptographically signed firmware updates. The device must verify the signature before installing any new firmware to prevent malicious code from being flashed onto the sensor. This process itself occurs over the secure LoRaWAN channel.
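The verify-before-flash rule can be sketched as follows. Real devices use asymmetric signatures (e.g., ECDSA over the image) with the public key in secure storage; a symmetric HMAC stands in here purely so the sketch runs on the standard library, and the key and image bytes are invented:

```python
# Simplified sketch of "verify before flash". An HMAC stands in for a real
# asymmetric firmware signature so the sketch stays standard-library only.
import hashlib
import hmac

DEVICE_KEY = b"provisioned-at-manufacture"   # hypothetical device secret

def verify_then_flash(image: bytes, tag: bytes) -> bool:
    """Install the image only if its authentication tag checks out."""
    expected = hmac.new(DEVICE_KEY, image, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        return False          # reject: image tampered with or wrong signer
    # flash_to_inactive_bank(image)  # real device writes the new image here
    return True

good = b"firmware-v2.1"
good_tag = hmac.new(DEVICE_KEY, good, hashlib.sha256).digest()
print(verify_then_flash(good, good_tag))             # True
print(verify_then_flash(b"evil-payload", good_tag))  # False
```

The essential property is that verification happens before anything is written to flash, and comparison uses a constant-time check.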
- Network Monitoring & Anomaly Detection (LoRaWAN-Specific):
- Baseline Traffic: LoRaWAN sensors have predictable traffic patterns (e.g., one uplink every 15 minutes).
- Action: Your monitoring system (from Phase 5) should be tuned to detect anomalies specific to LoRaWAN, such as:
- Sudden changes in transmission frequency (could indicate a battery-draining attack or malware).
- Join requests from unauthorized or unknown devices.
- Failed authentication attempts on the Network Server.
- Unexpected gateway disconnections or backhaul anomalies.
Phase 5: Monitoring, Detection & Response
Rationale
Preventive controls alone are insufficient. Detection and response minimize dwell time and impact.
Actions
A. OT-Specific Monitoring
- Baseline normal operations:
- Expected communication patterns (which sensors talk to which collectors)
- Normal data ranges (pressure readings within 40-80 PSI)
- Typical timing (readings every 5 seconds ± tolerance)
- Implement OT-aware SIEM:
- Splunk with Industrial Asset Intelligence
- IBM QRadar with OT modules
- Microsoft Sentinel with OT connectors
- Detection rules for:
- Protocol violations (Malformed Modbus packets)
- Out-of-range values (Pressure readings > 1000 PSI)
- Communication anomalies (Sensor talking to new IP)
- Physical tamper alerts (Enclosure opened)
- Time anomalies (NTP drift or timestamp manipulation)
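A few of the rules above can be encoded as simple predicates, as in this sketch. Field names, the peer whitelist, and the drift threshold are invented for illustration; the pressure limit mirrors the example in this section:

```python
# Illustrative encoding of detection rules as simple predicates.

RULES = {
    "out_of_range": lambda e: e.get("psi", 0) > 1000 or e.get("psi", 0) < 0,
    "unknown_peer": lambda e: e.get("dst") not in {"10.10.10.2"},
    "time_anomaly": lambda e: abs(e.get("ntp_drift_ms", 0)) > 500,
    "tamper":       lambda e: e.get("enclosure_open", False),
}

def evaluate(event: dict) -> list:
    """Return the names of every rule the event trips."""
    return [name for name, rule in RULES.items() if rule(event)]

normal = {"psi": 62, "dst": "10.10.10.2", "ntp_drift_ms": 4}
spoofed = {"psi": 1500, "dst": "203.0.113.7", "ntp_drift_ms": 4}
print(evaluate(normal))    # []
print(evaluate(spoofed))   # ['out_of_range', 'unknown_peer']
```

In a production SIEM these predicates would be correlation rules over parsed telemetry, but the structure (named rule, predicate, alert list) is the same.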
B. Incident Response Planning
- OT-specific playbooks that consider:
- Safety first - Never automatically disconnect without human review
- Manual override procedures for critical functions
- Controlled degradation rather than complete shutdown
- Communication protocols for OT incidents:
- Dedicated incident response team with 24/7 availability
- Pre-established relationships with equipment vendors
- Regulatory reporting requirements (within 1 hour for CII breaches in many jurisdictions)
Outcome
A detection and response capability that:
- Identifies anomalies within minutes
- Prevents safety incidents during response
- Meets regulatory reporting requirements
Phase 6: Maintenance & Lifecycle Management
Rationale
OT systems have 15-30 year lifespans. Security must persist through maintenance, upgrades, and personnel changes.
Actions
A. Change Management Procedures – The Jump Host Principle
The Problem:
Engineers, administrators, or vendors occasionally need to perform maintenance on OT devices (e.g., updating PLC logic, reconfiguring sensors). Historically, they might connect their laptop directly to an OT switch or device. This introduces severe risks:
- Lack of hygiene: A support laptop might be infected with malware from the corporate network or internet. Directly plugging it into an OT switch bypasses all perimeter defences.
- Lack of accountability: Direct connections are rarely logged; you cannot prove who did what and when.
- Misconfiguration risk: A momentary mis‑key can send rogue packets that disrupt real‑time operations.
The Solution – Jump Host / Jump Server:
A hardened, dedicated server (or virtual machine) placed in a controlled zone – typically the iDMZ or a dedicated OT admin subnet. All interactive access to OT devices must first authenticate to the jump host, then from there initiate a connection to the target OT device.
- Additional controls:
- Session recording – every keystroke is logged.
- Time‑bound approvals – access must be requested and approved; credentials expire.
- Protocol filtering – the jump host allows only specific protocols (e.g., RDP, SSH) to specific OT device IPs.
- No file transfer – or tightly controlled malware scanning.
Example workflow:
- Engineer submits a change request → approved.
- Temporary firewall rule opens access from jump host to target PLC.
- Engineer connects via RDP to jump host (using MFA).
- From jump host, launches engineering software (e.g., Siemens TIA Portal) that communicates with the PLC.
- All traffic is recorded.
- After maintenance, firewall rule is automatically removed.
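The time-bound approval step of this workflow can be sketched as follows. In practice the firewall rule would be opened and removed via the firewall's own management API; all names, addresses, and durations here are invented:

```python
# Toy sketch of the time-bound approval logic behind the jump host workflow.
from datetime import datetime, timedelta

class MaintenanceWindow:
    """Grants jump-host-to-PLC access only inside an approved window."""
    def __init__(self, target_ip: str, approved_at: datetime,
                 duration: timedelta = timedelta(hours=2)):
        self.target_ip = target_ip
        self.expires = approved_at + duration

    def allows(self, dst_ip: str, now: datetime) -> bool:
        # Access is valid only for the approved target, before expiry.
        return dst_ip == self.target_ip and now < self.expires

start = datetime(2024, 5, 1, 9, 0)
window = MaintenanceWindow("10.10.10.2", approved_at=start)
print(window.allows("10.10.10.2", start + timedelta(hours=1)))  # True
print(window.allows("10.10.10.2", start + timedelta(hours=3)))  # False: expired
print(window.allows("10.10.10.9", start + timedelta(hours=1)))  # False: wrong PLC
```

Expiry by default, rather than revocation by exception, is what makes the access reversible even if the engineer forgets to close the session.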
Outcome:
- Controlled, logged, and reversible access.
- No foreign devices ever touch the OT network.
- Consistent security posture even during emergencies.
Relation to iDMZ and Air‑gapping:
- The jump host is typically placed in the iDMZ (or an OT admin zone) because it needs connectivity to both the user (from IT or remote) and the OT devices.
- If a system is air‑gapped, even a jump host cannot be used remotely; maintenance requires physical presence and strictly controlled procedures (e.g., using a dedicated, clean laptop that never leaves the facility). The jump host concept applies to networks that have some connectivity, not to fully air‑gapped systems.
B. Regular Security Activities
- Quarterly configuration reviews against secure baseline
- Annual penetration testing by OT‑specialized firms
- Continuous vulnerability monitoring for OT components
- Tabletop exercises for OT incident response
C. Documentation & Training
- As‑built security documentation
- OT security training for all personnel with access
- Lessons learned repository from incidents/near‑misses
Outcome
A sustainable security program that:
- Adapts to changing threats
- Survives personnel turnover
- Maintains security through equipment lifecycle
Critical Considerations & Trade-offs
1. Safety vs. Security
- Never implement security that compromises safety interlocks
- Pressure sensors may be part of Safety Instrumented Systems (SIS)
- Consult ISA 84/IEC 61511 standards for safety-critical systems
2. Legacy Integration
- Retrofit solutions for existing sensors:
- Protocol gateways with security features
- Out-of-band monitoring (analyzing analog signals)
- Physical access controls as compensating controls
3. Regulatory Compliance
- WaterISAC guidelines for water sector
- NIST Cybersecurity Framework for critical infrastructure
- Local regulations (e.g., US: America's Water Infrastructure Act)
4. Cost-Benefit Analysis
- Calculate Value at Risk for water supply disruption
- Consider insurance implications of security investments
- Factor regulatory penalties for non-compliance
Suggested Implementation Roadmap (12 Months)
| Duration (mths) | Focus Area | Key Deliverables |
|---|---|---|
| 2 | Assessment & Design | Risk assessment, Architecture design, Vendor selection |
| 4 | Foundation Building | Network segmentation, iDMZ implementation, Baseline policies |
| 4 | Secure Deployment | Sensor hardening, mTLS implementation, Monitoring setup |
| 2 | Operationalization | IR playbooks, Training program, Compliance documentation |
Conclusion
Securing OT pressure sensors in a Critical Information Infrastructure (CII) water facility requires thinking beyond traditional IT security models. The familiar CIA triad must be re-pivoted into CAIC — prioritising by Control, Availability, Integrity and Confidentiality — reflecting the operational realities of industrial environments.
Simply put, operators must retain unchallenged control over their CII at all times and transmit untainted data through verified, trusted channels; any loss of control over a CII water system to malicious actors could, in the worst case, compromise safety, disrupt essential services, and ultimately put lives at risk.
Therefore, follow the defense‑in‑depth approach — secure components, segmented networks, encrypted communications, continuous monitoring, and robust procedures. Embrace DevSecOps and consult with OT security specialists and process engineers before implementation, particularly for safety‑critical systems.