Most OT Cyber Assessments Are IT Audits in OT Clothing.
The current state of the practice: scan the network, pull a CVE list off the asset inventory, sum CVSS scores into a heatmap, recommend patches the plant will never apply. The deliverable looks rigorous and answers the wrong question.
- CVE-list audits, scraped from asset inventories and stitched into a CVSS heatmap
- IT vulnerability scanners pointed at OT — active probes on devices that were not built to be probed
- Generic ATT&CK technique mentions, with no mapping to the protocols on the bus or the devices on the rack
- A recommendations register the plant cannot execute — patch windows that do not exist, reboots that are not permitted
- A heatmap that changes the plant's risk by precisely nothing
The buyer recognises the report. The buyer has filed three of them. None of them moved the needle.
OT Risk Is Not a Patch Backlog. It Is a Consequence Equation.
OT risk has three terms and CVE count is not one of them. The same asset can carry a CVSS 9.8 vulnerability and present negligible OT risk — and a fully patched asset can sit on the critical path of a black-start sequence. The rubric matters.
| Term | What it measures | What it ignores |
|---|---|---|
| Consequence | Loss of view, loss of control, mis-operation, safety impact, regulatory breach — measured in plant outcomes, not data records | CVE severity in isolation |
| Adversary capability | Plausible threat actor tier for this asset, this site, this geography — opportunist, criminal, nation-state — and the techniques each can actually deploy | Theoretical exploits no realistic adversary will weaponise here |
| Exposure | Reachable attack surface accounting for segmentation, conduit boundaries, protocol-level access, compensating controls and physical access | Open-port counts on a flat-network assumption |
Risk = Consequence × Capability × Exposure. Every line in our report is scored against this rubric. Every recommendation moves one of these three terms.
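The rubric can be sketched as a small scoring function. This is an illustrative sketch, not the production scoring model — the three-level ordinal scale and the aggregation bands are assumptions for the example:

```python
# Minimal sketch of the three-term rubric: Risk = Consequence x Capability x Exposure.
# The ordinal scale (Low=1, Medium=2, High=3) and the aggregation bands are
# illustrative assumptions, not the production scoring model.

LEVELS = {"Low": 1, "Medium": 2, "High": 3}

def aggregate_risk(consequence: str, capability: str, exposure: str) -> str:
    """Combine the three rubric terms into an aggregate band."""
    score = LEVELS[consequence] * LEVELS[capability] * LEVELS[exposure]
    if score >= 12:          # e.g. High x Medium x High = 18
        return "High"
    if score >= 6:
        return "Medium"
    return "Low"

# The GOOSE worked example: High consequence, Medium capability, High exposure.
print(aggregate_risk("High", "Medium", "High"))  # -> High

# A CVSS 9.8 asset that is unreachable and inconsequential scores Low.
print(aggregate_risk("Low", "Medium", "Low"))    # -> Low
```

The point of the multiplication is that any term at zero or near-zero collapses the product — which is exactly why a patched asset on a critical path can outrank a CVSS 9.8 asset behind a conduit.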
Our Method — ATT&CK for ICS, Mapped to the Protocols and Devices in Your Plant.
MITRE ATT&CK for ICS is the public scaffold. On its own it is a vocabulary, not an assessment. We add three layers of mapping that turn a vocabulary into a finding.
| Layer | What we map | Example |
|---|---|---|
| Protocol layer | Each ATT&CK technique against the specific OT protocols on the bus — IEC 61850 (GOOSE / SV / MMS), DNP3, Modbus TCP, IEC 60870-5-104, OPC UA, IEC 61400-25, SunSpec | T0830 Adversary-in-the-Middle → GOOSE multicast injection on substation process bus |
| Device layer | Each technique against the device classes that actually live in the plant — protection IEDs, merging units, RTUs, bay controllers, station gateways, PLCs, HMIs, engineering workstations | T0836 Modify Parameter → protection IED setting group switch via MMS |
| Consequence layer | Each technique against the operational outcome it produces in this plant — mis-trip, breaker mis-operation, loss of view, false dispatch, loss of containment | Mis-trip of a 220 kV feeder during peak export window |
The library is curated for the protocols and device classes we see in Indian and South Asian substations, plants and renewable sites — and it grows with every engagement.
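One way to picture the three layers is as a single mapping record per technique. A schematic sketch only — the field names are assumptions, and the entry is lifted from the examples in the table above:

```python
from dataclasses import dataclass

# Schematic of one entry in the three-layer mapping library.
# Field names and values are illustrative, drawn from the examples above.

@dataclass
class TechniqueMapping:
    technique: str      # ATT&CK for ICS technique ID and name
    protocol: str       # protocol layer: the primitive the technique abuses
    device_class: str   # device layer: where it lands in the plant
    consequence: str    # consequence layer: the operational outcome it produces

GOOSE_AITM = TechniqueMapping(
    technique="T0830 Adversary-in-the-Middle",
    protocol="IEC 61850 GOOSE multicast injection on the process bus",
    device_class="Protection IEDs subscribing on the station / process bus",
    consequence="Mis-trip of a 220 kV feeder during peak export window",
)
```

A finding is a fully populated record; a vocabulary is a record with only the first field filled in.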
Worked Example — GOOSE Spoofing on a 220 kV Substation Bus.
One real chain, end to end, against the rubric. This is what twinos Assess · Substation emits against an IEC 61850 bus — not a hand-written finding, not a CVE row, not a CVSS number. One row out of a generated register.
| Step | Finding |
|---|---|
| Technique | T0830 Adversary-in-the-Middle / T0836 Modify Parameter — GOOSE multicast publisher impersonation on the IEC 61850 process bus |
| Protocol exposure | GOOSE is unauthenticated by default in IEC 61850 Ed.1 deployments; the process bus is L2 multicast with no native sender verification |
| Device exposure | Subscribing protection IEDs on Bay-3 and Bay-5; merging units publishing CT/VT samples on the same VLAN; engineering workstation reachable from the station LAN |
| Adversary capability | Tier-2 criminal / hacktivist with station LAN access via a stale contractor laptop — observed reachable path during scoping |
| Consequence | Forged GOOSE dataset triggers a protection mis-trip on a 220 kV outgoing feeder during peak export — loss of revenue, DSM penalty exposure, regulator notification |
| Risk score (our rubric) | Consequence: High · Capability: Medium · Exposure: High → Aggregate: High. CVE-only view of this chain: zero CVEs, would not appear on any scanner output |
| Recommendation | Enable IEC 62351-6 GOOSE signing on the publisher set; segment process-bus VLAN behind a managed L2 boundary; revoke stale engineering identities; add publisher-anomaly detection on the station gateway |
Zero CVEs. High OT risk. Generated, not authored. The next three slides are how.
twinos Assess · Substation — ATT&CK at the LN / DO / DA Level.
Public ATT&CK for ICS publishes technique names. twinos Assess holds a database of realisation conditions per technique — mapped to the protocol primitives, logical nodes, data objects and data attributes of IEC 61850; equivalent primitives for DNP3, Modbus and OPC UA; and extended to OS hardening gaps and hardware exposure. For any device whose semantic model we hold, we can compute which techniques are realisable today — before we walk on site.
| Technique | Protocol primitive | Mapped point | Realisation conditions | Realisable? |
|---|---|---|---|---|
| T0836 Modify Parameter | IEC 61850 MMS Write | PTRC.Op.general — protection trip block | Writable from MMS client without authentication · AccessControlList not enforced on this dataset · Key-switch policy in REMOTE | Yes — High consequence (mis-trip) |
| T0830 Adversary-in-the-Middle | IEC 61850 GOOSE publisher | GoCB on station bus VLAN | GOOSE unsigned (62351-6 not enabled) · subscriber accepts on multicast MAC · publisher VLAN reachable from station LAN | Yes — High consequence (mis-trip) |
| T0858 Change Operating Mode | S7 / vendor mgmt | Bay-controller mode register | Mgmt protocol unauthenticated · EWS reachable from station LAN · no audit on mode change | Yes — Medium consequence (loss of view) |
Every row in our risk register starts as a realisation-condition query. The report is a join, not a narrative.
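The shape of a realisation-condition query can be sketched in a few lines: a technique carries a set of preconditions, and it is realisable on an asset exactly when every precondition appears in the scanner-observed facts. Condition names below are illustrative placeholders, not the production schema:

```python
# Sketch of a realisation-condition query: a technique is realisable on an
# asset when every one of its preconditions holds in the scanner-observed
# facts. Condition names are illustrative, not the production schema.

CONDITIONS = {
    "T0836 Modify Parameter": {
        "mms_write_without_auth",   # writable from MMS client without authentication
        "acl_not_enforced",         # AccessControlList not enforced on the dataset
        "key_switch_remote",        # key-switch policy in REMOTE
    },
}

def realisable(technique: str, asset_facts: set) -> bool:
    """True when all of the technique's preconditions are observed on the asset."""
    return CONDITIONS[technique] <= asset_facts

bay3_ied = {"mms_write_without_auth", "acl_not_enforced", "key_switch_remote"}
print(realisable("T0836 Modify Parameter", bay3_ied))  # -> True

# Close any one precondition and the technique falls off the register.
print(realisable("T0836 Modify Parameter", bay3_ied - {"key_switch_remote"}))  # -> False
```

This is also why the recommendations are scanner-verifiable: each one removes a named precondition, and the next scan shows the row gone.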
Two Scanners. Same Realisable Surface. Zero Impact on Your Bus.
Most assessments collect what an engineer can write down in a notebook. twinos Assess collects what the device, its configuration files and its hardware actually say — and feeds it into the realisation-condition library.
| Mode | What it reads | Bus impact | What we see |
|---|---|---|---|
| Active — protocol | MMS, GOOSE, SV, DNP3, Modbus under signed read-only ground rules | Negligible — read-only, rate-limited | Live device state, configured AccessPoints, GOOSE subscribers |
| Passive — protocol | SCL files (.icd / .cid / .scd), Modbus register maps, DNP3 device profiles | Zero packets on the OT network | Full IEC 61850 LN / DO / DA model, dataset bindings, GOOSE / SV publishers and subscribers |
| Passive — OS | Configuration exports from station gateways, EWS, HMIs — services, accounts, patch state, hardening posture | Zero | OS-level realisation conditions per technique |
| Passive — HW | Hardware datasheets, FAT documents, port surveys | Zero | Debug ports, console exposure, firmware extraction reachability |
On a typical IEC 61850 substation, around 80 percent of the answer comes from SCL alone. We bring the bus in only when the SCL does not tell us.
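To make "the answer comes from SCL" concrete: SCL is XML, and a GOOSE publisher inventory falls out of a parse with zero packets on the bus. A toy sketch using Python's standard ElementTree — the embedded SCL fragment is fabricated and minimal, not a real station file:

```python
import xml.etree.ElementTree as ET

# Toy sketch of passive SCL analysis: list GOOSE control blocks (publishers)
# from an SCL document without sending a single packet on the OT network.
# The XML below is a fabricated minimal fragment, not a real station file.

SCL_NS = {"scl": "http://www.iec.ch/61850/2003/SCL"}

SCD_FRAGMENT = """<SCL xmlns="http://www.iec.ch/61850/2003/SCL">
  <IED name="BAY3_PROT">
    <AccessPoint name="P1"><Server><LDevice inst="LD0">
      <LN0 lnClass="LLN0" inst="" lnType="LLN0a">
        <GSEControl name="gcbTrip" appID="BAY3_TRIP" datSet="dsTrip"/>
      </LN0>
    </LDevice></Server></AccessPoint>
  </IED>
</SCL>"""

def goose_publishers(scd_xml: str) -> list:
    """Return (IED name, GOOSE control block name) pairs from an SCL document."""
    root = ET.fromstring(scd_xml)
    pairs = []
    for ied in root.findall("scl:IED", SCL_NS):
        for gcb in ied.iter("{http://www.iec.ch/61850/2003/SCL}GSEControl"):
            pairs.append((ied.get("name"), gcb.get("name")))
    return pairs

print(goose_publishers(SCD_FRAGMENT))  # -> [('BAY3_PROT', 'gcbTrip')]
```

The same parse yields datasets, subscribers and the full LN / DO / DA model — which is why the bus only needs to be touched for what the SCL cannot say.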
The Risk Register Is Computed. Same Rubric, Every Site, Every Time.
There is one path from device to register — and the customer can re-run it next year and get the same verdict.
- Scanner output — device, protocol surface, OS posture, HW exposure
- × Realisation-condition database — per-technique realisability per asset
- × Consequence × Capability × Exposure rubric — per-asset, per-technique risk
- → Prioritised risk register, device-level roadmap, evidence pack, executive readout
- Same depth on every device — the engineer does not get tired on Bay-9
- Reproducible — the auditor can re-run the scan next year and replicate the verdict
- Recommendations move scanner-observable preconditions — close the precondition, the technique falls off the register on the next scan
- Comparable across sites — fleet-level risk roll-ups stop being apples-to-oranges
We compute the risk register the way a process engineer computes a heat balance.
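The pipeline above reduces to a join: scanner facts against the realisation-condition library, survivors scored on the rubric, rows sorted into a register. A compressed sketch with wholly illustrative names, facts and scores:

```python
# Sketch of the register as a computation: join scanner output with the
# realisation-condition library, score realisable pairs on the rubric,
# sort into a prioritised register. All names and data are illustrative.

ASSETS = {
    "Bay3-IED": {"facts": {"goose_unsigned", "vlan_reachable"},
                 "consequence": 3, "capability": 2, "exposure": 3},
    "Bay5-RTU": {"facts": {"vlan_reachable"},
                 "consequence": 2, "capability": 2, "exposure": 2},
}
TECHNIQUES = {"T0830 AitM via GOOSE": {"goose_unsigned", "vlan_reachable"}}

def risk_register() -> list:
    rows = []
    for asset, a in ASSETS.items():
        for tech, preconds in TECHNIQUES.items():
            if preconds <= a["facts"]:  # realisable: every precondition observed
                score = a["consequence"] * a["capability"] * a["exposure"]
                rows.append((asset, tech, score))
    return sorted(rows, key=lambda r: -r[2])  # prioritised register

print(risk_register())  # -> [('Bay3-IED', 'T0830 AitM via GOOSE', 18)]
```

Bay5-RTU produces no row: the precondition set is not satisfied, so the technique never enters the register — the same mechanism that makes remediation scanner-verifiable.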
Risk-Based Scoring. Not CVE-Based.
The industry default
- Count CVEs per asset from a vulnerability database
- Sum or average CVSS — produce a per-asset 'severity'
- Rank assets by patch backlog
- Heatmap by colour, file the report
- Plant cannot patch most of it; nothing changes
- Same finding next year, same colour, same shelf
The twinos rubric
- Score consequence per technique per asset — what breaks in the plant
- Tier the plausible adversary for this site, this geography, this asset class
- Score exposure with segmentation and compensating controls credited
- Risk = Consequence × Capability × Exposure — defensible line by line
- Recommendations move one of the three terms — and most do not require a patch
- The same asset can be 'CVSS 9.8' and negligible risk — and vice versa
What You Get — Artefacts, Not Adjectives.
| Deliverable | What it contains | Who reads it |
|---|---|---|
| Prioritised OT Risk Register | Per-asset, per-technique findings — protocol, device class, consequence, capability tier, exposure, aggregate risk, recommendation | OT security lead, plant manager |
| Device-Level Remediation Roadmap | Sequenced 0–3 month, 3–9 month, 9–24 month actions — segmentation, configuration, compensating controls, replacement only where it pays back | CISO, engineering head |
| Compliance Evidence Pack | Findings mapped to IEC 62443-3-3 system requirements and NIS2 obligations — auditor-ready, citation-grade | Auditor, compliance officer |
| Executive Readout | Boardroom narrative — what the plant is exposed to, what we are doing about it, what we are accepting and why | C-suite, board risk committee |
Every artefact is defensible — line by line, against a documented rubric. The auditor reads it the same way the board does.
How an Engagement Runs — Without Touching Production.
Operations-conscious by design. We do not actively scan OT devices. We do not patch in flight. We do not require a maintenance window to deliver the assessment.
- 1. Scoping — sites, in-scope bays / zones / conduits, ground rules signed off in writing (1 week)
- 2. Passive collection — span-port capture, configuration export, drawing review, interviews; no active probes on the OT network (1–2 weeks)
- 3. Protocol & device analysis — IEC 61850 SCD parsing, DNP3 / Modbus traffic decomposition, device-class mapping (1–2 weeks)
- 4. Risk modelling — apply the rubric, score consequence / capability / exposure, draft the risk register (1–2 weeks)
- 5. Validation workshop — walk the findings with plant and corporate teams; calibrate consequence and exposure with the people who run the asset (2 days)
- 6. Report & readout — register, roadmap, evidence pack, board readout (1 week)
Indicative duration — 4 weeks for a single site with phases overlapping, 6–8 weeks for a multi-site portfolio. No tripped breakers. No emergency calls.
Why twinos.
- Generated, not authored — every finding is computed from scanner output and a realisation-condition database; the engineer does not get tired and the auditor can re-run the scan next year
- OT-native methodology — built from substations and renewable sites; not an IT framework retrofitted with OT vocabulary
- Protocol- and device-specific to LN / DO / DA — every finding cites the IEC 61850 logical node, data object and data attribute, not a generic technique number
- Risk-based, not CVE-based — consequence × capability × exposure; the only rubric a board can defend and a regulator can read the same way
- Audit-ready by construction — every artefact maps line-for-line to IEC 62443 and NIS2; the auditor does not need a translation layer
- Operations-conscious — 80 percent of the answer comes from SCL; we bring the bus in only when the SCL does not tell us
We are the OT cyber assessment your plant engineer would write if they had three months and a methodology.
Where This Sits in Your Programme.
The assessment is the entry point — and complete in itself. The risk register, roadmap and evidence pack stand on their own; many of our customers stop here and use the output to drive their internal programme for a year.
When the next question is asked, the same methodology extends:
- Continuous OT monitoring — convert the risk register into live detections tuned to your protocols and devices
- Architecture review — segment, harden and instrument the conduits the assessment surfaced
- Incident response retainer — the team that wrote the rubric is the team that picks up the phone
No upsell pressure. The assessment is sold standalone, priced standalone, and delivered standalone.