WHITE PAPER · twinos

twinos Assess · Substation — Risk You Can Defend in a Boardroom

OT cyber assessment for power substations. Computed, not authored. Risk you can defend in a boardroom.

Most OT Cyber Assessments Are IT Audits in OT Clothing.

The current state of the practice: scan the network, pull a CVE list off the asset inventory, sum CVSS scores into a heatmap, recommend patches the plant will never apply. The deliverable looks rigorous and answers the wrong question.

  • CVE-list audits, scraped from asset inventories and stitched into a CVSS heatmap
  • IT vulnerability scanners pointed at OT — active probes on devices that were not built to be probed
  • Generic ATT&CK technique mentions, with no mapping to the protocols on the bus or the devices on the rack
  • A recommendations register the plant cannot execute — patch windows that do not exist, reboots that are not permitted
  • A heatmap that does not change the plant's risk by one decibel

The buyer recognises the report. The buyer has filed three of them. None of them moved the needle.

OT Risk Is Not a Patch Backlog. It Is a Consequence Equation.

OT risk has three terms and CVE count is not one of them. The same asset can carry a CVSS 9.8 vulnerability and present negligible OT risk — and a fully patched asset can sit on the critical path of a black-start sequence. The rubric matters.

Figure — the rubric: OT RISK (per asset, per technique) = CONSEQUENCE (loss of view · safety · breach) × CAPABILITY (opportunist · criminal · nation) × EXPOSURE (segmentation · identity · access). Three terms. CVE count is not one of them.

Term | What it measures | What it ignores
Consequence | Loss of view, loss of control, mis-operation, safety impact, regulatory breach — measured in plant outcomes, not data records | CVE severity in isolation
Adversary capability | Plausible threat actor tier for this asset, this site, this geography — opportunist, criminal, nation-state — and the techniques each can actually deploy | Theoretical exploits no realistic adversary will weaponise here
Exposure | Reachable attack surface accounting for segmentation, conduit boundaries, protocol-level access, compensating controls and physical access | Open-port counts on a flat-network assumption

Risk = Consequence × Capability × Exposure. Every line in our report scores on this rubric. Every recommendation moves one of these three terms.
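The rubric can be sketched in a few lines of Python. The scale names and integer weights below are illustrative assumptions, not twinos' calibrated scales — the point is only that the score is a product of three plant-level terms, with CVE count appearing nowhere:

```python
# Hypothetical three-term rubric: Risk = Consequence x Capability x Exposure.
# Scale names and weights are illustrative, not twinos' actual calibration.
CONSEQUENCE = {"negligible": 1, "loss_of_view": 2, "loss_of_control": 3, "safety": 4}
CAPABILITY  = {"opportunist": 1, "criminal": 2, "nation_state": 3}
EXPOSURE    = {"isolated": 1, "conduit_gated": 2, "flat_reachable": 3}

def risk_score(consequence: str, capability: str, exposure: str) -> int:
    """Multiply the three rubric terms; CVE count appears nowhere."""
    return CONSEQUENCE[consequence] * CAPABILITY[capability] * EXPOSURE[exposure]

# A fully patched IED on the critical path can outrank a CVSS 9.8 asset:
print(risk_score("loss_of_control", "criminal", "flat_reachable"))  # 18
print(risk_score("negligible", "nation_state", "isolated"))         # 3
```

Under this toy weighting, a reachable asset whose compromise means loss of control scores six times higher than an isolated asset with negligible plant consequence, regardless of either asset's patch backlog.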

Our Method — ATT&CK for ICS, Mapped to the Protocols and Devices in Your Plant.

MITRE ATT&CK for ICS is the public scaffold. On its own it is a vocabulary, not an assessment. We add three layers of mapping that turn a vocabulary into a finding.

Figure — the three mapping layers: ATT&CK FOR ICS technique → PROTOCOL (61850 MMS / GOOSE / SV · DNP3 · Modbus · OPC UA — what runs on the wire) → DEVICE (protection IEDs · merging units · RTUs · bay controllers · station gateways · HMIs · EWS — what sits on the rack) → CONSEQUENCE (mis-trip · breaker mis-operation · loss of view · loss of control · safety event · regulator notification — what breaks in the plant).

Layer | What we map | Example
Protocol layer | Each ATT&CK technique against the specific OT protocols on the bus — IEC 61850 (GOOSE / SV / MMS), DNP3, Modbus TCP, IEC 60870-5-104, OPC UA, IEC 61400-25, SunSpec | T0830 Adversary-in-the-Middle → GOOSE multicast injection on substation process bus
Device layer | Each technique against the device classes that actually live in the plant — protection IEDs, merging units, RTUs, bay controllers, station gateways, PLCs, HMIs, engineering workstations | T0836 Modify Parameter → protection IED setting group switch via MMS
Consequence layer | Each technique against the operational outcome it produces in this plant — mis-trip, breaker mis-operation, loss of view, false dispatch, loss of containment | Mis-trip of a 220 kV feeder during peak export window

The library is curated for the protocols and device classes we see in Indian and South Asian substations, plants and renewable sites — and it grows with every engagement.
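The three mapping layers reduce to a small data shape. A minimal sketch, with entries mirroring the examples in the table above — the class and field names here are ours, for illustration, not twinos' internal schema:

```python
# Illustrative shape of the three mapping layers (protocol, device, consequence).
# Entries mirror the table above; class and field names are assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class TechniqueMapping:
    technique: str     # ATT&CK for ICS technique ID
    protocol: str      # what runs on the wire
    device_class: str  # what sits on the rack
    consequence: str   # what breaks in the plant

LIBRARY = [
    TechniqueMapping("T0830", "IEC 61850 GOOSE", "protection IED",
                     "mis-trip via multicast injection on the process bus"),
    TechniqueMapping("T0836", "IEC 61850 MMS", "protection IED",
                     "setting group switch changes protection behaviour"),
]

# A finding is a mapping plus evidence, never a bare technique number:
for m in LIBRARY:
    print(f"{m.technique} / {m.protocol} / {m.device_class} -> {m.consequence}")
```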

Worked Example — GOOSE Spoofing on a 220 kV Substation Bus.

One real chain, end to end, against the rubric. This is what twinos Assess · Substation emits against an IEC 61850 bus — not a hand-written finding, not a CVE row, not a CVSS number. One row out of a generated register.

Step | Finding
Technique | T0830 Adversary-in-the-Middle / T0836 Modify Parameter — GOOSE multicast publisher impersonation on the IEC 61850 process bus
Protocol exposure | GOOSE is unauthenticated by default in IEC 61850 Ed.1 deployments; the process bus is L2 multicast with no native sender verification
Device exposure | Subscribing protection IEDs on Bay-3 and Bay-5; merging units publishing CT/VT samples on the same VLAN; engineering workstation reachable from the station LAN
Adversary capability | Tier-2 criminal / hacktivist with station LAN access via a stale contractor laptop — observed reachable path during scoping
Consequence | Forged GOOSE dataset triggers a protection mis-trip on a 220 kV outgoing feeder during peak export — loss of revenue, DSM penalty exposure, regulator notification
Risk score (our rubric) | Consequence: High · Capability: Medium · Exposure: High → Aggregate: High. CVE-only view of this chain: zero CVEs, would not appear on any scanner output
Recommendation | Enable IEC 62351-6 GOOSE signing on the publisher set; segment process-bus VLAN behind a managed L2 boundary; revoke stale engineering identities; add publisher-anomaly detection on the station gateway

Zero CVEs. High OT risk. Generated, not authored. The next three sections are how.

twinos Assess · Substation — ATT&CK at the LN / DO / DA Level.

Public ATT&CK for ICS publishes technique names. twinos Assess holds a database of realisation conditions per technique — mapped to the protocol primitives, logical nodes, data objects and data attributes of IEC 61850; equivalent primitives for DNP3, Modbus and OPC UA; and extended to OS hardening gaps and hardware exposure. For any device whose semantic model we hold, we can compute which techniques are realisable today — before we walk on site.

Figure — the IEC 61850 semantic tree: SERVER → LOGICAL DEVICE → LOGICAL NODE → DATA OBJECT → DATA ATTRIBUTE, e.g. IED_Bay3 → PROT → PTRC → Op → general (BOOLEAN). Realisation conditions on this leaf — MMS write accepted without authentication · AccessControlList not enforced · key-switch in REMOTE — resolve to T0836 · REALISABLE. Every leaf is a query against the realisation-condition database.

Technique | Protocol primitive | Mapped point | Realisation conditions | Realisable?
T0836 Modify Parameter | IEC 61850 MMS Write | PTRC.Op.general — protection trip block | Writable from MMS client without authentication · AccessControlList not enforced on this dataset · Key-switch policy in REMOTE | Yes — High consequence (mis-trip)
T0830 Adversary-in-the-Middle | IEC 61850 GOOSE publisher | GoCB on station bus VLAN | GOOSE unsigned (62351-6 not enabled) · subscriber accepts on multicast MAC · publisher VLAN reachable from station LAN | Yes — High consequence (mis-trip)
T0858 Change Operating Mode | S7 / vendor mgmt | Bay-controller mode register | Mgmt protocol unauthenticated · EWS reachable from station LAN · no audit on mode change | Yes — Medium consequence (loss of view)

Every row in our risk register starts as a realisation-condition query. The report is a join, not a narrative.
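A realisation-condition query can be sketched as a set test: a technique enters the register only when every one of its conditions is observed in the asset's scanned state. The condition names below are hypothetical placeholders, not twinos' database schema:

```python
# Sketch: a technique is realisable on a device when ALL of its realisation
# conditions hold in the device's observed state. Condition names are hypothetical.
REALISATION_DB = {
    "T0836": ["mms_write_unauthenticated", "acl_not_enforced", "keyswitch_remote"],
    "T0830": ["goose_unsigned", "subscriber_accepts_multicast", "vlan_reachable"],
}

def realisable(technique: str, observed: set[str]) -> bool:
    """Every condition must be present in the scanner-observed state."""
    return all(cond in observed for cond in REALISATION_DB[technique])

ied_bay3 = {"mms_write_unauthenticated", "acl_not_enforced", "keyswitch_remote"}
print(realisable("T0836", ied_bay3))  # True  -> row enters the register
print(realisable("T0830", ied_bay3))  # False -> goose_unsigned not observed
```

Closing any single condition — enforcing the ACL, moving the key-switch out of REMOTE — flips the query to False, which is why recommendations that move scanner-observable preconditions drop techniques off the register on the next scan.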

Two Scanners. Same Realisable Surface. Zero Impact on Your Bus.

Most assessments collect what an engineer can write down in a notebook. twinos Assess collects what the device, its configuration files and its hardware actually say — and feeds it into the realisation-condition library.

Mode | What it reads | Bus impact | What we see
Active — protocol | MMS, GOOSE, SV, DNP3, Modbus under signed read-only ground rules | Negligible — read-only, rate-limited | Live device state, configured AccessPoints, GOOSE subscribers
Passive — protocol | SCL files (.icd / .cid / .scd), Modbus register maps, DNP3 device profiles | Zero packets on the OT network | Full IEC 61850 LN / DO / DA model, dataset bindings, GOOSE / SV publishers and subscribers
Passive — OS | Configuration exports from station gateways, EWS, HMIs — services, accounts, patch state, hardening posture | Zero | OS-level realisation conditions per technique
Passive — HW | Hardware datasheets, FAT documents, port surveys | Zero | Debug ports, console exposure, firmware extraction reachability

On a typical IEC 61850 substation, around 80 percent of the answer comes from SCL alone. We bring the bus in only when the SCL does not tell us.
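Passive SCL collection is ordinary XML parsing. A minimal sketch that enumerates logical nodes from an SCL file with zero packets on the OT network — element names follow the IEC 61850-6 SCL schema, but edition differences, schema validation and error handling are omitted:

```python
# Sketch of passive SCL collection: walk an .scd/.cid file and enumerate
# IED / LDevice / LN triples. Assumes the standard IEC 61850-6 namespace.
import xml.etree.ElementTree as ET

NS = {"scl": "http://www.iec.ch/61850/2003/SCL"}

def list_logical_nodes(scd_path: str):
    """Yield (IED name, LDevice inst, LN name) triples from an SCL file."""
    root = ET.parse(scd_path).getroot()
    for ied in root.findall("scl:IED", NS):
        for ldev in ied.findall(".//scl:LDevice", NS):
            for ln in ldev.findall("scl:LN", NS) + ldev.findall("scl:LN0", NS):
                yield (ied.get("name"), ldev.get("inst"),
                       f"{ln.get('prefix') or ''}{ln.get('lnClass')}{ln.get('inst') or ''}")
```

From triples like (IED_Bay3, PROT, PTRC1), each DO/DA leaf can then be queried against the realisation-condition library without ever touching the bus.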

The Risk Register Is Computed. Same Rubric, Every Site, Every Time.

There is one path from device to register — and the customer can re-run it next year and get the same verdict.

Figure — one pipeline: SCANNER OUTPUT (protocol · device · OS · hardware) × REALISATION DB (LN / DO / DA · OS conditions per technique) × RUBRIC (consequence × capability × exposure) → RISK REGISTER (roadmap · evidence · executive readout). Same depth on every device. Re-runnable next year.
  • Scanner output — device, protocol surface, OS posture, HW exposure
  • × Realisation-condition database — per-technique realisability per asset
  • × Consequence × Capability × Exposure rubric — per-asset, per-technique risk
  • → Prioritised risk register, device-level roadmap, evidence pack, executive readout
  • Same depth on every device — the engineer does not get tired on Bay-9
  • Reproducible — the auditor can re-run the scan next year and replicate the verdict
  • Recommendations move scanner-observable preconditions — close the precondition, the technique falls off the register on the next scan
  • Comparable across sites — fleet-level risk roll-ups stop being apples-to-oranges

We compute the risk register the way a process engineer computes a heat balance.
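The "join, not a narrative" claim can be made concrete. A toy sketch of the pipeline — scanner facts joined against the realisation database, scored by the rubric, sorted into a register. All names, conditions and scores are illustrative:

```python
# Toy pipeline: scanner output x realisation DB x rubric -> prioritised register.
# Deterministic: the same inputs always produce the same rows.
SCANNER = {
    "IED_Bay3": {"goose_unsigned", "vlan_reachable", "subscriber_accepts_multicast"},
}
REALISATION_DB = {
    "T0830": {"goose_unsigned", "vlan_reachable", "subscriber_accepts_multicast"},
    "T0836": {"mms_write_unauthenticated", "acl_not_enforced"},
}
RUBRIC = {
    ("IED_Bay3", "T0830"): {"consequence": 3, "capability": 2, "exposure": 3},
}

def register():
    rows = []
    for asset, observed in SCANNER.items():
        for tech, conditions in REALISATION_DB.items():
            if conditions <= observed:  # all realisation conditions hold
                s = RUBRIC[(asset, tech)]
                rows.append((asset, tech,
                             s["consequence"] * s["capability"] * s["exposure"]))
    return sorted(rows, key=lambda r: -r[2])  # prioritised register

print(register())  # [('IED_Bay3', 'T0830', 18)]
```

T0836 produces no row because its conditions are not observed on this asset; re-running with identical scanner output reproduces the register exactly, which is the reproducibility property claimed above.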

Risk-Based Scoring. Not CVE-Based.

The industry default

  • Count CVEs per asset from a vulnerability database
  • Sum or average CVSS — produce a per-asset 'severity'
  • Rank assets by patch backlog
  • Heatmap by colour, file the report
  • Plant cannot patch most of it; nothing changes
  • Same finding next year, same colour, same shelf

The twinos rubric

  • Score consequence per technique per asset — what breaks in the plant
  • Tier the plausible adversary for this site, this geography, this asset class
  • Score exposure with segmentation and compensating controls credited
  • Risk = Consequence × Capability × Exposure — defensible line by line
  • Recommendations move one of the three terms — and most do not require a patch
  • The same asset can be 'CVSS 9.8' and negligible risk — and vice versa

What You Get — Artefacts, Not Adjectives.

Deliverable | What it contains | Who reads it
Prioritised OT Risk Register | Per-asset, per-technique findings — protocol, device class, consequence, capability tier, exposure, aggregate risk, recommendation | OT security lead, plant manager
Device-Level Remediation Roadmap | Sequenced 0–3 month, 3–9 month, 9–24 month actions — segmentation, configuration, compensating controls, replacement only where it pays back | CISO, engineering head
Compliance Evidence Pack | Findings mapped to IEC 62443-3-3 system requirements and NIS2 obligations — auditor-ready, citation-grade | Auditor, compliance officer
Executive Readout | Boardroom narrative — what the plant is exposed to, what we are doing about it, what we are accepting and why | C-suite, board risk committee

Every artefact is defensible — line by line, against a documented rubric. The auditor reads it the same way the board does.

How an Engagement Runs — Without Touching Production.

Operations-conscious by design. We do not actively scan OT devices. We do not patch in flight. We do not require a maintenance window to deliver the assessment.

  • 1. Scoping — sites, scope of bays / zones / conduits, ground-rules signed off in writing (1 week)
  • 2. Passive collection — span-port capture, configuration export, drawing review, interviews; no active probes on the OT network (1–2 weeks)
  • 3. Protocol & device analysis — IEC 61850 SCD parsing, DNP3 / Modbus traffic decomposition, device-class mapping (1–2 weeks)
  • 4. Risk modelling — apply the rubric, score consequence / capability / exposure, draft the risk register (1–2 weeks)
  • 5. Validation workshop — walk the findings with plant and corporate teams; calibrate consequence and exposure with the people who run the asset (2 days)
  • 6. Report & readout — register, roadmap, evidence pack, board readout (1 week)

Indicative duration — 4 weeks for a single site, 6–8 weeks for a multi-site portfolio. No tripped breakers. No emergency calls.

Why twinos.

  • Generated, not authored — every finding is computed from scanner output and a realisation-condition database; the engineer does not get tired and the auditor can re-run the scan next year
  • OT-native methodology — built from substations and renewable sites; not an IT framework retrofitted with OT vocabulary
  • Protocol- and device-specific to LN / DO / DA — every finding cites the IEC 61850 logical node, data object and data attribute, not a generic technique number
  • Risk-based, not CVE-based — consequence × capability × exposure; the only rubric a board can defend and a regulator can read the same way
  • Audit-ready by construction — every artefact maps line-for-line to IEC 62443 and NIS2; the auditor does not need a translation layer
  • Operations-conscious — 80 percent of the answer comes from SCL; we bring the bus in only when the SCL does not tell us

We are the OT cyber assessment your plant engineer would write if they had three months and a methodology.

Where This Sits in Your Programme.

The assessment is the entry point — and complete in itself. The risk register, roadmap and evidence pack stand on their own; many of our customers stop here and use the output to drive their internal programme for a year.

When the next question is asked, the same methodology extends:

  • Continuous OT monitoring — convert the risk register into live detections tuned to your protocols and devices
  • Architecture review — segment, harden and instrument the conduits the assessment surfaced
  • Incident response retainer — the team that wrote the rubric is the team that picks up the phone

No upsell pressure. The assessment is sold standalone, priced standalone, and delivered standalone.

Start a discussion.
