OT cybersecurity expert Dan Ricci writes about what OT asset management and visibility can reveal about industrial environments, including surfacing risk signals, exposing hidden dependencies, and providing insight into the efficacy of virtual network segmentation policies and practices.
Industrial
Operational Resilience
Operational Technology
Cyber Resilience
Risk Management

From OT Asset Management to Insight: Turning Visibility Into Something That Matters

Dan Ricci / Feb 17, 2026

If you’ve spent time in operational technology (OT) security, you’ve likely seen how everyone focuses on visibility as the main goal. Visibility is important; you can’t protect what you can’t see. Still, just having visibility doesn’t make a difference. What matters is how you use it to reduce risk.

Most organizations keep an inventory of their devices, firmware versions, and network connections. But the real value comes when that inventory does more than sit in a spreadsheet. It should help you find risk signals, expose hidden dependencies, and check if your segmentation is working as it should.

This works best when you combine your inventory with operational data, such as SCADA logs, historian records, remote access service data (such as session logs and activity traces), and packet captures. It’s also important to talk to the people who run the process. Tools provide information, but operators know what’s really happening.

For example, suppose an organization observes an engineering workstation pulling data only from its PLC and historian; everything looks normal. But after pulling logs, SCADA events, historian records, and packet captures, the security team noticed the PLC was communicating with another engineering workstation at 2 a.m. every few nights. Operators explained that it was an old vendor laptop that sometimes got left plugged in after troubleshooting. That single conversation changed the entire risk picture. The inventory didn’t show it. The diagram didn’t show it. But the operational evidence did.

Turning Raw Data Into Something Useful

OT environments produce a lot of data—device types, firmware versions, communication modes, protocol use, and more. But this data only becomes useful when you put it into context.

When you organize your inventory and group assets by what matters for operations, like controllers, HMIs, historians, safety systems, and unknown devices, you can see your real priorities. A vulnerability on a safety controller is different from a flaw on a historian, and it’s usually clear which needs attention first.
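A minimal sketch of this grouping and triage idea, assuming a simple inventory format; the asset names, role labels, and priority weights are hypothetical illustrations, not a standard:

```python
# Group an OT asset inventory by operational role, then rank vulnerability
# findings by that role. All names and weights below are hypothetical.

from collections import defaultdict

# Hypothetical priority weights: safety systems first, historians later.
ROLE_PRIORITY = {
    "safety_system": 1,
    "controller": 2,
    "hmi": 3,
    "historian": 4,
    "unknown": 5,  # unknown devices still need triage, just not first
}

def group_assets(inventory):
    """Group assets by operational role (controller, HMI, historian, ...)."""
    groups = defaultdict(list)
    for asset in inventory:
        groups[asset.get("role", "unknown")].append(asset["name"])
    return dict(groups)

def rank_findings(findings):
    """Order vulnerability findings by the role of the affected asset."""
    return sorted(findings, key=lambda f: ROLE_PRIORITY.get(f["role"], 99))

inventory = [
    {"name": "PLC-01", "role": "controller"},
    {"name": "HIST-01", "role": "historian"},
    {"name": "SIS-01", "role": "safety_system"},
]
findings = [
    {"asset": "HIST-01", "role": "historian", "cve": "CVE-XXXX-0001"},
    {"asset": "SIS-01", "role": "safety_system", "cve": "CVE-XXXX-0002"},
]

print(group_assets(inventory))
print([f["asset"] for f in rank_findings(findings)])  # safety system first
```

The point of the weights is only to make the triage order explicit and repeatable; a real program would tie them to consequence analysis, not a fixed table.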

Adding traffic analysis makes things clearer. You might find devices that don’t belong, controllers talking to systems they never talk to, or engineering workstations active at 2 a.m. These aren’t just strange events or noise; they’re early warning signs.

The best thing you can do is run a quick investigation, check the logs and packet captures, and then talk to the operators. They’ll tell you whether it’s normal, a misconfiguration, or something that drifted over time. If it repeats or bypasses segmentation, escalate it. These small anomalies are often the first clues that something in the environment has changed.
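As a rough sketch of how those two signals could be flagged from flow records, assuming hypothetical field names, a hand-built baseline of known peer pairs, and an "odd hours" window that would vary by site:

```python
# Flag two anomaly types from flow records: conversations between peers not in
# the baseline, and baseline peers active at odd hours. The baseline pairs,
# field names, and hour window are assumptions for illustration.

from datetime import datetime

BASELINE_PAIRS = {("EWS-01", "PLC-01"), ("PLC-01", "HIST-01")}
ODD_HOURS = range(0, 5)  # midnight to 5 a.m.; adjust to the site's rhythm

def flag_flows(flows):
    alerts = []
    for flow in flows:
        pair = (flow["src"], flow["dst"])
        ts = datetime.fromisoformat(flow["time"])
        if pair not in BASELINE_PAIRS:
            alerts.append((pair, "unknown peer pair"))
        elif ts.hour in ODD_HOURS:
            alerts.append((pair, "baseline pair active at odd hours"))
    return alerts

flows = [
    {"src": "EWS-01", "dst": "PLC-01", "time": "2026-02-10T10:15:00"},
    {"src": "PLC-01", "dst": "EWS-02", "time": "2026-02-11T02:03:00"},
]
for pair, reason in flag_flows(flows):
    print(pair, reason)  # the 2 a.m. flow to EWS-02 is the one that surfaces
```

The script only surfaces candidates; as the paragraph above notes, it still takes logs, packet captures, and a conversation with operators to decide whether each one is normal, a misconfiguration, or drift.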

Finding the Dependencies You Didn’t Know You Had

Every OT environment has “invisible dependencies,” connections you often only notice when something goes wrong. By looking at communication paths, you can find these links before they cause trouble.

You start to see which controllers depend on a single HMI, which safety systems use shared network infrastructure, and which historians are choke points. You also spot unexpected cross-zone links, such as OT assets relying on IT services, vendor access paths that skip segmentation, or old systems communicating across VLANs because of outdated ACLs.
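One way to make those single points of failure visible is to treat observed communication paths as a dependency graph and count how many assets depend on each node. A small sketch, with entirely hypothetical edges:

```python
# Derive choke points from observed communication paths: assets that many
# others depend on. The (dependent, depends_on) edges are hypothetical.

from collections import Counter

edges = [
    ("PLC-01", "HMI-01"),
    ("PLC-02", "HMI-01"),
    ("PLC-03", "HMI-01"),
    ("PLC-01", "HIST-01"),
    ("SIS-01", "SWITCH-03"),
    ("HMI-01", "SWITCH-03"),
]

def choke_points(edges, threshold=2):
    """Return assets that `threshold` or more other assets depend on."""
    counts = Counter(dst for _, dst in edges)
    return {asset: n for asset, n in counts.items() if n >= threshold}

print(choke_points(edges))  # HMI-01 and SWITCH-03 stand out
```

Even a crude count like this shows where an outage or compromise would ripple, before an incident proves it the hard way.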

There are also operational dependencies, the kinds of things only operators know. Some engineering tools only run on old laptops. Some firmware versions depend on vendor support cycles. Some assets can’t be patched without stopping a production line. These details help you create realistic remediation plans. And the best way to uncover them is simply by spending time with the people who run the process.

Regular walkthroughs, informal interviews during shift changes, or quick “show me how this actually works” conversations give you insights no tool can provide. Operators will tell you which systems are fragile, which workarounds are normal, and which connections are only used during outages. That kind of collaboration turns your visibility into real operational understanding.

OT Virtual Segmentation: Fact vs. Myth

Segmentation diagrams may look perfect on paper, but real environments are messy. When you add flow data to your inventory, you see what’s actually happening, not just what the diagram shows.

This is where logs, remote access data, and packet captures are especially useful. They help you see if firewall rules match real communication needs and if ACLs are too open. They also help you find rogue paths, like vendor modems, hidden wireless bridges, or temporary connections that never got removed.
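The comparison between observed traffic and segmentation intent can be sketched simply: map each asset to a zone, define which zone pairs the policy allows, and flag flows that cross anything else. Zone names, the asset-to-zone map, and the policy below are all hypothetical:

```python
# Compare observed flows against a zone-based segmentation policy and report
# violations. The zones, assets, and allowed pairs are illustrative only.

ZONE = {
    "PLC-01": "control",
    "HIST-01": "dmz",
    "EWS-02": "engineering",
    "VENDOR-LAPTOP": "unmanaged",
}
ALLOWED = {("control", "dmz"), ("engineering", "control")}

def policy_violations(flows):
    violations = []
    for src, dst in flows:
        pair = (ZONE.get(src, "unknown"), ZONE.get(dst, "unknown"))
        if pair not in ALLOWED:
            violations.append((src, dst, pair))
    return violations

flows = [("PLC-01", "HIST-01"), ("VENDOR-LAPTOP", "PLC-01")]
for src, dst, zones in policy_violations(flows):
    print(f"{src} -> {dst} crosses {zones[0]} -> {zones[1]}: not allowed")
```

Assets that fall outside the zone map show up as "unknown," which is itself useful: an unmapped device talking to a controller is exactly the kind of rogue path worth chasing.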

If you check your segmentation regularly, not just once a year, microsegmentation becomes something you can measure, not just a design on paper. You can see if your policies match what’s really happening in operations.

The size of your organization also matters. In many OT environments, people handle several roles at once. Continuous validation doesn’t have to mean constant disruption. It’s about finding a schedule that fits your team. Even checking periodically is better than setting things up and forgetting about them. For some organizations, that might mean a quick validation every quarter.

Others might tie it to major changes, like after a firmware upgrade, a network reconfiguration, or a vendor maintenance window. The point isn’t to chase perfection; it’s to build a rhythm that keeps segmentation honest without overwhelming the people who keep the plant running.

From Reactive to Resilient

Real change happens when your inventory leads to insights, and those insights drive action for OT staff. That’s when your program transitions from being reactive to being resilient.

Resilience doesn’t mean being perfect. No organization is ever fully secure, and security needs will change over time. Being resilient means you’re not surprised by your own environment. You know your dependencies, notice changes early, and make decisions based on how your plant actually works, not just how you want it to work.

Visibility is just the start. The organizations that lower risk the fastest are the ones that turn visibility into clear risk signals, analyze communication paths to find hidden dependencies, and check segmentation against real operational flows.

That’s how OT security becomes real, measurable, and truly useful for the people who run the process.

Dan Ricci
Founder, ICS Advisory Project

Dan Ricci is founder of the ICS Advisory Project, an open-source project to provide DHS CISA ICS Advisories data visualized as a dashboard to support vulnerability analysis for the OT/ICS community. He retired from the U.S. Navy after serving 21 years in the information warfare community.
