Cyber security audits and assessments have become a popular instrument for protecting and improving the security of SCADA / DCS systems: they tell us where we stand with regard to security and what needs to be done. But what makes for a good security assessment? Many companies offer these services today, both control system vendors and traditional IT providers, and large differences in quality and price exist. So let’s take a closer look.
When we evaluate cyber security, we distinguish between two types of cyber security gaps: one I will call the design gap, the other the implementation gap.
The design gap exists when security countermeasures are missing from the security architecture, either because they were never implemented or because a device lacks the capabilities. Examples in the technology security architecture are missing firewalls, poor network segmentation, missing encryption, missing antivirus, missing authentication, and so on. Examples of missing countermeasures in the operations security architecture are missing patch management processes, backup processes, security incident handling processes, and the like. The design gap is measured by inspecting for compliance with a reference such as ISA99.03.03 or specific national / industry regulations (OLF, CIP, ictQATAR, WIB, etc.).
The implementation gap exists when the countermeasures (cyber security controls) show deficiencies in their configuration, or are available but not configured at all. Examples in the technology security architecture are badly configured firewall rule sets, weak passwords, and insufficient hardening of equipment. The implementation gap in processes (the operations architecture) is measured using a maturity model, where processes are judged on how well they are defined, repeatable, manageable and continuously optimized.
For the design gap we need a reference: some objective criterion that specifies whether a countermeasure is required or not. If we left this decision to an individual engineer’s skills and experience, we would lose consistency. The reference must specify which countermeasures need to be implemented to counter a specific threat level. This can be the method used by ISA 99, which specifies a set of security requirements for each of the four Security Assurance Levels (SAL), or a more direct approach such as that taken by ictQATAR, which specifies the requirements for critical information infrastructure in its Guidelines for ICS Security. Of course, such a specification needs to be written in general terms so it can withstand the fast-changing technology and threat landscape.
We call measuring the design gap the audit portion, because we compare the security architecture (technology and operations) against a reference of requirements. The audit creates a complete inventory of all security controls present, without evaluating how effectively they are implemented. The outcome is an evaluation of the security design for both the security technology and the security operations.
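At its core, the audit comparison can be thought of as a set difference: the countermeasures the reference requires, minus those found in the inventory. The sketch below illustrates this under assumed names; the control labels and the reference list are placeholders, not drawn from any specific standard.

```python
# Illustrative reference of required countermeasures (placeholder names,
# not quoted from ISA 99 or any other standard).
REFERENCE = {
    "firewall", "network_segmentation", "encryption", "antivirus",
    "authentication", "patch_management", "backup_process",
}

def design_gap(inventory):
    """Countermeasures required by the reference but absent from the site inventory."""
    return sorted(REFERENCE - set(inventory))

# Hypothetical inventory collected during the audit.
inventory = {"firewall", "antivirus", "authentication"}
print(design_gap(inventory))
# → ['backup_process', 'encryption', 'network_segmentation', 'patch_management']
```

The same comparison works for operations controls: the reference lists the required processes, and the audit records which ones exist.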
To determine the implementation gap, we need to evaluate the effective implementation of each security control. This can be the correctness of a technical implementation (proper hardening, adequate filter definitions, etc.) or the maturity of a security operations process (defined, repeatable, manageable, etc.). We call this part the assessment. The assessment also requires references, such as those provided by NIST, CIS and the Bandolier project.
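The maturity evaluation of operations processes can be sketched as a simple scoring ladder. The level names follow the text; the numeric scores, the process names, and the target level are assumptions for illustration only.

```python
# Assumed maturity ladder, lowest to highest (level names from the text).
MATURITY = ["absent", "defined", "repeatable", "manageable", "optimizing"]

def maturity_score(level):
    """Map a maturity level name to a numeric score (0 = absent)."""
    return MATURITY.index(level)

# Hypothetical assessment results for three operations processes.
processes = {
    "patch_management": "defined",
    "backup": "repeatable",
    "incident_handling": "absent",
}

# Flag every process below an assumed target maturity of "repeatable".
target = maturity_score("repeatable")
gaps = {name: lvl for name, lvl in processes.items() if maturity_score(lvl) < target}
print(gaps)
# → {'patch_management': 'defined', 'incident_handling': 'absent'}
```

The ordering of the ladder is what matters here: a process scored below the target level becomes part of the implementation gap, regardless of which numeric scale is used.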
A good audit and assessment should be objective, and the results should be repeatable and consistent. Objectivity is needed with regard to the level of protection required, the quality of the design, configuration and implementation, and of course the observations themselves.
Another important factor is the completeness and thoroughness of the assessment. For example, just running a vulnerability scanner and documenting the output in a report will not identify all security issues. Vulnerability scanner plug-ins primarily capture general IT deficiencies; the specific control system requirements and the software components used in SCADA / DCS are often missed. So it is important to collect all configuration and installation data and make certain every software component is accounted for in the vulnerability analysis.
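One way to enforce this completeness check is to compare the full component inventory against the components the analysis actually covered. A minimal sketch, with purely hypothetical component names:

```python
def unaccounted_components(installed, analyzed):
    """Components collected from the configuration data but missing from the analysis."""
    return sorted(set(installed) - set(analyzed))

# Hypothetical inventory from the configuration / installation data.
installed = {"hmi_runtime", "opc_server", "historian", "windows_server"}

# Typical IT-centric scanner coverage (assumed): the general-purpose
# components are covered, the control-system-specific ones are not.
analyzed_by_scanner = {"windows_server", "historian"}

print(unaccounted_components(installed, analyzed_by_scanner))
# → ['hmi_runtime', 'opc_server']
```

A non-empty result means the vulnerability analysis is incomplete and the remaining components need to be examined by other means.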
That brings us to the security analysis process, an important process because here we determine the risk to the SCADA/DCS. Security analysis identifies vulnerabilities and configuration deficiencies and “translates” them into risk. It is not always straightforward to see how a particular vulnerability impacts the SCADA/DCS when exploited, or how widely the vulnerable targets are distributed. This translation from vulnerability to risk is called risk profiling. Risk profiling allows us to rank risks, which is important because we need to decide which risks to remediate and which can be accepted as residual. For this, we need metrics to differentiate between levels of risk.
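As an illustration of such a metric, a common approach is to score each finding by likelihood and impact and rank by the product. The sketch below assumes 1-to-5 scales and a cut-off threshold; the scales, threshold, and risk names are all illustrative, not prescribed by the text.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # assumed scale: 1 (rare) .. 5 (frequent)
    impact: int      # assumed scale: 1 (minor) .. 5 (loss of control)

    @property
    def score(self) -> int:
        # Simple likelihood-times-impact metric for ranking.
        return self.likelihood * self.impact

# Hypothetical findings translated into risks.
risks = [
    Risk("default_passwords", likelihood=4, impact=5),
    Risk("unpatched_hmi", likelihood=3, impact=4),
    Risk("verbose_banner", likelihood=2, impact=1),
]

THRESHOLD = 6  # assumed cut-off between remediation and accepted residual risk
ranked = sorted(risks, key=lambda r: r.score, reverse=True)
remediate = [r.name for r in ranked if r.score >= THRESHOLD]
residual = [r.name for r in ranked if r.score < THRESHOLD]
print(remediate, residual)
# → ['default_passwords', 'unpatched_hmi'] ['verbose_banner']
```

Whatever metric is chosen, the point is that it is applied consistently, so the ranking of risks does not depend on who performs the analysis.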
The completeness, thoroughness and quality of the security analysis are what separate a good security assessment from a bad one. This is where we see many of the differences in overall quality and cost. It is easy to capture the low-hanging fruit, but hard work to detect all the vulnerabilities in a system.