John Ruskin, one of the great visionaries of the 19th century, said: “Quality is never an accident; it is always the result of intelligent effort.” In our continuing journey through the lifecycle of a threat pattern, we have now reached the testing phase. After analyzing requirements, assets, and threats, designing a general and reusable model for the threat pattern, and implementing that model in our security monitoring platform, we cannot assume no mistakes were made. This is why a well-structured and accurate testing process is vital – to validate that the pattern has been implemented correctly and to ensure we can effectively address the attack scenario.
Even in this phase, the typical process of the Software Development Life Cycle (SDLC) can help us head in the right direction. Once implemented, software is usually tested with two distinct but complementary methodologies: a white-box and a black-box approach. Let’s see how this would apply to our newly implemented threat pattern.
White-box testing is intended to verify the threat pattern's logic and ensure the implementation is consistent with the requirements. If the design phase has been approached correctly, the different components making up the pattern should already be well defined and documented. For each of them, we need to prepare a list of specific inputs and expected outputs that trigger each part of the implementation. As with any software, testing progress can be quantified by the percentage of code covered by the tests.
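To make this concrete, here is a minimal sketch of what such a component test could look like. The detection component (a failed-login threshold check) and its field names are hypothetical, invented purely for illustration; the point is the structure: each component gets a table of specific inputs paired with the output we expect.

```python
# Hypothetical detection component: flags a source IP when it generates
# more failed logins than a threshold within the inspected event batch.
def failed_login_component(events, threshold=5):
    """Return the set of source IPs exceeding the failed-login threshold."""
    counts = {}
    for event in events:
        if event["outcome"] == "failure":
            counts[event["src_ip"]] = counts.get(event["src_ip"], 0) + 1
    return {ip for ip, n in counts.items() if n > threshold}

# White-box test table: specific inputs paired with expected outputs,
# one entry per behavior of the component we want to exercise.
test_cases = [
    ("below threshold",   [{"src_ip": "10.0.0.1", "outcome": "failure"}] * 5,  set()),
    ("above threshold",   [{"src_ip": "10.0.0.1", "outcome": "failure"}] * 6,  {"10.0.0.1"}),
    ("successes ignored", [{"src_ip": "10.0.0.2", "outcome": "success"}] * 10, set()),
]

for name, events, expected in test_cases:
    assert failed_login_component(events) == expected, name
print("all component tests passed")
```

In practice each threat-pattern component would get its own table, and the ratio of exercised branches to total branches gives the code-coverage figure mentioned above.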
This process may be time consuming – and time is a precious resource in a Security Operations Center (SOC) – so automating even part of it is key. Luckily, at this stage we should have a good amount of historical data at our disposal, as information has already started flowing into the system. Accurate sample selection is critical: the samples must ensure both good horizontal coverage (i.e. the number of components tested) and good vertical coverage (i.e. the variety of the inputs). Mapping each selected sample to the component under test makes validating the logic and the threat pattern's effectiveness immediate. Furthermore, the same samples can be re-injected into the platform after any change to ensure the output is still consistent with the requirements.
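The mapping and re-injection idea can be sketched as a small regression harness. Everything here is an assumption for illustration – the sample fields, component names, and the `replay` stand-in (which in a real deployment would call the monitoring platform's ingestion or replay API) are all hypothetical:

```python
# Hypothetical sample store: each historical sample is tagged with the
# component it exercises and the output recorded when it was first validated.
samples = [
    {"id": "s-001", "component": "failed_logins", "input": "auth failure burst", "expected": "alert"},
    {"id": "s-002", "component": "failed_logins", "input": "benign login",       "expected": "no_alert"},
    {"id": "s-003", "component": "geo_anomaly",   "input": "login, new country", "expected": "alert"},
]

def replay(sample):
    # Placeholder: a real implementation would re-inject the sample into
    # the platform and return the live output of the threat pattern.
    return sample["expected"]

def regression_run(samples, all_components):
    """Re-inject every sample and report failures plus horizontal coverage."""
    failures = [s["id"] for s in samples if replay(s) != s["expected"]]
    exercised = {s["component"] for s in samples}
    horizontal_coverage = len(exercised) / len(all_components)
    return failures, horizontal_coverage

failures, coverage = regression_run(samples, ["failed_logins", "geo_anomaly", "beaconing"])
print(failures, f"{coverage:.0%}")  # → [] 67%
```

Because the samples carry their expected outputs, the same run doubles as the regression check described above: after any change to the pattern, a non-empty `failures` list flags exactly which validated behaviors broke.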
Conversely, black-box testing examines the functionality of the threat pattern without dissecting its logic. At this stage, it is recommended to start from the attack scenarios originally elaborated during the analysis phase and determine the possible means an attacker could leverage to exploit the depicted situations. Many organizations find it beneficial to establish or engage a red team (playing the role of the attacker), with a blue team defending the kingdom by ensuring the implemented threat patterns behave as expected. At the end of the exercise, the two teams meet to review the results and list all the situations not detected by the current implementation, along with the threat patterns requiring further review.
This leaves us with the question of how to measure the patterns' effectiveness based on these test results. The simplest way is to evaluate the number of false positives and false negatives. These two parameters alone may be sufficient to understand how effective the threat pattern implementation is – based on the false negative rate – and how much potential “noise” it can generate – based on the false positive rate.
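As a minimal sketch of this evaluation, assume each test result pairs what the pattern did (alerted or not) with the ground truth established during the exercise (the activity was malicious or benign); the function name and result format below are my own, not from any particular platform:

```python
def effectiveness(results):
    """results: list of (alerted: bool, malicious: bool) tuples.

    False positive rate = benign events that alerted / all benign events.
    False negative rate = malicious events missed / all malicious events.
    """
    fp = sum(1 for alerted, malicious in results if alerted and not malicious)
    fn = sum(1 for alerted, malicious in results if not alerted and malicious)
    benign = sum(1 for _, malicious in results if not malicious)
    malicious = sum(1 for _, malicious in results if malicious)
    return {
        "false_positive_rate": fp / benign if benign else 0.0,
        "false_negative_rate": fn / malicious if malicious else 0.0,
    }

results = [(True, True), (True, False), (False, True), (False, False), (True, True)]
print(effectiveness(results))  # FP rate 1/2, FN rate 1/3
```

Tracking these two rates per threat pattern over successive test runs gives a simple, comparable effectiveness score without needing to open up the pattern's logic.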
Here’s a very important lesson every SOC should have learned by now: the most dangerous enemy may not be an obscure attacker, but rather the flood of false positives overwhelming analysts and preventing them from responding to the security issues that really matter! It is imperative to reduce the noise-to-signal ratio before it becomes insurmountable on our journey to protect the organization.
At the end of the process, a root cause analysis must be conducted on every issue identified, but beware of simply applying changes to the implementation – it would be a pity to get lost at the very end! Always start from the requirements and the results of the analysis before touching the code: did we miss the attack scenario entirely? Is there any data source we did not consider? Was it a mistake in the implementation? Approaching this stage with a simple checklist also helps to ensure a potential change here does not have a negative impact on other threat patterns.
It is important to be conscious that the work performed thus far may become obsolete in a very short time: new TTPs and attack scenarios may emerge, and the organization's business approach may evolve. To address this, in the last article of this series we will present the benefits of this structured and focused approach to maintaining and evolving a threat pattern in production.