By Jason Matlof, Executive Vice President, LightCyber
Today, it’s fashionable to talk about detection and leave behind the vestiges of a security model built entirely around prevention. Prevention is still a necessity; however, everyone knows that complete prevention is a fallacy.
Despite important advances in firewall technology, sandboxing solutions, and security intelligence, it is no longer possible to prevent every attack. A motivated attacker can and will get into your network. Preventative measures may protect against 95 percent or more of attempted attacks, but eventually one will be successful, perhaps through extremely clever spear phishing or social engineering that compromises a valid user account. Once an attacker is inside, the real challenge is detecting them quickly, before theft or damage ensues.
So the question is: what exactly does a tool detect? Does it alert on the ubiquitous presence of malware, or does it actually call out an attack currently in progress? If a tool is intended to find an attacker, how well does it pinpoint the attack?
There is a confusing array of terms and vendor claims about detection. One thing is certain: the problem is not a lack of data. In fact, most security organizations suffer from data overload, particularly in the sheer number of daily alerts pumped out by their security systems. A Ponemon Institute survey of enterprises found that the average enterprise receives 16,937 alerts per week. Such a high volume makes it unlikely that a security operator could find real indications of attack activity buried under so many alerts dominated by false positives and minimally valuable warnings about the mere presence of malware.
A swift remedy for confusing marketing claims would be to have each vendor report two metrics: accuracy and efficiency. Efficiency is simply the number of alerts a security system produces in real customer deployments. Too many alerts overwhelm a security team and generally mean that each one is of low quality. To allow objective comparisons, the results should be normalized to alerts per 1,000 endpoints per day.
For instance, a system that produces 10 alerts per day in a network with 5,000 endpoints is far better than one that produces four per day in a network with 500 endpoints: the first generates two alerts per thousand endpoints per day, the second eight. Having this kind of data would help organizations make better decisions about the purchase and use of security tools. A system producing 400 alerts per 1,000 endpoints per day is not just at an inherent disadvantage against a system producing 10; it is likely simply not usable.
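As a rough sketch, the normalization described above reduces to simple arithmetic (the function name here is my own, not a standard metric name):

```python
def alerts_per_1000_endpoints(daily_alerts: float, endpoints: int) -> float:
    """Normalize a system's daily alert volume to 1,000 endpoints,
    so deployments of different sizes can be compared directly."""
    return daily_alerts / endpoints * 1000

# The two example deployments from the article:
print(alerts_per_1000_endpoints(10, 5000))  # 2.0 alerts per 1,000 endpoints per day
print(alerts_per_1000_endpoints(4, 500))    # 8.0 alerts per 1,000 endpoints per day
```

On this scale, the 5,000-endpoint deployment's 10 daily alerts are one quarter the burden of the smaller deployment's four.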
The other component of this assessment is the accuracy, or usefulness, of the alerts. According to the Ponemon study, only 4 percent of alerts are actually investigated when organizations are flooded with thousands of them. Alerts must point to specific threats with a high degree of accuracy, so that they practically jump out at the security operator. This means there can only be a small, workable number of alerts, and all of them must be valuable.
Usefulness can be a matter of definition, but it has to involve a specific action on the part of the security operator rather than a passive result. For an alert to be valuable, it has to be investigated, remediated, or resolved in some hands-on way. Alerts that are auto-archived or whitelisted are, by definition, not useful to the analyst receiving them. Usefulness as a metric can be expressed as the ratio of useful alerts to total alerts, stated as a percentage. Here there is no need to normalize the results to the size of the environment.
At LightCyber, we believe it’s time for vendors to report accuracy and efficiency averaged across their customer base. These metrics would be enormously helpful in deciding which system to buy. They might also drive a new focus on making detection more accurate and efficient. Such a focus could shape product development and result in tools that are more effective in dealing with the most pressing security issues.
Accuracy and efficiency also work together to make a security group more productive. Studies show that many security and general IT organizations waste considerable time on inefficient or inaccurate security alerts. As much as two-thirds of a day can be spent on a wild goose chase that brings a security operator no closer to finding an attack or dangerous threat. This is especially damaging when most organizations are short-staffed, overworked, or both. Too many alerts, and alerts of poor quality, also contribute to fatigue and low morale, which, in turn, can drive turnover and promote sloppy, half-hearted work.
Finding network attacks requires a new standard for fast, accurate detection. Let’s set a new standard to ensure that security operators have the necessary tools that increase their effectiveness rather than reduce it.
Jason Matlof is a seasoned executive with nearly 20 years of experience in the technology, networking, and network security industry. Most recently, Matlof served as VP of Worldwide Marketing for A10 Networks through its IPO (NYSE: ATEN), and was previously Vice President of Marketing at Big Switch Networks, where he built category leadership in the emerging software-defined networking (SDN) product category.