This is the first part of a two-part series that explores the reasons behind the failure of security technologies to protect companies and their data from emerging threats in the age of cloud, connected devices and an ever-expanding threat surface.
After working in the cybersecurity industry for the last 15 years, what I find most striking is that, despite all the money spent on cyberdefense, the number of data breaches every year continues to increase. Data theft is now so common that, according to a report by McAfee, more than 6 out of 10 IT professionals have experienced a serious data breach.
While it may seem counterintuitive, growing reliance on technology is making companies more vulnerable. Intruders from outside the network and insider threats alike are evolving with our defenses, as connected devices proliferate and the migration of sensitive data assets to the cloud presents a lucrative target. With each technological advancement for business, we broaden the attack surface, and assaults on our computing resources become more sophisticated.
Too Much Security Technology Monitoring Networks
Since the beginning of cybersecurity, most companies have focused the majority of their spending on walling off their core infrastructure with the best technology to stop attackers from getting in. Every time a new threat or attack vector emerged, we added another network technology to protect the business from it.
Eventually, we had too many consoles to monitor and too much information to correlate between them, and the security information and event management (SIEM) market was born. Initially, SIEM promised to reduce alert fatigue and help us find the root cause of every problem. Unfortunately, it didn’t work out as advertised: SIEM became another extremely complex piece of technology that was even more difficult to manage than the network appliances it was meant to monitor. It did one thing well that made it worth paying for, though: It provided compliance reports.
The dirty little secret of the security industry is rooted in a small company called VirusTotal, which was acquired by Google. VirusTotal is the single source of truth, and the source of new signatures on a daily basis. Nearly all security vendors use VirusTotal to prove whether a threat they have identified with their solution is new. Security manufacturers also trade their signatures with VirusTotal so they can access the broad library of threats it catalogs.
Tools like anti-virus products package the VirusTotal signatures into DAT files that are sent to the scanning engines on their customers’ machines on a daily or weekly basis. The core technology is the same for all AV manufacturers. Since more than 90% of the signatures in the DAT files are essentially the same, for many years there were only minor variances in performance between the different AV products. If your firewall and IDS/IPS are also using the same signatures as your AV, there is a good chance that a threat unknown to VirusTotal would succeed on your network.
Unless “patient zero” has been infected and identified, VirusTotal will not have a signature for a net new threat. Attackers are well aware of this and have built tools that recompile old malware or generate entirely new variants. They even sell malware as a service on the dark web, paid in bitcoin; if the malware isn’t successful, they will build you new versions until it is. Malware is proliferating at rates that will soon be unmanageable by traditional security-as-an-afterthought technology. This illustrates the reactive nature of the entire security industry.
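To make the signature model concrete, here is a minimal sketch of hash-based matching. It is purely illustrative: the `KNOWN_BAD` set, the sample bytes and the `scan` function are hypothetical, not any vendor's actual engine, but the failure mode is the same one described above: a trivially repacked sample produces a new hash and walks past the signature check.

```python
import hashlib

# Hypothetical signature database: SHA-256 hashes of known-bad samples.
KNOWN_BAD = {hashlib.sha256(b"old_malware_payload_v1").hexdigest()}

def scan(file_bytes: bytes) -> bool:
    """Signature match: flag the file only if its hash is already known."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD

original = b"old_malware_payload_v1"
repacked = original + b"\x00"  # one padded byte: same behavior, new hash

print(scan(original))  # True  -- caught by the existing signature
print(scan(repacked))  # False -- the repacked variant is invisible
```

Until a copy of the repacked variant is captured and a new signature is distributed, every customer relying on that DAT file is exposed.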
Advanced Detection Technologies and AI
Advanced detection technologies also have their challenges. When I worked at FireEye, sandboxing was the popular way to detect net new threats that were unknown to Virus Total. During my time there, where I directed technology alliances, we built more than 60 integrations with other manufacturers.
However, attackers knew this and began encrypting their payloads. Unless everything is decrypted before it reaches the sandbox, nothing will be found. FireEye partnered with another manufacturer whose appliance could decrypt network traffic, which significantly improved its ability to identify new threats for a period of time. Next, the attackers found a way to plant their code without letting it run until the sandbox timed out. Attackers have developed several other evasion strategies since, and the game continues. Sandboxing is still the preferred method for forensics once a threat is found, but by that point, someone has already been infected.
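The timeout evasion can be sketched in a few lines. This is a hypothetical illustration: the elapsed time is injected as a parameter for clarity, whereas real samples read the clock, sleep, or burn CPU cycles to the same effect.

```python
SANDBOX_TIMEOUT_SECONDS = 300.0  # typical analysis windows last minutes

def stalling_dropper(elapsed_seconds: float) -> str:
    """Act benign until the likely analysis window has passed.

    elapsed_seconds is injected here for illustration; a real sample
    would measure time itself rather than be told it.
    """
    if elapsed_seconds < SANDBOX_TIMEOUT_SECONDS:
        return "benign"            # all the sandbox ever observes
    return "malicious behavior"    # what runs on the victim machine later

print(stalling_dropper(120))   # inside the analysis window -> "benign"
print(stalling_dropper(3600))  # long after the sandbox gave up
```

Because the sandbox only ever sees the benign branch, dynamic analysis reports a clean verdict and the file is delivered.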
The next challenge for advanced detection is that most of the modules that are part of the larger security product platforms require an outbound internet connection to the manufacturer’s cloud. Take away that connection and the effectiveness of these solutions drops significantly. They are essentially using your “bad day” to educate the rest of their platform.
Artificial intelligence (AI) has been a buzzword for the last few years, and AI and machine learning (ML) are widely marketed by almost all manufacturers. However, the maturity and effectiveness of their use is very hard to gauge. AI is well suited to tasks such as breaking down complex code, and ML can then be used to look for repeating patterns.
To be efficient and effective, these models require very large data sets on which to train, and it can take years and terabytes of data to make them useful. There are evolving maturity models for how companies use AI that help demonstrate its effectiveness. Companies that have been using AI and ML for more than four years have a better chance that the technology has matured enough to provide real protection.
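As a toy illustration of what "looking for repeating patterns" can mean in practice, one of the simplest feature sets is byte n-gram frequencies. The byte sequences, threshold values and function names below are invented for this sketch, not any vendor's actual model:

```python
from collections import Counter

def byte_bigrams(data: bytes) -> Counter:
    """Count overlapping 2-byte patterns -- a toy feature extractor."""
    return Counter(data[i:i + 2] for i in range(len(data) - 1))

def similarity(a: Counter, b: Counter) -> float:
    """Fraction of shared bigram mass between two samples (0.0 to 1.0)."""
    shared = sum((a & b).values())  # multiset intersection
    return shared / max(sum(a.values()), sum(b.values()), 1)

known = byte_bigrams(b"\x4d\x5a\x90\x00" * 50)  # a repeating byte pattern
variant = byte_bigrams(b"\x4d\x5a\x90\x00" * 50 + b"\x01\x02")
unrelated = byte_bigrams(bytes(range(200)))

print(similarity(known, variant))    # close to 1.0: near-duplicate
print(similarity(known, unrelated))  # close to 0.0: no shared patterns
```

Unlike the exact-hash matching shown earlier, a pattern-based score can still flag a lightly modified variant, which is the appeal of ML-assisted detection. The catch is the point above: the judgment is only as good as the volume and quality of samples the model was trained on.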
Customers believed that there was no way to stop every new threat to their environments, so endpoint detection and response (EDR) technology became the next big thing in security. If you can’t stop the bad guys, then record everything and hire an extremely expensive forensics team to dig through the data. Most customers do not have the resources or budget to do ongoing threat hunting on their networks, and the EDR manufacturers were more than happy to develop those services for them. You can hire the company that sold you the technology that is not stopping the attacks to watch it for you. The vendors call that managed EDR. I call it the fox guarding the henhouse.
Orchestration is another big buzzword. The goal of these tools is to deliver what SIEM promised several years ago: automated remediation of threats. Some use AI and ML, but most require significant ongoing configuration to be effective. Unfortunately, this is another after-the-fact technology, because it relies on all of the other security products in your network to provide the information needed to do the job.
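That dependence on upstream alerts shows up clearly even in a toy playbook sketch. The rule names, alert types and actions here are hypothetical, not a real SOAR product: the orchestrator can only act on alert types that other tools have already raised and that someone has already mapped to actions.

```python
# Map of alert types (raised by other security products) to actions.
PLAYBOOK = {
    "malware_detected":  ["isolate_host", "open_ticket"],
    "phishing_reported": ["quarantine_email", "open_ticket"],
}

def run_playbook(alert: dict) -> list:
    """Return remediation actions; anything unmapped goes to a human."""
    return PLAYBOOK.get(alert["type"], ["escalate_to_analyst"])

print(run_playbook({"type": "malware_detected", "host": "ws-042"}))
print(run_playbook({"type": "unknown_beacon"}))  # nothing upstream mapped it
```

A threat that no upstream product reports, or that no one has written a rule for, falls straight through to manual escalation, which is exactly the after-the-fact limitation described above.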
Legacy security vendors have been trapped in a complicated, losing cycle of software patches and updates of their own making. Customer infrastructure continues to become more and more abstracted from traditional network and data center infrastructure, and legacy solutions were never designed to support these new cloud-based infrastructures. We are starting to see new models for prevention, detection and resolution with security-as-a-service (SECaaS) providers.
We need to stop playing catch-up after the fact and get ahead of the game by focusing on what the bad actors actually want: the data. If we understand how our data flows, create and enforce policies that govern that flow, and protect the data where it rests when it is not flowing, threats to data can be reduced to nearly zero and digital trust can be restored.
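A data-flow policy of that kind can be sketched as a deny-by-default lookup. The classifications, destinations and `POLICY` table below are invented for illustration only:

```python
# Deny-by-default flow policy: a classification may only move to the
# destinations explicitly listed for it.
POLICY = {
    "public":       {"internet", "partner", "internal"},
    "internal":     {"partner", "internal"},
    "confidential": {"internal"},
}

def flow_allowed(classification: str, destination: str) -> bool:
    """Unknown classifications or destinations never flow."""
    return destination in POLICY.get(classification, set())

print(flow_allowed("confidential", "internal"))  # True: permitted flow
print(flow_allowed("confidential", "internet"))  # False: blocked exfil path
print(flow_allowed("unlabeled", "partner"))      # False: deny by default
```

The design choice that matters is the default: instead of chasing every new threat signature, the policy starts from "nothing moves unless explicitly allowed," so an unknown attack tool faces the same wall as a known one.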
In the second part of this series, the author discusses the failure of patch culture and the role of application development in resolving the failure of traditional security in the enterprise.
— Brian Stoner