A Brief History of SIEM
Security information and event management (SIEM) systems provide real-time analysis of security alerts generated by applications and network hardware. These products are also used to log security data and generate reports for compliance purposes. Stephen Gailey, Head of Solutions Architecture at Exabeam, discusses the evolution of SIEM from revolutionary security technology to modern industry benchmark.
To understand where we are today with security information and event management (SIEM), it’s important to first understand exactly how we got here. Like many modern security technologies, the history of SIEM isn’t a particularly long one. In fact, the first commercial SIEM products were introduced less than twenty years ago, around the start of the century.
My personal initiation into SIEM came around 1999, when my team and I at Deutsche Bank tried to build a SIEM-like tool of our own from scratch. Fast forward to 2006 and I found myself at Barclays Capital with budget to spend and only a few SIEM vendors to choose from. Even with the limited information available to me at the time, I knew enough to base my choice at least partially on the ability of a SIEM platform to scale effectively, something which remains critical today.
SIEM 1.0 circa 2006 – A revolutionary new approach to security
The arrival of the first generation of SIEM platforms heralded a new dawn in the data security industry, combining security event management with security information management for the first time. However, these early SIEM 1.0 platforms had an Achilles heel: they only scaled vertically, which quickly became a major limiting factor. As ever-larger hardware was needed to handle the load, the system eventually became I/O bound, at which point scaling came to an end.
At Barclays Capital we redesigned and re-platformed multiple times before realising we simply had nowhere else to go. We had achieved what at the time was the largest single SIEM deployment in the world at just over 650 million events per day. While that may seem laughable now, we were pretty proud of it at the time.
Scalability wasn’t the only limiting factor either. Getting data in and out of the SIEM was a major challenge, dashboards and reports were basic, and alerts were simplistic in nature, with very little in the way of feed correlation or enrichment.
SIEM 2.0 circa 2011 – A significant improvement, but still flawed
The second generation of SIEM arrived just in time. Perhaps unsurprisingly, the main difference from SIEM 1.0 was in the way it scaled: it did away with the centralised database and instead used big data technologies to allow horizontal scaling.
At Barclays, this allowed us to quickly break the previous ingestion barrier and we soon found ourselves comfortably processing 2.5 billion events per day. Furthermore, horizontal scaling meant we could add lower-spec hardware to the system and scale without limits: doubling the number of servers meant roughly double the data capacity or twice the performance.
SIEM 2.0 also allowed for better reporting and dashboards, as well as the ability to query historical data for the first time. With SIEM 1.0, long data retention periods were largely irrelevant, as the technology could not effectively query data more than a few weeks old. SIEM 2.0’s big data architecture meant data could be queried over much longer periods, with responses returned in a reasonable amount of time.
However, if scalability was SIEM 2.0’s greatest gift, it eventually became its greatest curse. Not because it didn’t work, but because it simply moved the problem further down the operational pipeline. Before SIEM, security professionals were essentially blind, unable to see what was happening in their own IT environments. The first generation of SIEM gave them sight, but the second generation took it away again by presenting more data than they could possibly cope with. It also failed to innovate around alerting, leaving teams reliant on pre-configured alerts that at best correlated across only a few data elements.
SIEM 3.0 circa 2015 to present – Revolution rather than evolution
Enter the third generation of SIEM, a radical iteration focussing on operational capability rather than technology. SIEM 3.0 introduced analytics into the mix for the first time, through the application of machine learning.
What makes SIEM 3.0 different is the step away from pre-configured alerting towards a more risk-based approach. Alerts remain valuable when you are looking for simple, already-known conditions, but in modern security management teams are just as likely to be faced with unknown, zero-day threats. Analytics-based security monitoring applies statistical techniques to large volumes of event data in order to build operational models, which are in effect baselines for each individual user and entity in your environment. This technique is known by Gartner and others as user and entity behaviour analytics (UEBA).
In a UEBA system, an initial baselining period establishes what ‘normal behaviour’ looks like in an environment. Future deviations from this baseline then attribute risk to the users and systems involved, allowing security teams to focus on genuine risks rather than false alarms.
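To make the baselining idea concrete, here is a deliberately simplified, hypothetical sketch: it learns a per-user mean and standard deviation of daily event counts and flags large deviations as risky. Real UEBA products model far more features (logon times, hosts accessed, data volumes, peer groups) with much richer statistics; this is purely illustrative and not how any particular vendor implements it.

```python
# Toy per-user baseline: mean and standard deviation of daily event counts.
# Purely illustrative of the baselining concept described above.
from statistics import mean, stdev

def build_baseline(daily_counts):
    """Learn a simple baseline (mean, stdev) from historical daily event counts."""
    return mean(daily_counts), stdev(daily_counts)

def risk_score(observed, baseline, threshold=3.0):
    """Return a z-score-style deviation; values above `threshold` suggest anomalous activity."""
    mu, sigma = baseline
    if sigma == 0:
        return 0.0
    z = abs(observed - mu) / sigma
    return z if z > threshold else 0.0

# Example: 30 days of 'normal' activity for one user, then score new observations.
history = [120, 98, 110, 105, 130, 95, 115] * 4 + [102, 118]
baseline = build_baseline(history)
print(risk_score(4000, baseline))  # large deviation -> non-zero risk score
print(risk_score(112, baseline))   # within normal range -> 0.0
```

The point of the sketch is the shift in mindset: rather than firing a pre-configured alert on a known condition, the system scores how far observed behaviour strays from each user’s own learned normal.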
SIEM 3.0 also removes the final scalability problem in SIEM, the operational process piece. Of course, this does require security teams to take what is often a more difficult step than replacing a security technology – changing an existing operational process, many of which have been embedded for years. Political resistance to change can make this final step the hardest of all, but few CISOs can look ahead and see a rosy future if they stick with second generation SIEM technology.
When relying on alert-based SIEM technology, security operations are far less likely to spot breaches before the kill chain is complete. Worse still, such operational teams are increasingly staffed by overworked and poorly trained security analysts doing little more than hitting targets for dismissing false positives. No technology which sustains such inefficient and ineffective working practices can last long in a world where both the breaches and the fines get bigger every year. Who knows what SIEM 4.0 will bring, but until its arrival, SIEM 3.0 is undoubtedly the enabling technology of a modern, effective security operations centre (SOC).
Stephen Gailey
Stephen is an experienced Information Security Manager used to working in highly regulated environments, dealing with compliance and legislative challenges from multiple jurisdictions. Much of Stephen’s career has been spent in financial services, primarily investment banking, but also in retail banking, telecoms, utilities and insurance environments.
Stephen is currently Head of Solutions Architecture at the Smarter SIEM company, Exabeam. He joined Exabeam from Splunk, where he ran the Financial Services practice and the EMEA Security Practice. Prior to Splunk, Stephen spent seven years at Barclays where he was the Group Head of Information Security Services. At Barclays, his team built what was probably the largest SIEM in the commercial world and delivered some of the largest programmes around privileged access management and data governance and control, as well as many other projects. He was also instrumental in the rapid integration of Lehman Brothers and in helping the bank unify its security organisation across several distinct business units.
Stephen’s other key achievements include: creating and running the Deutsche Bank Global Internet Services team; helping Eircom to create a formalised IT governance structure based upon international standards; and developing a major e-commerce and trading platform for Standard Bank Offshore.