In this episode, Simon and Steve discuss what ABA or next-gen SIEMs tell you, how they evolved, and what you need to do to configure and deploy one.
Intro
Welcome to Right Size Security, a podcast where we discuss all manner of infosec: from enterprise security to practical security every business can use, all the way down to end user security. Your hosts for Right Size Security are me, Simon Gibson, longtime CISO — and me, Steve Ginsburg, former head of operations and CIO. For this episode of Right Size Security, we’ll discuss advanced behavioral analytics and threat protection technology — what we’re calling the next-generation SIEM (Security Information and Event Management), or user behavioral analytics, UBA. We settled on ‘advanced behavioral analytics’ because it seemed to most accurately describe what these products actually do. Today we’re going to be discussing what ABA or next-gen SIEMs tell you, how they evolved, what you need to do to configure and deploy one, considerations when you’re shopping for one, and how one will fit with the rest of your security program. We’re also going to discuss some of the challenges, and of course, we’ll have our short topics before we get into it. Thanks for tuning in.
Transcript
Simon Gibson: All right, so today on Right Size Security we’re going to discuss SIEMs and next-generation advanced behavioral analytics, but before we get into it, Steve: voting machines. We had a little conversation about them.
Steve Ginsburg: Sure, yeah. I just thought it would be an interesting topic to maybe address a little bit, both for itself and then for kind of the wider implications for enterprise security and how those things — the approach — might be interlocked. When I think about voting machines, obviously we’re at a point where for years different municipalities have been moving towards digital voting, and there’s always been a lot of concern about, ‘How do you make these things safe?’
It’s something I’ve only read a little bit about, but I’ve tried to wrap my head around it as well, and what’s interesting... one of the things I saw was that one of the leading voting machine companies has now flipped and said, ‘Well, you need a paper trail.’ So one of the principles of security in that respect is: ‘Okay, there needs to be some outside-the-system verification,’ and kind of a backup from there. And then when I think about them, I think about, ‘What would make it the most secure?’
So for me, it would seem like open-source software would be great, and something where every push — as we see in the open-source community — every push is done from an open repository. You probably have some thoughts about the security around the tunnels to do it, the cryptography that needs to be in place to make sure of that... signing keys everywhere, obviously, and that type of thing. It’s that kind of classic question: how do you make sure the administrators of the system can’t be corrupted? And then to analyze it, obviously, you have to go up the full stack. What’s the physical security of each of the facilities? What’s the physical security of the hardware? Because if there are the wrong chips on the motherboard, then you still might not have security even if the software is secure. Then the system and OS level, those libraries, then the application level and beyond.
Simon Gibson: Yeah, I have done a little bit of thinking and know some people — again, this goes back to our discussion last episode when we considered privacy — it is a very complicated ecosystem, because even when you think about voting machines that produce a paper record, and you have a human verify that this is indeed what they meant to vote for, and you sort of hope that all works, you still need computers to get people on voter rolls. You still need people to be able to sign up to vote; you need to have a ballot delivered to your house. You still need all that, and that all involves computers.
Now you have that whole part of the universe to consider as well, apart from just the machines: if the voting machines become too hard to attack, somebody will push somewhere else. And the whole of it is what I think you have to look at holistically when it comes to protecting this. And that’s where guys like Matt Blaze come in, for example — he’s a professor who studies voting machines, and his class is entirely devoted to voting security and the ecosystem around it — and I think if we’re going to have a real serious conversation about this, then this is one of those things where government needs to involve the academics and have a real serious discussion. That’s sort of the same way Wassenaar was done. Say what you want about how that’s working right now, but having a real serious conversation about it is what we need to do.
Steve Ginsburg: Yeah, I think one of the things I’m encouraged about is: it is a worldwide problem. So solutions should be coming up all around the globe that could contribute to that and hopefully we’ll see maybe some unified efforts, almost like a U.N. of voting or something like that; an international effort to bring the best practices forward. And perhaps that’s already underway and I’m just not aware of it.
But I think, for our conversation about SIEMs, it’s interesting because... it’s ultimately the same kind of question, which is: ‘How do I know everything that’s going on that I should know to make sure that these systems are secure, or at least as secure as they can be, given what efforts we are willing to make in this area?’
Simon Gibson: Absolutely. All right, so let’s get into it: advanced behavioral analytics. Let’s get into SIEMs now, or advanced behavioral analytics platforms, user behavioral analytics — call them what you want. My feeling is that these tools are designed and deployed to detect behavior, wanted and unwanted: to aggregate logs, collect telemetry from systems, do analysis of baseline normal (or normal-ish) and abnormal (or abnormal-ish), and give that information to the security teams, or, very often, as is the case, the operations or systems administration team.
When we think about the earliest SIEMs, there were the ArcSights and NetWitnesses of the world, but the one that really caught fire and took off was Splunk, and that was built for systems administrators. It was really built to answer a question in a large group of open systems machines — hundreds, thousands, tens of thousands — when an event occurred: what was the first event that caused the cascading failure? Or what was the message that was sent through the system that caused the problem? Splunk was designed to look at that. And that’s a good way to think about the current generation of advanced behavioral analytics platforms.
Steve Ginsburg: Yeah, I think the correlation was obviously a very central part — and we talked about that before — correlation and search were the key characteristics of Splunk that really moved it forward. Before that, it would seem most administration teams were hopefully moving towards taking their syslogs into a unified place, and that still has its own merit, no matter how you do it, for sure, and then starting to run maybe regexes and custom Perl, that kind of thing.
Simon Gibson: I mean, that was it. It was all awk and grep, and Splunk came along and said, ‘You don’t need to do that. We can make a regular-expression search engine that isn’t tied to any particular data structure. You pick it, and it’s across a time continuum. It’s a regular expression across time. And we will track that for you, whatever you’re looking for.’ And I think that was kind of the heart of the explosion — well, certainly the adoption — of SIEM by security teams.
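To make that ‘regular expression across time’ idea concrete, here is a minimal sketch in Python of the pre-Splunk workflow Simon describes: one pattern applied across a window of plain-text syslog archives. The file glob, the pattern, and the dates are hypothetical, and since classic syslog timestamps carry no year, the sketch assumes one.

```python
import glob
import re
from datetime import datetime

# Hypothetical hunt: failed sudo attempts across archived syslog files --
# the kind of ad hoc awk/grep search that Splunk later made trivial.
PATTERN = re.compile(r"sudo: .*authentication failure.*user=(?P<user>\S+)")
WINDOW_START = datetime(2019, 6, 1)
WINDOW_END = datetime(2019, 6, 7)

def search_logs(path_glob="/var/log/archive/syslog-*.log"):
    hits = []
    for path in sorted(glob.glob(path_glob)):
        with open(path, errors="replace") as fh:
            for line in fh:
                m = PATTERN.search(line)
                if not m:
                    continue
                try:
                    # Classic syslog lines lack a year; assume one here.
                    ts = datetime.strptime("2019 " + line[:15],
                                           "%Y %b %d %H:%M:%S")
                except ValueError:
                    continue
                if WINDOW_START <= ts <= WINDOW_END:
                    hits.append((ts, m.group("user"), line.rstrip()))
    return sorted(hits)  # a regular expression, ordered across time

if __name__ == "__main__":
    for ts, user, line in search_logs():
        print(ts, user, line)
```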
The question was: ‘What was the first event that caused this problem?’ In my case, a lot of the time, it was a directory [that] got removed, or a system [that] got rebooted that shouldn’t have been, or some change on a system that wasn’t meant to [occur] and caused a problem. ‘Whodunnit?’ Splunk was especially good at analyzing those logs across many systems — tracing who logged in to what, what IP they came from, what command was issued, what return code was given, where they went next. Splunk was exceedingly good at taking many, many hundreds of gigs of logs and quickly answering those kinds of questions.
Steve Ginsburg: Yeah. In some of my writing, too, I was trying to get enterprise teams to think a little about: what is the sophistication of their teams? For example, some teams’ security operators would have no trouble writing that regex and that Perl — we’ve seen some pretty great examples back in the day. When you look at something like Splunk, a lot of that is done for you; you’re in the console. And it does two things. For advanced teams, it frees up their time to do something else. On the one hand, it’s great to hack regex; on the other hand, that takes time no matter how good you are. So if you’ve got tools that enable you to do that, you can be spending your advanced time on something else.
And then for teams who just don’t have that capability, it’s great to be able to just go into a tool and learn that and make use [of it]. And I think with Splunk, you see a lot of adoption in corporate environments, and also for security teams where maybe the folks aren’t that technical and can make that use as well.
Simon Gibson: Yeah. It is one of those things where — and I think this is especially true in security — it is true to some extent in operations, net ops, CIS ops — very often you don’t know the question you’re going to ask until you need to ask the question; you just have a lot of data. And so, if there’s a particular machine compromised or behaving in a certain way you’d never expected, you may never have a canned query for that question. And the ability to have all the information in aggregate — all the information meaning, if the question you need to ask is about 10,000 systems, can you ask that of 10,000 systems? Splunk was a very good tool for enabling that.
It’s funny, I just read a really good thread on Twitter by Phil Venables, who talked about the need for controls around data inputs. I can’t tell you how many times I’ve been involved in some sort of incident with Splunk, and I’ve looked for all the information, and two or three data feeds were down — and we didn’t have any information for a few weeks from a particular machine that was now very critical to whatever it was we were trying to understand. The controls you need to put around getting that information are just about as serious — or even more serious, in some cases — as what you do with the information and how you store it.
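That input-control problem lends itself to a simple automated check. Here’s a minimal sketch, assuming you can pull a last-event timestamp per data source out of your log store (the feed names and values below are invented): alert on any feed that has gone quiet.

```python
from datetime import datetime, timedelta

# Minimal sketch of a data-input control: alert when a log source goes
# quiet. Assumes last_event_times comes from your log store's metadata
# (hypothetical values shown below).
MAX_SILENCE = timedelta(hours=1)

def find_silent_sources(last_event_times, now=None):
    """Return sources whose most recent event is older than MAX_SILENCE."""
    now = now or datetime.utcnow()
    return {src: now - ts for src, ts in last_event_times.items()
            if now - ts > MAX_SILENCE}

if __name__ == "__main__":
    feeds = {
        "fw-dmz-01": datetime.utcnow() - timedelta(minutes=5),
        "proxy-hq": datetime.utcnow() - timedelta(days=14),  # silently dead
    }
    for src, gap in find_silent_sources(feeds).items():
        print(f"ALERT: no events from {src} for {gap}")
```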
Steve Ginsburg: Yeah. You want to practice with all areas of security to make sure when you have an incident — at least as much as you can envision, sometimes there will be an emergent type of incident that you’ve just never seen before, but as much as you can, you want to rehearse and not assume that you’re going to be ready for the forensics that you need.
It also reminds me of what I consider one central issue when I think about Splunk: as a commercial product, it is — or at least was, last time I checked — set up to charge by the amount of data going into the logging system. And it can become inordinately expensive if you have a tremendous amount of logging data.
Simon Gibson: Yeah. It can get real expensive, real quick. But to be fair, this isn’t just about Splunk; it was a good example. I do think that gets us to the right place to begin, though: what do you need to create a baseline, and what does that really mean? Because again, as successful as Splunk was, as much adoption as it saw, and as transformative as it was, it didn’t necessarily come with a lot of those analytics; those were community-generated for the most part. Now, Splunk has done a lot of that work; Splunk Cloud is a newer offering, and a lot of the analytics that were offered by other true SIEMs or behavioral analytics detection platforms, Splunk now has as part of its offering.
But really, talking about a baseline, at the end of the day, the holy grail of information security is observing an event in a system — in an ecosystem of users and computers and data and applications — and understanding, is this normal or not? Is this good or bad? Is this something we should do something about?
My favorite story is: you get a popup to change your password on Monday; you’re busy, you ignore it. You get the popup Tuesday, Wednesday, Thursday; finally, Steve, if you don’t change your password on Friday, you’re not going to be able to log in. Damn it; you change your password as you leave, and you come back Monday, and you’ve completely forgotten your password; you have no idea what it is. You’ve failed your password five or six times when your phone rings. And it’s your IT department. And they say, ‘Steve, can I help you?’ And you say, ‘Yeah, I’ve lost my password; I set it on Friday.’ They verify it’s you and promptly reset your password, and you’re off to work. You didn’t put a ticket in, which is always a pain when you don’t have a password, because you can never log in to put a ticket in; that was always sort of one of those catch-22s. But now the help desk has come along and made your day much easier. What happened behind the scenes there was that security just worked perfectly. You failed a control; because that was abnormal, somebody noticed, and they followed up to see if it was you logging in. If they’d called your desk or your cell phone, and you’d said, ‘No, that’s definitely not me,’ they would have alerted the security team and the security team would have gone into action. To me, that is the perfect example of information security working perfectly.
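Behind the scenes, that story is just a detection rule plus an out-of-band follow-up. A minimal sketch of the rule, with the event shape and the five-failure threshold as assumptions for illustration:

```python
from collections import defaultdict

# Sketch of the rule behind the help-desk story: N failed logins for one
# account is abnormal-ish, so raise an out-of-band verification task
# instead of silently locking the user out.
FAIL_THRESHOLD = 5

def check_failed_logins(events):
    """events: iterable of (username, outcome) tuples in time order."""
    failures = defaultdict(int)
    alerts = []
    for user, outcome in events:
        if outcome == "failure":
            failures[user] += 1
            if failures[user] == FAIL_THRESHOLD:
                alerts.append(f"verify out-of-band: is this really {user}?")
        else:
            failures[user] = 0  # a successful login resets the counter
    return alerts

print(check_failed_logins([("steve", "failure")] * 5))
```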
Steve Ginsburg: Yeah. And I see those things becoming even more difficult in the era of ephemeral infrastructure. I’m still wrapping my head around this too, as the year goes on and new products are coming out, and just thinking a little bit more about it, even though these kinds of concepts are not brand-new by any means.
But when you first start in information security... I go back far enough that it really was, ‘Okay, I can count how many computers I have; maybe someone’s going to bring something from home later on.’ Then it started to be that you have to deal with BYOD, and even BYOD routers and things like that, and hunt those down and kill them. And then you might have contractors and visitors; there’s that whole idea of your infrastructure changing. But even still, that’s all almost glacial compared to once the VMs got added in, and then Kubernetes clusters and things like that. That’s really now the primary mode: everything is meant to be spawned spontaneously and dynamically, even entire cloud environments. So really, I think knowing what’s normal is an even greater challenge than it was even a year ago.
Simon Gibson: Yeah. Fortunately, we’ve seen a lot of companies and infrastructure built around understanding this specifically: giving out the right permissions and credential sets for this sort of ephemeral nature, so that a microservice can connect to a data source with the right level of permissioning to get the data it’s supposed to. Those are all things that are much more solved now than they were in the glacial earlier state of infosec.
Steve Ginsburg: Right. We’re seeing non-perimeter-based security that’s policy-based, AI-driven, and intent-based.
Simon Gibson: And definitely identity- and role-based [security]. I think a good example of role-based security [is] some hospitals we talked to where there’s teaching going on: people are students in the daytime, and then at night they’re interns and they’re doctors, so they need different levels of access depending on what their role is. And that can just be a question of: are they on shift or not? Are they in class? And you can take that example to a microservice or a Kubernetes cluster or something.
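As a rough illustration of that hospital example, here’s a hypothetical sketch of shift-dependent, role-based access. The role names, permissions, and shift hours are all invented:

```python
from datetime import datetime

# Hypothetical sketch of the hospital example: the same person maps to a
# different role -- and different permissions -- depending on whether they
# are on shift.
PERMISSIONS = {
    "student": {"read:course-material"},
    "intern":  {"read:course-material", "read:patient-records",
                "write:patient-notes"},
}

def active_role(person, now=None):
    now = now or datetime.now()
    on_night_shift = now.hour >= 19 or now.hour < 7  # invented shift hours
    return "intern" if on_night_shift and person["is_intern"] else "student"

def allowed(person, action):
    return action in PERMISSIONS[active_role(person)]

alice = {"name": "alice", "is_intern": True}
print(allowed(alice, "read:patient-records"))  # depends on time of day
```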
Steve Ginsburg: We had some folks at Pandora who had multiple roles over the years, and some of them became literally the poster children in our minds and discussions for: ‘Oh, what about that person who’s been in five different roles in the company?’ Which is great; they had different things to contribute... but to your point, their policy roles change.
Simon Gibson: Yeah. And with SIEMs, when we think about how to deploy them and what the most important things are... it’s the baselining, and in order to do that, the instrumentation. At the fundamental layer — we can talk about ELK stacks, for example — it’s going to be the storage and the retrieval of the different types of telemetry and information, whether it’s syslogs, bespoke information from agents, SNMP, NetFlow records, data packets: from the core of your network where you might have the domain controllers or server farms, DMZs from corporate to data centers, DMZs from data centers to the Internet — all of those different places, those ingresses, all those key pivot places logging and sending their data. It has to go somewhere, so that’s a big component of these devices: to collect all that data.
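At its simplest, that collection layer is just a listener that accepts telemetry and writes it somewhere durable. A minimal sketch follows; a real deployment would use an ELK stack or a SIEM’s own forwarders, and port 5514 is chosen here only to avoid needing root for syslog’s standard port 514.

```python
import socketserver

# Minimal sketch of the collection layer: a UDP syslog listener that
# appends everything it receives to local storage, tagged by sender.
class SyslogHandler(socketserver.BaseRequestHandler):
    def handle(self):
        data = self.request[0].decode(errors="replace").strip()
        with open("telemetry.log", "a") as out:
            out.write(f"{self.client_address[0]} {data}\n")

if __name__ == "__main__":
    with socketserver.UDPServer(("0.0.0.0", 5514), SyslogHandler) as srv:
        srv.serve_forever()
```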
Steve Ginsburg: Yeah. And that’s amazing to be able to see all those streams concurrently and potentially work against... correlating them all in interesting ways and seeing emerging events come from them.
Simon Gibson: Back in the early days of SIEM, the very idea was quite simple. It was: you have a firewall at the perimeter and some sort of a DMZ and a firewall behind that. If an event happens at your perimeter DMZ firewall, maybe it’s okay. It’s when that event happens on the firewall between your interior and your DMZ, now you’ve got an event to correlate. Now you know this thing that was just noise actually got through and triggered an event on another firewall, and you can start to look at it.
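That two-firewall idea is easy to express as a correlation rule. A minimal sketch, with the event fields assumed for illustration:

```python
# Sketch of the classic two-firewall correlation: noise at the perimeter
# is ignorable until the same source triggers the interior firewall too.
def correlate(events):
    seen_at_perimeter = set()
    incidents = []
    for ev in events:  # events assumed to be in time order
        if ev["firewall"] == "perimeter":
            seen_at_perimeter.add(ev["src_ip"])
        elif ev["firewall"] == "interior" and ev["src_ip"] in seen_at_perimeter:
            incidents.append(ev)  # it got through: worth a human's time
    return incidents

events = [
    {"firewall": "perimeter", "src_ip": "203.0.113.7"},
    {"firewall": "interior",  "src_ip": "203.0.113.7"},  # correlated hit
]
print(correlate(events))
```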
But with that early first generation, the data was voluminous; it was just voluminous amounts of data. Packets were another great way of [capturing] information, but again, such tremendous amounts of data that storing them became exceedingly difficult.
Steve Ginsburg: Yeah. I mean, there really are so many layers to bring in now. As you said, it’s everything from network flows to threat feeds of external events that you want to compare to what’s happening on your network from there. Maybe you want to talk a little about threat feeds in the ABA world, just ‘cause that continues to evolve in terms of what things are available and how people integrate them into the system.
Simon Gibson: Yeah. So, I think threat intel is basically data about things that are already ‘known bad.’ Which is a good thing; the problem is, they’re already known bad, and people who are real[ly] serious about being bad generally know when they’re burned. That’s why I think you hear about different levels or classifications of threat feeds. Certain threat feeds will identify indicators of compromise, or known bad acts, but they’re not made very public, ‘cause you don’t want the bad guy to know he’s been burned. So there are these different layers. They’re important to have; they do give you some intelligence about what’s happening on your network... but they’re definitely not the answer by themselves.
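The basic threat-feed use case is a lookup: compare what you observe against the indicators the feed publishes. A minimal sketch, using RFC 5737 documentation addresses as stand-in ‘known bad’ indicators:

```python
import ipaddress

# Sketch of basic threat-feed matching: compare outbound connections
# against a list of known-bad indicators. Feed contents here are
# documentation addresses and an invented domain, purely illustrative.
IOC_NETWORKS = [ipaddress.ip_network("198.51.100.0/24")]
IOC_DOMAINS = {"evil.example.com"}

def match_ioc(conn):
    """conn: dict with 'dst_ip' and optional 'dst_host'."""
    ip = ipaddress.ip_address(conn["dst_ip"])
    if any(ip in net for net in IOC_NETWORKS):
        return "known-bad IP"
    if conn.get("dst_host") in IOC_DOMAINS:
        return "known-bad domain"
    return None

print(match_ioc({"dst_ip": "198.51.100.44", "dst_host": "cdn.example.net"}))
```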
I think for understanding those kinds of things, you need a few other things beyond that telemetry. You need to understand the hierarchy of employees. Because remember, the goal is to separate known good from known bad, or good-ish from bad-ish, because you may have an employee who gets a special project and is now accessing things that they maybe didn’t before, and it’s not bad, because they’re supposed to be doing it. But if the security team hasn’t been told that somebody from sales is supposed to be looking at HR information, then maybe that would look bad-ish.
But the goal of this is to understand not just the telemetry about the perimeter machine, but also about the hierarchy of who should be doing what, and then honestly, we sort of talk a lot about data loss prevention and understanding data flows. Data classification is a really important factor when we think about baselining as well. What is sensitive data? How can you define it? Certainly, if you’re looking at traffic leaving your perimeter, moving around, and you’re trying to get to that baseline, can you understand the data and who’s sharing what?
One of the ways I think the role of the CIO has changed so drastically these days is that CIOs, up until fairly recently, were very much in charge of data centers and networks and computers; maybe they were buying machines to run VMs on. Now, I think the CIO maybe still has some of that, but a huge part of the CIO’s responsibility is understanding the interconnections between cloud services. We have Salesforce, for example — what data can be shared with it from our Smartsheet deployment, or different employees? How can we get data from treasury to fulfillment? And payroll to the bank? It’s really this understanding of data models between these cloud providers.
Steve Ginsburg: Yeah. Absolutely. A lot of the work is spent, as you said, understanding the roles, and roles are something that — for those of my peers who have not already jumped deeply into this — there’s a lot of nuance. You mentioned a salesperson changing categories. When we went down this path, your initial take, of course, is going to be: ‘Engineers should have this class; systems operators, project managers have this class of access to data.’ But then you quickly find: well, project managers are so closely coupled with engineers that if you keep them separated, you very quickly have to start finding the unifying points, or it can’t work. They’re working on the same projects, that kind of thing.
And yet, to your point, you still ultimately need to protect data to a pretty high level, especially depending on what type of organization you are. If you’re a fintech company and you have financial data, you really have to protect it. Every company has HR data; you have to protect that, too.
Simon Gibson: And medical data...
Steve Ginsburg: Right, exactly.
Simon Gibson: Yeah. So I think the point, though, from all of this, is that in order for SIEMs to be effective or ABA threat protection tools to work well, you need to understand what is normal and what your network does: what endpoints are on it, where they communicate, where your data sources are, how human beings interact with the data, how the data is hosted on the applications, whether they’re microservices coming up and down or they’re humans interacting with databases, or license files that you’re hosting in your DMZ — whatever you’re doing. It’s that question of how all this works as an ecosystem, who should access stuff.
And again, like you were just saying, you may have a project manager and an engineer — if you don’t have a data classification program in place, it’s really difficult to explain why I need to give this project manager access to something only engineers have. Then you have to just say, ‘Well, this is what the engineers have, but he’s working with them,’ or ‘she’s working with them.’ If you have a data classification project, you can say, ‘This is definitely not stuff project managers can see, but because we have a data classification program, we can make an exception because now we understand what we’re doing.’
Steve Ginsburg: Yes. And for mid-sized companies and beyond, there’s really a lot of work here because it’s not only understanding what these roles are today, and what these data sources are today, but in most companies, all of this is very dynamic. Every department that you’re working with, they’re changing what software tools they use, they’re changing what outside vendors they use, multiple times a year, often multiple times a month if they’re really, really busy. And even who your experts are in that organization who can tell you what data is being used and how it should be classified, at least relevant to that department, those people are changing as well.
So really catching up with all of that, and making sure, to your point, that you’ve got a good reflection of that in the system, you really need tools that are as dynamic as [they] can be. If it’s not all happening electronically, which it probably isn’t, you need a lot of human effort to really make sure that that stuff is codified.
Simon Gibson: Yeah. And beyond effort, you need a mandate. You need an edict that says: ‘This shall be important.’ And it is generally not the first thing to come from the board or the CEO or the exec staff.
Steve Ginsburg: Yes. And few folks in a company would choose to have a meeting about data classification if they can be doing whatever their actual job is.
Simon Gibson: Yeah, so it’s a difficult balance to strike. So I think the last part, now that we’ve summed up the need for all the baselining and telemetry collection, is the usage of behavioral analytics tools. The two main use cases are forensic and real time.
So, real-time situational awareness: the ability to monitor endpoints, user behavior, perimeters, DMZs, proxies, access points — to monitor those in real time and detect something. A good example is an unexpectedly large amount of data leaving your network: for some reason, you see gigs and gigs being transferred and there’s no obvious reason for it. That could be a really good giveaway that your database has been compromised, and somebody’s in the middle of taking a whole bunch of sensitive information. So that’s the real-time side.
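That real-time exfiltration check reduces to comparing today’s outbound volume against the host’s own baseline. A minimal sketch, with the three-sigma threshold and the sample numbers as assumptions:

```python
import statistics

# Sketch of the real-time exfiltration check: flag a host whose outbound
# byte count is far above its own historical baseline.
def is_anomalous(history_bytes, current_bytes, sigmas=3.0):
    mean = statistics.fmean(history_bytes)
    stdev = statistics.stdev(history_bytes)
    return current_bytes > mean + sigmas * stdev

daily_bytes_out = [2.1e9, 1.8e9, 2.4e9, 2.0e9, 2.2e9]  # ~2 GB/day baseline
today = 48e9  # "gigs and gigs being transferred" with no obvious reason
print(is_anomalous(daily_bytes_out, today))  # True -> investigate
```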
The forensic use case is where somebody comes to you in a month or two, and says, ‘You’ve been breached,’ or ‘We realized this person did something,’ or ‘We have a problem, and we need to go back,’ and now you need to be able to understand exactly what the scope of the problem was — how much damage was done, what was taken.
Steve Ginsburg: Yeah, especially because it’s well known in security circles that you often don’t discover a breach for a long time. It’s often months; weeks if you’re lucky, or hours if you’re really on it. And I think the move toward systems that have automated monitoring around this really is the strongest thing one can do in this ephemeral world. But that still needs a combination [of approaches].
I think, in order to know what you want to have automated alerts and even maybe automated remediation on, you probably need visualization. So you need instrumentation first, data collection, visualization, manual interaction, and then move up the food chain towards the best automated action that includes all this awareness of: what are the roles, what is the purpose, what is the criticality?
Simon Gibson: Yeah. So when you’re thinking about buying these things, we kind of get into the mindsets: what do you want to know? There are sort of the two: cloud and on-prem, which seem to be much more front and center today. Typically it was on-prem; people were sort of hesitant to use these tools for cloud.
Steve Ginsburg: Yeah, those who have heard me speak at any length know that I certainly started — and I think a good amount of my peers that came from operations, especially -- started with a certain amount of, ‘Do I really want to send my key data into a cloud service?’ That becomes another potential concern there. But more and more, when I talk to my peers now, it really is ‘that train’s left the station,’ or whatever analogy you want to use. Increasingly, to your point, as you said, enterprises now are hybrid, multi-cloud, so a good amount of the important data is going to be in the cloud almost regardless of strategy now for enterprises.
There are probably a few holdouts, and certainly, as we talked about, certain types of industry. If I were doing security at a government arms contractor, nothing would be in the cloud, for example. Some companies still have a reason to have a policy that the data’s not up there. But most companies have already moved very important enterprise data into the cloud. And I think the logging goes with that: you’re still going to have to work with that vendor and make sure they can be secure with your log data, and your SIEM, if you’re running it in the cloud. But taking that kind of bet is not considered provocative anymore.
Simon Gibson: I think the trade-off that always comes up for me is that you get a better set of analytics... you sort of end up using a bit of crowd-sourced intelligence when it’s all shared; you do get this best-of-breed. You get other companies pen-testing cloud infrastructure in a way you might not: they might be able to afford a pen test that you couldn’t on your budget, and so you benefit from anything those other pen testers found.
The concern I would see with having this kind of data in a cloud is that if you are breached, that means that there’s another company that also knows you were breached. Do you want that CEO calling your CEO to tell them? And what’s the process when they discover you’re breached? And how sure are you that they are going to be confidential and take good duty of care around that information?
Steve Ginsburg: Sure. It’s a good point, and that should be contractually spelled out in a purchase; that’s a good thing to get into the documents. I think we talked about [it] before: I looked at it as, what is the leveraged security model? And you hinted at that in terms of the pen testing, which is: if you have a great operations team, it’s actually very easy to say, ‘Well, a lot of vendors in the world might not be able to do as well on security as my operations team.’ I think that’s true, and something I continue to encourage my peers to consider. For small companies, you should still really be concerned about that, and even medium-sized companies, because, to your earlier point, most companies don’t want to prioritize security. That’s not where they see themselves making money, in most cases. There are some places where that wouldn’t be true, but generally it holds.
And so when you look at — we’ve given the example of Salesforce — they have enough money and leverage to keep data secure that after a certain point, I could no longer feel it made sense for me to say, ‘Well, I know that my security team can do better than they can.’ They’ve got a multi-billion dollar business around keeping that data secure. They’ve got teams to do it. It doesn’t mean they’ll never fall down, but it certainly means they’ve got as good a chance as anyone at remediating quickly. That being said, there are great failures, so one still needs to be careful.
Simon Gibson: Yeah. I mean, I think about security in the dimensions of confidentiality, integrity, and availability, and I really wonder what would happen if Salesforce were down for 72 hours, or there were a breach in confidentiality that was uncovered many months later. I just don’t know what would happen; it would be a huge thing. To your point, there are a lot of resources added there, but if we’ve learned anything in infosec, it’s that if it can happen, it probably will.
Steve Ginsburg: Right. I think it’s safe to manage for ‘everything will fail’; it’s just a question of when, and what you will do when it does. And the question in that case is: would you feel better if you kept those eggs in your own basket?
Simon Gibson: Absolutely. It’s a question of how you manage the risks you have. I would get asked this question, and it took me a long time to realize — I used to be a little pissed off at it — but I would have boards ask me, ‘Are we secure?’ And I’m the guy in charge of security, so for me to say ‘No’ sort of negates all the work I’m doing. You see, it’s this terrible fool’s errand of a question.
And really, you’re being asked the wrong question. Because the answer is: ‘No, we’re not secure.’ And if you really expect that we’re going to get to fully secure digital systems, you are thinking about this completely wrong. What you need to know is: What are our risks? How risky are they? What are we doing about them? And have we looked at all of them? Those are the right questions to ask, not: ‘Are we secure?’
Steve Ginsburg: I think the SIEM provides a valuable role in being able to give continued nuance to your answer there. You can at least say, ‘These are where we have seen events; this is where we have not seen events; this is how we’d summarize them,’ with all that analytics around the data about what kinds of events are showing up on our single pane of glass every day.
Simon Gibson: Well, yeah, definitely. And in the deployment of these... part of that is understanding that SIEMs will not work as well as they’re designed to if they have an incomplete picture. If the SIEM doesn’t understand all the things it needs for context, you’re not going to get very good efficacy. You’re going to get false positives and false negatives.
Going back to the example I gave about the password, where a human being called... that’s the problem, right? The human being is the state machine: the alarm about the failed login was raised to a human, and rather than having an AI or some other sort of smart algorithm decide to make the phone call to your desk, we gave that to a human, who then decided that it was really you who was locked out. And that use of humans as state engines is sort of why there’s such a negative unemployment problem in infosec. There isn’t machine learning; there isn’t infrastructure for this yet. Now, the good news is, a bunch of the companies that we’re looking at are actually working on these kinds of things.
Steve Ginsburg: Yeah. And even in that example, there’s a parallel for at least part of that, which is: when you log in to Facebook from a new machine, if it has another way to contact you (an out-of-band way, similar to here), it’ll say, ‘Hey, was that you?’ and let you either verify it or not. So part of that use case is getting covered these days.
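That ‘was that you?’ flow is essentially the password story with the human state machine replaced by software. A hypothetical sketch, where send_push() stands in for whatever out-of-band channel you actually have:

```python
# Sketch of an automated "was that you?" flow: a login from an unseen
# device triggers an out-of-band confirmation instead of a phone call.
known_devices = {"steve": {"laptop-fingerprint-a1b2"}}

def send_push(user, message):
    print(f"[push to {user}] {message}")  # stand-in for a real channel
    return True  # assume the user confirms, for the sketch

def on_login(user, device_fingerprint):
    if device_fingerprint in known_devices.get(user, set()):
        return "ok"
    if send_push(user, "New device sign-in. Was that you?"):
        known_devices.setdefault(user, set()).add(device_fingerprint)
        return "verified"
    return "escalate-to-security"

print(on_login("steve", "phone-fingerprint-9z8y"))
```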
But yeah, I think, to your point here, the central point is the AI and intent-based part of it, and this is true not only for the security component, but even the infrastructure component. I think that’s one of the most exciting areas that’s changing right now as companies are getting more and more machine learning around different use cases, and to your point also, sharing what data they’re seeing across companies and then starting to write some real intelligence to be able to react and be able to remediate if necessary.
Simon Gibson: Yeah. So I think when you’re considering buying, those are the kinds of things to be concerned with: the amount of data, the level of effort to instrument it, how much data you’re going to send... Can you get the data if you have remote offices with slow connections and a whole bunch of data that you need to ship back from proxies or from different routers or switches or SNMP polls? Are you going to be able to get it all? Is it going to be confusing if you get an attack and somebody decides to egress out of Jakarta, maybe, and you don’t have telemetry there? How effective is your system then? Those are the kinds of things to weigh as you’re thinking about these products.
And then I think the next category is: where do these all fit? You brought up legal earlier, which is this business of: ‘I’ve been breached; now not only does this other company know about it, but their engineers know about it, and potentially their salespeople know about it.’ How do we make sure, to your point, that that’s written into the contracts, and is legal okay with it?
Steve Ginsburg: Yeah. And the security teams are very often married in an ongoing effort with the legal department of the company. So there’s the contractual part of it too, and from the legal part, ‘What response does my enterprise want us to have? What kind of things do they want to be notified [of] right away?’
In some cases, of course, there are even state laws about what kinds of breaches you need to notify the public about, and then how are you going to go about that? And then there’s the security team, when they’re going to take action: if there’s a concern (you gave an example about data being exfiltrated) and an employee is involved, then an action is going to have to take place; or even if it’s a third party that’s identified, when do you bring in the FBI, and how do you do that? So I think there are lots of legal touchpoints and interfaces.
Simon Gibson: Yeah, and I have done this on multiple occasions in different companies at different scales, but having a break-glass plan ready to go is nice. I’ve fortunately, knock on wood, not had to really access it or activate it, but having corporate communications involved; having outside counsel involved; having a strike team of people who can come in if you need it involved; understanding who the points of contact are and keeping that list fresh, because people come and go all the time. The minute you put a plan together, someone’s going to change departments and now that point of contact isn’t relevant anymore.
Steve Ginsburg: Yep, and tying it back to the SIEM and the data visualization and reporting: you want those interfaces to be clear. You want to be able to go to legal with a very clear answer about what you think is happening, not: ‘Hey, we think...’ There might be times when all you can do is say, ‘Hey, we think this is sort of happening.’ But you want to be able to move quickly to: ‘This is exactly what happened.’
Simon Gibson: And I think that a lot of this — and we are seeing this get more mature — but in the mind of the infosec person, generally speaking, the level of breach and sophistication are things that they get concerned with. I’ve seen a fairly complex attack on a fairly sensitive system from what seems to be a fairly competent adversary; this looks like it’s really bad. And legal might come along and say, ‘Well, is it near our financial systems?’ And you might say, ‘No, it’s actually on a machine that we host documents for stuff on.’ And the legal team might go, ‘Well, tell me when it gets to the financial systems.’ But those are two very different sets of concerns.
Legal has a very different kind of role — their job and their incentivization and their resources are totally different from those of infosec, and keeping those aligned is important. And I think we sort of talked about where the SIEM fits in with everything. It can actually be the central routing point, the central place where all information is collected: data is collected from machines, threat intelligence is pushed into it, things are compared, actions are taken. And I think there has been a fair bit of that. The single pane of glass — which a lot of security operations teams want, and a lot of vendors want to sell, for obvious reasons — is really about getting everybody on the same sheet of music. Whether or not there’s a mandate or an edict from on high that you shall do this, having that actually does solve a lot of those problems.
Steve Ginsburg: Yeah. And I think it also touches on the idea that a lot of attacks might be multi-headed. And that a SIEM can provide a wider sense of what’s happening across the entire battlefield, not just in one place.
Simon Gibson: Yeah. And not only multi-headed in terms of the technology; look at attacks that are multi-faceted and multi-dimensional in terms of propaganda, just setting people back from not knowing truth from untruth. That’s an ‘active measures’ kind of attack, which is maybe very untraditional, at least in this part of the world. But again, it’s not something that somebody was looking for, or even, honestly, something a SIEM might have picked out... yet.
I do think the interesting part about a lot of this — and this is true — is that I used to always get asked, or get asked a lot, anyway, about how proactive we’re being. And the problem is, you can be proactive; you can manage your risk; you can take a risk-based approach and shore things up, but there is a sort of a level of, I want to say, almost inoculation: when a virus happens and you don’t have any immunity to it, you get sick. And once the thing has happened, you start to build up antibodies.
So with fake news and Twitter bots and a lot of the stuff that we’ve seen, we have built some immunity to that now. I don’t know that it’s perfect, but it took that to happen. So in the world of infosec, just because it’s dynamic and changing, there are going to be things where you can be as careful as you want and you’re still not going to catch everything until it’s happened to you. And now you’ve been sort of inoculated against it.
Steve Ginsburg: Right. Which is a great reason to get involved. Even for folks who are thinking, ‘I’d like to go down this path, but it seems overwhelming or daunting,’ one approach is to just get going with some scope. Learn how the tools work; get used to the practice of looking over some part of your infrastructure; learn how you interact with it, respond to it, instrument it, improve it, evolve it, scale it, grow it. Then, as the challenge gets bigger, you’re in a place where you can add from there. The other approach, of course, is to go through and understand what all the inputs are, as best you can, to start, and pick a solution that’s really scaled for the entire effort.
Simon Gibson: Yeah. I don’t think that’s bad advice either... to pick a specific area [to] look at. For a lot of the work that I’ve done in places I’ve been, it’s the gateways between networks: the proxies, the multi-factor authenticated devices that get you from one set of machines to another environment — whether that’s the DMZ to the Internet, or your corporate environment to your production environment. Those are the places I like to watch because they’re chokepoints; it’s less data.
You can do things like only looking at outbound data from a DMZ, for example. That will tell you lots of stuff. You don’t need to look at inbound and outbound; you’ve just suddenly cut your level of work way, way down. Because if a machine’s infected, it’s going to be freaking out; it’s going to act like an infected machine. I don’t necessarily need to see both directions of communication, because people generally consume more Internet than they produce: most people surf and look at videos and pictures and read, and produce a few emails or a website or some content. Because of that ratio, the amount of traffic you’re looking at is much less if you’re only looking at outbound. So little things like that: picking where your strengths are. It could be a financial system; it could be your treasury system; it could be your fulfillment system, where you build things. Whatever the small universe is, maybe pick that, and start instrumenting it and learning from it.
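Here’s a minimal sketch of that outbound-only filter, with a hypothetical DMZ range and flow records: keep just the traffic that originates inside the DMZ and heads somewhere external, and the volume drops before any analysis starts.

```python
import ipaddress

# Sketch of the "only look at outbound" trick: keep just flows that start
# inside the DMZ and head somewhere external, cutting data volume before
# any analysis happens. The DMZ range and flow records are hypothetical.
DMZ = ipaddress.ip_network("10.10.0.0/24")

def outbound_only(flows):
    for flow in flows:
        src = ipaddress.ip_address(flow["src_ip"])
        dst = ipaddress.ip_address(flow["dst_ip"])
        if src in DMZ and not dst.is_private:
            yield flow  # an infected host "freaking out" shows up here

flows = [
    {"src_ip": "10.10.0.5", "dst_ip": "8.8.8.8", "bytes": 9_000_000},  # kept
    {"src_ip": "8.8.8.8", "dst_ip": "10.10.0.5", "bytes": 1_200},      # inbound, dropped
]
print(list(outbound_only(flows)))
```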
Steve Ginsburg: Any advice for people on data sampling? You said, ‘Hey, go to a place where there’s less data,’ but in each one of these places you might be in a situation where you have to sub-sample; you can’t take all the network traffic, that kind of thing. Any advice on approach there?
Simon Gibson: Only how I’ve been burned by it. Unfortunately with sampling, especially with NetFlow: it is handy and it will help you, but when there’s an incident, you often want as much as you can get, even if it’s one-directional. So I think it’s helpful; it’s definitely helpful.
I mean, look at companies like Corelight, which is basically commercially supported Bro [now Zeek]. This is a tool that looks at network packets, summarizes them into text, and then stores what you want to know about those packets. It’s a tenth of the data, and you keep most of the information. With data sampling — again, the few instances I’ve had have been nice, because it was good to have something rather than nothing — but I’d always wished I’d had more.
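For a rough sense of what that packet-to-text summarization looks like (this illustrates the idea only; it is not Bro/Zeek’s actual record format): collapse raw packets into one line per connection and keep the counts.

```python
from collections import defaultdict

# Illustrative sketch of packet-to-text summarization: collapse raw
# packets into one-line summaries per connection, trading full capture
# for a much smaller record that keeps most investigative value.
def summarize(packets):
    conns = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for src, dst, dst_port, size in packets:
        key = (src, dst, dst_port)
        conns[key]["packets"] += 1
        conns[key]["bytes"] += size
    return [f"{s} -> {d}:{port} pkts={c['packets']} bytes={c['bytes']}"
            for (s, d, port), c in conns.items()]

pkts = [("10.0.0.5", "203.0.113.9", 443, 1500)] * 40
print("\n".join(summarize(pkts)))
```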
Steve Ginsburg: Sure.
Simon Gibson: So I think that’s more or less it. We’ve covered a fair bit about what you need to do when you’re thinking about a SIEM and why you want one. At the end of the day, you want to answer the question: ‘Is this a good event or a bad event? Is this normal, or normal-ish? Is this bad, or bad-ish?’ But the challenges around that... they’re not trivial.
Steve Ginsburg: Yeah. I think with this kind of situational awareness, every enterprise should make sure they have a good story here. And as we talked about, different enterprises will have different requirements ultimately, and most enterprises will be balancing the ‘What can I afford to do now versus the other things I need to get done?’ question. But to your point, when a security event occurs, you’re going to wish you did everything, and so moving responsibly toward that is something pretty much every enterprise should be able to do, and have a good and evolving story.
Simon Gibson: Yep. Agreed. All right, thanks for tuning in. Thanks, Steve.
Steve Ginsburg: Thanks, Simon.