Simon Gibson, Author at Gigaom

GigaOm Radar for Governance Risk and Compliance Solutions
https://gigaom.com/report/gigaom-radar-for-governance-risk-and-compliance-solutions/
Wed, 23 Feb 2022

Risk and risk management should be a driving force within IT departments. However, for most enterprises, risk management is considered a tax levied on technology infrastructure already swimming in oceans of technical debt.

Yet if the recent pandemic showed us anything, it is that managing risk—particularly unforeseen scenarios—is critical both to life safety and to how effectively a business can recover from unexpected impact.

With the effects of the pandemic still in play, but with the economy recovering, we take a look at companies selling software platforms that manage governance, risk, and compliance (GRC). Our view is that in these “unprecedented times,” GRC software should take on new importance and be seen in a new light.

The companies we looked at all specialize in providing software designed to identify and report on risk by tracking and measuring how well a company is doing against a set of criteria and controls. These metrics range from financial audits to IT security measurements and can be scoped to fit the GRC requirements of small, medium, and large companies. In some cases, the software allows auditors to manage multiple audits, including multiple audits across multiple companies.

GRC is the assessment and measurement of risk, including the ability to report on what is controlled and what cannot be, the outcome of that position in terms of compliance, and the overall governance of business processes. Many frameworks exist for measuring the many different types of business processes.

Leading GRC solutions streamline the process of determining which risks are in scope, gathering the status of the controls in place to manage those risks, and reporting on progress.
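
To make that workflow concrete, here is a minimal sketch in Python of the control-register model such platforms automate: scope the controls, gather their status, and report progress. The control IDs and statuses are hypothetical examples, not any vendor's schema.

```python
# Minimal sketch of a GRC control register: determine scope, gather
# control status, and report progress. Control names are illustrative;
# real platforms pull status from audits, ticketing, and telemetry.
from dataclasses import dataclass

@dataclass
class Control:
    control_id: str      # e.g., a citation into a chosen framework
    description: str
    in_scope: bool
    status: str          # "satisfied", "failed", or "unknown"

def compliance_report(controls: list[Control]) -> str:
    scoped = [c for c in controls if c.in_scope]
    satisfied = sum(1 for c in scoped if c.status == "satisfied")
    pct = 100 * satisfied / len(scoped) if scoped else 0.0
    failing = ", ".join(c.control_id for c in scoped if c.status == "failed")
    return (f"{satisfied}/{len(scoped)} in-scope controls satisfied "
            f"({pct:.0f}%); failing: {failing or 'none'}")

register = [
    Control("AC-2", "Account management reviewed quarterly", True, "satisfied"),
    Control("IR-4", "Incident response procedures tested", True, "failed"),
    Control("PE-3", "Physical access control audited", False, "unknown"),
]
print(compliance_report(register))  # 1/2 in-scope controls satisfied (50%); failing: IR-4
```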

This is a particularly interesting time in the field of risk management because of how technology is converging in ways that allow the measurement of risk to become more automated and programmatic.

How to Read this Report

This GigaOm report is one of a series of documents that helps IT organizations assess competing solutions in the context of well-defined features and criteria. For a fuller understanding, consider reviewing the following reports:
Key Criteria report: A detailed market sector analysis that assesses the impact that key product features and criteria have on top-line solution characteristics—such as scalability, performance, and TCO—that drive purchase decisions.
GigaOm Radar report: A forward-looking analysis that plots the relative value and progression of vendor solutions along multiple axes based on strategy and execution. The Radar report includes a breakdown of each vendor’s offering in the sector.
Solution Profile: An in-depth vendor analysis that builds on the framework developed in the Key Criteria and Radar reports to assess a company’s engagement within a technology sector. This analysis includes forward-looking guidance around both strategy and product.

Key Criteria for Evaluating Governance, Risk, and Compliance (GRC) Solutions
https://gigaom.com/report/key-criteria-for-evaluating-governance-risk-and-compliance-grc-solutions/
Tue, 01 Feb 2022

Risk, and risk management, should be a driving force within IT departments. However, for most enterprises, risk management is a tax levied on technology infrastructure already swimming in oceans of technical debt.

Yet if the COVID-19 pandemic showed us anything, it’s that managing risk—particularly unexpected scenarios—is critical to both life safety and how effectively a business can recover from impact.

With the effects of the pandemic still in play, but with the economy recovering, we take a look at companies selling software platforms that manage Governance, Risk, and Compliance (GRC). It’s our opinion that in these “unprecedented times,” GRC software should take on new importance and be seen in a new light.

The companies we looked at all specialize in providing software designed to identify and report on risk by tracking and measuring how well a company is doing against a set of criteria and controls. These range from financial audits to IT security measurements and can be scoped to fit the GRC requirements of small, medium, and large companies. In some cases, the software allows auditors to manage multiple audits, including multiple audits across multiple companies.

How to Read this Report

This GigaOm report is one of a series of documents that helps IT organizations assess competing solutions in the context of well-defined features and criteria. For a fuller understanding, consider reviewing the following reports:

Key Criteria report: A detailed market sector analysis that assesses the impact that key product features and criteria have on top-line solution characteristics—such as scalability, performance, and TCO—that drive purchase decisions.

GigaOm Radar report: A forward-looking analysis that plots the relative value and progression of vendor solutions along multiple axes based on strategy and execution. The Radar report includes a breakdown of each vendor’s offering in the sector.

Solution Profile: An in-depth vendor analysis that builds on the framework developed in the Key Criteria and Radar reports to assess a company’s engagement within a technology sector. This analysis includes forward-looking guidance around both strategy and product.

GigaOm Radar for Phishing Prevention and Detection
https://gigaom.com/report/gigaom-radar-for-phishing-prevention-and-detection/
Fri, 14 Aug 2020

In 2020, global email traffic will top 300 billion messages, according to technology market research firm The Radicati Group. That is a problem, because email remains a leading conduit for malware delivery and phishing exploits. The message: an effective phishing prevention and detection solution must be a critical component of your enterprise security strategy.

The ability to effectively analyze and understand email traffic gives us the opportunity to derive great insight into different threat vectors. These threats include malicious activity, crime, extortion, business email compromise (BEC), and insider threats. That insight can then be used to respond and to effectively secure and protect employees and organizations alike.

Vendors in the market today tackle this challenge in a variety of ways. Some focus purely on inbound email communications, others on internal communications within the same (or otherwise trusted) domains. As in many other areas of information security, however, there is no silver bullet. The best strategy will ultimately be some hybrid of these, building the layered protection often called defense in depth.

In this Radar report, we have considered a broad cross-section of the many phishing prevention and detection solutions and approaches in the market today.

When evaluating these vendors and their solutions, it is important to consider your own business and workflow. Different solutions, or combinations of solutions, will be more or less appropriate depending on the nature of your email traffic and business workflow. It is also important to consider your internal ability to handle the potential complexity of the solutions. Some organizations may prefer to settle on one comprehensive solution, while others may prefer to build a best-of-breed architecture from multiple vendors.

Area 1
https://gigaom.com/report/area-1/
Fri, 20 Mar 2020

We covered Area 1 Security in our 2018 Phishing Landscape. The company has continued its upward trajectory and holds a strong position in its ability to detect, block, and report on sophisticated, targeted attacks.

Over the last year, Area 1 has been working on detecting broader attack types. These include mass-scale scams, in which attackers cast a wide net knowing that many of their attempts will be detected and stopped, but that nevertheless promise enough profit to be worth the effort. This does not mean that Area 1 has done any less work on its special sauce: detecting sophisticated, targeted attacks.

Area 1 has also dedicated time to natural language processing (NLP) and heuristics, doubtless as a result of the $5.3 billion lost to business email compromise (BEC). Area 1’s architecture and approach have centered on understanding the behavior of, and relationships between, senders and recipients, and between users and their destinations. Over the last year, Area 1 has built heuristic algorithms that enable its software to track the state of a conversation and its back-and-forths. This enables it to detect when a compromised account is being used to make an odd request in the middle of a thread that may extend over weeks. Area 1 not only claims to be able to stop these attacks; it has receipts to that effect.

Area 1 looked at the market landscape and realized that some capabilities it did not previously offer were now commodities. For example, while attachment detonation and analysis, and enforcement of Sender Policy Framework (SPF) and DomainKeys Identified Mail (DKIM), might not stop sophisticated attacks, these capabilities are nevertheless expected by enterprises; thus, Area 1 has begun offering them.
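
For a sense of what SPF enforcement involves at its simplest, here is a hedged sketch in Python. It assumes the third-party dnspython package and checks only explicit ip4:/ip6: mechanisms; a full SPF evaluator must implement all of RFC 7208 (include, redirect, macros, a/mx mechanisms).

```python
# Illustrative-only SPF lookup and check; not a complete RFC 7208 evaluator.
import ipaddress
import dns.exception
import dns.resolver

def fetch_spf_record(domain: str):
    """Return the published SPF TXT record for a domain, if any."""
    try:
        answers = dns.resolver.resolve(domain, "TXT")
    except dns.exception.DNSException:
        return None
    for rdata in answers:
        txt = b"".join(rdata.strings).decode("utf-8", "replace")
        if txt.lower().startswith("v=spf1"):
            return txt
    return None

def ip_permitted(spf: str, sender_ip: str) -> bool:
    """Check sender_ip against the record's explicit ip4:/ip6: mechanisms."""
    ip = ipaddress.ip_address(sender_ip)
    for term in spf.split():
        if term.startswith(("ip4:", "ip6:")):
            network = ipaddress.ip_network(term.split(":", 1)[1], strict=False)
            if ip in network:
                return True
    return False

spf = fetch_spf_record("example.com")
if spf:
    print(spf, ip_permitted(spf, "192.0.2.10"))
```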

Area 1 listened to its customers’ pain, understanding that security operations centers (SOCs) are overwhelmed. It launched enterprise-grade, cloud-native deployments, along with search capabilities to track down the source and scope of an attack. It also released its Autonomous Phish SOC and automated mailbox abuse mitigation, which takes a first pass at a company’s abuse@ inbox to relieve SOC personnel of that burden.

Area 1 was busy in 2019 and was the first company we saw to put its money where its mouth is, with its now-famous Pay-Per-Phish model.

Key Criteria for Evaluating Phishing Protection Platforms
https://gigaom.com/report/key-criteria-for-evaluating-phishing-protection-platforms/
Mon, 24 Feb 2020

Phishing is the primary method for breaching businesses. According to the 2018 Verizon Data Breach Investigations Report, 96% of all attacks begin with phishing, so stopping them before they start has a huge return on investment (ROI) for security programs. From saving security analysts’ time to avoiding productivity lost to infected machines, everything is improved by stopping phishing before it can happen. The average pretexting or business email compromise (BEC) attack costs companies around $130,000 per instance, which, for most companies, is more than the cost of installing phishing protection.

Your enterprise is unique. Your employees’ varied skills, your appetite for risk, and your customers make up a unique environment. However, the threats faced, and the mechanisms available in response, are the same. How your enterprise incorporates existing capabilities into your threat model can mean the difference between a reactive program and a proactive, sustainable one. Understanding the privacy concerns and capabilities of phishing prevention vendors is the goal of this report.

This Key Criteria report will help C(x)Os and security practitioners evaluate phishing prevention solutions that reside between email servers and the internet, and scan either email headers, attachments, the body, or some combination of them.
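
As a concrete illustration of that scanning surface, here is a minimal sketch in Python using only the standard library. The Reply-To mismatch rule and the extension blocklist are hypothetical examples for illustration, not any vendor's actual detection logic.

```python
# Toy gateway-style inspection of one message's headers and attachments.
import email
from email import policy

RISKY_EXTENSIONS = {".exe", ".js", ".vbs", ".scr", ".iso"}  # illustrative

def inspect_message(raw_bytes: bytes) -> list[str]:
    """Return findings for one RFC 5322 message."""
    msg = email.message_from_bytes(raw_bytes, policy=policy.default)
    findings = []

    # Header check: a From/Reply-To mismatch is a common BEC indicator.
    from_addr = msg.get("From", "")
    reply_to = msg.get("Reply-To", "")
    if reply_to and reply_to not in from_addr:
        findings.append(f"Reply-To ({reply_to}) differs from From ({from_addr})")

    # Attachment check: flag risky file types before delivery.
    for part in msg.iter_attachments():
        name = (part.get_filename() or "").lower()
        if any(name.endswith(ext) for ext in RISKY_EXTENSIONS):
            findings.append(f"risky attachment type: {name}")

    return findings
```

A real gateway layers many such checks with sender reputation, machine-learned classifiers, and attachment detonation, but the proxy-and-inspect shape is the same.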

Key findings:

  • Stopping phishing attacks before they are delivered provides economy of scale by reducing security teams’ workloads.
  • Email security gateways provide enterprises with a method to proxy inbound email communication, detect and remove phishing, as well as adequately address privacy concerns.
  • The vast majority of prevention solutions operate between the internet and the email service. Taken in context with the “kill chain,” which says the earlier an attack can be stopped, the less likely it is to succeed, stopping an attack after reconnaissance and weaponization, and before delivery, is the goal of phishing prevention platforms.
  • While other players in the space focus on endpoint detection and prevention, the companies we talked with focus primarily on removing the phishing attack before it hits the inbox.

Zimperium
https://gigaom.com/report/zimperium/
Fri, 14 Feb 2020

We covered Zimperium in our December 2018 Landscape Report, and the company continues to stand out in its approach and philosophy. Zimperium was early to realize that the growth of mobile gave attackers opportunities to exploit it, not only because of sheer device numbers but because of the complexity mobile introduces. In the majority of companies, BYOD is encouraged, and even where corporate devices are issued and MDM is deployed, users can conduct personal communications on mobile. These are not limited to email.

As is the case across the cybersecurity spectrum, attackers target the weakest link. With phishing, attackers target users, and with the explosion of mobile, attackers increasingly are turning to attacks that leverage mobile devices, including messaging apps like Signal and WhatsApp, and SMS. Attackers understand this, and phishing has extended beyond the corporate email inbox.

Mobile devices contain a tremendous wealth of information about their owners and how their owners interact with the connected world. As mobile devices become more pivotal for conducting business, their interactions with connected devices such as desktops and trusted access control mechanisms increase. A mobile device can be infected with malware designed to be delivered and exploited in trusted environments or other systems, such as connected cars.

The approach Zimperium undertook is designed to protect the mobile endpoint from being the point of entry, and that focus is what sets the company apart. Zimperium deploys to mobile endpoints as an app through Google Play or the App Store, through an MDM, or via its SDK.

The product is designed to understand the behavior of three main components: the device, the apps installed on it, and the networks it connects to. These all culminate in the Zimperium phishing prevention app. The company has a large install footprint of 70 million devices and, in some cases, can retrieve anonymized attack signature data. This information, in combination with its threat research team, trains its engine so it can detect unusual behavior indicative of a compromise.

The approach is driven by threat intelligence and behavioral heuristic analysis, and it is intended to ensure that users’ information is kept private. Zimperium does not analyze, nor is it able to view, content included in web pages, emails, attachments, or secure messages. Instead, it observes what resource is being requested and by what, and analyzes DNS requests and SSL certificates. All of the mobile device’s network behavior is observed, and from that Zimperium can determine whether the device is actually compromised.
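
As an illustration of how content-blind analysis can work, the sketch below collects signals from a destination's TLS certificate using Python's standard library. The seven-day threshold is a hypothetical heuristic for illustration, not Zimperium's actual logic.

```python
# Inspect a destination's TLS certificate without touching message content.
import socket
import ssl
from datetime import datetime, timezone

def cert_signals(host: str, port: int = 443) -> dict:
    """Collect privacy-preserving signals from a server's certificate."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()

    not_before = ssl.cert_time_to_seconds(cert["notBefore"])
    age_days = (datetime.now(timezone.utc).timestamp() - not_before) / 86400
    issuer = dict(x[0] for x in cert["issuer"]).get("organizationName", "")

    return {
        "issuer": issuer,
        "age_days": round(age_days, 1),
        # Very young certificates are a common phishing-site indicator.
        "suspiciously_new": age_days < 7,
    }

print(cert_signals("example.com"))
```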

Public Key Infrastructure
https://gigaom.com/report/public-key-infrastructure/
Mon, 21 Oct 2019

The role of the CIO has evolved and encompasses more than managing servers, data centers, and the applications that run on them. The CIO must now come to grips with potentially hundreds of cloud applications and platforms, including some that are not being secured within their organizations.

Most importantly, CIOs must manage the interconnected nature of these many SaaS applications. They must understand the use cases, administration, and ownership of these applications, and be able to make risk-based decisions about what should and should not be allowed to interconnect. At the core of it, applications, identities, and data must be validated so informed decisions can be made upstream. While the focus of the CIO has shifted, the job description is only now being updated. Given this, the CIO needs to be as flexible as possible while leveraging the cloud to control risks such as keys being lost in multi-factor authentication or security event management. This flexibility is not just about the tools they choose; it is about proactively addressing risks to secure treasury, customer data, and supply fulfillment, and, of course, managing audit and compliance requirements.

Public Key Infrastructure (PKI) ensures higher levels of security when deployed within organizations by validating the authenticity of resources and encrypting data as follows:

  • Verifying the authenticity of an endpoint, whether a mobile device, insulin pump, industrial control system, or even a file server. This verification is critical when you consider the importance of something like downloading quarterly financials.
  • Ensuring data has not been tampered with.
  • Controlling who can get access to the data.
  • Guaranteeing the servers are authentic.

Enterprises that want to stay competitive understand that reputation and trust are very difficult assets to earn back once they are lost. By using PKI, they gain an edge that enables them to make security decisions based on sound cryptographic fundamentals. Doing so ensures that upstream decisions involving SaaS, PaaS, identities, applications, and encryption rest on solid ground.

PKI can and should be applied to every digital identity across the enterprise, including devices, apps, and people. Yet all too often it is not, due to the complexity and cost associated with an on-premises, do-it-yourself implementation. Despite their necessity, successful deployments have historically remained out of reach for most organizations, which is concerning: mistakes can put you out of business should you become unable to decrypt critical data. If PKI is not correctly deployed, the foundation of subsequent security decisions will be intrinsically flawed.

Keeping PKI centralized on-premises requires a tremendous amount of resources to run and may not adequately cover every need, such as code signing or public Certificate Authorities (CAs). Failure of any one facet can be catastrophic. For this reason, a cloud-based deployment model allows enterprises to fully secure the environment with simple deployment while reducing maintenance operations, resulting in real Total Cost of Ownership (TCO) savings over time. Table 1 outlines the necessity and risks involved in certificate use cases.

Table 1: Necessities and Risks in deploying certificates

Technical Examples

Fundamentally, PKI is the creation, issuance, management, distribution, usage, storage, and revocation of digital certificates. These certificates authenticate the identities of the various parties in the data transfer process and support the encryption of traffic between endpoints. Take the following three online banking examples:

  1. Identifying phishing sites that pose as authentic websites. Phishing sites try to trick unwitting users into entering personal information. Using PKI, the bank is able to create certificates that cryptographically prove it is who it claims to be, and the user can distinguish the phishing site from an authentic site.
  2. Encrypting sensitive data. Credit card numbers or other information entered on a bank site need to be encrypted so that other devices on the network cannot capture the information. Using PKI, banks are able to encrypt the traffic from their web servers directly to the user’s desktop or mobile device.
  3. Authenticating software to prevent malware. Malware can be inserted into code that users believe to be safe and, when installed, creates vulnerabilities. With PKI deployed, software manufacturers are able to sign their software, allowing their customers to verify the authenticity of the code and confirm it has not been tampered with. (A sketch of this sign-and-verify flow follows.)
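
To make the third example concrete, here is a minimal sketch of the sign-and-verify primitive underneath code signing, using the third-party cryptography package (an assumption; any mature crypto library offers equivalents). Production code signing adds X.509 certificate chains, revocation checking, and trusted timestamping on top of this primitive.

```python
# Toy code-signing flow: sign a release, then verify it before install.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# Vendor side: generate a key pair and sign the software release.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
release = b"contents of installer.bin"
signature = private_key.sign(release, PSS, hashes.SHA256())

# Customer side: verify the download against the vendor's public key.
# verify() raises InvalidSignature if the code has been tampered with.
public_key = private_key.public_key()
public_key.verify(signature, release, PSS, hashes.SHA256())
print("signature valid: release is authentic and untampered")
```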

Deception Technology
https://gigaom.com/report/deception-technology/
Tue, 10 Sep 2019

Deception technology is designed to detect the presence of adversaries on an enterprise network by using decoy systems as bait and lures. It analyzes the relationships between computer systems, data, and user behaviors, providing a method to detect breaches on the enterprise network early and to gather analysis of them. Deception platforms can also be useful in identifying credential exposures and in deflecting and remediating attacks to reduce attack surfaces.

This technology has few false positives. A natural evolution from honeypots, it follows the same theory of strategically placing decoy systems among similar systems—for example, domain controllers (DCs), file servers, simple file transfer protocol (SFTP) servers, or any other likely breach target—and generating an alert on attempted connections to them. While the decoys, which are created as virtual machines (VMs) or virtual IP addresses on VLANs, may appear to belong in the enterprise, they lack legitimate workloads. Their sole purpose is to detect threats—not service users—and therefore, any connection made to them should be deemed suspicious and viewed as a possible attack.
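
The core mechanism is simple enough to sketch. The toy decoy below, in Python, has no legitimate workload, so any connection it receives is alert-worthy. The port choice and alert format are hypothetical; commercial decoys add realistic service emulation and central management.

```python
# Minimal decoy listener: any inbound connection is suspicious by definition.
import datetime
import json
import socket

DECOY_PORT = 445  # pose as an SMB file server (illustrative choice)

def run_decoy(bind_addr: str = "0.0.0.0") -> None:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((bind_addr, DECOY_PORT))
    srv.listen()
    while True:
        conn, (peer_ip, peer_port) = srv.accept()
        alert = {
            "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "severity": "high",
            "message": "connection to decoy with no legitimate workload",
            "source": f"{peer_ip}:{peer_port}",
            "decoy_port": DECOY_PORT,
        }
        print(json.dumps(alert))  # in practice, forward to the SIEM
        conn.close()

if __name__ == "__main__":
    run_decoy()
```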

Deception technology is broadly agentless, which enables its deployment without the additional overhead of endpoint management. The caveat is that the technology utilizes breadcrumbs and lures that need to be distributed on endpoints. These lures are invisible to end users but visible to attackers, and they are used to convince attackers to connect to decoy machines. The breadcrumbs need only be placed on endpoints from time to time, using System Center Configuration Manager (SCCM), Windows Management Instrumentation (WMI), push scripts, or, generally, any endpoint management technology. Deception platforms integrate with security information and event management (SIEM) systems and have APIs. Part of their core functionality is to shorten the time to detection and assist with forensic investigations. A sketch of breadcrumb placement follows.
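
This is a minimal sketch of one breadcrumb type: a saved RDP connection file that points at a decoy host. The file name, account name, and decoy address are hypothetical; real platforms plant many lure types (cached credentials, browser history, mapped drives) and refresh them centrally.

```python
# Drop a lure .rdp file that leads an intruder to a monitored decoy.
from pathlib import Path

DECOY_HOST = "10.0.9.50"  # illustrative decoy address

def plant_rdp_breadcrumb(user_profile: Path) -> Path:
    """Write a lure .rdp file into a user's Documents folder."""
    lure = user_profile / "Documents" / "finance-fileserver.rdp"
    lure.parent.mkdir(parents=True, exist_ok=True)
    lure.write_text(
        f"full address:s:{DECOY_HOST}\n"
        "username:s:svc_backup\n"  # fake account, monitored for any use
    )
    return lure

# Pushed via SCCM, WMI, or any endpoint management tool, e.g.:
# plant_rdp_breadcrumb(Path("C:/Users/alice"))
```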

There are three misconceptions about this technology:

    1. It creates a messy network of decoys that add overhead.
    2. It makes troubleshooting production systems challenging.
    3. Deploying decoy systems that can be compromised may allow attackers to gain a foothold.

These perceptions have been addressed: outbound connections from decoys are blocked, while whitelists enable the decoys to ignore connections from legitimate sources such as scanners or monitoring systems. Obviously, whitelisting should be done with care, and any system allowed to connect should itself be secure.

At first glance, this technology may appear to be a “luxury” item fraught with complexity and reserved for mature security programs. However, thanks to its ease of deployment, low overhead, management simplicity, scalability, and ability to provide operators with insights that have an extremely low number of false positives, it is a technology that almost any enterprise—small, medium, or large—could employ to an enormous advantage.

Key benefits of deception technology:

    • speeds up detection by correlating traffic with threat indicators;
    • provides orchestration and remediation to deliver value quickly;
    • establishes another layer of detection that shortens an attacker’s dwell time;
    • improves awareness by creating a real-time inventory of enterprise networks, systems, and software;
    • includes a communication medium that IT and security teams may otherwise lack.

Bug Bounty and Penetration Testing
https://gigaom.com/report/bug-bounty-and-penetration-testing/
Wed, 04 Sep 2019

Bug bounties and penetration testing (pen-testing) are powerful techniques that uncover flaws in controls, applications, and hardware. They enable enterprises to secure code prior to application launch or after the code is released and they help meet compliance requirements. At face value, hiring an ethical hacker and bypassing an application’s security in order to find and fix any weaknesses sounds straightforward; however, in this process enterprises often encounter complexities, nuances, and certain unintended consequences.

Bug bounties and penetration tests reveal vulnerabilities before they are exploited – minimizing the potential for embarrassment, loss of trust, and the costs associated with those. Failure to identify and disclose data breaches to customers places organizations in legal jeopardy. The reality is that, whether a vulnerability is known or unknown, it is only a matter of time until it is discovered and seized upon. The question to ask then is, “Do you want to know about vulnerabilities before or after your customers find out?”

All companies today must build internal muscle memory to cope with inherent code flaws. This does not simply apply to engineers: a company must fund resources that include legal, communications, executive steering, customer service, and development, all in lockstep, if it is to develop the internal skills necessary to become more secure.

Fortunately, thanks to experienced hackers and hard-fought lessons learned, these disciplines have evolved. This is partly a response to the underground bug market, which revolves around hackers who find and sell exploits, at times for hundreds of thousands of dollars, depending on the severity of the bugs, the reliability with which they trigger, and the platforms they affect. Some security vendors understood this early and based payment on the quality of vulnerabilities, encouraging hackers to work harder to find them.

It cannot be overstated: enterprises wishing to buy these services need a solid foundational understanding of the market and of the subtle but critical differences between bug bounties and pen-testing, responsible disclosure, and the different tools and platforms available. Launching bug bounties and penetration testing means opening your systems and networks up to “hackers,” albeit ethical ones; you are trusting engineers to break controls to get to the crown jewels, and then trusting that they stop when they get there. To quote the Rolling Stones, “Just as every cop is a criminal and all the sinners saints.”

Key Findings:

  • The space for bounties and penetration tests is quite mature, and most of the top vendors offer platforms that make the complicated workflow easier.
  • Executive support for these programs is critical to their success.
  • Responsible disclosure and bounty programs are key to addressing vulnerabilities before they become an internal emergency that could cause brand damage, loss of trust, and/or regulatory fines and negligence charges.
  • Creating a responsible disclosure program can save your enterprise unneeded embarrassment. Regardless of whether or not you choose to launch a bounty program, vulnerabilities in your software or services may be discovered and announced, despite your organization’s intentions.
  • The security of all of your software and services will vary; however, nothing is ever 100% secure. By implementing a bounty program or conducting regular penetration tests, your organization will build internal muscle memory focused on improving security. Over time this will pay big security dividends.

Right Size Security – Episode 5: Bug Bounties
https://gigaom.com/episode/right-size-security-episode-5-bug-bounties/
Fri, 26 Jul 2019

In this episode of Right Size Security, Simon and Steve discuss bug bounties, penetration testing, some of the tools used to conduct them, and why this important field of information security is so nuanced.

Intro

Welcome to Right Size Security, a podcast where we discuss all manner of infosec: from enterprise security to practical security that any business can use all the way down to end user security. Your hosts for Right Size Security are me, Simon Gibson, longtime CISO—and me, Steve Ginsburg, former head of operations and CIO. For this episode of Right Size Security, we’re discussing bug bounties, penetration testing, some of the tools used to conduct them, and why this important field of information security is so nuanced. We’re going to walk through the ins and outs of this topic and explain some of the pitfalls enterprises should keep in mind before embarking on a bug bounty or penetration test—or pen test, as it’s known—and we’ll get into some of those details. Thanks for listening to Right Size Security.

Transcript

Steve Ginsburg: So those who listen to the podcast know we always start with some short topics, and I wanted to start this week first with just a quick follow-up. Last week we talked a little bit about voting machines; I brought up the interest in a standard for secure voting machines. And I questioned a little bit out loud: “Well, there must be some work done in this field.”

And I just wanted to follow up quickly, without going into all the details, but there are organizations, of course, that are engaged in this. Quick Googling brought up the Organization for Security and Cooperation in Europe, which is actually an organization with a very wide mandate, comprising 57 countries, including the U.S. and Canada in addition to European nations. And they definitely look at voting security and election security in general. Also, in the U.S.—a reminder, some folks will know—it’s actually voluntary testing guidelines that are produced, which I thought was interesting.

So the companies do have—and this will lead a little bit into some of our topics today—requirements for disclosure and testing, but still it’s voluntary, and it looks like there are multiple standards, so I just thought that was kind of interesting. And there’s also another international nonprofit called the National Democratic Institute, or NDI, which it did not look like the U.S. was a member [of], according to their website. But they had some very clear standards, which was really what I was thinking about, like requirements for transparency…and [I] was looking more at the overall technical requirements for a good voting system, guidelines in that regard. So just [an] interesting topic, and one that we’ll hopefully see improve in the same way we’d like to see corporate security improve overall; in all locations I’d love to see voting machines continue to get more and more rock-solid as much as they can.

Simon Gibson: Yeah. You’d think that as part of our democratic system, it would be very intertwined with the government. The postal service is very key to our democracy, because if the mail doesn’t run, people can’t vote absentee. And given how integral it is to our country, FIPS or Common Criteria-like standards or NIST standards around voting machines that are mandatory and transparent are probably a good thing.

Steve Ginsburg: Yes. And this, in fact, does play a role in the U.S. Government standards for voting.

Simon Gibson: Nice.

Steve Ginsburg: And then, on the corporate side, there’s an interesting write-up I saw today on an ongoing issue, which is that a security researcher found what looks like a pretty significant problem with Zoom, the videoconferencing system.

Simon Gibson: Hm. I saw that go by, and I saw that Zoom had a problem with their video and the privacy around the camera being turned on and off when you’re in the middle of a Zoom meeting. And I did the thing quite a few researchers do more often than they care to admit; I thought, ‘Well, I have tape over my cameras all the time, and I’m sure it’s a problem, but my cameras are taped, and I’m generally pretty cautious around any kind of webcam.’ And so I did not dive into it much.

Steve Ginsburg: Yeah, so I’ve similarly opted for the sliding plastic door, and there are times when I thought, ‘Well, perhaps I’m being a little overly paranoid,’ but I also thought, ‘No, it’s probably likely that at some point’—you know, I think a couple of concerning things raised to me there [are] it looks like—and I think Zoom is a great company; their product is excellent, and just to be clear, I’m actually a supporter overall.

Simon Gibson: Same.

Steve Ginsburg: However, it sounds like they’re running a website to get the automatic magic feature of being able to join a meeting.

Simon Gibson: Like a little web server running on the local machine?

Steve Ginsburg: Yeah, it runs on the client, right? And it looks like [that] can be exploited, the researcher was able to show. And then there was a little bit of a concern, I’d say, from the path too, that there’s a note in the timeline that at a certain point, [the researcher] was notified that the security engineer was on vacation.

Simon Gibson: …when he reported the vulnerability.

Steve Ginsburg: Yeah. And I think security engineers should absolutely be able to take vacation, but ideally, there should be enough security researchers that something that looks as serious as this turned out to be, that a company can move quickly towards resolving [it] and really shouldn’t take a delay for staff outage. So I think that just goes under our general theme that we’ve been on about, that companies need to figure out how to provide excellent security. Hopefully, with each one of these events, enterprise listeners and people responsible for these programs will continue to have more fuel to improve them.

Simon Gibson: So interestingly, Steve, with Zoom, there isn’t a way that I was able to find, Googling around [for 5-10 minutes], to report a vulnerability. There is no responsible disclosure program; they don’t have a portal or any kind of a policy or framework to let you submit a problem, if you happen to find one. Bug bounty program aside, if you are just a user of the service, I expect, as a paid member or even perhaps inside their licensing portal, you can file a support ticket [and] someone will get back to you. But in terms of the engineer getting a reply [that] their security engineer isn’t in—I honestly am just not in the least bit surprised that that’s their answer. It’s unfortunate; it’s a 25 billion-dollar market cap company, but…

Steve Ginsburg: Yeah. And it really leads into our topic perfectly today. We’re looking at how companies should structure their pen-testing and bug bounty and have a program that’s robust and really improves their overall brand, the brand experience and the product experience, and also really leverages the large security community that’s out there.

Simon Gibson: Yeah. Very topical. So let’s get at it. Let’s get at bug bounties and pen testing and the values and differences between the two, first of all. I think that’s an important one.

Steve Ginsburg: Why don’t I let you do the definition?

Simon Gibson: Sure. I think it’s helpful to understand a little bit—a penetration test, or a bug bounty, they have the same goals. They want to do the same things. They go about them very differently, and the nuances in that are the important things for enterprises to understand. My earliest recollection of bug bounties was partially based in the open-source world, with things like Apache, Linux kernels, and FreeBSD. The first commercial version probably was started at Microsoft, and it is still arguably the best in the world; they needed a method for Microsoft Windows end users to report vulnerabilities, and I think that’s how they got going.

Steve Ginsburg: Yeah, there was a period in the not-so-distant past where security was actually—it could have been something that would sink Microsoft operating systems. For a while, there were just so many Windows security [flaws] that certainly, those of us in the Unix community felt a vast difference. But sadly, I have to say that over time, there were later some exploits in Unix at the core that were discovered that really took away some of the bragging rights that Unix would have; some very significant problems there, too.

Simon Gibson: I mean, I think security aside—just shelving it for a second—I think it was the ubiquitous Windows desktop and the fact that if a bug happened on one, it happened on all. So that means the entire world in effect, the entire enterprise; every system and every company everywhere had that problem, apart from the Unix machines or the SCADA machines or the odd mainframes.

Steve Ginsburg: That’s right. And it makes it the thing to target, for most of the folks who want to target anything, because the opportunity is really there.

Simon Gibson: Yep, good aside. Dan Geer, the CTO at @Stake, made that public statement about the Windows monoculture and was fired promptly for saying that exact thing.

Steve Ginsburg: For representing that they were the big target because…

Simon Gibson: Yeah, absolutely. But that’s another story. For pen testing and bug bounties, really, it’s a way of setting some boundaries and some guidelines about how to report things. Penetration testing probably sprang up more out of the need for working within a big company and an enterprise. If you hire someone to pen test, you hire a company and you bring them in, or you build a group of employees.

Either way, it’s very much the way companies are used to doing business. There’s a master service agreement, there’s a contract, there’s an NDA, there’s legal teeth around it, there’s a scope, there’s terms of service, there’s not-to-exceeds, there’s timelines, there’s a set of deliverables. And all those things [must have value because] big companies don’t do things unless it’s going to add some sort of value; so somebody somewhere has calculated a value.

Steve Ginsburg: Yeah. In our last episode, we talked about SIEMs and situational awareness that a security team can build themselves. And of course, security teams can do their own pen tests—and we should talk about that—internal pen tests, but when you move to want to leverage other organizations to help you out, this is a great way to do it, and I think both provide pretty powerful models.

Simon Gibson: Yeah. I think it is definitely—it’s a question of the duration, the focus, and the level of comfort. And we can definitely talk about those. So the next big, important thing with this topic, pen tests or bug bounties, is ranking and scoring vulnerabilities.

Steve Ginsburg: Yeah. So one of the things you would ask me when we were looking at this is maybe to share a little bit from the CIO’s perspective of: How do you go into this? Why do you go into this? What are concerns that are going to happen? At least one of them is going to be: ‘Well, we’ve got a lot of different security issues potentially.’ If you have a complex product (and I think the examples we gave up front, both in voting and in the corporate world, say, over time), all digital code is probably exploitable. Maybe some things are so simple, they’re not. But generally speaking, if you have any complex organism in the digital side, there’s going to be some way to pry it open at some point, even if overall you have a good experience.

So I think, looking at that landscape and really being able to cull out priority, that’s one of the things that, as I came to understand it myself and when I’ve talked to peers, as an executive sponsor of that or somebody who’s going to be responsible for that program being financed and being undertaken, knowing that there’s going to be a value return for [it]—I don’t really want to just find a million trivial things that we’re not going to fix.

Simon Gibson: Exactly. And putting a rating system or a ranking system on a discovered vulnerability helps you then measure the risk to your business: the risk [of] it being discovered, the reputational risk, the availability risk, the risk of data being exposed, all those kinds of things. So it’s important to understand the ranking system. A common one is called CVSS, the Common Vulnerability Scoring System. It effectively measures the impact of a vulnerability against criteria like availability: Is this vulnerability going to take down the system? It measures the vulnerability’s risk of exposing confidential information, and it measures integrity: Can you trust the data? So those are sort of the three main criteria.
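
To illustrate the three criteria Simon describes, here is a simplified scorer. The impact constants echo CVSS v3 base-impact values, but the scaling is a toy for illustration; use the official CVSS specification for real scoring.

```python
# Toy severity scorer over the three base criteria: C, I, A.
IMPACT = {"none": 0.0, "low": 0.22, "high": 0.56}  # CVSS-style constants

def toy_severity(conf: str, integ: str, avail: str) -> float:
    """Combine confidentiality/integrity/availability ratings into 0-10."""
    iss = 1 - (1 - IMPACT[conf]) * (1 - IMPACT[integ]) * (1 - IMPACT[avail])
    return round(min(10.0, iss * 10.0 + 1.0), 1)  # toy scaling, not CVSS

# Example: a data-exposure bug with no availability impact.
print(toy_severity("high", "low", "none"))  # 7.6
```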

Steve Ginsburg: And you mentioned well before, but maybe worth reminding, is: Those things can be different, depending on the organization and depending on what area of an organization that is. So for example, there’s different types of confidential information, and some might be considered much more of a business risk than others. For example, HR data is an example you’ve given in the past where of course you never want that exposed. But it might not be as business-critical, depending on what it is, as customer data, for example, where…

Simon Gibson: Or exposing C-code, or exposing certain things and inner workings of applications that would then lead to more sophisticated attacks. There are nuances in that as well, but those are the three main things, and there’s a few others. But being able to measure your vulnerability is super important and, again, to your point, if you’re going to do these programs, you’re going to run these programs, you want to understand what the value of actually implementing them is.

And then I think you brought up earlier—what kind of trouble are these going to cause? Are things going to get taken down? Are things going to break?

Steve Ginsburg: Yeah. You know, all the conversations about modern companies involve the rapid rate of change and the increased business responsibility for companies to keep delivering quality product. And so anything that’s going to be [an] interruption of either teams that are developing or teams that are meant to secure or operate the organization, that has to be factored [in].

So on the one hand, there’s very high value in discovering any potential exploit and the trouble it can cause to availability and company reputation; on the other hand, one has to be careful that they’re not just creating busywork or disrupting quality work, even if for a valid reason.

Simon Gibson: Yep. One of the things that I think companies don’t really understand until they start grappling with these is embarrassment. If a vulnerability is found, doesn’t matter if it’s through a pen test or an end user reported it or an engineer found it. If a company realizes they have a critical vulnerability, and they need to patch it and inform customers about this, that—in a world where there’s apps and app updates and just people take rolling security patches all the time, there’s a little bit less of a worry around that, because people are… used to getting security updates, and it just happens and you don’t really need to explain a lot about it.

In a world where there’s routers or there’s data conduits and optics, whatever the thing is, to tell your biggest customers—your 10 million-dollar customers—‘Oh, we have this really risky vulnerability in your core networks, please patch this,’ companies have to be ready to ‘bite down on the rag’ and do that.

Steve Ginsburg: Sure. It’s not a happy discussion at that point. But I think also, folks who are doing vendor management within any organization are going to look at: ‘Are my partners responsible about these things over time? How do they respond to these things?’ So to your point, I think there is a great understanding that security risks happen, but companies that don’t manage them well do get managed out.

Simon Gibson: Yeah. And I think that we had a section about partner supply chain risk, and that goes absolutely—I think companies really have to sit down and think at an executive level: ‘If we patch this vulnerability and tell our customers it’s critical, is there a risk that they are going to leave, stop buying from us, not renew their contracts? Or are they going to look at us and think we’re a good partner? Are we really building credibility with them by coming to them ahead of a vulnerability being disclosed?’—which is another super nuanced part of this as well.

Steve Ginsburg: Yes. And having their operational story very clear. We’d had engagements in the past where there might be a security issue or a reliability issue and then in calls dialing in to the company, it was clear that their folks actually did not have a clear vision of what was happening. In other words, some of the discussion about remediation, or possible steps to mitigate problems were not accurate.

So really understanding what—and you mentioned cloud security as we were talking about this, too—in a cloud world, that becomes potentially more difficult, because a lot of companies are leveraging cloud for vast amounts of their infrastructure. Those that are doing it responsibly will understand the implications of all that in detail, and those that don’t will potentially be in a place where they can’t enforce good security, or good security response.

Simon Gibson: Well, and I think that’s a good place to sort of rip apart responsible disclosure and coordinated disclosure. In the world of telecom and routing and large interconnected systems, for vulnerabilities discovered that could potentially affect the Internet at large, there’s sometimes the notion of a coordinated disclosure: the people who are responsible for maintaining these systems get together, release a patch ahead of it being public, go patch all the things, and then the vulnerability gets disclosed, and everybody runs behind it and patches, but the core stuff is done.

And that’s the real nuance, which is: this vulnerability can be discovered whether or not you have a disclosure program. This will come out if somebody finds it. Or it’s going to be sold and kept quiet, and used on the black market as a zero day, and sold for potentially a lot of money, depending on what kind of vulnerability it is, on what platform and how reliably it triggers.

Steve Ginsburg: Right. And it also brings up for me—the immunity of the herd. And communities can be very helpful in security. And so, just kind of a call there that enterprise teams, people who are running security programs in any way, your security leaders and your IT leaders, they should be talking to other folks at other companies, at other organizations, about what they’re seeing for security, what they’re doing to improve their program.

Simon Gibson: But even if they’re busy guys and they’re not doing as much of that as they should, they should have a really good understanding that if a vulnerability is discovered and it’s brought to their attention, then they now have guilty knowledge that if this is disclosed by someone other than them to their customers, that’s probably worse than it being disclosed by the company that found it, right?

Steve Ginsburg: Absolutely.

Simon Gibson: All things to keep in mind. Again, this is such a nuanced space; I love it for exactly that reason. So we talked about why they’re different, and we talked a bit about cloud, so let’s get into: What are the things you need to do to start doing pen tests repeatedly and reliably, or to open a bug bounty or, at the very least, a responsible disclosure program—which, in the case of our opening topic, Zoom, they didn’t seem to have?

Steve Ginsburg: Right. So one of the things that’s at the start is the executive sponsorship. I alluded to it before, but as we talked about earlier, it’s a very important piece, which is: you’re going to create this program, and there are multiple ways to go about it, in terms of what outside parties you use, how you leverage the outside community and your own teams to do these things. But when you raise issues—we talked about resourcing, we talked about priority—how are you going to make your way through all that?

It’s great when direct contributors can just work that out on their own, but they really need a framework, as we talked about, to make that work. And then if they have conflict, or they’re not sure whether work will be prioritized or what approaches should be taken, they need to be able to escalate that through management leadership. And if there’s not a clear path, you can get gridlock right there.

Simon Gibson: Yeah. I mean that’s for sure. Any well-meaning CISO can put a security@ [alias] at their company, add a little bit of indemnification—which we’ll talk about in a second—and start (sort of) a program. But what happens when there’s a real critical vulnerability in the product, and now you have to bring in the general counsel and the CEO, and they have to make a decision about how they talk about it: what they will tell their customers, what they tell their board? It does need executive sponsorship. And also, if people are going to spend money and hire engineers, or take engineers off other projects to work on this, there needs to be some value.

So somebody needs to work out what the value is in having a vulnerability disclosure program. How much does it add to the QA process? Hiring a pen test isn’t cheap; pen tests can be many hundreds of thousands of dollars, depending on the scope, the time, and who you’re hiring. So what is the actual value proposition? Is it reputational risk? Is it [that] you need to be seen by your shareholders and your board as having done these things? Are you doing M&A, are you buying a company, and do you want to pen test their code and see how they are before you actually sign the deal or give them a term sheet? There’s a million reasons why these things have value, but a company needs the executive leadership to really work that out. I think the CSO and CISO are the people who can do a good job explaining it, if they understand this space well.

Steve Ginsburg: Yeah. I think M&A is a perfect example. There’s certainly lots of cases about companies having been acquired, and then greater security risks [have] been discovered after the fact, which is certainly a pretty big business risk, if you’re the one who has done the acquiring, and the asset that you have doesn’t have the value that you thought, because there are security risks present, for example.

Simon Gibson: Yeah, or loses value right after you bought it because something was disclosed; some vulnerability in some piece of medical equipment was…and you were shorted. So the other thing, again, apart from the reputational aspects and the executive sponsorship for a program, is a legal framework, something you need to understand really clearly before you start wading into pen tests, bug bounties, and disclosure programs.

Steve Ginsburg: Yeah, certainly for disclosure, there are national, and certainly there are state, laws which might be different than your overall commitment. I know in California there are strong disclosure laws, for example. So there might be some real important actions that you’re going to need to take. Your legal team, and then your operational team, as a result, need to be clear what those are.

Simon Gibson: Right. And I think it’s important [to unpack this]—we use this word ‘disclosure’ kind of interchangeably in that sense—you know, in the one sense, there’s the company disclosing that they have had a breach and notifying the people…that varies from state to state; there’s a disclosure policy that needs to be around…what you will disclose to the community at large, what you’re willing to expose about how your company works and what happened, and also a disclosure outbound to the researchers who are in the bug bounty, about what you have in scope, and you have disclosed: these are the rules of the road; if we’re going to do a bug bounty or a pen test, this is the scope around it. So there’s a disclosure piece around that as well. So it kind of goes both ways, and the word ‘disclosure’ is an awfully—it’s a very large word. It encompasses a lot of things.

Steve Ginsburg: Right. It’s easy to just say, “the time when you’re going to say something,” but right, it has some very specific context in this realm.

Simon Gibson: Yeah. It’s definitely—it’s very contextual in how you use it. You know, the legal framework around the DMCA and the Computer Fraud and Abuse [Act] is another important thing, and this goes into executive sponsorship; the executives need to be made aware of this. If I open a bug bounty (take, for example, Simon Widgets: we put up a project and say, ‘Go ahead and hack it’), I’m otherwise protected by the DMCA and Computer Fraud and Abuse [law].

If somebody hacks into my company, I can prosecute them. And that’s why you don’t just see people attacking websites and, ‘Oh, I’ve attacked a website.’ No, you’re going to jail; you have hacked a website. If you hack a website as part of a bug bounty, somebody has indemnified the work that you’ve done. Otherwise, you’re a hacker, and that’s against the law. It’s a really important thing. So when you do decide to indemnify, are you risking bringing in a hacker? And now you can’t sue them?

Steve Ginsburg: Now you’ve given them a legal cover.

Simon Gibson: Exactly.

Steve Ginsburg: Yeah, and just—this may be obvious, but part of the framework to get involved with having a bug bounty in the first place is—those of us who are involved in security know that you’re basically seeing automated ‘door latch’ attacks…

Simon Gibson: That’s a good analogy, sure, yeah.

Steve Ginsburg: Constantly, right.

Simon Gibson: Yeah. Door-rattling.

Steve Ginsburg: People are constantly checking for an open door, and at the heart of it, it's a good example that the bug bounty is really about taking an expert community and saying, 'Okay, I will provide you a lane in, where we will share that.' And when you mention disclosure, one of the problems with not having a bug bounty program in place is: what if you do get a responsible request from a security researcher who's found something?

Security researchers and hackers—there's a wide range of personalities out there, of course. You're going to have folks who are really the bad guys; they're just going to try to get in and do whatever, and their approach to you is going to be whatever that is. But you also have very concerned, responsible security researchers, and some of those are independent folks who view this as a real, legitimate job. And so they're going to want to be compensated, but…

Simon Gibson: Or at least recognized.

Steve Ginsburg: Yeah. That’s right. It can be different, depending on what—but you don’t want to do every one as a one-off situation.

Simon Gibson: Yeah. You’re going to want it fixed. If I have a piece of software, especially if I’m paying for it, I want to notify the company: ‘You know, there’s a problem, I can exploit this. You got to get it fixed.’ So there’s definitely a sense of urgency. But you know, at the end of the day, whether or not you have that program, bugs can drop; people will announce those things. Even if there isn’t a Secure [Technology] Act company or a vulnerability disclosure program or a bug bounty, people can just announce it.

Google has a pretty strict policy with Project Zero, which has done a lot to find bugs in software. They actively research, and they will let companies know they have 90 days to respond and fix the issue; if they don't, Google goes public. I think if the company's working really hard to fix it, they'll give them some leeway, but Tavis Ormandy could show up with a zero day, and everybody had better drop everything, 'cause in 90 days they're going to release the vulnerability details about your product.
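
To make the mechanics of a 90-day policy concrete, here is a minimal sketch of a disclosure clock in Python. It is illustrative only: the grace-period length is an assumption for the example, not Project Zero's actual rule.

```python
from datetime import date, timedelta

# Minimal sketch of a 90-day coordinated-disclosure clock. The 14-day
# grace period is an illustrative assumption, not Google's exact policy.
DISCLOSURE_WINDOW = timedelta(days=90)
GRACE_PERIOD = timedelta(days=14)

def disclosure_date(reported: date, fix_scheduled: bool) -> date:
    """Return the date the vulnerability details would go public."""
    deadline = reported + DISCLOSURE_WINDOW
    if fix_scheduled:
        # A vendor actively working on a fix may get some leeway.
        deadline += GRACE_PERIOD
    return deadline

print(disclosure_date(date(2019, 5, 1), fix_scheduled=False))  # 2019-07-30
```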

Steve Ginsburg: Yeah. And I think a big part of wanting to fix these things is—we've talked about [it] before—a lot of hacks take a long time to be discovered, and not only do you want to know, you want to know soon. If there is an exploit on your website and someone gets into your internal network, or gets into your customer data, you want to find that out—ideally before it's exploited, right? A responsible security researcher finds it and tells you, you remediate it, and your customer data is never threatened, for example. If you don't have a program like this, sometimes people can be living on your systems for months or years, right?

Simon Gibson: Or just shelving that vulnerability for use when they're ready. I mean, that's the whole market for zero days. RAND did a big, long study—basically a book about it, maybe a year ago, maybe a little bit longer—but there's a whole market for zero days. And it's an interesting economic incentivization model that some of the more modern pen testing companies have adopted. In the zero day market model, how good the vulnerability is, how reliably it triggers, and the platform it triggers on—the scarcity, in other words—set the value of the vulnerability.

So for example: a reasonably good iOS bug that can infect an iPhone, that no one knows about, is probably on the order of $500,000. It's got a face value of that, give or take a little, depending on who's buying it and what the nature of it is. But it's a lot of money. So for the researchers who work on finding these: if you find two or three of those a year, you've got a small company, you work from home, and you're doing okay. It's super illegal; you're probably on a whole lot of strange government lists, but it's a market. It's an economy.

What some of the modern pen test companies have done—there's a couple of them; Synack is one—is understand that paying researchers to work on a penetration test by the hour doesn't necessarily incentivize them; instead, they pay the pen tester based on the quality of the vulnerabilities found during the pen test. And what Synack found was they get many more hours out of their researchers. So imagine: even if you're salaried and expected to work 8 or 10 hours a day, if you're incentivized by the vulnerability, you might stay up all night working on this and come back for… you might spend your weekends and evenings just crushing this, because you're finding vulnerabilities. And what that ends up producing is a really high-value return for the customer.

Steve Ginsburg: Yeah. The big way I looked at it was that it essentially taps the wide diversity of who security hackers—white hat and black hat—are. Basically, the way I looked at it was: there will be black hats coming at us regardless, and this is a way to have white hat hackers working for you.

Simon Gibson: Right. And this gets into the product, which is the ability to vet researchers reliably, and the ability to make sure there are some controls around whatever they're doing. Some of the companies we looked at in the pen test and bug bounty report have very novel methods for letting researchers get access to the systems they're testing, so that they can be monitored. Again, it goes to the point of: if I let somebody hack my system, am I really sure they've left it, and that they didn't put anything on the system that I don't know about? Can I be sure of that? That's a difficult question to answer in some environments.

Steve Ginsburg: Right. This doesn’t replace the need for your SIEM and situational awareness from your own direct monitoring at all. But it can certainly enhance my getting a kind of more 360 view, by definition.

Simon Gibson: Yeah. But for sure, opening the door and allowing hackers to come in is not… I think most companies are pretty averse to that. So understanding the costs and benefits is an important analysis to do. The next thing that's really, really important is an internal process for this kind of thing: just the communication between somebody reporting a vulnerability, acknowledging you've received it, and some sort of guideline as to how long you're going to take to respond. Just having something that simple matters—because again, imagine the researcher who finds a vulnerability in a piece of software running on his or her machine, a vulnerability that makes their machine exploitable. Maybe they're paying for the software and want it fixed; maybe they're not. Regardless, they're feeling a little betrayed, and if they understand that this piece of software is being used by tens of millions, or hundreds of millions, of people, there starts to be a little bit of pressure on this researcher.

The very least the company can do is say, 'Thank you for reporting this.' The company will usually ask [for] a way to reproduce it, so it can verify the vulnerability, and then respond back: 'We've taken this in; this is truly a vulnerability. We are going to fix it.' You need a process to do that. I mean, even finding the right programming group in a big company to address the vulnerability can be challenging. I can submit a bug, but who's the development team that owns it?
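
As an illustration of the acknowledgment flow described above, here is a minimal sketch in Python. The state names and fields are assumptions for the example, not a prescribed workflow.

```python
from dataclasses import dataclass
from enum import Enum, auto

class ReportState(Enum):
    RECEIVED = auto()
    ACKNOWLEDGED = auto()  # 'Thank you for reporting this.'
    VERIFIED = auto()      # vendor reproduced and confirmed the bug
    FIX_PLANNED = auto()   # 'This is truly a vulnerability; we will fix it.'
    FIXED = auto()

@dataclass
class VulnReport:
    reporter: str
    summary: str
    repro_steps: str = ""
    state: ReportState = ReportState.RECEIVED

    def acknowledge(self) -> str:
        self.state = ReportState.ACKNOWLEDGED
        return (f"Thank you for reporting this, {self.reporter}. "
                "Can you share steps to reproduce it?")

report = VulnReport("researcher@example.com", "Auth bypass on login endpoint")
print(report.acknowledge())
```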

Steve Ginsburg: Right. And that communication itself can be an interesting point because of the diversity of the community. The security researcher who's reporting a bug might have all sorts of different expectations. We already came up with a few different things they might want or need, and the companies who are responding need to be sensitive to that.

One example again—not to dwell on the Zoom situation—I just thought it was very interesting. There was a security researcher who, overall, I would give high marks, just in my personal opinion, from what I saw of how he reported it and the write-up on it. But one thing that caught my attention: for one of the features he called to their attention, Zoom gave a very polished answer, that they wanted their customers to keep the flexibility to either use the feature or not. It was clearly well thought out, and he called them on it being a PR answer; he essentially said, negatively, 'I don't really want to hear a PR answer in the middle of an ongoing security discussion.' Which is a fair point.

On the other side, that’s one I had to see from both sides, which is: the company is communicating [about] something that, as it turns out, is ultimately going to reach the public. And so perhaps a polished, professional answer is the way to lead in some of these cases. But I think both of those are good points, and striking the right balance is really the way to go. You mentioned that with different PR firms, you might get a different response, too, if you get to the point where you’re going to a fully public discussion on a situation.

Simon Gibson: Yeah. It's a very different type of public relations to manage a crisis than it is to get your latest feature into the Wall Street Journal. It's a different company, or a different discipline.

Steve Ginsburg: [Different] team in the company.

Simon Gibson: Yeah, a different team. And not only do you need a process to manage communication; you need to manage things internally. So we've got a bug; somebody needs to verify it. Now that it's verified, there needs to be a ticket. Is there a [Jira] ticket open? Is there a help desk ticket? Where this can fold in nicely is with companies that already have help desk and ticketing systems. When a customer has an outage or a critical-severity bug, you already have a way to measure and work on those things, and a vulnerability program can run right alongside that. But you still need to build it out. You still need to make sure that when the person on the help desk, or the person on the customer service team, gets the report, they have a script and they know what to say.

And then they have a tool they can put the bug into. And you know, there may not be a [Jira] project for security—there just might not be. So if you don't create one, maybe you have some special tag that you build into these kinds of things: you flag them specially so they can be tracked and remediated, and you have a process to report on them, an SLA, and all those kinds of things.
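
A minimal sketch of that "special tag" idea follows; the tag name and SLA targets are illustrative assumptions, not recommendations.

```python
from datetime import datetime, timedelta

SECURITY_TAG = "security-vuln"    # hypothetical label in your tracker
SLA_DAYS = {1: 7, 2: 30, 3: 90}   # illustrative remediation SLAs by severity

def open_security_ticket(tracker: list, title: str, severity: int) -> dict:
    """File a ticket flagged for security tracking, with an SLA due date."""
    ticket = {
        "title": title,
        "labels": [SECURITY_TAG, f"sev-{severity}"],
        "due": datetime.utcnow() + timedelta(days=SLA_DAYS[severity]),
    }
    tracker.append(ticket)  # stand-in for a real ticketing-system API call
    return ticket

tickets: list = []
print(open_security_ticket(tickets, "SQL injection in search endpoint", 2))
```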

And then I think you brought this up too, in terms of rules of engagement: how much money are you going to spend? Once a bug is reported and you rank its severity, what's the incentivization program based on that severity?
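
As a sketch of what a severity-based incentive table could look like, here is a small example; every dollar figure is an illustrative assumption, not market guidance.

```python
# Illustrative bounty payout ranges by severity; the figures are
# assumptions for the sketch, not a recommendation.
PAYOUTS = {
    "critical": (5_000, 20_000),
    "high":     (1_500, 5_000),
    "medium":   (500, 1_500),
    "low":      (100, 500),
}

def payout_range(severity: str) -> str:
    low, high = PAYOUTS[severity]
    return f"${low:,} to ${high:,}"

print(payout_range("critical"))  # $5,000 to $20,000
```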

Steve Ginsburg: Right. And we mentioned, I think, companies that don't have active incidents coming through—some companies will start a program, or they're already engaged in ad hoc methods of dealing with these security bugs, and then they bring the program in and retrofit as things are moving. But companies that aren't very active definitely need to be practicing these things, too. Because all this communication, and the incident response team—if those muscles aren't flexed… you'll also find out people don't respond the way you might expect, even if it was written in a plan that everyone agreed to six months ago, that type of thing.

Simon Gibson: Yeah, it’s true, and in some cases a vulnerability—a bug bounty—it may yield a lot of low-hanging fruit that gets repaired quickly, and it didn’t really flex those muscles. A pen test, or an ongoing pen test quarterly, or whatever your release cycle is, that kind of helps keep those muscles going, and that’s something you can hire for and that you keep rolling. And again, it’s a good differentiator; I think it’s important that they have the same goals, but they perform different functions. You know, the pen test and the bug bounty, it’s important to think about them very differently.

Steve Ginsburg: I think one thing too that, again, might be obvious, but I’m not sure that we really made clear here: for smaller companies that are getting involved and are considering pen tests and bug bounties, we mentioned leveraging a community to do that. One of the things that can be very dramatic about this is leveraging a much larger scale than you have. So most companies are struggling to keep an appropriate number of security engineers on staff.

We’ve talked about—[it] depends on the organization, but let’s face it, most organizations would rather pay for whatever is obviously driving revenue than the security aspect, which does enhance revenue, and in some cases can drive revenue by great reputation and great answers for security reviews, and things like that. So security teams can drive revenue, but it’s not as obvious as the core product of many companies, many organizations.

Simon Gibson: For sure.

Steve Ginsburg: So as a result, most security teams are not going to have dozens of extra employees, for example. And so the bug bounty and the pen tests can be used in coordinated methods, either together or alternating, ideally ongoing, to really bring a much larger pool of individuals looking at your website or your public…

Simon Gibson: Yeah, whatever you’re making.

Steve Ginsburg: Yeah. Your public footprint, right?

Simon Gibson: Yeah. Another interesting use case is around cloud. Initially, when I looked at different cloud offerings, I spoke to the CISO—or CSO—and one of my concerns was: 'Aren't you a really juicy target? There's so much stuff up there. Isn't everyone coming after you?' 'Well, yes, but we have a bunch of Fortune 100s, and some Fortune 10s—they've all pen-tested us, and they've all found different things, and so now every company who uses us benefits from the work they did.' So that's an interesting way to think about cloud: there can be very specific, focused pen tests. If a large Fortune 10 company wants to use some SaaS service and it's an important public-private cloud relationship of some sort, that service will probably get a pen test. And then you, the company that can't necessarily afford that, will benefit from those things.

Steve Ginsburg: Right. And that’s also a good example of: if you can afford it, you’d like to do it before your next biggest customer or potential customer does it. And finds out that there’s serious problems, that they don’t want to do business [with you].

Simon Gibson: That’s a really interesting one, where I had a team that worked for me. We would routinely test things and find pretty significant problems. I mean, it was a pretty routine thing, and not trivial problems, but real, real serious problems. And it didn’t mean we didn’t want to do business, but what would hurt the vendors we worked with is them taking a long time to fix our problems.

We had a particular vendor where we had millions of dollars embargoed; all the different leaders around the company agreed not to buy any of their stuff because of these vulnerabilities. And it took them many quarters to fix it. Then they finally did, we were able to verify that the fix was in, and we ended up becoming a big customer of theirs. So the thing that will hurt you isn't the pen test; it's the refusal to fix things, or the priorities you give them. Those are the things, as a big company, that will hurt you.

Steve Ginsburg: Yeah. And from the flip side, that’s a great way to show how security can drive revenue. If you do a great job, better than your competitors on that, you’re now in business, and they’re going to come to you and move forward.

Simon Gibson: So let’s get into a couple of the challenges…How you’re messaging things is important. We sort of hit that with the crisis communication plan. I have always really thought it’s important, and implemented these to have a crisis communication plan in the desk. So that once something like this does happen, you have the right lawyers to hire, you have the right firm to hire, you have the right messaging. This terrible thing happened (insert thing here), this is what we’re doing about it; this is how long we expect it to take; here are the people that are engaged on it—have a plan that works with the community, with the rest of the world.

Another thing, in the executive sponsorship part: having everybody agree at the onset on the severity of a vulnerability is very important, along with the priority it should be given when it's presented.

Steve Ginsburg: Absolutely. And with both of these, I think a big part is to consider that security events don't happen in a vacuum, where it's like, 'Okay, this security event is going on.' If there are security events, they're going to happen at the same time other things are happening. They're going to happen when your company has a big trade show, or quarterly reporting, or the executives are at their off-site [meeting], or people are traveling overseas, or any…

Simon Gibson: Product launch.

Steve Ginsburg: Any number of things. Companies are very busy. And the individuals who work, especially at the board level and the exec level, they’re incredibly busy. And so you need to know—ideally, you don’t want to pull those people out of meetings, if you don’t need to, and then if you need to, you need to be ready to do that.

Simon Gibson: Well, and they have to have an agreement up front. Because there's nothing worse than sitting in a room of executives and having three of them agree, 'This is a vulnerability; we should do something,' and two of them say, 'Well, I don't know how important this is; I think we should just keep on doing the other thing.' You really need a clear guideline and a clear matrix: 'This has now crossed the threshold into sev 1, and we need to do everything,' or 'This is sev 3; we'll issue a workaround and get a fix out.' That stuff really needs to be agreed on up front, 'cause you don't want to hit that deadlock.
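
A severity matrix like the one described might be encoded as simply as the sketch below, assuming a CVSS-style 0-10 score; the thresholds and actions are illustrative assumptions.

```python
# Agreed-up-front severity thresholds, keyed on a CVSS-style score.
# Thresholds and actions here are illustrative assumptions.
SEVERITY_MATRIX = [
    (9.0, "sev 1", "drop everything; emergency fix"),
    (7.0, "sev 2", "fix in the current sprint"),
    (4.0, "sev 3", "issue a workaround; fix in the next release"),
    (0.0, "sev 4", "track in the backlog"),
]

def classify(score: float) -> tuple:
    """Map a score to an agreed severity level and response."""
    for threshold, sev, action in SEVERITY_MATRIX:
        if score >= threshold:
            return sev, action
    return "sev 4", "track in the backlog"  # fallback for odd inputs

print(classify(9.8))  # ('sev 1', 'drop everything; emergency fix')
```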

Steve Ginsburg: That’s right. And we talked about in our last episode how having a clear message from your monitoring to have the clear story, and if you’re in an evolving situation, it’s really a combination of having good information coming from the outside, good information coming from the inside, and then having that fit into a clear definition, as you were saying of: what will those things mean if…

Simon Gibson: Right, lacking ambiguity. It's not up to one person to say, 'Well, I think…' Just: you have crossed the threshold, and it's empirical. The other thing, you know, we talked about flexing muscles: one factor that I think is taken for granted is that learning from all these things is important. If you run a vulnerability program, a pen test, or a bug bounty, and every three to six months with new releases you're getting cross-site scripting or a SQL injection, maybe there's some training that could happen that would prevent these kinds of things, you know?

Steve Ginsburg: Yeah. There is a lot of security training that can benefit developers and others in organizations. It’s really about how they’re going about their work over time.

Simon Gibson: Yeah, yeah. And flexing these muscles means looking at the trends and then applying the right security training. We see this one problem recurring: is there a group here that's doing this, or is it across the company? Do we need some libraries, or some way to link things in, that sanitize inputs? Do we need a process deployed for our programmers?
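
As a concrete example of the "sanitize inputs" point: the classic fix for SQL injection is a parameterized query rather than string concatenation. This sketch uses Python's standard-library sqlite3 module.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "alice' OR '1'='1"  # a classic injection attempt

# Vulnerable pattern (don't do this): building SQL by concatenation.
#   query = "SELECT email FROM users WHERE name = '" + user_input + "'"

# Safer pattern: a parameterized query lets the driver handle quoting.
rows = conn.execute(
    "SELECT email FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the injection attempt matches no real user
```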

Steve Ginsburg: Right. And that can sometimes be about seniority of teams, but sometimes it’s not about that at all.

Simon Gibson: The busyness.

Steve Ginsburg: And also, there’s all sorts of specialties. Sometimes coming from webscale companies and dealing with mostly that, I tend to think of software developers of a certain type, but across all enterprises there are software developers who are specialized closer to machine hardware, and closer to any number of things where their sense of the modern web-available security exploits might be very, very different, and yet they might still come across that. A good example would be how web cameras become exploitable in the world. That’s a device that maybe didn’t even consider itself a security device in any way, and yet that can be responsible for some of the biggest storms on the Internet.

Simon Gibson: Yeah. And I think that’s a good way to sort of wrap this up, which is: these are extremely valuable tests, whether it’s physical security—you think you have controls to access to buildings and doors and perimeters—maybe they don’t work the way you think they do. You issue a badge in one place a person has to walk through an area—can they swap out the badge for another color and get access before anybody has a chance to get a look at it?

There’s all sorts of sneaky things that people will do and think outside the box when you believe you have a set of controls that work around controlling access to a database. Is there some hardcoded credential somewhere that somebody is using directory traversal? Is there somebody somewhere looking at something in a way that you’re not? And these tests prove that. These tests show that, and they are super valuable.

Bug bounties can cost very little, or you can spend a ton of money on high-quality vulnerabilities that are submitted. You can hire pen tests that do simple things like scanning, or pay seriously experienced researchers to work hard on a specific application before you release it, to know and find things. There's a ton of value here, and understanding the value and the risks is super important. It's one of those things companies should not avoid doing, but they need to understand the risks. Fortunately, I think this has evolved today such that there are a lot of good partners to work with. And that's what our report's going to cover.

Steve Ginsburg: Yeah, absolutely, and maybe just a final thought from my side: we talked a bit about targeting, and scope matters for those who are thinking about this, maybe if they're newer and want to get engaged. Each one of these can be intelligently scoped to start, and then you can widen it as it makes sense, or launch multiple efforts as it makes sense.

Simon Gibson: And yes, even for very large companies who don't necessarily want to expose everything wrong with them, I think scoping things is very important, for sure. I think that's it. This was a good one. Thanks, Steve.

Steve Ginsburg: Thanks, Simon.

Simon Gibson: Thanks for listening to Right Size Security.
