I’ve had the opportunity to write a few blogs and articles on one of my favourite topics: vulnerability management, and in particular, how we think about risk in this space. What’s acceptable risk? What isn’t?
You can get into the details in the articles published over the last month. As an overview, We Need to Rethink Risk in Vulnerability Management is a good place to start. It links to all of them, but I’ll note them here as well:
- Patch management needs a revolution, part 1: Surveying cybersecurity’s lineage
- Patch management needs a revolution, part 2: The flood of vulnerabilities
- Patch management needs a revolution, part 3: Vulnerability scores and the concept of trust
- Patch management needs a revolution, part 4: Sane patching is safe patching is selective patching
- Patch management needs a revolution, part 5: How open source and transparency can force positive change
Here’s the bottom line. We need to have a serious conversation about the TCO (Total Cost of Ownership) when it comes to software. Installing and testing security fixes isn’t “free.” You might get the fix as part of your purchase or subscription, but if it’s even remotely complicated, you need staff, time, and money to install, test, and deploy these things.
But what’s the end goal? No one buys software to fix security issues. They buy software to grow and expand their business, to be useful. So “zero known vulnerabilities” is a faulty goal because it doesn’t align with your business need (which is probably to make money). You don’t make money installing patches, you spend it.
To make money you need a trusted brand and an intact reputation. Breaches and significant cybersecurity events can damage both of those, not to mention the cost of the cleanup (which is excessive; just ask IBM’s Cost of a Data Breach report). Preventing these is a good goal.
“Zero known vulnerabilities” does not advance that goal. Verizon tells us that exploitation of software vulnerabilities accounts for only a single-digit percentage of breaches. Don’t believe me? Look at the Data Breach Investigations Report that Verizon puts out. I trust their analysis.
So if “zero known vulnerabilities” isn’t a business goal, it isn’t a significant driver of breaches, and it’s prohibitively expensive, then I have to ask: why do we keep demanding it, like it’s some kind of measure of success?
You can fix every single vulnerability and still see data breaches and other significant cybersecurity events trend up, annually. That means we’re focused on the wrong thing.
We have to put corporate dollars to work on the things that actually move the needle here. We need sophisticated tools to minimize phishing, spoofing, and other forms of social engineering (in other words, to solve the “human element”). Even training is a step in the right direction, but I think there are enough tools out there to solve these problems despite humans doing what humans do. And invest in automation to look for misconfigurations that lead to compromise (leaky S3 buckets, anyone?). Catch misconfigurations when they happen and reduce your exposure.
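To make the automation point concrete, here’s a minimal sketch of the kind of check I mean: a function that flags an S3 bucket policy granting read access to the world. It assumes you’ve already exported the bucket policy as a JSON policy document (the standard AWS IAM policy shape); a real scanner would pull policies via the AWS API and cover ACLs and public-access-block settings too.

```python
def is_publicly_readable(policy_doc: dict) -> bool:
    """Return True if any statement in an S3 bucket policy grants
    object-read access to everyone (Principal "*")."""
    for stmt in policy_doc.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        # Public principals appear as "*" or {"AWS": "*"}
        principal = stmt.get("Principal")
        is_public = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*"
        )
        # Action may be a single string or a list
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        reads_objects = any(a in ("s3:GetObject", "s3:*", "*") for a in actions)
        if is_public and reads_objects:
            return True
    return False

# Example: a (hypothetical) policy that makes every object world-readable
leaky_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
    }],
}
```

Run a check like this on every policy change event and you catch the misconfiguration the moment it happens, instead of reading about it in a breach report.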
This is a topic I’m passionate about because I believe we are collectively investing billions per year, globally, in an area we shouldn’t. Which means there are billions not being invested where they could make a difference.
It’s like investing in a bad stock. Someone told us it was good once; we watch it continue to decline, we’re losing money hand over fist, but the sunk cost fallacy has kicked in and we keep buying, convinced that one day it’ll recover, rather than investing somewhere better with a proven track record. Honestly, this is where I think advances in AI will be able to help us as well (I think of Clippy warning us not to click odd links that show up in email).
We’re capable of doing better. We should just do it.