Thursday, March 21, 2013

Boxing the Fundamental Assumptions of Cybersecurity Risk Management

Here's something to wrap your head around (or more literally, put in your head) as you head to NIST on April 3rd to make your contribution to the Critical Infrastructure Cybersecurity framework development process, an effort spawned by the recent Presidential Executive Order.

Many in our community love to talk about risk management as the common-sense, business-oriented antidote to the mandatory, and therefore inflexible and slow-moving, instructions in the NERC CIPs.

You could certainly put me at least half in that camp.  Well, after reading THIS sharp Brookings paper from Ralph Langner and Perry Pederson, that half of me is feeling a little wobbly.

Want to see if you can handle it?  Let's see you go for a round with them.  They begin with a jab -- the DHS definition itself:
The following is a definition of risk-based decision making from Appendix C of the Department of Homeland Security's Risk Lexicon: "Risk-based decision making is defined as the determination of a course of action predicated primarily on the assessment of risk and the expected impact of that course of action on that risk."
And then counter with a flurry of lefts to some assumptions, a series of rights to some more, and finish with a big left to the whole foundation upon which cyber risk management normally rests:
The basic assumption embedded in this and all risk formulae is that unknown future events of an unknown frequency, unknown duration, unknown intensity, from an unknown assailant, with unknown motivations, and unknown consequences are quantifiable. Consequently, if one thinks s/he can measure the risk, the mistaken conclusion is that one can manage the risk.
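For readers who haven't seen one, the "risk formulae" the authors are attacking are usually some variant of risk = likelihood x impact (or DHS's threat x vulnerability x consequence). A minimal sketch of their point about unknowns, with entirely made-up numbers (nothing here comes from the paper):

```python
# Hypothetical illustration: the classic point-estimate risk formula.
# Every number below is invented to show how sensitive the output is
# when the inputs are, per Langner and Pederson, fundamentally unknown.

def point_risk(likelihood, impact):
    """Annualized risk: likelihood (events/year) times impact ($/event)."""
    return likelihood * impact

# Two equally defensible guesses about an unknown attacker...
low  = point_risk(likelihood=0.05, impact=1_000_000)   # $50,000/yr
high = point_risk(likelihood=0.50, impact=10_000_000)  # $5,000,000/yr

# ...yield "measurements" two orders of magnitude apart.
print(high / low)  # 100.0
```

The formula itself is trivial; the argument is that the inputs feeding it are not measurable, so the precise-looking output is illusory.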
I'm trying not to be overly swayed by this one article, but it's certainly going to be something I try to keep in main memory while at the workshop.  Hope it helps inform your thinking too.

BTW (late addition): I just realized this post ends on a bit of a down note and I don't want to leave you there.  If you can make it to page 8 you'll find Pederson and Langner pivoting towards their recommended solutions to replace risk management-based decision making.  You'll see these fall into three P's: 1) Politics, 2) Practicality, and 3) Pervasiveness.  I myself haven't made it there yet but intend to before nightfall ... tomorrow.

P.S. have you ever tried boxing? I have a little, and it's a blast, and super hard, and exhausting.  But you know one thing it's easier than?  That's right, you've got it.

Photo credit: Wikimedia Commons


Ben said...

Oy. Please, don't put *any* credence into that Brookings paper. It's the most absurd piece of rubbish I've ever seen. It goes to show that, just because one may be smart enough to conduct forensic analysis on malware, one may still not be qualified to talk about other topics.

All of the assertions made in that paper are either false or grossly antiquated. Their "unknown unknowns" attack is a red herring. We can do a LOT of good risk analyses today on many topics. Also, this "unknown unknown" nonsense stems from a threat- and vulnerability-centric worldview, which ignores what's far more important: the people, processes, and technology being defended ("assets"). As it turns out, there are really only a finite number of things that can happen to your assets (this is why the CIA triad has been, and continues to be, so effective).

To give you a more concrete place to look... FAIR (Factor Analysis of Information Risk) more than adequately accounts for, and defeats, arguments such as the ones they presented. Using it, you factor out the various components of a risk estimation, all based on a well-defined scenario. Your overall risk analysis (e.g., against your organization) will consider multiple scenarios. FAIR can be used very effectively to make decisions within a given scope, and it can be scaled to make broader strategic decisions as well. It turns out that when you factor "risk" into several components, we actually have more than enough hard data to produce reasonable estimations (using ranges, not point values).
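FAIR itself is a full taxonomy and well beyond a blog comment, but the "ranges, not point values" idea Ben describes can be sketched with a crude Monte Carlo simulation. This is an illustration of the general approach, not FAIR's actual factor model, and all the ranges are hypothetical:

```python
import random

def simulate_annual_loss(freq_range, magnitude_range, trials=10_000, seed=42):
    """FAIR-flavored sketch: instead of single point values, draw loss
    event frequency and loss magnitude from estimated ranges, and report
    a distribution of annualized loss. All inputs here are hypothetical."""
    rng = random.Random(seed)
    losses = []
    for _ in range(trials):
        freq = rng.uniform(*freq_range)            # events per year
        magnitude = rng.uniform(*magnitude_range)  # dollars per event
        losses.append(freq * magnitude)
    losses.sort()
    return {
        "p10": losses[int(trials * 0.10)],
        "median": losses[trials // 2],
        "p90": losses[int(trials * 0.90)],
    }

# Made-up scenario: 0.1-2 events/year, $50k-$500k per event.
est = simulate_annual_loss(freq_range=(0.1, 2.0),
                           magnitude_range=(50_000, 500_000))
```

The output is a range with percentiles rather than a single number, which is how this style of analysis answers the "you can't quantify unknowns" objection: you estimate honestly with ranges and report uncertainty instead of false precision.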

This is just one methodology. There are several others that work reasonably well, and also address the arguments made in the Brookings paper.

More importantly, though, beyond all their nonsense, is this key point:

Prescriptive regulations mean businesses and business leaders abdicate their responsibilities as decision-makers. That not only hinders their ability to come up with innovative solutions; it also creates an enablement culture that undermines accountability (in the business and in the market) and leads to failure. We've been living this for years, and we know that a prescriptive approach is woefully inadequate. Look at FISMA+NIST and how pwnd the public sector has been. Look at PCI and all the major breaches that have still occurred. And so on.

The best approach going forward is to put the responsibility back onto the business, and then to hold the business accountable for negligent and commercially unreasonable practices.

Andy Bochman said...

That's funny, I found it to be a very well written article. Agree or disagree with some of its points, it makes its case methodically and is supported by lots of references to reputable sources. Sounds like the authors must have hit a nerve to elicit "It's the most absurd piece of rubbish I've ever seen." Because it certainly is not.

Ben said...

Well-written is not the same as correct or accurate. It was based on a LOT of faulty assertions and assumptions that have long ago been debunked. For example, the "unknown unknowns" criticism is a total red herring. It implies that because you can't have perfect knowledge, you can't assess risk (or even make a reasonable decision). Nothing, of course, could be further from the truth, and it's absurd to think otherwise. If their statement were true, then nobody would ever make a decision.

An example: You get in your car and drive to the grocery store to get groceries. And then you drive home. There are many inherent risks in undertaking this activity, and there's no way to know all the specifics (lots of "unknown unknowns"). However, we all make decisions like this every single day, and we do so implicitly. Does this somehow invalidate risk management? Absolutely not. Moreover, the "unknown unknowns" argument ends up being untrue, as there are ways to turn the viewpoint around and genericize considerations in order to provide a more-than-adequate picture (e.g., what sorts of things can happen to me? The car could break down, the car could get into an accident, the driver could experience a medical issue, etc.).

Their assertion in the executive summary is also false; nobody has been pushing a risk-management approach via a federal framework, and especially not from feds to private sector. RMF is *not* a risk management framework. It tries to masquerade as one, but it's incredibly faulty.

And so on...

The interesting thing is that I fully agree with their conclusion that the EO framework is stupid and won't make anything better. The feds can't figure out how to adequately secure their own environment, and yet think they can (or should) tell private industry how to do this? Laughable. Moreover, they are once again obsessed with "best practices" and a "prescriptive approach." These are very bad things.

Why Langner and Pederson focused on the risk management angle I don't know, because of all their arguments it's the least egregious bit. It wholly detracts from their main point that the EO won't make things better. Frankly, their arguments against risk management are just idiotic, since all of business is run by risk management principles. It seems they don't understand human decision-making (again, an example of where they should perhaps stick to what they know). The problem is that businesses are either turning a blind eye toward ICT risks, or they're intentionally making bad decisions regarding ICT risks. In either case, a voluntary framework will do nothing to "fix" this problem. We need actual regulations with actual teeth to remind businesses that they're responsible for managing the entirety of operational risk, not just the pieces they historically managed (ignoring the vast majority of risk represented by ICT pervasiveness).

So, sorry Andy, but it is absurd rubbish, and it's not because it "hit a nerve" - it's because it's dangerously ignorant, based on decades-old criticisms and myths that have long ago been debunked, and because their arguments actually go against some of the most effective and promising emerging practices in ICT and business management.