Security Risk Assessments

Risk assessments are one of the main tasks I face day-to-day in my security work. Most of the time it comes in as a request to “approve” some architecture. In my head, this is basically a security risk assessment where I also get to decide, on behalf of the business, that the benefit of allowing the architecture is greater than the associated risk. Usually I can make a quick judgment call and give a “thumbs up,” or help the requester tweak the architecture into an acceptably secure solution, but sometimes I have to write out my reasoning for disapproving something. That way the managers who ultimately decide to move forward on the given architecture have adequate knowledge of the risk.

Recently I started using Microsoft’s DREAD framework for performing written security risk assessments. I liked it because at the end of my assessment I could deliver a number to my management: low numbers are good, and high numbers are bad. For about two months and about six written assessments (not many, I know) I was very happy with the numbers the DREAD framework was giving me. I felt the numbers adequately expressed the risk to the organization.

Then, earlier this month, I did one more for a project that I felt held insignificant risk to the organization, but I did not give it the usual rubber-stamp approval because the project ignored some good practices that I wanted to insist upon (when I say ignored I mean thumbed their noses at me when I told them they should do it). I thought that if they were going to go against my recommendations, we should all have an accurate assessment of what that meant. However, the resulting DREAD score was high, which, as I said, did not accurately represent the true risk of this project. DREAD failed here because its score is the sum of the various risk factors rather than their product (most models I have seen multiply), so one low factor means little in the face of several high ones.
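To see why the summation hides a low factor, here is a minimal sketch in Python. The factor names follow the standard DREAD mnemonic, but the 1–10 scale, the specific scores, and both scoring functions are illustrative assumptions, not any official tooling:

```python
# Illustrative sketch of DREAD-style scoring; each factor is rated 1-10.
# Factors: Damage, Reproducibility, Exploitability, Affected users, Discoverability.

def dread_additive(d, r, e, a, di):
    """Classic DREAD: the score is the sum of the five factors (max 50)."""
    return d + r + e + a + di

def dread_multiplicative(d, r, e, a, di):
    """Alternative: multiply the factors (max 100000), so any near-zero
    factor drags the whole score toward zero."""
    return d * r * e * a * di

# A hypothetical threat that is easy to exploit but affects almost no one (a=1):
scores = dict(d=8, r=9, e=9, a=1, di=8)
print(dread_additive(**scores))        # 35 of 50  -- still looks severe
print(dread_multiplicative(**scores))  # 5184 of 100000 -- about 5% of max
```

With addition, the single mitigating factor (almost no users affected) barely dents the score; with multiplication, it dominates, which better matches the “insignificant risk” judgment above.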

I scratched my head, delivered my DREAD report along with my opinion that the score was higher than the true risk, and moved along to other projects. Today, however, I came across this article by Richard Bejtlich. In it he criticizes FAIR, another security risk assessment framework currently being promoted by its originator(s). What really caught my attention is that he essentially argues that risk assessment is the wrong path for us in the security industry. I have to admit it took me a long time to climb aboard the risk assessment bandwagon myself, but I had never seen someone with this much experience argue against it. As I read, I realized that I never really believed in the risk assessment model, and that I only use it as a means of selling security to my management: an important task, no doubt, but is it perhaps the wrong approach? Surely managers need something like this to reduce risk into dollars and cents for the business?

The problem is that risk assessments are based on guesses. Not just any guesses, though; they are based on wild-ass guesses. We are supposed to say “do X instead of Y for COST dollars against our budget, or else we can expect hackers to come in and cost us GUESS.” But maybe the attackers will never notice us and will hack someone else instead. The reality is that we have no idea what the odds of being breached are, and no idea what a breach will cost us. We have no idea what the motivation of our attacker(s) might be, and no idea what that motivation will spur them to do. And what are the odds of being breached in some manner we did not think of? After all, if we had thought of it, we probably would have closed that door already, right? So what good is a risk assessment on a problem we already know about?
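The COST-versus-GUESS trade is usually formalized as Annualized Loss Expectancy (ALE = single loss expectancy × annual rate of occurrence). A sketch with invented numbers shows how completely the “answer” depends on the guessed inputs, which is exactly the objection above:

```python
def ale(single_loss_expectancy, annual_rate_of_occurrence):
    """Annualized Loss Expectancy: expected yearly loss from one threat."""
    return single_loss_expectancy * annual_rate_of_occurrence

# Hypothetical inputs -- these ARE the wild-ass guesses in question.
sle = 250_000  # guessed cost of one breach, in dollars

# Guessed frequency: once every 20 years vs. once every 2 years.
for aro in (0.05, 0.5):
    print(f"ARO={aro}: ALE=${ale(sle, aro):,.0f}")
# A 10x swing in the guessed frequency gives a 10x swing in the result
# ($12,500 vs. $125,000), and either figure can justify or kill the
# same security control.
```

Nothing in the formula is wrong; the trouble is that both inputs are unknowable, so the output carries false precision.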

We really need to be focusing on best practices and engineering systems to be as paranoid as possible. To encrypt or not to encrypt? You encrypt! To trust or not to trust? If you have to ask, don’t trust! Why can we not build systems like this every time? For an organization that lives and breathes encryption, the incremental cost of encrypting data and connections dwindles toward zero over time. You do encryption the way you drink coffee: routinely, without agonizing over it. Does anyone lament that we no longer use Telnet? There was a fixed cost to move people to SSH, and now the cost of keeping everyone on SSH is negligible. The costs of security should be focused on new threats instead of ignoring them until we can finish securing our browsers from the latest JavaScript abuse (and whatever else plagues us this week). And best of all, those pesky auditors can become your friends, and you can answer questions about your security practices without trying to dance around the truth. Wouldn’t that be nice?
