Quick note from Sam: this is a guest blog by Ian Farquhar. Ian is a senior technical executive in our Australian offices with tremendous exposure to what RSA customers experience, and with a great technical background and history. He is a joy to work with, and I know you’ll like his voice and what he has to say about Layer 8 and above. With that, I’ll let Ian take it from here…
Engineering Security Solutions at Layer 8 and Above
by Ian Farquhar
Figure 1 below is based on a real tool. The name has been changed to protect the guilty, but it’s not the only tool which asks users questions like this:
Tools which “help” by dumbing things down actually make the problem worse:
Finally, here’s the actual dialog box the user is responding to:
Ok, yes, I am being facetious here. But only a little.
(Kudos to the folks at Atomsmasher for their cool dialog box creator webpage which I used to make the above. See http://atom.smasher.org/error/.)
Many years ago, I came across a comment in a support call log which concluded “Fault isolated in Layer 8.” I asked for clarification. “User error,” I was told smugly, by the call log’s author. I also remembered an old acronym from more than a decade before: PICNIC. “Problem In Chair, Not In Computer.”
It’s a truism in computer security that users are bad at analysing information security risk. In this blog, I am going to suggest that many if not most users are making sensible and comprehensive risk judgements, and the fault lies with security architectures which focus solely on technology. Let me argue this by example:
Bob has been told by his boss to write a sensitive company file onto a USB stick. Bob knows that this is frowned upon by IT, and somewhere in the 150+ pages of security policy he vaguely recalls agreeing to when he joined the company, it might even be banned.
Choice 1: Bob can refuse, telling the boss he can’t do it because it “might” be against company policy.
Choice 2: Bob can invest a couple of hours finding the security policy, reading it, understanding it, and then applying it to this situation and saying no to the boss.
Choice 3: Bob recalls that he’s never actually heard of anyone being disciplined for breaking the policy, that IT probably won’t know even if he does it, and that he has a lot to do that afternoon. Bob decides just to do as his boss asks.
With the possible exception of government and military/intelligence operations, few organisations aggressively enforce IT security policies. Furthermore, employment confidentiality laws in many legal jurisdictions make it very difficult to publicize disciplinary actions taken against employees over IT security breaches, so even if they’re happening, few people know.
Looking at the options from Bob’s perspective, his biggest risk is disappointing his boss, who writes his performance review and can probably fire him. Options (1) and (2) both disappoint the boss, so (3) is unquestionably the rational choice from Bob’s point of view.
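To make that explicit, here is Bob’s decision written out as a back-of-the-envelope expected-cost comparison. This is a minimal Python sketch; every number in it is invented purely to show the shape of the reasoning, not measured from anything:

```python
# Bob's three choices as rough expected costs. All figures are
# hypothetical; the point is the structure of the comparison.

P_CAUGHT = 0.01          # Bob has never heard of anyone being disciplined
COST_DISCIPLINE = 50     # pain if IT ever notices and acts
COST_ANGRY_BOSS = 100    # the boss writes his review and can fire him
COST_POLICY_HUNT = 20    # hours spent digging through 150+ pages of policy

choice_1 = COST_ANGRY_BOSS                     # refuse outright
choice_2 = COST_POLICY_HUNT + COST_ANGRY_BOSS  # read the policy, then refuse
choice_3 = P_CAUGHT * COST_DISCIPLINE          # just do what the boss asks

print(f"Choice 1 (refuse):          {choice_1}")  # 100
print(f"Choice 2 (read policy):     {choice_2}")  # 120
print(f"Choice 3 (do as boss asks): {choice_3}")  # 0.5
```

However you tweak the numbers, choice 3 wins unless getting caught is both likely and genuinely painful.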
In a recent editorial, Bruce Schneier looks at this problem from the policy perspective, but arrives at the same conclusion, and suggests that strong policy enforcement is the solution.
As security professionals, we have historically tended to design our controls into technology. We are most comfortable engineering solutions between layers 1 and 7 of the OSI stack, or solutions which work within the confines of the computer, and don’t extend to the occupant of the chair.
The dialog box in Figure 1 is a classic example of that sort of thinking. To a security specialist, it clearly shows that a very suspicious change is being requested. But the average user (1) will perceive it as preventing them from reviewing what they believe to be a tender (ie. “doing their job”), and (2) probably won’t understand what it is asking anyway. Everything seems to work if they hit “yes”, and hey, even if it wipes the PC, a nice man from IT will fix it.
There’s something very wrong with that.
If we extend the model informally, it provides a structure to think about defence in depth which includes human and organisational factors. After all, that’s what the OSI stack was created to achieve: it’s a way of modelling the interaction between components. We’re just informally adding more layers to consider. Let’s call it the OSI and Human Interconnection Model.
Layer 8 is now the “Human Layer”. This is where we engineer solutions and architectures which allow for human factors, psychology and sociology.
For example, we can engineer GUIs which support sensible, informed security choices. A good introduction to this sort of design can be found in Dr. Peter Gutmann’s AUSCERT 2008 paper, “Things that make us stupid: Why security user interfaces lead to insecure user actions.”
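As a concrete illustration, here is a minimal sketch (in Python/Tkinter) of a prompt designed along those lines: it names the consequence rather than the mechanism, gives the buttons descriptive labels instead of “Yes/No”, and makes the safe choice the default. The wording and structure are my own invention for this post, not taken from Gutmann’s paper or from any real product:

```python
import tkinter as tk

def security_prompt(root, consequence, safe_label, risky_label):
    """Two-option security dialog whose default action is the safe one."""
    choice = {"value": "safe"}  # closing the window counts as the safe choice
    win = tk.Toplevel(root)
    win.title("Security decision")
    tk.Label(win, text=consequence, wraplength=360, padx=16, pady=12).pack()

    def pick(value):
        choice["value"] = value
        win.destroy()

    safe = tk.Button(win, text=safe_label, command=lambda: pick("safe"))
    risky = tk.Button(win, text=risky_label, command=lambda: pick("risky"))
    safe.pack(side="left", padx=16, pady=12)
    risky.pack(side="right", padx=16, pady=12)
    safe.focus_set()                              # safe button gets keyboard focus
    win.bind("<Return>", lambda e: pick("safe"))  # Enter takes the safe path
    win.wait_window()
    return choice["value"]

if __name__ == "__main__":
    root = tk.Tk()
    root.withdraw()
    answer = security_prompt(
        root,
        "This document wants to run an embedded program. Documents almost "
        "never need to do this, but malicious files often try to.",
        "Open without running it (recommended)",
        "Run the embedded program",
    )
    print("User chose:", answer)
```

Note that neither button is labelled “OK”: the user is choosing between described outcomes, not dismissing an obstacle on the way to doing their job.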
Phishing is a classic example of the failure to engineer strong Layer 8 solutions. I still regularly receive emails from organisations inviting me to log into portals via links embedded in the email, links which don’t go to the organisation’s main website. Those logins often aren’t even protected by SSL, or use in-house CAs. How long have we known about phishing? Yet the practice continues.
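The frustrating part is that the first-pass defensive logic is not hard. Here is a simplified sketch of the check an email client could apply before letting a user follow an embedded link; the domain example-bank.com is hypothetical, and a real implementation would need much more than this heuristic:

```python
from urllib.parse import urlparse

TRUSTED_DOMAIN = "example-bank.com"  # hypothetical organisation domain

def link_looks_legitimate(url: str, trusted_domain: str = TRUSTED_DOMAIN) -> bool:
    parts = urlparse(url)
    if parts.scheme != "https":  # no TLS: reject immediately
        return False
    host = (parts.hostname or "").lower()
    # Exact match or a genuine subdomain. A plain endswith() test would
    # also accept look-alikes such as "evil-example-bank.com".
    return host == trusted_domain or host.endswith("." + trusted_domain)

for url in [
    "https://portal.example-bank.com/login",   # genuine subdomain: accepted
    "http://portal.example-bank.com/login",    # no TLS: rejected
    "https://example-bank.com.attacker.net/",  # classic phish: rejected
]:
    print(link_looks_legitimate(url), url)
```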
Another example: I am still waiting for someone to explain the rational basis for swiping my credit card through an EFTPOS terminal I have never seen before. The best I’ve heard is “it looks sort-of like an EFTPOS machine”, which isn’t really a good measure of trustworthiness. I will be writing a blog post on trusting hardware in the coming months, as this is an area of growing interest to the security community.
But Layer 8 solutions aren’t all green-fields, either. When properly deployed, Data Loss Prevention (DLP) is a Layer 8 solution. Why? Because DLP is fundamentally about how users manage information. A well-deployed DLP implementation has four very beneficial outcomes:
- DLP gives security operations visibility into how information is really used in the organisation (as opposed to how they think it’s used).
- DLP educates users to make the correct choices, according to the organisation’s information security policy. It creates an environment of continual improvement of security posture.
- DLP allows the identification of users who require further assistance.
- DLP allows the easy deployment of innovative security solutions.
Going back to the example above, let’s assume that Bob’s organisation installed DLP. Bob goes to write the file to the USB stick, but DLP on the endpoint analyses the file and assesses it as risky. DLP policy blocks the write, and displays a notification telling Bob why. Included in the dialog box is a link for further information, which Bob clicks on to discover that the company can provide him with an encrypted USB stick.
Bob goes to his boss and explains that DLP has blocked the write to the USB device. Together they decide that an encrypted USB stick achieves the same outcome in a secure, company-approved manner. This is a win-win, and a classic example of an innovative security solution deployed using DLP. Sure, Bob could have used an encrypted stick before, if he’d known about it and could have been bothered. It’s the DLP implementation which makes the correct choice also the easiest choice.
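To make the shape of that interaction concrete, here is a toy sketch of the endpoint decision in Bob’s story. This is emphatically not how RSA’s DLP (or anyone else’s) is implemented; the classification rules, the policy hook and the help URL are all invented for illustration:

```python
import re

# Toy content-classification rules: a marking keyword and card-like numbers.
SENSITIVE_PATTERNS = [
    re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
    re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b"),
]

HELP_URL = "https://intranet.example.com/security/encrypted-usb"  # hypothetical

def classify(text: str) -> bool:
    """Return True if the content matches any sensitive pattern."""
    return any(p.search(text) for p in SENSITIVE_PATTERNS)

def on_removable_write(filename: str, content: str) -> bool:
    """Policy hook: called before a file is written to removable media."""
    if classify(content):
        # Block, and tell the user why AND what to do instead.
        print(f"Blocked: '{filename}' appears to contain sensitive data.")
        print(f"Company policy requires an encrypted USB stick: {HELP_URL}")
        return False
    return True

# Bob tries to copy the tender document to his USB stick:
on_removable_write("tender.docx", "CONFIDENTIAL - Q3 tender pricing ...")
```

The important line is the one printing the remediation link: blocking without explaining simply teaches users to route around security.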
Layer 9 is the organisation.
Organisations are complex entities, and few people indeed have a handle on just how complex they are. As an organisation grows, its complexity increases far faster than its headcount, because the number of linkages, data flows and relationships multiplies combinatorially.
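A quick back-of-the-envelope calculation shows why. Counting only possible pairwise relationships, n people can form n(n−1)/2 links, so the linkage count grows quadratically while headcount grows linearly; counting possible groups, data flows and system interconnections pushes the growth far higher still:

```python
# Possible pairwise relationships among n people: n * (n - 1) / 2.
# Headcount grows 1000x here; possible linkages grow roughly 1,000,000x.
for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} people -> {n * (n - 1) // 2:>12,} possible pairwise links")
```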
Policy is an organization-level control. When policy works, it is a very powerful tool. When it doesn’t, it is ignored, or worse, becomes a cost.
Many years ago I worked for a major Unix workstation and server vendor. This company had an IT policy which forbade the connection of any Microsoft Windows system to the company’s network without approval from a corporate vice president.
At the same time, the company had a finance policy which encouraged employees to use their personally owned PCs for work purposes. It even provided company-paid anti-virus and firewall software, which the finance policy required to be installed to mitigate the risk of malware. All of these PCs ran Windows.
Policy schizophrenia. So what was the end result? People used their PCs without VP approval, but didn’t bother to install the software provided. The first policy was stupid, and people ignored both it and the second, actually sensible, policy. It was a terrible outcome: the security team burned resources on every malware attack, as the unprotected PCs caused problems around the company.
This is more common than one might think, which is where eGRC (enterprise Governance, Risk and Compliance) solutions like RSA Archer make such a strong difference. They allow the management of organisational complexity, the review of lower-level security controls for both consistency and compliance with organisational goals, and the tracking of remediation workflows which fix any issues observed. If a particular issue is constantly causing compliance problems, what is the root cause? Often it is the policy which needs changing, not the processes which are causing the violations. A capable eGRC suite makes that kind of analysis tractable.
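As a trivial illustration of that root-cause question (and emphatically not of Archer’s actual data model or API), here is a sketch which tallies violations per policy and flags the ones which keep recurring, since a policy everyone keeps violating is usually telling you something about the policy:

```python
from collections import Counter

# Hypothetical violation log: (policy id, user). Both the log entries
# and the review threshold are invented for illustration.
violations = [
    ("USB-WRITE-BLOCK", "bob"), ("USB-WRITE-BLOCK", "carol"),
    ("USB-WRITE-BLOCK", "dave"), ("PASSWORD-ROTATION", "erin"),
    ("USB-WRITE-BLOCK", "frank"),
]

REVIEW_THRESHOLD = 3  # arbitrary: this many hits suggests a policy problem

by_policy = Counter(policy for policy, _user in violations)
for policy, count in by_policy.most_common():
    verdict = "review the policy itself" if count >= REVIEW_THRESHOLD else "ok"
    print(f"{policy}: {count} violation(s) -> {verdict}")
```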
Finally, Layer 10: the legal and external compliance layer. This is where an external organisation – government or non-government – specifies requirements which the organisation must comply with. Often there are penalties for non-compliance, and sometimes those penalties drive controls right down the stack.
As a community of security specialists, we need to start thinking about the human as a part of our architecture, about the company that human works in, and the environment(s) in which that company operates.
The CEO of a company I worked for once said this to the IT Security Team: “putting you guys in charge of an IT deployment is like putting a lawyer in charge of a marketing campaign”. Both we and the legal team said “ouch!” simultaneously. But he was right: in pursuing strong technical solutions, we had ignored both the needs and the potential of solutions which extended up to the business.
This is possible. It’s doable. The idea of the OSI and Human Interconnection Model exists purely to facilitate discussion, but if the end result is a more resilient design, it has achieved its goal. After all, defence in depth doesn’t stop at the keyboard.
The Five Fallacies of Corporate Information Security Policies: (1) that the user has read it, (2) that the user understood it, (3) that the user can remember it, or find it when they need it, (4) that the user can apply it to an arbitrary situation, and (5) that the user cares, or sees security as their job (the last fallacy could be the subject of a blog post all by itself).