I spoke at the RSA Security Summit in Munich recently, giving a presentation on “Best Practices in Breach Readiness” that my colleague Rashmi Knowles and I have been working on. We drew heavily on our experiences working with customers, but we also took advantage of resources like the recommendations in the recent series of SBIC reports, the OT Alliance report, PwC’s recommendations from their 2014 global surveys, and the “Ten Steps” guidance from the UK government.
Among the best practices we listed, of course, was the education and engagement of the user community in security. But the closing session at the Summit in Munich got me thinking about the challenge of what we are asking of even the most savvy users: a demonstration by Leo Martin, an ex-agent from Germany, of the techniques a skilled investigator uses in reading people. As it happens, I’ve also been reading Christopher Hadnagy’s book Social Engineering: The Art of Human Hacking, which provides a detailed exploration of – and training in, if truth be told – the techniques and skills of social engineering.
At the end of his book, Hadnagy says “Being aware of the tactics attackers use will surely keep you from falling victim to them.” But in fact his book shows the real difficulty that all of us, even when aware of social engineering tactics, have in resisting them. I’m sure that education is vitally important, including the scenario-based training that we use at EMC and that Hadnagy advocates. But there’s nothing sure about resisting social engineering.
So what do we do?
Much as I believe in the importance of education, it’s clear that even the most expert user has to be supported by processes and technology that reduce the opportunities for an adversary to succeed. We need capabilities that enhance our ability to recognize, combat and confront social engineering. There are clearly technologies that help. The more than 10-year history of RSA’s Cybercrime Intelligence Service has shown that collaborative efforts across enterprises can reduce the number of phishing and other social engineering attacks that users have to contend with. Technologies that aid in detecting and removing attackers, like honeypots, clearly help as well. Systems that mitigate the consequences of a social engineering attack are also important: adaptive authentication, which increases the factors an attacker must compromise, and security analytics, which detects anomalous patterns.
Hadnagy’s book, perhaps contrary to his intent, leaves me certain that staying completely secure and protected is impossible. As Bruce Schneier showed in Liars and Outliers, trust is essential to us, and so we will always be vulnerable to social engineering. But in recognizing this, we have the opportunity to use technology and processes that decrease our level of exposure, that help us to recognize an adversary and that mitigate the impact when we make a mistake. After all, we don’t blame one another when our ability to see is less than we need or want: instead, we develop not only corrective lenses, but also microscopes, telescopes, MRI and CAT scans and a host of other technologies. We need to do the same to counteract our human limitations in responding to social engineering.