As head of security at the Bank of England, you might expect John Scott and his family to be the most safety-conscious people in the room. But even the most safety-conscious people make mistakes, Scott said at a New Statesman conference on cybersecurity in financial services yesterday morning. As he explained, just a couple of months ago, he and his wife realized that she had left her house keys in the front door all night.
Rather than revoking her key privileges or punishing her for what could have been a serious security breach, Scott acknowledged that mistakes happen. In fact, he admitted, he had done the same thing just a few weeks earlier.
The anecdote served as the opening analogy for his talk on cybersecurity culture and his role at the Bank of England. His thesis: it's time for companies to align their security policies with the reality of human nature.
Scott argued that cybersecurity policy should be based on "encouraging" people to make the right decisions, and that to do this we need to understand people: what they need and why they make mistakes. As Scott explained, most institutions take a boundary-based approach to cybersecurity, such as three-strikes-and-you're-out policies for clicking on phishing emails, with the result that too many IT departments become known as the "Department of No." But as Scott pointed out, effective cybersecurity policy doesn't work that way.
Instead, it should take a human approach. Scott pointed to what he called "cognitive biases," the shortcuts in how people process information, to explain why people make mistakes: "We do what we did before, as much as possible, because working from first principles every single time is tiring."
These biases trip us up because, of course, not all tasks are the same. They can surface as subconscious thoughts or reactions, such as "I need to act fast," "there's too much (or too little) information," or "what am I supposed to remember?" And these, in turn, lead to errors, oversights, and lapses in judgment.
Scott also explored how humans assess risk as a contributing factor. Just as people are generally more afraid of sharks than of mosquitoes, even though the mosquito is by far the deadlier encounter globally, people routinely underestimate cyber threats. Sharks look and feel scarier. In cyberspace, the bad things that happen when you click on a phishing link may not look or feel intimidating at all. In fact, you may never see the consequences or even realize anything happened, since cyberattacks are often complex enough that companies cannot trace the point of infection.
Scott also noted that we forget there are people deliberately trying to exploit employees, counting on our tendency to make mistakes. It is not a level playing field: seasoned hackers are pitted against people for whom "cybersecurity" is just a corporate buzzword. Despite this, companies still devote far more time and resources to protecting technology and servers than to training and empowering their employees.
So what does the human approach look like? First, Scott talked about building safeguards into processes, such as enforced logouts, to limit the opportunity for human error. Second, companies should review their current procedures. Employees should have checklists to follow, but companies also need to ensure those checklists are repeatable and appropriate for different scenarios.
Here Scott pointed to the hack of Bangladesh Bank's account at the US Federal Reserve: when the hackers' first transaction was recalled, there were errors in the code. The Federal Reserve followed the procedure set out in its checklists, giving the bank an hour to validate a withdrawal request after the initial recall, then releasing the funds once it was confirmed. Unfortunately, the checklist did not account for the fact that it was evening in Bangladesh, an unlikely time for such a withdrawal request. Had that consideration been included, the transaction might have been stopped.
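The kind of contextual check Scott describes can be sketched in code. This is a minimal illustration only, not the Federal Reserve's actual procedure: the function names, the 9-to-5 business window, and the "hold for manual review" flag are all hypothetical assumptions, and a real checklist would involve many more signals.

```python
from datetime import datetime, time
from zoneinfo import ZoneInfo  # Python 3.9+

# Hypothetical local business-hours window for the account holder.
BUSINESS_START = time(9, 0)
BUSINESS_END = time(17, 0)

def within_business_hours(request_utc: datetime, account_tz: str) -> bool:
    """Return True if a request falls inside the account holder's
    local business hours."""
    local = request_utc.astimezone(ZoneInfo(account_tz))
    return BUSINESS_START <= local.time() <= BUSINESS_END

def validate_withdrawal(request_utc: datetime, account_tz: str) -> list[str]:
    """Run a minimal checklist and return any flags raised."""
    flags = []
    if not within_business_hours(request_utc, account_tz):
        flags.append("out-of-hours request: hold for manual review")
    return flags

# A request arriving at 14:30 UTC is 20:30 in Dhaka (UTC+6),
# well outside business hours, so it gets flagged.
req = datetime(2016, 2, 4, 14, 30, tzinfo=ZoneInfo("UTC"))
print(validate_withdrawal(req, "Asia/Dhaka"))
```

The point is not the specific rule but where it lives: encoding "is this a plausible time for this customer?" directly into the checklist means the check runs every time, rather than depending on an individual employee noticing the anomaly.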
Finally, and perhaps most importantly, Scott advocated "treating people like adults." Just as he and his wife made honest mistakes, there is no need to punish employees when they do the same. Scott quoted security author Lance Spitzner: "People are not the weakest link; they are the primary attack vector." Mistakes are part of human nature, and threat actors know it, so by training and empowering employees we reduce the likelihood of serious breaches. After all, do you really think that, after all this, they'll leave the key in the door again?