Why humans make bad security decisions
Consideration of both the actual and the perceived levels of security is equally important, as are the interpretive frames of what being ‘safe and secure’ means in a given context. The security exchanges of animals are instinctive, but humans respond to the feeling, rather than the reality, of a threat.
Despite more time and money being spent on understanding security and stability than ever before, it remains an elusive and highly subjective notion. There is no state of absolute security out in the ether somewhere, waiting to be achieved, and we should continue to remind ourselves that being 'safe and secure' is completely dependent on the operating context. One need only look at the "acceptable level of violence" deemed permissible during the Northern Ireland Troubles for a stark illustration of this.
In that sense, security amounts to two distinct but interrelated notions: the perceived feeling of security, and the actual level of security. The truism – that you can feel secure even if you’re not, and you can be secure even if you don’t feel as though you are – speaks to this.
As a criminology undergraduate, I examined the discrepancy between the fear of crime and the actual risk of victimisation among different groups within Sheffield’s red-light district. As a scholar of terrorism studies, and later as a CVE practitioner, I found the distinction between feeling and reality crucial when evaluating the impact of counter-extremism interventions and the resonance of alternative narratives amongst target audiences. Where, why, and how these concepts diverge or converge remains central to a sophisticated understanding of what it practically means to be ‘safe and secure’ for different people, in different places, at different times. It is the essential underpinning for the effective design, implementation and evaluation of all human security interventions.
Economic tools can further add to our understanding of what it means to improve security and stability. Not in the sense that security should be monetised, but rather that it can be viewed as a transaction or exchange: whenever you achieve a higher level of security, you are inevitably exchanging it for something else. This might be at the individual, community, municipal, national or transnational level, but to achieve greater security we necessarily exchange time and resources. In some cases we are even willing to exchange far more important things, such as societal cohesion, civil liberties, or our country's global economic and political standing. The question, then, is not necessarily whether X will make you safer or more secure, but whether X is worth the exchange of Y in pursuit of that end.
In the wild, the security exchanges of animals are instinctive: continue to eat this tasty grass, or flee the rustle in the bushes that may herald an approaching predator. Humans, however, are rather poor at making these exchanges, tending to respond to the feeling, and not the reality, of a perceived security threat. We downplay ordinary and common risks, and place greater emphasis on spectacular or rare ones, such as the risk of driving versus that of terrorism. We also perceive threats from unknown sources as more acute than those from familiar ones: the fear of sexual assault by strangers persists, even though the risk of rape in the home is significantly higher. We view personified and branded risks as more hazardous, whilst underestimating risks we can control and overestimating risks we cannot. Our cognitive biases routinely degrade the quality of our security exchanges and result in bad decisions.
Another interesting psychological phenomenon is the availability heuristic, whereby we estimate probability based on how easily we can bring illustrative examples to mind. Our media consumption plays an important role here, as our perception of likelihood is easily skewed by sensationalist reporting of extremely rare risks. This is further compounded by our being more amenable to stories, narratives and anecdotes than we are to data, evidence and facts. Essentially, we prefer a good story every time. When it comes to risk assessment on the fly, we are also far better at calculating with smaller numbers. Gauging a risk of one-in-ten or one-in-sixty is manageable for the most part. But beyond one-in-a-hundred, and especially when asked to weigh up figures like one-in-ten-thousand or one-in-twenty-million, we simply cannot compute what they really mean, and so file the risk in the ‘almost never going to happen’ bracket.
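The scale-insensitivity described above can be made concrete with a short sketch. The probabilities and population figure below are placeholder values chosen purely for illustration, not real incidence data:

```python
# Illustrative only: hypothetical annual probabilities, not real statistics.
RISKS = {
    "common, familiar risk": 1 / 60,           # easy to reason about
    "rare, spectacular risk": 1 / 20_000_000,  # intuitively 'almost never'
}

def expected_events(probability: float, population: int) -> float:
    """Expected number of events per year across a population."""
    return probability * population

population = 60_000_000  # hypothetical population size

for label, p in RISKS.items():
    print(f"{label}: p = {p:.2e}, "
          f"expected events per year = {expected_events(p, population):,.0f}")

# The ratio shows what intuition flattens: one risk is not 'a bit' more
# likely than the other, but hundreds of thousands of times more likely.
ratio = (1 / 60) / (1 / 20_000_000)
print(f"relative likelihood: {ratio:,.0f}x")
```

Intuition tends to treat both entries as "unlikely", yet across the same hypothetical population one produces a million expected events a year and the other a handful; it is exactly this six-orders-of-magnitude gap that the 'almost never going to happen' bracket erases.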
What cognitive biases, fears, prejudices, disinformation, echo chambers, moral panics, and extremist ideologies do is act as reality filters: the feeling and the reality of safety, risk, victimisation and security become incongruent, resulting in either a false sense of security or a heightened sense of insecurity. They create an uninformed, incomplete, or otherwise inadequate framing. Security may have two notions, feeling and reality, but both hinge on our framing of the situation. Our perceptions of security can change, evolve and be refocused by our framing, and it is this lens that is so easily distorted.
When designing security and stability programmes in fragile and conflict-affected settings, these theoretical models allow us to better approach many of the challenges faced, and to assess the real and perceived impact of interventions. We can view framing as the interpretation of reality, with the potential to supersede and alter our feelings of security. These frames are learnt from our family, friends, peer group, community leaders, cultural and religious figures, societal norms, the media, elected officials, and even those we interact with only virtually; frames also develop from our own lived experiences and empirical understanding. Frames eventually fade into the background as interpretations sync with feelings, and world-views are accepted as the 'true' reality. The instinctive becomes the familiar, converging and aligning with our feelings.
Most human security interventions engage a range of stakeholders. These actors often harbour their own agendas and vested interests, and will try to influence these security exchanges, undermining or marginalising particular frames whilst promoting and advocating others. Deeply entrenched frames – such as the worldviews espoused by extremist ideologies – can become difficult to displace, particularly if they have come to align with one's feeling of security. Distinguishing between emotion, interpretation and reality becomes deeply problematic. Here our confirmation bias is key: we willingly accept information that confirms or validates our beliefs, but reject, deny or de-legitimise information that contradicts or disproves our frames. This goes to the heart of why fake news and disinformation have gained so much traction in such a short space of time, and explains why alternative-narrative P-CVE strategic communications programmes are currently so in vogue.
Security is not the static condition we often assume it to be. For practitioners designing human security programmes, consideration of the actual level and the perceived feeling of security is equally important, as are the interpretive frames of what it means to be ‘safe and secure’ in a given context. The closer our feeling of security gets to the reality, the better our security exchanges become. The technical models and academic theories that inform our security framing are not abstract, esoteric notions with limited real-world application. They are an important way to bring feeling and reality closer together, and they form the backbone of any well-conceived human security programme.