By David Levinger, Senior Fellow, ISRS
Published in IISP Pulse Magazine
Within the evolving landscape of cyber threats and countermeasures, human behavioural weakness persists as a key challenge for enterprise security. Corporate risk policies increasingly emphasise identifying emerging threats and dealing with them before they become crises. Yet despite recognition of the complex challenges posed by asymmetric, covert and highly networked adversaries, spanning insider, lone, group and state actors, adoption of appropriate assessment and mitigation methodologies continues to lag. These remain largely likelihood-impact based and rarely address the weakest security link: human behaviour.
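The likelihood-impact approach referred to above can be sketched as a simple scoring matrix. The scales, thresholds and threat names below are illustrative assumptions, not drawn from any particular standard; the point is what such a score omits.

```python
# Illustrative sketch of a conventional likelihood-impact risk score.
# Scales, bands and threat names here are hypothetical examples.

def risk_score(likelihood: int, impact: int) -> int:
    """Classic 5x5 risk matrix: score = likelihood x impact (1-5 each)."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be on a 1-5 scale")
    return likelihood * impact

def risk_band(score: int) -> str:
    """Map a 1-25 score to a coarse band."""
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# Hypothetical threat register entries: (likelihood, impact)
threats = {
    "ransomware": (4, 5),
    "spear phishing": (5, 4),
    "insider data theft": (2, 5),
}

for name, (lik, imp) in threats.items():
    s = risk_score(lik, imp)
    print(f"{name}: score={s}, band={risk_band(s)}")
```

Note that a matrix of this kind captures neither who inside the organisation is susceptible nor why: the behavioural dimension discussed in the remainder of this article sits entirely outside it.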
While technical threats continue to evolve, the techniques of persuasion used to exfiltrate information from humans and recruit their cooperation remain variations of those earlier deployed in mentalism, propaganda, high-pressure sales, and boiler-room scams. The constant target is the human brain: these techniques rely on universal traits such as curiosity, distraction, naivety, fear and greed, which exist to varying degrees in all individuals and may be further amplified by organisational and personal drivers.
Organisations first need to examine how passive vulnerabilities, which may enable their people's enrolment as unwitting participants, are being generated within their human resources. This requires assessing how incentives, the mental states intrinsic to the corporate culture, and the situational pressures to which each set of actors or agents is exposed may contribute to hackable traits. These factors will vary across roles and the corporate hierarchy, with leaders equally vulnerable to compromise. A single tweet by a CEO, gamed into an imprudent response by trolls, may be as damaging as a data breach.
Second, the corporate policies, cultural factors and incentives that generate active behavioural vulnerabilities must be recognised. Perceived injustice may seed motives of revenge. Extreme pressure to achieve delivery deadlines may, in turn, open individuals up to phishing lures that offer an alluring fix. The promise of excessive reward may cause inappropriate risk-taking or deliberate breaches of company policy.
Once active and passive vulnerabilities are recognised, there is much that organisations can do to assess them, along with related gaps in capabilities, processes and control structures. Self-awareness training can help alert individuals when they are being targeted and build physical, logical and emotional resistance to those exploits. Ongoing evaluation and interview methods can reveal underlying factors such as exceptionally low or high self-esteem. An active focus on leadership behaviour can lessen fear and resentment during times of change, and help individuals and teams defuse issues before they materialise.
Achieving persistent digital resilience in the face of changing cyber threats requires that companies and individuals develop hyper-awareness of behaviour, self-inoculating with a natural state of amber alert to potential deception and attack by both internal and external actors. As non-deterministic software agents based on machine learning are granted increasingly powerful, autonomous and opaque roles as process controllers and interfaces within organisations, analogous assessments will be needed to ensure that their goal functions and behaviours do not generate biases and vulnerabilities that can be easily gamed.
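The closing point about gameable automated agents can be illustrated with a deliberately simple stand-in. The keyword filter and wordlist below are hypothetical, far cruder than any real machine-learning system, but they show the shape of the problem: an agent with a fixed, opaque decision rule can be probed and sidestepped without ever being "broken".

```python
# Toy sketch: a naive keyword filter stands in for an opaque automated
# gatekeeper. Its fixed decision rule is trivially gamed by obfuscation.
# The blocklist and messages are hypothetical examples.

BLOCKLIST = {"password", "invoice", "urgent"}

def naive_filter(message: str) -> bool:
    """Return True if the message is flagged as suspicious."""
    words = message.lower().split()
    return any(w in BLOCKLIST for w in words)

print(naive_filter("urgent password reset required"))   # flagged
print(naive_filter("urg3nt passw0rd reset required"))   # slips through
```

Assessing such an agent means asking the same questions this article asks of people: what does its goal function actually reward, and how would an adversary exploit the gap between that reward and the organisation's real interest?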