Well-intended cybersecurity measures may have consequences that go unnoticed by the person who deployed them – causing harms to legitimate users that can be especially severe for vulnerable populations. With his human-centred cybersecurity research, Simon Parkin puts the spotlight on these unintended harms.
It seems so obvious. To protect a company from phishing – in which attackers send fraudulent messages to trick employees into revealing sensitive information – the IT department installs software to automatically detect and filter such messages. But this may encourage users to become complacent about the e-mails that do reach them, even though such anti-phishing software is certainly not infallible. ‘Insecure norms, in this case creating a reliance on technical controls, are one of seven categories of unintended harms that we have identified,’ says Simon Parkin, assistant professor in the cybersecurity team of the Faculty of Technology, Policy and Management at TU Delft. ‘Additional costs are another. These arise, for example, when providing more and more training for employees so they can detect phishing attacks. Consequently, those trained will have to dedicate more time to cybersecurity rather than to their paid work. Not to mention that they may also suffer from the unfair expectation of now somehow being able to successfully detect every carefully crafted malicious e-mail.’
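To illustrate the fallibility Parkin refers to, here is a minimal sketch in Python of a hypothetical rule-based phishing filter – the phrases and threshold are illustrative assumptions, not any real product’s logic. A message that avoids the known patterns simply slips through, to a user who may have learned to trust the filter.

# Hypothetical filter rules; real products use far richer signals.
SUSPICIOUS_PHRASES = ("verify your account", "urgent action required", "click here")

def phishing_score(message: str) -> int:
    """Count how many known suspicious phrases appear in the message."""
    text = message.lower()
    return sum(phrase in text for phrase in SUSPICIOUS_PHRASES)

def is_blocked(message: str, threshold: int = 2) -> bool:
    """Block only messages that match enough of the known rules."""
    return phishing_score(message) >= threshold

# A carefully crafted phishing mail avoids every known phrase, so it
# reaches the employee – a false negative the filter cannot prevent.
crafted = "Hi, as agreed, please review the invoice at the usual portal."
print(is_blocked(crafted))  # False: the message passes the filter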
Blunt instrument
Whereas the harm in the previous example was limited to the company and (some of) its employees, a well-intended measure can have much farther-reaching consequences. To counter romance scams, an online dating website may throw up additional barriers affecting all users in a certain region – both legitimate and malicious – because its attacker profile is too general. ‘This is what we characterise as the blunt nature of many cybersecurity controls,’ Parkin explains. ‘With cybersecurity prominently in the news, we may be inclined to deploy additional controls. “The more the better” is a common belief. But often there is no check on whether a control will harm legitimate users. With our work we wish to challenge the assumption that “if I mean well when I configure a cybersecurity control, then it can only achieve good things”.’
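A minimal sketch of such a blunt control (the region name and rule are hypothetical): the check only sees the coarse attacker profile, so legitimate users in the flagged region are treated exactly like the scammers.

# Assumption: a region blocklist derived from an overly general scam profile.
BLOCKED_REGIONS = {"region-x"}

def may_register(user_region: str) -> bool:
    """The control only sees the region, not whether a user is malicious."""
    return user_region not in BLOCKED_REGIONS

# A scammer and a legitimate user from the same region get identical treatment:
for user in ("scammer", "legitimate user"):
    print(user, "allowed:", may_register("region-x"))
# Both print False – the intended effect and the unintended harm are inseparable.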
‘We should not burden non-technical users with a complex set of instructions.’
Vulnerable groups
In a first effort to curb such unintended consequences, Parkin and a group of co-authors with various backgrounds in the computer and social sciences developed a framework, as a kind of exercise to go through. ‘The framework consists of questions,’ Parkin says. ‘They are framed such that they prompt consideration of a range of categories of potentially harmful consequences of a countermeasure – for a platform’s infrastructure as well as for its users and their behaviour.’ The framework explicitly encourages consideration of vulnerable populations. ‘Before deploying a control, one should check what the needs of a particular group are, so that the control fits with how they use information technology. For example, it may be prudent to send out advice on how to secure devices, but we should avoid burdening non-technical users with an overly complex set of instructions that is beyond their ability. Likewise, well-meaning security experts may ask businesses to implement controls that are manageable for well-resourced, larger organisations, but not for small start-ups with only a few employees, so this needs to be considered.’
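As a loose sketch of how such a question-driven review could be encoded – only ‘insecure norms’ and ‘additional costs’ are named in this article, and the question wording and review logic here are illustrative, not taken from the published framework:

# Two of the seven harm categories, with illustrative prompting questions.
HARM_CATEGORIES = {
    "insecure norms": "Could users come to rely on this control and grow complacent?",
    "additional costs": "How much extra time or effort does this control demand of users?",
    # ...the published framework covers seven categories in total.
}

def unanswered_questions(answers: dict) -> list:
    """Return the prompts a team has not yet addressed for a proposed control."""
    return [f"[{category}] {question}"
            for category, question in HARM_CATEGORIES.items()
            if category not in answers]

# A review is only complete once every category has been considered.
print(unanswered_questions({"insecure norms": "Yes - mitigate with periodic drills."}))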
The IT point of view
In follow-up research, Parkin specifically considers the most immediate decision makers who might be able to do something about these sociotechnical harms of cybersecurity controls. ‘If we were to pitch our framework to an IT manager or cyber risk owner, they may think it has merit but not realise how it relates to what they do,’ he says. ‘We therefore considered how sociotechnical harms can be addressed from within risk management processes that are likely more familiar to them. Looking at it from an infrastructure-system view, where might they see evidence of these harms in their system?’
Unburden the user
Whereas some harms from well-meaning controls may be indefensible, other harms can be planned for, making the controls acceptable while keeping the benefits they produce. ‘It is not only our aim to bring unintended harms to light, but also to allow them to be taken into account in common risk management practices,’ Parkin says. ‘Rather than bringing decision making to a complete standstill, we can consciously choose to reduce the risk, transfer it, or accept it.’ With passwords, for example, there has been a recent shift towards reducing the burden on legitimate users by identifying suspicious behaviour through background monitoring of networks – looking for patterns of suspicious attempts to log in. ‘Rather than adding yet another authentication policy and asking the user to really, really, really prove it is them, this increases security and unburdens the user.’
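A minimal sketch of the kind of background monitoring described here (the window and thresholds are illustrative assumptions): failed login attempts are tracked per account, and a burst of failures from several distinct sources raises a flag – without asking legitimate users for anything extra.

from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)  # assumption: how far back to look
MAX_FAILURES = 5                # assumption: how many failures look suspicious

failed_logins = defaultdict(list)  # account -> [(timestamp, source_ip), ...]

def record_failure(account: str, when: datetime, source_ip: str) -> bool:
    """Log a failed attempt; return True if the account now looks under attack."""
    recent = [(t, ip) for t, ip in failed_logins[account] if when - t <= WINDOW]
    recent.append((when, source_ip))
    failed_logins[account] = recent
    sources = {ip for _, ip in recent}
    # Suspicious pattern: many failures in a short window, from multiple sources.
    return len(recent) >= MAX_FAILURES and len(sources) > 1

now = datetime.now()
for i in range(5):
    flagged = record_failure("alice", now + timedelta(seconds=i), f"10.0.0.{i}")
print(flagged)  # True: investigate in the background rather than burden the user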
Involving all stakeholders
Before joining TU Delft, Parkin was a senior research associate and teaching fellow at University College London (UCL). ‘There, I was very much involved in gathering evidence on sociotechnical aspects of cybersecurity. Here at TU Delft, I can combine the people element with understanding what is going on in these managed systems. For instance, someone may be asked to follow instructions to remove malicious software from their smart device, after which we observe network data to check whether the infection has actually been remediated.’ In his continuing research he looks for ways in which the challenges of unintended harms can be framed such that they foster decision making involving all stakeholders – from cyber risk owners to representatives of the people they will affect. ‘It feels like within Leiden-Delft-Erasmus we’ll be able to address a lot of open challenges in this area by bringing together people specialised in cybercrime, security policy, risk management and governance.’
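A minimal sketch of such a remediation check (the indicator list is hypothetical): after the user has followed the clean-up instructions, the device’s observed network traffic is compared against destinations known to be contacted by the malware.

# Assumption: domains known to be contacted by this particular malware.
MALICIOUS_DOMAINS = {"bad-c2.example", "malware-update.example"}

def still_infected(observed_domains: list) -> bool:
    """True if post-cleanup traffic still reaches known malicious domains."""
    return any(domain in MALICIOUS_DOMAINS for domain in observed_domains)

print(still_infected(["cdn.example", "bad-c2.example"]))  # True: not remediated
print(still_infected(["cdn.example", "news.example"]))    # False: looks clean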
Widely applicable
Over the years there have been plenty of examples of businesses driving customers away while only trying to secure their website – for example, people unable to book event tickets because they could not make sense of the distorted CAPTCHA characters that prove you are a human and not an automated “bot”. Such unwanted consequences become much more problematic when they affect essential online services, such as government websites or financial institutions. So it is fair to say that Parkin’s research is widely applicable. Moreover, with unintended harms leading to users having to change their behaviour or even being lost from the system, you might almost expect every IT department to start hiring a sociologist. ‘That is the other side of IT playing a role in everything we do: it truly is a people issue,’ Parkin says. ‘Although awareness of the impact of bad security solutions on people is growing, there is still a lot to do.’
Text: Merel Engelsman