
Human Factors in Information Security - Errors & Violations

Human failures are often described as Slips, Lapses, Mistakes and Violations. Slips, Lapses and Mistakes together form the Errors category, while Violations form the second. The difference is intent - violations result from conscious decisions to disregard policies and procedures, whereas errors have no malicious intent. Violations also often involve more than one form of misconduct, whereas errors tend to be isolated.

Don Turnblade has stated that in his experience "well trained staff had a 3.75% unintentional non-compliance rate; they did not realize that installed software compromised data security. About 0.4% of end users were intentionally non-compliant, generally willful persons with strong technical skill or organizational authority who were unaccustomed to complying with computing restrictions."
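To put those rates in context, here is a minimal back-of-the-envelope sketch; the 1,000-person organisation is a hypothetical figure used purely for illustration, not part of Turnblade's data:

```python
# Back-of-the-envelope illustration of the quoted non-compliance rates.
# The organisation size is an assumed figure for illustration only.
STAFF = 1000

unintentional_rate = 0.0375   # 3.75% unintentional non-compliance
intentional_rate = 0.004      # 0.4% intentional non-compliance

print(f"Expected unintentional non-compliers: {STAFF * unintentional_rate:.0f}")  # ~38
print(f"Expected intentional non-compliers:   {STAFF * intentional_rate:.0f}")    # ~4
```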

So what are the different types of error? Dealing with each in turn, we have Slips, Lapses and Mistakes.
  • Slips - actions not carried out as intended, e.g. pressing the wrong key by accident. Slips usually occur at the task execution stage.
  • Lapses - missed actions or omissions, e.g. forgetting to log out, or a step in a configuration process.
  • Mistakes - arise from an incorrect intention that the user believes to be correct, i.e. they are deliberate actions with no malicious intent, e.g. misconfiguration of a firewall. Mistakes usually occur at the planning stage.
So who causes the error or violation, and how do we combat them? Slips and Lapses are usually the fault of the user, but can be mitigated by making the error harder to commit, e.g. with confirmation dialogs for slips (a simple sketch is shown below), and by better training for lapses. Mistakes tend to be the fault of designers and are slightly more difficult to combat, as designer education is required or outside technical expertise needs to be brought in; even then the problem may persist if those designers still lack the necessary skills and knowledge. Finally, violations can often be laid at the door of the managers: it is often the case that a culture of violations is tolerated by senior management, who fail to impose proper sanctions or take the threat seriously.
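As a concrete illustration of the "make the slip harder to commit" idea, here is a minimal sketch of a confirmation step guarding a destructive action; the function and rule names are hypothetical, not taken from any particular product:

```python
# Minimal sketch: a confirmation step that guards a destructive action
# against slips. Function and resource names are hypothetical.

def confirm(prompt: str) -> bool:
    """Require the user to type 'yes' explicitly, so a stray keypress cannot confirm."""
    answer = input(f"{prompt} Type 'yes' to continue: ")
    return answer.strip().lower() == "yes"

def delete_firewall_rule(rule_id: str) -> None:
    # Spelling out the consequence in the prompt also helps the user build a
    # better mental model of what the action will actually do.
    if not confirm(f"This will permanently delete firewall rule '{rule_id}' "
                   f"and may expose services to the internet."):
        print("Aborted - no changes made.")
        return
    print(f"Rule '{rule_id}' deleted.")  # real deletion logic would go here

if __name__ == "__main__":
    delete_firewall_rule("allow-ssh-from-anywhere")
```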
All of these have to be dealt with to have a secure system, and most of it boils down to having proper user education and training in place.

Comments

  1. It's interesting you frame the discussion in Human Factors terms. A big issue in designing interactive systems is the 'mental model' that users have, that is, the internal representation a user has of a system. Mental models give some depth to a user's understanding of how the different parts of a system interrelate and consequently how it will behave given novel inputs or conditions.
    Mental models can be difficult things to establish in a domain as intangible / complex as software, but without them people's understanding (or their ability to predict outcomes) is very brittle - a system appears either to do what it's always done or to be inexplicably different.
    I think the consequence of lacking good mental models in security is that people are unable to make judgments about the risks associated with their actions. Judgments get very binary, with risks being either under- or over-estimated, neither of which is helpful.
    The response of the security functionality in systems often compounds this difficulty, turning decisions into OK/Cancel types of choices with little effort to inform the user of the potential consequences.
    I think if there were one goal for helping manage 'lapses' & 'violations' it should be to help users make informed decisions (informed in the sense of an awareness of the risks) rather than a paradigm based on just controlling and simplifying. Neither dumbing things down nor automating too much in the background to second-guess user intent appears to be a sustainable strategy. If anything, they just make the impact of users less predictable.
