
Internal cyber attacks - more thoughts

I presented on a panel today at the European Information Security Summit 2015, entitled 'Should you launch an internal cyber attack?' We only had 45 minutes, so I thought I'd share some of my thoughts here, including what I didn't get to say.

Firstly, as we all know, the concept of a network perimeter is outdated and there is a real blurring these days of whether devices should be considered internal or external. It's not just about BYOD: most organisations also provide laptops for their employees. These laptops get connected at home, in airports, in hotels and so on. Any number of things could have happened to them during that time, so when they are reconnected to the corporate network they may already have been compromised. For this reason it should, to a certain extent, be every system for itself on the network, i.e. assume that internal machines are compromised and try to provide reasonable levels of security anyway.

Secondly, the user is the weakest link. It has been said many times that we spend our time (and budget) protecting the first 2,000 miles and forget about the last 2 feet. This is less and less true these days, as security departments are waking up to the fact that user education is critical to the security of information assets. However, the fact remains that users make mistakes and can compromise the best security.

So, should we launch internal cyber attacks against ourselves? Yes, in my opinion - for several reasons.

Internal testing is about audit and improvement. If we launch an internal pentest or phishing attack, we can see how effective our controls, policies and user education really are. The critical point is not to use the results as an excuse to punish or to name and shame - this is not Big Brother looking to punish you. If a user does click on a link in a phishing email, we should see it as our failure to educate them properly. If a user bypasses our controls, then either the controls haven't been explained properly or they are not appropriate (at the very least, there may be a better way).
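As an aside, here is a minimal sketch (mine, not something discussed on the panel) of what 'audit and improvement' can look like in practice: summarise the results of a simulated phishing campaign as aggregate click and report rates per department, rather than as a per-person name-and-shame list. The record fields and the departments are hypothetical assumptions for illustration only.

# Minimal sketch: aggregate simulated-phishing results per department.
# The field names (department, clicked_link, reported_email) are assumptions,
# not taken from any real campaign tooling.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class CampaignResult:
    department: str
    clicked_link: bool     # did the recipient click the simulated phishing link?
    reported_email: bool   # did they report the email to IT security?

def summarise(results):
    """Return per-department click and report rates as percentages."""
    totals = defaultdict(lambda: {"sent": 0, "clicked": 0, "reported": 0})
    for r in results:
        t = totals[r.department]
        t["sent"] += 1
        t["clicked"] += int(r.clicked_link)
        t["reported"] += int(r.reported_email)
    return {
        dept: {
            "click_rate_%": round(100 * t["clicked"] / t["sent"], 1),
            "report_rate_%": round(100 * t["reported"] / t["sent"], 1),
        }
        for dept, t in totals.items()
    }

if __name__ == "__main__":
    # Hypothetical results from one simulated campaign.
    results = [
        CampaignResult("Finance", clicked_link=True, reported_email=False),
        CampaignResult("Finance", clicked_link=False, reported_email=True),
        CampaignResult("HR", clicked_link=False, reported_email=True),
    ]
    print(summarise(results))
    # e.g. {'Finance': {'click_rate_%': 50.0, 'report_rate_%': 50.0},
    #       'HR': {'click_rate_%': 0.0, 'report_rate_%': 100.0}}

The point of aggregating like this is that the numbers tell you where education is working and where it isn't, without turning the exercise into a hunt for individuals.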

An example was discussed on the panel about people emailing a presentation to their home email account to work on it from home. In the example, this was a breach of policy and, if the presentation is classified as confidential or secret, then they shouldn't be doing it. However, rather than punishing the user immediately, try asking why they felt they needed to email it to their home computer. Is it that they don't have a laptop? Or that their laptop isn't capable enough? Or do they think they're doing a good thing by emailing it, so that they don't have to take their corporate laptop out of the office when they know they're going to the pub for a couple of hours and are worried about it getting stolen? There are motivations and context behind people's decisions. We see, and usually focus on, the effects without stopping to ask why they did it. Most people are rational and have reasons for acting as they do. We need to get to the heart of those reasons.

Education is critical to any security system, and as security professionals we need to learn to communicate better. Traditionally (and stereotypically) security people are not good at communicating in a clear, non-technical, jargon-free way. This has to change if we want people to act in a secure way; we have to be able to explain it to them. In my opinion, you have to make the risks and downsides real to the user so that they understand why we're asking them to do, or not do, something. If you just give someone a directive or order that they don't understand, they will be antagonistic and won't follow it when it matters, because they don't see the point and it's a hassle. If they understand the reasoning, they are likely to be more sympathetic. Nothing does this better than demonstrating what could happen. Hence the internal attacks.

The next question we have to ask ourselves is what constitutes the internal part of an internal attack. Is it just our systems, or does it include all those third party systems that touch our data? I could quite happily write a whole blog post on outsourcing to third parties and the risks, so I won't delve into it here.

It also worries me that we seem to be training our users to click on certain types of unsolicited email that could easily be phishing attacks. The example I used was the satisfaction or staff survey that most companies run these days. These often come from external email addresses and contain obscured links. To my mind, we should be telling our users never to click on links like these and to report them to IT security instead. Why shouldn't they ask our advice about an email they're unsure of? We're the experts.
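To make the 'obscured links' point concrete, here is a minimal sketch of the kind of check I have in mind: flag links in an HTML email whose visible text claims one domain while the href points somewhere else, or whose domain doesn't match the sender's. The domains and addresses below are hypothetical, and the heuristics are illustrative rather than a complete phishing detector.

# Minimal sketch: flag "obscured" links in an HTML email body.
# All domains and addresses below are hypothetical examples.
from html.parser import HTMLParser
from urllib.parse import urlparse

def domain_of(s):
    """Extract a lowercase domain from a URL, bare domain or email address."""
    s = s.strip()
    if "@" in s and "://" not in s:
        return s.rsplit("@", 1)[-1].lower().strip(">")
    if "://" not in s:
        s = "http://" + s
    return (urlparse(s).hostname or "").lower()

class LinkCollector(HTMLParser):
    """Collect (href, visible text) pairs for each anchor in the body."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._current = None

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._current = [dict(attrs).get("href", ""), ""]

    def handle_data(self, data):
        if self._current is not None:
            self._current[1] += data

    def handle_endtag(self, tag):
        if tag == "a" and self._current is not None:
            self.links.append(tuple(self._current))
            self._current = None

def suspicious_links(sender, html_body):
    """Yield links whose target doesn't match the visible text or the sender."""
    sender_domain = domain_of(sender)
    collector = LinkCollector()
    collector.feed(html_body)
    for href, text in collector.links:
        href_domain = domain_of(href)
        text_domain = domain_of(text) if "." in text else ""
        if text_domain and text_domain != href_domain:
            yield href, text, "visible text claims a different domain"
        elif href_domain and sender_domain and not href_domain.endswith(sender_domain):
            yield href, text, "link domain doesn't match the sender"

if __name__ == "__main__":
    # Hypothetical 'staff survey' email from an external address.
    body = ('<p>Please complete our survey: '
            '<a href="http://survey.external-vendor.example/x?id=123">'
            'ourcompany.example/survey</a></p>')
    for href, text, reason in suspicious_links("surveys@external-vendor.example", body):
        print(reason + ": '" + text + "' -> " + href)

In reality a check like this would sit alongside, not replace, whatever mail filtering is already in place; the point is simply that these survey emails exhibit exactly the indicators we tell users to be suspicious of.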

One final point was suggested by another speaker, which I think is a good idea. If we educate users about the security of their families, and help them with personal security incidents and attacks as if they were incidents affecting our company, then we are likely to win strong advocates.
