Sunday, 8 May 2016

File Deletion versus Secure Wiping (and how do I wipe an SSD?)

When is a deleted file actually removed from your device, or at least when does it become unrecoverable? It turns out that this question isn't always easy to answer, nor is secure file deletion easy to achieve in all circumstances.

To understand this better we have to start from the basic principle that when you delete a file on your computer you are only deleting the pointer to the file, not the actual data. The data on your hard disk drive (HDD) is stored magnetically in sectors on platters that spin round inside the HDD (we'll come onto SSDs in a bit). So, how does the computer know where to look for your file? It keeps an index, such as the File Allocation Table (FAT) on FAT filesystems or the Master File Table (MFT) on NTFS. When you delete a file, all your OS actually does is remove the file's entries from that index, so the OS can't find it any more and doesn't know it's there. However, all the data is still stored on the disk and IS STILL RECOVERABLE! Tools like Piriform's Recuva can scan your disk for orphaned files and file fragments and allow you to recover them.
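You can see this for yourself with a minimal sketch in Python. It assumes Linux, root privileges and a small throwaway test volume - DEVICE and MOUNTPOINT below are placeholders, so do not point this at a disk you care about:

    # Sketch: show that 'deleted' data survives on the raw disk.
    # ASSUMPTIONS: Linux, run as root, DEVICE is a throwaway test
    # partition mounted at MOUNTPOINT -- both names are placeholders.
    import os

    DEVICE = "/dev/sdb1"           # hypothetical test partition
    MOUNTPOINT = "/mnt/test"       # hypothetical mount point
    MARKER = b"RLR-SECURE-DELETE-TEST-MARKER"

    path = os.path.join(MOUNTPOINT, "secret.txt")
    with open(path, "wb") as f:
        f.write(MARKER)
        f.flush()
        os.fsync(f.fileno())       # force the data onto the disk

    os.remove(path)                # 'delete' only removes the index entry
    os.system(f"umount {MOUNTPOINT}")  # stop the FS reusing the blocks

    # Scan the raw device: the marker is usually still sitting there.
    # (A real scanner would overlap chunks so the marker can't be split
    # across a boundary; this is just an illustration.)
    with open(DEVICE, "rb") as dev:
        offset = 0
        while chunk := dev.read(1 << 20):
            if MARKER in chunk:
                print(f"Marker found at ~byte {offset + chunk.find(MARKER)}")
                break
            offset += len(chunk)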

So, how do you actually securely delete a file so that it is unrecoverable? The most common way is to overwrite it one or more times with other data before removing the entries in the index. Different schemes for overwriting the data exist from NIST, the US DoD, HMG, the Australian Government and others. These usually consist of one to three passes writing all zeros, all ones or random patterns to the sectors, i.e. physically overwriting the data on the disk before 'deleting' it. There are many tools available to securely delete files and securely wipe drives according to these requirements.
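As a rough illustration of the principle, the sketch below does a multi-pass random overwrite in Python before unlinking the file (the filename is hypothetical). It assumes a traditional HDD with a filesystem that rewrites blocks in place; journaling and copy-on-write filesystems, and (as we'll see) SSDs, can all keep copies of the data elsewhere, so treat it as a demonstration rather than a guaranteed wipe:

    # Sketch: overwrite a file's contents in place before unlinking it.
    # Assumes an HDD and a filesystem that rewrites blocks in place;
    # NOT reliable on SSDs or copy-on-write filesystems.
    import os

    def secure_delete(path: str, passes: int = 3) -> None:
        length = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(os.urandom(length))   # random-pattern pass
                f.flush()
                os.fsync(f.fileno())          # push this pass to the platters
        os.remove(path)                       # finally drop the index entry

    secure_delete("old-secrets.txt")          # hypothetical file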

Excellent, we've solved the problem of secure file deletion. Or have we? Well, no. There are usually hidden areas of the drive, such as sectors remapped as 'bad' (which may still hold readable data), the Host Protected Area (HPA) and the Device Configuration Overlay (DCO). Interestingly, with DCO it is possible that you have a significantly bigger HDD capacity than the drive reports. Some manufacturers will sell bigger HDDs with the capacity artificially reduced for a variety of reasons. The important point here is that there are areas of the drive that you cannot normally access, but that may contain remnants of your data.
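On Linux you can at least check whether these areas exist. A small sketch using hdparm (the -N option queries the HPA and --dco-identify queries the DCO; /dev/sdX is a placeholder):

    # Sketch: query hidden drive areas on Linux with hdparm.
    # ASSUMPTIONS: hdparm installed, run as root, /dev/sdX is a placeholder.
    import subprocess

    DEVICE = "/dev/sdX"

    # -N reports "max sectors = visible/native" and whether an HPA is set.
    subprocess.run(["hdparm", "-N", DEVICE], check=True)

    # --dco-identify reports the real capacity behind any DCO.
    subprocess.run(["hdparm", "--dco-identify", DEVICE], check=True)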

What of Solid State Drives (SSDs)? Are they easier or harder to securely wipe? It turns out that they are much harder to wipe. SSDs can store your data anywhere, and their controllers are programmed to 'wear' the drive evenly by keeping track of areas that get a lot of use and moving data around on the drive. So, assuming you keep roughly the same file size, when you edit your file on an HDD the original physical sectors will usually get overwritten with the new version. With an SSD, however, it is likely that the new version will be written to a new area of the disk, leaving the original intact. It is very difficult to know where an SSD actually writes your data. They also have hidden areas like those above, as well as spare capacity used to cope with failing cells and to even out the wear. The long and short of it is that if you use software to overwrite the file, like you would on an HDD, you probably haven't overwritten the data at all, but you will have reduced the life of your drive.
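A toy model makes the problem obvious. In this sketch (purely illustrative - real controllers are far more complex) every write of a logical block lands on a fresh physical page, and the old page keeps the stale data until the controller eventually garbage-collects it:

    # Toy model of SSD wear levelling: logical overwrites land on new
    # physical pages, leaving the old copies behind. Purely illustrative.
    class ToySSD:
        def __init__(self):
            self.pages = []        # physical pages; append-only until GC
            self.mapping = {}      # logical block -> physical page index

        def write(self, logical_block: int, data: bytes) -> None:
            self.pages.append(data)               # always a fresh page
            self.mapping[logical_block] = len(self.pages) - 1

        def read(self, logical_block: int) -> bytes:
            return self.pages[self.mapping[logical_block]]

    ssd = ToySSD()
    ssd.write(0, b"my secret data")
    ssd.write(0, b"XXXXXXXXXXXXXX")   # software 'overwrite' of block 0
    print(ssd.read(0))    # b'XXXXXXXXXXXXXX' -- looks wiped...
    print(ssd.pages[0])   # b'my secret data' -- ...but the old page remains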

So how do we securely delete a file on an SSD? There aren't that many manufacturers of SSDs, and most of them provide utilities to securely wipe their drives using the ATA Secure Erase (SE) command, a firmware-supported command to securely wipe the whole drive, releasing electrons from the storage cells and thus wiping the storage. That's just wiped our whole drive though; how do I wipe just a file? Well, you can't really. You either wipe the drive or don't bother.
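If your vendor doesn't supply a utility, hdparm on Linux can issue the command itself. A heavily hedged sketch follows: this really will destroy every byte on the target, /dev/sdX and the temporary password are placeholders, and the drive must not be in the 'frozen' security state (check the output of hdparm -I first):

    # Sketch: issue ATA Secure Erase via hdparm. DESTROYS ALL DATA.
    # ASSUMPTIONS: Linux, root, drive not 'frozen', /dev/sdX and the
    # temporary password are placeholders.
    import subprocess

    DEVICE = "/dev/sdX"
    TEMP_PASSWORD = "wipeit"   # throwaway; the erase clears it again

    # Set a temporary user password to enable the security feature set...
    subprocess.run(["hdparm", "--user-master", "u",
                    "--security-set-pass", TEMP_PASSWORD, DEVICE], check=True)

    # ...then trigger the firmware's secure erase of the whole drive.
    subprocess.run(["hdparm", "--user-master", "u",
                    "--security-erase", TEMP_PASSWORD, DEVICE], check=True)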

There is a 'gotcha' here as well though. I said earlier that there aren't many SSD manufacturers, but if you go to buy one there seem to be loads. Well, people like HP and IBM rebrand other people's SSDs (I believe they use Crucial). What's the harm in this? Well, they will sometimes re-flash the firmware to have their own feature set. That means that the original manufacturer's Secure Erase software may not work on them and the IBMs and HPs don't always provide an alternative (other than the traditional overwriting you would do on an HDD).

There must be something you can do though, surely? Well, yes there is. If you first encrypt your drive, or use file-level encryption, then the data that is on the drive should be unrecoverable (assuming you haven't stored the keys on the drive as well). This is actually your best bet for an SSD, but also does no harm on a traditional HDD.
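This is the idea behind 'crypto-erase': if data only ever hits the disk encrypted, destroying the key is as good as destroying the data. A file-level sketch, assuming the third-party Python cryptography package is installed and that the key lives somewhere off the drive (here it is simply held in memory):

    # Sketch of crypto-erase at file level: only ciphertext ever touches
    # the disk, so securely discarding the key makes any remnants --
    # including copies in hidden areas or stale SSD pages -- unreadable.
    # ASSUMES the third-party 'cryptography' package; the key must be
    # stored OFF this drive for the scheme to work.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # keep this somewhere else!
    f = Fernet(key)

    with open("secret.enc", "wb") as out:
        out.write(f.encrypt(b"my secret data"))

    # 'Secure delete' now just means destroying the key: whatever is
    # left of secret.enc on the drive is indistinguishable from noise.
    del f, key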

OK, so if I want to get rid of a drive that is End of Life, what should I do? If it's an HDD, you should securely wipe it by overwriting the whole drive several times as described above, degauss it (i.e. use electromagnets to destroy the magnetic data on the platters) and then shred the drive. Yes, I did say shred the drive... into tiny pieces. You can get some impressive machinery to do this, or use a service that will shred them on site for you. What about SSDs? Use the ATA Secure Erase function from the manufacturer's software and then shred them as before (just make sure the shredding process actually destroys the chips, so they can't be re-flowed onto another board and read).

Monday, 4 January 2016

SC: Video Interview: Bankers v hackers

Security professionals can't afford to work in isolated bubbles when the attackers are openly sharing information about system vulnerabilities...

Watch my video interview for SCMagazine here.

Monday, 9 November 2015

Black Box versus White Box testing and when to use them

I have recently been speaking to many security professionals and asking them about black box and white box testing. I have used the distinction as an interview question on many occasions as well. People's answers are varied and interesting, but I thought I would share my own views briefly here.

Firstly, what are black box testing and white box testing, or grey box testing for that matter? Simply put, a black box test is one where the tester has no knowledge of the internal structure or workings of the system and will usually test with security protections in place. They may not even be given credentials to a system that requires authentication. This would be equivalent to what a hacker would have access to.

The opposite extreme is a white box test, where the tester has full knowledge of the system and access to the code, system settings and credentials for every role, including the administrator. The tester will likely be testing from inside the security perimeter. Grey box testing sits somewhere in the middle, where the tester will have knowledge of the functionality of the system and the overall components, but not detailed knowledge. They will usually have credentials, but may still test with some security controls in place.

So, when would you use the different levels of testing? Personally, I think that grey box testing is neither one thing nor the other and holds little value. For me, the motivation behind black box testing is compliance, whereas the motivation behind white box testing is security. With a white box test you are far more likely to find security issues, understand them and be able to fix or mitigate them effectively, so why wouldn't you do it? The black box test is supposedly what a hacker would see, but a real attacker has far more time than a paid tester, so it isn't even representative. The only reason to perform a black box test, in my opinion, is to pass some audit that you are afraid you might fail if you performed a full white box test.

If you actually want to be secure, then make sure you always commission white box tests from your security testers.

Thursday, 30 April 2015

Improving Usability AND Security - is it possible?

I believe so, but only if security teams start to listen to what's important to the usability experts and adapt the security provision accordingly. As many have said before, there is no such thing as 100% security, and we don't even necessarily want governmental levels of security for everything. Security provision should be appropriate to the system and the information it protects.

I have worked on several projects with user experience designers and it has really changed my approach to securing systems. One particular project I was brought in to work on was having problems because the UX team were refusing to put in additional security measures and the security team were refusing to let the service go live. To cut a long story short, it turns out that there are known drop-out rates for registrations and user journeys based on the number of fields people have to fill in and the number of clicks they have to make. The security team's requirements meant that the drop-out rates would be so high the service wasn't going to work. How can you deliver a secure service in this instance? Well, we split the registration journey and allowed the first transaction with lighter-weight security information. This won't work in all cases, but the idea is the same - what security is appropriate for this system?

The key here is to understand the user journey. Once you understand this, you can categorise the individual journeys and the information they use. Not all journeys will access the same level of information, and not all information has the same sensitivity. Authentication should be appropriate to the journey and the information. Don't make the user enter loads of authentication information all the time, or for the simplest tasks. Some user journeys won't need authentication at all. For those that do, consider step-up authentication - that is, simple authentication to begin with, but as the user starts to access more sensitive information or make changes/transactions that are high risk, ask them for additional credentials. For example, a simple username and password could cover the majority of user journeys, with a one-time token for the high-risk ones.
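As a rough sketch of the pattern (in Python, with hypothetical names throughout - this is not any real framework's API), each action declares the authentication level it needs, and the session only prompts for stronger credentials when the user crosses that threshold:

    # Sketch of step-up authentication: actions declare the auth level
    # they require; the session prompts for more only when needed.
    # All names are hypothetical, not a real framework's API.
    from enum import IntEnum

    class AuthLevel(IntEnum):
        NONE = 0        # e.g. browsing public content
        PASSWORD = 1    # username + password
        OTP = 2         # one-time token for high-risk actions

    class Session:
        def __init__(self):
            self.level = AuthLevel.NONE

        def step_up(self, required: AuthLevel) -> None:
            while self.level < required:
                # A real service would redirect to the next factor here.
                print(f"Prompting for factor level {self.level + 1}...")
                self.level = AuthLevel(self.level + 1)

        def perform(self, action: str, required: AuthLevel) -> None:
            self.step_up(required)
            print(f"OK: {action}")

    s = Session()
    s.perform("view balance", AuthLevel.PASSWORD)    # prompts for password
    s.perform("view statement", AuthLevel.PASSWORD)  # no new prompt
    s.perform("transfer £5,000", AuthLevel.OTP)      # asks for the OTP now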

It is possible to have both usability and security. In order for this to work though, you have to:
  • understand the user journeys
  • ensure that it is usable most of the time for most tasks
  • categorise the information and set appropriate access levels
  • use step-up authentication for high-risk tasks rather than make the whole service hard to use
  • use risk engines transparently in the background to force step-up authentication or decline transactions/tasks when risk is above the acceptable threshold

Friday, 20 February 2015

EU Commission Working Group looking at privacy concerns in IoT

The Article 29 Working Party, which advises the EU Commission on data protection, has published its opinion on the security and privacy concerns of the Internet of Things. A couple of interesting quotes from the document point to possible future laws and regulation.
"Many questions arise around the vulnerability of these devices, often deployed outside a traditional IT structure and lacking sufficient security built into them."
"...users must remain in complete control of their personal data throughout the product lifecycle, and when organisations rely on consent as a basis for processing, the consent should be fully informed, freely given and specific."
One thing is for sure: privacy is likely to be eroded further with the widespread adoption of IoT devices and wearables. It is critical that these devices, and the services provided with them, have security built in from the start.

Tuesday, 10 February 2015

Internal cyber attacks - more thoughts

I presented on a panel today, entitled 'Should you launch an internal cyber attack?', at the European Information Security Summit 2015. We only had 45 minutes, so I thought I'd share some of my thoughts, and what I didn't get to say, here.

Firstly, as we all know, the concept of a network perimeter is outdated and there is a real blurring of whether devices should be considered internal or external these days. It's not just about BYOD: most organisations provide laptops for their employees, and these laptops get connected at home, in airports, in hotels, etc. Any number of things could have happened to them during that time, so when they are reconnected to the network they may have been compromised. For this reason it should, to a certain extent, be every system for itself in the network, i.e. assume that the internal machines are compromised and try to provide reasonable levels of security anyway.

Secondly, the user is the weakest link. It has been said many times that we spend our time (and budget) on protecting the first 2000 miles and forget about the last 2 feet. This is less and less true these days, as security departments are waking up to the fact that education of the users is critical to the security of the information assets. However, the fact still remains that users make mistakes and can compromise the best security.

So, should we launch internal cyber attacks against ourselves? Yes, in my opinion - for several reasons.

Internal testing is about audit and improvements. If we launch an internal Pentest or Phishing attack, we can see the effectiveness of our controls, policies and user education. The critical point is to not use the results as an excuse to punish or name and shame - this is not Big Brother looking to punish you. If a user does click on a link in a Phishing email then we should see it as our failure to educate properly. If a user bypasses our controls then our controls haven't been explained properly or they are not appropriate (at least there may be a better way).

An example was discussed on the panel about people emailing a presentation to their home email account to work on it from home. In the example this was a breach of policy, and if the presentation is classified as confidential or secret then they shouldn't be doing it. However, rather than punishing the user immediately, try asking why they felt they needed to email it to their home computer. Was it that they don't have a laptop? Or that their laptop isn't capable enough? Or that they think they are doing a good thing by emailing it so they don't have to take their corporate laptop out of the office, because they know they're going to the pub for a couple of hours and are worried about it getting stolen? There are motivations and context behind people's decisions. We see, and usually focus on, the effects without stopping to ask why they did it. Most people are rational and have reasons for acting as they do. We need to get to the heart of those reasons.

Education is critical to any security system and as security professionals we need to learn to communicate better. Traditionally (and stereotypically) security people are not good at communicating in a clear, non-technical, non-jargon-filled way. This has to change if we want people to act in a secure way. We have to be able to explain it to them. In my opinion, you have to make the risks and downsides real to the user in order to make them understand why it is that we're asking them to do or not do something. If you just give someone a directive or order that they don't understand then they will be antagonistic and won't follow it when it is needed, because they don't see the point and it's a hassle. If they understand the reasoning then they are likely to be more sympathetic. Nothing does this better than demonstrating what could happen. Hence the internal attacks.

The next question we have to ask ourselves is what constitutes the internal part of an internal attack. Is it just our systems, or does it include all those third party systems that touch our data? I could quite happily write a whole blog post on outsourcing to third parties and the risks, so I won't delve into it here.

I do also have to say that it worries me that we seem to be educating our users into clicking on certain types of unsolicited emails that could easily be Phishing attacks. An example I used was the satisfaction or staff survey that most companies run these days. These often come from external email addresses and have obscured links. To my mind we should be telling our users never to click on links in these emails, and to report them to IT security instead. Why shouldn't they ask our advice on an email they're unsure about? We're the experts.

One final point was suggested by a speaker, which I think is a good idea. If we educate users about the security of their family and assist them with personal security incidents and attacks as if they are those of our company, then we are likely to win strong advocates.

Friday, 8 August 2014

Security groups should sit under Marketing, not IT

OK, so I'm being a little facetious, but I do think that putting Security departments under IT is a bad idea - not so much because they don't fit there, but because it usually gives the wrong impression and not enough visibility.

Security is far more wide-reaching than IT alone and touches every part of the business. By considering it part of IT, and funding it from IT budgets, it can be pigeonholed and ignored by anyone who wouldn't engage IT for their project or job. Security covers all information, from digital to paper-based, and is as concerned with aspects such as user education as it is with technology.

There is a clear conflict of interest between IT and Security as well. Part of the Security team's function is to monitor, audit and assess the systems put in place and maintained by the IT department. If the Security team sits within this department then there can be a question over the segregation of duties and responsibility. In addition to this, Security departments can end up competing with other parts of IT for budget. How well does this work when project budgets are allocated to one department responsible for producing new features and fixing the vulnerabilities in old ones?

The Security department should answer directly to the board and communicate risk, not technology. It is important that they are involved with all aspects of the business from Marketing, through Procurement and Legal, to the IT department. You will, more often than not, get a much better idea of what the business does and what's important to it by sitting with the Marketing team than with the IT team. Hence the title of this post.

Welcome to the RLR UK Blog

This blog is about network and information security issues primarily, but it does stray into other IT related fields, such as web development and anything else that we find interesting.
