Friday, 24 February 2017

Cyber Security Predictions for 2017

I was asked to sit on a panel of experts, gaze into the crystal ball and make my predictions for what 2017 holds in store for cyber security, which got me thinking. The easy predictions are more breaches, more ransomware, more cyber security jobs, wage increases for security professionals, more 'qualified' professionals who have a piece of paper but don't really know what they're doing and, of course, vendors making even more money out of Fear, Uncertainty and Doubt (FUD). However, none of those is terribly interesting or any different from 2016, or 2015 for that matter; they are simply ongoing trends in the industry.

So what does 2017 hold in store for us in the security industry and is there anything new to worry about? An obvious one to call out is the EU's General Data Protection Regulation (GDPR). So what is GDPR? It replaces the previous data protection directive and aims to improve and harmonise data protection for EU citizens. It will affect non-EU companies that hold data on EU citizens as well as EU companies and agencies. Why is this such a big thing? The regulation places greater accountability and responsibility on companies, makes it law to disclose breaches and increases potential fines to €20m or 4% of global turnover from the previous year, whichever is greater.
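
The "whichever is greater" rule is worth making concrete: below a certain turnover the €20m floor dominates, above it the 4% figure takes over. A minimal sketch (the function name is mine, and turnover is assumed to be annual global turnover in euros):

```python
def max_gdpr_fine(global_turnover_eur: float) -> float:
    """Upper tier of GDPR fines: EUR 20m or 4% of the previous
    year's global turnover, whichever is greater."""
    return max(20_000_000, 0.04 * global_turnover_eur)

# At EUR 300m turnover, 4% is only EUR 12m, so the EUR 20m floor applies;
# at EUR 1bn, 4% = EUR 40m and the percentage rule wins.
print(max_gdpr_fine(300_000_000))    # 20000000
print(max_gdpr_fine(1_000_000_000))  # 40000000.0
```

The crossover sits at €500m turnover, which is why the fines bite hardest for large multinationals.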

When does it come into effect? 25th May 2018. So why talk about it as a prediction in 2017? Companies will have to be prepared well before this date and vendors will start working towards selling services specifically aimed at GDPR compliance this year. The problem I have with this is that I believe companies will take their collective eye off the ball and be so busy with GDPR that they won't keep pace with the changes in technology and threat landscape.

I also believe that fines should be handed out more readily. Too often we have companies suffering a breach saying that they were compliant and it must have been an 'advanced attack' or 'nation state' actor. This is mostly complete rubbish! What's actually happening is that people do whatever gives them a tick in the compliance box without paying any mind as to whether it actually makes them secure. They use compliance as an insurance policy instead of following the principles to make themselves more secure. Most breaches occur through the same broad issues as a decade ago (or more). Frankly, if, for example, you have an OWASP Top 10 vulnerability in your web app/service and you are breached, you should have the full fine thrown at you and those in charge should face negligence charges. There is simply no excuse for such well-known vulnerabilities to exist in live systems. Another point to remember with GDPR is that Brexit won't make us immune in Britain, as the Information Commissioner's Office (ICO) has already committed to it, so companies will have to prepare.

What else could we see in 2017? The IT industry is embracing DevOps, continuous integration, Platform as a Service (PaaS), software-defined networking and, of course, agile. Many of these systems or vendor offerings have poor or non-existent security models. That industry needs to catch up, and fast. In my opinion, the reason we haven't seen more issues with these technologies is that they haven't, until now, been adopted by the big target companies, e.g. the banks. This is changing now and I think we'll see more focus on these technologies over the course of this year in situations where security is of high importance.

This isn't just about the technologies, though: agile and the speed of deployment will change the way security professionals have to work. Gone are the days when the security professional has time to assess a solution at their leisure and fully test and assure it before go-live. I think threat modelling is going to become more important in this arena. Threat models can be built ahead of time and applied to new systems as they are developed. The emphasis then has to be on preventing the threat scenario as a whole (through a layered approach), not on fixing every single individual vulnerability/weakness. Basic security hygiene has to be brought up to an acceptable level across the board to enable this new way of working, as we can't rely on stopping a project whilst we fix every bit of it.

Something else I think will become more prevalent is big data and behavioural analytics. Companies are now starting to realise the power of big data and this is spilling over into the security industry. Some security teams are now employing data analysts and setting them anomaly detection problems or running behavioural reports on their employees, which is one of the best ways to catch the rogue insider. These are interesting developments and this type of data analysis is the future of security (alongside more traditional technologies and policy).
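
The simplest form of the anomaly detection mentioned above is flagging behaviour that sits several standard deviations away from a user's norm. A sketch with entirely hypothetical data (real behavioural analytics would use far richer features, but the principle is the same):

```python
from statistics import mean, stdev

def flag_anomalies(samples, threshold=3.0):
    """Flag values more than `threshold` standard deviations from
    the mean -- the most basic behavioural anomaly detector."""
    mu, sigma = mean(samples), stdev(samples)
    return [x for x in samples if sigma and abs(x - mu) / sigma > threshold]

# Hypothetical daily file-download counts for one employee: a sudden
# spike well outside their normal behaviour is flagged for review.
downloads = [12, 9, 14, 11, 10, 13, 250]
print(flag_anomalies(downloads, threshold=2.0))  # [250]
```

An analyst would then investigate the flagged day rather than trawl through every log line, which is exactly the value these teams are after.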

What else? I think that third party suppliers, the supply chain and smaller businesses will start to become more heavily targeted as the main targets get harder to breach. Smaller businesses can't usually afford the experienced cyber security teams that are required to secure them. So, they turn to vendors to sell them a silver bullet... on a budget. That's not going to work. Actually, basic security hygiene doesn't have to cost that much and doesn't require huge pay-outs to vendors. It does take expertise though and that is in short supply. As an industry I think we could do more to help smaller businesses with things like best practices and Security Technical Implementation Guides (STIGs) before the epidemic hits.

Finally, my fifth prediction is that we will start to see more attacks on connected systems, such as connected vehicles, building management systems, IoT devices, etc. I have worked with vehicle manufacturers and those involved in smart cities and smart homes/offices, and I can safely say that security is not top of their agendas - safety may be, but not security. Unfortunately, a lack of security can lead to a lack of safety in these cases, but I think a few harsh events will happen before the lessons are learned. Will 2017 be the year for this? Possibly not, as I think adoption of the technologies may not quite be there yet, but if we don't start dealing with it now we'll be in for a whole world of pain later.

Wednesday, 8 February 2017

The Threat Landscape Roundtable

I was invited along to SC Media's roundtable on The Threat Landscape last week and they have written an article on it. I was also interviewed and appear in their video summary. The article and video can be found here: https://www.scmagazineuk.com/roundtable-the-threat-landscape/article/635652/

Wednesday, 1 February 2017

The one question to ask a security team that will tell you if their company is secure

Well, okay, it won't actually tell you whether they are secure or not and there are other questions you could ask, but the point is you can tell a lot about a company's security by how they answer security questions. I was recently at a security round table and the conversation turned to third parties and how you can assure yourself of their security. Some advocated scoring companies or certifications, while others advocated sending questionnaires. The argument against questionnaires is that they give only a point-in-time view of the organisation. However, you can ask process- and policy-based questions and you can tell a lot from how they answer.

So, what is the question that will reveal all? Well, as I said it's not one question as such, more a type of question. It should be about something basic, some security control you're sure they have because everyone does. For example:

Why do you have a firewall?

Probable answers:
  • "because everyone has one"/"because the course I went on said I should have one"/"because my last organisation had one and they are very secure" - bad answer, you're not thinking about controls or security, but instead just buying popular products or whatever the vendor sells you, and you undoubtedly have a false sense of security
  • "because our PCI/ISO/HIPAA/Other certification says we have to" - bad answer, you're ticking boxes and chasing compliance rather than actually trying to be secure
  • "well, a firewall is part of a secure layered architecture and enables segregation at the network level, restricting the ingress and egress... etc." - okay answer, at least you know what it does and may understand its limitations
  • "our threat modelling has identified threat actors and attack scenarios that can be mitigated, in part, by introducing a firewall at this location in our network" - good answer: you understand the technology, you are thinking about how to deploy it, which technologies could help secure your assets, and which projects/controls give the best risk reduction for your limited budget

I have done (and still do) many third party assessments and I do advocate asking them questions rather than just trusting someone else's word or a rating/certification of some sort, but I'm mostly interested in how they answer questions. I've seen too many 'compliant' companies say "We're secure, the U.S. Government uses us!" or "All the high street banks use our service!", yet fail close inspection and have glaring weaknesses or vulnerabilities.

Trust your own judgement; ask them a question. And if you're a third party, ask yourself the question... with all your controls.

Sunday, 8 May 2016

File Deletion versus Secure Wiping (and how do I wipe an SSD?)

When is a deleted file actually removed from your device, or at least when does it become unrecoverable? It turns out that this question isn't always easy to answer, nor is a secure file deletion easy to achieve in all circumstances.

To better understand this we have to start from the basic principle that when you delete a file on your computer you are only deleting the pointer to the file, not the actual data. The data on your hard disk drive (HDD) is stored magnetically in sectors on platters that spin round inside the HDD (we'll come onto SSDs in a bit). So, how does the computer know where to look for your file? It has a table of indexes such as the File Allocation Table (FAT) or Master File Table (MFT) in NTFS. When you delete a file in your OS, all you are actually doing is removing its entries from the table of indexes so your OS can't find it any more and doesn't know it's there. However, all the data is still stored on the disk and IS STILL RECOVERABLE! Tools like Piriform's Recuva can scan your disk for orphaned files and file fragments and allow you to recover them.
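
A toy model makes the principle obvious. Here the "index table" is just a dictionary mapping file names to block numbers, and the "platter" is a list of data blocks (all names and data are illustrative):

```python
# Toy model of a file system: an index table maps names to block
# numbers, and the "platter" is just a list of data blocks.
blocks = ["secret plans", "holiday pics", "tax return"]
index = {"plans.txt": 0, "pics.jpg": 1, "tax.pdf": 2}

def delete(name):
    """'Delete' a file the way most file systems do: drop the index
    entry and leave the data block untouched."""
    del index[name]

delete("plans.txt")
print("plans.txt" in index)  # False: the OS can no longer find it
print(blocks[0])             # "secret plans": the data is still there
```

Recovery tools work by scanning the "blocks" directly and ignoring the index, which is why the data remains readable long after the file has apparently gone.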

So, how do you actually securely delete a file so that it is unrecoverable? The most common way to securely delete a file is to overwrite it one or more times with other data before removing the entries in the index table. Different schemes for overwriting the data exist from NIST, the US DoD, HMG, Australian Government, etc. These usually consist of 1-3 rounds of writing all zeros, all ones or random patterns to the sectors, i.e. physically overwriting the data on the disk before 'deleting' it. There are many tools available to securely delete files and securely wipe drives according to these requirements.
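
The overwrite-before-delete approach can be sketched in a few lines. This is a simplified illustration (random data each pass) rather than a certified implementation of any of the NIST/DoD/HMG schemes, and, as discussed below, it is only effective on a traditional HDD, not an SSD:

```python
import os
import secrets

def secure_delete(path, passes=3):
    """Overwrite a file in place before unlinking it. Effective on a
    traditional HDD; NOT reliable on an SSD, where wear-levelling may
    leave the original physical blocks untouched."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))  # random-pattern pass
            f.flush()
            os.fsync(f.fileno())  # push the writes out to the device
    os.remove(path)
```

Note the `fsync` on every pass: without it the OS may coalesce the passes in its write cache and only one pattern ever reaches the platters.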

Excellent, we've solved the problem of secure file deletion. Or have we? Well, no. There are usually some hidden areas of drives such as bad sectors that haven't actually failed, Host Protected Area (HPA), Device Configuration Overlay (DCO), etc. Interestingly, with DCO it is possible that you have a significantly bigger HDD capacity than is reported by the drive. Some manufacturers will sell bigger HDDs with the capacity artificially reduced for a variety of reasons. However, the important point here is that there are areas of the drive that you cannot normally access, but that may contain remnants of your data.

What of Solid State Drives (SSD)? Are they easier or harder to securely wipe? It turns out that they are much harder to wipe. SSDs can store your data anywhere and the controllers are programmed to 'wear' the drive evenly by keeping track of areas that get a lot of use and moving data around on the drive. So, assuming you keep roughly the same file size, when you edit your file on an HDD the original physical sectors will usually get overwritten with the new version. However, with SSDs, it is likely that the new version will be written to new areas of the disk, leaving the originals intact. It is very difficult to know where an SSD actually writes your data. They also have many hidden areas as above, as well as spare capacity used to cope with failing sectors or to even up the wear. The long and short of it is that if you use software to overwrite the file, like you would on an HDD, you probably haven't overwritten the data at all, but you will have reduced the life of your drive.

So how do we secure delete a file on an SSD? There aren't that many manufacturers of SSDs and most of them provide utilities to securely wipe their drives using the ATA Secure Erase (SE) command, which is a firmware supported command to securely wipe the whole drive, releasing electrons from the storage cells, thus wiping the storage. That's just wiped our whole drive though; how do I wipe just a file? Well, you can't really. You either wipe the drive or don't bother.

There is a 'gotcha' here as well though. I said earlier that there aren't many SSD manufacturers, but if you go to buy one there seem to be loads. Well, people like HP and IBM rebrand other people's SSDs (I believe they use Crucial). What's the harm in this? Well, they will sometimes re-flash the firmware to have their own feature set. That means that the original manufacturer's Secure Erase software may not work on them and the IBMs and HPs don't always provide an alternative (other than the traditional overwriting you would do on an HDD).

There must be something you can do though, surely? Well, yes there is. If you first encrypt your drive, or use file-level encryption, then the data that is on the drive should be unrecoverable (assuming you haven't stored the keys on the drive as well). This is actually your best bet for an SSD, but also does no harm on a traditional HDD.

OK, so if I want to get rid of a drive that is End of Life, what should I do? If it's an HDD, you should secure wipe it by overwriting the whole drive several times as described above, degauss it (i.e. using electromagnets to wipe the magnetic data on the platters) and then shred the drive. Yes, I did say shred the drive... into tiny pieces. You can get some impressive machinery to do this, or use a service to shred them on site for you. What about SSDs? Use the ATA Secure Erase function from the manufacturer's software and then shred them as before (just make sure the shredding process actually destroys the chips so they can't be reflowed onto another board to read them).

Monday, 4 January 2016

SC: Video Interview: Bankers v hackers

Security professionals can't afford to work in isolated bubbles when the attackers are openly sharing information about system vulnerabilities...

Watch my video interview for SCMagazine here.

Monday, 9 November 2015

Black Box versus White Box testing and when to use them

I have recently been speaking to many security professionals and asking them about black box and white box testing. I have used it as an interview question on many occasions as well. People's answers are varied and interesting, but I thought I would share my views briefly here.

Firstly, what are black box testing and white box testing, or grey box testing for that matter? Simply put, a black box test is one where the tester has no knowledge of the internal structure or workings of the system and will usually test with security protections in place. They may not even be given credentials to a system that requires authentication. This would be equivalent to what a hacker would have access to.

The opposite extreme is a white box test, where the tester has full knowledge of the system and access to the code, system settings and credentials for every role, including the administrator. The tester will likely be testing from inside the security perimeter. Grey box testing sits somewhere in the middle, where the tester will have knowledge of the functionality of the system and the overall components, but not detailed knowledge. They will usually have credentials, but may still test with some security controls in place.

So, when would you use the different levels of testing? Personally, I think that grey box testing is neither one thing nor the other and holds little value. For me, the motivation behind black box testing is compliance, whereas the motivation behind white box testing is security. With a white box test you are far more likely to find security issues, understand them and be able to fix or mitigate them effectively, so why wouldn't you do it? The black box test is supposedly what a hacker would see, but they have far more time, so it isn't even representative. The only reason to perform a black box test is to pass some audit that you are afraid you might fail if you perform a full white box test, in my opinion.

If you actually want to be secure, then make sure you always commission white box tests from your security testers.

Thursday, 30 April 2015

Improving Usability AND Security - is it possible?

I believe so, but only if security teams start to listen to what's important to the usability experts and adapt the security provision accordingly. As many have said before, there is no such thing as 100% security and we don't even necessarily want governmental levels of security for everything. Security provision should be appropriate to the systems and the information they protect.

I have worked on several projects with user experience designers and it has really changed my approach to securing systems. One particular project I was brought in to work on was having problems because the UX team were refusing to put in additional security measures and the security team were refusing to let them go live. To cut a long story short, it turns out that there are known drop-out rates for registrations or user journeys based on the number of fields people have to fill in and how many clicks they have to make. So, the requirements from the security team meant that the drop-out rates would be so high the service wasn't going to work. How can you deliver a secure service in this instance? Well, we split the registration journey and allowed the first transaction with lighter-weight security information. This won't work in all cases, but the idea is the same - what security is appropriate for this system?

The key here is to understand the user journey. Once you understand this, you can categorise the individual journeys and the information used. Not all journeys will access the same level of information and not all information has the same sensitivity. Authentication should be appropriate to the journey and information. Don't make the user enter loads of authentication information all the time or to do the simplest task. Some user journeys won't actually need authentication at all. For those that do, you should consider step-up authentication - that is, simple authentication to begin with, but as the user starts to access more sensitive information or make changes/transactions that are high risk, ask them for additional credentials. For example, a simple username and password could be used for the majority of user journeys, but perhaps a one-time token for more high-risk journeys.
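
Step-up authentication is easy to sketch once journeys are categorised by risk. The journey names, risk tiers and factor names below are all hypothetical, purely to show the shape of the decision:

```python
# Hypothetical risk tiers for user journeys: low-risk journeys need
# only a password; high-risk ones trigger step-up to a one-time token.
JOURNEY_RISK = {
    "view_balance": "low",
    "update_email": "medium",
    "transfer_funds": "high",
}

def required_auth(journey, session):
    """Return the extra credential needed, if any, for this journey
    given the factors the session has already presented."""
    risk = JOURNEY_RISK.get(journey, "high")  # fail closed on unknowns
    if risk == "high" and "otp" not in session["factors"]:
        return "otp"          # step-up: ask for a one-time token
    if not session["factors"]:
        return "password"     # baseline authentication
    return None               # already sufficiently authenticated

session = {"factors": ["password"]}
print(required_auth("view_balance", session))    # None
print(required_auth("transfer_funds", session))  # otp
```

The point is that the common journeys stay friction-free while only the risky ones interrupt the user, which is precisely the usability/security trade the UX team can live with.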

It is possible to have both usability and security. In order for this to work though, you have to:
  • understand the user journeys
  • ensure that it is usable most of the time for most tasks
  • categorise the information and set appropriate access levels
  • use step-up authentication for high-risk tasks rather than make the whole service hard to use
  • use risk engines transparently in the background to force step-up authentication or decline transactions/tasks when risk is above the acceptable threshold

Welcome to the RLR UK Blog

This blog is about network and information security issues primarily, but it does stray into other IT related fields, such as web development and anything else that we find interesting.
