Monday, 9 November 2015

Black Box versus White Box testing and when to use them

I have recently been speaking to many security professionals and asking them about black box and white box testing; I have also used the topic as an interview question on many occasions. People's answers are varied and interesting, but I thought I would share my own views briefly here.

Firstly, what are black box testing and white box testing, or grey box testing for that matter? Simply put, a black box test is one where the tester has no knowledge of the internal structure or workings of the system and will usually test with security protections in place. They may not even be given credentials to a system that requires authentication. This would be equivalent to what a hacker would have access to.

The opposite extreme is a white box test, where the tester has full knowledge of the system and access to the code, system settings and credentials for every role, including the administrator. The tester will likely be testing from inside the security perimeter. Grey box testing sits somewhere in the middle, where the tester will have knowledge of the functionality of the system and the overall components, but not detailed knowledge. They will usually have credentials, but may still test with some security controls in place.

So, when would you use the different levels of testing? Personally, I think that grey box testing is neither one thing nor the other and holds little value. For me, the motivation behind black box testing is compliance, whereas the motivation behind white box testing is security. With a white box test you are far more likely to find security issues, understand them and be able to fix or mitigate them effectively, so why wouldn't you do it? The black box test supposedly shows what a hacker would see, but a real attacker has far more time than a tester, so it isn't even representative. The only reason to perform a black box test, in my opinion, is to pass some audit that you are afraid you might fail if you performed a full white box test.

If you actually want to be secure, then make sure you always commission white box tests from your security testers.

Thursday, 30 April 2015

Improving Usability AND Security - is it possible?

I believe so, but only if security teams start to listen to what's important to the usability experts and adapt the security provision accordingly. As many have said before, there is no such thing as 100% security and we don't even necessarily want governmental levels of security for everything. Security provision should be appropriate to the systems and the information they protect.

I have worked on several projects with user experience designers and it has really changed my approach to securing systems. One particular project I was brought in to work on was having problems because the UX team were refusing to put in additional security measures and the security team were refusing to let the service go live. To cut a long story short, it turns out that there are known drop-out rates for registrations and user journeys based on the number of fields people have to fill in and the number of clicks they have to make. The requirements from the security team meant that the drop-out rates would have been so high the service wasn't going to work. How can you deliver a secure service in this instance? Well, we split the registration journey and allowed the first transaction to go through with lighter-weight security information. This won't work in all cases, but the idea is the same - what security is appropriate for this system?

The key here is to understand the user journey. Once you understand this, you can categorise the individual journeys and the information used. Not all journeys will access the same level of information and not all information has the same sensitivity. Authentication should be appropriate to the journey and the information. Don't make the user enter loads of authentication information all the time, or just to perform the simplest tasks. Some user journeys won't actually need authentication at all. For those that do, you should consider step-up authentication - that is, simple authentication to begin with, but as the user starts to access more sensitive information or make changes/transactions that are high risk, ask them for additional credentials. For example, a simple username and password could be used for the majority of user journeys, but perhaps a one-time token for higher-risk journeys.
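To make the idea concrete, here is a minimal sketch of step-up authentication in Python. The journey names, levels and mapping are invented for illustration - the point is simply that each journey declares the minimum assurance it needs, and the user is only challenged for stronger credentials when their current session falls short.

    from enum import IntEnum

    class AuthLevel(IntEnum):
        ANONYMOUS = 0       # no credentials presented yet
        PASSWORD = 1        # username and password
        ONE_TIME_TOKEN = 2  # password plus a one-time token

    # Hypothetical mapping of user journeys to the minimum level each requires.
    JOURNEY_REQUIREMENTS = {
        "view_public_pages": AuthLevel.ANONYMOUS,
        "view_account_summary": AuthLevel.PASSWORD,
        "change_payment_details": AuthLevel.ONE_TIME_TOKEN,
    }

    def required_step_up(journey, session_level):
        """Return the level the user must step up to, or None if the
        current session is already strong enough for this journey."""
        needed = JOURNEY_REQUIREMENTS.get(journey, AuthLevel.ONE_TIME_TOKEN)
        return needed if session_level < needed else None

    # A password-only session changing payment details is challenged for a
    # one-time token; the same session viewing an account summary is not.
    print(required_step_up("change_payment_details", AuthLevel.PASSWORD))
    print(required_step_up("view_account_summary", AuthLevel.PASSWORD))

Unknown journeys default to the strongest requirement here, which is the safer failure mode: a journey nobody has categorised yet should not silently become low-assurance.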

It is possible to have both usability and security. In order for this to work though, you have to:
  • understand the user journeys
  • ensure that it is usable most of the time for most tasks
  • categorise the information and set appropriate access levels
  • use step-up authentication for high-risk tasks rather than make the whole service hard to use
  • use risk engines transparently in the background to force step-up authentication or decline transactions/tasks when risk is above the acceptable threshold (a sketch of this follows below)
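As a rough illustration of the last two points, here is a sketch of a risk check running in the background. The signals, scores and thresholds are all made up for the example - a real risk engine would use far richer signals - but it shows the three outcomes: allow silently, force step-up authentication, or decline.

    STEP_UP_THRESHOLD = 40   # illustrative scores; tune per service and journey
    DECLINE_THRESHOLD = 80

    def risk_score(request):
        """Combine a few example signals into a single score."""
        score = 0
        if request.get("new_device"):
            score += 30
        if request.get("unusual_location"):
            score += 30
        if request.get("amount", 0) > 1000:
            score += 25
        return score

    def decide(request):
        score = risk_score(request)
        if score >= DECLINE_THRESHOLD:
            return "decline"   # above the acceptable threshold
        if score >= STEP_UP_THRESHOLD:
            return "step_up"   # ask for additional credentials
        return "allow"         # the user never notices the check

    print(decide({"new_device": True, "amount": 1500}))                            # step_up
    print(decide({"new_device": True, "unusual_location": True, "amount": 1500}))  # decline

The important property is that most users, most of the time, fall into the "allow" path and never see the extra friction.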

Friday, 20 February 2015

EU Commission Working Group looking at privacy concerns in IoT

The Article 29 Data Protection Working Party, which advises the EU Commission, has published its opinion on the security and privacy concerns of the Internet of Things. A couple of interesting quotes stand out from the document, and it points to possible future laws and regulations.
"Many questions arise around the vulnerability of these devices, often deployed outside a traditional IT structure and lacking sufficient security built into them."
"...users must remain in complete control of their personal data throughout the product lifecycle, and when organisations rely on consent as a basis for processing, the consent should be fully informed, freely given and specific."
One thing is for sure, privacy is likely to get eroded further with the widespread adoption of IoT devices and wearables. It is critical that these devices, and the services provided with them, have security built in from the start.

Tuesday, 10 February 2015

Internal cyber attacks - more thoughts

I spoke today at the European Information Security Summit 2015 on a panel entitled 'Should you launch an internal cyber attack?'. We only had 45 minutes, so I thought I'd share some of my thoughts here, including what I didn't get a chance to say.

Firstly, as we all know, the concept of a network perimeter is outdated and there is a real blurring of whether devices should be considered internal or external these days. It's not just about BYOD: most organisations provide laptops for their employees, and these laptops get connected at home, in airports, in hotels, etc. Any number of things could have happened to them during that time, so when they are reconnected to the corporate network they may already have been compromised. For this reason it should, to a certain extent, be every system for itself on the network, i.e. assume that the internal machines are compromised and try to provide reasonable levels of security anyway.

Secondly, the user is the weakest link. It has been said many times that we spend our time (and budget) on protecting the first 2000 miles and forget about the last 2 feet. This is less and less true these days, as security departments are waking up to the fact that education of the users is critical to the security of the information assets. However, the fact still remains that users make mistakes and can compromise the best security.

So, should we launch internal cyber attacks against ourselves? Yes, in my opinion - for several reasons.

Internal testing is about audit and improvements. If we launch an internal Pentest or Phishing attack, we can see the effectiveness of our controls, policies and user education. The critical point is to not use the results as an excuse to punish or name and shame - this is not Big Brother looking to punish you. If a user does click on a link in a Phishing email then we should see it as our failure to educate properly. If a user bypasses our controls then our controls haven't been explained properly or they are not appropriate (at least there may be a better way).
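As a rough idea of how the results of such an exercise might be handled, here is a small sketch that tallies the outcome of a simulated phishing campaign per team rather than per individual. The data and field names are invented for the example - the point is that reporting in aggregate measures the effectiveness of the education programme rather than naming and shaming anyone.

    from collections import defaultdict

    # Invented results from a simulated phishing exercise.
    results = [
        {"team": "finance", "clicked": True},
        {"team": "finance", "clicked": False},
        {"team": "engineering", "clicked": False},
        {"team": "engineering", "clicked": False},
        {"team": "engineering", "clicked": True},
    ]

    clicks = defaultdict(int)
    totals = defaultdict(int)
    for r in results:
        totals[r["team"]] += 1
        clicks[r["team"]] += int(r["clicked"])

    # Report per team, in aggregate, so the output tracks how well the
    # education is working rather than singling out individuals.
    for team in sorted(totals):
        print(f"{team}: {clicks[team]}/{totals[team]} clicked ({clicks[team] / totals[team]:.0%})")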

An example was discussed on the panel about people emailing a presentation to their home email account to work on it from home. In the example, this was a breach of policy and, if the presentation is classified as confidential or secret, then they shouldn't be doing this. However, rather than punishing the user immediately, try asking why they felt they needed to email it to their home computer. Was it that they don't have a laptop? Or that their laptop isn't capable enough? Or did they think they were doing a good thing by emailing it, so that they didn't have to take their corporate laptop out of the office, knowing they were going to the pub for a couple of hours and worrying about it getting stolen? There are motivations and context behind people's decisions. We see, and usually focus on, the effects without stopping to ask why they did it. Most people are rational and have reasons for acting as they do. We need to get to the heart of those reasons.

Education is critical to any security system and as security professionals we need to learn to communicate better. Traditionally (and stereotypically) security people are not good at communicating in a clear, non-technical, non-jargon-filled way. This has to change if we want people to act in a secure way. We have to be able to explain it to them. In my opinion, you have to make the risks and downsides real to the user in order to make them understand why it is that we're asking them to do or not do something. If you just give someone a directive or order that they don't understand then they will be antagonistic and won't follow it when it is needed, because they don't see the point and it's a hassle. If they understand the reasoning then they are likely to be more sympathetic. Nothing does this better than demonstrating what could happen. Hence the internal attacks.

The next question we have to ask ourselves is what constitutes the internal part of an internal attack. Is it just our systems, or does it include all those third party systems that touch our data? I could quite happily write a whole blog post on outsourcing to third parties and the risks, so I won't delve into it here.

I do also have to say that it worries me that we seem to be educating our users into clicking on certain types of unsolicited emails that could easily be Phishing attacks. An example that I used was the satisfaction or staff survey that most companies perform these days. These often come from external email addresses and have obscured links. To my mind we should be telling our users never to click on any of these links and to report them to IT security. Why shouldn't they ask our advice on an email they're unsure about? We're the experts.

One final point, which I think is a good idea, was suggested by another speaker: if we educate users about the security of their families, and assist them with personal security incidents and attacks as if they were incidents affecting our company, then we are likely to win strong advocates.

Welcome to the RLR UK Blog

This blog is about network and information security issues primarily, but it does stray into other IT related fields, such as web development and anything else that we find interesting.
