Monday, 9 November 2015

Black Box versus White Box testing and when to use them

I have recently been speaking to many security professionals, asking them about black box and white box testing, and I have used the topic as an interview question on many occasions. People's answers are varied and interesting, so I thought I would share my own views briefly here.

Firstly, what are black box testing and white box testing, or grey box testing for that matter? Simply put, a black box test is one where the tester has no knowledge of the internal structure or workings of the system and will usually test with security protections in place. They may not even be given credentials to a system that requires authentication. This would be equivalent to what a hacker would have access to.

The opposite extreme is a white box test, where the tester has full knowledge of the system and access to the code, system settings and credentials for every role, including the administrator. The tester will likely be testing from inside the security perimeter. Grey box testing sits somewhere in the middle, where the tester will have knowledge of the functionality of the system and the overall components, but not detailed knowledge. They will usually have credentials, but may still test with some security controls in place.

So, when would you use the different levels of testing? Personally, I think that grey box testing is neither one thing nor the other and holds little value. For me, the motivation behind black box testing is compliance, whereas the motivation behind white box testing is security. With a white box test you are far more likely to find security issues, understand them and be able to fix or mitigate them effectively, so why wouldn't you do it? The black box test supposedly shows what a hacker would see, but a real attacker has far more time than a tester, so it isn't even representative. In my opinion, the only reason to perform a black box test is to pass an audit that you are afraid you might fail under a full white box test.

If you actually want to be secure, then make sure you always commission white box tests from your security testers.

Friday, 20 February 2015

EU Commission Working Group looking at privacy concerns in IoT

The Article 29 Working Party, which advises the EU Commission on data protection, has published its opinion on the security and privacy concerns of the Internet of Things. A couple of interesting quotes from this document point to possible future laws and regulations.
"Many questions arise around the vulnerability of these devices, often deployed outside a traditional IT structure and lacking sufficient security built into them."
"...users must remain in complete control of their personal data throughout the product lifecycle, and when organisations rely on consent as a basis for processing, the consent should be fully informed, freely given and specific."
One thing is for sure, privacy is likely to get eroded further with the widespread adoption of IoT devices and wearables. It is critical that these devices, and the services provided with them, have security built in from the start.

Tuesday, 10 February 2015

Internal cyber attacks - more thoughts

I presented on a panel today at the European Information Security Summit 2015, entitled 'Should you launch an internal cyber attack?' We only had 45 minutes and I thought I'd share some of my thoughts, and what I didn't get to say, here.

Firstly, as we all know, the concept of a network perimeter is outdated and there is a real blurring these days of whether devices should be considered internal or external. It's not just about BYOD: most organisations provide laptops for their employees, and these laptops get connected at home, in airports, hotels and so on. Any number of things could have happened to them in that time, so when they are reconnected to the corporate network they may already be compromised. For this reason it should, to a certain extent, be every system for itself on the network, i.e. assume that the internal machines are compromised and try to provide reasonable levels of security anyway.

Secondly, the user is the weakest link. It has been said many times that we spend our time (and budget) on protecting the first 2000 miles and forget about the last 2 feet. This is less and less true these days, as security departments are waking up to the fact that education of the users is critical to the security of the information assets. However, the fact still remains that users make mistakes and can compromise the best security.

So, should we launch internal cyber attacks against ourselves? Yes, in my opinion - for several reasons.

Internal testing is about audit and improvement. If we launch an internal Pentest or Phishing attack, we can see the effectiveness of our controls, policies and user education. The critical point is not to use the results as an excuse to punish or name and shame - this is not Big Brother looking to catch people out. If a user clicks on a link in a Phishing email, we should see it as our failure to educate properly. If a user bypasses our controls, then either the controls haven't been explained properly or they are not appropriate (at the least, there may be a better way).

An example was discussed on the panel about people emailing a presentation to their home email account to work on it from home. In the example this was a breach of policy and, if the presentation is classified as confidential or secret, they shouldn't be doing it. However, rather than punishing the user immediately, try asking why they felt they needed to email it to their home computer. Was it that they don't have a laptop? Or that their laptop isn't capable enough? Or that they think they are doing a good thing: they know they're going to the pub for a couple of hours, are worried about their corporate laptop getting stolen, and email the file so they don't have to take the laptop out of the office? There are motivations and context behind people's decisions. We see, and usually focus on, the effects without stopping to ask why they did it. Most people are rational and have reasons for acting as they do. We need to get to the heart of those reasons.

Education is critical to any security system and as security professionals we need to learn to communicate better. Traditionally (and stereotypically) security people are not good at communicating in a clear, non-technical, non-jargon-filled way. This has to change if we want people to act in a secure way. We have to be able to explain it to them. In my opinion, you have to make the risks and downsides real to the user in order to make them understand why it is that we're asking them to do or not do something. If you just give someone a directive or order that they don't understand then they will be antagonistic and won't follow it when it is needed, because they don't see the point and it's a hassle. If they understand the reasoning then they are likely to be more sympathetic. Nothing does this better than demonstrating what could happen. Hence the internal attacks.

The next question we have to ask ourselves is what constitutes the internal part of an internal attack. Is it just our systems, or does it include all those third party systems that touch our data? I could quite happily write a whole blog post on outsourcing to third parties and the risks, so I won't delve into it here.

I do also have to say that it worries me that we seem to be educating our users into clicking on certain types of unsolicited emails that could easily be Phishing attacks. An example that I used was the satisfaction or staff survey that most companies perform these days. These often come from external email addresses and have obscured links. To my mind we should be telling our users to never click on any of these links and report them to IT security. Why shouldn't they ask our advice on an email they're unsure about? We're the experts.

One final point was suggested by a speaker, which I think is a good idea. If we educate users about the security of their family and assist them with personal security incidents and attacks as if they are those of our company, then we are likely to win strong advocates.

Friday, 8 August 2014

Security groups should sit under Marketing, not IT

OK, so I'm being a little facetious, but I do think that putting Security departments under IT is a bad idea: not because they don't naturally fit there, but because doing so usually gives the wrong impression and too little visibility.

Security is far more wide-reaching than IT alone and touches every part of the business. By treating it as part of IT, and funding it from IT budgets, it can be pigeonholed and ignored by anyone who wouldn't engage IT for their project or job. Security covers all information, from digital to paper-based, and is as much concerned with aspects such as user education as with technology.

There is a clear conflict of interest between IT and Security as well. Part of the Security team's function is to monitor, audit and assess the systems put in place and maintained by the IT department. If the Security team sits within that department, there is a question over segregation of duties and responsibility. In addition, Security departments can end up competing with other parts of IT for budget. How well does that work when project budgets are allocated to a single department responsible both for delivering new features and for fixing the vulnerabilities in old ones?

The Security department should answer directly to the board and communicate risk, not technology. It is important that they are involved with all aspects of the business from Marketing, through Procurement and Legal, to the IT department. You will, more often than not, get a much better idea of what the business does and what's important to it by sitting with the Marketing team than with the IT team. Hence the title of this post.

Saturday, 24 May 2014

eBay's Weak Security Architecture

Well, eBay are in the news thanks to a breach of 145 million users' account details. There are a few worrying things about this incident, beyond the breach itself, that point to architectural issues in eBay's security.

The first issue is that a spokeswoman (according to Reuters) claimed "that it used 'sophisticated', proprietary hashing and salting technology to protect the passwords." This sounds very much like security through obscurity, which doesn't work. So, either they are using a proprietary implementation of a publicly known algorithm, or they have created their own; both approaches are doomed. No one person can think of all the attacks on an algorithm, which is why we have public scrutiny. Even the best cryptographers in the world can't create new algorithms with acceptable levels of security every time. Do eBay have the best cryptographers in the world working for them? I don't believe so, but I could be wrong.

Also, if their argument is that hackers can't attack the algorithm because they don't know it, then I'm fairly sure they're wrong there too. Even if the algorithm were secure enough to stand up to analysis of the hashes alone, the hackers have eBay staff passwords, so perhaps they also have access to the code! If, on the other hand, they have their own implementation of a public algorithm, I have to ask why. There are many examples of implementations that went wrong and introduced vulnerabilities, e.g. Heartbleed in OpenSSL. Do they think they know better?
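
To be clear about the alternative: salted password hashing needs nothing proprietary. Here is a minimal sketch using only Python's standard library and the publicly scrutinised PBKDF2 construction; the iteration count and salt size are illustrative choices of mine, not anything eBay uses.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Hash a password with a random per-user salt using PBKDF2-HMAC-SHA256."""
    if salt is None:
        salt = os.urandom(16)  # a unique salt per user defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    """Recompute the hash and compare in constant time."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored_digest)
```

The point is that everything here - the algorithm, the iteration count, the salt handling - can be published without weakening the scheme; only the stored salts and digests need protecting.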

The second issue is that they don't seem to encrypt Personally Identifiable Information (PII). Admittedly, encrypting PII at rest doesn't solve every problem, as vulnerabilities in the web application could still expose the data, but it is likely to have helped in this situation.

Finally, and most importantly, how did gaining access to eBay staff accounts give attackers access to the data? Database administrators shouldn't have access to read the data in the databases they manage - why would they need it? Also, I would hope that there are VPNs between the corporate and production systems, protected with 2-factor authentication. So how did the attackers get in? Well, either eBay don't use this standard, simple layer of protection; they leave their machines logged into the VPN for extended periods; or they protect the VPN with the same password as their account.

Even if eBay do implement VPNs properly with 2-factor authentication, the production servers shouldn't have accounts on them that map to user accounts on the corporate network. Administrative accounts on production servers should have proper audited account control with single use passwords. Administrators should have to 'sign out' an account and be issued with a one-time password for it by the security group responsible for Identity and Access Management (IAM).
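
As a sketch of that sign-out idea, here is a hypothetical broker (the class name, token scheme and audit format are all my invention, not a real IAM product): an administrator signs out an account, the request is audited, and the issued password is invalidated on first use.

```python
import secrets

class AdminAccountBroker:
    """Hypothetical IAM broker issuing single-use passwords for admin accounts."""

    def __init__(self):
        self._issued = {}  # account name -> currently valid one-time password

    def sign_out(self, account, admin):
        """Record who took the account and issue a fresh one-time password."""
        otp = secrets.token_urlsafe(16)
        self._issued[account] = otp
        print(f"AUDIT: {admin} signed out {account}")  # real systems log centrally
        return otp

    def authenticate(self, account, password):
        """Accept the one-time password exactly once, then invalidate it."""
        stored = self._issued.get(account)
        if stored is not None and secrets.compare_digest(stored, password):
            del self._issued[account]  # single use: a replayed password fails
            return True
        return False
```

The design choice that matters is the deletion inside authenticate: even if the password is captured in transit, it is worthless after the administrator's first login.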

All this leads me to think that eBay have implemented a weak security architecture. 

Monday, 10 June 2013

Denial of Service (DoS) and Brute-Force Protection

Recently it has become clear to me that, although the terms Denial of Service (DoS), Distributed Denial of Service (DDoS) and Brute-Force are used by many, people don't really understand them. This has caused confusion and problems on more than one project, so I thought I would write my thoughts on their similarities, differences and protection mechanisms.

A Denial of Service is anything that happens (usually on purpose, but not necessarily) that takes a service offline or makes it unavailable to legitimate users. This could range from a hacker exploiting a vulnerability and taking the service down, to someone digging up a cable in the road. However, a Denial of Service can also be triggered by legitimate use of a service without any 'vulnerabilities'. Consider a service that performs operations on large sets of data, each taking a few seconds to complete. If I submit multiple requests to this service, I can tie it up and make it unresponsive for several minutes. Similarly, consider a website with a page containing a large video or Flash animation: again, relatively few requests for this resource could make the server slow and unresponsive. DoS is not just about hackers finding vulnerabilities.
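
One simple mitigation for the expensive-operation case is to cap how many heavy requests run at once and refuse the excess rather than queue it, so the rest of the service stays responsive. A sketch, with an arbitrary limit of four concurrent slots:

```python
import threading

expensive_slots = threading.BoundedSemaphore(4)  # cap on concurrent heavy requests

def handle_expensive_request(work):
    """Run a heavy operation only if a slot is free; otherwise refuse quickly."""
    if not expensive_slots.acquire(blocking=False):
        # Refusing immediately keeps one expensive endpoint from starving
        # every other request the server is handling.
        return "503 Service Unavailable"
    try:
        return work()
    finally:
        expensive_slots.release()
```

A fast 503 with a Retry-After hint is a far better failure mode than letting a handful of large requests make the whole site unresponsive.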

Distributed Denial of Service, on the other hand, is a deliberate attempt to deny service by making large numbers of requests from a large number of hosts at once. Whilst it is relatively easy to spot a single host making a large number of requests and block it, it is hard to detect many hosts each making a few requests, and harder still to block them. There are many solutions that combat DDoS by caching content and providing high bandwidth across large numbers of nodes, such as those available from the likes of Akamai. However, logic flaws or lengthy processing in the application can only really be fixed by the application developers.
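
The easy case - one host making many requests - can be caught with a sliding-window counter per source address. A rough sketch (the thresholds are illustrative; real deployments usually do this at the load balancer or firewall, not in application code):

```python
import time
from collections import defaultdict, deque

class RequestRateLimiter:
    """Refuse a host that exceeds max_requests within a sliding time window."""

    def __init__(self, max_requests=100, window_seconds=60.0):
        self.max_requests = max_requests
        self.window = window_seconds
        self.history = defaultdict(deque)  # host -> timestamps of recent requests

    def allow(self, host, now=None):
        now = time.monotonic() if now is None else now
        timestamps = self.history[host]
        while timestamps and now - timestamps[0] > self.window:
            timestamps.popleft()  # forget requests that fell out of the window
        if len(timestamps) >= self.max_requests:
            return False  # host exhausted its budget: block this request
        timestamps.append(now)
        return True
```

Note how this illustrates the limitation mentioned above: a botnet of ten thousand hosts making one request each sails straight through a per-host limiter.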

Brute-Force, on the other hand, has nothing to do with DoS or with making a service slow or unavailable. I was amazed that people didn't know this! Brute-Force is about submitting a (usually large) number of requests to a service in order to obtain information the developer never intended to expose. An example would be a login with no account lockout after several incorrect attempts: an attacker could try a whole dictionary, or even every character combination, until they eventually find a user's password. That is one example of Brute-Force, but there are many others, such as enumerating database versions, telephone numbers, transactions or parcel delivery addresses. This can only really be stopped with application logic.
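
The account-lockout logic just mentioned is a few lines of application code. A sketch, with illustrative thresholds (five failures, fifteen-minute lockout):

```python
import time

class LoginThrottle:
    """Lock an account after too many consecutive failed login attempts."""

    def __init__(self, max_failures=5, lockout_seconds=900):
        self.max_failures = max_failures
        self.lockout_seconds = lockout_seconds
        self.failures = {}      # username -> consecutive failure count
        self.locked_until = {}  # username -> time the lock expires

    def is_locked(self, user, now=None):
        now = time.monotonic() if now is None else now
        return self.locked_until.get(user, 0.0) > now

    def record_failure(self, user, now=None):
        now = time.monotonic() if now is None else now
        count = self.failures.get(user, 0) + 1
        self.failures[user] = count
        if count >= self.max_failures:
            self.locked_until[user] = now + self.lockout_seconds

    def record_success(self, user):
        self.failures.pop(user, None)  # a good login resets the counter
```

The same counting approach generalises to the other enumeration cases: throttle by whatever the attacker is iterating over, be it usernames, order numbers or telephone numbers.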

Monday, 29 April 2013

The Disconnect between Security and Senior Management

There is often a fundamental disconnect between security professionals and senior management. As I have stated in a previous post about slips, mistakes and violations, if senior management don't 'buy in' to security then nor will the rest of the organisation and ultimately it will fail. Middle management want to be senior management and will model themselves on them, often seeing the breaking of rules as a mark of status. So, it is vital that senior management lead by example.

Unfortunately, it is often very hard to get senior management to 'buy in' to this concept and drop the 'them-and-us' attitude that some rules apply to the rest of the organisation and different ones to them. This is as much the fault of security professionals as of senior management, though. Security professionals have spent so long saying "no" to everyone, steadfastly refusing to budge or to see someone else's point of view, that people have stopped listening and taking note. To be honest, rightly so.

If you want someone to change their point of view or come round to your way of thinking, by far the easiest way is to sell it to them as a positive thing that will be beneficial to them and 'bring them with you' rather than dictate. Saying "no" all the time is not positive and will ultimately fail as people will stop listening. Make it personal to them and put it in terms they understand. Relating security to risk and money will usually be more successful.

Welcome to the RLR UK Blog

This blog is about network and information security issues primarily, but it does stray into other IT related fields, such as web development and anything else that we find interesting.
