Wednesday, 19 December 2012

Pentests Don't Make You Secure

Today I was asked, by someone putting together a Test Approach Document, to provide details of the 'Penetration Testing Phase' for a particular project. The categories I was asked to fill in were:
  • Objective of the phase
  • Responsibility & Authority
  • Dependencies, risks & assumptions
  • Entry & Exit criteria
When we discussed what they really wanted, it became clear that they didn't know what a penetration test was or why we do them. The questions and the document were set up expecting a deliverable from the pentest itself. The report was being treated as the deliverable, without any thought about why a report was being produced or how it would be used. It was a tick in the box - "We require a pentest to be able to go live, so if we've had the report we can tick that box and move on."

Pentesting is not an end in itself. A pentest is a finite, point-in-time snapshot of the security of a system, which, if taken in isolation as a goal, is fairly useless. Pentests don't make you secure. Performing a pentest and having a report with lots of pretty colours and charts saying that high and critical vulnerabilities exist is only any good if you then remediate or mitigate those vulnerabilities. You could pentest your system every month, but if you never change anything in the system, every report will be the same and you will be at just as much risk as before you had the pentest done. Indeed, you are likely to get progressively worse results, as new vulnerabilities are discovered all the time.

The test and report themselves don't do anything for security. A pentest is used by security professionals to inform and shape a project and decisions. The actions taken based on the findings from a pentest are what improve your security and help you identify the best use of finite resources or, at the very least, enable you to understand the risk. Do you need to perform a pentest? Absolutely you do in order to understand the threat landscape properly and identify vulnerabilities, but it's what you then do with that knowledge that is important and will make you more secure (or not).

Friday, 16 November 2012

Web Hosting Security Policy & Guidelines

I have seen so many websites hosted and developed insecurely that I have often thought I should write a guide of sorts for those wanting to commission a new website. Now I have actually been asked to develop a web hosting security policy and a set of guidelines to give to project managers for dissemination to developers and hosting providers. So, I thought I would share some of my advice here.

Before I do, though, I should answer why we need this policy in the first place. There are many types of attack on websites, but these can be broadly categorised as follows: Denial of Service (DoS), Defacement and Data Breaches/Information Stealing. Data breaches and defacements hurt businesses' reputations and customer confidence, as well as having direct financial impacts.

But surely any hosting provider or solution developer will have these standards in place, yes? Well, in my experience the answer is no. It is true that they are mostly common sense and most providers will conform to many of my recommendations, but I have yet to find one that, by default, conforms to them all.

Site Categorisation

There are several different categories of hosting and several different ways to categorise sites, with different requirements. However, in my opinion, sites should be categorised based on the information that they contain and the level of interaction allowed. Sites should then be logically and physically separated into their categories.

Sites can be categorised as brochure sites if they have static content or do not collect information. These sites can then further be categorised into public or private depending on whether the data that they contain is public or not. Sites within these categories may be co-hosted with other sites in the same category, but the two categories should be segregated.

Sites can be classed as data collection apps if they collect sensitive or personally identifiable information (PII) from the user. Sites within this category should be hosted on their own servers with no co-hosting and be segregated from all other sites. The data must be stored on separate segregated database servers that are secured and firewalled off.

Finally, any site with even more sensitive data on it or company secrets should be hosted internally if you have the expertise in house.

Hosted Environment

The following list is an example of the requirements for secure web hosting. It is not necessarily complete, but if you do not have the following then you may have issues in the future. All websites and web applications must:
  • be hosted on a dedicated environment - the hosting machine may be virtual or physical, but must not be shared with any 3rd parties. Multiple websites and applications from the same company may be hosted on the same machines according to the categories above
  • have DDoS protection in place
  • have AV running and configured properly on the server along with appropriate responses and reporting
  • be hosted behind a Web Application Firewall (WAF) to protect against common attacks, with the ability to configure it for specific services
  • be hosted on security hardened Operating Systems (OS) and services to an agreed build standard
  • be subject to regular and timely patching of the OS and services
  • be subject to regular security testing and timely patching of any Content Management System (CMS), if one is used
  • be subject to active monitoring and logging by the provider for security breaches and reporting to/from the organisation
  • have formal incident management processes for both identifying and responding to incidents
  • not be co-hosted with additional public services beyond HTTP/HTTPS (e.g. no public FTP)
  • not allow DNS Zone Transfers (see the sketch after this list)
  • use proper public verified SSL certificates - with a preference for Extended Validation (EV) certificates
  • ensure that management services and ports are, preferably, on separate IP addresses and domain names; they must not be reachable through the normal login or be visible on the website
  • ensure that administrative interfaces and services are restricted at least to specific IP addresses, and make use of client-side certificates or two-factor authentication (2FA) where possible
  • ensure staging servers are available for test and development, which must not be shared with live sites and should be securely wiped at the end of testing as soon as the site is deployed live
  • ensure staging and test environments are not available on the public Internet or, if there is no alternative, they must be devoid of branding and sensitive information in all ways and restricted as above
  • be built on a tiered architecture, or at least the database (DB) server must not sit on the same server as the web front end, must not be accessible from the Internet and must be securely segregated from the front end
  • use encrypted storage for all sensitive information (e.g. passwords and personal data)
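
As a quick illustration of the zone transfer requirement above, here is a minimal sketch of a check you can run against your own domains. It assumes Python with the third-party dnspython library, and example.com stands in for a real domain:

# Minimal sketch: check whether a domain's name servers allow a DNS zone
# transfer (AXFR). Assumes the dnspython library; example.com is hypothetical.
import dns.query
import dns.resolver
import dns.zone

domain = "example.com"

for ns in dns.resolver.resolve(domain, "NS"):
    server = str(ns.target).rstrip(".")
    addr = str(dns.resolver.resolve(server, "A")[0])  # xfr() wants an IP address
    try:
        zone = dns.zone.from_xfr(dns.query.xfr(addr, domain, timeout=5))
        print(f"{server}: VULNERABLE - transferred {len(zone.nodes)} names")
    except Exception:
        print(f"{server}: zone transfer refused (as it should be)")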

Hosting Services

It is up to the hosting provider and third-party developers, backed up by specific contractual clauses, to ensure that:
  • the site is backed up regularly, off site, to a secure location on encrypted media, with the keys stored separately from the media; backups must be restorable in a reasonable time frame and covered by a suitable rotation and retention policy
  • hardware and media that has reached the end of its life is securely destroyed
  • all sites are made available for pentesting prior to going live and at regular intervals
  • all vulnerabilities rated medium risk or above are remediated prior to go-live
  • all sites are available for ongoing, regular automated Vulnerability Assessments
  • domain names, code and SSL certificates are registered to the company and not to a third party (see the sketch after this list)
  • there are agreed processes for identifying approved personnel to authorise changes
  • change management processes that track all changes are in place along with rollback and test plans
  • capacity and bandwidth are actively managed and monitored
  • all management actions are accountable (unique accounts allocated to individuals)
  • all management access is through secure ingress from trusted locations
  • egress filtering is in place to block all non-legitimate traffic
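
The SSL certificate requirement above is easy to check automatically at regular intervals. Here is a minimal sketch using only the Python standard library (www.example.com is a hypothetical host):

# Minimal sketch: retrieve a site's certificate and report who it is issued to
# and when it expires. Standard library only; the host name is hypothetical.
import socket
import ssl
from datetime import datetime, timezone

host = "www.example.com"

context = ssl.create_default_context()  # verifies the chain and the host name
with socket.create_connection((host, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()

subject = dict(item[0] for item in cert["subject"])
issuer = dict(item[0] for item in cert["issuer"])
expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]),
                                 tz=timezone.utc)

print("Registered to: ", subject.get("organizationName", subject.get("commonName")))
print("Issued by:     ", issuer.get("organizationName"))
print("Days to expiry:", (expires - datetime.now(timezone.utc)).days)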

Sunday, 21 October 2012

Here come the Security Police

Security teams often attract antagonism from the business that they are supposed to serve, appearing as self-appointed policemen in a police state. This is unhelpful and not what we are or should be aiming for. Security departments should be providing a secure environment in which business users are free to do what they want. Obviously this environment will have boundaries, but they must be agreed with the business and not just imposed arbitrarily.

Take an example from children's play areas: children should be safe within the confines of the soft play area, and not too much harm will come to them. They can run around and play whatever game they like as long as they stay within the boundaries. Children can't wear shoes in a soft play area as they may hurt another child, but this doesn't stop them from doing what they want: the play area has been engineered so that they don't need shoes to protect their feet or to keep them dry and clean.

The same principles can be applied to security. If we build a safe and secure environment that already has everything that people need within it, then they are free to do what they want and need, and are far less likely to break the rules or circumvent security controls. The architecture has to be secure, and services should be tailored to the business functions, not just imposed by the security teams. A good example is to provide a Choose Your Own Device (CYO) offering to avoid the problems of Bring Your Own (BYO) or the restrictions of imposing a single device. It is possible to support a range of devices, and even to offer a restricted service on some further devices, while still allowing users a choice.

In the end there will always be a certain amount of policing required, but if, as a security professional, you are spending most of your time in that role then your network, architecture and attitude are wrong.

Friday, 13 July 2012

Bank Card Phone Scam - new version of an old technique

There is a new take on an old phone scam currently hitting people. The old scam was to pretend to be the telephone company and phone someone saying that they are about to be cut off if they don't immediately pay a smallish amount by card over the phone. If people don't believe them, they are actually encouraged to hang up and then try to make a call. When they hang up and then pick the phone up again, the line is dead. How do they do this? Well, it's actually very simple - the scammer doesn't hang up, they just put their phone on mute. The call was never torn down.

So, what's the 'new take' on this scam? Well, they are now hitting bank and credit card customers. The scammers now pretend to be from the bank and start asking for card details, etc. If you get suspicious (or even sometimes prompted by the scammer themselves) you are encouraged to hang up and call them back on the telephone number shown on the back of your card. They then provide you with an extension number or a name to ask for.

When you hang up, they do not, just as before. However, this time they play the sound of the dialling tone to you until you start 'dialling' the number. All they have to do is wait for you to finish dialling and then play the ringing tone to you. All the while they haven't hung up, and you haven't dialled your bank at all. The scammers then 'answer' the phone and pass you to the person you were speaking to before. You now think you're speaking to your bank.

You did the right thing, but were still trapped. What can you do about this? My suggestion is to call back on a different line. Call your bank back on your mobile, not the landline you first received the call on.

Wednesday, 13 June 2012

HTTP Header Injection

Sometimes user input may be reflected in the HTTP response headers from the server. If this is the case, we may want to inject additional headers to perform other tasks, e.g. setting extra cookies, redirecting the user's browser to another site, etc. One example I tested was a file download from a website with a user-defined filename.

The web application took a user-supplied description for a dataset, which was used in several places. It was passed through several layers of validation for output to the screen and to a CSV file for download. However, it was also used as the filename for the CSV download, and here it was not subject to enough validation. The filename was written to the HTTP headers as an attachment, e.g.:
Content-Disposition: attachment; filename="output.csv"
However, if we want to add a redirect header to the response from the server then we have to manipulate the filename/description. If we add a CRLF (carriage return line feed – i.e. a new line) then we can add a new header, such as:
Refresh: 0; url=http://www.google.com/#q="password.csv"
This will redirect the user's browser to the URL after 0 seconds, i.e. giving them no chance to abort it. We need to send the CRLF ASCII character codes to the server to force it to put a new line in, which can be achieved by adding %0d%0a (CRLF) into the description. In this case, the .csv" was appended to the end automatically; the malicious website could ignore it, or it could be used as in the example above. So the full description becomes:
output.csv" %0d%0aRefresh: 0; url=http://www.google.com/#q="password
The output of this in the HTTP Header is:
Content-Disposition: attachment; filename="output.csv"
Refresh: 0; url=http://www.google.com/#q="password.csv"
In this case, though, I ran into a problem. If I used the above injection I got the following error:
Error 500: Invalid LF not followed by whitespace
It turns out that the character set is not properly handled by the web server. You cannot just add a space after the codes either, as this will appear as a space at the beginning of the header line we are injecting, which the browser interprets as a continuation of the previous header line. The solution came from https://www.aspectsecurity.com/blog/to-redacted-thanks-for-everything-utf-8/ where overly long UTF-8 data is inserted in the knowledge that the server will truncate it to the desired control code. Each of the following encodings is truncated to a line feed (0x0A), which is enough to start the new header line:
%c4%8a
%c8%8a
%cc%8a
Now the working attack payload becomes:
output.csv" %cc%8aRefresh: 0; url=http://www.google.com/#q="password
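To see why these over-long encodings work, here is a minimal illustration in Python of the lossy conversion described in the linked post (the assumption being that the server maps each decoded code point down to a single byte):
# 0xC4 0x8A is the UTF-8 encoding of U+010A. A server that lossily maps each
# code point down to one byte keeps only the low byte - 0x0A, a line feed.
ch = bytes([0xC4, 0x8A]).decode("utf-8")
print(hex(ord(ch)))         # 0x10a  (U+010A)
print(hex(ord(ch) & 0xFF))  # 0xa    (a bare LF survives the conversion)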
The simplest way to fix this is to use a hardcoded output filename, e.g. output.csv. The user can change this when they download it if they want. Otherwise, more sophisticated validation is required to look for certain character codes and sequences.
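For completeness, here is a minimal sketch of the hardcoded-filename fix. The application's stack isn't named above, so Python/Flask, the route and the build_csv helper are all assumptions:
# With a hardcoded filename no user input - and therefore no CRLF sequence -
# ever reaches the response headers. Flask is assumed; build_csv is a stand-in.
from flask import Flask, Response

app = Flask(__name__)

def build_csv():
    return "id,description\n1,example\n"  # stand-in for the real export

@app.route("/export")
def export_csv():
    resp = Response(build_csv(), mimetype="text/csv")
    resp.headers["Content-Disposition"] = 'attachment; filename="output.csv"'
    return resp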

Monday, 30 April 2012

Security standards are like getting a driving license

When will people learn that compliance does NOT equal security? I blogged about this back in September 2009. Recently, Global Payments suffered a breach despite being PCI-DSS compliant (article from The Register).

Security standards, and being assessed against them, are like getting a driving license. Passing your driving test means that you have achieved a minimum standard of driving, but it doesn't mean that you are a good driver or that you will never have an accident. The same is true of compliance to a particular standard - it doesn't mean that you can be any less vigilant about security or that you will never be compromised, it just means that you have met an agreed minimum level.

People forget that the PCI-DSS is only concerned with payment card data and won't necessarily look at all systems and processes. It is perfectly possible that a system is legitimately considered out of scope, but that the compromise of that system provides a platform from which to attack a system that is within scope. The penetration tests performed are usually focused more on external access to PCI data as well. What if I can compromise the administrator's laptop, though? Attacks from more adept hackers won't always go straight for the target; there are often easier ways.

PCI-DSS, or any other standard, should not even be considered the minimum requirement. It should be a given that the organisation will pass its compliance assessments, because it should be aiming so far beyond the standards. I realise that resources are not unlimited, but that doesn't mean you should be satisfied with scraping through audits. If fewer resources were wasted trying to fudge results to pass compliance, more could be spent on actually securing the environment, and compliance would be practically automatic.

The goal is a secure, trusted environment, not getting a bit of paper from the auditors.

Friday, 6 April 2012

‘isSuperUser = true’ and other client-side mistakes

Recently I have tested a couple of commercial web-based applications that send configuration details to the client-side to determine the functionality available to the user. These details were sent as XML to a Java applet or JavaScript via Ajax. So, what’s the problem?

The applications in question had several user roles associated with them, from low-privilege users up to administrative users. All these users log into the same interface, and features are enabled or disabled according to their role. In addition, access to the underlying data is also granted based on their role. However, in both cases, features were turned on and off in client-side code – either XML or JavaScript. One application actually sent isSuperUser = true for the administrative account and isSuperUser = false for others. A simple change in my client-side proxy succeeded in giving me access to administrative features.

The other application had several parameters that could be manipulated, such as AllowEdit. This gave access to some features, but I noticed that there were other functions available in the code that weren’t called by the page. It was a simple matter of diffing the page delivered to the administrator against that delivered to a low-privilege user to find the missing code to call those functions. This was duly injected into the page via a local proxy, adding new buttons and menus that exposed administrative functionality, again enabled by manipulating the parameters sent, as above. Some might argue that this attack isn’t realistic as I needed an administrative account in the first place, but the injected code would work on every install of the application. You only need access to one installation, which could be on your own machine; then you can copy and paste into any other instance (or you could simply Google for the code).

It shouldn’t be this easy! Anything set on the client can be manipulated by the user easily. The security of a web application must reside on the server, not on the client. Web application developers must start treating the browser as a compromised client and code the security into the server accordingly.
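
As a minimal sketch of what that server-side enforcement can look like (Flask and all names here are assumptions, not the vendors’ actual code): the role is held in server-side session state set at login, and every privileged route re-checks it, so a client-side isSuperUser flag carries no authority.

# Minimal sketch: the browser is treated as compromised; authorisation lives
# entirely on the server. Flask is assumed; routes and roles are hypothetical.
from functools import wraps
from flask import Flask, abort, session

app = Flask(__name__)
app.secret_key = "replace-with-a-real-secret"  # signs the session cookie

def require_role(role):
    def decorator(view):
        @wraps(view)
        def wrapped(*args, **kwargs):
            if session.get("role") != role:  # role was set server-side at login
                abort(403)
            return view(*args, **kwargs)
        return wrapped
    return decorator

@app.route("/login/<name>")
def login(name):
    # Stand-in for real authentication: the role is looked up server-side,
    # never taken from anything the client sends.
    session["role"] = "admin" if name == "alice" else "user"
    return "logged in"

@app.route("/admin/users")
@require_role("admin")
def admin_users():
    return "admin-only user management"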
