Friday, 26 June 2009

The PCI DSS and Why It's Relevant to Everyone

Many of you will know that PCI DSS stands for the Payment Card Industry Data Security Standard, and most of the rest of you have probably heard of it and wondered what it was. You may immediately say that you're not interested in the Payment Card Industry and want to navigate away, but before you do, you should know that many of the 12 recommendations are relevant to everyone. Actually, the PCI DSS recommendations are mostly common sense that we should all be implementing anyway. In this post I'll give a quick overview of the big 12 and how they can be applied to any network.

According to the PCI Security Standards Council, "The core of the PCI DSS is a group of principles and accompanying requirements, around which the specific elements of the DSS are organized." The 12 recommendations that they put forward can be generalised as follows, and should be adhered to by all organisations:

  1. Install and maintain a firewall configuration to protect private/sensitive data
  2. Do not use vendor-supplied defaults for system passwords and other security parameters
  3. Protect stored data
  4. Encrypt transmission of private/sensitive data across open, public networks
  5. Use and regularly update anti-virus software
  6. Develop and maintain secure systems and applications
  7. Restrict access to private/sensitive data by business need-to-know
  8. Assign a unique ID to each person with computer access
  9. Restrict physical access to private/sensitive data
  10. Track and monitor all access to network resources and private/sensitive data
  11. Regularly test security systems and processes
  12. Maintain a policy that addresses information security

Dealing with each very briefly, I'll try to show how these can be applied to a 'normal' network rather than one handling payment details. Firstly, we all know that we must have a firewall to block access from the outside world. However, just having a firewall isn't enough if it isn't configured and managed properly. Every port and service open through it must be justified and clearly documented. There must be policies and procedures in place to control the opening of ports and services through the firewall, so that each one can be assessed for its impact on overall security. Wireless networks should also be separated from wired networks by the firewall – this doesn't mean that the wireless network is cut off from the Internet or has no access to the wired network, just that restrictions and filtering should be in place between them. Obviously, there should be no direct access from the outside world into the internal network; a DMZ should be set up for all publicly accessible machines. Finally, personal firewalls should be installed on all mobile machines, and hardware firewalls installed at users' home premises if external access is to be allowed.

The second should be obvious and doesn't really need much explanation, other than to say: disable as many features as you can, and don't allow any configuration or management from outside your network. There are ways to enable secure remote access to the network, so that servers and devices can still be managed as though from within it.

The third and fourth requirements go together and mean that strong authentication is required and permissions must be set appropriately. However, we should also implement encryption of highly sensitive data and of removable devices and mobile machines, with proper key-management processes, and Digital Rights Management (DRM) to prevent data leakage. All data transmitted across public networks should be encrypted, whether sensitive or not.
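As a minimal sketch of encrypting sensitive data at rest (assuming Python's cryptography package; the key handling here is deliberately simplified and a real deployment needs a proper key-management process):

    from cryptography.fernet import Fernet

    # Generate the key once and keep it in a key-management system,
    # never alongside the data it protects (simplified here for illustration).
    key = Fernet.generate_key()
    f = Fernet(key)

    token = f.encrypt(b"sensitive record")  # ciphertext, safe to store
    plaintext = f.decrypt(token)            # needs the key; raises InvalidToken if tampered with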

Is there anyone who doesn't follow point 5? You'd also think that the sixth recommendation would be followed by all organisations, but this isn't the case. You must have proper patch management procedures, as these are critical to closing potential vulnerabilities, yet how often do you see a server or service lagging behind the latest patches? I have seen many commercial outfits developing web applications that are not secure, because security is an afterthought rather than embedded in the development cycle. This means that, even if you don't write your own applications, you must check the third-party applications that you intend to use on your network. Make sure that they have been developed using secure coding guidelines and have been independently tested. Don't take this for granted from the big names either: Apple, for example, has no formal security program (ref.).

The seventh recommendation sounds obvious again, but it isn't followed even by many of those covered by the PCI DSS. Restrict access to your data by business need-to-know. Does your network administrator need to know everyone's personnel details and payroll? No, so make sure they don't have access. This is an often-overlooked problem: many organisations rely on people not looking rather than actually protecting the data. How many examples can you quote of companies and government organisations losing confidential data that the person who lost it should never have been able to access in the first place? One that springs to mind is a large retailer that lost the names, addresses and credit card numbers of millions of customers, copied onto a laptop. Why would anyone need all that data? Presumably for business intelligence and data analysis, but that can, and should, be delivered live through secure channels, not done from a local copy - and why would the analysis need the credit card numbers at all? The point is that it shouldn't be possible to extract this data and store it on a laptop in the first place. Recommendations 8 and 9 are related to this, and to the earlier point about strong authentication: you must not allow employees to share user accounts (especially executives giving passwords to their PAs!).

Recommendations 10 and 11 deal with monitoring and testing the network. To make sure that your system is functioning as expected and that your security mechanisms are appropriate and free of vulnerabilities, you need to monitor the network and regularly test it for vulnerabilities. Every time a significant change is made to the network, its impact on security should be assessed and tested.

The final recommendation is to maintain an information security policy. This must cover all forms of information though, not just electronic. It should be very clear and all employees should be trained in its content and abide by the policy or face punitive measures. Employees should have to sign a copy of the security policy, stating that they have read it, understand its content and agree to abide by it. However, this has no teeth if you do not adequately train all of your employees and update them regularly on the policy and any changes.

The PCI DSS recommendations can and should be adopted by all organisations, not just those involved in payment card transactions.

Tuesday, 16 June 2009

Trojan Keylogger Screensaver Compromising Novell Client for Windows

I've been talking about 2-factor authentication and improving authentication mechanisms for a while now, and trying to get companies to implement such solutions. One such organisation uses the Novell Client for Windows on Windows XP. When a user attempts to log in they are not required to press Ctrl+Alt+Del. Many forums and reviews state that this is an advantage, as users don't like it! The point of pressing Ctrl+Alt+Del (CAD) on Windows is that it is a secure attention sequence that only the operating system can intercept, so it kills fake login screens, Trojans, etc. Novell have replaced MSGina.dll with their own NWGina.dll so that they can capture the CAD key combination; this is the standard way to override the built-in login screen and replace it with a custom one. However, Novell have decided to allow administrators to eliminate the need for the CAD key combination, which obviously reduces the overall security of the system.

I know that there are many ways to write a keylogger, some more sophisticated than others, but a lot of these low-level system keyloggers can be picked up by AV (Anti-Virus) software. I wanted to show that you don't need to be able to create this type of keylogger to obtain the username/password combination from a system that doesn't require the CAD key combination. Another reason for not going down this route is that I wanted to show that this could be put on a live system, not just in the lab. Therefore, it mustn't trigger the AV when installed or when running. The solution I decided to try was to write a screensaver to mimic the Novell login screen, capture the username/password, write these to a web service and then hand back to the real Novell client.

I have just listed a few things there without explanation. Why don't I just take the details, log the user in myself and then pass the username/password to my web service? Well, that would require either modifications to NWGina or system calls that would trigger the AV and other protection mechanisms, and I wanted this to remain undetected. Basically, a screensaver is just an application that is run automatically, rather than explicitly by the user. Indeed, when you write a screensaver you produce an .exe file and simply rename it to .scr. Obviously there are a few things you have to do to make it function as a screensaver, such as running full-screen and staying always on top. You also have to handle mouse and keyboard events to end the screensaver and return control of the desktop to the user. However, instead of immediately exiting the application and returning control, why not pop up another form that mimics the login dialog and fools users into giving us their password? It's easy really.

So what do we do once the user has entered their details? Well, the screensaver writes the data to a simple web handler that I wrote for this purpose. All it does is take an HTTP GET request with URL parameters and return a simple text confirmation to the screensaver. I don't care that it's in plaintext and somebody might read it. Obviously, I would have to anonymise the server so that it couldn't be traced, but I'm not actually trying to deploy this. Once the screensaver has confirmation that the username/password has been logged, it displays the 'incorrect password' dialog and immediately exits to reveal the real login dialog. In this way, the user will most likely think that they have mistyped their password. As a little trick, I decided to make the ruse more plausible by turning Caps Lock on as soon as the user clicks the login button.

The final thing needed to evade detection was to limit how often the Trojan would appear. Nobody mistypes their password every single time they log in, so showing the fake dialog on every attempt would soon look suspicious. I decided that a 1-in-5 rule would probably be acceptable: users will put a failure one time in five down to their own typing, particularly with the Caps Lock trick as well. You could trigger far less often than this, but then the Trojan would have to stay on the machine longer to capture the passwords. Again, this is simple to do: generate a pseudo-random number to determine whether to show the Trojan dialog or exit the program.

Having written the screensaver, I had to test it. Well, I have access to a corporate laptop and desktop with the Novell client and McAfee AV software installed. It works a treat: very realistic and not picked up by the AV. I have asked people to look at it and try it (without using their real username/password combination) and they have said that they wouldn't have realised. I'm not sure it's particularly ethical to try this on a real network, so I'll probably leave it there. There is still the issue of mass distribution of such a Trojan, but I don't believe that this would be terribly difficult, especially with a bit of social engineering as well.

So, how do you defeat this method? Don't disable CAD, or if it is disabled then press it anyway!

Wednesday, 10 June 2009

How to tell if your Firewall is a full DMZ

Most firewalls have a 'DMZ' setting, but are they actually a full DMZ firewall? DMZ is a term often used in network security, but it can mean two different things to manufacturers and practitioners. Technically, there is no such thing as a DMZ in a firewall architecture, only a screened-subnet firewall, screened-host firewall or an exposed host, but the term is the industry standard when talking about allowing access to information servers (e.g. web, mail, etc.) from the Internet.

So what is a DMZ? DMZ stands for Demilitarized Zone and is, obviously, a military term for the no-go area between two armies where no military activity is allowed. In network security terms, however, it is a secure subnet that separates the Internet from the internal machines on your network. This makes it the logical place to put any information servers, as these can be partially opened up to the Internet without allowing direct access to the internal network. So are all DMZ firewalls the same? Well, no they aren't. They range from small SoHo (Small Office/Home Office) routers up to full enterprise-level firewalls. The majority of SoHo routers are not DMZ firewalls at all; their 'DMZ' features are actually exposed hosts, which have less protection than a normal machine and little or no separation from the internal network. This is bad, as any successful attack on the exposed host then has free rein over the internal network.

Moving towards proper enterprise firewalls, we still have two different types of firewall often referred to as a DMZ firewall; they are either logical or physical DMZ firewalls. Obviously, the physical DMZ firewall offers the best level of security, but what's the difference? The difference is that they are a screened-host and a screened-subnet firewall respectively. Although your firewall arrives as a single rack-mounted unit, internally it is made up of a set of components - most notably packet filtering routers and a bastion host. The packet filtering routers simply permit or deny access based on IP address, TCP port number or protocol type. More sophisticated functionality is implemented by using proxies on the secure bastion host. The diagram below shows the logical setup of a full DMZ firewall, with a separate screened-subnet (purple) between the insecure Internet (red) and the secure Intranet (green).

The point here is that no communication is allowed directly from the Internet to the Intranet. All traffic can be forced through the Bastion Host to perform URL, content, virus and SPAM filtering, among other things. In the screened-host solution, there is no internal packet filtering router or separate screened-subnet. The information servers are logically separated from the internal machines, but not physically. They may be on different VLANs (Virtual Local Area Networks) and physical firewall ports, but they are only separated by the logic of these mechanisms. One way to tell if your firewall is a screened-host is to see whether you can set an Access Control List (ACL) between your information servers and your internal machines, and whether virus checking will work between them as well. If you can't set up an internal ACL or perform real-time virus checking on internal traffic, then you probably have a screened-host firewall.
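To make the packet filtering router's job concrete, here is a minimal sketch of first-match permit/deny logic (the addresses and rules are invented for illustration, not taken from any real product):

    # Minimal first-match packet filter: permit or deny on address, protocol and port.
    RULES = [
        {"action": "permit", "dst": "192.168.10.5", "proto": "tcp", "port": 80},   # DMZ web server
        {"action": "permit", "dst": "192.168.10.5", "proto": "tcp", "port": 443},
    ]

    def filter_packet(dst, proto, port):
        """Return the action of the first matching rule; fail closed otherwise."""
        for rule in RULES:
            if rule["dst"] in (dst, "any") and rule["proto"] in (proto, "any") \
               and rule["port"] in (port, "any"):
                return rule["action"]
        return "deny"  # default deny: anything not explicitly permitted is dropped

    print(filter_packet("192.168.10.5", "tcp", 443))  # permit
    print(filter_packet("10.0.0.7", "tcp", 22))       # deny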

The most secure way to implement a firewall is to have the full screened-subnet firewall, depicted above. This solution can have multiple internal connections that are all separate networks connected via the firewall. The advantage of this is that there is physical separation between your information servers and your internal machines. Indeed, it is possible to separate your organisation internally into departments or by access medium, i.e. wired and wireless networks are physically separated and wireless networks are treated as less secure. Another 'best practice' is to have your servers separated from the rest of your network and restrict access to them. If you have an intranet server, for example, why allow more than HTTP (port 80) and HTTPS (port 443) access to it from your network? If we lock the network down in this way we can better halt the spread of malware on our internal network even if we do get infected. This is moving towards bringing the firewall in from the edge of the network to the core, to protect our network as a whole. Remember, firewalls can't protect against traffic that doesn't go through them.

Tuesday, 9 June 2009

The 5 Restoration Phases of a Secure and Dependable System

We all want our systems to be secure and dependable; indeed, the two topics are interlinked. Dependability requires high-availability management, which has several aspects to it. We can try to achieve Fault Avoidance, with fault prevention and fault removal, but this isn't actually possible in all cases. For example, hard disk drives will physically wear out due to their moving parts, power supplies do not run indefinitely, and so on. Therefore, we move towards Fault Acceptance. Fault acceptance relies on fault forecasting, to try to determine the most likely causes of faults, and fault tolerance, to enable the system to continue functioning in the event of a fault. With fault tolerance we build redundancy into the system so that faults do not result in system failures. However, there are times when even our most fault-tolerant systems will fail. What do we do then? Well, obviously we need to recover as quickly as possible.

The 5 restoration phases of a system are as follows:
  1. Diagnostic Phase - find the fault, diagnose the problem and determine the appropriate course of action to recover
  2. Procurement Phase - identify, locate, transport and physically assemble replacement hardware, software and backup media
  3. Base Provisioning Phase - configure the system hardware and install the base Operating System (OS)
  4. Restoration Phase - restore the entire system from the backup media, including system files and user data
  5. Verification Phase - verify the correct functionality of the entire system as well as the integrity of the user data

It can be quite hard to diagnose the actual root cause of a fault, as some faults show up in confusing ways. A few months ago I was investigating a problem with a machine that I was told was an OS problem - "Windows keeps blue screening with errors; bl***y Microsoft!" However, on further investigation, it had absolutely nothing to do with the OS and therefore Microsoft. Actually, there was a memory fault: one bank of RAM was faulty and was causing so many errors that the OS couldn't recover. Simply changing the pair of RAM modules sorted the problem out and the machine has been running reliably since. The problem with this type of fault is that it manifests in such a way as to look like a different fault.

The procurement phase can be tricky, as it can take a long time to get components delivered. This is where fault forecasting comes in. If we know the most likely faults, then we can keep stock of those hardware components and make sure that any system recovery media is to hand. Obviously, we need backup media to be stored off site as well, but we will need copies on site for quick restoration when the building hasn't suffered damage. Of course, many hardware vendors will offer Service Level Agreements (SLAs) stating how quickly they can replace or repair your hardware, but some things you may want to deal with in-house, as that will still be quicker than the standard 4-hour fix (or longer).

The final three phases are all centred around restoration of data from backup media. This brings up the point that you must back up your system, not just your data. How long will it take you to install the OS from scratch, install all the additional services and make all the configuration changes? Too long. If you have backed up your system state, then this can be restored onto a base OS much more quickly and without mistakes or omissions. Another aspect to think about is how you have backed your system up. You need to choose a backup scenario that suits the amount of data and the speed at which you need to recover. Remember that a server with 200GB of data backed up onto a DLT tape drive supporting an average transfer rate of 5MBps will take over 11 hours to restore, assuming that you restore from the full backup only; any incremental backups taken since the last full backup will also have to be restored. OK, we can go faster than this, even with tape backup, but the solution needs to fit your system, and a live backup solution may be required. This is where virtualisation of servers can help dramatically. Virtualisation is most often sold as 'green IT', being more efficient and cheaper. However, a major benefit is the ability to snapshot running machines and redeploy them in seconds. If one hardware box fails, you can migrate the virtual machine onto another box until the first is fixed. This can take just a few seconds or minutes, depending on the architecture of your solution.
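That restore-time figure is easy to sanity-check (simple arithmetic, assuming a sustained 5MBps and ignoring tape changes and verification passes):

    data_mb = 200 * 1024      # 200GB of data expressed in MB
    rate_mb_per_s = 5         # average DLT transfer rate

    hours = data_mb / rate_mb_per_s / 3600
    print(f"{hours:.1f} hours")   # ~11.4 hours, for the full backup alone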

Availability is inversely proportional to the total downtime in the period covered and is usually expressed as a percentage, e.g. 99.9% availability, which equates to around 8 hours 45 minutes of downtime per annum. The downtime is the sum of all outages in that period, so we need to decrease both the frequency and the length of those outages. The frequency is reduced by building fault-tolerant systems; the length by having good restoration policies and practices. It is vital that IT staff know the restoration policy and have practised it. You need to set out a clear timeline of what has to happen during restoration, as certain systems and services will need to be restored first. Your database server may be the most important server to you, but without the network, DNS, DHCP and directory services no other machines will be able to connect to it anyway, and it may rely on some of those services during start-up. Also, don't leave it until you are restoring a failed system to find out whether the policy works. You must practise and test the policy to make sure that it does.
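The downtime arithmetic can be checked in one line (using a 365-day year):

    availability = 0.999
    downtime_hours = (1 - availability) * 365 * 24
    print(f"{downtime_hours:.2f} hours per annum")  # 8.76 hours, i.e. about 8h45m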

Wednesday, 3 June 2009

User-Friendly Multi-Factor Authentication with GrIDsure

I have been tasked with rolling out a trial multi-factor authentication system that must be user-friendly, secure, low-cost and have zero impact on the existing network and users who won't be on the trial. That should be simple!

A trawl round the InfoSecurity Europe show always helps, as you can get the latest state of play from all the major vendors. This year there were the obvious keyring tokens, SmartCards, USB tokens, SMS solutions and some innovative software solutions, including GrIDsure. Before making any decisions about the solution to go for, several things need to be decided, among which are: how much security is required? What is considered user-friendly to a normal, non-technical user? What metrics should we use for authentication?

To answer these we need to look at what authentication is first: Authentication is the binding of an identity to a subject. The subjects we're talking about in this case are users and we're trying to bind their digital identity, i.e. user ID, to them as an individual. There are four metrics we can use to query identity, namely:
  • What they know - e.g. password, PIN, secret/personal information, etc.
  • What they have - e.g. token, SmartCard, phone, etc.
  • What they are - i.e. biometrics
  • Where they are (in space and time) - e.g. particular terminal, logon hours, etc.

The most common form of authentication is username/password, but this is outdated, insecure and usually fairly easily cracked (but that's another post). The problem is that each of these metrics on its own is not very secure (yes, even biometrics are not that secure when used on their own in a large environment). If we use a combination, then we can obtain secure, strong authentication (by strong authentication we mean that we can be sure about the identity of the subject, as it would be very hard for anyone else to present all the required metrics). RSA-type tokens and SmartCards combined with a PIN or password are easy and obvious choices to deploy, but they have issues. They are expensive to deploy, as you have hardware costs as well as the authentication servers. From a user perspective, they have to carry something else around with them. That's not usually too much of a problem, as these are small devices or credit-card-sized tokens, but a SmartCard also needs a reader. Lost or damaged hardware (or tokens whose batteries have run out) has to be replaced at a cost.

OK, so we could use SMS, whereby a text message is received by the user containing a one-time password. This is often done by sending one on-demand to the user when they start the login process. What if the network is down? What if there are delays on the network? Well, we can deliver the password up front, i.e. when they log off we can send them the password to use next time they want to login, but then it's hanging around on their phone, which they might lose. So what's the solution? Well, I saw two products that are noteworthy: Swivel Authentication Solutions and GrIDsure.

Swivel's solution relies on either secure SMS or a Turing image (one that is human-readable but not computer-readable). The Turing image shows two rows of numbers: the first is 0 to 9 in order and the second is the same digits in a random order. The idea is that you remember a base PIN and read off a one-time PIN from the image. So, if your PIN was 2468, then your one-time PIN for this login attempt would be 7193 from the image below. A keylogger is defeated by this, as the captured PIN won't be valid next time and it hasn't revealed the base PIN that you remember. However, if someone uses keylogger and screen-capture malware together, the base PIN could be deduced - possibly a big ask, though you could also shoulder-surf it. That being said, this is very easy to deploy and relatively cheap. Users find it fairly simple to use as well, and it has to be better than a static PIN.
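Here is a minimal sketch of the transcoding idea (my own illustration of the scheme, not Swivel's code):

    import random

    def make_rows():
        """Top row is 0-9 in order; bottom row is a random permutation of it."""
        top = list(range(10))
        bottom = top[:]
        random.shuffle(bottom)
        return top, bottom

    def one_time_pin(base_pin, bottom):
        """Read off the digit displayed beneath each digit of the base PIN."""
        return "".join(str(bottom[int(d)]) for d in base_pin)

    top, bottom = make_rows()
    print("".join(map(str, bottom)))     # the row the user sees under 0123456789
    print(one_time_pin("2468", bottom))  # valid once; the base PIN is never typed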

However, there is another solution - GrIDsure. With GrIDsure, you choose a pattern on a grid to remember. The grid size and pattern length are configurable, but the standard is a four-square pattern on a 5x5 grid. You remember this pattern and then, when you attempt to log in, you are presented with a 'random' grid of numbers, from which you read off the digits in your pattern. For example, if your pattern is the tick that GrIDsure use as their demo and you are presented with the grid shown below, then your one-time PIN would be 0649, and it would change every time you log in. This defeats keyloggers and shoulder surfing, as each digit appears on the grid more than once. So, even if you see or log the grid and someone typing 0649, there are 81 patterns that would result in this PIN on the grid shown below. It is important to remember that you do not type or click on the grid itself. The grid is just an image; you type the PIN on a standard keypad.
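A minimal sketch of the idea (my own illustration, not GrIDsure's implementation; the pattern positions are invented):

    import random
    from itertools import product

    def make_grid(size=5):
        """Fill a size x size grid with the digits 0-9; on 25 cells each digit
        appears two or three times, which is what creates the ambiguity."""
        cells = [i % 10 for i in range(size * size)]
        random.shuffle(cells)
        return cells

    def read_pattern(grid, pattern):
        """The secret is the cell positions; the typed PIN is just what's shown there."""
        return "".join(str(grid[p]) for p in pattern)

    grid = make_grid()
    secret = (2, 8, 16, 13)              # four remembered cell positions (example only)
    otp = read_pattern(grid, secret)
    print(otp)

    # An observer who captures both the grid and the typed PIN still can't
    # recover the pattern: count how many 4-cell patterns produce the same PIN.
    matches = sum(1 for p in product(range(25), repeat=4) if read_pattern(grid, p) == otp)
    print(matches)  # 3^4 = 81 when each digit of the PIN appears three times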


This can be made more secure by deploying the grid on your mobile phone as a Java applet rather than in the web page, thus defeating screen capture. The grids, in this instance, are seeded from the phone's unique ID, so someone else has to have your actual phone, not just the applet running on their own phone. This is true multi-factor authentication, as it requires the physical phone and a pattern that the user knows; without both you cannot authenticate. It is also cheap to deploy, as most users will already have a mobile phone, and those who don't can fall back to the single-factor one-time passcode by displaying the grid in a browser. Users also find this very simple to use, as many people already remember their PINs as a pattern anyway. It also 'plays nicely' with other authentication, i.e. you can still use username and password for some users and GrIDsure for others, as the desktop client accepts both. Similarly, the web portal is able to interface with virtually any web-based application as a single-sign-on solution. Of course, existing users can still hit the services directly and use traditional authentication until this has been deployed to all.
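One way such device-bound grids might be derived (purely my assumption of the mechanism, not GrIDsure's actual scheme) is to seed the grid sequence from the phone's unique ID plus a per-login counter:

    import hashlib
    import random

    def grid_for(device_id, counter, size=5):
        """Derive a repeatable grid from the device ID and a login counter, so only
        the physical device can produce the right grid sequence (hypothetical)."""
        seed = hashlib.sha256(f"{device_id}:{counter}".encode()).hexdigest()
        rng = random.Random(seed)
        cells = [i % 10 for i in range(size * size)]
        rng.shuffle(cells)
        return cells

    print(grid_for("IMEI-356938035643809", counter=42))  # same inputs, same grid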

Tuesday, 2 June 2009

Does Smart Grid Open up Covert Communications? - Smart Grid Steganography

The latest Smart Grid technology promises to reduce energy wastage and save money by enabling your household equipment to 'talk' to central servers and neighbouring equipment about its usage and energy requirements. This lets you see how much equipment is costing you to run and how much you could save by turning it off, getting more efficient equipment, running it off-peak, etc. It also enables micro-generation of power and the ability to sell it to the grid or to neighbouring properties. There are many privacy issues surrounding this technology, many of which are highlighted by Susan Lyon in her article 'Privacy challenges could stall smart grid'. Obviously, there are much bigger issues than the consumer part of the solution, e.g. the self-healing nature of the power grid under failure or attack, but the fact remains that people's equipment and homes will be connected to this.

With this in mind, there is another potential issue (or advantage, depending on your point of view) that people are missing when they talk about security. What if we aren't attacking the grid or trying to steal energy? Most of the security discussions centre on being resistant to attack. What if we instead use the Smart Grid as another network, one that isn't monitored as well as other channels? We can use it as a communications channel by exploiting valid signals and spurious equipment or usage. This could be a great steganographic channel (steganography being the art of hiding a message inside an innocuous carrier).

The privacy debate is looking at the control of the release of your information and tracking. However, I can always see what my own energy usage/generation is and obtain signals from my smart equipment, and I can choose to allow someone else to see this as well (even if it is by telling them my password). I can now increase or decrease my energy usage or generation to send coded information to another party. This can get more sophisticated with a device that pretends to be a number of household appliances with different energy demands and patterns: if it transmits its usage onto the grid, then we can also read it off the grid. Similarly, I can get two devices to communicate usage and pricing levels over the network; that's what it's for. What if I agree to sell you energy from my micro power plant at a particular price? It could be an absurd price; it doesn't matter, so long as it is a valid communication according to the protocols used. The price will presumably be a number, so what if my number is an encrypted message and not a price at all? (Could micro power generation be used for money laundering?)
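As a toy sketch of that last idea (entirely hypothetical; the framing and units are invented), each byte of a message could ride in the low-order digits of otherwise valid-looking price quotes:

    def encode(message, base_price=1000):
        """Hide each byte of the message in the low digits of a price quote."""
        return [base_price + b for b in message.encode("utf-8")]

    def decode(prices, base_price=1000):
        return bytes(p - base_price for p in prices).decode("utf-8")

    quotes = encode("meet 9pm")
    print(quotes)          # looks like a list of plausible tariff quotes
    print(decode(quotes))  # the counterparty recovers the hidden message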

The protocols to be used for a large-scale Smart Grid are still in development, but they are likely to have similar traits to current networks (indeed Cisco is pushing an IP solution). This means that Steganographic IP traffic is just as possible on this network as on the Internet. It also means that we should be able to send virtually any message we like on the network. The smart grid operators will have to check for well-formed traffic, but we can still conform to the standards and send spurious data as long as we don't attack the network. Can we use this for covert communications?
