Friday, 11 December 2009

Contactless Credit Card and ID Card Skimming

This news post was brought to my attention, showing a steel-woven wallet designed to keep RFID credit cards safe. To some this may sound a bit far-fetched, and to others nothing new or worth worrying about, but hear me out.

With new contactless credit cards you can make small purchases without resorting to the Chip-and-PIN transaction that is most common. Instead, you just 'touch' your card on the reader and away you go. The problem with this is that you cannot turn your card off. I can bring the reader to you; I just need proximity. These readers are small and pocketable, and I can read your card without you taking it out of your pocket. The more high-powered my reader, the further away from you I can be to read your card. Initially, the cards gave out the name on the card, the card number and the expiration date. After people showed how easy it was to skim this information off the card, most issuers removed the cardholder's name from the data transmitted. They have also introduced transaction IDs to help protect the cards from being cloned. However, as RFID identity cards become more widespread, we will be giving the cardholder's name out again on those.

It has been argued that it is more productive, and still far too easy, to attack the databases of card details, so people won't bother trying to read the cards in your wallet. However, I can still obtain a legitimate payment reader and simply read your details off and collect your money directly, even if I can't clone the card. In the UK, these contactless transactions are for small amounts of money (£5-£10 typically), but I can collect that from you without your knowledge in a variety of ways. I can use a small pocket device to take a payment from your card directly, but this has to be one at a time and, in some countries, only for small amounts of money. (See the video below for a mobile phone that can process contactless credit card transactions.)

[Embedded video: a mobile phone being used to process a contactless credit card payment]
Maybe this isn't worth it to anyone but a petty criminal, but it is relatively easy and cheap. Another way would be to go to a crowded area (public celebrations or gatherings of tens or hundreds of thousands of people, for example) and use a high-powered reader to read lots of cards at once. If I can steal £10 from 100,000 people, that's £1m in an afternoon! Somewhere in the region of half a million people gather in Trafalgar Square, and environs, for the New Year celebrations, and similar numbers for the Chinese New Year celebrations a little while later.

RFID readers are all over the place and we don't pay them any mind. Shops use RFID readers to catch shoplifters at the entrances. Do you categorically know that they aren't reading your cards instead of catching shoplifters? Would you notice an extra £10 transaction in a shop that you did buy something in? All of our buses and tube stations in London use RFID readers for ticketing. A vast number of commuters will have their Oyster card in their wallet or purse along with their other cards. They don't remove the Oyster card to touch it, they just touch the whole wallet. I could read the other card information at the same time. In fact, do we know that these details aren't logged and just ignored? If they are logged, then maybe we could attack Transport for London's computer system and extract people's credit card details.

Maybe people won't do this with credit cards. What about large banks or other companies that use RFID door entry cards and ID cards? I could read their ID and possibly gain access to their building. Will the ID give me a username to work with? If I can gain physical access to a bank building then I have a huge array of attack vectors at my disposal; it is critical to keep the hackers out.

Even if a hacker can't clone these types of cards, they can still collect information about people and about companies. It is perfectly possible to identify all the employees of a company using RFID cards for door entry, as you can have a high-powered reader near their entrance or at public transport stations. This now gives a social engineer a target and some information to use. Maybe we should all hold our wallets over the top of shop gates and away from other readers, or buy one of the shielded wallets from companies like ID Stronghold or Herrington.

Friday, 4 December 2009

Proposed Pseudo-Code for Hacking Process

It is quite common in Information Systems to use pseudo-code to describe a process. I have often thought that the same principle could be applied to the process of hacking an organisation, which may help people understand the process and how to protect themselves. Below is my proposal for this pseudo-code for the hacking process. This is very much a work in progress. I would welcome feedback on it and I will update it as suggestions are made or as I feel it needs revising.

organisation = proposed target
organisation.footprint(value, effort, risk)
profit = value - (effort * risk)
if profit > 0 then
  organisation.enumerate()
  select attack_type
    case DoS
      engage_botnet(myBotnet)
      myBotnet.launchDDoS(organisation)
    case Access
      organisation.gainAccess(myAccount)
      myAccount.elevate()
      organisation.installBackdoor(myAccount)
      organisation.cleanUp()
  end select
else
  exit
end if
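To make the notation concrete, here is a rough sketch of how the pseudo-code might look as runnable Python. The class, method names and footprint numbers are invented to mirror the pseudo-code above; the attack operations are deliberately left as stubs:

import sys

class Organisation:
    """Stand-in for the target; every operation is an illustrative stub."""

    def footprint(self):
        # Estimated value of the target's assets, cost (effort) of the
        # attack, and risk of being caught - invented numbers.
        return 100_000, 40_000, 3.0

    def enumerate(self):
        print("enumerating hosts, services and people...")

def consider_attack(org):
    value, effort, risk = org.footprint()
    profit = value - (effort * risk)
    if profit <= 0:
        sys.exit()        # not worth attacking - exactly the deterrent we want
    org.enumerate()
    # select attack type, gain access, elevate, install backdoor, clean up...

consider_attack(Organisation())   # exits: 100,000 - 40,000 * 3.0 < 0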

This highlights the fact that we only need enough security to make an attack not worthwhile, i.e. it will cost the attacker more to compromise our system than they will gain. Who would spend £1m to get £10 worth of information? We don't need (indeed cannot have) absolute security, just enough to protect our system. It also highlights another interesting point. Perhaps we should make our countermeasures public - not the actual implementation details or versions, but the fact that they are in place, e.g. that we have an IDS. Consider blatant versus hidden CCTV cameras. Cameras in plain sight deter most criminals, whereas hidden cameras merely spy on criminals while they perpetrate their crime.

We want to make the risk of being caught and prosecuted high, so that hackers require a higher value-to-effort ratio, which we won't give them. Given two identically protected organisations with the same value, would you attack the one that doesn't monitor activity or the one that does?

Obviously, the above is very vague and doesn't provide methods to complete these tasks, but that is not the point of this post. Backdoors and Trojans are usually relatively easy to install if you have the right level of access to the system, so much of your security is going to hang on stopping a hacker from gaining access. Gaining access to an organisation is usually performed in one of four main ways:
  • Malware
  • Sniffing
  • Direct Attack
  • Social Engineering
There are ways to protect yourself, up to a point. Some of the most critical are:
  • Installing Antivirus/Antispyware
  • Latest OS and Application Patches
  • Enterprise-level firewall, with IDS/IPS, AV, etc.
  • Personal firewalls on all mobile devices
  • Secure, hardened configurations
  • Browser lock-down
  • Encrypted communication (e.g. SSL/TLS, VPN, etc.)
  • User Education!

The last point is often overlooked as a critical security practice. Please feel free to comment.

Thursday, 26 November 2009

Blackboard (in)Security

The University recently paid for a vulnerability assessment and penetration test, which came back saying that, apart from a few minor things, everything was fine and secure. I take issue with this finding for several reasons, most of which I won't go into here. Now, I haven't actually seen the report produced by the company, but I have had verbal reports from the IT technicians that 'nothing serious' was found.

The University uses a hateful product called Blackboard as a Virtual Learning Environment. This is a web-based application allowing access to learning materials, grades, etc., from anywhere in the world. The problem is that it doesn't use an encrypted connection and uses a simple Session ID cookie to assert that you are an authenticated user. There are two problems with this. Firstly, if I capture your cookie and send it with my HTTP request, then I will be treated as you and can see or do anything as you. Secondly, and much more importantly, the username and password are sent in plaintext!
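To illustrate the first of these problems, replaying a captured session cookie is trivial. A sketch using Python's third-party requests library; the cookie name, value and URL here are all invented for illustration:

import requests

# Cookie name, value and URL are made up - purely illustrative
stolen = {"session_id": "ABCD1234DEADBEEF"}
resp = requests.get("http://blackboard.example.ac.uk/webapps/portal/",
                    cookies=stolen)
print(resp.status_code)   # the server happily treats us as the victim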

I shouldn't have to explain why this is such a bad idea, but I can't understand why it wasn't picked up as a major security hole. A simple packet sniffer will pick up anyone's username and password, giving full access to the network and other services, such as email, home directories, etc. The trouble is that it's not just students who log in to this service; all the academics and admin staff do as well. You can imagine what could be done by grabbing a lecturer's username and password.

How easy is it to actually launch a sniffing attack? Well, surprisingly easy (unless you are a pen tester, in which case it won't surprise you at all). Consider the fact that people do connect to this service from public wireless hotspots or from shared networks, such as the halls of residence or the university network itself. It isn't difficult for someone to sniff the network and extract the user's password. 'MAJOR SECURITY WEAKNESS' not 'nothing serious'.
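To give a sense of just how low the bar is, a passive sniffer of a dozen lines will do it. Here is a rough sketch using the Python scapy library (it needs root privileges, and should only ever be run on a network you are authorised to test); it simply watches port 80 for request bodies containing a password field:

from scapy.all import sniff, Raw, TCP   # third-party: pip install scapy

def check(pkt):
    if pkt.haslayer(TCP) and pkt.haslayer(Raw):
        payload = bytes(pkt[Raw].load)
        if b"password" in payload.lower():
            print(payload)   # the plaintext credentials, in the clear

sniff(filter="tcp port 80", prn=check, store=0)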

I advise people to connect to Blackboard instances via SSL connections at the very least. It doesn't stop all the attacks, but it will stop simple packet sniffing.

Wednesday, 11 November 2009

Secret Sharing Algorithm for Protecting Files in the Cloud

Data stored in the cloud can be compromised or lost (see my previous post). So, we have to come up with a way to secure those files. We can encrypt them before storing them in the cloud, which sorts out the disclosure aspects. However, what if the data is lost due to some catastrophe befalling the cloud service provider? We could store it on more than one cloud service and encrypt it before we send it off. Each of them will have the same file. What if we use an insecure, easily guessable password to protect the file, or the same one to protect all files? I have often thought that secret sharing algorithms could be employed to good effect in these circumstances instead.

What are secret sharing algorithms? They are algorithms that will share a secret between several parties, such that none of them can know the secret without the help of others. Either all or a subset of them will need to get together and put their parts together to obtain the original secret. A simplistic solution can be achieved by XORing the secret with a random number, then giving the result to one party and the random number to the other. Neither one can find out what the secret was without the other. To retrieve the secret they only need to XOR the two parts together again. This can be extended to any number of parties.
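A minimal sketch of the XOR construction in Python; every share is required to recover the secret:

import os

def xor_split(secret, n):
    """Split secret (bytes) into n shares, all of which are needed to recover it."""
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    final = secret
    for share in shares:                    # fold the random shares into the secret
        final = bytes(a ^ b for a, b in zip(final, share))
    return shares + [final]

def xor_join(shares):
    """XOR all the shares back together to recover the secret."""
    secret = bytes(len(shares[0]))
    for share in shares:
        secret = bytes(a ^ b for a, b in zip(secret, share))
    return secret

assert xor_join(xor_split(b"my secret", 3)) == b"my secret"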

A more sophisticated way would be to allow the secret to be retrieved from a subset of the parts distributed. In the previous example, if any of the parties loses their part, or refuses to disclose it, then nobody can reveal the secret. This isn't much good if one of our cloud service providers fails. On the other hand, if we can share the secret between three people, but only require any two to regenerate the original, then we have some redundancy. This is an example of a (k,n) threshold scheme with k=2 and n=3.

How do we achieve this though? Well, Adi Shamir proposed a simple secure secret sharing algorithm. It is based on drawing graphs. To uniquely define a straight line, you need two points on that line. Similarly, to define a parabola you need three points. A cubic requires four, etc. So, we can distribute points on a line to each party we want to share the secret with. The order of the line will determine how many of them need to get together to regenerate it. So, we could define a random straight line and distribute three points on it to three different parties. However, only two of them need to get together to regenerate the original secret.

We set up a (k,n) threshold scheme by setting the free coefficient to be the secret and then choosing random numbers for each of the other coefficients. The polynomial then becomes the following:

$$f(x) = a_0 + a_1 x + a_2 x^2 + \dots + a_{k-1} x^{k-1}$$
where $a_0$ is our secret. Now we can distribute points on the line to each of the n parties simply by calculating y for a series of different values of x. We can use the Lagrange Basis Polynomials to reconstruct the equation of the line from k points. However, we do not need to reconstruct the whole line; we are only interested in the free term. This simplifies the equations that we need to use. For example, if we have a straight line, then we only need two points $(x_0,y_0)$ and $(x_1,y_1)$. We can then calculate $a_0$ as follows:

$$a_0 = \frac{x_1 y_0 - x_0 y_1}{x_1 - x_0}$$
Similarly, for a parabola and three points $(x_0,y_0)$, $(x_1,y_1)$ and $(x_2,y_2)$ we have:

$$a_0 = \frac{x_1 x_2\, y_0}{(x_1 - x_0)(x_2 - x_0)} + \frac{x_0 x_2\, y_1}{(x_0 - x_1)(x_2 - x_1)} + \frac{x_0 x_1\, y_2}{(x_0 - x_2)(x_1 - x_2)}$$
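For the practically minded, here is a rough Python sketch of a whole (k,n) scheme. One assumption worth flagging: real implementations do the arithmetic over a finite field rather than the reals, both to keep the numbers exact and to avoid leaking information, so this sketch works modulo an arbitrary large prime:

import random

PRIME = 2**127 - 1   # arbitrary prime, larger than any secret we will share

def make_shares(secret, k, n):
    """Distribute n points on a random degree-(k-1) polynomial with a0 = secret."""
    rng = random.SystemRandom()
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def recover_secret(shares):
    """Lagrange interpolation evaluated at x = 0 yields the free term a0."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * xj % PRIME
                den = den * (xj - xi) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(424242, k=2, n=3)    # any 2 of the 3 shares suffice
assert recover_secret(shares[:2]) == 424242

Each cloud provider then stores one (x,y) share; any k of them together reconstruct the secret (say, your file-encryption key), while fewer than k reveal nothing about it.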
As the sketch shows, this is fairly simple to implement and use. You would need to sign up to a few cloud services, but you wouldn't have all your eggs in one basket and you wouldn't be reliant on weak passwords.

Wednesday, 28 October 2009

PhoneFactor Security

I was asked recently to look at the security of the PhoneFactor 2-factor authentication solution. If you don't know what it is, then you can find out more here, but essentially you enter your username and password, then they phone you on your pre-defined number and you press the # key to validate the authentication. The problem with just pressing the # key is obvious, but they do allow you to configure entering a PIN rather than just pressing the # key. To my mind, there should be no option other than having to type in the PIN. However, even this isn't necessarily a brilliant idea. As I've said before in this blog, a lot of phones log the digits dialled, in which case that PIN isn't secure.

I was also told that the PSTN and GSM networks are secure, so this is a good solution. I'm not sure I agree that PSTN and GSM networks have good security. Analogue PSTN is easy to listen in to with proximity and GSM can theoretically be cracked, and probably will be within 6 to 12 months. So that PIN number isn't really secure. Plus there is the cloned SIM card problem as well.

http://www.mobileindustryreview.com/2009/08/gsm-encryption-can-be-cracked-for-500.html

Having said that, PhoneFactor looks quite good as you enter the PIN on the phone line, not the login dialogue. The problem that Bruce Schneier has referred to is that of a Man-in-the-Middle attack. Most 2-factor authentication methods are susceptible to a MITM attack, including RSA tokens and other hardware tokens. Basically, if I set up a website, for example, to mimic your corporate portal, then you will enter all your details into my page, including your one-time pass code. I will forward them on to the real portal and do whatever I like logged in as you.

The one advantage is that I have to intercept every login attempt and wait for you to log in before I can gain access. Without a 2-factor system, once I've read your username/password combination I can log in whenever I like. PhoneFactor would appear to mitigate some of this risk by doing the authentication out of band. However, there is still an attack vector for a MITM attack. In the same way as before, you log in to my portal, I forward your credentials, PhoneFactor phones you, you put in your PIN, and they enable my session! Obviously, there are other attack vectors as well.

Another potential issue is that you are charged for the phone calls made by PhoneFactor on your behalf. These costs can be significant. In the UK calls to landlines are free, but am I always at my desk when I want to log in? No, I'd want it on my mobile; that will cost me $0.23 per login (East Timor $3.25). So, I could rack up the bill for your company by getting PhoneFactor to call through to someone repeatedly. If I do this enough times (especially if that person is on holiday in another country with higher charges) I can use up all your credit and none of your users can log in.

There is a privacy issue as well. PhoneFactor will know every time you log in or access your bank, etc. How do they protect that data? Do you want them to know that information, even if you do trust they won't accidentally disclose it?

However, I am not against 2-factor authentication. Indeed I think it is a good thing, as users will choose poor passwords, reuse them everywhere and write them down. Similarly, they will give them away to phishing scams. 2-factor authentication removes all of those problems, but by no means is it absolutely secure. PhoneFactor seems OK, but it's not particularly cheap or phenomenally secure. There are some other good software solutions that are pretty cheap as well, and that can combat shoulder-surfing when entering PIN numbers, etc. There are a couple of examples on a blog post I did a couple of months ago: http://blog.rlr-uk.com/2009/06/user-friendly-multi-factor.html

The bottom line is that they are more secure than username/password, but none of them are absolutely secure against all attacks.

Tuesday, 27 October 2009

Security Questions for your Cloud Services Provider

Comodo Vision Video Blog

Cloud Services or Cloud Computing are getting a lot of attention in IT circles, promising cost-effectiveness, flexibility, and time-to-market advantages over traditional alternatives. However, they also increase your security risk by expanding your security perimeter to include that of your service provider. This video blog poses some key questions to ask your Cloud services/Cloud Computing provider regarding data security as well as advice to reduce the risk to your business introduced by Cloud Computing.

See my third video blog for Comodo Vision here.

Thursday, 1 October 2009

APWG Report 1st Half 2009

On 27th September the APWG released their First Half 2009 Phishing Trends Report. This provides some interesting/worrying reading. Most notable is the rise and rise of rogue anti-malware programs.

Rogue anti-malware programs are programs that run on a user's machine and falsely identify malware infections. They then inform users that the malware can be removed by purchasing their anti-malware program. The installed software, in many cases, does absolutely nothing. The malware author has made their money off the user and doesn't care about them or the fact that their machine is left vulnerable to other malware. However, there is another breed of rogue anti-malware that will install other malware onto the user's machine, often adding them to botnets or adding trojans and spyware. According to Panda Labs' Luis Corrons, rogue anti-malware programs are proliferating with "exponential growth. In the first quarter of 2009 alone, more new strains were created than in all of 2008. The second quarter painted an even bleaker picture, with the emergence of four times as many samples as in all of 2008."

Most of these rogue anti-malware programs have a common root - they even look the same. So how come they aren't detected as malware? Well, often they employ server-side obfuscation so that each version is slightly different, thus defeating some signature-based scans. Also, you have to remember that many of these don't perform any malicious actions and, therefore, don't trigger other alarms.

What can we do about rogue anti-malware? Well, simply don't trust anything on the Web saying that you are infected or offering to scan you for free. Do not install any anti-malware from a company that you do not know, and always check the validity of links and downloads. There are many legitimate companies out there providing free basic anti-malware, or sophisticated products for a relatively low price, such as Panda Security, AVG, Comodo, Symantec, etc. If you do get infected by one of these programs then you need to remove it. Instructions for removing the most common ones can be found at http://www.anti-malware-blog.com/ - N.B. be warned that I have not assessed or validated their instructions and there is no guarantee that they won't cause other problems.

What about the rest of the report? Well, phishing is still on the increase, with reported phishing highs for the first half of the year exceeding those of last year significantly (by about 7%). 21,856,361 computers were scanned to determine host infection rates; 11,937,944 were found to be infected (54%), which is an increase of over 66% from the last quarter of 2008. Banking trojan/password-stealing crimeware infections rose by more than 186%. Finally, payment services have taken the top spot from the financial sector as the most targeted industry, although the financial sector is still a close second. To see how this compares, a previous blog post of mine on this shows how things have changed.

For more information about the Anti-Phishing Working Group, to report phishing attacks or to see their reports yourself, visit http://apwg.org/

Monday, 28 September 2009

Telephone and Fax Services Security

In this day of doing everything online, we still rely heavily on services delivered over POTS (Plain Old Telephone Service). Banks and credit card companies still require the telephone to make certain changes, queries and security checks, even though most functions can be performed online. Medical records, bank details, security key order requests, etc., are routinely transferred by facsimile. However, are these secure? Are they more or less secure than doing the same thing online?

I'm not going to talk about the underlying security of POTS, but concentrate on a couple of easy attack vectors on the user's end device that I have recently observed. A couple of weeks ago, I needed to amend something on one of my credit card accounts (I would tell you which bank, only it's my personal credit card and I don't want phishers knowing which banks I have accounts with). This bank has an automated telephone answering system to make things more efficient and reduce the staff required - pretty standard. So I made sure that I was in a room on my own, to prevent eavesdropping on my conversation, and dialled the number. The automated system asked me to type in my full credit card number on the keypad.

The problem with doing this is that the telephone will remember these digits as part of the last number dialled. Therefore, all someone would have to do is recall the last number dialled and read off the credit card number. If they actually dialled it, they would be put through as the legitimate cardholder. Now, admittedly, they will probably be asked some security questions on the other end before any changes are made, but these may consist of simply asking for a date of birth, which is fairly easy to find out. Even if the attacker doesn't know this information, other information may be given away in the meantime (e.g. who the cardholder is, as staff normally use your name in any greeting). The problem is compounded if you make a call from work, where you will probably be using an exchange. Exchanges will store all the numbers dialled, including any options or credit card numbers entered on the phone's keypad. This log can simply be printed out and your details read off. Of course, the number dialled will show which bank you are using as well, although this can also be gleaned from the first 6 digits of your credit card number.

Things can potentially get worse if you use facsimile or fax machines. There are different types of fax machines that work in different ways. Most will keep a log of calls and faxes sent and received. This may or may not be a problem, depending on the level of detail of the log and whether you're typing in credit card details during a phone call made on the machine. However, some fax machines use rolls of pigment on acetate (or similar) to print the fax out when received. The problem here is that these rolls are wound through during printing and that part is never reused (otherwise you will get gaps in your printing). However, what this means is that when you come to throw the roll away once used, it will have a perfect facsimile of everything printed on the machine since the roll was put in, only in negative. To get round this, you must shred, or otherwise destroy, the used roll, not just throw it in the bin.


Whether this is more or less secure than an online transaction is a difficult question to answer. On the one hand, you often need physical access to the phone or fax machine to get to the logs, although telephone exchanges are often online. Also, sifting through a bin outside a premises isn't that hard and can often be very rewarding. On the other hand, transactions online are encrypted and people are more aware of the security implications in general. However, malware and man-in-the-middle attacks can still thwart this type of transaction, though they require more skill than sifting through a bin.

Not all data leakage comes from computers, pen drives, etc. Sometimes a seemingly innocuous device can betray your information and breach your security. Unfortunately, you have to think of all possible attack vectors and mitigate the risks. This is why a full information policy that covers all forms of data is required.

Friday, 25 September 2009

Human Factors in Information Security - Errors & Violations

Human failures are often described as Slips, Lapses, Mistakes and Violations. These are grouped into two categories: Errors and Violations. The difference here is the intent - violations result from conscious decisions to disregard policies and procedures, whereas errors have no malicious intent. Also, violations often involve more than one form of misconduct, whereas errors are often isolated.

Don Turnblade has stated that in his experience "well trained staff had a 3.75% unintentional non-compliance rate; they did not realize that installed software compromised data security. About 0.4% of end users were intentionally non-compliant, generally willful persons with strong technical skill or organizational authority who were unaccustomed to complying with computing restrictions."

So what are the different types of error? Dealing with each in turn, we have Slips, Lapses and Mistakes.
  • Slips - actions not carried out as intended, e.g. pressing the wrong key by accident. Slips usually occur at the task execution stage.
  • Lapses - missed actions or omissions, e.g. forgetting to log out, or a step in a configuration process.
  • Mistakes - occur due to an incorrect intention, whilst believing it to be correct, i.e. they are deliberate actions with no malicious intent, e.g. misconfiguration of a firewall. Mistakes usually occur at the planning stage.

So who causes the error or violation and how do we combat them? Slips and Lapses are usually the fault of the user, but can be mitigated by making it more difficult for the user to make the error, e.g. by having confirmation dialogs for slips and better training for lapses. Mistakes tend to be the fault of designers and are slightly more difficult to combat as designer education is required or outside technical expertise needs to be brought in. However, this doesn't always solve the problem if they don't have the skills and knowledge required. Finally, violations can often be laid at the door of the managers. It is often the case that a culture of violations is accepted by senior management, who fail to impose proper sanctions or take the threat seriously.

All of these have to be dealt with to have a secure system and most of it boils down to having proper user education and training in place.

Thursday, 24 September 2009

Personal Mobile Devices Violate Compliance

Computer Weekly recently conducted a survey via Twitter on how many organisations allow their users access to corporate email from their own private phone. Unfortunately, I haven't seen any results from this survey as yet, but it made me think about organisations that do allow private devices to attach to the network, not just mobile phones. I have also had many comments on my blog post entitled 'Mobile Device Data Breaches', which have fed into this post.

In one of those comments, someone pointed out that in their experience users are often a weak link. Isn’t it always the case that users are the weakest link? A poorly educated/trained user can compromise the best security. Unfortunately, I have seen so many organisations that do not adequately train their users or make them aware that there are policies, let alone what they mean to their daily usage of the corporate systems. I have also come across one organisation where a top executive had all the system passwords stored, unencrypted, on his PDA. He didn’t see a problem with this as he always carried it with him!

How many organisations these days have push email onto a mobile? How many of those organisations send sensitive documents around via email? Do they have encryption and password access on those devices? Not many that I’ve seen. The typical Blackberry users that I see have no password or PIN access to their phone, but it does have full access to the corporate mail exchange. These devices also have the ability to store, and even sync, corporate documents. What policies do you have to cover them?

Quoting from ISO-27002:2005 11.7.1: A formal policy should be implemented, and appropriate security measures adopted, for mobile computing and communications activities. Controls should apply to laptop, notebook, and palmtop computers; mobile phones and "smart" phone-PDAs; and portable storage devices and media. Controls include requirements for:

  • physical protection;
  • data storage minimization;
  • access controls;
  • cryptographic techniques;
  • data backups;
  • anti-virus and other protective software;
  • operating system and other software updating;
  • secure communication (e.g., VPN) for remote access; and
  • sanitization prior to transfer or disposal.
The problem is that most organisations do not have adequate policies covering mobile devices. Moving away from mobile phones, are you allowed to plug a USB device into your corporate machine? Many of these devices can store sensitive data and even access the Internet themselves. What about an insecure iPhone connecting to the Internet and leaking data? Most organisations aren't even aware that you can lock down USB usage via tools, but policies should definitely be in place. Alan Goode, from Goode Intelligence, said the following:
"I feel that you can lock down with security policy and tools but this is a complex problem as the combination of mobility and technology diversity, e.g. I can use my iPhone to connect to the enterprise network and store sensitive data on it, is creating a major headache for infosec professionals. As well as the problem with laptops and USB drives we are also seeing a growing use of employee-owned mobile devices, netbooks, games consoles, smart phones, all having IP and WiFi capabilities and all capable of picking up enterprise data and email."
There are a number of things we can do to stop these devices from compromising the network by blocking their use. We can block USB devices from being able to connect unless they are a managed resource, so that users can't just plug anything they bring in from home. All USB devices have an ID, which can be registered with a central authentication server to check before a computer allows it to be used. Of course this needs third-party software, but can be done quite easily. We can also block devices from being able to obtain an IP address or connect to the corporate network in the first place. We shouldn't have a free-for-all attitude on the network. It should be locked down to approved devices only. Only managed devices can connect and they will have to authenticate.
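As a flavour of what such third-party tools do under the hood, here is a hedged sketch using the Linux pyudev library to watch for USB devices and compare their vendor/product IDs against an allow-list. The IDs are invented, and a real deployment would check a central server and actually block the device rather than just print a warning:

import pyudev   # third-party, Linux-only: pip install pyudev

ALLOWED = {("0781", "5567")}   # (vendor_id, product_id) pairs of managed devices - invented

context = pyudev.Context()
monitor = pyudev.Monitor.from_netlink(context)
monitor.filter_by(subsystem="usb")

for device in iter(monitor.poll, None):
    if device.action == "add":
        ids = (device.get("ID_VENDOR_ID"), device.get("ID_MODEL_ID"))
        if ids not in ALLOWED:
            print(f"Unapproved USB device {ids} connected - alert and block here")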

I think it’s asking for trouble to allow users to connect their own private devices to the network or services. I don’t see how you can comply with any standards or your own security policies when allowing this, as you don’t know what’s connected or how it’s configured. Even if they are secure (a very big IF), by not knowing the configuration or being able to audit it, you are surely in violation of any accreditation or certification that you may have because you cannot test or 'prove' your compliance.

Wednesday, 9 September 2009

Compliance does NOT Equal Security

Comodo Vision Video Blog

Responsibility for the notorious Heartland Payment Systems data breach late last year has been debated recently, with Heartland’s CEO suggesting that their PCI auditors let the firm down, while the auditors insist they can’t be responsible for checking absolutely everything. This case brings to light the reality that absolute security is an impossible goal, and that audits are only as good as an organization’s vigilance in following proper security procedures after the audit has been completed.

See my second video blog here.

Friday, 4 September 2009

Mobile Device Data Breaches

Comodo Vision Video Blog

Several recent data breaches at major enterprises and governmental agencies stemmed from the loss or theft of mobile computers and USB drives. While encrypting the data on these devices isn’t a bad idea, the larger question is why was sensitive personal information stored on the mobile device in the first place?

See my first video blog for Comodo Vision here.

Monday, 31 August 2009

Should an Administrator Trust their Users?

The answer is yes and no (note, in this blog, I'm not talking about cryptographic or identity trust, but systems trust). There are two aspects to this. Firstly, do you think your users will deliberately act against your organisation or try to harm the system? This is not usually the case for corporate employees - you also have severe sanctions available if they do. The second aspect is, do you trust your users NOT to make mistakes? Everyone makes mistakes; we're only human. You don't want accidental updates or changes, so in this sense maybe you shouldn't trust your users.

Actually, there are three overall approaches to system trust on networks. We can trust all of the people all of the time (a bad idea, but much more common than you'd think), trust no one at any time (perhaps too excessive, and it hinders functionality), or trust some of the people some of the time. The last one is usually the best strategy to adopt for your network.

Finally, we have to decide on the overall approach to security. Are we permissive or restrictive? In a permissive environment you can do everything, apart from those things on a blacklist. In a restrictive environment, you can do nothing, apart from those things on a whitelist. From a security standpoint restrictive is better, but from a usability standpoint permissive is better. If you can manage the whitelist successfully, the restrictive approach is the better solution: trust some of the users some of the time.

ATM & Bank Card Security

I recently read an article in New Scientist entitled "Want to clone bank cards? Just press 'print'". They state that it has been discovered that

"... a devious piece of criminal coding that has been quietly at work in a clutch of cash machines at banks in Russia and Ukraine. It allows a gang member to walk up to an ATM, insert a "trigger" card, and use the machine's printer to produce a list of all the debit card numbers used that day, including their start and expiry dates - and their PINs. Everything needed, in fact, to clone those cards and start emptying bank accounts."

This is possible because ATM Terminal vendors have succumbed to financial pressures, and the demand for greater functionality, and moved to using standard modular PC architectures and off-the-shelf operating systems, such as Microsoft Windows and Linux. These ATM devices then become vulnerable to similar malware as their desktop counterparts.

SpiderLabs, part of Trustwave, identified that in this case a new version of the 50KB lsass.exe Windows XP file is loaded onto the system via a compromised Borland Delphi installer utility, isadmin.exe (note, that's LSASS.EXE, not 1SASS.EXE as some have reported). You can view the full report from Trustwave as a PDF here. The legitimate lsass.exe executable is used to cache session data in Windows, so that users don't have to re-enter passwords when receiving new emails or returning to a website, which is essentially what the malware developers want to do with the card data. Actually, this has no place on an ATM, but may not be picked up, due to the fact that it is, by default, on most Windows XP installs.

If a trigger card is not detected, the malware stores the transaction data to a file called tr12 and key or PIN data to a file called k1 in the C:\WINDOWS directory. If a trigger card is detected, then a menu of 10 options is displayed for 10 seconds, with functions including uninstalling the malware, deleting logs, printing logs (encrypted with DES) via the built-in printer and possibly exporting the data onto the trigger card. This particular malware only works on transactions in US Dollars, Russian Roubles or Ukrainian Hryvnia. It is also said that chip-and-PIN cards across Europe are not vulnerable to this malware, as the PIN is encrypted in the secure PIN pad.

It has been speculated that deploying the malware was either an inside job or the result of bribes and threats, the reasoning being that an attacker would need physical access to the ATM to deploy the malware. However, the ATMs and banking network, although separate from the Internet, have not necessarily been hardened enough. Back on 25th November 2003 the first known case of a worm (Welchia) infecting Windows XP-based ATM machines was reported, which used the closed financial network to propagate. This was possible because the ATMs weren't patched by the financial institutions in question. This raises the whole problem of patch management on ATMs, as well as the need for greater restrictions on the financial networks. How long will it be before keyloggers are available for chip-and-PIN cards as well?

Friday, 21 August 2009

Wireless Network Security Recommendations

Wireless Networks are still causing businesses problems. By their very nature they are insecure, as they are a broadcast network that frequently extends beyond your physical boundary - remember radio signals don't stop at your door. There ARE security mechanisms to make them secure, but too often these are not implemented properly or are circumvented by users. It is vital that all traffic on the wireless network be encrypted, and connections authenticated, otherwise anyone with a laptop can view all your traffic. There are many mechanisms for achieving this, but at the very least you should use WPA with long pass phrases (not simple passwords) and MAC address authentication.

Don't use WEP; it can be broken easily. I won't bore you with all the details here (I refer you to Google instead), but there are several flaws. Firstly, WEP uses a linear Integrity Check Value, so predictable bit-flipping can be used to send invalid messages that will appear to be valid. Secondly, the 40-bit shared secret is 'extended' by use of a 24-bit per-packet Initialisation Vector. As any cryptographer will tell you, the more often you use the same key, the easier it is to recover the plaintext (particularly if you have known plaintext, which of course we do have in the headers of network packets). IV collisions happen surprisingly quickly, especially on corporate wireless networks, as they will usually carry a reasonably heavy load. TK Maxx found this out the hard way when they lost half a million credit card details to a hacker sitting in their car park. This also shows that they almost certainly didn't segregate the traffic and force it through a firewall.
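How quickly do those IV collisions happen? A back-of-the-envelope birthday-bound calculation:

import math

iv_space = 2 ** 24   # WEP's 24-bit per-packet initialisation vector
# Birthday bound: packets needed for a ~50% chance of a repeated IV
packets = math.sqrt(2 * iv_space * math.log(2))
print(f"~{packets:,.0f} packets")   # roughly 4,800 - seconds of traffic on a busy WLAN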

So what can we do about this? Well, all modern equipment will support Wi-Fi Protected Access (WPA) and WPA2. A standard implementation of this is to use a Pre-Shared Key (PSK), i.e. a pass phrase, and the AES block cipher for encryption. This is the minimum requirement for a wireless LAN. Again, don't use simple passwords, as the security of your system relies on them. You should use long, complex pass phrases, with punctuation. Another idea is to encrypt a pass phrase using itself (or another) as a key in an encryption tool, then use the resulting base-64 encoded string as your PSK. However, automatic key negotiation and the use of digital certificates is a better option in a corporate environment (remember, for wireless access you can run your own internal certificate server so that you don't incur additional costs).
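If you don't fancy the encrypt-a-passphrase trick, any cryptographically secure random generator will do the same job; a small sketch in Python:

import base64
import secrets

# 32 random bytes -> a 44-character base-64 string, comfortably within
# WPA2's 8-63 printable-character limit for a pre-shared key
psk = base64.b64encode(secrets.token_bytes(32)).decode()
print(psk)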

This doesn't solve everything though. A little while ago the head of a department in an organisation I was involved with decided that he didn't want to have to use the docking station for his laptop as it constrained where he could work in his office. So, he didn't contact the IT department, but instead went to his local IT retailer and bought a cheap wireless access point. He plugged this into the network and, not only did he not configure any security, but he didn't even change the default password on the device. Do you categorically know that you don't have a rogue access point on your network? This can be stopped by using technologies such as 802.1X port-based authentication and a RADIUS server.

Wireless networks also need to be treated as insecure and separated from your wired network via a firewall, with real-time virus checking and an Intrusion Detection System. This doesn't mean that they have to be unprotected themselves; you should still protect them from outside attack by firewalling them off from the Internet. The important point is not to let traffic flow, unchallenged, from the wireless network onto the wired network. This is not often done though. I was in Vienna recently on business and the hotel I was staying at had free wireless access for guests. However, one night I couldn't get access and asked why. I was told that they had switched it off as someone had been trying to access their servers (fortunately, they weren't very proficient or experienced hackers). The point that I found more worrying was that their public wireless network was directly connected to their servers, which hold the names, addresses and payment details of guests and even the door card programming details! You can imagine what could happen if someone were to get into those servers...

Wireless networks and wired networks should not coexist on the same subnets. This is for two reasons. Firstly, it is easier to attack and, therefore, attach to a wireless network, so you don't know categorically that all stations are legitimate. Secondly, most wireless networks are used to connect mobile devices, such as laptops and netbooks, to the network. Do you know that these haven't picked up any malware whilst not connected to your corporate LAN? You can address the latter with network access control, but that's a different topic. However, all traffic from the wireless network should be treated with a level of suspicion and therefore separated. You don't have to have a separate Internet connection or new wiring to achieve this; VLANs (or Virtual LANs) can solve the problem by logically segregating the traffic into the firewall. This also allows you to provide public wireless access for visitors/customers as you can run two separate, VLANed wireless networks through the same access points onto the network - one with limited access to the corporate LAN and the other with none.

Wireless networks can be implemented securely, but remember to separate your wired and wireless networks and implement secure encryption and authentication.

Friday, 14 August 2009

Data Anonymisation to prevent Data Leakage

With data leaks constantly in the news, I thought I would write a quick blog post about data anonymisation. The problem seems to be that people think it's perfectly acceptable to walk around with sensitive information on mobile devices and removable media. The solution, according to common thought, is to encrypt those devices. This is a solution that should be adopted, but after the more fundamental problem has been addressed. It should not be possible or necessary to store raw sensitive data on mobile devices or removable media!

Assuming that you need the data for business intelligence purposes and that the IT department can't or won't (for some good reason) allow this to be done online through a secure connection, then you must anonymise the data first and then encrypt it. Why do you need to know the names, addresses and credit card numbers of your customers when on the road, TK Maxx? Why do you need the names, addresses, dates of birth, national insurance numbers, salaries and bank details of your employees when away from the office, UPS? I'm afraid that the only reason I can think of to have the non-anonymised data is for fraudulent purposes (please send me a comment if you can think of a legitimate reason).

Drawing from Pierangela Samarati's session at IPICS, I'll give a very brief overview of data anonymisation. There are two basic techniques to anonymise data: generalisation and suppression. With generalisation, we use a more general value in place of the specific value, e.g. birth year rather than birth date, postal district rather than full postcode (KT1 rather than KT1 2EE), credit card issuer rather than full credit card number (1234 56** **** **** rather than 1234 5678 9012 3456), etc. Alternatively, we can suppress the sensitive information by removing it totally.
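A crude sketch of attribute-level generalisation in Python; the field names and formats are assumptions chosen to match the example table below:

def generalise(record):
    """Generalise/suppress a customer record before it leaves the building."""
    return {
        "name": "-",                                         # suppressed outright
        "dob": "**/**/" + record["dob"][-2:],                # keep birth year only
        "postcode": record["postcode"].split()[0] + " ***",  # postal area only
        "cc": record["cc"][:7] + "** **** ****",             # issuer (first six digits) only
    }

print(generalise({"name": "Alice", "dob": "02/02/64",
                  "postcode": "KT1 1AB", "cc": "1234 5678 9012 3456"}))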

Now there is a whole academic discipline surrounding data anonymity and how to achieve k-anonymity that I won't go into here. I'll just look at what the above means to data such as a normal company might want to use for business intelligence reasons, rather than surveys and data gathering purposes. In this sense, we are trying to protect the privacy of our customers, employees, etc., above all else, rather than have the minimum anonymity possible for the data set. I will use the following table to illustrate the anonymisation.

Name    | DoB      | Postcode | CC No.
Alice   | 02/02/64 | KT1 1AB  | 1234 5678 9012 3456
Bob     | 16/02/64 | KT1 1BC  | 1234 5678 9012 3467
Charlie | 08/04/64 | KT1 1CD  | 1234 6778 9012 3478
David   | 02/04/66 | KT1 1DE  | 1234 6778 9012 3489
Edgar   | 04/04/66 | KT1 2AB  | 1234 6778 9012 3490

There are many schemes for anonymising this data, but I'm going to concentrate on Attribute Generalisation combined with Attribute Suppression. This basically means that we will generalise each value at the attribute level (i.e. the same level of generalisation will be applied to all values). Secondly, we will suppress any attribute that uniquely identifies someone. Using minimal generalisation we would get the following table. ('-' denotes suppressed data and '*' denotes generalised data)


Name | DoB      | Postcode | CC No.
-    | **/02/64 | KT1 1**  | 1234 5678 9012 34**
-    | **/02/64 | KT1 1**  | 1234 5678 9012 34**
-    | -        | KT1 1**  | 1234 6778 9012 34**
-    | **/04/66 | KT1 1**  | 1234 6778 9012 34**
-    | **/04/66 | -        | 1234 6778 9012 34**

We have had to suppress Charlie's birthday, because she was the only one born in April 1964. Similarly, Edgar is the only one who lives in KT1 2**. However, we haven't achieved anonymisation here. If we know Charlie was born in April 1964, then this date doesn't appear in the table and only one date is suppressed, so we know her tuple in the table. Similarly, if we know Edgar lives at KT1 2AB, then we know that his is the last tuple. The credit card details should be generalised more than this as well, as others may store the last four digits of a credit card number, so it may be possible to cross-reference. Also, why do we need their credit card number for business intelligence? Surely the issuer is good enough? So, we can do the following.


Name | DoB      | Postcode | CC No.
-    | **/**/64 | KT1 ***  | 1234 56** **** ****
-    | **/**/64 | KT1 ***  | 1234 56** **** ****
-    | **/**/64 | KT1 ***  | 1234 67** **** ****
-    | **/**/66 | KT1 ***  | 1234 67** **** ****
-    | **/**/66 | KT1 ***  | 1234 67** **** ****

This gives us a full count of customers, their geographic locations, age and credit card issuer. I suggest that this is enough information to cover most queries that you may wish to run for business intelligence purposes and, therefore, the maximum that should ever be stored on a mobile device or removable media. This data should also be encrypted.

Of course, this doesn't solve all problems. What if you know Edgar was born in 1966? You now know his credit card issuer, which enables you to launch a directed phishing attack on him. Data Anonymisation can fail in the face of attack, particularly when there is external knowledge, which you have no control over. The moral is, don't store sensitive data on mobile devices or removable media. If this really isn't possible to avoid, then you must anonymise it first and encrypt it.

Tuesday, 4 August 2009

Zoomable, Non-Linear PowerPoint Presentations with pptPlex

OK, so many people have asked me how I do my presentations, and whether they could have a link, that I've decided to put the links and a short explanation on my blog. My presentations are all done in PowerPoint 2007, but I use a Microsoft Office Labs plug-in called pptPlex. From their website come the following quotes:

"pptPlex uses Plex technology to give you the power to zoom in and out of slide sections and move directly between slides that are not sequential in your presentation."

"...pptPlex can help you organize and present information in a non-linear fashion."

If you don't know what any of this means, then you should ask me to do a presentation :-) or have a look at their videos. It's very simple to install and use. However, remember that you need it to be installed on your presentation machine in order to give the Plex version of the presentation, otherwise it will just show as a normal PowerPoint presentation.

If the pptPlex Ribbon Tab doesn't display in PowerPoint...

There are several reports of the plug-in becoming disabled on some systems and the ribbon tab not displaying. There are solutions on the forums for this, but most of them have an error in the selection of which plug-ins to manage, so I'll quickly give an explanation here. If you have any other problems, don't ask me, use their forums.

  1. Click the (round) Office button and then click on PowerPoint Options
  2. Select Add-Ins from the left
  3. If pptPlex from Microsoft Office Labs appears in the disabled list then carry on, otherwise you have a different problem
  4. Right at the bottom, select Disabled Items from the Manage drop down list box and click Go...
  5. Select the add-in from the list and click Enable, then click OK in the PowerPoint Options dialog (you may need to shut PowerPoint down and start again).

Thursday, 30 July 2009

IPICS OCTAVE-S

OCTAVE-S stands for Operationally Critical Threat, Asset and Vulnerability Evaluation for Small organisations. It is a version of the full OCTAVE methodology aimed specifically at small to medium-sized organisations, i.e. those with up to 100 employees. OCTAVE is a risk-based strategic assessment and planning technique for security. It is a top-down approach that is driven by the business's missions and objectives, and is not technology focussed. OCTAVE-S is simply a streamlined version of OCTAVE, with simple worksheets and less expertise required. The outputs of OCTAVE-S should be similar to those of OCTAVE; it is just that it may be possible to shortcut some of the process in smaller organisations. OCTAVE itself is designed to be applicable to any organisation, no matter how large.

The main OCTAVE principles are as follows:
  • Core Information Security Risk Evaluation Principles
    • Self-directed
      • The organisation takes responsibility for the evaluation
      • The organisation makes the decisions
    • Flexible / adaptable in the face of...
      • Changes to best practices
      • Evolution of known threats
      • Technical weaknesses
    • A defined process
      • Responsibilities are set out and assigned to people
      • How activities should be performed is documented
      • Standards are set for documentation/artefacts: tools, worksheets, catalogues, etc.
    • A continuous process over time
  • General Risk Management Principles (general principles beyond InfoSec)
    • Forward looking – proactive
      • Identify future assets that may be significant
      • New classes of threat
    • Focus on the critical few
      • Resources are always constrained
      • Avoid spreading effort too thinly
    • Integrated management
      • Information security as a routine consideration for general business strategy
  • Organisational / Cultural Principles
    • Open Communication
      • Information sharing: avoidance of blame/judgment
    • Global perspective
      • Consult widely and integrate all views
      • Widen perspective to organisational goals
    • Based on teamwork

To find out more about OCTAVE-S visit the website, where you can download the Implementation Guide, which contains introductory materials as well as the actual guidelines and worksheets.

Wednesday, 29 July 2009

Lack of true Identity Verification forces need for EV SSL Certificates

What are EV SSL Certificates? Simply, they are Extended Validation SSL Certificates. What does this mean? Well, simply put, the Certification Authority goes to greater lengths to validate the identity of the person asking for (paying for) the certificate. An EV SSL Certificate will then make the browser address bar go green to certify that this really is the site you think it is and not a phishing or pharming site. It gives the user a very visual check of the validity of the website that they are using.

Isn't this what Digital Certificates were supposed to do in the first place? Yes, but the Certification Authorities were more interested in taking people's money than verifying that applicants actually were the people in question. This led to almost anybody being able to sign up for a certificate claiming to be almost anybody. This, obviously, isn't a workable situation, so EV SSL Certificates were introduced to do the originally intended job, only at a higher price than the standard SSL Certificate (in part due to the actual identity validation performed, no doubt). That being said, they are not expensive in the grand scheme of things and should be used far more widely than they currently are - for example, NatWest still doesn't use an EV SSL Certificate at the time of writing; instead they have ended up implementing Trusteer's Rapport at a much higher cost.

To give you some idea of the cost of a digital certificate, Comodo's price is £214 per year (US$359/€359) for a single fully qualified domain name (i.e. per website); this includes their 'Corner of Trust' logo. Compare this with somewhere near $1 per customer for Trusteer's Rapport, or even hosting fees and business profits! Admittedly, their cheapest SSL Certificate is only £41.95 per annum, but is £214 too much to ask when you are giving customers peace of mind, assurances over authentication and tackling phishing & pharming?

Why aren't normal certificates secure? Well, the problem is that most Certification Authorities don't do very much checking. Usually they check your domain name by sending you an email to an address on that domain. All this says is that someone who has access to an email address on that domain wants to set up a secure web server. They don't actually check who you are. They are more interested in whether you will pay than in whether they should issue the certificate. This was demonstrated at IPICS today, when it was shown that VeriSign had given out a Digital Certificate to someone using the name William Gates! They have also fallen for scams, where they were duped into issuing a code signing certificate for the Microsoft Corporation by someone proving the point that they are not careful enough.

                  I decided to see if this was the case with other organisations, and it is. I have set up an SSL certificate just by being able to view an email sent to an address on that domain. I also wanted to know if I could be Steve Ballmer - the new Bill Gates. So, I set up an email account: Steve.Ballmer@live.co.uk using details about him, such as his year of birth: 1956. I then decided to try Thawte out, as they provide free email certificates for personal use. Sure enough, after entering the data below, I was sent an email to my address with codes to verify myself. I now have a digital certificate to sign emails from Steve.Ballmer@live.co.uk.

                  Surname: Ballmer
                  Forenames: Steve
                  Date Of Birth: 1956/03/24
                  Nationality: the United States
                  Email: steve.ballmer@live.co.uk
                  Where were you born? Detroit
                  Where did you go to school? Detroit Country Day School
                  First company you worked for? Procter & Gamble
                  What is your spouse's Name? Connie Snyder
                  How many children do you have? 3

Now, Thawte has a little trick up its sleeve here, which aids security. Before they will put the name Steve Ballmer on the certificate, I must pass their Web of Trust, i.e. I must first convince some other users, by meeting them face-to-face, that I am indeed Steve Ballmer. However, if I could supply them with details such as a passport number and social security number, then I'd be set. So, I can still sign my email, but if users look closely at the signature and check the certificate, they will see that my name hasn't been verified. If they don't examine it that carefully, and with knowledge of what it means, then they will be fooled into thinking I really am Steve Ballmer - and why should the ordinary user know to do this? Comodo and VeriSign, on the other hand, provide no such backstop. So, I can now sign my email as Steve Ballmer. Here's my Public Key for Steve Ballmer from Comodo showing that I really am Steve Ballmer!

                  This isn't really good enough in this day and age of phishing scams.

                  Post Script Edit
Two things have happened since writing this blog post. Firstly, I have become aware of an attack on SSL Certificates that uses a null value inserted in the domain name to trick the CA into issuing a certificate for an invalid domain. For example, www.natwest.com[null value].phishers.org will result in an SSL certificate being issued to the phishers.org site that appears, in many browsers (though not all), to be valid for www.natwest.com. Link to blog post. This won't (or shouldn't) affect EV SSL Certificates though, only the Domain Validated ones.
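
To see why this fools browsers, consider a client that handles the certificate's Common Name as a C-style string: comparison stops at the first null byte. Here is a minimal Python sketch of the flaw (not any browser's actual code; the names are purely illustrative):

def c_style_name(cn: bytes) -> str:
    # Mimic C string handling: everything after the first NUL is ignored.
    return cn.split(b"\x00", 1)[0].decode("ascii")

# The CA validated ownership of phishers.org, so it issued a certificate
# whose Common Name is the full byte string below.
certificate_cn = b"www.natwest.com\x00.phishers.org"
requested_host = "www.natwest.com"

print(c_style_name(certificate_cn) == requested_host)    # True - the flawed client is fooled
print(certificate_cn.decode("ascii") == requested_host)  # False - comparing the full value is correct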

Secondly, Comodo, to their credit, do admit that this is a problem and are tackling it. They have sent me a link via email to a video clip, which in turn links to more information. That can be found here. The bottom line really is that these EV certificates are more secure, don't cost that much and should be the norm. As an industry we should be educating users to recognise and look for these security features.

                  Monday, 20 July 2009

                  IPICS Risk Assessment Slides

                  These are my slides on Information Security Risk Assessment, presented at the Intensive Programme on Information and Communication Security (IPICS). The topics covered are: the System-Holistic Approach to ICT Security; Risk Assessment approaches, strategies & terminology; Three Card RAG / Obstacle Poker; OCTAVE® - Operationally Critical Threat, Asset and Vulnerability Evaluation.

                  A PDF of the slides can be downloaded from here. (updated)

                  I will publish more information on the topics covered in due course (and if anyone asks). However, more information on Three Card RAG / Obstacle Poker can be found in a previous blog post.

                  Friday, 26 June 2009

                  The PCI DSS and Why It's Relevant to Everyone

Many of you will know that PCI DSS stands for the Payment Card Industry's Data Security Standard, and most of the rest of you have probably heard of it and wondered what it was. You may immediately say that you're not interested in the Payment Card Industry and want to navigate away but, before you do, you should know that many of the 12 recommendations are relevant to everyone. Actually, the PCI DSS recommendations are mostly common sense that we should all be implementing anyway. In this post I'll give a quick overview of the big 12 and how they can be applied to any network.

                  According to the PCI Security Standards Council, "The core of the PCI DSS is a group of principles and accompanying requirements, around which the specific elements of the DSS are organized." The 12 recommendations that they put forward can be generalized as follows and should be adhered to by all organisations:

                  1. Install and maintain a firewall configuration to protect private/sensitive data
                  2. Do not use vendor-supplied defaults for system passwords and other security parameters
                  3. Protect stored data
                  4. Encrypt transmission of private/sensitive data across open, public networks
                  5. Use and regularly update anti-virus software
                  6. Develop and maintain secure systems and applications
                  7. Restrict access to private/sensitive data by business need-to-know
                  8. Assign a unique ID to each person with computer access
                  9. Restrict physical access to private/sensitive data
                  10. Track and monitor all access to network resources and private/sensitive data
                  11. Regularly test security systems and processes
                  12. Maintain a policy that addresses information security

Dealing with each very briefly, I’ll try to show how these can be applied to a ‘normal’ network rather than one involved in processing payment details. Firstly, we all know that we must have a firewall and block access from the outside world. However, just having a firewall isn’t enough if it isn’t configured and managed properly. Every port and service open through it must be justified and clearly documented. There must be policies and procedures in place to control the opening of ports and services through the firewall, so that each one can be assessed for its impact on overall security. Wireless networks should also be separated from wired networks by the firewall – this doesn’t mean that the wireless network has to be cut off from the Internet or that it has no access to the wired network, just that restrictions and filtering should be in place between them. Obviously, there should be no direct access from the outside world into the internal network; a DMZ should be set up for all publicly accessible machines. Finally, personal firewalls should be installed on all mobile machines, and hardware firewalls installed at users’ home premises if external access is to be allowed.
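
As a starting point for that justification exercise, you can periodically compare what is actually reachable through the firewall with the documented allowlist. A minimal sketch, assuming a hypothetical external hostname and a hypothetical documented port list:

import socket

DOCUMENTED_PORTS = {80, 443}      # ports justified in the firewall documentation (assumed)
HOST = "gateway.example.com"      # external interface of the firewall (hypothetical)

def is_open(host, port, timeout=1.0):
    # Return True if a TCP connection to host:port succeeds.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Anything reachable that isn't on the documented list needs investigating.
for port in range(1, 1024):
    if is_open(HOST, port) and port not in DOCUMENTED_PORTS:
        print(f"Port {port} is open but has no documented justification")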

The second should be obvious and doesn’t really need any explanation, other than to say: disable as many features as you can, and don’t allow any configuration or management from outside your network. There are ways to enable secure remote access to the network, so that management of servers and devices can still be carried out as if from within the network.

The third and fourth requirements go together: strong authentication is required and permissions must be set appropriately. However, we should also encrypt highly sensitive data and removable devices/mobile machines, with proper key-management processes, and use Digital Rights Management (DRM) to prevent data leakage. All data transmitted across public networks should be encrypted (whether sensitive or not).
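
For stored data, the principle can be as simple as never writing sensitive fields to disk in the clear. A minimal sketch using the third-party Python 'cryptography' package; the key handling here is deliberately simplified, and in practice the key must come from a proper key-management system, never sit alongside the data it protects:

from cryptography.fernet import Fernet

key = Fernet.generate_key()   # illustration only: in reality, fetch the key from a KMS/HSM
f = Fernet(key)

# Encrypt before the data ever touches disk or a removable device...
token = f.encrypt(b"A. Customer,4111111111111111,12/12")
print(token)

# ...and only holders of the key can recover it.
print(f.decrypt(token))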

Is there anyone who doesn’t follow point 5? You’d also think that the sixth recommendation would be followed by all organisations, but this isn’t the case. You must have proper patch management procedures, as these are critical to closing potential vulnerabilities. Yet how many times do you see a server or service lagging behind the latest patches? I have seen many commercial outfits developing web applications that are not secure, because security is an afterthought rather than embedded in the development cycle. This means that, even if you don’t write your own applications, you must check the third-party applications you intend to use on your network. Make sure that they have been developed using secure coding guidelines and have been independently tested. As an example of why you shouldn’t take this for granted, Apple has no formal security program (ref.).

The seventh recommendation sounds obvious again, but it isn’t followed even by many of those covered by the PCI DSS: restrict access to your data on a business need-to-know basis. Does your network administrator need to know everyone’s personnel details and payroll? No, so make sure they don’t have access. This is an often overlooked problem; many organisations rely on people not looking rather than actually protecting the data. How many examples can you quote of companies, and government organisations, losing confidential data that the person responsible should never have been able to access in the first place? One that springs to mind is a large retailer that lost the names, addresses and credit card numbers of millions of customers after they were copied onto a laptop. Why would anyone need all that data? Presumably for business intelligence and data analysis, but that can, and should, be delivered live through secure channels, not done from a local copy. And why would you need the credit card numbers for that anyway? The point is that it shouldn’t be possible to extract this data and store it on a laptop in the first place. The next two recommendations are related to this, and to the earlier point about strong authentication: you must not allow employees to share user accounts (especially executives giving passwords to their PAs!).

                  Recommendations 10 and 11 deal with monitoring and testing the network. In order to make sure that your system is functioning as expected and that your security mechanisms are appropriate and don’t have vulnerabilities, you need to monitor the network and regularly test it for vulnerabilities. Every time a significant change is made to the network, its impact on the network should be assessed and tested.

                  The final recommendation is to maintain an information security policy. This must cover all forms of information though, not just electronic. It should be very clear and all employees should be trained in its content and abide by the policy or face punitive measures. Employees should have to sign a copy of the security policy, stating that they have read it, understand its content and agree to abide by it. However, this has no teeth if you do not adequately train all of your employees and update them regularly on the policy and any changes.

                  The PCI DSS recommendations can and should be adopted by all organisations, not just those involved in payment card transactions.

                  Tuesday, 16 June 2009

                  Trojan Keylogger Screensaver Compromising Novell Client for Windows

I've been talking about 2-factor authentication and improving authentication mechanisms for a while now, trying to get companies to implement such solutions. One such organisation uses the Novell client for Windows on Windows XP. When a user attempts to log in, they are not required to press Ctrl+Alt+Del. Many forums and reviews state that this is an advantage, as users don't like it! But the point of pressing Ctrl+Alt+Del (CAD) on Windows is that only the operating system can intercept it, so it halts any application impersonating the login screen and kills Trojans, etc. Novell have replaced MSGina.dll with NWGina.dll so that they can capture the CAD key combination; this is the standard way to override the built-in login screen and replace it with a custom one. However, Novell have also decided to allow administrators to eliminate the need for the CAD key combination. This, obviously, reduces the overall security of the system.

                  I know that there are many ways to write a keylogger, some more sophisticated than others, but a lot of these low-level system keyloggers can be picked up by AV (Anti-Virus) software. I wanted to show that you don't need to be able to create this type of keylogger to obtain the username/password combination from a system that doesn't require the CAD key combination. Another reason for not going down this route is that I wanted to show that this could be put on a live system, not just in the lab. Therefore, it mustn't trigger the AV when installed or when running. The solution I decided to try was to write a screensaver to mimic the Novell login screen, capture the username/password, write these to a web service and then hand back to the real Novell client.

I have just listed a few things there without explanation. Why don't I just take the details, log the user in myself, then pass the username/password to my web service? Well, that would require either modifying NWGina or making system calls that would trigger the AV and other protection mechanisms, and I wanted this to remain undetected. Basically, a screensaver is just an application that is run automatically, rather than explicitly by the user. Indeed, when you write a screensaver you produce an .exe file and simply rename it to .scr. Obviously there are a few things you have to do to make it function as a screensaver, such as being full-screen and staying on top of all other windows. You also have to handle mouse and keyboard events to end the screensaver and return control of the desktop to the user. However, instead of immediately exiting and returning control, why not pop up another form that mimics the login dialog and fools users into giving us their password? It's easy really.
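
To illustrate just how little is involved, here is a toy sketch of those screensaver behaviours in Python/tkinter. The real PoC was a Windows executable renamed to .scr; this is only an analogy of the mechanics described, not the actual code:

import tkinter as tk

root = tk.Tk()
root.attributes("-fullscreen", True)   # cover the whole desktop
root.attributes("-topmost", True)      # stay above all other windows
root.configure(bg="black")

# A normal screensaver exits on any keyboard or mouse activity...
root.bind("<Key>", lambda event: root.destroy())
root.bind("<Motion>", lambda event: root.destroy())
# ...and that exit hook is exactly where a Trojan can first pop up a
# fake login form instead of returning the desktop immediately.

root.mainloop()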

So what do we do once the user has entered their details? Well, the screensaver writes the data to a simple web handler that I wrote for this purpose. All it does is take an HTTP GET request with URL parameters and return a simple text confirmation to the screensaver. I don't care that it's in plaintext and somebody might read it. Obviously, I would have to anonymise the server so that it couldn't be traced, but I'm not actually trying to deploy this. Once the screensaver has confirmation that the username/password has been logged, it displays the 'incorrect password' dialog and immediately exits to reveal the real login dialog. In this way, the user will most likely think that they have mistyped their password. As a little trick to make the ruse more plausible, I turn CAPS Lock on as soon as the user clicks the login button.

The final thing needed to evade detection was to set a limit on how often the Trojan would appear. Nobody will believe that they have mistyped their password at every single login, so I decided that a 1-in-5 rule would probably be acceptable; users might well accept mistyping their password 1 time in 5, particularly with the CAPS Lock trick as well. You could go for far less frequently than this, but then you'd have to make sure it stayed on the machine for a while to capture the passwords. Again, this is simple to do: generate a pseudo-random number to determine whether to show the Trojan dialog or exit the program, as sketched below.
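
A sketch of that gate; the function names are illustrative stand-ins, not the actual PoC code:

import random
import sys

def show_fake_login_dialog():
    # Stand-in for the form that mimics the real Novell login dialog.
    print("fake login dialog shown")

def on_screensaver_wake():
    # Show the ruse on roughly 1 wake-up in 5; otherwise behave
    # exactly like a normal screensaver and exit immediately.
    if random.randrange(5) == 0:
        show_fake_login_dialog()
    else:
        sys.exit(0)

on_screensaver_wake()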

Having written the screensaver, I had to test it. Well, I have access to a corporate laptop and desktop with the Novell client and McAfee AV software installed, and it works a treat: very realistic and not picked up by the AV. I have asked people to look at it and try it (without using their real username/password combination) and they have said that they wouldn't have realised. I'm not sure it's particularly ethical to try this on a real network, so I'll probably leave it there; however, there is still the issue of mass distribution of such a Trojan. I don't believe that this would be terribly difficult though, especially with a bit of social engineering as well.

                  So, how do you defeat this method? Don't disable CAD, or if it is disabled then press it anyway!

                  Wednesday, 10 June 2009

                  How to tell if your Firewall is a full DMZ

                  Most firewalls have a 'DMZ' setting, but are they actually a full DMZ firewall? DMZ is a term often used in network security, but it can mean two different things to manufacturers and practitioners. Technically, there is no such thing as a DMZ in a firewall architecture, only a screened-subnet firewall, screened-host firewall or an exposed host, but the term is the industry standard when talking about allowing access to information servers (e.g. web, mail, etc.) from the Internet.

So what is a DMZ? DMZ stands for Demilitarized Zone and is, obviously, a military term for the no-go area between two armies where no military activity is allowed. In network security terms, however, it is a secure subnet that separates the Internet from the internal machines on your network. This becomes a logical place to implement any Information Servers, as these can be partially opened to the Internet whilst not allowing direct access to the internal network. So are all DMZ firewalls the same? Well, no they aren't. They range from small SoHo (Small Office/Home Office) routers up to full enterprise-level firewalls. Of course, the majority of SoHo routers are not DMZ firewalls at all; their DMZ features are actually exposed hosts, which have less protection than a normal machine and little or no separation from the internal network. This is bad, as any successful attack on the exposed host then has free rein over the internal network.

                  Moving towards proper enterprise firewalls, we still have two different types of firewall often referred to as a DMZ firewall; they are either logical or physical DMZ firewalls. Obviously, the physical DMZ firewall offers the best level of security, but what's the difference? The difference is that they are a screened-host and a screened-subnet firewall respectively. Although your firewall arrives as a single rack-mounted unit, internally it is made up of a set of components - most notably packet filtering routers and a bastion host. The packet filtering routers simply permit or deny access based on IP address, TCP port number or protocol type. More sophisticated functionality is implemented by using proxies on the secure bastion host. The diagram below shows the logical setup of a full DMZ firewall, with a separate screened-subnet (purple) between the insecure Internet (red) and the secure Intranet (green).

The point here is that no communication is allowed directly from the Internet to the Intranet. All traffic can be forced through the Bastion Host to perform URL, content, virus and SPAM filtering, among other things. In the screened-host solution, there is no internal packet filtering router or separate screened-subnet. The information servers are logically separated from the internal machines, but not physically. They may be on different VLANs (Virtual Local Area Networks) and physical firewall ports, but they are only separated by the logic of these mechanisms. One way to tell if your firewall is a screened-host is to see if you can set an Access Control List (ACL) between your information servers and your internal machines, and whether virus checking will work between them as well. If you can't set up an internal ACL or perform real-time virus checking on internal traffic, then you probably have a screened-host firewall.
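
That test is trivial to automate. A minimal sketch, run from an internal machine, probing a DMZ server on a port your ACL claims to block (the hostname and port here are hypothetical):

import socket

DMZ_SERVER = "dmz-web.example.local"   # hypothetical DMZ host
BLOCKED_PORT = 3389                    # a port the internal ACL should deny

try:
    with socket.create_connection((DMZ_SERVER, BLOCKED_PORT), timeout=2):
        print("Connected: no internal filtering - probably a screened host.")
except OSError:
    print("Blocked: internal filtering is in effect - a true screened subnet.")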

                  The most secure way to implement a firewall is to have the full screened-subnet firewall, depicted above. This solution can have multiple internal connections that are all separate networks connected via the firewall. The advantage of this is that there is physical separation between your information servers and your internal machines. Indeed, it is possible to separate your organisation internally into departments or by access medium, i.e. wired and wireless networks are physically separated and wireless networks are treated as less secure. Another 'best practice' is to have your servers separated from the rest of your network and restrict access to them. If you have an intranet server, for example, why allow more than HTTP (port 80) and HTTPS (port 443) access to it from your network? If we lock the network down in this way we can better halt the spread of malware on our internal network even if we do get infected. This is moving towards bringing the firewall in from the edge of the network to the core, to protect our network as a whole. Remember, firewalls can't protect against traffic that doesn't go through them.

                  Tuesday, 9 June 2009

                  The 5 Restoration Phases of a Secure and Dependable System

We all want our systems to be secure and dependable; indeed, the two topics are interlinked. Dependability requires high-availability management, which has several aspects to it. We can try to achieve Fault Avoidance, with fault prevention and fault removal, but this isn't actually possible in all cases. For example, hard disk drives will physically wear out due to their moving parts, power supplies do not run indefinitely, etc. Therefore, we move towards Fault Acceptance. Fault acceptance relies on fault forecasting, to try to determine the most likely causes of faults, and fault tolerance, to enable the system to continue functioning in the event of a fault. With fault tolerance we build redundancy into the system so that faults do not result in system failures. However, there are times when even our most fault-tolerant systems will fail. What do we do then? Obviously, we need to recover as quickly as possible.

                  The 5 restoration phases of a system are as follows:
                  1. Diagnostic Phase - find the fault, diagnose the problem and determine the appropriate course of action to recover
                  2. Procurement Phase - identify, locate, transport and physically assemble replacement hardware, software and backup media
                  3. Base Provisioning Phase - configure the system hardware and install the base Operating System (OS)
                  4. Restoration Phase - restore the entire system from the backup media, including system files and user data
                  5. Verification Phase - verify the correct functionality of the entire system as well as the integrity of the user data

                  It can sometimes be quite hard to diagnose the actual root cause of a fault, as certain faults will sometimes show up in confusing ways. A few months ago I was investigating a problem with a machine that I was told was an OS problem - "Windows keeps blue screening with errors; bl***y Microsoft!" However, on further investigation, it had absolutely nothing to do with the OS and therefore Microsoft. Actually, there was a memory fault. One bank of RAM was faulty and was causing so many errors that the OS couldn't recover. Simply changing the pair of RAM modules sorted the problem out and the machine has been running reliably since. The problem with this type of fault is that it manifests in such a way as to look like a different fault.

The procurement phase can be tricky, as it can take a long time to get components delivered. This is where fault forecasting comes in: if we know the most likely faults, then we can keep stock of those hardware components and make sure that any system recovery media is to hand. Obviously, we need backup media to be stored off site as well, but we will need copies on site for quick restoration when the building hasn't suffered damage. Of course, many hardware vendors will offer Service Level Agreements (SLAs) covering how quickly they can replace or repair your hardware, but some things you may want to deal with in-house, as that will still be quicker than the standard 4-hour fix (or longer).

The final three phases all centre around restoration of data from backup media. This brings up the point that you must back up your system, not just your data. How long would it take you to install the OS from scratch, install all the additional services and make all the configuration changes? Far too long. If you have backed up your system state, then it can be restored onto a base OS much more quickly and without mistakes or omissions. Another aspect to think about is how you have backed your system up. You need to choose a backup scenario that suits the amount of data and the speed at which you need to recover. Remember that a server with 200GB of data backed up onto a DLT tape drive supporting an average transfer rate of 5MBps will take over 11 hours to restore, assuming that you restore from the full backup only; any incremental backups taken since the last full backup will also have to be restored. OK, we can go faster than this, even with tape backup, but the solution needs to fit your system, and a live backup solution may be required. This is where virtualisation of servers can help dramatically. Virtualisation is more often sold as 'green IT', being more efficient and cheaper. However, a major benefit is the ability to snapshot running machines and redeploy them in seconds. If one hardware box fails, you can migrate the virtual machine onto another box until the first is fixed. This can take just a few seconds or minutes, depending on the architecture of your solution.
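
The arithmetic behind that restore-time figure:

# 200 GB restored from DLT tape at a sustained 5 MB/s.
data_mb = 200 * 1024          # 200 GB expressed in MB
rate_mb_per_s = 5
hours = data_mb / rate_mb_per_s / 3600
print(f"{hours:.1f} hours")   # ~11.4 hours, before any incremental restores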

Availability is determined by the total downtime in the period covered and is usually expressed as a percentage, e.g. 99.9% availability, which equates to around 8 hours 45 minutes of downtime per annum. The downtime is the sum of all outages in that period, so we need to decrease both the frequency and the length of those outages. The frequency is reduced by building fault-tolerant systems and the length by having good restoration policies and practices. It is vital that IT staff know the restoration policy and have practised it. You need to set out a clear timeline of what has to happen during restoration, as certain systems and services will need to be restored first. Your database server may be the most important server to you, but without the network, DNS, DHCP and directory services no other machines will be able to connect to it anyway, and it may itself rely on some of those services during start-up. Also, don't leave it until you are restoring a failed system in anger to find out whether the policy works. You must practise and test the policy to make sure that it does.
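
The downtime figure quoted above falls straight out of the definition:

# 99.9% availability over a non-leap year.
period_hours = 365 * 24                 # 8760 hours per annum
downtime_hours = (1 - 0.999) * period_hours
print(f"{downtime_hours:.2f} hours")    # 8.76 hours, i.e. about 8 h 45 min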

                  Wednesday, 3 June 2009

                  User-Friendly Multi-Factor Authentication with GrIDsure

                  I have been tasked with rolling out a trial multi-factor authentication system that must be user-friendly, secure, low-cost and have zero impact on the existing network and users who won't be on the trial. That should be simple!

                  A trawl round the InfoSecurity Europe show always helps, as you can get the latest state of play from all the major vendors. This year there were the obvious keyring tokens, SmartCards, USB tokens, SMS solutions and some innovative software solutions, including GrIDsure. Before making any decisions about the solution to go for, several things need to be decided, among which are: how much security is required? What is considered user-friendly to a normal, non-technical user? What metrics should we use for authentication?

                  To answer these we need to look at what authentication is first: Authentication is the binding of an identity to a subject. The subjects we're talking about in this case are users and we're trying to bind their digital identity, i.e. user ID, to them as an individual. There are four metrics we can use to query identity, namely:
                  • What they know - e.g. password, PIN, secret/personal information, etc.

                  • What they have - e.g. token, SmartCard, phone, etc.

                  • What they are - i.e. biometrics

                  • Where they are (in space and time) - e.g. particular terminal, logon hours, etc.

The most common form of authentication is username/password, but this is outdated, insecure and usually fairly easily cracked (but that's another post). The problem is that each of these metrics on its own is not very secure (yes, even biometrics are not that secure used on their own in a large environment). If we use a combination, then we can obtain secure, strong authentication (by strong authentication we mean that we can be confident of the subject's identity, as it would be very hard for an impostor to present all the required metrics). RSA-type tokens and SmartCards combined with a PIN or password are easy and obvious choices, but they have issues. They are expensive to deploy, as you have hardware costs as well as the authentication servers. From a user perspective, they have to carry something else around with them. That's not usually too much of a problem, as these are small devices or credit-card sized, but a SmartCard also requires a reader. Lost or damaged hardware (or tokens whose batteries have run out) has to be replaced at a cost.

OK, so we could use SMS, whereby the user receives a text message containing a one-time password. This is often done by sending one on demand when the user starts the login process. But what if the network is down? What if there are delays on the network? Well, we can deliver the password up front, i.e. when they log off we can send them the password to use the next time they want to log in, but then it's hanging around on their phone, which they might lose. So what's the solution? Well, I saw two products that are noteworthy: Swivel Authentication Solutions and GrIDsure.

Swivel's solution relies on either secure SMS or a Turing image (one that is human-readable but not computer-readable). The Turing image shows two rows of numbers: the first is 0 to 9 in order, the second is the same numbers in a random order. The idea is that you have a PIN that you remember, and you read off a one-time PIN from the image. So, if your PIN were 2468, then your one-time PIN for this login attempt would be 7193 from the image below. A keylogger is defeated by this, as that one-time PIN won't be valid next time and it doesn't reveal the base PIN that you remember. However, if someone uses keylogger and screen-capture malware together, the base PIN could be deduced - possibly a big ask. You could also shoulder-surf the pattern. That being said, this is very easy to deploy and relatively cheap. Users find it fairly simple to use as well, and it has to be better than a static PIN.
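
A minimal sketch of the scheme; the permutation below is an assumption, chosen so that the example matches the one in the text:

import random

def one_time_pin(base_pin, permuted_row):
    # Read off the digit sitting beneath each digit of the memorised PIN.
    return "".join(str(permuted_row[int(d)]) for d in base_pin)

# Bottom row of a hypothetical Turing image (the top row is 0-9 in order).
row = [5, 2, 7, 4, 1, 0, 9, 8, 3, 6]
print(one_time_pin("2468", row))        # 7193, as in the example above

# The server generates a fresh permutation for every login attempt:
print(one_time_pin("2468", random.sample(range(10), 10)))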

However, there is another solution - GrIDsure. With GrIDsure, you choose a pattern of cells on a grid to remember. The grid size and pattern length are configurable, but the standard is a four-cell pattern on a 5x5 grid. You remember this pattern and, when you attempt to log in, you are presented with a 'random' grid of numbers from which you read off the digits in your pattern. For example, if your pattern is the tick that GrIDsure use as their demo and you are presented with the grid shown below, then your one-time PIN would be 0649. This changes every time you log in. It also defeats keyloggers and shoulder surfing, because each digit appears on the grid more than once: even if someone sees or logs the grid and watches you type 0649, there are 81 patterns that would produce that PIN on the grid shown below. It is important to remember that you do not type into or click on the grid. The grid is just an image; you type the PIN on a standard keypad.
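
A sketch of the idea; the grid generation and pattern here are illustrative, and a real implementation would balance the digits so that each is guaranteed to appear more than once:

import random

def new_grid(size=25):
    # 25 cells drawn from the digits 0-9, so digits repeat across the grid;
    # that repetition is what makes the one-time PIN ambiguous to an observer.
    return [random.randrange(10) for _ in range(size)]

def one_time_pin(grid, pattern):
    # pattern holds the cell indices (0-24) of the user's secret shape.
    return "".join(str(grid[cell]) for cell in pattern)

pattern = [6, 12, 18, 14]               # a hypothetical tick-shaped pattern
grid = new_grid()
print(one_time_pin(grid, pattern))      # different for every fresh grid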


This can be made more secure by deploying the grid on your mobile phone as a Java applet rather than in the web page, thus defeating screen capture. The grids, in this instance, are seeded from the phone's unique ID, so an attacker needs your actual phone, not just the applet running on their own. This is true multi-factor authentication, as it requires the physical phone and a pattern that the user knows; without both, you cannot authenticate. It is also cheap to deploy, as most users already have a mobile phone, and those who don't can drop back to the single-factor one-time passcode by displaying the grid in a browser. Users find it very simple to use too, as many people already remember their PINs as a pattern anyway. It also 'plays nicely' with other authentication, i.e. you can still use username and password for some users and GrIDsure for others, as the desktop client accepts both. Similarly, the web portal can interface with virtually any web-based application as a single sign-on solution. Of course, existing users can still hit the services directly and use traditional authentication until this has been rolled out to all.
