Tuesday, 28 December 2010

True Random Numbers from Random.org

Much of security relies on randomness - encryption keys should be random and random passwords are more secure than dictionary words or predictable sequences. The problem is, how do we generate a random number?

Well, actually, this is a trick question. The answer is that you can't generate random numbers, but you can observe them. Most programming languages give you a random number generator, so why not just use that? Well, it's not actually a random number generator, but a Pseudo-Random Number Generator (PRNG), or more accurately a Pseudo-Random Sequence Generator (PRSG). Given the same seed value, it will produce the same output every time. Try seeding the random number function in your favourite programming language then run your program a few times. You should see the same numbers coming out each time.

The reason for this is the function used to produce random numbers is just a mathematical formula that takes an input and gives an output. To have a random number out, you need a random starting value. Most will seed themselves on the clock, but this isn't random; it isn't even unpredictable. A simplistic example of a PRNG, as given by Knuth in his seminal books, is as follows:

X = (a*X+c) mod m

Random number = X/m for some suitable large prime number m and fixed values a and c both less than m (indeed c is usually a small number <10).

This can be seeded by setting X to the seed value and will give the same sequence of pseudo-random numbers out, as can be seen. However, it isn't random. If I know your seed value I can recreate your sequence of numbers. If you seed it on the clock it is often possible to work out a window of opportunity and obtain a range of seed values. Admittedly, this could be large, but an exhaustive search of these would be quicker than breaking the code that relies on them in many cases. Recently, a large Linux distribution was found to have a flaw in its key-generation that introduced a major weakness into the RSA public-key codes generated on those machines. This was due to predictability of the keys and a lack of randomness.
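As a sketch of the formula above, here is a minimal linear congruential generator in Python (the constants a, c and m are illustrative, not from any particular library). Running it twice with the same seed shows exactly the "same seed, same sequence" behaviour described:

```python
# Minimal sketch of a Linear Congruential Generator (LCG), the PRNG form
# quoted above: X = (a*X + c) mod m, scaled into [0, 1).
# The constants are illustrative only.
class LCG:
    def __init__(self, seed, a=1103515245, c=12345, m=2**31):
        self.x = seed
        self.a, self.c, self.m = a, c, m

    def next(self):
        self.x = (self.a * self.x + self.c) % self.m
        return self.x / self.m

g1 = LCG(seed=42)
g2 = LCG(seed=42)
# Identical seeds produce identical "random" sequences
assert [g1.next() for _ in range(5)] == [g2.next() for _ in range(5)]
```

Anyone who knows (or can guess) the seed can reproduce the whole sequence, which is the attack described above.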

So, what can we do? We can observe randomness in the natural world. Random.org uses background white noise as a source of randomness. This gives good randomness and distribution of numbers. They offer several options to generate random numbers, sequences or even passwords. An example of their random number service is given below. I'm not saying that they are the best option or the only option, but you must use truly random numbers in your cryptography and secure systems.

Saturday, 18 December 2010

HDD Tools & Other Malware Removal

Recently I had someone come to me with their laptop saying that they had a new anti-virus program that they didn't remember installing and that 'other things' on their laptop didn't seem to work any more. The same thing happened to a corporate desktop machine I was asked about a couple of weeks later, that was originally running McAfee. Finally, two days ago I saw another corporate machine running McAfee that was saying that it had a hard drive failure. A tool, called HDD Tools, then automatically ran to diagnose the problem and stated that if they purchased the full HDD Tools product then it could fix the problem.

Each of these was a piece of malware that had infected the machine and was trying to get the user to enter their credit card details into a website so that money could be taken from their account and maybe their card cloned. These malware programs go along with the fake anti-virus software that the APWG have reported a huge rise in recently. These are programs that purport to be useful software that will fix a problem you are experiencing. The truth of the matter is that it is this software that is causing the problems in the first place, and paying the money will just cause more trouble.

The later breed of scams using these supposedly useful tools do one of two things in general - either they cost you money and they may try to clone your card, or they will enlist your machine in a botnet. For a discussion of this type of malware and its proliferation, see this blog post.

Here I wanted to give you a couple of simple steps to remove this type of software if you do get infected. Obviously, it should go without saying that you should do everything possible to avoid getting infected in the first place rather than try to recover from it - the damage may already be done. However, there are many similarities between these programs and you will need to remove them. A simple procedure, as follows, can often get rid of them (N.B. this will not always work and if you make a mistake you can make things much worse).
  1. Detect that you have rogue software (malware) on your system. This isn't always hard as, in the case of HDD Tools, it will keep popping its window up and not allow you access to the C:\ drive of your computer. Other things to look for are if your AV product doesn't work or if you get strange services appearing in the Task Manager. Often the malware will stop Task Manager from running by disabling the Run command and the right-click functionality on the Taskbar. However, sometimes you can still get access to it by pressing Ctrl+Shift+Esc. Look for processes with random names; HDD Tools uses a number like 20418112.exe, that will change with each infected machine.
  2. Reboot your machine in Safe Mode by pressing F8 during boot (that's the function key F8, NOT press the F key then the 8 key). If your machine boots up normally then you didn't press F8 early enough - you need to reboot again. You need to go into Safe Mode as the malware will prevent you from deleting it normally by having a background service running. This will not be started in Safe Mode, so you can remove it.
  3. Once in Safe Mode, you can begin to remove the malware. First, you need to find out what the malware is and where it's stored. Mostly the malware won't appear in the list of installed programs so can't simply be uninstalled, but that's worth a check. However, sometimes this can be used to reinfect the machine, so be wary. If you have rogue software on your system then it will probably have created some kind of shortcut on the desktop or in the Start menu to make it seem legitimate. Have a look at where this points to and what the name of the file is. Other places to look are in the Startup folder and in the list of services installed on the machine. In the case of HDD Tools, it installs a desktop icon and Start menu folder.
  4. Navigate to the path in Windows Explorer to find all the files you are looking to remove, but don't remove them yet. HDD Tools stores its files in your temporary folder, e.g. C:\Documents and Settings\username\Local Settings\Temp on Windows XP, or C:\Users\username\AppData\Local\Temp on Vista/7. You may need to show hidden files and folders (by changing the settings in Folder Options) to be able to see this folder in Windows Explorer.
  5. Run a file search on your machine to see if there are any other instances of those files anywhere else for you to remove.
  6. Run Regedit from the Run... command to open up the registry editor (Warning: messing around with the registry can ruin your machine). Now run a search for the filename in the registry.
  7. You will need to go through the registry to remove all references to the malware and any keys that it has created. In general, if the only entries in the key relate to the malware you can remove the whole key; otherwise just remove the values. These often appear under HKEY_LOCAL_MACHINE or HKEY_CURRENT_USER in the Software\Microsoft\Windows\CurrentVersion key.
  8. Once you have deleted these, you can go back and delete the original files.
  9. Reboot normally and check your machine.

It is still possible to get infected even if you have a properly managed device with Anti-virus software installed. The problem is that they are not 100% effective. See 'How secure is your AV Product?'

Wednesday, 22 September 2010

McAfee Secure Short-URL Service Easy to Foil

McAfee have launched a Beta URL shortening service with added security features. As Brett Hardin pointed out, they are a little late to the game. However, there are so many abuses of URL shortening services that I commend them for trying.

Basically, what their service does is allow you to create short easy URLs (like any other service). However, unlike other services, when you click on the link, it opens a frames page with the content in the bottom frame and the McAfee information in the top frame. This information includes details about the domain you are connecting to, the type of company it's registered to and a big green tick or red cross to tell you whether the site is safe or not. This is decided by their 'Global Threat Intelligence', which will block known bad URLs and phishing sites. That's good, if it works.

I said above that I commend them for trying to provide this service. There are some obvious failings in their solution though, that render their protections useless other than to make it easier for people to phish users as the page has the McAfee stamp of approval. Below is their site working properly to block a known bad phishing URL.

As you can see, this site was blocked and marked as a phishing URL, which it was. Excellent, it's working! Hold on a minute though. Have a look at the screenshot below where I can access the same URL through their service by embedding it in an iframe. I now get the big green tick and I'm told that it is safe. You can see from the source that the iframe is showing the exact same URL as was blocked before. Incidentally, the page says that the site is a Business Internet Services company, which is extremely misleading as I can assure you that this wasn't put on a domain run by a Business Internet Services company.

Also, what if I code my page to refuse to be loaded in a frame? Then the service falls down again. The screenshot below is of Twitter accessed through this service. The problem is that I can hide all sorts of other links in the page to fool McAfee and the user won't see them. I know McAfee will block these URLs in time, but they will only be blocking the host page and they will have to block all of them. Also, if you click on a link within the page that directs you to another domain, that link is not checked, so I could just redirect you to a phishing URL and you'd still get the big green tick.

It's a nice idea, but it just doesn't work. Interestingly, other services also have some security in them. TinyURL, for example, wouldn't allow me to create a short URL for this phishing site in the first place as it was recognised as such. McAfee happily let me produce the short URL, they just blocked it later - not such a good strategy in my opinion. I know that a new phishing URL would fool TinyURL as well, but I particularly chose a URL that had been around for the best part of a month to give them a chance and I think TinyURL has done better. Incidentally, TinyURL also allowed me to produce a short URL for my test page. One good thing about TinyURL is the preview facility, but that doesn't protect me against a site that looks like the real thing.

Moral: follow any links at your own risk and don't think that a green tick makes it safe!

Friday, 23 July 2010

IPICS 2010 Network Security Slides

My slides on Network Security and Steganography, presented at the Intensive Programme on Information and Communication Security (IPICS) 2010, can be downloaded below. The topics covered under Network Security are: Access Control Devices, Firewalls, Network Protection, Network Authentication Protocols, TLS, VPNs & Remote Access. The Steganography slides cover examples of: Image, Network, HTTP and Twitter Steganography.

A PDF of the Network Security slides can be downloaded from here.

A PDF of the Steganography slides can be downloaded from here.

Saturday, 26 June 2010

System Recovery with Comodo's Time Machine

Comodo's Time Machine is a software application that runs on your Windows machine and periodically (either manually or automatically) takes snapshots of your system. You are then able to roll back to any of these snapshots in the future. Indeed you can jump backwards and forwards in the tree and new branches appear as you make changes to the system.

The idea behind it is that if you suffer any problems with corrupted software, malware, etc., then you can roll back to a known good state and start again. You can lock snapshots so that they don't get deleted and then clear out the ones that you no longer want to keep. This is quite important, especially if you take automatic snapshots. You have to remember that every change made to the computer (i.e. every time you run it or change a file) is stored. Once a new snapshot has been created, changing a file leaves you with the new version on your system as well as the old one. Because of this, it requires a fair amount of space on your system. However, the upsides are fairly obvious.

I have been using it quite a lot recently on test boxes while performing testing of security software against various malware and other attacks. It enabled me to perform a test, roll back to the pre-test state and perform it again or try another attack from a fresh system. It greatly reduced the testing time for certain attacks as I wasn't having to deal with an imaging server, etc. For the normal user, however, this does mean that if you get infected with malware or something else goes wrong with your system, you can very quickly and easily roll back to a previous state and carry on working.

There are a few issues to keep in mind though. Firstly, as I've already mentioned, the space required can be quite large if you keep taking snapshots and don't clear previous ones off the system. Secondly, if you roll back your system, you won't have access to any new files or software that you have since put on the system - you will need to roll forward again to get at these. Finally, I did have one or two occasions where the restore failed. When I say the restore failed, I mean one snapshot failed so that I couldn't boot into it; at the boot stage I had to select another snapshot to boot from. I could always find a snapshot that did work, but it is slightly worrying that there were occasions when the one I wanted wouldn't boot. This could be because I was installing various service packs, updates and malware onto the system and switching between them many times, but it is still worth noting that you will require a full system backup and you must back up all your data regularly.

Of course there are other products out there that do the same thing and some reviewers say that they are better (e.g. Acronis). However, I found Comodo's Time Machine very easy to use and it is free. I'm not necessarily endorsing Comodo's product; I'm saying that this type of software is worth a look for keeping your systems running.

Wednesday, 16 June 2010

Twitter Steganography

I have recently been thinking about Steganography again and various carriers as well as applications. For those of you that don't know what Steganography is, it simply means 'hidden writing' from the Greek. Some examples of steganography are: tattooing the scalps of messengers and then waiting for their hair to grow back; writing a message on the wood of a wax tablet before pouring the wax in; 'invisible inks'; pin pricks above characters in a cover letter; etc. Basically, we have a 'cover', which could be an image, passage of text, etc., that we are happy for anyone to see, and a message that we want to hide within it so that it is undetectable. It turns out that this last part is quite hard.

Anyway, I thought I'd look at techniques to embed data within Twitter as it is popular now and people are starting to monitor it. Hiding within a crowd, however, is a good technique as it takes quite a lot of resources to monitor all activity on a service like Twitter. The techniques described here would work equally well on other social networks, such as LinkedIn, Facebook, etc. How do we embed data within a medium that allows only 140 plaintext characters though? Well, there are several methods, a few of which I'll talk about here. I'm only going to discuss methods that would be quite simple to detect if you knew what you were looking at, but that will go undetected by the majority of people.

The first method is to use a special grammar within your Tweet. If the person you are communicating with knows the grammar then you can alter a message to pass data back and forth. A simple example of this technique would be to choose 2, 4 or 8 words that mean the same thing, but where each one represents a value. For example, you could use fast, speedy, quick and rapid to represent 0, 1, 2 and 3 respectively, effectively giving you 2 bits of embedded data. If we had 8 words then we would have 3 bits, and so on. This can be extended to word order in the sentence and even the number of words per sentence. However, messages can be difficult to construct in such a way as to be readable and this is not a high data rate. We could probably get only one or two bytes worth of data into an update message.
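The synonym scheme can be sketched in a few lines of Python. The word table is the post's own example; everything else (function names, the sample tweet) is illustrative:

```python
# Synonym-grammar steganography: four interchangeable words each carry
# a 2-bit value. Both parties must share the CODE table.
CODE = {"fast": 0b00, "speedy": 0b01, "quick": 0b10, "rapid": 0b11}
DECODE = {v: k for k, v in CODE.items()}

def embed(bits):
    # bits: list of 2-bit values -> list of cover words to weave into a tweet
    return [DECODE[b] for b in bits]

def extract(words):
    # pull out any coded words, in order, and recover their values
    return [CODE[w] for w in words if w in CODE]

tweet = "That was a {} lunch and a {} walk".format(*embed([0b01, 0b11]))
assert extract(tweet.split()) == [0b01, 0b11]
```

The receiver only needs to scan the tweet for the agreed words; everyone else just sees a slightly bland sentence.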

Another method is suggested by Adrian Crenshaw. He used Unicode characters, giving access to two versions of the character set: the lower range represented 0s and the upper range represented 1s. This is a good scheme, as you transfer as many bits as there are characters in your message, giving a maximum of 140 bits. The issue with his scheme is that on some devices and Twitter clients the two character sets look quite different, so it is definitely detectable. However, it is a good idea nonetheless.
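A sketch of the two-character-set idea (not Crenshaw's actual tool - here I assume the Unicode fullwidth Latin block as the 'upper' set; his implementation may use different ranges):

```python
# One bit per cover letter: the plain ASCII letter encodes a 0, its
# Unicode fullwidth look-alike encodes a 1. Lowercase ASCII only in
# this sketch.
def to_fullwidth(ch):
    return chr(ord(ch) - ord('a') + 0xFF41)  # 'a' -> U+FF41, etc.

def embed(cover, bits):
    out = []
    for ch, bit in zip(cover, bits):
        out.append(to_fullwidth(ch) if bit else ch)
    return ''.join(out)

def extract(stego):
    return [1 if ord(ch) >= 0xFF41 else 0 for ch in stego]

s = embed("meeting", [1, 0, 1, 1, 0, 0, 1])
assert extract(s) == [1, 0, 1, 1, 0, 0, 1]
```

As noted above, fullwidth glyphs render noticeably wider in many fonts, which is exactly why this variant is detectable to the eye.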

Following on from this, we can encode bits within the message, so that they aren't seen by the user, by appending whitespace to the end of the message. Whitespace characters are things like a space or a tab, i.e. a place where a letter isn't. A simple method to embed your data is to represent a 0 by a space and a 1 by a tab. The good thing is that web browsers will display multiple whitespace characters as only a single space, so this will be invisible within a browser. Other clients will print them out, but there's nothing to see. Now, Twitter, and most social media clients, will strip whitespace from the end of your message as they assume that you added it by accident. This will destroy your data. However, if you add the &nbsp; HTML code to the end of your message then it will keep all the whitespace (indeed, you could put any character at the end, but you may see multiple spaces in some clients). The advantage of using the &nbsp; is that it is a whitespace character and won't be displayed in your message. You will need to write a short message and add the non-breaking space at the end, so you won't have that much room, but you should still be able to embed nearly 16 ASCII characters this way - certainly over 100 bits if you keep your message short.
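A minimal sketch of the whitespace scheme, assuming Python, with '\xa0' (non-breaking space) standing in for the trailing &nbsp; that stops clients stripping the run of whitespace:

```python
# Trailing-whitespace steganography: 0 -> space, 1 -> tab, appended
# after the visible message, guarded by a non-breaking space.
def embed(message, bits):
    tail = ''.join('\t' if b else ' ' for b in bits)
    return message + ' ' + tail + '\xa0'

def extract(tweet):
    body = tweet.rstrip('\xa0')                  # drop the guard character
    hidden = body[len(body.rstrip(' \t')) + 1:]  # the trailing whitespace run
    return [1 if ch == '\t' else 0 for ch in hidden]

t = embed("Nice day out", [1, 0, 0, 1, 1])
assert extract(t) == [1, 0, 0, 1, 1]
```

Every bit costs one character of the 140, which is where the rough limit of 16 ASCII characters for a very short visible message comes from.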

We can also be quite blatant with our data. We can rely on the fact that people won't know we're transferring data and won't look very hard. A simple URL shortening service can be exploited in two ways to embed data. The simplest method is to make up a URL. Twitter users rely on http://bit.ly and http://twitpic.com extensively. If we base-64 encode our text or data, then each fake 8-character URL slug can carry 6 bytes (or characters) of it. For example, I could tweet: "Just read this http://bit.ly/UkxSIFVL and saw the photo http://twitpic.com/IEx0ZC4=". Now, these URLs are fake and don't lead anywhere. However, the base-64 encoded text of the two URLs decodes to "RLR UK Ltd.", and how many people will follow your link anyway? Even if they do, the two sites here will just put up a helpful message that there was an error with the URL. You can now apologise and provide two real URLs. Meanwhile the message has got across. Obviously more URLs mean more data - up to 36 bytes if you just send 6 URLs.
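The fake-URL trick can be sketched as follows. The helper names and host list are mine, but the two slugs it produces are exactly the ones in the example tweet above:

```python
import base64

# Base-64 encode the secret, then split the result into 8-character
# chunks (6 bytes of payload each) dressed up as short-URL slugs.
def to_fake_urls(secret, hosts=("http://bit.ly/", "http://twitpic.com/")):
    b64 = base64.b64encode(secret.encode()).decode()
    chunks = [b64[i:i + 8] for i in range(0, len(b64), 8)]
    return [hosts[i % len(hosts)] + c for i, c in enumerate(chunks)]

def from_fake_urls(urls):
    # the receiver strips the slugs back off and decodes them in order
    b64 = ''.join(u.rsplit('/', 1)[1] for u in urls)
    return base64.b64decode(b64).decode()

urls = to_fake_urls("RLR UK Ltd.")
# urls == ['http://bit.ly/UkxSIFVL', 'http://twitpic.com/IEx0ZC4=']
assert from_fake_urls(urls) == "RLR UK Ltd."
```

Note the '=' padding in the second slug; real shortener slugs never contain '=', so a careful observer has a tell-tale to look for.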

The second method of using a URL shortening service is to write your own. Now you can provide real URLs but flag particular IP addresses or require the addition of an extra parameter to the URL to make it show a different page to the person you are trying to communicate with, e.g. a password. This isn't really Steganography as such, but could be used to transfer URLs that can be checked by someone else and don't reveal the true target.

The final method I'm going to discuss here is the use of a Stego Profile Image. All social media networks allow you to upload and display a small image on your page. Why not use traditional Steganographic techniques to embed data within this image? If you change your image regularly then it won't look suspicious when you change it to transfer data to someone. There are tools on the Internet to do this for you by replacing the Least Significant Bit (LSB) of every pixel with one bit of your data. This is a simple scheme and easy to detect. There are other much better schemes that are not only harder to detect, but that will give you more 'space' within the image to store your data. To give you some idea, a 4-colour, 73x73 pixel GIF like Twitter's default images can store nearly 4KB of data with no visual impact. However, that's for another blog post...
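The basic LSB replacement mentioned above can be sketched like this. Pixels are modelled as a flat list of values; a real tool would read and write an actual image file via an imaging library:

```python
# Least-Significant-Bit embedding: one hidden bit per pixel value.
def lsb_embed(pixels, bits):
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit   # clear the LSB, then set it to the bit
    return out

def lsb_extract(pixels, n):
    return [p & 1 for p in pixels[:n]]

cover = [200, 201, 202, 203, 204, 205, 206, 207]
stego = lsb_embed(cover, [1, 0, 1, 1, 0, 0, 1, 0])
assert lsb_extract(stego, 8) == [1, 0, 1, 1, 0, 0, 1, 0]
# Each value changes by at most 1, which is visually imperceptible
assert all(abs(a - b) <= 1 for a, b in zip(cover, stego))
```

The weakness is statistical: flattening the LSB plane like this disturbs the natural noise distribution of the image, which is exactly what steganalysis tools look for.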

Tuesday, 25 May 2010

Telephone Systems a Hackable Backdoor?

I have been talking to a company that provides telephone exchanges and services to companies this week on behalf of a client and it has highlighted a worrying backdoor. It turns out that many of these companies have a way to remotely connect to their exchange for support purposes - they can remotely control, configure and troubleshoot your system to get you back up and running. Exchanges often have additional modems in them to allow for remote connections. This is all very well and good from a managed service point of view, but what about the rest of your network? Can this be exploited to gain entry to your network? Quite possibly in some cases - it certainly needs to be included in your security audit and perimeter testing.

Talking about a specific company now, they supply the software to monitor and bill phone calls through the exchange. They remotely install, monitor and manage this software. How do they do that? Well, it turns out that they install LogMeIn on your machine. Now this will make outbound connections through the firewall to make the internal machine accessible from the outside world. Hang on; you're making my networked machine that controls my exchange and billing accessible by anyone? By default LogMeIn will use simple username/password type authentication.

The user who accesses the computer has to set up their account with LogMeIn and will use the same username and password combination on all machines as far as I can see. Does the company have a universal account that they use to remotely access the machines or does each user have their own? If the company uses one default username/password, then what happens if someone gets hold of that information or someone leaves? Does the password get changed? If everyone has their own account, then are they removed when they leave the company? As this is all done through the Web, they could still gain access if they aren't specifically removed from the user group.

How much do you trust all the employees of that external company? How much do you trust the disgruntled ex-employee from them who has access? It might not be that they are trying to attack you, but they may be careless about the credentials or not revoke them properly. Also, consider the case where all the internal employees of an organisation are required to have 2-factor authentication and remote access is locked down. What's the point? There is a simple username/password entry point into the network that bypasses all the secure remote access services you may have in place. How secure are the passwords that the external company use? Would they match up to your complexity requirements? If they are simple, easily guessed or shared, then they open full administrative control over a machine on your 'secure' internal network. Who patches the machine and who updates LogMeIn?

How about installing a keylogger in such a firm to pick up their username/password combinations so that you can gain full access to every customer's network? Once on the internal machine, malware can easily be installed and attacks launched on other internal machines unhindered. How many organisations have followed best practice and installed a UTM firewall in the core of their network to segregate their servers, etc., from other internal machines? Would a machine running this software be on a normal user subnet or on the management subnet? Do many SMEs even have more than one subnet anyway?

Needless to say, my advice was to avoid installing LogMeIn on the machine and instead to allow more controlled access via a temporary account, all of which can be disabled immediately after the remote installation is completed. This does raise the question of how to obtain ongoing support, but access can be granted temporarily and then removed whenever support is required, with relatively little effort.

Clearly any such system needs to be well documented and be part of the security audit. I would advise that companies also ask for security audit and policy information from any external company who has any kind of access to the network - this should be standard procedure.

Thursday, 20 May 2010

CQC Using Email to Verify Care Workers

The Care Quality Commission (CQC) has decided to put registration of Care Providers online to make everything faster and easier for the providers. At least that's what they said. In practice, care providers had to fill in the online forms addressing standards that won't be published until 5 months after the registration deadline. Ignoring all the problems, ridiculous re-branding to avoid inconsistencies and money wasted, there was a serious problem/lack of understanding that has led to this blog post.

All care providers and managers have to register online individually and have to agree to particular terms in order to be registered and, therefore, trade. I have no problem with this as these care providers are looking after vulnerable people. However, it became obvious that there are serious problems with their system. First off, it isn't possible to change the owner's name if you make a mistake (they can't change it either apparently). Therefore, if you make a mistake, you now have to lie to say that all the details are correct, otherwise you can't register and you'll be out of business - not a good start.

However, this is overshadowed by the fact that the CQC uses only email to verify care managers. First of all they sent a 6-character password to the main business email address with the URL and details of how to log in (no paper verification was done at all). Don't they realise that email is all sent in plaintext and can be read by anyone with a packet sniffer? When logged in, the care provider has to fill in some initial forms as the owner and then list the care managers that they employ. Following this, each care manager is sent a 6-character password via email in order to log in and register their care service. There are a couple of problems with this. Firstly, the email addresses are just entered during the initial form filling exercise and are not checked and secondly, you can't reuse the same email address. So if you are the manager for more than one care service you have to use two different email addresses. The stupid thing is that they accept any email address from an alias to the same mailbox through to hotmail accounts with no checking at all.

They don't seem to realise that half the email addresses people use are just aliases onto other email accounts. On one of my accounts I have 9 email addresses all delivering mail to the same mailbox, as they are various combinations of name and domain all relating to the same company. However, the CQC would treat these as 9 different people. There is no checking done to see if that really is the care manager at all. Anyone could sign up as long as they intercept the initial password. Who has access to the standard email address for the organisation? Usually several people, and usually not the actual owner of the business - the only person who should receive that email. Due to the way their system works, if someone were to intercept that email (such as a disgruntled ex-employee) they could sign up with a random free email address, begin the registration process, not complete it, and put the care provider out of business as they won't have registered in the set time.

This is mostly a PR exercise as far as I can see and a bad one. They say that they are checking providers and improving standards. However, it is perfectly possible for the owners and managers to be completely unaware of their registration process because no actual checking is done. In addition, to assume that two email addresses entered into a website are for two different people, and base your authentication on that, shows a lack of understanding of the technology that they are forcing on people.

Edit: 13/7/10

I sent an email to the CQC about this issue and being able to hold people to it legally a while back and a few days ago I received a reply. Here is the main bulk of their reply:

"Thank you for taking the time to write to us and we would acknowledge that the points you correctly make present some element of risk. To reassure you, and your customer, we were previously aware of each of those points and have considered, with our compliance and legal teams, the balance of risk that they present and any difficulties with legal status that could result, before making a positive decision to implement in this manner."

"To further reassure you, we commission independent penetration testing of our systems prior to go live to assure their security."

I did ask to see one of their security assessment reports, but was (unsurprisingly) turned down. I understand that they don't want everyone to know how their system works as that could reveal further problems, but I would like to know what was actually said about this issue and how they justify it.

Monday, 10 May 2010

Series of Demo Videos of Trusteer's Rapport

I am currently producing a series of videos demonstrating the anti-spyware capabilities of Trusteer's Rapport. So far I have looked at keylogging software and screen capture. Specifically, I have demonstrated it with Zemana ScreenLogger, Zemana KeyLogger and SpyShelter. I will be adding more videos over the next few days. The first two videos are embedded below. (Edit: 17/05/10 - I have now added three more videos covering Zemana SSL Logger, AKLT and Snadboy's Revelation V2.)

Links to the YouTube videos are below:

Friday, 30 April 2010

InfoSecurity Europe 2010

Once again InfoSecurity Europe was an interesting place to visit. Lots of good sessions and interesting people to talk to. Most of the usual protagonists were there and the organisers have increased the educational part of the exhibition as well, which is good.

I thought I would put down a few things that I thought were noteworthy from the exhibition. I've already blogged about the GrIDsure anti-phishing sender verification and the new 3M mobile phone privacy filters, but there were a few other things I want to mention.

The first one is Panda Security's new Panda Cloud Internet Protection. This is a cloud-based service that provides consistent security and access policies to all machines within an organisation. The key thing is that it will protect mobile machines that are outside the corporate network with the same policies as those within the network. Protecting corporate machines when mobile is a big concern and a good way to reduce malware or hacking problems on the main network.

The usual problem is that mobile devices connect to public, unsecured (or badly secured) networks and either pick up some malware that they bring back with them, or they connect back remotely and open a soft doorway into the corporate network. By securing machines with this cloud service, it should stop them from being a soft target and weak link in your security chain. If you instead protect mobile machines by routing their traffic through a VPN into the corporate network and out again, you are using additional bandwidth for this traffic and having to keep a VPN connection open, which isn't always wise.

Another topic talked about (mainly by Sophos) was the security, or lack thereof, when using social media. Graham Cluley gave a really good talk on the subject on the Sophos stand, including the use of SPAM avatars on sites such as Twitter. The attack centres on the fact that anti-SPAM filtering finds it hard to scan the content of images, e.g. by doing Optical Character Recognition (OCR). So, people have been putting written messages in their ID picture to get past any filtering. You can find out more from his blog.

The final mention has to go to Ian Mann from ECSC. He, once again, talked about several Social Engineering techniques to get past security. He stated that he always likes to see security guards when trying to gain unauthorised access, as it usually makes the system much less secure. He gave a talk in one of the main theatres as well as several talks on the ECSC stand, all of which were interesting. He has written a book called Hacking the Human, which is worth a read if you want to find out more.

Beyond that, most of the usual suspects were there and many things were as before with incremental changes and updates. There didn't seem to be a central theme or message from all the vendors or industry in general. Everyone seemed to be concentrating on their own topics and products. One new addition to the exhibition was the University Pavilion. I think this could be put to good use to show people what's coming over the horizon or how these technologies that the vendors are pushing actually work.

3M's Mobile Phone Privacy Filter

At this year's InfoSecurity Europe I visited the 3M stand again to see what developments they had for their privacy filters. They had their excellent Gold filter there of course, which is now properly on sale in the UK and the best on the market in my opinion. I previously blogged about this filter in my post "Why do I need a privacy filter? (3M's new Vikuiti Gold Privacy Filter)".

So what's this blog post about? Well, they have now produced privacy filters for mobile phones. Let's add a bit of context to this decision. How many businesses provide mobile devices to their employees that are connected to the corporate network with access to email, contacts, calendars and corporate documents? If you were reading an email from a client or reviewing a sensitive document would you be happy for someone to peer over your shoulder? Maybe you're paranoid like me and try to avoid reading emails in public places and stand with your back to the wall, shielding the screen when you have to read something urgently (Note: you shouldn't really store sensitive documents on a mobile phone in the first place, but that's another topic). However, 3M have made the whole thing a bit easier and allowed people to look a bit more normal than I do when using email in a public place.

I had a bunch of questions that I wanted to ask 3M about this new filter and I got some answers that I will share with you here. Firstly, I'll give you a brief introduction to their product, which can be seen in the image below. This is basically a screen protector with the privacy filter combined. It uses the standard matte grey louvred filter that gives privacy in one plane (I'll explain this in a bit and the problem with it). It uses the matte film as reflective films would get scratched with the type of use that a mobile gets according to the guys on the stand. The film is self-adhesive, using 3M's Post-It note glue, so it should come off with no residue and be easy to fit. This is effectively a replacement for your standard screen protector with the added benefit of including the privacy filter.

3M's new mobile privacy filter

Now to some of the questions I had:
  • Does it work with touch screens? - Yes it does. They had an iPhone there and it worked perfectly.
  • Does it work with a stylus? - Yes it does. They had a Windows Mobile-based XDA there, which also worked with no problems.
  • Does it make the mobile hard to use? - No, the dimming of the screen caused by the filter is not too much of a problem. With the backlight off you pretty much can't read the screen, but how many people use their mobile with the backlight off? There is some drop in brightness, but you can increase the brightness of the screen to compensate. However, this does have the big side-effect of reducing battery life - a major problem on smartphones.
  • What if I have a mobile that I can use in landscape as well as portrait, like an HTC or iPhone? - Well, you have a problem. It comes back to what I said above: the filter only works in one plane. The filter has vertical louvres so that as you move to the side they overlap and block out the screen, like vertical blinds. However, vertical movement doesn't change the overlap of the louvres, so there is no blocking of the screen in this plane. So, you have to decide which way you want the filter, portrait or landscape - it will only provide privacy in one plane. Now, this isn't a problem for a lot of phones, particularly the majority of Blackberries, which are still the preferred business machine by many organisations. It is a problem, however, for iPhones (which aren't business phones in my opinion) and many Windows Mobile phones with the iPhone-esque interface.
  • Couldn't we have the Gold filter on a mobile to sort this problem? - Unfortunately, not yet, but they are working on it. There are a few technical difficulties apparently. Firstly, there is the point I made earlier, that mirror-finished filters would scratch too readily on a mobile device that is thrown in a bag or stuffed into a pocket with other things. Apparently, they have a matte version of the Gold Filter in the lab, but it isn't available yet, nor will it be in the near future. There is a second problem: apparently, the Gold Filter doesn't take to being glued as easily as the grey filter. However, they are working on this as well and hope to have a solution soon.
  • Do they come pre-cut to my mobile? - Yes and no. If you have a Blackberry or iPhone then yes, otherwise no. You buy a sheet and cut it yourself. I believe that there are other companies, such as wrappz.com, that will be able to cut one for your device in the future. I think this is a must for the uptake of the filter. How many business executives are going to sit down with a craft knife and straight-edge to cut their filter to the exact shape and size of their phone as well as the holes for the buttons, cameras, speakers, microphones, etc.? The problem for 3M is that mobiles come in all shapes and sizes, with absolutely no standardisation. Laptops and monitors, on the other hand, do have standard sizes.
What's my verdict? Another good product from 3M. I think this would be very good for executives with the Blackberry-type device and still help those with touchy-feely, accelerometer-driven interfaces, as long as they remember to only access sensitive information in one plane. They will have a great product when they get the matte Gold filter stuck to the mobile.

Surveys or Phishing Emails?

I was recently sent a survey from a well-known survey company (actually, on second thoughts, I'll name them: Capita) and it made me very cross. Why so cross? Well, I spend a considerable amount of time trying to educate people about their role in the security of the network and about phishing/social engineering. This is all undone by survey companies such as the one in question. See for yourself the email sent and use it as a template for future 'white-hat' testing.

Have your Say! Fill in your Staff Survey today!

Dear Colleague

It’s important to complete the Staff Survey to ensure your voice is heard! The purpose of the survey is to make further improvements to staffs’ working lives at Target Organisation.

Your responses will come direct to Capita Surveys & Research Unit, and will be totally anonymous. No one outside the research team – and certainly no one at Target Organisation – will know who has responded or be able to identify individual responses. The survey findings will be analysed by Capita Surveys & Research Unit and only aggregate results will be reported.

To ensure that you have adequate opportunity to participate, the survey closure date is date month year.

In order to participate in the survey visit:


and enter your password: AAdddd

If you have any queries or require support completing the survey please contact us at Capita Surveys & Research Unit on 0800 587 3115.

Yours sincerely

Cheryl Kershaw
Director of Surveys and Research
Capita Surveys & Research Unit

What's wrong with this? Many things! Phishing scams are on the increase and are one of the biggest threats to security at the moment. Targeted phishing, or spear phishing, is also on the increase and these surveys could easily fall foul of this type of attack. The survey emails are in a standard format with no personalisation. It appears as a classic phishing email, albeit with better grammar. It would be easy to exploit this 'legitimate' survey to ask for additional personal details. Points to consider:

  1. There is no personalisation – ‘Dear Colleague’
  2. The email doesn’t come from the organisation in question – staffsurveys@Capita.co.uk
  3. The URL does not point to the organisation in question – https://sas.capitasurveys.co.uk/organisationname
  4. There is no contact within the organisation presented in the email for confirmation – contact Capita Surveys & Research Unit on 0800 587 3115
  5. They do not use an EV SSL certificate on their site, only DV – QuoVadis Global SSL ICA certifying that this is sas.capitasurveys.co.uk, which could be a phishing site for all a user knows, as it isn’t certified to be Capita or Capita Surveys & Research Unit (see post on EV versus DV certificates)
This would be very easy for someone to impersonate, particularly if they register a similar URL, such as https://sas.crapitasurveys.co.uk/organisationname and then use masking as well. Users are being conditioned into clicking on links without questioning their validity. All I would have to do is know (or guess) that this organisation conducts surveys of this type from an organisation like this. OK, Capita suggests that organisations publicise the survey, but this isn't always done well and can be used to produce a fake version before the real one goes live.
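Some of these checks can even be automated before a user ever clicks. The sketch below (the "expected" domain is hypothetical, standing in for the organisation's real domain) flags links whose hostname does not belong to the domain the email claims to come from, which catches both points 2 and 3 above as well as look-alike registrations:

```python
from urllib.parse import urlparse

EXPECTED_DOMAIN = "example.org"  # hypothetical: the organisation's own domain

def is_suspicious(url, expected_domain=EXPECTED_DOMAIN):
    """Flag links whose hostname is not the expected domain or a
    sub-domain of it. Matching on the *suffix* of the hostname is what
    catches tricks like 'www.microsoft.com.phishers.org', which ends in
    'phishers.org', not 'microsoft.com'."""
    host = urlparse(url).hostname or ""
    return not (host == expected_domain or host.endswith("." + expected_domain))

print(is_suspicious("https://sas.capitasurveys.co.uk/organisationname"))  # True
print(is_suspicious("https://survey.example.org/staff"))                  # False
```

A real mail filter would also need to compare the visible link text against the underlying href, since masking a URL is exactly how these emails disguise themselves.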

It gets worse though. When I phoned Capita Surveys, a nice helpful lady called Liz told me who they were currently providing surveys for (I won't give out the organisation names here as that would be irresponsible, but if Capita would like to check with me I can prove this). It would be very easy to quickly knock up a copy of their site with a similar URL and registered SSL Certificate, add in a few extra questions, send those emails and wait for the information to roll in. Well done Capita! They say they take people's security seriously and that answers are secure because they use SSL. However, I would beg to differ.

Capita aren't the only culprit though; I was also recently sent a survey for Microsoft from Mori, which was just as bad. They have to take steps to ensure that their surveys can't be hijacked for targeted attacks. There are anti-phishing technologies and techniques available that, whilst not infallible, would help, so why aren't they used?

Tuesday, 30 March 2010

Which Browser is the Most Secure?

I was recently talking to a fellow security professional who develops secure plug-ins for browsers and we started talking about the security of various browsers. Most of the talk about browsers centres on how fast they are and what sort of features they have, but rarely do people talk about the security of their browser. Unfortunately, the browser is one of the weak points on your network, as users have the ability to navigate to sites containing malware or phishing attacks, as well as install plug-ins or run scripts that are malicious. So, which browser is the most secure? Any guesses?

All browsers (and all security products for that matter) have security weaknesses and vulnerabilities. However, the architecture of the browser and certain features can make browsing safer. The feature I'm going to put forward first is web browser protection against socially-engineered malware (phishing sites). According to many of the big AV and security vendors, phishing is on the rise and set to be the biggest headache of this year. Two statistics worth quoting are: according to Trend Micro, 53% of malware is delivered via Internet downloads against only 12% via e-mail; and Microsoft claim that 0.5% of the download requests through IE8 are malicious and they block a download for one in 40 users every week. In January 2010 NSS Labs tested five of the latest browsers against socially-engineered malware. Their full report is worth reading, but I have shamelessly reproduced their main graph here.

Graph showing the Browser Mean Block Rate for Socially-Engineered Malware
According to NSS Labs, Internet Explorer 8 blocked 85% of these malware sites using their SmartScreen Filter. The next nearest was Safari 4 at 29% and 0.2% behind that was Firefox 3.5. Chrome 4 was worse, on only 17%, and Opera 10 was bottom of the pile, achieving less than 1% blocking. By far the best of the pack was IE8, but even that still lets through 15% of malware. An interesting and noteworthy aside to this is that I believe Safari and Firefox use Google's Anti-Phishing API and achieve a 29% blocking rate, yet Google's Chrome only achieves 17%. If you want to see what the SmartScreen blocking looks like in IE8, you can see an example below, where IE8 is blocking it and Comodo's Dragon (a Chrome derivative) is not.

Also, again according to NSS Labs, Firefox had an 'average add time' of 5.7 hours, the fastest, versus Microsoft's 6.7 hours. The average add time is how long, on average, a user has to wait before a visited malicious site is added to the block list. Speed is very important here, but a site does actually have to get blocked in the end for this to be a valid metric. These figures are better than the other three browsers, which scored: Safari - 9.0 hours; Chrome - 14.7 hours; Opera - 82.4 hours.

Screenshot of IE8 SmartFilter blocking a phishing site alongside Comodo Dragon
Having mentioned Comodo's Dragon, I will now give you a brief introduction in case you haven't heard of it before. It is a free Chrome-derivative browser from Comodo, designed to be more secure than the average browser. It doesn't perform well in the above tests, but has several other features up its sleeve, centred on privacy: it doesn't send the HTTP Referrer, so you cannot be tracked from site to site; it won't send crash and problem reports (so your history remains on your machine only); it highlights sites secured with only DV certificates; and it gives a visit history with the certificates. If you don't know the difference between a DV and an EV SSL/TLS certificate then read this blog post. An example of the DV certificate warning can be seen in the screenshot below.

Screenshot of DV Certificate warning in Comodo Dragon
One problem I have with the privacy tag associated with this browser is the UserAgent string. I have blogged before about Cookieless Browser Tracking using the UserAgent string. The point is that the UserAgent string your browser sends to a web server, identifying the browser's and your machine's capabilities, gives about a third of the information required to uniquely identify you. There will only be a handful of machines with the same UserAgent string, especially if you stray from the most common browsers (IE & Firefox). I also think that 'Never save passwords' should be the default setting and 'Allow all cookies' should not be. It is a new browser though, and I'm sure it will improve over time, as the company is committed to security in many guises. Certainly its positive features are good and something that other browser vendors should follow.
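To see why a rare UserAgent string is so identifying, consider its surprisal, -log2(p): the rarer the string among the browsing population, the more bits it contributes towards the roughly 33 bits needed to single out one person on the planet. The frequencies below are made up purely for illustration:

```python
import math

# Hypothetical population frequencies for a few UserAgent strings;
# real figures would come from observed web traffic.
ua_frequency = {
    "common IE8-on-Windows string":  0.05,     # 1 in 20 browsers
    "common Firefox string":         0.02,     # 1 in 50 browsers
    "rare niche browser build":      0.00001,  # 1 in 100,000 browsers
}

for ua, p in ua_frequency.items():
    bits = -math.log2(p)  # surprisal: identifying information in bits
    print(f"{ua}: {bits:.1f} bits")
```

A string shared by 1 in 20 browsers reveals about 4 bits; a 1-in-100,000 string reveals nearly 17, which is why straying from the mainstream browsers makes you easier, not harder, to track.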

The next point is about actual downloads from the Internet. Dragon, and other browsers, will give a warning when downloading executable files, but will just download ZIP, PDF and other files and allow you to open them without warning. Bear in mind that PDFs and ZIP archives can contain malware. IE8, on the other hand, will ask you to confirm the software used to open a download, regardless of its type. This always gives you the chance to opt out if it wasn't what you were expecting. Also, IE8 will tell you whether a download is signed or unsigned, and whether it is a plug-in or an executable. Other browsers do not support this feature. What does it mean though? Well, if I am a software vendor, like Adobe, and I want you to download and install my plug-in, I will sign it with a digital signature. When you download it, you can verify the signature, which will tell you that I (or Adobe) created and signed the download and that nobody has tampered with it in the meantime. If the download isn't signed, then how do you know that this isn't a phishing or pharming site pretending to be Adobe (or a proxy intercepting the download) giving you a version containing a Trojan or some other malware? The answer is that you don't!
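The tamper-detection half of that guarantee can be illustrated with a plain checksum. Note this is deliberately weaker than real code signing, which also proves who published the file via a certificate chain; the sketch below only shows how any modification to the bytes is detected:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex digest of the file contents."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical: the checksum the vendor publishes over a trusted channel.
published = sha256_of(b"plugin-installer-bytes")

download = b"plugin-installer-bytes"          # what we actually received
tampered = b"plugin-installer-bytes+trojan"   # a copy modified in transit

print(sha256_of(download) == published)  # True  - download is intact
print(sha256_of(tampered) == published)  # False - reject the download
```

A digital signature goes one step further: the vendor signs the hash with their private key, so a proxy that swaps in a Trojaned file cannot also forge a matching signature.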

So, you should only download and install signed plug-ins and executables. Unfortunately, Internet Explorer is one of the few browsers that will control this for you and it makes a distinction between signed and unsigned plug-ins even when they are installed. Which brings me onto my final point (as this post is getting very long and a bit like a rant). Internet Explorer is, I believe, the most attacked browser as it has, until recently, been the most widely used. Due to this, Microsoft has had to build it in a secure fashion, controlling all plug-ins carefully. Firefox, on the other hand, performs many of its tasks by using a plug-in architecture, even for standard functionality. As far as I am aware, there is little or no distinction between a 'built-in' plug-in and one installed from a third party at a later date. This is very dangerous in my opinion. Firefox now enjoys the top position for browsers and it won't be long before the hackers make the switch from attacking IE over to Firefox. I think it will be harder to secure Firefox against this onslaught than it will be for Microsoft to keep up with their architecture.

It is interesting to note that the speed of the browser runs roughly inversely to the graph at the beginning of this post, i.e. Chrome is very fast and IE is considered the slowest of the big 4. However, security always comes at a price, and processing time is a big part of it. Could it be that the reason IE is such a leviathan, and slower than its rivals, is that it is doing much more checking and keeping you much more secure? I think so. Microsoft have a way to go though and can't rest on their laurels. I will be watching Comodo Dragon with interest to see if they can really push for the top spot in terms of a secure browser. It certainly does something for user education and privacy.

Anti-Phishing Sender Verification with GrIDsure

I have tried out GrIDsure with a set of users now to see how easy it was to use. I was using the Windows client 2-factor authentication solution I blogged about here. (If you don't know their product you must read either their website or my other blog post above before reading this post as it won't make a lot of sense otherwise.) It turns out that the users had no problem setting it up and using the login - no training required other than a simple explanation of how it works. Doing this trial reminded me of discussions I had with GrIDsure about their Enterprise version of their product, which is fairly new and has more features being added all the time. One feature that I thought was noteworthy is their anti-phishing verification.

Phishing, as you will know from here, is a big problem and is often spread by obscured links in emails, such as http://www.microsoft.com.phishers.org/, which has absolutely nothing to do with Microsoft, but is just a sub-domain of phishers.org. There are many ways to combat phishing, the best of which is user education and awareness. I have, for a while, thought that a solution similar to that of MasterCard's SecureCode could be applied to many emails and on-screen login pages to verify the sender. If you're not familiar with MasterCard's SecureCode, when you set up your credit card to have SecureCode, you enter a password and a phrase that is personal to you (any phrase so long as you recognise it and someone else wouldn't guess it). When you confirm payment for something you are presented with your phrase on screen and asked to enter three characters from your password. The point is that if you don't see your phrase then it isn't MasterCard, so don't enter your password characters. The problem would be spear-phishing, targeting individual users. In this case you could just copy the phrase and fool the user. However, you can't just attack a batch of users or all MasterCard users, for example.

GrIDsure have done something along the same lines to authenticate the sender of emails and other messages (with their SDK it could be made to do this for any number of situations). What their system does is send you a code which, along with your unique key, generates a particular grid. Only you can generate that grid, as only your devices have that key (devices plural, as this could be a desktop application and on your mobile phone). They then tell you what your PIN is on that grid. The verification is simple: enter the code on your device and read your PIN off the resulting grid. If it matches the one in the email, it's valid; otherwise, delete the email and ignore it.
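GrIDsure haven't published their derivation, so the following is purely a hypothetical sketch of the idea, not their actual algorithm: derive a grid of digits from a per-device secret and the emailed code using an HMAC, then read the PIN off the user's secret pattern of cells. Because the derivation is deterministic, sender and recipient independently arrive at the same grid:

```python
import hmac
import hashlib

def derive_grid(secret_key, code, size=25):
    """Hypothetical sketch (NOT GrIDsure's algorithm): derive a 5x5 grid
    of digits from a per-device secret and the code sent in the email."""
    digest = hmac.new(secret_key, code.encode(), hashlib.sha256).digest()
    digits = []
    counter = 0
    while len(digits) < size:
        # Stretch the digest until we have one digit per grid cell
        # (ignoring the slight modulo bias, fine for a sketch).
        block = hmac.new(secret_key, digest + bytes([counter]),
                         hashlib.sha256).digest()
        digits.extend(b % 10 for b in block)
        counter += 1
    return digits[:size]

def read_pin(grid, pattern):
    """The user's secret is a pattern of cell positions, e.g. the corners."""
    return "".join(str(grid[i]) for i in pattern)

key = b"per-device secret"
grid = derive_grid(key, "CODE123")
print(read_pin(grid, [0, 4, 20, 24]))  # PIN read from the four corner cells
```

The replay weakness discussed below is visible here too: the same code always yields the same grid and PIN, so mixing the date into the code would shrink the window in which a copied code/PIN pair stays valid.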

This is just a very simple way to verify an email to make sure that it is not a phishing scam. Of course there is one issue - replay attacks. If an attacker copied the code and PIN from the email then they could verify any email to that user. However, this does limit it to spear-phishing individual users rather than a mass blanket phishing attack. This could be reduced if a timestamp were introduced as well, e.g. entering the date as part of the code to generate the grid, reducing the window of opportunity to the same day. I would like to see GrIDsure push this and eliminate replay attacks to help stop people falling for phishing scams. More people need to think about technologies like this to verify their emails - alternatively, they could just digitally sign them all as practically all email clients have the ability to verify a digital signature.

Sunday, 28 February 2010

Why do I need a privacy filter? (3M's new Vikuiti Gold Privacy Filter)

I received my free sample filter from 3M a week ago now - it is one of the first of their new Vikuiti Gold Privacy Filters. Before I tell you about my experiences with it though, I think I ought to cover the question: 'Why do I need a privacy filter?'

So, what is a privacy filter? It is a thin sheet of plastic that fits over your screen to reduce the viewing angle. LCD manufacturers spend all their time increasing the viewing angle of their screens so that many people can view the TV from all over the room or crowd round a computer screen to share information. The problem with this is the advantage itself - what if I have sensitive information on my screen that I don't want everyone to be able to read? The privacy filter reverses the wide angle viewing trend to reduce it as close to straight on as is practical. The point of a privacy filter is to stop prying eyes and shoulder surfing.

Do you need a privacy filter? I was speaking to a professional a little while ago who told me about the time he was on a plane travelling back from an exhibition. He was sat beside a competitor who was working on their laptop for the whole journey, looking at details of their sales leads from the exhibition. At the end of the flight he thanked his fellow passenger for the information. Do you or your users have corporate laptops that they use in a public location? Shoulder surfing of documents, usernames, security procedures, etc., can be a serious issue. We can spend all our time and effort protecting the storage and transmission of information and forget about its display and viewing.

3M Gold Privacy Filter
Back to the new 3M Gold Privacy Filter. The viewing angles of filters are around 40 degrees from perpendicular. Mostly they work in a similar way to vertical blinds - if you are straight on then you only see the thin edge, but as you move off the perpendicular they start to show until they overlap and you can't see through them. The problem with this is that you can still see the screen if you move in the vertical plane. The 3M Gold filter seems to have a narrower angle of view (which is good for a privacy filter) and also cuts out vertical shifts to a certain extent. This is due to the gold mirror-like surface that cuts out the light from the screen and reflects the surroundings. The matte filters from 3M and other vendors are not so effective due to the lack of reflections. However, in bright ambient light with the laptop LCD panel turned to minimum brightness it can be harder to see the screen effectively with a shiny filter. This can be mitigated, to a certain extent, by the gold filter as it shows a brighter, clearer image than the grey ones in my opinion. Which brings up another problem with privacy filters; they do reduce the brightness of the screen. However, with the brightness turned up on my laptop, I can see the screen with no problems in any ambient lighting environment.

The one poor feature of the filter is the fitting. Small clear plastic tabs get stuck to your laptop round the screen (they have to protrude over the screen). The filter then slides in behind these and fits the screen perfectly (you have to buy the correct size). Fitting the filter is fairly easy (but can be a bit fiddly on a screen like mine as the sides of the laptop slope towards the screen) and removing it is very easy. However, you are left with the tabs over the edges of the screen even with the filter removed. They aren't that obtrusive though and you don't really notice them when the filter is in place.

Overall, I think that the 3M Gold Privacy Filters are probably the best filters on the market at the moment - certainly the best ones I've seen, though I haven't seen them all.

Wednesday, 17 February 2010

Coventry Building Society Grid Card

Coventry Building Society have recently introduced the Grid Card as a simple form of 2-factor authentication. It replaces memorable words in the login process. Now the idea is that you require something you know (i.e. your password) and something you have (i.e. the Grid Card) to log in - 2 things = 2 factors. For more about authentication see this post.

How does it work? Very simply is the answer. During the log in process, you will be asked to enter the digits at 3 co-ordinates. For example: c3, d2 and j5 would mean that you enter 5, 6 and 3 (this is the example Coventry give). Is this better than a secret word? Yes, is the short answer. How many people will choose a memorable word that someone close to them could guess? Remember that this isn't a password as such; it is expected to be a word, and a word that means something to the user. The problem is that users cannot remember lots of passwords, so remembering two would be difficult. Also, having two passwords isn't really any different from having a longer, stronger password; it's still single-factor.

The idea behind the Grid Card is that you have a set of random numbers shared between you and the bank that are very hard to guess. I only say very hard to guess because I don't know how they generate the cards in the first place, and if this isn't truly random - which it almost certainly won't be - then you can predict parts of the grid given other parts of it. Randomness is a rare but essential commodity. There are 50 co-ordinates on the card and Coventry ask for 3 each time, giving 19,600 possible combinations, assuming they'll never ask for the same co-ordinate more than once per login (order doesn't matter as we're told which grid squares). Does this mean that someone would have to log all 19,600 combinations before they could regenerate the card? No. Each co-ordinate appears 1,176 times in the 19,600, and each pair of co-ordinates appears 48 times. In fact, as few as 17 challenges could reveal the whole card: 16 disjoint sets of 3 cover 48 of the 50 co-ordinates, and a 17th picks up the remaining two (with one co-ordinate repeated, as 17x3=51). It is unlikely that these 17 would get asked for in succession, so it would take significantly more observations before we have the whole grid, but we won't need the whole grid before we're very likely to be able to log in. Indeed, there's a 17.3% chance that at least one co-ordinate will be repeated on the next login. Also, a shoulder surfer with a camera phone (or CCTV cameras) could take a photo of the whole card in one go, so this is an authentication mechanism to be used only in the 'safety' of your own home.
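These counts are easy to verify with a few lines of Python, using nothing beyond the standard library:

```python
from math import comb

# 3 co-ordinates chosen from 50, order irrelevant, no repeats per login:
total = comb(50, 3)
print(total)  # 19600 possible challenges

# Any single co-ordinate appears alongside every pair of the other 49:
print(comb(49, 2))  # 1176

# Any fixed pair appears with each of the 48 remaining co-ordinates:
print(comb(48, 1))  # 48

# Chance the next challenge repeats at least one of the previous 3
# co-ordinates: 1 minus the chance all 3 come from the other 47.
p_repeat = 1 - comb(47, 3) / comb(50, 3)
print(round(p_repeat * 100, 1))  # 17.3
```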

This is, however, a step in the right direction, so they should be commended for it. What else do you need to login to Coventry? Well, a Web ID and date of birth, both of which are easily pharmed. So the security is based solely on the password and Grid Card, which is better than two passwords. They do also have an anti-phishing technique bundled in there as well. When you sign up you choose a picture that they will display during your login along with your last login date and time. If the picture or date is incorrect then this isn't Coventry (or your account has been compromised). It's good to add a picture here, because many people don't actually check the last login date and time even if it's put up on the screen. The picture is obvious and hard to miss though. These mechanisms don't really stop spear phishing (or targeted phishing), but they do stop blanket or mass phishing attacks.

It's about time more banks started issuing 2-factor authentication for login and Coventry should be congratulated on being amongst the first. However, we have to be careful about how it's implemented.

Keylogging Trusteer's Rapport

Let's get some perspective on this first: no security product is 100% secure and just because there may be an obscure way round a product doesn't mean you shouldn't use it and that it won't protect you against a lot of attacks. How secure is your Anti-Virus (AV) product? Certainly not 100%, so we need layers of security. Rapport is another layer of security and could help protect your machine.

I have said in my previous post about this issue how well Trusteer dealt with me. So, now to the method of keylogging Trusteer. It's quite simple really, but requires a special setup. Rapport hooks onto the keyboard driver to prevent keylogging. However, if you invoke the remote desktop feature in Windows then a different keyboard driver is invoked, which Rapport cannot hook onto. So, if you're using a remote desktop connection into your machine then Rapport will not be giving you the full protection (it still has other layers of protection that work in this scenario).

Is this such a special case that you don't need to worry about it? Well not necessarily. There are a plethora of remote access software solutions available to users who are increasingly using them to access their machines at home or at work. There is also another technology that can be leveraged to cause this effect whilst the user is at the actual machine. Microsoft have introduced RemoteApps to the Windows desktop environment to allow for legacy applications to appear to run seamlessly on Windows 7. This is done via Virtual PC running another OS and the RAIL QFE update to allow applications to be exposed from a desktop machine as RemoteApps. However, we can use this technique to look back at the machine and expose the web browser as a RemoteApp, which the user should not notice.

As I say, it's a special case and not one a user would normally encounter, but it is possible. There are other issues with Trusteer as well, such as the ability to capture the screen of protected websites and information leakage, as highlighted on ReviewMyLife.co.uk here. It doesn't mean you shouldn't use Rapport, though; just know and trust the machine that you're using. Basically, don't ever connect to any secure site or service from an untrusted machine, no matter what's installed on it.

Friday, 12 February 2010

Trusteer's Response to Issues with Rapport

I have been getting a lot of hits on this blog relating to Trusteer's Rapport, so I thought I would take a better look at the product. During my investigations, I was able to log keystrokes on a Windows 7 machine whilst accessing NatWest. However, the cause is as yet unknown as Rapport should be secure against this keylogger, so I'm not going to share the details here yet (there will be a video once Trusteer are happy there is no further threat).

I have had quite a dialogue with Trusteer over this potential problem and can report that their guys are pretty switched on; they picked up on this very quickly and are taking it extremely seriously. They are also realistic about all security products and have many layers of security in place within their own product. No security product is 100% secure - it can't be. The best measure of a product, in my opinion, is the company's response to potential problems. I have to admit that Trusteer have been exemplary here.

Why do I keep saying it's a potential problem when I have logged keystrokes? Well, under normal operating conditions this isn't possible with the keylogger used. Most home users won't have a machine set up like the test machine in this case.

Trusteer have also pointed out that keyloggers are not the main threat facing the banks at the moment and are of less use now than in the past. Rapport has several layers of security protecting the machine beyond blocking keyloggers and screen capture. One of the major plus points about Rapport is its anti-phishing and anti-pharming technologies. Although, again, these aren't perfect, they're better than nothing.

I don't totally agree with Trusteer here, though. The problem with being able to log typed characters comes back to weak passwords and single-factor authentication. In this case, NatWest seem to require a Customer ID consisting of the user's date of birth plus a 4-digit ID, in the format ddmmyyxxxx, along with a 4-digit PIN and only a short password. Now, they will let any Customer ID in this format through, whether it's valid or not (good from a security point of view, as you can't tell whether you've found a valid Customer ID). However, they clearly allow 6-character passwords and then ask for three characters of the password and three of the four PIN digits at login. So with one capture I can have 3 out of 4 PIN digits and half the password. We know people choose weak passwords that can be guessed, so this becomes a crossword puzzle: reconstruct a 6-character password given three known characters. I would agree with Trusteer that keyloggers and screen capture shouldn't be a problem by now, but they still are, as the banks cling onto simple username and password authentication, often with poor password policies.
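The "crossword puzzle" effect can be quantified. A rough sketch of the search-space reduction, assuming (purely for illustration, not NatWest's actual policy) a 6-character password drawn from lowercase letters and digits:

```python
ALPHABET = 36  # illustrative assumption: lowercase letters plus digits
LENGTH = 6     # illustrative 6-character password

full_keyspace = ALPHABET ** LENGTH           # nothing captured yet
reduced_keyspace = ALPHABET ** (LENGTH - 3)  # three characters captured

print(f"Full search space:    {full_keyspace:,}")     # 2,176,782,336
print(f"Reduced search space: {reduced_keyspace:,}")  # 46,656
print(f"Reduction factor:     {full_keyspace // reduced_keyspace:,}x")
```

One capture shrinks an exhaustive search by a factor of tens of thousands, and that is before accounting for users choosing dictionary words, which shrinks it far further.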

If the banks move to 2-factor authentication and one-time passwords then most of this would be redundant, and Trusteer could concentrate on steering us to the correct site to avoid phishing and pharming attacks. Of course, these will become even more prevalent and sophisticated. Technology alone can't stop this; it has to be coupled with user education. Screen capture can still cause problems even with strong authentication solutions, such as those using images or on-screen grids to generate one-time passwords.

So, what's the bottom line? Since my earlier posts, Rapport has come a long way with compatibility, etc. The tone of the marketing has also changed for the better and is more realistic (although some of the 44 partner banks could be doing more). So Rapport could be an additional layer of security to protect you, but you will still have to be vigilant. You must have an up-to-date, legitimate anti-virus/anti-malware product, firewall protection, tight controls on your browser and a cautious, sceptical approach to all communications and links. Without these, Rapport isn't going to help you anyway.

Edit: video in later post - Keylogging Trusteer's Rapport

Thursday, 4 February 2010

Cisco TACACS+ Password Length

I have recently come up against a problem using the 'new' wireless network at work. We are using Cisco kit with TACACS+ interfacing onto Microsoft's AD in the back end. Technically, usernames should be allowed to be up to 31 bytes long (no problem there) and passwords up to 254 bytes. However, the web portal implementation that we are running has a problem with my password. It would appear that passwords of up to 16 characters are fine, but passwords in excess of 16 characters don't work.
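The observed behaviour is consistent with silent truncation at 16 characters somewhere in the chain. A minimal sketch of how that breaks authentication (the 16-byte limit is taken from the behaviour described above; the function names and the truncation point are hypothetical):

```python
def store_password(password: str) -> str:
    # Hypothetical back end that silently truncates at 16 characters
    return password[:16]

def verify(stored: str, attempt: str) -> bool:
    # Front end compares the full typed attempt against the truncated copy
    return stored == attempt

secret = "correct horse battery staple"  # a 28-character passphrase
stored = store_password(secret)

print(verify(stored, secret))       # False - the full passphrase is rejected
print(verify(stored, secret[:16]))  # True  - only the 16-character prefix works
```

If this is what is happening, it is worse than an inconvenience: the effective password is only the first 16 characters, whatever length the user actually chose.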

We are currently investigating this, as it seems like a real problem, especially as we are recommending that people switch to using longer pass phrases, in excess of 16 characters. Hopefully vendors will catch up with this soon, as many still have problems with so-called 'special characters' such as punctuation and other common symbols.

Friday, 29 January 2010

Cookieless Browser Tracking

We all know about tracking cookies and privacy. However, according to the EFF, it isn't necessary to use cookies to do a fair job of tracking your browser activities. According to their research, browsers give away 10.5 bits of identifying information in the userAgent string alone, which is supplied to the web server with every request. This is around a third of the roughly 33 bits needed to uniquely identify one person amongst the world's population.

They have set up a website to gather more data and give you a 'uniqueness' indicator for your browser, which you can find here. This data set is growing quite rapidly and will tell you how many of the userAgent strings they have received that are the same as yours. I managed to find a machine to test that was unique amongst the 195,000 machines they have tested. This means that someone could potentially track that machine even if cookies are disabled. Even if you come out with the same userAgent string as others, you can be narrowed down by using geolocation of your IP, browser plugins, installed fonts, screen resolution, etc. This isn't a new idea and others have tried it, like browserrecon. Of course if you have a static IP address then you are fairly easy to track anyway.
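The 'bits of identifying information' figure is just the surprisal of your fingerprint: if only 1 in x browsers share it, it carries log2(x) bits. A quick sketch of the calculation (the 195,000 sample size is from the test above; the helper function is mine):

```python
import math

def identifying_bits(sample_size: int, matching: int) -> float:
    # A fingerprint shared by `matching` out of `sample_size` browsers
    # carries log2(sample_size / matching) bits of identifying information.
    return math.log2(sample_size / matching)

# A browser unique amongst 195,000 tested machines:
print(round(identifying_bits(195_000, 1), 1))  # 17.6 bits

# The EFF's 10.5-bit userAgent figure corresponds to a string shared
# by roughly 1 in 1,448 browsers:
print(round(2 ** 10.5))  # 1448
```

This is why narrowing factors multiply so quickly: each independent attribute (IP geolocation, plugins, fonts, screen resolution) adds its own bits towards the ~33 needed to single you out.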

Various suggestions are made to help protect yourself, such as not allowing scripts to run on untrusted websites, which is fairly obvious. However, although this may reduce the amount of data given out from highs of 15.5 bits on a BlackBerry or 15.3 bits on Debian, it won't stop the whole problem. It seems the worst devices for giving out identifying information are BlackBerry and Android phones, with minimum figures of over 12 bits. The best combination would seem to be Firefox running on Windows, which can be controlled down to only 4.6 bits (although highs are around double this), but this could just be because it's the most common combination.

What can you do? Don't visit untrusted sites. Also, you could change your userAgent string. It is just a text string stating the capabilities of your machine so that the web server can customise content to suit you, and there is no real harm in tweaking it to fall in line with more common strings so that you are harder to track. You have to be careful here, because simply removing most of the information will probably make your userAgent string unique. Alternatively, you could change the string regularly. Perhaps browsers should change the string with every connection? Plugins like User Agent Switcher could do this, and would allow you to use different strings across different sites. Hiding certain activities by temporarily switching the userAgent string might also be useful.
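Per-request rotation is straightforward to sketch. This picks a random string from a pool of common userAgent values before each request (the strings below are illustrative examples from the era, not a maintained list):

```python
import random

# Illustrative pool of common userAgent strings - in practice you would pick
# current, widely used values so that you blend into the largest crowd.
COMMON_UAS = [
    "Mozilla/5.0 (Windows NT 6.1; rv:1.9.2) Gecko/20100101 Firefox/3.6",
    "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/533.4 Chrome/5.0 Safari/533.4",
    "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0)",
]

def headers_for_request() -> dict:
    # Rotate the userAgent on every request rather than keeping a fixed value
    return {"User-Agent": random.choice(COMMON_UAS)}

print(headers_for_request()["User-Agent"])
```

The key design point is choosing strings that are genuinely common: rotating amongst rare or malformed strings would make you more distinctive, not less.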

Firefox and Opera are both quite easy to configure - type about:config or opera:config in the address bar respectively and navigate to the userAgent options. Internet Explorer is slightly trickier, in that you have to make a registry change to alter the userAgent string. Navigate to [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\5.0\User Agent] in regedit. Here you can create string values for 'Compatible', 'Version' and 'Platform' to control what is sent. Under the 'Post Platform' key is a whole bunch of additional parameters that will be added to the string, so you can change or remove these.

Friday, 15 January 2010

How secure is your AV Product?

We all use (or at least we should all use) an Anti-Virus (AV) product on our computer to protect it from malware (yes, that includes you Mac and Linux users as well). Rogue Anti-Malware is on the increase and users should be wary of what they install, but if we do choose a big vendor and pay money for it, does it protect our machine from all threats?

Well the answer is no. No security product can be 100% secure, but how secure are they actually? There have been a number of recent surveys and their results show that things are probably improving, but there's still a significant gap. AV-Comparatives.org showed that in their tests, G Data was the best with a 99.8% detection rate of known malware, with Norman being the worst of the 16 at 84.8%. Known malware was taken to be malware from a period of one year that ended 8 months prior to the test. This is important to stress; these weren't new malware instances, these were old known malware that all vendors will have seen and had time to develop their product to combat.

There's another potential issue as well. What settings do you use on your AV product? Do you use the default settings? Several products do come with the highest protection set as default, but not all. Kaspersky, Symantec and Sophos, for example, don't have the highest security settings by default (although Sophos, to their credit, asked AV-Comparatives to test them with default settings, unlike the other two who asked to be tested with settings changed to high security). McAfee use a cloud-based technology called Artemis, which is on by default, but requires an internet connection. Their test scores come down from 98.7% detection rate when online to 92.6% when offline. So be wary about the settings that you use and the mode of use as well, as it can make a big difference.

AV-Test.org also performed similar tests with more current malware, with similar results. In their tests, Symantec came out top with 98% of malware detected and Trend Micro bottom with 83.3%. I'll pick out a few big names so that I can give you average figures from both testing labs.

Detection & Blocking Rates for some Major AV Products

Product | Existing Detection | Blocking | Live Detection

This isn't the full story, though. The above tests measured detection of existing malware. There are two other metrics that we need to look at. The first is the removal or blocking rate: the percentage of malware instances that were actually blocked or removed by the AV product. The rest will have infected the machine. AV-Test.org correctly point out that this is a much more important metric than detection, because if an AV product detects malware but still allows it to install, you are only marginally better off than if you didn't know about it at all - your machine is still infected. Their tests show that blocking rates are noticeably lower than detection rates, with the best now being PC Tools at 94.8% and the worst being CA Internet Security at 73.5%. Blocking rate figures for the set of AV products are also given in the table above.
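The gap between the two figures is what matters: a sample can be detected yet still infect the machine. A rough sketch of what the difference means per 1,000 malware samples, using the best-case rates quoted above (note these come from different products; the point is the size of the gap, not a vendor comparison):

```python
SAMPLES = 1000

detection_rate = 0.998  # best detection rate quoted above (G Data)
blocking_rate = 0.948   # best blocking rate quoted above (PC Tools)

missed = SAMPLES - SAMPLES * detection_rate   # never even noticed
infected = SAMPLES - SAMPLES * blocking_rate  # got onto the machine

print(f"Missed entirely:       {missed:.0f}")    # 2 per 1,000
print(f"Infections despite AV: {infected:.0f}")  # 52 per 1,000
```

So even with best-in-class products, roughly 1 in 20 samples still ends up on the machine, which is why layered defences matter.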

The final thing to consider is the detection rate for new malware that hasn't been seen before, i.e. from live attacks. Cyveillance ran tests sending live attack malware through several of the top AV products on a daily basis to see how they performed. In their tests, cloud-based McAfee came out top at 44% and VirusBuster bottom on 16%. AV-Comparatives performed a similar test and came out with slightly better results, ranging from AVIRA on 74% down to Norman on 32%. Again, I have averaged their figures to include in the table above.

Conclusion: you could use more than one AV product as long as they don't conflict. However, it is essential that you keep the product up-to-date at all times and configure it for maximum protection.

Pragmatic Approach to Security

When dealing with security, we must be pragmatic. The resources that an organisation can dedicate to security are limited in terms of time, staff, budget, expertise, etc. Also, perfectly secure systems do not exist - accidents, attacks and penetrations will happen in the end, so plan to deal with them at the outset. Recovery after a breach must be just as much a part of the planning as the mitigation of the breach in the first place. We all insure our cars, hoping never to call on the insurance, and then try desperately to avoid having any accidents or getting the car stolen or vandalised. However, in the end, a lot of us will end up claiming at some point, no matter how careful we are. The same is true of security.

We have to see the bigger picture and align the use of resources with the company's mission. There comes a point when a small amount more security costs a lot more money, time, management effort and is much less user-friendly. Wouldn't it impact the business less if we take the hit and recover quickly and smoothly? Often the answer is yes. We have to find the optimal solution for that particular organisation. The graph above shows that as we increase the security of our system the cost associated with breaches of security comes down, as we have fewer breaches. However, this cost will never be zero, as we will always have breaches. Indeed, breaches may still cost a lot of money but, hopefully they will be few and far between. Conversely, as our security increases, the cost of our countermeasures goes up. Therefore, the total cost will decrease with more security initially, then increase again as the countermeasures become increasingly expensive for less and less improvement to security.

These curves and the overall graph will be different for each organisation. The point I'm trying to make is that we should accept that there is no perfect security, do the best job we can, given the resources allocated, and plan for how we will recover from any breaches in security, be they minor or major. The problem comes when deciding what assets should be given priority and what is the best allocation of resources for a specific organisation. This is where security risk assessments come in. For more about security assessments and risks, see my previous post.

Welcome to the RLR UK Blog

This blog is about network and information security issues primarily, but it does stray into other IT related fields, such as web development and anything else that we find interesting.
