Friday, 29 January 2010

Cookieless Browser Tracking

We all know about tracking cookies and privacy. However, according to the EFF, it isn't necessary to use cookies to do a fair job of tracking your browser activity. According to their research, browsers give away around 10.5 bits of identifying information in the userAgent string, which is supplied to the web server with every request. This is around a third of the information required to uniquely identify you.
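To put that figure in context, here is a quick back-of-the-envelope calculation. The population figure is an assumption for illustration: singling out one person among N requires log2(N) bits, and 10.5 bits narrows you down to roughly 1 in 2^10.5 browsers.

```python
import math

# Uniquely identifying one person among N requires log2(N) bits.
# Assumed world population of ~6.8 billion (illustrative figure).
population = 6.8e9
bits_needed = math.log2(population)   # about 32.7 bits

# The EFF measured ~10.5 bits of entropy in the userAgent string alone.
ua_bits = 10.5
share = 2 ** ua_bits                  # ~1448 browsers expected to share your string

print(f"Bits to single out one person: {bits_needed:.1f}")
print(f"Fraction carried by the userAgent: {ua_bits / bits_needed:.0%}")
print(f"Your userAgent is shared by roughly 1 in {share:.0f} browsers")
```

So the userAgent alone carries about a third of the bits needed, and any extra information (IP geolocation, plugins, fonts) closes the gap quickly.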

They have set up a website to gather more data and give you a 'uniqueness' indicator for your browser, which you can find here. This data set is growing quite rapidly and will tell you how many of the userAgent strings they have received are the same as yours. I managed to find a machine to test that was unique amongst the 195,000 machines tested so far, meaning that someone could potentially track that machine even with cookies disabled. Even if your userAgent string matches others', you can be narrowed down further using the geolocation of your IP address, browser plugins, installed fonts, screen resolution, etc. This isn't a new idea and others have tried it, like browserrecon. Of course, if you have a static IP address then you are fairly easy to track anyway.

Various suggestions are made to help protect yourself, such as not allowing scripts to run on untrusted websites, which is fairly obvious. However, although this may reduce the amount of data given out from highs of 15.5 bits on a BlackBerry or 15.3 bits on Debian, it won't stop the whole problem. The worst devices for giving out identifying information seem to be BlackBerry and Android phones, with minimum figures of over 12 bits. The best combination would seem to be Firefox running on Windows, which can be pared down to only 4.6 bits (although highs are around double that), but this could just be because it's the most common combination.

What can you do? Don't visit untrusted sites. You could also change your userAgent string. It is just a text string stating the capabilities of your machine so that the web server can customise content to suit you, and there is no real harm in tweaking it to fall in line with more common strings so that you are harder to track. You have to be careful here, though, because simply removing most of the information will probably make your userAgent string unique. Alternatively, you could change the string regularly. Perhaps browsers should change it with every connection? Plugins such as User Agent Switcher could do this, allowing you to use different strings across different sites. Hiding certain activities by temporarily switching the userAgent string could also be useful.
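As a sketch of the 'change the string with every connection' idea, a client could pick a different common userAgent value for each request. The strings below are illustrative examples only, not a vetted list — in practice you would want genuinely popular, current values.

```python
import random
import urllib.request

# Illustrative pool of common userAgent strings to blend in with
# (example values only; real deployments should track popular strings).
COMMON_UAS = [
    "Mozilla/5.0 (Windows NT 6.1; en-GB; rv:1.9.2) Gecko/20100115 Firefox/3.6",
    "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1)",
    "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/532.5 Safari/532.5",
]

def make_request(url: str) -> urllib.request.Request:
    """Build a request carrying a randomly chosen common userAgent."""
    ua = random.choice(COMMON_UAS)
    return urllib.request.Request(url, headers={"User-Agent": ua})

req = make_request("http://example.com/")
print(req.get_header("User-agent"))
```

Rotating between common strings avoids the trap mentioned above: a stripped-down userAgent is itself rare, and rarity is exactly what makes you trackable.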

Firefox and Opera are both quite easy to configure - type about:config or opera:config in the address bar respectively and navigate to the userAgent options. Internet Explorer is slightly trickier, in that you have to make a registry change to alter the userAgent string. Navigate to [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\5.0\User Agent] in regedit. Here you can create string values for 'Compatible', 'Version' and 'Platform' to control what is sent. Under the 'Post Platform' key is a whole bunch of additional parameters that will be appended to the string, so you can change or remove these.
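For Internet Explorer, the same change can be captured in a .reg file and merged with a double-click. The key path is the one given above; the values are illustrative examples of the 'Compatible', 'Version' and 'Platform' strings, not recommended settings.

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\5.0\User Agent]
"Compatible"="compatible"
"Version"="MSIE 8.0"
"Platform"="Windows NT 6.1"
```

As always with registry edits, export the key first so you can restore the original values.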

Friday, 15 January 2010

How secure is your AV Product?

We all use (or at least should all use) an Anti-Virus (AV) product on our computers to protect them from malware (yes, that includes you, Mac and Linux users). Rogue anti-malware is on the increase and users should be wary of what they install, but if we do choose a big vendor and pay money for their product, does it protect our machine from all threats?

Well, the answer is no. No security product can be 100% secure, but how secure are they in practice? There have been a number of recent surveys, and their results show that things are probably improving, but there's still a significant gap. AV-Comparatives.org showed that, in their tests, G Data was the best with a 99.8% detection rate of known malware, with Norman the worst of the 16 products at 84.8%. 'Known malware' was taken to be malware from a one-year period ending 8 months prior to the test. This is important to stress: these weren't new malware instances, they were old, known samples that all vendors will have seen and had time to develop their products to combat.

There's another potential issue as well. What settings do you use on your AV product? Do you use the default settings? Several products do come with the highest protection set as default, but not all. Kaspersky, Symantec and Sophos, for example, don't have the highest security settings by default (although Sophos, to their credit, asked AV-Comparatives to test them with default settings, unlike the other two, who asked to be tested with settings changed to high security). McAfee use a cloud-based technology called Artemis, which is on by default but requires an internet connection. Their detection rate drops from 98.7% when online to 92.6% when offline. So be wary about the settings you use, and the mode of use as well, as it can make a big difference.

AV-Test.org also performed similar tests with more current malware, with similar results. In their tests, Symantec came out top, detecting 98% of malware, and Trend Micro came bottom with 83.3%. I'll pick out a few big names so that I can give you average figures from both testing labs.

Detection & Blocking Rates for some Major AV Products
Product     Existing Detection   Blocking   Live Detection
Symantec    98.2%                92.8%      35.5%
Kaspersky   96.1%                89.9%      41.0%
McAfee      93.0%                86.7%      45.5%
AVG         93.1%                84.2%      40.0%
F-Secure    91.9%                80.2%      42.0%

This isn't the full story though. The above tests measured detection of existing malware. There are two other metrics we need to look at. The first is the removal or blocking rate: the percentage of malware instances that were actually blocked or removed by the AV product; the rest will have infected the machine. AV-Test.org rightly point out that this is a much more important metric than detection alone, because if an AV product detects malware but still allows it to install, you are only marginally better off than if you didn't know about it at all - your machine is still infected. Their tests show that blocking rates are a fair chunk below the detection rates, with the best now being PC Tools at 94.8% and the worst CA Internet Security at 73.5%. Blocking rate figures for the same set of AV products are given in the table above.

The final thing to consider is the detection rate of new malware that hasn't been seen before, i.e. from live attacks. Cyveillance ran tests feeding live attack malware through the top AV products on a daily basis to see how they performed. In their tests, cloud-based McAfee came out top at 44% and VirusBuster bottom on 16%. AV-Comparatives performed a similar test with slightly better results, ranging from AVIRA on 74% down to Norman on 32%. Again, I have averaged their figures to include in the table above.

Conclusion: you could use more than one AV product, as long as they don't conflict. Whatever you choose, it is essential that you keep the product up-to-date at all times and configure it for maximum protection.

Pragmatic Approach to Security

When dealing with security, we must be pragmatic. The resources that an organisation can dedicate to security are limited in terms of time, staff, budget, expertise, etc. Also, perfectly secure systems do not exist - accidents, attacks and penetrations will happen eventually, so plan to deal with them at the outset. Recovery after a breach must be just as much a part of the planning as preventing the breach in the first place. We all insure our cars, hoping never to claim on the insurance, and then try desperately to avoid accidents, theft or vandalism. However, in the end, a lot of us will claim at some point, no matter how careful we are. The same is true of security.

[Graph: breach costs fall and countermeasure costs rise as security increases; total cost is minimised somewhere in between]

We have to see the bigger picture and align the use of resources with the company's mission. There comes a point where a small additional amount of security costs a lot more money, time and management effort, and makes the system much less user-friendly. Wouldn't it impact the business less if we took the hit and recovered quickly and smoothly? Often the answer is yes. We have to find the optimal solution for that particular organisation. The graph above shows that as we increase the security of our system, the cost associated with breaches of security comes down, as we have fewer breaches. However, this cost will never be zero, as we will always have breaches. Indeed, breaches may still cost a lot of money, but hopefully they will be few and far between. Conversely, as our security increases, the cost of our countermeasures goes up. Therefore, the total cost will decrease with more security initially, then increase again as the countermeasures become increasingly expensive for less and less improvement to security.
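The trade-off can be sketched numerically. The cost functions below are invented purely for illustration (any real organisation's curves will differ), but they show the shape of the argument: breach costs fall with security effort yet never reach zero, countermeasure costs keep rising, and the total has a minimum somewhere in between.

```python
def breach_cost(effort: float) -> float:
    """Expected breach losses: fall as security effort rises, never zero."""
    return 100.0 / (1.0 + effort)

def measure_cost(effort: float) -> float:
    """Cost of countermeasures: grows with effort (illustrative linear model)."""
    return 5.0 * effort

def total_cost(effort: float) -> float:
    return breach_cost(effort) + measure_cost(effort)

# Search a range of effort levels for the cheapest overall position:
# neither zero security nor maximum security is optimal.
best = min(range(0, 20), key=total_cost)
print(f"Optimal effort level: {best}, total cost: {total_cost(best):.1f}")
```

The optimum falls strictly between the extremes, which is the point of the graph: past a certain level, each extra unit of security costs more than the breaches it prevents.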

These curves and the overall graph will be different for each organisation. The point I'm trying to make is that we should accept that there is no perfect security, do the best job we can, given the resources allocated, and plan for how we will recover from any breaches in security, be they minor or major. The problem comes when deciding what assets should be given priority and what is the best allocation of resources for a specific organisation. This is where security risk assessments come in. For more about security assessments and risks, see my previous post.

Welcome to the RLR UK Blog

This blog is about network and information security issues primarily, but it does stray into other IT related fields, such as web development and anything else that we find interesting.
