
Security Through Obscurity

I have been reminded recently, while looking at several products, that people still rely on the principle of 'security through obscurity.' This is the belief that your system/software/whatever is secure because potential hackers don't know it's there, how it works, etc. Although popular, this is a false belief. There are two aspects to this. The first is the SME who thinks that they're not a target for attack and that nobody knows about their machines, so they're safe. This is misguided and false, but forgivable. See my post about logging attack attempts on a home broadband connection with no advertised services or machines.

The second set of people is far less forgivable, and those are the security vendors. History has shown that open systems and standards have a far better chance of being secure in the long run. No one person can think of every possible attack on a system, and therefore no one can secure a system alone. That is why we have RFCs to arrive at open standards that work. An example of a product that failed due to this is DiskLock. This was a few years ago now, but there are modern products that follow a similar philosophy; however, it's not my intention to pick on a particular vendor or product. DiskLock was a program that encrypted files with the DES algorithm. No problem there, but it stored the key with the file, relying on people not knowing this or the scheme used to hide it. Unfortunately, with reverse engineering and chosen-key/plaintext attack techniques both can be worked out. The problem is that the secrecy won't last long, and once it has been bypassed the system should still remain secure. If it does remain secure, then there was no need for the secrecy in the first place.
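To make this concrete, here is a minimal sketch of the general pattern, not DiskLock's actual file format or obfuscation scheme, which I'm not reproducing here. It assumes a hypothetical layout in which the key is XOR-ed with a fixed mask baked into the program and then prepended to the data, and it uses a simple XOR stream as a stand-in for DES, because the weakness is the key storage, not the cipher. Once the layout and mask have been reverse engineered, anyone can recover the key and decrypt the file without knowing any password:

```python
# Toy illustration (not DiskLock's real format): the key is "hidden" by
# XOR-ing it with a fixed mask and prepending it to the encrypted data.
# The "cipher" below is a simple XOR stream standing in for DES.

OBFUSCATION_MASK = bytes.fromhex("5f3a9c01d2e4b677")  # constant baked into the program


def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Repeating-key XOR, used both as the toy cipher and the obfuscation."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))


def lock_file(plaintext: bytes, key: bytes) -> bytes:
    """Produce a locked file: obfuscated key first, then the 'encrypted' data."""
    hidden_key = xor_bytes(key, OBFUSCATION_MASK)
    return hidden_key + xor_bytes(plaintext, key)


def attack(locked: bytes) -> bytes:
    """Once the layout and mask are known, recovery needs no password at all."""
    hidden_key, ciphertext = locked[:8], locked[8:]
    key = xor_bytes(hidden_key, OBFUSCATION_MASK)  # undo the obfuscation
    return xor_bytes(ciphertext, key)              # decrypt with the recovered key


if __name__ == "__main__":
    locked = lock_file(b"my secret document", b"8bytekey")
    print(attack(locked))  # b'my secret document'
```

The strength of the underlying algorithm is irrelevant here; the only thing protecting the data is the secrecy of the layout and the mask, and that secrecy does not survive reverse engineering.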

The other context in which this phrase commonly comes up is the level of security given by implementing NAT. Here the addresses of the internal machines are obscured, and an attacker doesn't know how many machines are there or what the internal topology is. Of course, NAT will only allow outgoing connections, or incoming connections to specific ports via port forwarding, so that does reduce the chances of attacking some machines. However, a web server will still have ports 80 and 443 open and, if it isn't properly patched, will suffer in exactly the same way as if it weren't behind NAT.
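As a quick illustration of that last point, the sketch below simply attempts TCP connections to ports 80 and 443. A web server reached through a port-forwarding rule answers this probe exactly as one sitting directly on a public address would; the host name is a placeholder, not a real system:

```python
# Minimal reachability check: a forwarded web server behind NAT responds on
# ports 80/443 just like a directly exposed one, so NAT's "obscurity" offers
# it no protection. HOST is a placeholder name.
import socket

HOST = "www.example.com"  # hypothetical server, possibly behind NAT

for port in (80, 443):
    try:
        with socket.create_connection((HOST, port), timeout=3):
            print(f"{HOST}:{port} is open and reachable")
    except OSError as exc:
        print(f"{HOST}:{port} not reachable: {exc}")
```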

I'm not saying that you should tell everyone exactly how you have implemented your security, but you can't rely on secrecy to last. The important thing is to test your security thoroughly, preferably with an independent outside agency. This is particularly important if you want others to rely on your system, and it must include an audit of your code in the case of software and of your configuration in the case of hardware. Are customers more likely to trust an independent testing agency or a vendor trying to sell a product or system?
