Monday, 28 September 2009
I'm not going to talk about the underlying security of POTS, but concentrate on a couple of easy attack vectors on the user's end device that I have recently observed. A couple of weeks ago, I needed to amend something on one of my credit card accounts (I won't tell you which bank, as it's my personal credit card and I don't want phishers knowing which banks I have accounts with). This bank has an automated telephone answering system to make things more efficient and reduce the staff required - pretty standard. So I made sure that I was in a room on my own, to prevent anyone eavesdropping on my conversation, and dialled the number. The automated system asked me to type in my full credit card number on the keypad.
The problem with doing this is that the telephone will remember these digits as part of the last number dialled. Therefore, all someone would have to do is recall the last number dialled and read off the credit card number. If they actually dial it, they will be put through as the legitimate cardholder. Now, admittedly they will probably be asked some security questions on the other end before any changes are made, but these may consist of simply asking for a date of birth, which is fairly easy to find out. Even if you don't know this information, other information may be given away in the meantime (e.g. who the cardholder is, as the bank normally uses your name in any greeting). The problem is compounded if you make the call from work, where it will probably go through an exchange. Exchanges will store all the numbers dialled, including any options or credit card numbers entered on the phone's keypad. This log can simply be printed out and your details read off. Of course, the number dialled will show which bank you are using as well, although this can also be gleaned from the first 6 digits of your credit card number.
Things can potentially get worse if you use facsimile (fax) machines. Different types of fax machine work in different ways, but most keep a log of calls and faxes sent and received. This may or may not be a problem, depending on the level of detail of the log and whether you type in credit card details during a phone call made on the machine. However, some fax machines use rolls of pigment on acetate (or similar) to print out received faxes. These rolls are wound through during printing and the used portion is never reused (otherwise you would get gaps in the printing). What this means is that when you come to throw the roll away, it holds a perfect facsimile of everything printed on the machine since the roll was put in, only in negative. To get round this, you must shred, or otherwise destroy, the used roll, not just throw it in the bin.
As to whether this is more or less secure than an online transaction is a difficult question to answer. On the one hand, you often need physical access to the phone or fax machine to get to the logs, although telephone exchanges are often online. Also, sifting through a bin outside a premises isn't that hard and can often be very rewarding. On the other hand, transactions online are encrypted and people are more aware of the security implications in general. However, malware and man-in-the-middle attacks can still thwart this type of transaction, but it does require more skills than sifting through a bin.
Not all data leakage comes from computers, pen drives, etc. Sometimes a seemingly innocuous device can betray your information and breach your security. Unfortunately, you have to think of all possible attack vectors and mitigate the risks. This is why a full information policy that covers all forms of data is required.
Friday, 25 September 2009
Don Turnblade has stated that in his experience "well trained staff had a 3.75% unintentional non-compliance rate; they did not realize that installed software compromised data security. About 0.4% of end users were intentionally non-compliant, generally willful persons with strong technical skill or organizational authority who were unaccustomed to complying with computing restrictions."
So what are the different types of error? Dealing with each in turn, we have Slips, Lapses and Mistakes.
- Slips - actions not carried out as intended, e.g. pressing the wrong key by accident. Slips usually occur at the task execution stage.
- Lapses - missed actions or omissions, e.g. forgetting to log out, or a step in a configuration process.
- Mistakes - occur due to an incorrect intention, whilst believing it to be correct, i.e. they are deliberate actions with no malicious intent, e.g. misconfiguration of a firewall. Mistakes usually occur at the planning stage.
So who causes each type of error or violation, and how do we combat them? Slips and lapses are usually the fault of the user, but can be mitigated by making the error harder to commit, e.g. by having confirmation dialogs for slips and better training for lapses. Mistakes tend to be the fault of designers and are slightly more difficult to combat, as designer education is required or outside technical expertise needs to be brought in; even that doesn't always solve the problem if those brought in lack the skills and knowledge required. Finally, violations, which unlike the errors above are deliberate deviations from rules or procedures, can often be laid at the door of managers. It is often the case that a culture of violations is tolerated by senior management, who fail to impose proper sanctions or take the threat seriously.
All of these have to be dealt with to have a secure system and most of it boils down to having proper user education and training in place.
Thursday, 24 September 2009
In one of those comments, someone pointed out that in their experience users are often a weak link. Isn’t it always the case that users are the weakest link? A poorly educated/trained user can compromise the best security. Unfortunately, I have seen so many organisations that do not adequately train their users or make them aware that there are policies, let alone what they mean to their daily usage of the corporate systems. I have also come across one organisation where a top executive had all the system passwords stored, unencrypted, on his PDA. He didn’t see a problem with this as he always carried it with him!
How many organisations these days push email to mobile devices? How many of those organisations send sensitive documents around via email? Do they have encryption and password access on those devices? Not many that I've seen. The typical BlackBerry users I see have no password or PIN protecting the phone, yet it has full access to the corporate mail exchange. These devices can also store, and even sync, corporate documents. What policies do you have to cover them?
Quoting from ISO/IEC 27002:2005, section 11.7.1: A formal policy should be implemented, and appropriate security measures adopted, for mobile computing and communications activities. Controls should apply to laptop, notebook, and palmtop computers; mobile phones and "smart" phone-PDAs; and portable storage devices and media. Controls include requirements for:
- physical protection;
- data storage minimization;
- access controls;
- cryptographic techniques;
- data backups;
- anti-virus and other protective software;
- operating system and other software updating;
- secure communication (e.g., VPN) for remote access; and
- sanitization prior to transfer or disposal.
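The control areas above can usefully be turned into an automated check against a device inventory. Here is a minimal sketch, assuming a hypothetical inventory record per device; the field names and the mapping to control areas are my own illustration, not part of the standard:

```python
# Map hypothetical inventory fields to the ISO/IEC 27002 11.7.1
# control areas listed above (an illustrative subset, not the standard itself).
REQUIRED_CONTROLS = {
    "encrypted_storage": "cryptographic techniques",
    "screen_lock": "access controls",
    "antivirus": "anti-virus and other protective software",
    "os_up_to_date": "operating system and other software updating",
    "vpn_only_remote_access": "secure communication for remote access",
    "backup_enrolled": "data backups",
}

def missing_controls(device: dict) -> list:
    """Return the control areas a device record fails to satisfy.

    A field that is absent is treated as non-compliant, which errs
    on the side of caution for unknown devices.
    """
    return [label for field, label in REQUIRED_CONTROLS.items()
            if not device.get(field, False)]

# A typical unprotected BlackBerry as described above: everything
# managed except the screen lock.
blackberry = {
    "encrypted_storage": True,
    "screen_lock": False,
    "antivirus": True,
    "os_up_to_date": True,
    "vpn_only_remote_access": True,
    "backup_enrolled": True,
}
print(missing_controls(blackberry))  # → ['access controls']
```

A report like this makes the gap concrete: a device with push email but no PIN shows up immediately as failing the "access controls" requirement.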
"I feel that you can lock down with security policy and tools but this is a complex problem as the combination of mobility and technology diversity, e.g. I can use my iPhone to connect to the enterprise network and store sensitive data on it, is creating a major headache for infosec professionals. As well as the problem with laptops and USB drives we are also seeing a growing use of employee-owned mobile devices, netbooks, games consoles, smart phones, all having IP and WiFi capabilities and all capable of picking up enterprise data and email."There are a number of things we can do to stop these devices from compromising the network by blocking their use. We can block USB devices from being able to connect unless they are a managed resource, so that users can't just plug anything they bring in from home. All USB devices have an ID, which can be registered with a central authentication server to check before a computer allows it to be used. Of course this needs third-party software, but can be done quite easily. We can also block devices from being able to obtain an IP address or connect to the corporate network in the first place. We shouldn't have a free-for-all attitude on the network. It should be locked down to approved devices only. Only managed devices can connect and they will have to authenticate.
I think it’s asking for trouble to allow users to connect their own private devices to the network or services. I don’t see how you can comply with any standards or your own security policies when allowing this, as you don’t know what’s connected or how it’s configured. Even if they are secure (a very big IF), by not knowing the configuration or being able to audit it, you are surely in violation of any accreditation or certification that you may have because you cannot test or 'prove' your compliance.
Wednesday, 9 September 2009
Responsibility for the notorious Heartland Payment Systems data breach late last year has been debated recently, with Heartland’s CEO suggesting that their PCI auditors let the firm down, while the auditors insist they can’t be responsible for checking absolutely everything. This case brings to light the reality that absolute security is an impossible goal, and that audits are only as good as an organization’s vigilance in following proper security procedures after the audit has been completed.
See my second video blog here.
Friday, 4 September 2009
Several recent data breaches at major enterprises and governmental agencies stemmed from the loss or theft of mobile computers and USB drives. While encrypting the data on these devices isn't a bad idea, the larger question is why sensitive personal information was stored on the mobile device in the first place.
See my first video blog for Comodo Vision here.