Economics and Security

No, this post isn't about the cost of security - at least not in direct dollars!

I've been meaning to make this post for a while. Recently I read a great paper from Microsoft Research titled "So Long, And No Thanks for the Externalities: The Rational Rejection of Security Advice by Users".

Some of the points in this paper really hit home, challenging the common wisdom about why users reject or bypass security and highlighting the indirect cost to them of guarding against something from which they're unlikely to suffer.

Applying economic ideas such as externalities to Information Security is not new; Bruce Schneier has commented on it in the past in regards to software development, and it is also mentioned in a chapter of Beautiful Security (which I don't have handy to pull the reference from).
Despite the old gag definition of economics being "The science of explaining tomorrow why the predictions you made yesterday didn't come true today" it is sadly still a step up from much of the FUD, voodoo and magic numbers pulled out of the air by some IT and IT Security folk.
One of the great challenges is, as always, getting useful metrics...

Another major point in the Microsoft paper that really made me sit up and think was their assertion that "Thus, to a good approximation, 100% of certificate errors are false positives. Most users will come across certificate errors occasionally. Almost without exception they are the result of legitimate sites that have name mismatches, expired or self-signed certificates."
Thinking back over many years of surfing the 'net, I had to agree. I couldn't think of a particular instance where I encountered an SSL certificate error that wasn't a false positive.
The bad guys don't use SSL certificates... why bother when you can fool end users by placing a padlock as a favicon, or just by using an image of a padlock next to the login box on your phishing site?
Developers of legitimate sites don't help the situation either, by mixing secure and nonsecure content on the same page, which brings up warning dialog boxes. What's your average end user to do? Assume the legitimate page is bad and deny themselves access to a service, or click OK and further reinforce the message that it's alright to click through those boxes and nothing bad will happen?
I visited two websites recently, both owned by major IT companies, that had mixed their secure and nonsecure content in this manner.
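If you run a site of your own, catching this is not hard - a rough first pass is simply to scan the served HTML for plain-http subresources. Here's a minimal sketch in Python (the function name is mine, and it deliberately ignores stylesheet hrefs and CSS url() references, so treat it as illustrative only):

```python
import re

# Match http:// URLs used as subresources (src attributes); links (<a href>)
# to http pages are fine, but scripts, images and frames loaded over plain
# http on an https page are what trigger the mixed-content warnings.
MIXED = re.compile(r"""src\s*=\s*["'](http://[^"']+)["']""", re.IGNORECASE)

def find_mixed_content(html: str) -> list[str]:
    """Return the plain-http subresource URLs found in a page."""
    return MIXED.findall(html)
```

Anything this returns on a page served over https is a candidate for the browser's mixed-content warning.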
What's the solution? SSL everywhere and browsers that won't allow non-SSL verified connections?

Training end users is hard. Bringing them onside as allies in your security efforts, without overburdening them with externalities or overstating the likely harm by citing worst-case scenarios (ie: introducing FUD), is even harder.

Wikipedia & Reputational Risk

A while ago I came across an interesting story on The Register where Wikipedia banned an IP address for posting racist comments - the catch? The IP address belongs to Volvo's IT division.

Wikipedia is a site that I imagine is not blocked or banned in many companies, as it's used as a major source of information by people all through business (the merits or accuracy of which is a discussion for another time).

Volvo aren't the first organization to be caught wikifiddling; when the Wikiscanner was released a few years ago, a range of organizations were found to be questionably editing information, including the then-Australian Prime Minister's department and the CIA.

As far as I know the organizations previously 'outed' were mainly revealed to be engaging in pointless vandalism, such as changing Wolf Blitzer's name or adding 'jerk' multiple times to George W. Bush's profile.

A charge of racism is, however, a whole different situation, and one that can certainly bring extremely damaging attention to an organization.

But what to do? Blocking access completely is too draconian for most companies. Policies on blogging and editing online web 2.0 type sites (such as Wikipedia) are a start. Educating the workforce on the type of damage they can do, and ensuring they know their access is monitored, can act as a proactive deterrent. Combine this with web monitoring/auditing of access and you can follow up on offenders quickly in the event of an incident.

It often seems that even 'IT-savvy' staff can completely forget that their actions on the internet can be tracked, traced and may well leave a permanent imprint, especially when it comes to social networking. Adding some general awareness to Information Security education programs along with the usual 'don't click on attachments' may pay off in the long run.

CERT Australia

Looks like our government has decided to increase its efforts in 'cyber-security' by retiring the old GovCERT and rolling the excellent AusCERT into the new CERT Australia (although they need a snappier name!).

It's encouraging to see the government making an effort to assist and encourage increased information security awareness, amongst both businesses and individuals. I can only hope it all works out better than the National Broadband Network and National Internet Filtering Scheme have so far...

Next week David Campbell, the Director of the Australian Government Computer Emergency Readiness Team, is speaking at the AISA Annual Seminar Day in Sydney, so I'm looking forward to hearing what he has to say about this new body, its mandate and goals.

The SID Duplication Myth

Microsoft's Mark Russinovich (formerly of Winternals fame) has posted a great bit of information busting a popular myth about duplicate SIDs on cloned machines.

I admit, I always thought running something like NewSID was mandatory on cloned machines for correct Windows domain and WSUS functionality, but apparently that's not the case.

I can recall some product (it may have been Trend AV, but I could be wrong) that did seem to rely on the machine SID (ie: on cloned machines pre-NewSID there were problems), but then Mark does mention that while no Microsoft applications look at the machine SID, other 3rd party applications may still require the use of something along the lines of NewSID.

Also be wary of cloning machines after joining them to a domain as duplicate domain SIDs are a different thing entirely and can cause headaches...

World's greatest resume?

Not really security related, but with the CISSP endorsement process requiring an updated resume, when an email with the below hit my inbox I had to share.

Having been a fine arts student and with a hobby interest in design, this has to be one of the most imaginative and visually stunning resumes ever!

CISSP

I'm happy to say I received my CISSP results today and have passed.
Big thanks to Mark Gill for organizing a CISSP study group through AISA that really helped keep my studies on track. I hope I get the chance to repay him by assisting in the study group when it runs again next year. Thanks also to the guest speakers and my fellow students at the study group who offered their insights, experience and expertise (and bad jokes!).

For anyone preparing for the CISSP who is interested, I used the Shon Harris 4th edition CISSP textbook, the ISC2 official CBK and the Exam Cram CISSP text for reading on the train.

The Shon Harris book is quite in depth, going too deep in some areas compared to what you need to know for the exam, but good as a reference text. I found the official CBK very dry and hard to read, but useful as a companion to the Shon Harris book to compare the amount of treatment different areas received (it can vary widely between the two books). The Exam Cram book is, as you might expect, brief and to the point. I felt it was a good review book due to its portability, but not enough to use as your only text.

I also used the sample review exams at cccure.org, but to be honest found many of the questions there to be quite dated (lots of 'Orange book' questions rather than Common Criteria for example) and a few answers to be incorrect. Better than nothing I guess, but I wouldn't take your results on their sample exams as a strict indication of your preparedness for the exam itself.

So what's next? So much to learn, so little time...

Even more default passwords!

It's been widely reported that an Australian man has developed a new iphone virus that 'rickrolls' owners of jailbroken iphones.

The virus spreads via ssh using the iphone's default password of 'alpine'. Normally ssh access is not available on a standard iphone, but enabling access is a requirement of jailbreaking the iphone to get around restrictions placed on the device by Apple.

This comes hot on the heels of a ransomware scam by a Dutch hacker holding jailbroken iphones 'hostage' for €5, which uses the same method to gain access to jailbroken phones. (The Dutch hacker has since apparently stopped asking for money and has now provided instructions on how to undo his changes.)

Does this represent a big security hole for Apple? Not really, as both attacks only affect jailbroken iphones. If you are jailbreaking an iphone, or modifying any device against the manufacturer's instructions, then the onus of providing a secure device has passed from the manufacturer to the end user - something which most end users probably don't think about.

While both 'hackers' have claimed the release of their viruses was an educational 'wake-up call' for users with jailbroken iphones to ensure they change their default passwords, the simplicity of the attacks could mean something more sinister is on the horizon.
The pair of them may also be in hot water, as even a relatively harmless change like rickrolling can have unintended legal consequences (the attempted extortion from the Dutchman aside).

If you have a jailbroken iphone, change the default password asap!

*edit* I just came across this post from Sophos which has a screenshot of some of the virus source code:

Social Engineering in Real-World Computer Attacks

Great little article over at SANS on Social Engineering in Real-World Computer Attacks

More Default passwords?

A young Queenslander has been charged with hacking* offences after 'hacking' several ATMs to withdraw $30,000 in cash.

The article is short on detail about how these 'hacks' occurred, but they do suggest he "found information on the internet and in an ATM manual that allowed him to change the machines' settings so he could make huge withdrawals of cash"

What sort of information in a product manual would allow you to do something like this? I'm betting it was some kind of default password.

It isn't stated what bank owned the ATMs or if they were all from the same bank - I'm guessing they may have been. After all if you have a trick to do something like this it probably only works on one model of ATM, and if it worked on one ATM from a particular bank, it probably works on another!

Default passwords and misconfigured devices are unfortunately all too common. I suspect the practice is even worse when people are dealing with specialized, unusual devices like an ATM. This seems to be an example of security by obscurity at work: the incorrect assumption that the default password didn't need changing because only authorized personnel have access to the product manual. A quick google for ATM manuals and default passwords shows plenty of results!

Security by obscurity can be a controversial topic in security circles. At its core is the idea of being secure by design, rather than secure because of secrecy. In a recent discussion I was part of with a group of security professionals from different backgrounds there were mixed opinions on the topic. Should your security design have no secrets? Should you publish it on the internet? Well, to me the common sense answer is no, as obscurity or secrecy does have a place in security design and implementation. The important thing is that your security should not rely on the design being kept secret.

While I'm certainly not condoning or encouraging this type of crime, and there is a degree of supposition on my behalf in assuming default passwords were the cause, it would seem to fit. While the young man deserves punishment for the crime, what about the failure of duty of care on behalf of the bank? The lax security procedures?

*I don't know if being able to google for an ATM manual makes you a 'hacker'....

More multi-factor authentication

Still on the theme of biometrics, News.com.au is today reporting that Aussies favour fingerprinting to prove ID online. The 'proof' comes from a Unisys Security survey of 1200 Australians. Now I haven't read the survey, but the news item also states: "Unisys...which provides organisations, including the immigration department, with biometric tests..."
Hmmm.

Fingerprinting has many problems, some of which I mentioned in yesterday's post, but others include whether fingerprints are sufficiently unique to be used for authentication, how (and if) users will protect their fingerprints any better than they do their passwords, and what happens if your fingerprints are compromised?

Fingerprints aren't hard to get, especially if you have physical access to the victim and their environment. As for remote capture, well, all those fingerprints will have to be transmitted and stored somewhere, where they can be captured en masse, or they are just as vulnerable to phishing and man-in-the-middle attacks as passwords. Had all 10 fingerprints captured by the bad guys? Uh-oh! Even worse if they're used by law enforcement and immigration as unique identifiers!

There are also great variances in the accuracy and the methods used for verification in different fingerprint readers. Having banks (or whoever) send out readers to all their customers goes back to the convenience factor I mentioned yesterday.

I think it will be a long time before we see fingerprinting as a common method of web authentication...

Three-factor Authentication

Apparently the National Australia Bank (NAB) are looking at moving to three-factor authentication. For those who are unaware, 'multi-factor' authentication involves authenticating a subject through a variety of different methods, most commonly two of the following:

  • Something you know (eg: a password)
  • Something you have (eg: a security pass or token)
  • Something you are (eg: biometric security such as a fingerprint or iris scan)
and occasionally adding:
  • Somewhere you are (only allowing access from a specific place, such as using a RAS call-back system)
Multi-factor is generally considered more secure than single-factor authentication, as an impersonator must capture or reproduce more than just a password (the most common single-factor authentication mechanism).
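The mechanics of checking two factors can be sketched in a few lines of Python. This is a minimal, illustrative sketch only - the function names and parameters are my own, and the OTP scheme follows the time-based style of RFC 6238 used by most tokens and authenticator apps:

```python
import hashlib
import hmac
import struct
import time

def hash_password(password: str, salt: bytes) -> bytes:
    # Something you know: store a salted, deliberately slow hash,
    # never the password itself
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def totp(secret: bytes, at: float, step: int = 30, digits: int = 6) -> str:
    # Something you have: a time-based one-time password in the style of
    # RFC 6238, as generated by hardware tokens and phone authenticator apps
    counter = struct.pack(">Q", int(at // step))
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def two_factor_login(password: str, otp: str, salt: bytes,
                     stored_hash: bytes, secret: bytes,
                     now: float = None) -> bool:
    # Both factors must pass; constant-time comparison avoids timing leaks
    now = time.time() if now is None else now
    knows = hmac.compare_digest(hash_password(password, salt), stored_hash)
    has = hmac.compare_digest(otp, totp(secret, now))
    return knows and has
```

Note that neither factor helps if the user hands both over to a phisher - which is exactly the weakness discussed below.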

So if two factors is more secure than one then three must be even better right? Well that all depends on a number of factors (excuse the pun!).
The more factors you add to the equation, the more inconvenient authentication becomes for the end user. Convenience is important. This is why passwords are still so popular, despite being shown to be extremely weak security - many people will give away their password for a candy bar (especially if you are a woman, apparently!)

So when implementing two-factor authentication, convenience needs to be taken into account. RSA tokens that can attach to a keyring, and One Time Passwords (OTP) that are sent via SMS to a registered mobile phone, are examples of incorporating a reasonable measure of convenience into the authentication process. I know HSBC uses RSA tokens for their internet banking login authentication, while NAB take a different approach, using only a password for login but an OTP sent via SMS to verify any money transfers (for personal customers anyway; business customers get a token).

All sounds terribly secure right? Well, no. As security guru Bruce Schneier commented back in 2005 in reference to two-factor security: "...it solves the security problems we had ten years ago, not the security problems we have today".
He was, and still is, right. Phishing attacks and man-in-the-middle (MITM) attacks are examples of very old attacks that can defeat two-factor authentication by targeting the user. If you can fool the user into providing you with the information you need, you can fool the authentication mechanism.

So if two-factor authentication is broken, three-factor authentication will save us! Right?
I'm not convinced. The original article mentions using voiceprint identification for the third factor (something you are). Hmmm.
Biometrics are tricky to say the least. Faces change over time as people age or gain/lose weight, and other conditions such as lighting and distance can distort the image viewed by face-reading cameras and lead to false positives or false negatives. Fingerprints can change due to accidents or even minor injuries (a papercut), and many fingerprint readers have been shown again and again to be easily defeated. Iris scans are very accurate and don't tend to change, but are hardly easily portable or suitable for mobile or home internet banking.
As for voiceprints, well, ever had laryngitis? No? A cold? Bad phone reception?
I'm not convinced they're the way to go, and neither are some experts, who state: "There is no such thing as a voice print, it's a very very dangerous term. There is no single feature of a voice that is indelible that works like a fingerprint does."

The other unanswered question is what the NAB hope to achieve by adding a third factor to their authentication. "More security" is not much of an answer - is it anything more than a marketing one-up on their competitors? ("We're the only one who uses three-factor security! Bank with us!")
It all seems a bit more like security theatre than real security. Perhaps NAB need to look at their internal security first...

Of Bombs and bums part II

I posted a few weeks ago about the 'bum bomber' who tried to blow up a Saudi prince with an explosive hidden in his rear-end.

Well it seems at last some governments are worried about a sudden upswing in 'bum bombers' and are proposing full body scan x-rays.

One can only hope this is more media beat-up and speculation than actual plans...

Passwords!

By now it's a safe bet anyone working in the security space has heard about the leaked passwords from hotmail, yahoo and gmail.

The most interesting thing so far to come out of the leak is the results of an analysis of the passwords exposed. The results are an interesting mix and shed some light on how the message about using strong passwords is being received out there in user-land.

The most common password found was '123456' with '123456789' coming in second. It's enough to keep a security guy up at night!

Amazingly 'password' didn't make the top 20 list, but despite the average password length being 8 characters, 42% of all the passwords listed were lower case only and only 36% were what we commonly consider 'strong' passwords (in complexity if not length). This shows the message is not being heard.
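The analysis behind those percentages boils down to bucketing each password by the character classes it uses. The exact rules the analysts applied aren't stated, so the thresholds in this little Python sketch are my own assumptions:

```python
import string

def classify(password: str) -> str:
    """Bucket a password roughly the way the leak analysis did:
    'lowercase-only' vs 'strong' (at least 8 characters mixing three or
    more character classes) vs everything in between."""
    classes = [
        any(c.islower() for c in password),      # lower case letters
        any(c.isupper() for c in password),      # upper case letters
        any(c.isdigit() for c in password),      # digits
        any(c in string.punctuation for c in password),  # symbols
    ]
    if classes == [True, False, False, False]:
        return "lowercase-only"
    if sum(classes) >= 3 and len(password) >= 8:
        return "strong"
    return "weak"
```

Run something like this over a leaked list and you get exactly the kind of breakdown reported: a huge lowercase-only bucket and a much smaller 'strong' one.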

Why is this a concern to the security guy in the enterprise? Well the same users are likely to be in the office and these results show that the password message is not getting through. Not to mention employees with good intentions emailing work documents to themselves @hotmail so they can be diligent and work on them at home. That same hotmail address with the '123456' password....

The good news (comparatively!) is that the passwords were not gathered due to a flaw in the security of these industry heavyweights, but via phishing attacks against the users themselves.
The problem, though, is that even when users are diligent and more complex passwords are used, those same users can still be suckered in by phishing attacks. Even the head of the FBI was banned from online banking by his wife after almost falling victim to a phishing email.

A senior security engineer for nCircle recently presented at SecTor the results of a survey of both technical and non-technical users, which showed that while 83% of users checked for the magic padlock in the browser when entering their credit card details, a dismal 41% checked for the same padlock when entering a password. Not that it helps much - the magic padlock can be easily faked.
Unsurprisingly almost 50% of users also clicked through security warnings without paying attention to them. In this we're paying the price for training end users to 'just click ok' through countless exposures to buggy software.

People can't be relied upon to pick strong passwords or read security warnings. Security guru Bruce Schneier wrote about this back in 2006, when 100,000 myspace accounts were exposed through a phishing attack. That wonderful password '123456' made the top 20 back then too, but the best performer was 'password1'.

Bruce comments that "We used to quip that 'password' is the most common password. Now it's 'password1.' Who said users haven't learned anything about security?"

He's completely correct and in fact I'd hazard a guess that they've continued to learn and the most common password these days (where complexity rules are applied) is 'Password1'.

What's the answer? Nothing simple comes to mind, but clearly our education of users isn't working today, we need to do better.

And finally a more humorous look at choosing a password...

Stick Figure's guide to Advanced Encryption Standard (AES).

Awesome.

So you wrote your own web server?

Since I am posting something I can only assume I have an assignment due today, let me see... Assignment 1...28th of September... doh! Ah well, better get on with it (writing a post that is).

I am constantly amazed at the lengths developers will go to to guarantee the insecurity of the application they are writing. The application I am evaluating at work at the moment is a web application that, up until this version, has run atop IIS. Now, in all their wisdom, the manufacturer has decided it would be a far better idea to write their own web server. But wait, it gets better: the services for this particular application need to run with local admin rights - including the web server. Wait, let me get this straight: you want me to expose a custom-written web server, with who knows how many buffer, stack and heap overflow vulnerabilities, not to mention race conditions, memory leaks et al, running as local administrator, to the internet? Let me think about it. NO!

Writing a web server is hard and re-inventing the wheel is simply unnecessary. This goes for more than just web servers, encryption, authentication (another recent doozy) and authorisation schemes among others have already been built for you. If you are homebrewing something like this you are doing it wrong.

Of bombs and bums

The Register is reporting about a recent suicide bomber attempt on a Saudi Prince where the would-be assassin apparently concealed an explosive device in his *ahem* rear-end, which he then detonated upon meeting the Prince, resulting in more mess than injury (to the Prince anyway!)

I can remember flying out of Sydney not long after the infamous 'Shoe bomber' had been caught trying to destroy an airliner mid-flight. This was not long after the September 11 attacks and the already heightened airport security in place went into a new gear with the revelation that the humble shoe could be a new weapon.

I was pulled aside while going through a security check and asked to remove my shoes, which were then prodded, poked and flexed by a stern looking security guard before they were returned. No real inconvenience.

After reading the Register's article, I did give out a sigh of relief that Richard Reid hadn't tried to blow up that plane with explosives hidden elsewhere, otherwise those airport security checks could be even more painful!

What does the above have to do with information security? Not much, aside from raising awareness of the lengths to which people will go to accomplish their goals. For insiders intent on stealing data, even the most stringent security checks that may include inspecting laptops, cameras, thumbdrives and ipods would more than likely fail to find storage hidden in a pen, sunglasses, jewelry or coins.

Legal ≠ Secure

A recent discussion about the security of an application generated the response "But this was OK'd by Legal so no more needs to be done".

Legal ≠ Secure. When a Legal department is asked for input, they are purely concerned with determining whether whatever is being presented to them contravenes the law. Most of the time the law will state something along the lines of "due care must be taken not to disclose data" rather than "you must use a minimum of 128-bit encryption to encrypt the data and the transmission".

What is due care? Well, that's up to the judge to decide after the lawsuit has begun. Lawyers aren't normally Information Security professionals (well, none I know!) and in fact often suffer from the same mindset as most non-IT professionals in that they tend to lump all things IT into the same basket*. As far as they're concerned, if someone in IT said we've done our best to secure something, they'll assume we've done due diligence and sign off, not really making the distinction of whether the 'IT guy' (or gal!) is technical or non-technical, a programmer, sysadmin or IT security expert. It may only be later, during the court case, when a prosecuting expert testifies that using DES to encrypt those passwords wasn't a good idea.**
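To make that expert witness's point concrete: passwords shouldn't be encrypted with DES (or anything else reversible) at all - they should be hashed with a salted, deliberately slow KDF, so there is no key to steal and no ciphertext to decrypt. A minimal sketch using Python's standard library (the function names and parameter choices are illustrative, not a recommendation):

```python
import hashlib
import hmac
import os

def store_password(password: str) -> tuple[bytes, bytes]:
    # Hash, don't encrypt: a fresh random salt defeats rainbow tables,
    # and scrypt's cost parameters make brute force expensive.
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    # Recompute with the stored salt and compare in constant time
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)
```

Nothing here is exotic - it's all in the standard library, which is rather the point: due care is cheap if you know to take it.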

When an IT Security Professional is asked for input, they generally have a pretty good grasp of legal requirements (well the good ones will!) and can always see legal for clarification. They are the ones who can ensure from a technology standpoint that the company is obeying the letter and the spirit of the law.

You wouldn't ask an IT Professional to organize your legal defence, so don't ask a lawyer to vet the security of your applications. While lawyers have their part to play in ensuring that the law is being upheld, Legal ≠ Secure.


*In fairness to lawyers, I probably lump them all into the same basket too, not really paying attention to the difference between a patent lawyer and an ambulance chaser.
**If you're a lawyer reading this and don't understand this comment, go ask a friendly IT Security Professional!

APRA IT Security Risk Guidelines

APRA have released a discussion paper and draft best practice guide on the management of IT security risks.

APRA are the Australian financial services industry regulatory body. They oversee banks, credit unions, building societies, insurance and superannuation companies.

While light on specific detail, it serves as a quick guide to what is expected of organizations under APRA's jurisdiction. It's a neat, concise set of guidelines that's not too jargon-heavy - ie: good to give Management an overview of what is considered best practice (or 'prudential practice' as APRA call it).

I quite like the recommendation that organizations need to have "an overarching IT security risk management framework, addressing matters including an IT security strategy and a hierarchy of policies, standards, guidelines & procedures; and clearly-defined security principles for this strategy, addressing issues such as defence-in-depth, control diversity, breach detection and denial of unnecessary permissions/protocols."

It's good to see a body such as APRA publishing a document like this; I think it really helps raise awareness of some of these issues, which may be lagging here in Australia compared to other parts of the world. My only criticism is that it's only a 'prudential guide' and non-enforceable, but hopefully that may change in the future.

The papers are available here.

Unusual Vectors

Threatpost is reporting on one of the more unusual malware vectors: a pentester sending bogus CDs and letters, supposedly from the National Credit Union Administration, containing training materials. The CDs, of course, contain malware, and the real lesson is not to judge a CD by its cover letter.

There's nothing new about this type of attack vector; it's similar to an old case of pentesters who left usb sticks outside their target and watched the employees 'find' the drives and then, as expected, plug them into their computers as soon as they got into the office.

While a well trained staff should have had second thoughts about the latter, the former is much more troubling. How would even a trained security officer know the difference between a 'real' training CD from an official body (or partner company) and a fake?

Many organizations are constantly receiving CDs in the mail, legitimate CDs from partners, regulatory and government bodies or software trials/updates from vendors. Should someone with malicious intent be targeting an institution (or institutions), then slipping this type of trojan-laden CD into the mail probably wouldn't be all that hard. Especially if it was disguised as something the organization was expecting or receives on a regular basis.

The downside of this type of spear-phishing attack is it becomes more difficult to maintain the level of anonymity that the internet can provide once you are passing out physical discs, so although it might have a higher strike rate for the attacker, the risks of being caught are also greatly increased.

I can recall dealing with a company in the past that had simply removed all the CD drives from their PCs to stop employees bringing in malware - something similar to the old 'supergluing the usb ports' trick. While an effective measure, it is somewhat extreme and will more than likely cause more problems than it solves.

Nonetheless this type of attack is a troubling thought for those tasked with protecting a company's information assets. The only good news is that it is unusual and unlikely to happen to you, but it might be worthwhile mentioning in your next security awareness course.

In plain sight

Darkreading has a great article on Weaponizing the ipod touch.

In short, it covers a DefCon presentation about turning the ipod touch into a wireless network penetration tool. Although not blessed with great processor or memory capability, it does have generous storage, and with some specialized versions of tools such as TCPDump and NMap it can quickly become a rather stealthy headache for the corporate security guy.
While the guy (or gal!) sitting in the lobby of your building or in the carpark with a laptop out may arouse some suspicion, the same person pecking away on their iphone or ipod touch wouldn't even warrant a second glance in most cases.

As processing power becomes more and more portable - from smarter phones and personal entertainment devices to wearable computers - ensuring any wireless networking in your company is properly secured will become more and more crucial. Standards and configurations that may have been sufficiently secure a year or two ago will need constant review to ensure security is maintained. The wired network is far from immune from danger either: smaller and smaller devices plugged into rarely used network ports in conference rooms or unused offices can sniff traffic and beam data back to an attacker, or simply collect information until they are retrieved.

Educating the corporate user base to ensure they understand the dangers of using wireless networking outside the office will also become increasingly important. With more and more corporate users demanding access to increasing amounts of corporate data from home or on the move, from cafes and airport lounges, the danger increases of malicious networks performing MITM (man in the middle) attacks or capturing credentials by impersonating 'free' wireless services.

While for some, simply not providing wireless access is the current option, the day where that is acceptable for business is coming to an end, so get ready. Even the wireless police can only do so much!

Now where did I put my ipod touch?

Failing Securely.

The Australian is reporting a clever fraud scam where the criminals arrive after hours and cut the phone lines to stores before turning up during business hours and purchasing expensive items with stolen credit cards. With the phone lines down, the merchants have the choice of turning away the sale or manually processing the card and therefore doing without the normal credit card verification. A difficult choice, especially in the current tough economic times.

What this scam highlights for the security conscious is not so much the lack of physical security around the phone lines (although that is a concern, it is not under the control of the merchants) but the fact that the backup system (manual processing) lacks the verification of the primary system.

Businesses can suffer the same problem, where security is relaxed in a Disaster Recovery environment or is viewed as a secondary concern to restoring business processes. It may be that systems and applications at a DR site are patched less frequently or software is not kept at the current version as backup sites can be 'out of sight, out of mind'.

When designing a Business Continuity Plan, keep in mind that should the business need to fail over to backup systems or a DR site, it must be able to do so not only quickly but also without compromising the normal level of security.

After all if you are already suffering from an event that requires the use of a DR site (like a fire or flood), the last thing you need is a massive virus outbreak on your DR network or your backup web servers hacked....

Security Theatre - An Example

Bruce Schneier frequently talks about 'security theatre', or the illusion of security. I saw a perfect example the other day when visiting a secure datacentre's co-lo. Visitors are required to sign in with a receptionist who sits behind bullet-resistant glass and bullet-resistant doors. Having verified who you are, they take you to the co-lo through a door with swipe card access, which must be closed before using the hand print scanner and a PIN to open the next door, in a kind of airlock setup. "Wow, this must be a secure datacentre" you think, until you realise that at the other end of the lobby is a second door into the exact same datacentre, this one with swipe card access only: no PIN, no biometrics. I'll give you one guess which door the staff use.

CAPTCHA

I came across this CAPTCHA method while reading Whirlpool today (original thread here). It's an interesting idea but a horrible implementation; I like it from the point of view of accessibility and usability, but it's incredibly easy to defeat in its current form. As was pointed out in the thread, to be effective the span class and the separating character both need to be random, as probably does the interval at which the hidden characters are inserted.
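As I read the technique, the server interleaves decoy characters into the answer, hidden from human eyes by CSS, so a person sees only the real characters while a naive scraper reading the raw HTML sees the decoys too. A rough sketch of what a properly randomised version might look like (the class-name length, decoy ratio and helper name are all my own invention):

```python
import random
import string

def make_captcha_html(answer: str) -> str:
    """Interleave the real answer with CSS-hidden decoy characters.

    Per the Whirlpool thread's criticism: the hidden-span class name, the
    decoy characters and the insertion points must all be random for each
    challenge, otherwise a bot can strip them out with one fixed rule.
    """
    hidden_class = ''.join(random.choices(string.ascii_lowercase, k=8))
    parts = []
    for ch in answer:
        parts.append(ch)
        # Randomly decide whether to slip a hidden decoy in after this char
        if random.random() < 0.6:
            decoy = random.choice(string.ascii_letters + string.digits)
            parts.append(f'<span class="{hidden_class}">{decoy}</span>')
    style = f'<style>.{hidden_class} {{ display: none; }}</style>'
    return style + '<p class="captcha">' + ''.join(parts) + '</p>'
```

Even randomised, of course, any bot that renders the CSS (or simply drops everything inside a `display: none` rule) defeats it, which is why it remains more of a speed bump than a CAPTCHA.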

Our first follower...

Hey hey, we have our first follower... Welcome... I guess that means we had better add some more content (hmmmm.... that mug shot looks spookily familiar)

Security Disasters

Urgh... if I don't post something soon Justin is going to bump me off the front page... but I guess since I have now handed in my second ethics assignment I can spare a few minutes... before moving on to the Network and Security Administration one (sigh)

I read this rant by Marcus Ranum on the train this morning, a very enjoyable read and eerily familiar... Though I do find his contention that "The failures I am describing are failures of hope" a little unfair, I can hardly be to blame for all information security problems... sorry, it's late and I have just finished an ethics assignment.

More on security disasters once my brain stops feeling quite so much like unset papier mâché...

Security Maxims

Now this is just great.

My favourites:
Ignorance is Bliss Maxim: The confidence that people have in security is inversely proportional to how much they know about it.

Arrogance Maxim:
The ease of defeating a security device or system is proportional to how confident/arrogant the designer, manufacturer, or user is about it, and to how often they use words like “impossible” or “tamper-proof”.

Self destructing data

An interesting article over at the New York Times (original site here) on 'self-destructing' messages (encrypt the data and throw away the key, really).

The researchers at the University of Washington have developed a system to help control how long user data is available 'in the cloud'. Recognising that end users have little control over where their data in the cloud is stored, or even who has access to it, the Vanish system is designed to help the users control how long anyone (themselves included) can access the data.

The system works by encrypting the data with a key distributed in bits throughout a peer-to-peer network.
Now encryption is nothing new, but the difference with Vanish is that neither the sender nor the recipient hold the key in the long term.
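From my reading, Vanish uses Shamir-style threshold secret sharing to scatter key pieces across a DHT; the core "nobody keeps the key" idea can be sketched with a much simpler n-of-n XOR split (the helper names here are mine, and unlike Shamir's scheme this toy version loses the key forever if even one share goes missing, which is exactly the property Vanish exploits for expiry):

```python
import os
from functools import reduce

def _xor(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n_shares: int) -> list[bytes]:
    """n-of-n XOR split: each share alone is indistinguishable from random
    noise, but XORing all of them together recovers the key."""
    shares = [os.urandom(len(key)) for _ in range(n_shares - 1)]
    final = reduce(_xor, shares, key)  # key XOR all the random shares
    return shares + [final]

def recover_key(shares: list[bytes]) -> bytes:
    """XOR every share together; the random shares cancel, leaving the key."""
    return reduce(_xor, shares)
```

Push the shares out into a peer-to-peer network and, as nodes churn away over the following days, the key quietly evaporates and the ciphertext becomes permanently unreadable, to sender and recipient alike.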

While an interesting piece of research with some long term potential, this certainly doesn't eliminate the dangers of storing your data in the cloud.

Just ask Twitter, who learned the hard way a couple of weeks ago that someone determined (or bored!) enough will find the weakest link and exploit it.

Information Security is often described as 'asymmetrical warfare', a battle in which the good guys have to find and plug all the holes and the bad guys only have to find one poorly defended point.....

Wardriving Police

It was recently reported that police in Queensland would start 'wardriving' around select Queensland towns as part of a public service to educate residents and small businesses on the dangers of running unsecured wireless networks.

This is not the first time this has happened, back in 2006 the Douglas County, Colorado Sheriff's department started doing the same thing. I couldn't find any information on how well it went, or if they are still doing it to this day. There didn't seem to be any information on their website. I have emailed them to ask if they're continuing the practice.

I have mixed feelings about this one. On one hand, education is part of law enforcement, just as educating the users in your business can assist in securing your network and your data. On the other hand I imagine there is normally enough 'real crime' to keep generally overworked police busy, and I doubt the general public even want to hear the message. Public service announcements about drinking and driving, smoking and speeding haven't slashed the instances of those three things and they will kill you!

Manufacturers providing home wireless routers that force a password change during install and have encryption turned on by default would probably do more good than the police and public service announcements combined. The average home user doesn't want to think about computer security; they just want their new toy to work, just like their TV and DVD player did when they plugged them in.

Despite society getting more tech-savvy, your average consumer doesn't want to have to get a degree in computer science or an MCSE to set up a printer. They have a hard enough time moving from Windows XP to Windows Vista.

All in all, providing it doesn't take away from more important policing, I think more education is a good thing and at least they're trying something different up north.

Hopefully I'll hear back from the Douglas County Sheriff's office and find out if they had much success....

Download limits and Security

I saw a few articles recently (to which I would post a link, but I can't find them again...) about how download limits are bad for security. The basic point being made was that developers can't be trusted to deliver secure software, so a plethora of security updates is inevitable. People subject to download limits may (or probably will) choose to spend their precious allowance on things they perceive as far more valuable to themselves than a patch for Windows or Acrobat.

The sudden interest seems to have come on the back of US ISPs such as Time Warner Cable looking at charging customers by the byte which has led to a consumer advocacy group asking Congress to investigate whether charging by the byte is 'price gouging'.

While it may be new for the US, this type of download limitation and additional charges for exceeding set caps is nothing new here in Australia or many other parts of the world.

But how could this affect security?

I was told a story by a South African Microsoft employee about the way ISPs divided up download limits in the Republic. As far as I recall, there was a fairly generous allowance for sites hosted within South Africa and a much smaller one for sites based overseas. As Microsoft did not have a Windows Update server in South Africa, people were unwilling to update Windows and burn up their precious overseas download limit. A partial solution came when another Microsoft employee set up a private WSUS server within South Africa and advised people to connect to it to obtain the frequent updates.
While there are obvious potential security issues with that solution, it is perhaps the lesser of two evils compared to not patching at all.

But do 'regular' users really pay all that much attention to their download caps? All sorts of applications rely heavily on internet access to download updates, from Windows and Adobe Acrobat to iTunes and anti-virus products. Would someone really disable their AV updates to save download allowance?

Speaking to a few non-IT friends, the prevailing opinion is that it is not something they even think about, and I imagine that is the common view. I suspect it would take being heavily slugged with extra charges before most people even look at their download limits - although I have heard of people using 3G tethered internet connections on global roaming being unhappily surprised with huge bills after unknowingly downloading patches and updates automatically while travelling.

At this point it seems like much ado about nothing, and the introduction of download limits in the US will hardly lead to a new age of poorly secured, unpatched systems. The bigger problem is the underlying operating systems and applications built with security as an afterthought (if it is thought of at all); the constant downloading of updates and patches is simply a symptom.

Password Masking

I've been reading a few pros and cons recently about password masking. Traditionally it is one of those unquestionable security commandments - "Thou shalt mask passwords", but is it always necessary?

Why do we mask passwords? What's the benefit?
To stop the password being exposed to third parties. The password is a shared secret between the system and the authorised user, so letting others see it in plain text is a no-no. While this is true, are all passwords created equal? The PIN you use with your ATM card in a public place is at much higher risk of being seen by an unauthorised third party than the webmail or network login you type in the privacy of your office or cubicle.
"But we don't all sit in an office or in a cozy cubicle!" you say, cursing the designer of the open plan office. Very true.
Another benefit may be that users 'feel' more secure that their password is being kept 'secret' by not displaying on the screen. Despite the original purpose being to mask the password from a 'shoulder surfing' colleague, it has come to be something expected by users today, and like the padlock in the corner of the browser is a 'symbol of security'.
The downside, of course, is that users cannot see what they're typing, so when they are denied a login they're not sure whether they've forgotten their password or simply mistyped it. There is also the argument that password masking leads to poorer security because users choose 'easier' (i.e. less complex) passwords to reduce the chance of mistyping. That argument assumes an unmasked field would lead to more complex passwords because the odds of mistyping are reduced; personally I think most poor, simple passwords are chosen for ease of remembering rather than the odds of mistyping.

I noticed recently that Apple has implemented a half-way solution on the iPhone (this may have been around for a while; I'm just noting where I saw it): as each character is typed it appears in plaintext briefly before becoming an asterisk. This reduces mistypes (not uncommon with the iPhone's on-screen keyboard!) while making shoulder surfing a little harder, since the 'surfer' has to catch each keystroke and the password is never shown as a whole. An interesting compromise, with potential for the sorts of passwords most likely to be used in a 'private' setting (like your office or home) - but not, of course, for PINs and the like.
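The rendering logic behind that kind of field is trivial to sketch (my own illustration, not Apple's code): mask every character except the most recent keystroke, then flip the last one to a bullet too once a short timer expires or the next key arrives.

```python
def mask_password(typed: str, reveal_last: bool = True) -> str:
    """Render a password field iPhone-style: every character becomes a
    bullet except (optionally) the most recent keystroke, which stays
    readable briefly so the typist can catch a mistype without the whole
    password ever being on screen at once."""
    if not typed:
        return ""
    if reveal_last:
        return "\u2022" * (len(typed) - 1) + typed[-1]
    return "\u2022" * len(typed)
```

The caller would re-render with `reveal_last=False` after the reveal window closes, so a shoulder surfer only ever gets one character at a time and never the assembled whole.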

National E-security Awareness Week 2009

It seems last week was National E-security Awareness Week. Oops. Not a lot of publicity, it seems, but still a worthwhile initiative from our government and a nice change to see Senator Conroy trying to do something useful instead of trying to censor the internet country-wide.

National 'change your password day' was June 5th, and the Senator was encouraging all Australians to get a "better, stronger password and most importantly updating it regularly."

"Don't just choose a password with your birthday or the name of your favourite football team. Get security software and update it regularly," he said.

Hopefully next year it will be better publicized. Maybe get Hallmark on board and we could celebrate with cards and gifts of candy RSA tokens!

3G, Public Transport and Information Security

This is a post I have been meaning to write for a while and it seems a worthy distraction from the Ethics essay I am currently supposed to be writing (read: Richard is procrastinating).

Something I think is often underestimated is the risk to corporate data when it leaves the building, be it on backup tapes, other removable media or, in a slightly different sense, on the screens of employees who catch up on work during their trip to and from the office. The easy internet access afforded by technologies such as 3G means people are increasingly using their daily commute to carry on business activities. The benefit of dedicated office space, even open plan, is that it affords a level of physical security for an organisation's information; it is much harder for an outsider to read over someone's shoulder in the office than on the train. Nor is the situation limited to public transport: cafes and fast food outlets with wireless access points are subject to the same weakness.

It is amazing what one can glean sitting next to someone naively tapping away at an email on their laptop. I have seen people reading marketing and sales reports (the most recent example was a survey following a product recall) as well as business email and other documents their employer would probably regard as sensitive. Watch carefully and you will be able to observe addresses for SSL VPNs, Outlook Web Access and other webmail pages, usernames and internal software in use, even source code for internal applications and web pages - all useful to an attacker in one way or another.

Obtaining information in this way can be of use to both the opportunistic attacker, casually observing that company X is about to launch an advertising campaign to pre-empt some negative publicity or is using an out-dated version of a particular piece of software, and the attacker with a specific target in mind tasked with obtaining information about a competing company. The approach each takes will be somewhat different but the end result is the leakage of information from a company’s network that Data Loss Prevention systems are currently unable to protect against and which the target may never be aware of.

This lack of physical security facilitates compromises which require no technical hacking skills (after all, the target is doing the hard work of gaining access to the network for you - though granted, you are limited to what they are accessing at the time), are very difficult to detect, and have the potential to be extremely damaging. This type of compromise is in fact a form of social engineering attack, and while a certain amount of subtlety is required, it is surprisingly little in most cases. As with any social engineering, the best defence is awareness and education. You are not going to stop people from working on the way home (it's much more appealing than doing it once you get home), but if they are aware of the possibility perhaps they will think twice before opening that strategy email.

I’m sure that this kind of surveillance is nothing new but it is, perhaps, something which is underestimated when considering the protection of sensitive information.

PIN Numbers

Recently I bought a new phone. I stayed on the same carrier, so it was just an upgrade of my call plan and a new piece of hardware. As part of the identity verification process in the phone store, I was asked my name, the phone number and the PIN number I had provided the company with when I first signed up for my previous phone. This PIN number was also used in the past to verify my identity over the phone when I had a mobile phone stolen and needed it blocked. It is essentially a shared secret identifying me as me.
It's little different from the password I get asked for at the local video store when I rent a DVD, although they also require I present my membership card (multifactor authentication!).
Back to the phone: I dutifully provided the PIN number and began the process of filling out forms and signing my life away on a new phone contract. While filling in details I noticed at the top of the form there was a box (completed by the sales clerk) for my PIN. He had dutifully written my PIN in the box as part of the application.
I asked him if this was the norm - whether this 'secret' number is commonly written in large friendly digits on the application forms - and he replied in the affirmative.

I did ask his opinion on writing this 'secret' number on a form that is kept in triplicate (one copy to me, one for the store and one sent off to a central office) but he didn't seem interested in discussing the ins and outs of how they secure their data or prevent impersonation.

I guess in the end I got my phone, and have to hope none of the three copies of the form with my name, address, date of birth and PIN number fall into the wrong hands and someone decides to cancel my account or report my phone as stolen. Although thinking about it, I imagine with a name, address and date of birth alone you could use some social engineering to effectively DoS someone's phone. Hmmmm. When is Richard's birthday?

Cyber Security Policy

President Obama has put Information Security firmly on the government agenda with his speech about forming a new office at the White House led by a 'Cyber Security co-ordinator'.

I think the success or failure of such a role will depend on the same factors that make Information Security work in the private sector - commitment from the top and buy-in from the major stakeholders. There seem to be a lot of unanswered questions besides who will hold the post, such as what level of authority the position will carry, what kind of budget it will control, and whether it will be more than a single point of coordination for existing agencies.

Protecting a business is difficult enough - ensuring there are adequate resources, that executive support is maintained over time, and that co-operation continues from various departments and divisions. Protecting a country, where there is a history of inter-agency antagonism and where private industry holds so much power over parts of the critical infrastructure, is a mammoth task.

I wonder how long it will be before our government decides it too needs someone in charge of information security? If it does, I can only hope it is handled better than the National Broadband Network and the government web filter. Experience suggests otherwise, however...

==
As an update there is a great opinion piece on the history of the 'cyber-czar' and what it may amount to here

Retransmission Steganography

Now this is just plain cool: creating a covert channel from the retransmission of data in protocols that support retransmissions, e.g. TCP and CSMA/CD (the mechanism classic shared-medium Ethernet uses to detect and recover from collisions). Like any good hack, it utilises essential and helpful features in a way the original designers never intended... I like it.
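A toy simulation of the idea (my own sketch, nothing like real TCP): to signal a covert 1, a retransmission of that segment is deliberately forced, and the retransmit carries the covert payload; a passive observer just sees ordinary-looking retransmissions at a plausible loss rate.

```python
def covert_send(bits: list[int], segments: list[str]) -> list[tuple]:
    """Emit (seq, payload, is_retransmission) records: one normal segment per
    sequence number, plus a forced retransmit, carrying covert data in its
    payload, wherever the covert bit is 1. (The 'COVERT:' tag is purely for
    illustration; a real implementation would make the retransmit look
    byte-for-byte plausible.)"""
    wire = []
    for seq, (bit, payload) in enumerate(zip(bits, segments)):
        wire.append((seq, payload, False))
        if bit == 1:
            wire.append((seq, "COVERT:" + payload, True))
    return wire

def covert_receive(wire: list[tuple]) -> list[int]:
    """Recover the covert bitstream: a retransmitted sequence number reads
    as 1, an unrepeated one as 0."""
    retransmitted = {seq for seq, _, retx in wire if retx}
    max_seq = max(seq for seq, _, _ in wire)
    return [1 if seq in retransmitted else 0 for seq in range(max_seq + 1)]
```

The elegance is that retransmissions are a normal, expected part of the protocol, so as long as the forced retransmit rate stays near natural loss rates there is very little for a warden to flag.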

Securing Computers for Prisoners

This is something that poses some unique security challenges, but looking at the article (linked below) they seem to have done a reasonably nice job, though it is difficult to tell without all the technical details. I particularly like the touch of see-through cases to stop the inmates from hiding things inside.

Original Story

Default Passwords

In case you need any more reasons to change default passwords on your systems...

A separate IT department in the company a friend works for, while attempting to configure a newly purchased SAN, managed to accidentally access the management interface of the SAN that hosts the main file server (the management tool apparently auto-discovers all matching devices on the network). Upon discovering several existing LUNs, and being true intellectual giants, they assumed said LUNs must be factory defaults and deleted them... All of them...

Thankfully the file server is part of a DFS pair so the impact was minimal, but it could have been one of those incidents that keeps sys admins at work 'til the wee small hours running restorathons.

Adobe's patch tuesday!

Looks like Adobe has recognized the need to organize its security efforts - I guess all the recent bad press about security holes in Acrobat and Adobe Reader finally got their attention. Aligning their 'patch day' with Microsoft's Patch Tuesday is a nice touch and makes life that little bit easier for the administrators out there. No comment on their blog about it, but I hope this quarterly 'Patch Tuesday' also includes Flash...

More here

Oh noes

Just what the world needs, another Information Security Blog!
