Dark Mail Alliance

Interesting concept, and one that's probably well overdue, but I get the feeling it may be a little like implementing DNSSEC or IPv6... Who knows, maybe the Snowden leaks will provide the push necessary to overcome the inertia.

http://arstechnica.com/business/2013/10/silent-circle-and-lavabit-launch-darkmail-alliance-to-thwart-e-mail-spying/

http://www.darkmail.info/

Don't be...?

So I saw this today:

Google will allow its advertisers to use your image and comments in ads for their products, via a new feature called Shared Endorsements. This change raises privacy concerns for some people, and if you use Google Plus, the company's competitor to Facebook, you need to understand these changes. From November 11, the names, images and comments of Google Plus users will be available to Google advertisers for incorporation into the advertisements that appear when users run searches on the site. The changes are reflected in new Terms of Service that are understood to be accepted whenever you use Google services.

While this may not qualify as 'evil' (as in the famous "don't be evil" motto Google used to use), adding a 'feature' like this that isn't opt-in seems like kind of a douche move, and a particularly poorly timed one in the post-Snowden era, when privacy is such a touchy subject. Google may claim they didn't give the NSA access to private user data, but giving it to advertisers is apparently OK.

(and yes I know this blog is on blogger - owned by Google).

Investigation, now and then...

Here's an interesting article from a journalist who paid a private investigator to investigate him back in 1999, then compared those results with what an ethical hacker could find today. Interesting results, although the 1999 techniques (largely social engineering) would probably still bear fruit today...

TED Talk - Why Privacy Matters

Interesting TED Talk on privacy, and one which highlighted in my mind an interesting crossover that exists in the Information Security industry. Information Security professionals are often in an excellent position to breach privacy, and are often called upon to do tasks that do just that (though only in an ethical and responsible manner). On the flip side, and possibly as a result of the above, they are often the strongest proponents of improved privacy controls and the most outspoken critics of those that breach them.

More on Storing Passwords

Following on from my Storing Passwords post, here is an excellent and far more comprehensive discussion of password storage (albeit in a .NET context) from Troy Hunt.

Storing Passwords

A while back, one of the guys here at work set up an open source collaboration web app to exchange some files with a customer. Being an inquisitive chap, I thought I'd take a wander through the source code to see how it fared from a security standpoint. My first port of call for this sort of thing is usually to have a look at how authentication is done, and in this case they're doing (arguably) the right thing, or at least better than a number of other web apps out there: storing an individually salted SHA1 hash of the password. They also offer the option of LDAP integration. So far so good.
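For reference, individually salted hashing of this sort looks something like the following sketch in Python (the app itself is PHP; the salt length and concatenation order here are my assumptions, not the app's actual scheme):

```python
import hashlib
import os

def store_password(password: str) -> tuple[str, str]:
    """Generate a per-user salt and return (salt, salted SHA1 hash)."""
    salt = os.urandom(8).hex()
    digest = hashlib.sha1((salt + password).encode()).hexdigest()
    return salt, digest

def check_password(password: str, salt: str, digest: str) -> bool:
    """Recompute the salted hash and compare against the stored value."""
    return hashlib.sha1((salt + password).encode()).hexdigest() == digest
```

The per-user salt means two users with the same password get different hashes, which defeats precomputed lookup tables even if it doesn't slow down brute force.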

The code for validating password complexity when setting passwords turned out to be a bit more interesting. The user's password is stored with the other user details in the user table in MySQL, but there is also a separate password table, and the passwords in it don't look like SHA1 hashes... A little more digging through the code reveals this to be a password history table, but that doesn't explain why the passwords aren't just stored as hashes. There are also a couple of functions with ominous names (cp_encrypt and cp_decrypt) called within the password reset code that take the password as a parameter... eeek! Testing confirms that not just the history, but also the current password, is stored in this table.


function cp_encrypt($password, $time){
    //appending padding characters
    $newPass = rand(0,9) . rand(0,9);
    $c = 1;
    while ($c < 15 && (int)substr($newPass,$c-1,1) + 1 != (int)substr($newPass,$c,1)){
        $newPass .= rand(0,9);
        $c++;
    }
    $newPass .= $password;

    //applying XOR
    $newSeed = md5(SEED . $time);
    $passLength = strlen($newPass);
    while (strlen($newSeed) < $passLength) $newSeed .= $newSeed;
    $result = (substr($newPass,0,$passLength) ^ substr($newSeed,0,$passLength));

    return base64_encode($result);
}


function cp_decrypt($password, $time){
    $b64decoded = base64_decode($password);

    //applying XOR
    $newSeed = md5(SEED . $time);
    $passLength = strlen($b64decoded);
    while (strlen($newSeed) < $passLength) $newSeed .= $newSeed;
    $original_password = (substr($b64decoded,0,$passLength) ^ substr($newSeed,0,$passLength));

    //removing padding
    $c = 1;
    while($c < 15 && (int)substr($original_password,$c-1,1) + 1 != (int)substr($original_password,$c,1)){
        $c++;
    }
    return substr($original_password,$c+1);
}

Looking into the encrypt and decrypt functions, there are a couple of things of note. Firstly, their very existence tells us that the passwords are being stored in a reversible form in the database, essentially defeating the purpose of storing the password as a salted hash. This means that an administrator on the system can decrypt and use any user's password; it also means that should the server be compromised, an attacker could trivially decrypt the passwords and use those accounts. While salted SHA1 hashes are also crackable in short order with the right hardware, they do raise the bar somewhat. To mitigate the SHA1 attack it would be relatively straightforward in this case to swap out the SHA1 function and substitute a slow password hashing algorithm such as scrypt or bcrypt.
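To illustrate that substitution, here's a minimal sketch in Python rather than the app's PHP, using scrypt from the standard library (the cost parameters are just common defaults, not a tuned recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Hash a password with a per-user salt and the scrypt slow KDF."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)
```

The point of a memory-hard KDF like scrypt is that each guess costs the attacker real time and memory, which is exactly the bar a single SHA1 round fails to raise.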

A review of the encryption algorithm being used is also pretty interesting. The developer has chosen XOR encryption: the password is padded to 16 characters and XOR'd with the MD5 of a pseudo-random string generated at install, concatenated with the current date. For short strings like passwords this is actually a reasonably effective construction, as the password is unlikely to be longer than 16 characters (in the event it is, the key material is repeated, which introduces a weakness). In this context, though, the encryption being used is really academic; the passwords shouldn't be stored in a reversible format at all. That said, when you look at the apparent reasoning behind it, there is an interesting security trade-off.
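One weakness worth spelling out: any two values encrypted with the same SEED and the same $time share a keystream, so a single known plaintext recovers the keystream and with it any other ciphertext. A rough Python sketch of the idea (mirroring the XOR construction, not the exact PHP, and ignoring the random padding):

```python
import hashlib
from itertools import cycle

def xor_encrypt(plaintext: bytes, seed: str, time: str) -> bytes:
    """XOR the plaintext against an MD5-derived keystream, repeated as needed."""
    keystream = hashlib.md5((seed + time).encode()).hexdigest().encode()
    return bytes(p ^ k for p, k in zip(plaintext, cycle(keystream)))

# Two values encrypted under the same seed and time share a keystream...
c1 = xor_encrypt(b"known_password", "install-seed", "2013-10-01")
c2 = xor_encrypt(b"secret", "install-seed", "2013-10-01")

# ...so XORing one ciphertext with its known plaintext recovers the keystream,
recovered_keystream = bytes(a ^ b for a, b in zip(c1, b"known_password"))

# which then decrypts the other ciphertext outright.
recovered = bytes(a ^ b for a, b in zip(c2, cycle(recovered_keystream)))
```

In a password history table, where an attacker may well know one of the user's old passwords, that's not a theoretical concern.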

The impetus behind storing the passwords encrypted appears to be so that the application can compare the new password with previous passwords, ensuring not only that it is different from previous passwords but also that it isn't too similar to any of them. The motivation here is noble but, in my opinion, somewhat misguided. A better approach would probably be to prompt the user based on the entropy of the new password while also checking that it doesn't match any of the previous X passwords. While this doesn't solve the problem of users choosing passwords and incrementing a suffix, it means that passwords aren't being stored reversibly. Even removing the current password from the history table doesn't mitigate the problem, as users still have a tendency to choose passwords based on a pattern, regardless of the checks you enforce. Password selection guidance is perhaps still best done as part of a security education program.
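A sketch of that alternative in Python (again, the app itself is PHP): keep only hashes of the last X passwords, reject exact matches, and give entropy-based feedback. The entropy estimate here is a crude character-pool heuristic I've assumed for illustration, not a standard formula:

```python
import hashlib
import hmac
import math
import os

def pw_hash(password: str, salt: bytes) -> bytes:
    """Slow hash suitable for both the current and historical passwords."""
    return hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)

def in_history(new_password: str, history: list[tuple[bytes, bytes]]) -> bool:
    """history holds (salt, digest) pairs for the last X passwords."""
    return any(hmac.compare_digest(pw_hash(new_password, salt), digest)
               for salt, digest in history)

def entropy_bits(password: str) -> float:
    """Crude estimate: length times log2 of the character pool in use."""
    pool = 0
    if any(c.islower() for c in password): pool += 26
    if any(c.isupper() for c in password): pool += 26
    if any(c.isdigit() for c in password): pool += 10
    if any(not c.isalnum() for c in password): pool += 33
    return len(password) * math.log2(pool) if pool else 0.0
```

This catches exact reuse without keeping anything reversible; the "not too similar to a previous password" check is the one thing you genuinely can't do against hashes, which is the trade-off the developer appears to have made.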

How to Phish Friends and Influence People

As I mentioned in a previous blog post, I'm doing a bit of lecturing for an undergraduate degree in Network Security; this semester I'm teaching Enterprise Security. This week we covered Security Engineering and were discussing, among other things, the psychology/behavioural economics of phishing. Rather than try to explain the incentives and mentality at play when someone clicks on a phishing link, I thought I'd take a more practical approach and carry out a small simulated phishing campaign.

Using the Simple Phishing Toolkit (SPT), an excellent but sadly abandoned open source tool for running educational phishing campaigns, I set out to phish the students under the guise that their Moodle platform had been upgraded with a number of bug and security fixes, with a link to click for a full list of the changes. The tool can present a dummy login form, even providing an inbuilt scraper to automate building the form, but I stopped short of this, mainly due to time constraints. For the purposes of this exercise, just clicking on the phishing link was enough to get you marked as a victim. After all, these are students who are studying security and should know better than to click on a link in an email without validating the destination; visiting a malicious site is often enough.

Having set up the campaign and pushed out the emails, I went off to do some other jobs that needed doing. I didn't really expect to get too many hits on the link: it pointed at a dynamic IP, and I didn't really think they would find a list of updates to Moodle worth clicking through for (a theory subsequently confirmed when I spoke to them in the lecture). As it turns out, either I'm a better phisherman than I give myself credit for or this group of students is a gullible bunch: 12 out of a total of 32 students clicked on the link (see chart below). Given that a couple of those 32 students seem to have given up checking course-related emails, the percentage may be even higher. Those that clicked on the link were redirected to a phishing education page (also supplied in SPT) with a video on phishing from Symantec.



Phishing the students was certainly an interesting exercise and one that I'd like to repeat with other groups and extend into other organisations. More and more, having recognised the human element as the weak link in their security posture, organisations are running social engineering pen-tests that include simulated phishing campaigns. Done right, this could be an excellent education tool, and one worth pursuing: it serves as a nice demonstration of the types of methods real attackers use against organisations, giving your users real experience that they can relate to next time they encounter a real phishing (or spear-phishing) email. With the right instruction and correct incentives, users can be taught to identify phishing emails and report them to your security team. The confidence to report a phishing email is even more important if the user did click on the link or fill in the form; it is important not to castigate users for making security mistakes, as the knowledge that they have done so at least allows you to respond to the potential outcomes rather than having to detect them through other means. It also serves as another source of insight into the security posture of your organisation, and potentially an intelligence source for identifying high-risk users, to be correlated against mail gateway logs.

Back in the security saddle

It's been quiet around here lately as I've been travelling and extremely busy with work. However it's time to get back to blogging on a semi-regular basis (I don't know what Richard's excuse is!)

While checking out the new iOS 7 features recently (although I've yet to upgrade) I came across this gem:

Apps can now be configured to automatically connect to VPN when they are launched. Per app VPN gives IT granular control over corporate network access. It ensures that data transmitted by managed apps travels through VPN — and that other data, like an employee's personal web browsing activity, does not.
Now that's a nice feature (and about time), especially in the BYOD era. Speaking of BYOD, I recently had a chance to meet a number of security managers from around the world, and BYOD was a hot topic. However, here in Japan it is not even on the radar for many organizations. A Logicalis research paper [pdf] from last year showed Japan significantly trailing other markets in corporate IT actively promoting BYOD and, perhaps unsurprisingly, leading the pack in the measure of 'IT don't know about it but we're doing it anyway'.

Why is Japan slow to embrace this trend? My personal view is it is a combination of inherently conservative companies and IT departments (who are unwilling to give up control) combined with the strict labour laws regarding overtime work. As we've seen in the west, mobility and BYOD blur the lines of work/life significantly and risk putting companies here on the wrong side of the law if employees are found to be working excessive overtime.
