Showing posts with label Law.

Happy New Year

明けましておめでとうございます! (Happy New Year!)
Happy New Year from Security-Samurai.net!

Some interesting articles that recently caught my eye on the impending changes to the Privacy Act in Australia (courtesy of itnews.com.au):

  • Is your IP address personal information?
  • The Privacy Act and the cloud
  • Consent and the Privacy Act in the Big Data era
  • Are you ready for a data request deluge?

The posts raise some interesting points (such as whether your IP address or mobile phone number counts as PII) and highlight some of the challenges governments now face when trying to legislate privacy today.

    Breach Blanket Bingo

    So it looks like Australia may finally have a data breach notification law. It was back in 2011 that the Government started seriously discussing this again; at the time I wrote a little about it and posted links to an interesting point/counterpoint on whether these laws actually work. While the jury may still be out, I hope that some law is better than no law, and that at the very least we get something reasonable that makes sense (am I setting the bar too high here?).

    At the same time "China" is reported to have hacked the Australian Government, including stealing plans for the new ASIO Headquarters - but it seems we forgive them, so all is OK.

    I wonder what, if anything, the Government would have had to report if the new laws had already been in place?

    Creator responsibility

    I recently came across this rather interesting story (wired.com) about a guy who added secret compartments to vehicles. The end of the story is that, despite the fact that what he did may not technically have been illegal, he got 24 years in prison because some of his clients were (without his knowledge, although he may have suspected) major drug smugglers. At the same time, the two guys in charge of the drug smuggling operation got much shorter sentences - go figure!

    The article ends with the comment:

     The (hacker) culture’s libertarian ethos holds that creators shouldn't be faulted if someone uses their gadget or hunk of code to cause harm; the people who build things are under no obligation to meddle in the affairs of the adults who consume their wares.
    But Alfred Anaya’s case makes clear that the government rejects that permissive worldview. The technically savvy are on notice that they must be very careful about whom they deal with, since calculated ignorance of illegal activity is not an acceptable excuse. 
    Interesting food for thought. To what extent is the creator responsible for the use of his/her creation?
    Unlike the "guns don't kill people, people kill people" argument, the primary function of a secret compartment - or perhaps a technology like encryption - is not to cause harm, but to protect privacy. Should/could the makers of TrueCrypt be held responsible for criminals or terrorists using it to hide evidence of a crime?

    I'm not an American, but sometimes these American precedents can have an influence overseas. It would seem to me a slippery slope if, as this article suggests, the person responsible for creating/implementing a technology that may be used for committing a crime more effectively can be sentenced far more harshly than the perpetrators of the crime.

    Breaching Acceptable Use Policies

    Via Slashdot I saw this post on the potential ramifications of breaching an Acceptable Use Policy, based on a recent judgement [pdf] in Western Australia.

    The defendant was a Police Officer, who would normally be held to a higher standard than Joe Public, and the system in question was a Police database, but as the blog post points out: "Ms Giles wasn't convicted for breaching police secrecy, or improper disclosure of information --- she was convicted for common cracking. She used the restricted-access system other than in accordance with her authorisation"

    Nick Gifford, in his book "Information Security: Managing the Legal Risks" (which I have mentioned before), describes AUAs (Acceptable Use Agreements) as "a contractual mechanism for managing the risks to the organisation associated with granting user access rights", and as a contract I can understand that there would be a legal risk for those who breach it.

    What about your company's Acceptable Use Policy? Is it up to date and consistent with employee duties?
    Have all of your users read your organisation's AUP? What about those staff who have been there 10, 15 or 20+ years? Has your AUP changed over that period, and have those users acknowledged those changes? Do they have to re-acknowledge the AUP regularly? (yearly?)
    Does it explicitly state that there should be no expectation of privacy when using email, browsing the internet or storing data on company assets? Does it allow for monitoring employees and clearly state the potential penalties for breaches?

    While it's a little late for New Year's resolutions (maybe a Chinese New Year resolution?), make it a priority to look into your AUP and how you track acknowledgement and ensure compliance. And if you don't have an AUP, the ever-useful SANS website has a sample [pdf] to help get you started.
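
    On the tracking side, as a rough illustration only (the version labels, field names and yearly re-acknowledgement window below are my own hypothetical examples, not taken from any particular HR or identity system), a minimal sketch could be as simple as:

        # Hypothetical sketch: flag users whose AUP acknowledgement is missing,
        # is for an old version of the policy, or is older than a yearly window.
        from datetime import date, timedelta

        CURRENT_AUP_VERSION = "2013.1"      # example version label
        REACK_WINDOW = timedelta(days=365)  # yearly re-acknowledgement

        # In practice this data would come from HR/identity systems.
        users = [
            {"name": "alice", "version": "2013.1", "acknowledged": date(2013, 1, 10)},
            {"name": "bob",   "version": "2009.2", "acknowledged": date(2009, 6, 1)},
            {"name": "carol", "version": None,     "acknowledged": None},
        ]

        def needs_attention(user, today=None):
            """Return a reason if the user must (re)acknowledge the AUP, else None."""
            today = today or date.today()
            if user["version"] is None:
                return "never acknowledged the AUP"
            if user["version"] != CURRENT_AUP_VERSION:
                return "acknowledged an old AUP version"
            if today - user["acknowledged"] > REACK_WINDOW:
                return "yearly re-acknowledgement overdue"
            return None

        for user in users:
            reason = needs_attention(user)
            if reason:
                print(f"{user['name']}: {reason}")

    Even something this crude makes the gaps obvious; the real work is deciding who owns the list and what happens to the users it flags.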

    Legal Clouds

    I came across a couple of interesting reads over at the UK-based Cloud Legal Project site (which is part of the Centre for Commercial Law Studies, Queen Mary University of London).

    The first is a survey of Cloud vendor contracts ('Terms of Service Analysis for Cloud Providers'), which highlights risks such as the vendor's right to change the ToS at any time without notification, cancellation of accounts for disuse or AUP violations, and limited liability for loss of data.

    The second paper is on Information Ownership in the Cloud, which highlights the need for strict definitions in contracts as to who retains the ownership rights over various data types.

    Both papers are well worth a read.

    Home grown hacker

    An Aussie hacker who was arrested back in July for infecting around 2,500 computers with a virus to steal banking and credit card information has pleaded guilty, but asked for a reduced sentence as his actions were 'youthful curiosity' and he 'was interested in becoming an internet security consultant'.

    Are there any hackers who got arrested who didn't pledge to go straight and become an IT Security consultant? There's not a lot of detail in the news articles about exactly what he did (did he write his own code, or is he a script kiddie running something like Zeus?), but regardless, asking for a more lenient sentence after you committed a crime so you can become a security consultant - is that not something like being arrested for stealing cars because you want to be a mechanic, or robbing a bank because you want to be a security guard?


    I know there is a great precedent of those who were on the wrong side of the law, reformed, and have become security consultants or security celebrities (eg: Kevin Mitnick, Kevin Poulsen), and it is a subject that has been well debated before. Would you hire a 'reformed' blackhat? Does it always "take a thief to catch a thief"? I'm not so sure...

    The interesting thing about this case from an Australian point of view is that:

    "The judge was told there had been no similar cases across Australia to guide him when imposing a penalty."
    It will be worth watching closely to see what kind of sentence is handed out, and to compare it against other parts of the world where these types of prosecutions have been more common.

    Once more unto the Breach...

    I attended the AISA national seminar day earlier this week (which was a great day), and one of the panel discussions touched on whether there was a need for greater regulation or government intervention in IT Security. The prevailing view was that over-regulation would stifle innovation and government mandated minimum requirements would lead to businesses doing the bare minimum and no more.

    I don't disagree with those points, but I do believe that Australia is still behind the US/Europe in understanding Information Risk at the boardroom level, and one of the ways to make sure it gets on the radar and stays there is mandatory breach notification.

    My view was somewhat echoed in a recent itnews story that made the good point that individual data breaches may be too small for authorities to really investigate, but the implementation of an IC3-style centralised reporting body could assist in aggregating many small breaches into a large one and show a pattern of behaviour or negligence by an organisation.

    On a similar note I (re)discovered a link to a useful document that I had used in a Uni assignment last year that compares Data Breach Notification Laws around the world [pdf]. Although a little out-of-date (2009), it's still a great little summary.

    On data breaches, there is of course Wikileaks. Wow. Infosec Island has a nice piece on how the forthcoming "megaleak" from a major US bank will be 'Enron-esque' in its fallout (if you haven't seen it, I recommend Enron: The Smartest Guys in the Room).

    If it is as big as promised, it will be interesting to see the effect on corporate security (and is probably a great time to be a salesman with a good DLP solution...)

    Crooks & Crypto

    "Criminals are a superstitious cowardly lot" said none other than the caped crusader, Batman. But it seems they're a lazy lot too. The Register has an article on how 'belief that they won't get caught' and laziness has meant that the feared widespread use of cryptography by criminals has not come about.

    It was this fear that led governments (most notably the US) to float the idea of criminalizing the use of encryption software or requiring that the Government hold a key in escrow (such as with the Clipper chip).

    A few years ago the UK passed a law ("RIPA section 49") requiring suspects to hand over encryption keys when requested or face fines and up to two years in jail. They have since charged suspects under it.

    A great piece on the controversy of whether encryption is harmful or not is also available here.

    Cryptography is a tool and can be used for good or for ill. Personally, I don't believe in a system where the Government holds keys in escrow without unprecedented transparency around who is accessing keys (and why!), and I don't believe such a system would ever be workable. Make cryptography illegal? Well, the 'bad guys' are already breaking the law, and only law-abiding citizens would be disadvantaged.

    Oh, and I'm more than happy for criminals to remain a lazy, overconfident and superstitious cowardly lot!

    Hacking a hacker?

    While doing some recent reading on Digital Forensics, I came across a particularly interesting older case where a Russian hacker was caught by the FBI and charged with computer intrusion and fraud. While this doesn't sound like anything too out of the ordinary, what caught my attention were some of the details.

    The FBI alleged that Ivanov and other international hackers gained unauthorized access to computers at CTS Network Services (an ISP) and used them to attack other e-commerce companies, including two credit card processors, where he stole customer financial information and used it in the usual fraud schemes. Nothing too out of the ordinary so far.

    Once the FBI had identified their culprit, in order to make the arrest they lured him and an accomplice to the US on the pretext of offering a job as an IT security consultant. When the pair arrived, the FBI had them remotely connect to their machines back in Russia as a demonstration of their skills for the prospective new employer. But not all was as it seemed: the FBI were keylogging the machines the Russians used in the US, and used the captured credentials to connect to the Russian computers and extract the evidence they needed (without a search warrant) to prosecute Ivanov and his accomplice.

    Do the ends justify the means? The Russian Federal Security Service, or FSB, didn't think so, and started criminal proceedings against the FBI agents for unauthorized access to computer information. Meanwhile, back in the States, the agents involved were awarded the Director's Award for Excellence, as the case was the first in the bureau's history to "utilize the technique of extra-territorial seizure."

    The assistant US District Attorney commented that he "wouldn't call it hacking" when discussing the agents' actions, and a federal judge agreed, rejecting motions filed to suppress the evidence obtained from the computers, with Ivanov eventually being sentenced to three years in prison.

    So, in this case, do the ends justify the means? Or is this simply the beginning of a slippery slope allowing state-sanctioned hacking in the name of justice?

    This case is an older one and was 'pre-9/11', so I wonder what effect the PATRIOT Act has had in the intervening years...

    InfoSec Legal Risks II

    Back in Feb I mentioned a book I'd come across: Information Security: Managing the Legal Risks by Nick Gifford.

    Recently Nick gave a great presentation at the AISA Risk Management Special Interest Group (RMSIG) in Sydney.

    Some of the points that came out of his presentation** that I found rather interesting follow:

    • Most InfoSec-related cases are brought under the tort of negligence
    • Damages cannot be recovered under negligence for pure economic loss
    • No cases have yet been tried in Australia under the tort of negligence for InfoSec breaches ~ although cases have been settled before going to court
    • The highest privacy breach payout in Australia is around $8000 ~ leaving privacy breaches more damaging to reputation than financially (barring lost revenue from reputational damage, of course!)
    • The Trade Practices Act Section 52 is the key area for Australian InfoSec professionals to pay attention to when verifying legal liability ~ it has fewer hurdles than proving negligence and can be 'creatively' applied by the courts
    • The ALRC has recommended a new tort of "serious invasion of privacy" and recommended compulsory disclosure laws in Australia
    Nick also referenced an interesting quote from the FTC paper on Identity Theft [pdf]:
     The Rule specifies that what is “reasonable” will depend on the size and complexity of the business, the nature and scope of its activities, and the sensitivity of the information at issue. This standard recognizes that there cannot be “perfect” security, and that data breaches can occur despite the maintenance of reasonable precautions to prevent them.
    The formal acknowledgement that "perfect" security cannot exist from someone outside of IT is interesting to see.

    Nick gave a great talk, and I do recommend his book.

    **Any errors or omissions of information in this post are my fault and not Nick's. I am no lawyer! So go seek your legal advice from someone who is!

    COFEE vs DECAF

    I'm currently studying Digital Forensics, and a recent bit of Google-inspired research led me to one of the big stories of late last year (which I vaguely remembered), where a Microsoft forensic tool designed for use by law enforcement, called COFEE (Computer Online Forensic Evidence Extractor), was leaked on the internet.

    Given the prevalence of computer-based crime and the level of skill required to perform proper forensic analysis, it makes sense for Microsoft (or someone else) to develop a simple-to-use wrapper for what was apparently a number of common forensic tools available elsewhere on the internet.
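
    Conceptually, there is nothing exotic about such a wrapper: run a fixed set of live-response commands, capture their output, and record hashes so the collection can later be shown to be intact. The command list and file names below are purely my own illustration (nothing to do with the actual COFEE tool), just to show how simple the idea is:

        # Illustrative live-response wrapper (hypothetical, not COFEE):
        # runs a few standard Windows commands, saves the output, and
        # records a SHA-256 hash of each output file in a manifest.
        import hashlib
        import subprocess
        from pathlib import Path

        COMMANDS = {
            "ipconfig.txt": ["ipconfig", "/all"],
            "netstat.txt":  ["netstat", "-ano"],
            "tasklist.txt": ["tasklist", "/v"],
        }

        def collect(output_dir="evidence"):
            out = Path(output_dir)
            out.mkdir(exist_ok=True)
            manifest = []
            for filename, cmd in COMMANDS.items():
                result = subprocess.run(cmd, capture_output=True, text=True)
                path = out / filename
                path.write_text(result.stdout)
                digest = hashlib.sha256(path.read_bytes()).hexdigest()
                manifest.append(f"{filename}  {digest}")
            (out / "manifest.txt").write_text("\n".join(manifest) + "\n")

        if __name__ == "__main__":
            collect()

    The real tools obviously do far more (memory capture, registry and browser artefacts, and so on), but the value is in the packaging, not in any one command.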

    The reaction to the leak seems to have been mixed, ranging from Microsoft claiming they weren't bothered by the release of the software (although noting it is licensed for use by law enforcement only), to someone developing a counter-forensic tool called (of course...) DECAF. What was the thinking in creating this counter to COFEE? One of the developers said:

    "We saw Microsoft released COFEE and that it got leaked, and we checked it out," the man said. "And just like any kid's first day at the fair, when you walk up to that cotton-candy machine and it smells so good and you see it, it's all fluffy – just so good. You get up there and you grab it and you bite into it, it's nothing in your mouth.

    "That's the same thing we did with COFEE. So, knowing that and knowing that forensics is a pretty important factor, and that a lot of other pretty good forensic tools are getting overlooked, we decided to put a stop to COFEE."

    This argument seems fairly disingenuous, as COFEE hardly seems to have been aimed at replacing any existing tools, but simply at making them easier for a less well-trained law enforcement operator to use in order to gather crucial forensic evidence. The fact that the tool was released by Microsoft probably had more to do with the creation of a counter-tool than any noble thoughts of 'better tools being overlooked'.

    No matter what the task, there is almost always a 'better tool' whose use might not be desirable because of cost, complexity or the expert knowledge required to operate it. Much of the history of software innovation has been about making complex tasks easier so that more people can perform them - Windows being the prime example, as it took desktop computers from the realm of geeky hobbyists to mainstream use in businesses and homes. While simplifying (or, as some may call it, 'dumbing down') tasks may grate on the nerves of some, it is an inevitable and, in many ways, desirable end goal.

    Information Security: Managing the Legal Risks

    I recently picked up a copy of Information Security: Managing the Legal Risks by Nick Gifford. What caught my attention is that it is written from an Australian point of view, which seems rare as most books that deal with the legal aspects of InfoSec are heavily US-centric.

    I'll post a review once I have a chance to have a good read.

    Legal ≠ Secure

    A recent discussion about the security of an application generated the response "But this was OK'd by Legal so no more needs to be done".

    Legal ≠ Secure. When a Legal department is asked for input, they are purely concerned with determining whether whatever is being presented to them contravenes the law. Most of the time the law will state something along the lines of "due care must be taken not to disclose data" rather than "you must use a minimum of 128-bit encryption to encrypt the data and the transmission".

    What is due care? Well, that's up to the judge to decide after the lawsuit has begun. Lawyers aren't normally Information Security professionals (well, none I know!) and in fact often suffer from the same mindset as most non-IT professionals, in that they tend to lump all things IT into the same basket*. As far as they're concerned, if someone in IT says we've done our best to secure something, they'll assume we've done due diligence and sign off, not really making the distinction of whether the 'IT guy' (or gal!) is technical or non-technical, a programmer, sysadmin or IT security expert. It may only be later, during the court case, when a prosecuting expert testifies that using DES to encrypt those passwords wasn't a good idea.**
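
    To make the DES comment concrete (this sketch is my own illustration of the general point, and the iteration count is an example only, not a recommendation for any particular environment): the stronger approach is not to 'encrypt' passwords at all, but to store a salted, deliberately slow hash of them, for example with PBKDF2 from the Python standard library:

        # Hypothetical example: salted, slow password hashing with PBKDF2,
        # rather than a reversible cipher like DES.
        import hashlib
        import hmac
        import os

        ITERATIONS = 600_000  # example value; tune for your own hardware

        def hash_password(password):
            """Return (salt, derived_key) for storage alongside the account."""
            salt = os.urandom(16)
            key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
            return salt, key

        def verify_password(password, salt, stored_key):
            """Recompute the hash and compare in constant time."""
            candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
            return hmac.compare_digest(candidate, stored_key)

        salt, key = hash_password("correct horse battery staple")
        print(verify_password("correct horse battery staple", salt, key))  # True
        print(verify_password("guess again", salt, key))                   # False

    Whether a particular choice like this counts as 'due care' is exactly the sort of question that Legal sign-off never gets anywhere near, which is the point.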

    When an IT Security Professional is asked for input, they generally have a pretty good grasp of the legal requirements (well, the good ones do!) and can always check with Legal for clarification. They are the ones who can ensure, from a technology standpoint, that the company is obeying both the letter and the spirit of the law.

    You wouldn't ask an IT Professional to organize your legal defence, so don't ask a lawyer to vet the security of your applications. While the lawyers have their part to play in ensuring that the law is being upheld, Legal ≠ Secure.


    *In fairness to lawyers, I probably lump them all into the same basket too, not really paying attention to the difference between a patent lawyer and an ambulance chaser.
    **If you're a lawyer reading this and don't understand this comment, go ask a friendly IT Security Professional!
