
Sh!tty Security

As we move into the age of the Internet of Things, expect to see more and more stories like this one, where a luxury toilet firm here in Japan has developed an Android-app-controlled 'smart toilet'. The problem? All the toilets are hardcoded to a PIN of 0000 -- allowing anyone with the app (in Bluetooth range) to control the toilet.
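To put that hardcoded PIN in perspective: a four-digit PIN gives only 10,000 possibilities, so even if it weren't published, an attacker could trivially try them all. A rough Python sketch of the idea - the `check_pin` callback standing in for the toilet's Bluetooth pairing check is entirely hypothetical:

```python
def brute_force_pin(check_pin):
    """Try every 4-digit PIN in order until one is accepted."""
    for attempt in range(10_000):
        pin = f"{attempt:04d}"  # "0000", "0001", ..., "9999"
        if check_pin(pin):
            return pin
    return None

# With the PIN hardcoded to 0000, the very first attempt succeeds.
print(brute_force_pin(lambda pin: pin == "0000"))  # 0000
```

And of course, with the same PIN hardcoded into every unit, there's nothing to brute-force at all - every device shares one "secret".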

While the actual benefits of an Android-app-controlled toilet escape me at present (and the impact of an attack is admittedly pretty minor), the poor security in the execution is unfortunately all too common. Today it's a toilet, tomorrow implanted medical devices (actually, that is also today...).

The toilet pales in comparison to the Smart TV Hacking [pdf] research from Korea - which is extra creepy if you're watching your smart TV on your smart toilet...

Cyber Defence Unit in Japan

After a few recent high-profile data breaches, and all the global cyberwar press, here in Japan a Cyber Defence Unit (CDU) is being created, but not without what appears to be warranted criticism.

No word on whether it will also include animal handlers for their incident investigation team.

I think I speak for us all when I say we expected any Japanese military (or SDF) related unit with the word "cyber" in its name to include more of this kind of thing...!

Complexity the worst enemy of security

First post for 2013. The plan is to hopefully be a bit more active after a slow 2012!

I came across this interesting article detailing an interview with Bruce Schneier.

While I pretty much agree with Bruce, and especially like his closing question - "Is my data more secure with you than it is with me?" - I think the problems begin with the follow-up to that question, which is "prove it".

Now 'proving security' is fraught with danger (and is most likely an impossible task), but while you may have a good understanding of what you do - or don't do - from a security perspective, it's the lack of detail that cloud providers supply on their security practices (other than to say "we use military-grade encryption" or "we follow industry best practices") that always concerns me.

"Trust us" seems to be the mantra from a number of cloud or SaaS providers and trust them we have, sometimes with less than stellar results.

Before signing over the keys to the kingdom to cloud providers, I think it's important to get a good understanding of exactly how they protect your data, what will happen if they do suffer a breach (at what point do they notify you? When they suspect something happened, or two weeks later when they've confirmed the breach?) and what you can do to protect your data (such as encrypting everything and keeping the keys to yourself).

"A Clash of Development Cultures"

Not strictly security related, but I wanted to point out an interesting post over on a blog written by Symantec's Anthony Langsworth titled 'A Clash of Development Cultures'. It is an interesting viewpoint and one that I think also fits other IT realms, such as Infrastructure or Security.

I met Anthony while studying for the CISSP, and he's a smart cookie. His blog is worth checking out.

Shaky Security Isles


The New Zealand Government has suffered a major data breach...or have they? From the initial reporting it seems more like they had a gaping vulnerability that was found by a freelance journalist and blogger (Keith Ng) - although he admitted to downloading the data and apparently then wiping it.

So what can we learn from the published details?

The breach was through physical access to kiosk terminals
Despite the fact the kiosks have internet access, there is nothing I've seen so far to indicate the data was stealable from the internet. Physical access is always going to be trouble, so extra care needs to be taken. (Of course, if the rest of their network security was as poor as this kiosk example, it may well have been even easier to steal this information from afar...)

The kiosk terminals had full MS Office suite installed.
The obvious question is: why? Never install any software you don't need. In this case Keith Ng used the MS Office 'open file' dialog to access the underlying file structure to move and copy files.
This leads to a greater question of why the (I assume) auto-logon account even had permission to access any file location with sensitive data.....

The Kiosk terminals could access other internal network shares.
Again, why? Once again least privilege was not applied here. If all the kiosks needed was intranet/internet access - then that is all they should be able to access. Bare minimum permissions - once again 'least privilege'. In fact they should have been on an isolated network (in a perfect world), but at the very least, firewalled from the sensitive stuff.

The kiosk terminals allowed the use of USB mass storage devices
Obviously a bad idea. Even if you needed to allow Joe Public to upload data, the USB ports can be set to read only via a registry setting. Better still, disable them completely (physically if need be). One can only wonder if the terminals also allowed booting from USB.....
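For reference, on Windows XP SP2 and later, the read-only setting mentioned above lives in the registry under `StorageDevicePolicies`. A minimal `.reg` fragment might look like the following (the key may need to be created first, and note this only blocks writes to mass storage - it does nothing about booting from USB):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\StorageDevicePolicies]
"WriteProtect"=dword:00000001
```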

The Kiosks were running Windows 2000 and XP.
Considering they were installed 'just over a year ago' I really hope the reporter got it wrong. Windows 2000? Really? XP is bad enough, but at least it will be supported for a few more years. Windows 2000 support ended quite a while ago - which means no security updates or patches (which makes enabling USB drives even worse....)

There is also some discussion about whether Keith should be charged. Personally I think he didn't need to go as far as downloading data and "taking it home for analysis" in order to confirm the poor security state of the kiosks. But he wouldn't be the first to be prosecuted for embarrassing a government or organization by publicizing their poor security...

*edit*: I rather like this opinion piece on the matter. It is probably closer to the mark than we'd like to think. Keith did get 'tipped off' about the vulnerability. Could it have been a disgruntled (or perhaps outraged) insider?

Waiter? Can I get fries with my Firewall?

Having both waited tables and worked in InfoSec, I can appreciate this article on "What Information Security Can Learn from Waiting Tables"

Although I can't remember anyone ever tipping me for providing Security advice....


Security Awareness Training

Recently I've been involved in some security awareness training for business users, and in some discussions around the effectiveness of such training, including the question "should we even bother?".

Funnily enough, as I was contemplating this post, I came across PCI Guru's post on why you should do awareness training, which was a response to Dave Aitel's article 'Why you shouldn't train employees for security awareness'.

I'm on PCI Guru's side of the fence on this one. Just because awareness training isn't 100% effective (or perhaps even close) is no reason to stop doing it completely. In my view awareness training is one of the ways to get a message across, to present the information contained in all those organizational security policies no one reads and, most importantly, to communicate to the end users what is expected of them. Will they always do what you ask? Probably not, but there will be those who do internalize the message and alter their behaviour as a result.

I can recall genuine surprise on the faces of some employees when I explained that email is not 'private' - scoff if you like, but to the non-IT or non-Security folks out there the fact it's not private may never have occurred to them - same as they don't expect their cell phone calls or SMS messages to be intercepted. The 'revelation' altered end user behaviour as they understood they may have been doing the 'wrong thing' because of their previous belief. Without security awareness training, how would the message have even reached them?
I also think that good security awareness training should be aimed at the individual: explain how people can address risks to themselves and their family through altering their behaviour, and then explain how this carries over to their behaviour in the office.

I don't disagree that Dave's alternatives to training are also very beneficial to a company and, like so many other areas of security, are part of a defence-in-depth strategy - but one that should still include awareness training.


One thing that isn't mentioned is the use of security awareness training to alter end users' opinion of the information security department. Too often the security team is seen as 'the cops' or a roadblock (and I think some of them like being seen that way), and part of the reason is that the threats and risks we are trying to address are unknown to the general audience. Through awareness training we can give end users a glimpse of the world from our point of view and (hopefully) start to find some common ground when it comes to working together to address information risks.

I don't believe we can solve our security problems with technology alone, people need to be part of the solution (and more people than just us security propeller-heads). Security awareness training may be far from perfect, but for now, it beats not doing anything to educate your workforce.

Not the end of the world as we know it....

DNS changer has been big in the news of late. News.com.au even ran a headline featuring a nuclear explosion!


The Australian government has a DNS changer check page - http://dns-ok.gov.au/ - to help you determine if you are affected. With the impending shutdown of the DNS changer servers, some are estimating 30,000 - 40,000 devices will be affected - really a drop in the ocean of the millions and millions (billions?) of devices connected to the internet. 

I figure some people will find their internet isn't working, shrug their shoulders as they assume it's another 'computer gremlin' and get someone to help fix it. No Cyber-Armageddon or Internet Doomsday.....

2012 - planning ahead.

2012 is well and truly here. Possibly the end of the world if you believe the Mayan Calendar conspiracy types, but perhaps more likely the end of some businesses (and their employees) if the lessons of 2011 haven't been learned.

Last year felt like a tipping point in the awareness of information security threats in the boardroom. With so many high-profile hacks that saturated the mainstream media, even the most cynical of executives probably caught themselves wondering whether their organization was doing enough to protect their data.

So what sort of questions should business executives be asking themselves? (in no particular order...)


Do you truly understand your critical data and assets?
What is it? Where is it? Who has access to it?
Determine what is critical for your business: is it your e-commerce server? Your customer data? Your R&D data? Once you've identified what it is, you need to understand how valuable it is - try this exercise: imagine you were selling your company tomorrow. How much would it be worth? Now imagine your most critical data asset was not included in the sale - by how much does that decrease the company's value?
Once you understand what you're trying to protect (and hopefully how valuable it is), you need to locate it - both physically and logically - before you can look at protecting it.

Who is responsible (and accountable!) for protecting that critical data?
If nobody is responsible or accountable, then odds are nobody is protecting your critical data and any data loss or breach will be 'somebody else's problem'. Hopefully by now you have an understanding of the value of your critical data and can see the need to protect it adequately. However, identifying and appointing someone is only half the job - do they have the skills and resources required to do the job? Are the objectives of your security program clearly defined? Equally importantly, do they have the authority to make any required changes - to data access, storage location or use? As the expression goes: "if you're responsible for security but don't have the authority to enforce security, then your true role is to take the blame when things go wrong".

How do you measure the effectiveness of that security?
Is your security in the right place? Are the right risks being addressed? Is that security reviewed regularly to ensure it is still adequate? How do you measure that? Have risk appetites for information security-related risks been set?
Many books have been written on measuring security effectiveness, and I doubt there is a single 'right way'. My advice is to measure the things you can control - not the things you cannot. For example, don't measure the number of intrusion attempts from the internet; measure your response metrics from detection to closure.
A clearly defined risk appetite is also critical - not just for information security, but for all of your operational or financial risks. At its core, Information Security is a practical application of risk management, so having a clear understanding of how much risk you're willing to tolerate, and under what circumstances, is critical.
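As a concrete example of measuring what you can control, detection-to-closure time is a simple metric any incident log can yield. A quick Python sketch (the incident timestamps are invented for illustration):

```python
from datetime import datetime

def mean_hours_to_close(incidents):
    """Average detection-to-closure time, in hours, across incidents."""
    hours = [(closed - detected).total_seconds() / 3600
             for detected, closed in incidents]
    return sum(hours) / len(hours)

# Hypothetical incident log: (detected, closed) timestamp pairs.
incidents = [
    (datetime(2012, 1, 3, 9, 0),   datetime(2012, 1, 3, 17, 0)),   # 8 hours
    (datetime(2012, 1, 10, 22, 0), datetime(2012, 1, 11, 10, 0)),  # 12 hours
    (datetime(2012, 1, 20, 6, 0),  datetime(2012, 1, 20, 10, 0)),  # 4 hours
]
print(mean_hours_to_close(incidents))  # 8.0
```

Tracking this month over month tells you whether your response capability is improving - something a count of blocked intrusion attempts never will.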

Can you detect breaches? Do you know what to do if one occurs?
How mature is your 'security intelligence'? Some organizations have been infiltrated for months before a data breach was detected. Do you have a CERT/CSIRT plan for responding to breaches?
Like Disaster Recovery/BCP or even a fire drill, knowing what to do when something bad happens can stop a bad situation from getting worse. Having a plan alone isn't enough, it must be regularly tested to ensure all participants understand their role and can perform under the pressure of a real security incident where time may be of the essence.

Does your security program meet compliance requirements?
With ever-increasing legal requirements, does your security program still match up? Have you reviewed your current processes against SOX, HIPAA, PCI-DSS, or any other applicable legislation recently? Are there proposed changes on the horizon that may affect the way you currently protect your data? How often do you review your security, not only against the changing threat landscape, but against changing regulations, technologies and best practice?


Is the company culture 'security aware'?
Information security staff can only do so much. Like a neighbourhood watch, diligent employees are the best defence against information security incidents. Do your employees know not to open random email attachments or how to spot a social engineer? Do they know who to contact if they suspect an incident has occurred? Do they undertake regular security awareness training?


Can your current security practices evolve with the business?
The IT industry is in the middle of one of its biggest shake-ups in recent history. With the increasing consumerization of IT driving the need for flexibility such as BYOD programs, cloud computing pushing in from all sides, and the ever-growing need for company data to be highly mobile and accessible, securing your sensitive corporate and customer data has probably never been more challenging. As all of these external pressures aren't going to go away, is your current Information Security program or strategy flexible enough to cope with the changing environment?

There are undoubtedly other important questions you could ask yourself, but if you can answer these few with confidence then you are most likely ahead of some of your peers and on your way to being able to sleep soundly at night.

“By failing to prepare, you are preparing to fail.” - Benjamin Franklin

Last minute Xmas gift?

Richard passed me this, perhaps the perfect stocking filler for the social engineer to give to his targets?

Pocket sized and perfect for recording all the things those pesky security guys tell you not to write down - all in one convenient place!

Worryingly, it is currently out of stock...a best seller perhaps?

Availability = not my problem!

Well OK, "not my problem" is perhaps a little harsh. But "not my responsibility" could be more accurate.
I think it is definitely time to rethink 'Availability' (as in the classic security 'CIA' triangle of Confidentiality, Integrity, and Availability) as being the responsibility of the Security area.
Availability, and its bigger, uglier cousin Disaster Recovery, have long been a part of the Information Security mantra, from entry-level CompTIA Security+ up to CISSP or CISM level. Why is this so?

While you could argue that availability is a security responsibility in the case of a DoS attack, does it remain a responsibility if, for example, a lack of disk space causes a server to come crashing down? Does that mean capacity planning is now Security's responsibility? Or if the single power supply dies and a server or router is unavailable - should Security have ensured that the critical system has sufficient redundancy to avoid an outage due to hardware failure?

I think in the dim dark past that Availability fell under security so it would be 'somewhere' and someone would be thinking about it - even if the 'security guys' weren't the most appropriate people.

I don't think the CIA triangle is going anywhere soon, but in my opinion you're better off concentrating on Confidentiality and Integrity and leaving Availability and DR to the IT department...

Shooting the messenger

Here's one for the shame file. An Australian security researcher, while accessing his superannuation fund's website, noticed a security flaw - a direct object reference vulnerability in the way the website displayed customer statements.

He notified the company, provided them his personal details and the details of the vulnerability. He even notified the ex-colleague whose records he accidentally viewed. The company's reaction? Call the cops, engage the lawyers and even threaten that he may be held liable for the cost of fixing the vulnerability!


Seriously? What planet are these guys living on? Would the outcome have been better if he had sold or disclosed the vulnerability to some less ethical party? Or done nothing and waited for someone else to exploit it in future? Maybe it's time to implement some kind of whistleblower-style laws to protect researchers in these circumstances.

I guess no good deed really does go unpunished. This kind of URL manipulation (ie: changing a single digit) hardly constitutes hacking in my mind. It'll be interesting to see the outcome here, and how our judicial system handles this case (if it gets that far).
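For readers unfamiliar with the flaw class, a 'direct object reference' bug boils down to the server trusting an identifier from the URL without checking who is asking. A hypothetical sketch of the difference (the statement data and function names are invented):

```python
# Toy "statements" table keyed by the ID that appears in the URL.
STATEMENTS = {
    101: {"owner": "alice", "balance": 50_000},
    102: {"owner": "bob", "balance": 72_000},
}

def get_statement_insecure(statement_id, logged_in_user):
    # Direct object reference: the URL's ID is the only "check".
    return STATEMENTS.get(statement_id)

def get_statement_secure(statement_id, logged_in_user):
    record = STATEMENTS.get(statement_id)
    if record is None or record["owner"] != logged_in_user:
        return None  # deny access to records the user doesn't own
    return record

# "alice" changes the ID in the URL from 101 to 102...
print(get_statement_insecure(102, "alice"))  # bob's statement leaks
print(get_statement_secure(102, "alice"))    # None - access denied
```

The fix is a one-line ownership check; the embarrassment of not having it is considerably larger.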

Toys for the boys

 I think anyone working in corporate IT (and especially security) is dealing with the headaches of the 'iPad invasion' (which extends well beyond Apple's 'must-have' products to all things new and shiny).

While I can understand the clamor of users who want the newest gadgets (IT staff can be the worst offenders), there is always the need to balance the implementation of such devices with the overall security requirements of the organization.

While it's easy to argue that companies should just allow BYOD policies and protect the data rather than the perimeter or the endpoint, actually implementing these changes can be a daunting task for many organizations - and an expensive one in terms of dollars and manpower - with the business benefits (in terms of productivity rather than simply goodwill) not always apparent.

This recent article about the trial of iPads by the Western Australian Government highlights many of the problems faced today. I am personally appalled at the parliamentarians who "threatened "industrial action" if iPads were not considered in the list of devices available as part of their laptop allowance" and who are quoted as saying: "We told them, 'If you don't give it to us, we will turn around and pass a law so you will give it to us!'".
Way to abuse your powers, jerk.

Sharing Government documents was also highlighted as a problem, with parliamentarians using the cloud storage service Dropbox (which has had its own security problems), claiming "We are only one FOI [Freedom of Information] request away from having to hand it over anyway...So it's not something we have been focusing on".
If that is the case, why protect any parliamentary documents at all? Post everything on a public website. Because it's not like governments ever deny FOI requests.

Threatened abuse of lawmaking powers and throwing taxpayer dollars on a device based more on marketing than an actual use-case. I'm just glad I don't live in W.A....

Sony password analysis

The upside of big data breaches involving passwords is that they give us Security Pros an understanding of what users are actually doing when they select their passwords. The cynic in me thinks that we can spend time trying to educate employees, family, friends and neighbours into using strong passwords and changing them frequently - and they'll nod and smile and agree it is important...and then go back to using 'abc123' on their Internet banking.
I've blogged before on past analyses of exposed passwords, and now with the recent Sony breach Troy Hunt has posted an analysis of 37,000 of the exposed Sony passwords. Does it contain anything groundbreaking? Well...no. It's a good bit of analysis that pretty much confirms what my inner cynic suspected - half of the passwords had only one character type (with 90% of those being lowercase only) and 45% of the passwords were numbers only. Only 4% of the passwords analyzed were what is commonly considered 'strong' passwords.
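The character-class breakdown in analyses like Troy's is easy to reproduce yourself if you ever end up with a password list to audit. A rough sketch (my own illustration, not Troy's code):

```python
def char_classes(password):
    """Return the set of character classes a password draws from."""
    classes = set()
    for ch in password:
        if ch.islower():
            classes.add("lower")
        elif ch.isupper():
            classes.add("upper")
        elif ch.isdigit():
            classes.add("digit")
        else:
            classes.add("symbol")
    return classes

# Invented sample; a real audit would iterate over the breached list.
passwords = ["password", "123456", "abc123", "Tr0ub4dor&3"]
single_class = [p for p in passwords if len(char_classes(p)) == 1]
print(single_class)  # ['password', '123456']
```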

One of the nice things Troy did with his analysis was compare the uniqueness of the passwords across the different Sony databases exposed - a luxury one usually doesn't have when examining breached passwords - 92% of passwords were identical for the 2,000 accounts that had the same email address. Troy even managed to cross-reference these accounts against the Gawker data breach and found that of the 88 common accounts, 67% were the same.
Oh and '123456' and 'password' were once again in the top few passwords used.

In other Sony related news - did Sony really sack a bunch of Security staff just before the data breach? That adds a new wrinkle to this most newsworthy of all breaches this year. I haven't seen it suggested, but could a disgruntled ex-employee have played a part?

The Wild West

A friend passed me this report [pdf] into Information Systems Security from the Western Australian Auditor General.

Key findings:

  • Fourteen of the 15 agencies we tested failed to detect, prevent or respond to our hostile scans of their Internet sites. These scans identified numerous vulnerabilities that could be exploited to gain access to their internal networks and information.
  • We accessed the internal networks of three agencies without detection, using identified vulnerabilities from our scans. We were then in a position to read, change or delete confidential information and manipulate or shut down systems. We did not test the identified vulnerabilities at the other 12 agencies.
  • Eight agencies plugged in and activated the USBs we left lying around. The USBs sent information back to us via the Internet. This type of attack can provide ongoing unauthorised access to an agency network and is extremely difficult to detect once it has been established.
  • Failure to take a risk-based approach to identifying and managing cyber threats and to meet or implement good practice guidance and standards for computer security has left all 15 agencies vulnerable:
    • Twelve of the 15 agencies had not recognised and addressed cyber threats from the Internet or social engineering techniques in their security policies.
    • Nine agencies had not carried out risk assessments to determine their potential exposure to external or internal attacks. Without a risk assessment, agencies will not know their exposure levels and potential impacts on their business.
    • Seven agencies did not have incident response plans or procedures for managing cyber threats from the Internet and social engineering.
  • Nearly all the agencies we examined had recently paid contractors between $9 000 to $75 000 to conduct penetration tests on their infrastructure. Some agencies were doing these tests up to four times a year. In the absence of a broader assessment of vulnerabilities, penetration tests alone are of limited value, as our testing demonstrated. Further, they are giving agencies a false sense of security about their exposure to cyber threats.
Some serious findings indeed, but it's good to see the Government performing these kinds of assessments and trying to get some traction on remediation of findings.

Whilst reading the report consider how well your organization would have fared in this type of assessment.

I also found the link for the 2010 report [pdf]  for comparison.

breachapalooza

We're halfway through 2011 and the breachapalooza* continues unabated!

Sony have been hit so many times that there's now a term for it: "Sownage". Add to the ever-growing list senate.gov, Citibank, Honda Canada and the IMF.

Although it isn't really news to Security folk, the mainstream media has picked up on it (largely thanks to the scale of Sony's woes) and are continuing to report on the never ending tide of high profile defacements and smash-and-grabs. A quick look at datalossdb shows the number of incidents so far this year (322) is only slightly up on this time last year (300) and behind 2009 (376); while Sony's 77 million records lost is still well behind Heartland's 130 million back in 2008.

With mainstream media interest undoubtedly leading to increased interest in boardrooms - executives asking "Can it happen to us?" and "What do we need to do to stop it happening to us?" - the question has to be asked: are the actions of LulzSec good or bad for the industry? Patrick Gray ruffled a few feathers with his thought-provoking "Why we secretly love LulzSec":

LulzSec is running around pummelling some of the world's most powerful organisations into the ground... for laughs! For lulz! For shits and giggles! Surely that tells you what you need to know about computer security: there isn't any.
Which led to an equally interesting response from Adam over at the Newschool site.

I think the answer may be a little from column A and a little from column B. In Patrick's defence, he's probably right to some degree. Every Security guy or gal who has ever been overruled or just plain ignored when explaining the need for better security testing, implementation, tools, monitoring, etc etc; probably has a little voice somewhere saying 'I told you so'.
Adam is right too when he says:
We’re being out-communicated by folks who can’t spell.
Why are we being out-communicated? Because we expect management to learn to understand us, rather than framing problems in terms that matter to them. We come in talking about 0days, whale pharts, cross-site request jacking and a whole alphabet soup of things whose impact to the business are so crystal clear obvious that they go without saying.

Although I would point out that sometimes even framing the problem in the right language to the right audience still doesn't result in the desired outcome. The old 'you can lead a horse to water, but you can't make him drink' problem exists if a mentality of 'it can't happen to us' rules. The only plus of LulzSec's actions is that they may be breaking down some of that mentality.

However the most disappointing, or possibly telling, thing is that, from what has been reported, very little of what LulzSec has accomplished has been particularly difficult or sophisticated. This is not really surprising, as it matches what Verizon revealed earlier in the year [pdf] when they reported that 92% of the breaches investigated were 'not particularly sophisticated'. SQL injection may be old school, but it's more popular than ever.
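For anyone wondering why SQL injection refuses to die, the difference between a vulnerable query and a safe one is a single habit. A minimal sketch using Python's sqlite3 module (the table and credentials are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('admin', 's3cret')")

def login_vulnerable(name, password):
    # String concatenation: attacker-controlled input becomes SQL.
    query = ("SELECT * FROM users WHERE name = '%s' AND password = '%s'"
             % (name, password))
    return conn.execute(query).fetchone() is not None

def login_safe(name, password):
    # Parameterised query: input is bound as data, never parsed as SQL.
    return conn.execute(
        "SELECT * FROM users WHERE name = ? AND password = ?",
        (name, password)).fetchone() is not None

# Classic bypass: the injected quotes and OR clause make the WHERE always true.
print(login_vulnerable("admin", "' OR '1'='1"))  # True - logged in!
print(login_safe("admin", "' OR '1'='1"))        # False
```

Old school indeed - the defence has been well understood for a decade, yet the concatenated version keeps getting written.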

In the meantime, Paul Ducklin from Sophos issued a challenge to the LulzSec group to use their skills, and their obvious spare time, to do something worthwhile like supporting Johnny Long's Hackers for Charity.

That may have to wait until after LulzSec are done warring with 4chan/anonymous, which at the very least may provide some relief to Sony and may give other companies a break.**


*just heard Patrick Gray's risky.biz podcast from last week call it the pwnpocalypse. Why didn't I think of that?

**Edit 18/6:  or maybe they're not as they're still exposing records.

Be prepared!

Being horribly sick at home with a nasty chest infection has some small benefits - such as being able to catch up on some TV. I just finished watching 'The Egyptian Job'  which is a speculative recreation of the robbing of the tomb of pharaoh Amenemhat III.

Amenemhat III was one of the richest rulers of the Middle Kingdom and had a state-of-the-art pyramid protected by the best security of the time - blind passageways, dead-ends, massive immovable stone doors and a 45 ton slab of quartzite sealing the burial chamber.

But it was all for nought! Using stone tools, ingenuity and elbow grease, a determined group of thieves managed to dig a 100 metre tunnel, move several 15+ ton doors and crack through the 45 ton quartzite slab to pull off one of the richest heists in history.

So what's the lesson? The usual one that no matter how good your defences may seem, a truly determined attacker with time on his/her side will find a way through. Amenemhat III's pyramid used static defences, giant blocks that were 'set and forget'. If the graverobbers hadn't made off with his loot 3700 years ago, Egyptologist Flinders Petrie, with 'modern' tools and techniques would have taken the lot in 1888.

So bearing in mind that one day your defences will fail, the next important step is to be properly prepared for that eventuality. In the aftermath of September 11, and more recently the Christchurch earthquake and Queensland cyclone Yasi, many businesses created or updated their Disaster Recovery and Business Continuity Plans.

While DR/BCP plans are important, such large scale disasters (or even smaller ones, such as your building catching fire) are relatively rare. A statistically more likely occurrence would be for a business to lose critical data - through either malicious or accidental means, or to suffer some other type of network breach such as a large scale virus outbreak or website defacement. But how many businesses have response plans in place to deal with these types of incidents?

Regardless of the business size, having some type of incident response plan to deal with these types of occurrences is a good idea. The very basics of clearly defining who needs to be notified internally (and has the authority to make decisions such as if a compromised critical system can be/should be shut down or if law enforcement needs to be informed) or under what circumstances external bodies must be informed (regulatory bodies or reporting the loss of PII data) is a solid starting point. Predefined statements for the media (or at least determining who is allowed to talk to the media) are also a good idea in case the breach is made public.

Identifying who has the skills to perform an investigation (internally or externally) and who has budget authority to engage investigators (nothing is ever free!) is the next step, as it is far better to have this sort of thing defined well in advance, in calm circumstances, than to be making high-pressure decisions on the fly at 3am when a major data breach may or may not have occurred (or indeed still be in progress!).

Where investigations are handled internally, having adequately trained and resourced staff is essential - you can't just rely on your 'regular' I.T. staff or Information Security staff to be able to collect evidence and perform forensics without specialized training - and these skills need to be kept up to date through regular incident response drills that expose a sufficient number of staff to the response process (primary responders and backup team members so that a missing key team member doesn't derail the response process).

If a third party is to be used, ensure they have employees with sufficient skills to investigate and collect evidence - this is especially important if the incident ends up going to court - and preferably a proven track record of performing such investigations. Understand how long different types of investigations take and how much they're likely to cost - the cost of the investigation always has to be balanced against the damage of the incident.

Finally, of course, is being able to tell whether an incident has occurred at all. Sometimes it is easy, but sometimes an organization may not know for months that its network and information systems have been compromised. Sometimes it may be a false positive and no incident has occurred at all. Understanding what is 'normal' in your environment is critical - as is being able to quickly detect when something is not normal.
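The "know what normal looks like" idea can be illustrated with the simplest possible baseline: compare today's value of some metric against its historical mean and standard deviation. This is a deliberately naive sketch using invented sample data (real environments need far more than a single-metric threshold), but it shows the shape of the approach:

```python
# Flag a day's failed-login count when it deviates from the historical
# baseline by more than a few standard deviations. Sample data is invented.
from statistics import mean, stdev

def is_anomalous(history, today, sigmas=3.0):
    """True if `today` sits more than `sigmas` standard deviations
    from the mean of the historical observations."""
    mu = mean(history)
    sd = stdev(history)
    return abs(today - mu) > sigmas * sd

baseline = [12, 15, 11, 14, 13, 16, 12, 14]   # typical daily failed logins
print(is_anomalous(baseline, 13))    # an ordinary day -> False
print(is_anomalous(baseline, 400))   # a day worth investigating -> True
```

A real deployment would track many metrics, account for seasonality (weekends, end of month), and tune thresholds to keep false positives manageable - but even a crude baseline beats having no idea what normal is.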

Does Obscure = Secure?

It's not new (in fact it's from 2008), but today I came across this nice piece on Security by Obscurity.

Well worth a read, with some nice points and counterpoints from the likes of Steve Riley and Jesper Johansson.

Suckers down under

Now I always thought us Aussies were a pretty savvy lot, with an ingrained cultural skepticism of things that seem 'too good to be true'.

It is reported that a forthcoming study from the ACCC (Australian Competition and Consumer Commission) shows that over 50 Aussies a week are falling prey to online scams, paying out over $1.5 million per month.

Ouch!

Cyber Crime Facts Executives Need to Know

I came across an article on PCWorld entitled "7 Cyber Crime Facts Executives Need to Know" and thought I'd add some comments:

Cyber crimes are far more costly than taking steps to harden an environment beforehand
Prevention is always cheaper than cure (cheaper in time, resources and dollars!). This doesn't just go for security, but for other areas such as software development as well. Retro-fitting is always difficult, always expensive and never as good as if you'd 'done it right the first time'. The quote "the appointment of a single top executive responsible for enterprise risk management, a la a Chief Security Officer, or better still, a Chief Risk Officer is a critical factor for success" is an interesting one, as in my experience (and from talking with peers) many CROs in Australia are still primarily focused on financial and operational risk, with little understanding or appreciation of Information Risk. Perhaps it's a bit different in the US, however (and I hope the trend is slowly changing here as well... thanks, Julian Assange!)


Cyber crimes are pervasively intrusive and increasingly common occurrences
Recent high-profile events such as Wikileaks and the recent Vodafone breach have probably helped raise some awareness about Information Security and the 'reality' of cyber-crimes, although your less tech-savvy executives may think that having anti-virus installed = magical cyber-crime prevention forcefield.

The most costly cyber crimes are those caused by web attacks and malicious insiders
Web attacks I agree with, but I think there has always been some controversy about the real threat of insiders. While they can't be discounted (OK, Wikileaks again...), they shouldn't be overestimated either. Insiders know they're more likely to get caught than an anonymous hacker in Russia or some other place with no extradition treaty...
IMHO your web presence is more likely to be attacked than you are to suffer an internal breach, especially with the rush to throw as much as possible onto the internet.

At onset, rapid resolution is the key to reducing costs  
Rapid identification and handling of incidents is a must in order to reduce damage and cost. Like point #1, preparation is the key and will make all the difference when the bits hit the cyber-fan.
Oh, and notice I mentioned identification - you can't handle or resolve what you don't know about!

Loss of information due to theft represents the highest external cost, followed by the costs associated with the disruption to business operations
This may vary from industry to industry and country to country, as laws such as breach disclosure requirements differ across the world. But in general, if it was worth breaking in and stealing, it must be worth something to someone - a competitor, a rival government, etc. Resuming operations is certainly easier than retrieving data posted on the Internet, or than restoring consumer confidence in the face of a privacy breach.

All industry verticals are susceptible to cybercrime
If you have data worth something, then you're a potential target. Whether you're in medicine, finance or widget manufacturing, you may be a target for cybercrime. Unfortunately it's a fact of life today. Of course, some industries (like finance) are far more likely to be targeted than others.

If you deal with senior or executive management in your organization, these points make great starting material to present to them. Use sites like datalossdb to find incidents in your region or industry to emphasize your points. Don't assume they know these things - go out there and educate them!
