Wednesday 18 December 2013

File Integrity Monitoring – 3 Reasons Why Your Security Is Compromised Without It Part 1

Introduction
This is a three-part series examining why File Integrity Monitoring is essential to the security of any business's IT. This first part examines the need for malware detection, addressing the inevitable flaws in anti-virus systems.

Malware Detection – How Effective is Anti-Virus?
When malware hits a system - most commonly a Windows operating system, but increasingly Linux and Solaris systems are coming under threat (especially with the renewed popularity of Apple workstations running Mac OS X) - it will need to be executed in some way in order to do its evil deeds.
This means that some kind of system file – an executable, driver or DLL – has to be planted on the system. A Trojan will make sure that it gets executed without further user intervention by replacing a legitimate operating system or program file. When the program runs, or the OS performs one of its regular tasks, the Trojan is executed instead.

On a user workstation, 3rd party applications such as internet browsers, PDF readers and mundane user packages like MS Word or Excel have been targeted as a vector for malware. When the document or spreadsheet is opened, vulnerabilities in the application can be exploited, enabling malware to be downloaded and executed.

Either way, there will always be a number of associated file changes. Legitimate system files are replaced or new system files are added to the system.

If you are lucky, you won’t be the first victim of this particular strain of malware and your AV system – provided it has been updated recently – will have the necessary signature definitions to identify and stop the malware.

When this is not the case, and bear in mind that millions of new malware variants are introduced every month, your system will be compromised, usually without you knowing anything about it, while the malware quietly goes about its business, damaging systems or stealing your data.

FIM – Catching the Malware That Anti-Virus Systems Miss
That is, of course, unless you are using file integrity monitoring.

Enterprise-level File Integrity Monitoring will detect any unusual filesystem activity. 'Unusual' is the important word here, because many files change frequently on a system, so it is crucial that the FIM system is intelligent enough to understand what regular operation looks like for your systems and only flag genuine security incidents.

However, exclusions and exceptions should be kept to a minimum, because FIM is at its best when it is operated with a 'zero tolerance' approach to changes. Malware is built to be effective, and that means it must both be distributed successfully and operate without detection.
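
To make the 'zero tolerance' principle concrete, here is a minimal sketch (Python, with purely illustrative paths and exclusions) of the basic mechanics: hash everything under a monitored tree, keep the exclusion list short, and report every addition, removal or modification against the stored baseline. A commercial FIM product does far more than this, but the underlying idea is the same.

```python
import hashlib
import os

# Illustrative 'noisy' areas excluded by policy - kept to an absolute minimum,
# everything else is treated with zero tolerance.
EXCLUDED_PREFIXES = ("/var/log/", "/var/spool/")

def hash_file(path):
    """Return a SHA-256 hash of a file's contents."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_baseline(root):
    """Walk a monitored directory tree and record a hash for every file."""
    baseline = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if path.startswith(EXCLUDED_PREFIXES):
                continue
            try:
                baseline[path] = hash_file(path)
            except OSError:
                pass  # unreadable file - skip rather than abort the scan
    return baseline

def compare(old, new):
    """Report files added, removed or modified since the last baseline."""
    added = sorted(set(new) - set(old))
    removed = sorted(set(old) - set(new))
    modified = sorted(p for p in set(old) & set(new) if old[p] != new[p])
    return added, removed, modified
```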

The challenge of distribution has seen much in the way of innovation. Tempting emails with malware bait in the form of pictures to be viewed, prizes to be won and gossip on celebrities have all been successful in spreading malware. Phishing emails provide a convincing reason to click and enter details or download forms, and specifically targeted Spear Phishing emails have been responsible for duping even the most cybersecurity-savvy user.

Whatever the vector used, once malware is welcomed into a system, it may then have the means to propagate within the network to other systems.

So early detection is of paramount importance. And you simply cannot rely on your anti-virus system to be 100% effective, as we have already highlighted.

FIM provides this ‘zero tolerance’ to filesystem changes. There is no second-guessing of what may or may not be malware, guaranteeing that all malware is reported, making FIM 100% effective in detecting any breach of this type.

Summary
FIM is ideal as a malware detection technology as it is not prone to the ‘signature lag’ or ‘zero day vulnerabilities’ that are the Achilles’ Heel of anti-virus systems. As with most security best practices, the advice is always more is better, and operating anti-virus (even with its known flaws) in conjunction with FIM will give the best overall protection. AV is effective against legacy malware and its automated protection will quarantine most threats before they do any damage. But when malware does evade the AV, as some strains always will do, real-time FIM can provide a vital safety net.

Wednesday 11 December 2013

Which Technology Is Best For File Integrity Monitoring – Dedicated FIM or SIEM FIM?

Introduction
Within the FIM technology market there are choices to be made. Agent-based or agentless is the most common choice, but even then there are both SIEM and ‘pure-play’ FIM solutions to choose between.


FIM – Agents or Agentless

There is never a clear advantage for either agent-based or agentless FIM. There is a balance to be struck between the convenience of agentless FIM and the arguably superior operation of agent-based FIM, which offers:
  • Real-time detection of changes – agentless FIM scanners can only be effective on a scheduled basis, typically once every day
  • Locally stored baseline data, meaning a one-off full scan is all that is needed, while an agentless vulnerability scanner will always need to re-baseline and hash every single file on the system each time it scans (see the sketch below)
  • Greater security by being self-contained, whereas an agentless FIM solution will require a logon and network access to the host under test
Conversely, proponents of the agentless vulnerability scanner will cite the advantages of their technology over an agent-based FIM system, including:
  • Up and running in minutes, with no need to deploy and maintain agents on endpoints, making an agentless system easier to operate
  • No need to load any 3rd party software onto endpoints – an agentless scanner is 100% self-contained
  • Foreign or new devices being added to a network will always be discovered by an agentless scanner, while an agent-based system is only effective where agents have been deployed onto known hosts

For these reasons there is no outright winner of this argument and typically, most organizations run both types of technology in order to benefit from all the advantages offered.
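
As a rough illustration of why the agentless approach has to re-hash everything on every run, the sketch below drives an SSH session from a central scanner to hash a remote directory tree. It assumes key-based SSH access and the find and sha256sum utilities on the target host - hypothetical details, but representative of an agentless scan with no baseline stored on the endpoint.

```python
import subprocess

def agentless_scan(host, directory):
    """Hash every file under 'directory' on a remote host over SSH.
    With nothing stored on the endpoint, the whole tree must be
    re-hashed on every scheduled run."""
    command = ["ssh", host, f"find {directory} -type f -exec sha256sum {{}} +"]
    result = subprocess.run(command, capture_output=True, text=True, check=True)
    hashes = {}
    for line in result.stdout.splitlines():
        digest, _, path = line.partition("  ")
        hashes[path] = digest
    return hashes

# Typically run from a central scanner on a schedule, e.g. once a day:
# baseline = agentless_scan("web01.example.com", "/etc")
```
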
Using SIEM for FIM

The SIEM question is much easier to deal with. Similar to the agentless argument, a SIEM system may be operated without requiring any agent software on the endpoints, using WMI or the native syslog capabilities of the host. However, this is typically seen as an inferior solution to the agent-based SIEM package. An agent will allow for advanced security functions such as hashing and real-time log monitoring.

For FIM, SIEM vendors rely on a combination of host object access auditing and a scheduled baseline of the filesystem. Auditing of filesystem activity can give real-time FIM capabilities, but requires substantially more host resources than a benign agent. The native auditing of the OS will not provide hash values for files, so the forensic detection of a Trojan cannot be achieved to the extent that an enterprise FIM agent will achieve it.

The SIEM vendors have moved to address this problem by providing a scheduled baseline and hash function using an agent. The result is a solution that is the worst of all options – an agent must be installed and maintained, but without the benefits of a real-time agent!

Summary

In summary, SIEM is best used for event log analysis and FIM is best used for File Integrity Monitoring. Whether you then decide to use an agent-based FIM solution or an agentless system is a tougher call. In all likelihood, the conclusion will be that a combination of the two is the only complete solution.

Monday 4 November 2013

Is Your QSA Making You Less Secure?

Introduction
Most organizations will turn to a QSA when undertaking a PCI Compliance project. A Qualified Security Assessor is the guy you need to satisfy with any security measures and procedures you implement to meet compliance with the PCI DSS, so it makes sense to get them to tell you what you need to do.
For many, PCI Compliance is about simply dealing with the PCI DSS in the same way they would deal with another deadlined project. When does the bank want us to be PCI Compliant and what do we need to do before we get audited in order to get a pass?

For many, this is where the problems often begin because, of course, PCI compliance isn’t simply about passing an audit but about getting your organization sufficiently organized and aware of the need to protect cardholder data at all times. The cliché in PCI circles is ‘don’t take a checkbox approach to compliance’, but it is true. Passing the audit is a tangible goal, but it should only be a milestone along the way to maturing internal processes and procedures in order to operate a secure environment every day of the year, not just to drag your organization through an annual audit.

The QSA Moral Maze
However, for many, the QSA is hired to ‘make PCI go away’ and this can sometimes present a dilemma. QSAs are in business and need to compete for work like any other commercial venture. They are typically fiercely independent and take their responsibility seriously for providing expert guidance, however, they also have bills to pay.

Some get caught by the conflict of interest between advising the implementation of measures and offering to supply the goods required. This presents a difficult choice for the customer – go along with what the QSA says, and buy whatever they sell you, or go elsewhere for any kit required and risk the valuable relationship needed to get through the audit. Whether this is for new firewalls, scanning or Pen Testing services, or FIM and Logging/SIEM products, too many Merchants have been left to make difficult decisions. The simple solution is to separate your QSA from supplying any other service or product for your PCI project, but make sure this is clarified up front.

The second common conflict of interest is one that affects any kind of consultant. If you are being paid by the day for your services, would you want the engagement to be shorter or longer? If you had the opportunity to influence the duration of the engagement, would you fight for it to be ended sooner, or be happy to let it run longer?

Let’s not be too cynical over this – the majority of Merchants have paid widely differing amounts for their QSA services but have been delighted with the value for money received. But we have had one experience recently where the QSA has asked for repeated network and system architecture re-designs. They have recommended that firewalls be replaced with more advanced versions with better IPS capabilities. In both instances, you can see that the QSA is giving accurate and proper advice; however, one of the unfortunate side-effects of doing so is that the Merchant delays implementation of other PCI DSS requirements. The result in this case is that the QSA actually delays security measures being put in place; in other words, the security expert’s advice is to prolong the organization's weak security posture!

Conclusion
The QSA community is a rich source of security experience and expertise, and who better to help navigate an organization through a PCI Program than those responsible for conducting the audit for compliance with the standard. However, best practice is to separate the QSA from any other aspect of the project. Secondly, self-educate and help yourself by becoming familiar with security best practices – it will save time and money if you can empower yourself instead of paying by the day to be taught the basics. Finally, don’t delay implementing security measures – you know your systems better than anyone else, so don’t pay to prolong your project! Seize responsibility for de-scoping your environment where possible, then apply basic best practices to the remaining systems in scope – harden, implement change controls, measure effectiveness using file integrity monitoring and retain audit trails of all system activity. It’s simpler than your QSA might lead you to believe.

Wednesday 16 October 2013

SIEM plus Correlation = Security?

Introduction
Whether you are working from a SANS 20 Security Best Practices approach, or working with an auditor for SOX compliance or QSA for PCI compliance, you will be implementing a logging solution.
Keeping an audit trail of key security events is the only way to understand what ‘regular’ operation looks like. Why is this important? Because it is only when you have this clear that you can begin to identify irregular and unusual activity which could be evidence of a security breach. Better still, once you have that picture of how things should be when everything is normal and secure, an intelligent log analysis system, aka SIM or SIEM, can automatically assess events, event volumes and patterns to intelligently judge on your behalf if there is potentially something fishy going on.
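
As a simple illustration of what 'understanding regular operation' means in practice, the sketch below (illustrative figures only) learns a baseline from historical hourly event counts - failed logons, say - and flags any hour whose volume falls well outside that norm. Real SIEM analytics are considerably more sophisticated, but the principle of comparing current activity against a learned baseline is the same.

```python
from statistics import mean, stdev

def flag_unusual_hours(hourly_event_counts, threshold_sigmas=3.0):
    """Flag any hour whose event volume (e.g. failed logons) is well outside
    the learned 'normal' range for this system."""
    baseline = mean(hourly_event_counts)
    spread = stdev(hourly_event_counts)
    limit = baseline + threshold_sigmas * spread
    return [(hour, count) for hour, count in enumerate(hourly_event_counts)
            if count > limit]

# A run of typical hourly failed-logon counts with one suspicious spike:
history = [2, 1, 0, 3, 2, 1, 2, 0, 1, 2, 60, 1]
print(flag_unusual_hours(history))   # -> [(10, 60)]
```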

Security Threat or Potential Security Event? Only with Event Correlation!

The promise of SIEM systems is that once you have installed one of these systems, you can get on with your day job and if any security incident occurs, it will let you know about it and what you need to do in order to take care of it.

The latest ‘must have’ feature set is correlation, but this must be one of the most overused and abused technology terms ever!

The concept is straightforward: isolated events which are potential security incidents (for example, ‘IPS Intrusion Detected event’) are notable but not as critical as seeing a sequence of events, all correlated by the same session, for example, an IPS Alert, followed by Failed Logon, followed by a Successful Admin Logon.
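
For illustration, a correlation rule of this kind boils down to something like the sketch below: watch for the named sequence of events from the same source within a time window, and only then raise a single critical incident. The event names, tuple format and window are illustrative assumptions, not any particular vendor's rule syntax.

```python
from datetime import timedelta

# Raise one critical incident only when this sequence is seen from the same
# source within a short window.
SEQUENCE = ["IPS_INTRUSION_DETECTED", "LOGON_FAILED", "ADMIN_LOGON_SUCCESS"]

def correlate(events, window=timedelta(minutes=10)):
    """events: iterable of (timestamp, source_ip, event_type), ordered by time.
    Returns (source_ip, timestamp) pairs where the full sequence completed."""
    incidents = []
    progress = {}  # source_ip -> (next expected index, time the sequence started)
    for timestamp, source, event_type in events:
        index, started = progress.get(source, (0, None))
        if index > 0 and timestamp - started > window:
            index, started = 0, None           # sequence went stale - start over
        if event_type == SEQUENCE[index]:
            if index == 0:
                started = timestamp            # first event of a candidate sequence
            index += 1
            if index == len(SEQUENCE):
                incidents.append((source, timestamp))
                index, started = 0, None
        progress[source] = (index, started)
    return incidents
```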

In reality, these advanced, true correlation rules are rarely that effective. Unless you are running a very busy security operations bridge, with an enterprise comprising thousands of devices, standard single event/single alert operation should work well enough for you.

For example, in the scenario above, it should be the case that you DON’T have many intrusion alerts from your IPS (if you do, you really need to look at your firewalling and IPS defenses as they aren’t providing enough protection). Likewise if you are getting any failed logins from remote users to critical devices, you should put your time and effort into a better network design and firewall configuration instead of experimenting with ‘clever, clever’ correlation rules. It’s the KISS* principle applied to security event management.

As such, when you do get one of the critical alerts from the IPS, this should be enough to initiate an emergency investigation, rather than waiting to see whether the intruder succeeds in brute-forcing a logon to one of your hosts (by which time it is too late to head it off anyway!)

Correlation rules perfected – but the system has already been hacked…

In fact, consider this last point further, as it is where security best practices deviate sharply from the SIEM Product Manager's pitch. Everyone knows that prevention is better than cure, so why is there so much hype surrounding the need for correlated SIEM events? Surely the focus should be on protecting our Information Assets rather than implementing an expensive and complicated appliance which may or may not sound an alarm when systems are under attack?

Security Best Practices will tell you that you must implement – thoroughly – the basics. The easiest and most available security best practice is to harden systems, then operate a robust change management process.
By eliminating known vulnerabilities from your systems (primarily configuration-based vulnerabilities but, of course, software-related security weaknesses too via patching) you provide a fundamentally well-protected system. Layer up other defense measures too, such as anti-virus (flawed as a comprehensive defense system, but still useful against the mainstream malware threat), firewalling with IPS, and of course, all underpinned by real-time file integrity monitoring and logging, so that if any infiltration does occur, you will get to know about it immediately.

Conclusion

Contemporary SIEM solutions offer much promise as THE intelligent security defense system. However, experience and the evidence of ever-increasing numbers of successful security breaches tell us that there is never going to be a ‘silver bullet’ for defending our IT infrastructure. Tools and automation can help of course, but genuine security for systems only comes from operating security best practices with the necessary awareness and discipline to expect the unexpected.

*KISS – Keep It Super Simple

Thursday 12 September 2013

Cyber Threat Sharing Bill and Cyber Incident Response Scheme – Shouldn’t We Start with System Hardening and FIM?

Background: Defending the Nation’s Critical Infrastructure from Cyber Attacks

In the UK, HM Government’s ‘Cyber Incident Response Scheme’ is closely aligned in intent and purpose to the forthcoming US Cyber Threat Sharing Bill.

The driver for both mandates is that, in order to defend against truly targeted, stealthy cyber attacks (APTs, if you like), there will need to be a much greater level of awareness and collaboration. This becomes a government issue when the nation’s critical infrastructures (defense, air traffic control, health service, power and gas utilities etc.) are concerned. Stuxnet proved that cyber attacks against critical national infrastructure can succeed and there isn’t a government anywhere in the world who doesn’t have concerns that they could be next.

The issues are clear: a breach could happen, despite best efforts to prevent it. But in the event that a security breach is discovered, identifying the nature of the threat and then properly communicating this to others at risk is time-critical. The damage may already be done at one facility, but heading it off before it affects other locations becomes the new priority: the impact of the breach can be isolated if swift and effective mitigating actions can be taken at other organizations subject to the same cyber threat.

As it stands, the US appear to be going further, legislating regulation via the Cyber Threat Sharing Bill. The UK Government have created the Cyber Incident Response Scheme, but without this being a legislated and regulated requirement it may suffer from slow adoption. Why wouldn’t the UK do likewise if they are taking national cyber security as seriously?

System Hardening and FIM

Prevention Better Than Cure?

One other observation, based on experience with other ‘top down’ mandated security standards such as PCI DSS, is that there is a temptation for the authorities to prioritize specific security best practices over others. Being able to give an ‘If you only do one thing for security, it’s this…’ message gets things moving in the right direction; however, it can also lead to a false sense of security amongst the community that the mandated steps have been taken and so the organization is now ‘secure’.

In the case of the UK initiative, the collaboration with CREST is sound in that it provides a degree of ‘quality control’ over the resources recommended for usage. However, the concern is that the emphasis of the CREST scheme may be biased too heavily towards penetration testing. Whilst this is a good, basic security practice, pen testing is either too infrequent, or too automated (and therefore breeds complacency). Better than doing nothing? Absolutely – but the program should not be stopped there.

A truly secure environment is one where all security best practices are understood and embedded within the organization and operated constantly. Likewise, vulnerability monitoring and system integrity should be a non-stop process, not a quarterly or ad hoc pen test. Real-time file integrity monitoring, continuously assessing devices for compliance with a hardened build standard and identifying all system changes is the only way to truly guarantee security.

Wednesday 4 September 2013

PCI DSS Version 3 and File Integrity Monitoring – New Standard, Same Problems

PCI DSS Version 3.0

PCI DSS Version 3 will soon be with us. Such is the anticipation that the PCI Security Standards Council have released a sneak preview ‘Change Highlights’ document.
The updated Data Security Standard highlights include a wagging finger statement which may be aimed at you if you are a Merchant or Acquiring Bank.

“Cardholder data continues to be a target for criminals. Lack of education and awareness around payment security and poor implementation and maintenance of the PCI Standards leads to many of the security breaches happening today”

In other words, a big part of the drive for the new version of the standard is to give it some fresh impetus. Just because the PCI DSS isn’t new, it doesn’t make it any less relevant today.


But What is the Benefit of the PCI DSS for Us?

To understand just how relevant cardholder data protection is, the hard facts are outlined in the recent Nilson Report. Their findings are that global card fraud losses have now exceeded $11 billion. It’s not all bad news if you are a card brand or issuing bank – the losses are made slightly more bearable by the fact that the total value of transactions now exceeds $21 trillion.

http://www.nilsonreport.com/publication_the_current_issue.php?1=1

“Card issuer losses occur mainly at the point of sale from counterfeit cards. Issuers bear the fraud loss if they give merchants authorization to accept the payment. Merchant and acquirer losses occur mainly on card-not-present (CNP) transactions on the Web, at a call center, or through mail order”

This is why the PCI DSS exists and needs to be taken seriously with all requirements fully implemented, and practised daily. Card fraud is a very real problem and as with most crimes, if you think it won’t happen to you, think again. Ignorance, complacency and corner-cutting are still the major contributors to card data theft.

The changes are very much in line with NNT’s methodology of continuous, real-time security validation for all in scope systems – the PCI SSC state that the changes in version 3 of the standard include “Recommendations focus on helping organizations take a proactive approach to protect cardholder data that focuses on security, not compliance, and makes PCI DSS a business-as-usual practice”

So instead of this being a ‘Once a year, get some scans done, patch everything, get a report done from a QSA then relax for another 11 months’ exercise, the PCI SSC are trying to educate and encourage merchants and banks to embed or entrench security best practices within their everyday operations, and be PCI Compliant as a natural consequence of this.

Continuous FIM – The Foundation of PCI Compliance

In fact, taking a continuous FIM approach as the starting point for security and PCI compliance makes much sense. It doesn’t take long to set up, it will only tell you to take action when you need to do so, it will help to define a hardened build standard for your systems and it will drive you to adopt the necessary discipline for change control. Plus, it will give you full peace of mind that systems are being actively protected at all times, 100% in line with PCI DSS requirements.

Wednesday 21 August 2013

A New Role for FIM in the Unix and Linux World – Undoubtedly, This is The Shape of Things to Come…

Lots of coverage this week relating to ‘Hand of Thief’, the latest black-market Trojan designed for any aspiring cyber-fraudster – yours for just $2000.

It’s concerning news in that the threat to your personal data – predominantly your internet banking details – is an increasingly marketable commodity, but for the IT community the additional interest in this particular piece of malware is that it has been engineered specifically for Linux. Estimates suggest that Linux as a desktop OS accounts for less than 1% of the world’s total. Of course, Linux is very popular as a host/server OS, but Hand of Thief is squarely intended to intercept a user’s browser interactions. It may be a proportionally small pool of potential targets, but at least its author gets 100% of it – the quantity of malware targeting the Linux OS is negligible compared to the tens of millions of new malware variants being discovered in the Windows world every year.

What Would Walter White Do?

The market for Hand of Thief seems to be modelled in the image of Breaking Bad’s Walter White’s structure for his blue crystal meth market (I’m sure I don’t need to explain what Breaking Bad is?). At the top, there is a development lab manufacturing the malware, and the guys engineering the code, like Walter and his trainee cooks, seem satisfied just to produce and sell product. Their customers will either be the criminal gangs looking to use the malware to steal banking information, or there could even be a further tier of middle-men operating the phishing network to distribute the malware and gather account codes and passwords to sell onto other groups. These will be the guys actually logging in and transferring the cash out.
The timing is interesting too – with the Citadel bust just being made public, the headline and moral of the story should have been that the perpetrators have just been jailed, but maybe the estimated $500M stolen was actually the more eye-catching element of the story? So instead of acting as a warning and deterrent to other cybercriminals, the story could just as likely have inspired even more to “get rich or die tryin’”, just like the notorious Albert Gonzalez who held this as his motto when he undertook his various scams targeting cardholder data theft.

Linux Users – Welcome to the New Wild West

The only real conclusion is that the inevitable proliferation of cybercrime-enabling malware continues, and that the ‘high ground’ previously afforded by non-Windows operating systems now seems to be diminishing. The good news is that protection technology is also progressing – real-time FIM is already available for Mac OS X and nearly all other contemporary Linux and Unix platforms, including Solaris, Ubuntu, RedHat and SUSE. This means that there is already technology to detect malware, even zero day attacks that will evade anti-virus systems. Furthermore, with prevention always being the ideal strategy, hardening checklists can now be applied using the same file integrity monitoring technology to audit Linux hosts and desktops to ensure most vulnerabilities are closed down and kept out. And of course, vigilance is always going to be required – phishing attacks have doubled in the last 12 months and this all points to an upward-spiraling trend.

Monday 8 July 2013

File Integrity Monitoring – FIM and Why Change Management is the Best Security Measure You Can Implement

Introduction
With the growing awareness that cyber security is an urgent priority for any business, there is a ready market for automated, intelligent security defenses. The silver bullet against malware and data theft is still being developed (promise!) but in the meantime there are hordes of vendors out there that will sell you the next best thing.

The trouble is, who do you turn to? According to, say, the Palo Alto firewall guy, his appliance is the main thing you need to best protect your company’s intellectual property, although if you then speak to the guy selling the FireEye sandbox, he may well disagree, saying you need one of his boxes to protect your company from malware. Even then, the McAfee guy will tell you that endpoint protection is where it’s at – their Global Threat Intelligence approach should cover you for all threats.

In one respect they are all right, all at the same time – you do need a layered approach to security defenses and you can almost never have ‘too much’ security. So is the answer as simple as ‘buy and implement as many security products as you can’?

Cyber Security Defenses – Can You Have Too Much of a Good Thing?
Before you draw up your shopping list, be aware all this stuff is really expensive, and the notion of buying a more intelligent firewall to replace your current one, or of purchasing a sandbox appliance to augment what your MIMEsweeper already largely provides, demands a pause for thought. What is the best return on investment available, considering all the security products on offer?

Arguably, the best value for money security product isn’t really a product at all. It doesn’t have any flashing lights, or even a sexy looking case that will look good in your comms cabinet, and the datasheet features don’t include any impressive packets per second throughput ratings. However, what a good Change Management process will give you is complete visibility and clarity of any malware infection, any potential weakening of defenses plus control over service delivery performance too.

In fact, many of the best security measures you can adopt may come across as a bit dull (compared to a new piece of kit for the network, what doesn’t seem dull?) but, in order to provide a truly secure IT environment, security best practices are essential.

Change Management – The Good, The Bad and The Ugly (and The Downright Dangerous)
There are four main types of changes within any IT infrastructure
  • Good Planned Changes (expected and intentional, which improve service delivery performance and/or enhance security)
  • Bad Planned Changes (intentional, expected, but poorly or incorrectly implemented which degrade service delivery performance and/or reduce security)
  • Good Unplanned Changes (unexpected and undocumented, usually emergency changes that fix problems and/or enhance security)
  • Bad Unplanned Changes (unexpected, undocumented, and which unintentionally create new problems and/or reduce security)
A malware infection, introduced intentionally by an Inside Man or an external hacker, also falls into the last category of Bad Unplanned Changes, as does a rogue Developer implanting a Backdoor into a corporate application. The fear of a malware infection, be it a virus, Trojan or the new buzzword in malware, an APT, is typically the main concern of the CISO and it helps sell security products, but should it be so?

A Bad Unplanned Change that unintentionally renders the organization more prone to attack is a far more likely occurrence than a malware infection, since every change that is made within the infrastructure has the potential to reduce protection. Developing and implementing a Hardened Build Standard takes time and effort, but undoing painstaking configuration work only takes one clumsy engineer to take a shortcut or enter a typo. Every time a Bad Unplanned Change goes undetected, the once secure infrastructure becomes more vulnerable to attack so that when your organization is hit by a cyber-attack, the damage is going to be much, much worse.

To this end, shouldn’t we be taking Change Management much more seriously and reinforcing our preventative security measures, rather than putting our trust in another gadget which will still be fallible where Zero Day Threats, Spear Phishing and straightforward security incompetence are concerned?

The Change Management Process in 2013 – Closed Loop and Total Change Visibility
The first step is to get a Change Management Process – for a small organization, just a spreadsheet or a procedure to email everyone concerned to let them know a change is going to be made at least gives some visibility and some traceability if problems subsequently arise. Cause and Effect generally applies where changes are made – whatever changed last is usually the cause of the latest problem experienced.
Which is why, once changes are implemented, there should be some checks made that everything was implemented correctly and that the desired improvements have been achieved (which is what makes the difference between a Good Planned Change and a Bad Planned Change).

For simple changes, say a new DLL is deployed to a system, this is easy to describe and straightforward to review and check. For more complicated changes, the verification process is similarly much more complex. Unplanned Changes, Good and Bad, present a far more difficult challenge. What you can’t see, you can’t measure and, by definition, Unplanned Changes are typically performed without any documentation, planning or awareness.

Contemporary Change Management systems utilize File Integrity Monitoring, providing zero tolerance of changes. If a change is made – to a configuration attribute or to the filesystem – it will be recorded.

In advanced FIM systems, the concept of a time window or change template can be defined in advance of a change to provide a means of automatically aligning the details of the RFC (Request for Change) with the actual changes detected. This provides an easy means to observe all changes made during a Planned Change, and greatly improves the speed and ease of the verification process.
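
A minimal sketch of how that reconciliation can work is shown below: detected changes are matched against open RFC windows and the paths each RFC is expected to touch, and anything that does not match is labelled Unplanned. The RFC record format and paths are hypothetical; they simply illustrate the idea of a pre-defined change template.

```python
from datetime import datetime

# Illustrative planned-change records: each covers a time window and the
# paths the RFC is expected to touch.
PLANNED_CHANGES = [
    {
        "rfc": "RFC-1042",
        "start": datetime(2013, 7, 8, 22, 0),
        "end": datetime(2013, 7, 8, 23, 30),
        "paths": ("/usr/lib/app/", "/etc/app.conf"),
    },
]

def classify(change_path, change_time):
    """Label a detected file change as Planned (it falls inside an open RFC
    window and touches an expected path) or Unplanned (investigate first)."""
    for rfc in PLANNED_CHANGES:
        in_window = rfc["start"] <= change_time <= rfc["end"]
        in_scope = change_path.startswith(rfc["paths"])
        if in_window and in_scope:
            return "Planned", rfc["rfc"]
    return "Unplanned", None

print(classify("/etc/app.conf", datetime(2013, 7, 8, 22, 15)))  # ('Planned', 'RFC-1042')
print(classify("/etc/passwd",  datetime(2013, 7, 8, 22, 15)))   # ('Unplanned', None)
```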

This also means that any changes detected outside of any defined Planned Change can immediately be categorized as Unplanned, and therefore potentially damaging, changes. Investigation becomes a priority task, but with a good FIM system, all the changes recorded are clearly presented for review, ideally with ‘Who Made the Change?’ data.

Summary
Change Management is always featured heavily in any security standard, such as the PCI DSS, and in any Best Practice framework such as SANS Top Twenty, ITIL or COBIT.

If Change Management is not yet part of your IT processes, or your existing process is not fit for purpose, maybe this should be addressed as a priority? Coupled with a good Enterprise File Integrity Monitoring system, Change Management becomes a much more straightforward process, and this may just be a better investment right now than any flashy new gadgets?

Monday 1 July 2013

File Integrity Monitoring – Use FIM to Cover All the Bases

Why use FIM in the first place?
Unlike anti-virus and firewalling technology, FIM is not yet seen as a mainstream security requirement. In some respects, FIM is similar to data encryption, in that both are undeniably valuable security safeguards to implement, but both are used sparingly, reserved for niche or specialized security requirements.

How does FIM help with data security?
At a basic level, File Integrity Monitoring will verify that important system files and configuration files have not changed, in other words, the files’ integrity has been maintained.

Why is this important? In the case of system files – program, application or operating system files – these should only change when an update, patch or upgrade is implemented. At other times, the files should never change.

Most security breaches involving theft of data from a system will either use a keylogger to capture data being entered into a PC (the theft then perpetrated via a subsequent impersonated access), or some kind of data transfer conduit program, used to siphon off information from a server. In all cases, there has to be some form of malware implanted onto the system, generally operating as a Trojan i.e. the malware impersonates a legitimate system file so it can be executed and provided with access privileges to system data.

In these instances, a file integrity check will detect the Trojan's existence, and given that zero day threats or targeted APT (advanced persistent threat) attacks will evade anti-virus measures, FIM comes into its own as a must-have security defense measure. To give the necessary peace of mind that a file has remained unchanged, the file attributes governing security and permissions, as well as the file length and cryptographic hash value, must all be tracked.
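
In practice, that means recording more than a hash. A minimal sketch of the kind of per-file record involved is below (Python, Unix-style attributes shown for brevity); two records taken at different times can then be compared field by field.

```python
import hashlib
import os
import stat

def file_record(path):
    """Capture the attributes that together evidence a file's integrity:
    permissions, ownership, size and a cryptographic hash of the contents."""
    info = os.stat(path)
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return {
        "path": path,
        "mode": stat.filemode(info.st_mode),   # e.g. '-rw-r--r--'
        "owner": info.st_uid,
        "group": info.st_gid,
        "size": info.st_size,
        "sha256": digest.hexdigest(),
    }

# Records taken at different times can be compared field by field: a changed
# hash with an unchanged size is just as significant as changed permissions.
```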

Similarly, for configuration files, computer configuration settings that restrict access to the host, or restrict privileges for users of the host must also be maintained. For example, a new user account provisioned for the host and given admin or root privileges is an obvious potential vector for data theft – the account can be used to access host data directly, or to install malware that will provide access to confidential data.

File Integrity Monitoring and Configuration Hardening
Which brings us to the subject of configuration hardening. Hardening a configuration is intended to counteract the wide range of potential threats to a host and there are best practice guides available for all versions of Solaris, Ubuntu, RedHat, Windows and most network devices. Known security vulnerabilities are mitigated by employing a fundamentally secure configuration set-up for the host.

For example, a key basic of securing a host is a strong password policy. For a Solaris, Ubuntu or other Linux host, this is implemented by editing the /etc/login.defs file or similar, whereas a Windows host will require the necessary settings to be defined within the Local or Group Security Policy. In either case, the configuration settings exist as a file that can be analyzed and its integrity verified for consistency (even if, in the Windows case, this 'file' may be a registry value or the output of a command line program).
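
By way of illustration, the sketch below parses /etc/login.defs and compares the password ageing settings against a hardened build standard. The policy values are hypothetical examples, not taken from any particular benchmark; the point is that the settings live in a file that can be both monitored for change and audited for compliance.

```python
import operator

# Hypothetical policy values - a real hardened build standard (such as a CIS
# benchmark) defines these per platform. Each entry: (comparison, required value).
PASSWORD_POLICY = {
    "PASS_MAX_DAYS": (operator.le, 90),   # passwords must expire within 90 days
    "PASS_MIN_LEN":  (operator.ge, 14),   # minimum password length
    "PASS_WARN_AGE": (operator.ge, 7),    # warn users before expiry
}

def audit_login_defs(path="/etc/login.defs"):
    """Compare a Linux host's password settings against the hardened standard."""
    settings = {}
    with open(path) as handle:
        for line in handle:
            parts = line.split()
            if len(parts) >= 2 and not parts[0].startswith("#"):
                settings[parts[0]] = parts[1]
    failures = []
    for key, (compare, required) in PASSWORD_POLICY.items():
        actual = settings.get(key)
        if actual is None or not compare(int(actual), required):
            failures.append((key, actual, required))
    return failures   # an empty list means the policy is compliant

# for name, actual, required in audit_login_defs():
#     print(f"{name}: found {actual}, hardened standard requires {required}")
```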

Therefore file integrity monitoring ensures a server or network device remains secure in two key dimensions: protected from Trojans or other system file changes, and maintained in a securely defended or hardened state.

File integrity assured – but is it the right file to begin with?
But is it enough to just use FIM to ensure system and configuration files remain unchanged? By doing so, there is a guarantee that the system being monitored remains in its original state, but there is a risk of perpetuating a bad configuration – a classic case of ‘junk in, junk out’ computing. In other words, the system may have been built from an impure source in the first place: the recent Citadel keylogger scam is estimated to have netted over $500M in funds stolen from bank accounts where PCs were set up using pirated Windows Operating System DVDs, each one with keylogger malware included free of charge.

In the corporate world, OS images, patches and updates are typically downloaded directly from the manufacturer website, therefore providing a reliable and original source. However, the configuration settings required to fully harden the host will always need to be applied and in this instance, file integrity monitoring technology can provide a further and invaluable function.

The best Enterprise FIM solutions can not only detect changes to configuration files/settings, but also analyze the settings to ensure that best practice in security configuration has been applied.

In this way, all hosts can be guaranteed to be secure and set-up in line with not just industry best practice recommendations for secure operation, but with any individual corporate hardened build-standard.
A hardened build-standard is a pre-requisite for secure operations and is mandated by all formal security standards such as PCI DSS, SOX, HIPAA, and ISO27K.

Conclusion
Even if FIM is being adopted simply to meet the requirements of a compliance audit, there is a wide range of benefits to be gained over and above simply passing the audit.

Protecting host systems from Trojan or malware infection cannot be left solely to anti-virus technology. The AV blind-spot for zero day threats and APT-type attacks leaves too much doubt over system integrity not to utilize FIM for additional defense.

But preventing breaches of security is the first step to take, and hardening a server, PC or network device will fend off all non-insider infiltrations. Using a FIM system with auditing capabilities for best practice secure configuration checklists makes expert-level hardening straightforward.

Don’t just monitor files for integrity – harden them first!

Wednesday 19 June 2013

File Integrity Monitoring - View Security Incidents in Black and White or in Glorious Technicolor?

The PCI DSS and File Integrity Monitoring
The use of FIM, or file integrity monitoring, has long been established as a keystone of information security best practices. Even so, there are still a number of common misunderstandings about why FIM is important and what it can deliver.

Ironically, the key contributor to this confusion is the same security standard that introduces most people to FIM in the first place by mandating the use of it - the PCI DSS.

PCI DSS Requirement 11.5 specifically uses the term 'file integrity monitoring' in relation to the need to "to alert personnel to unauthorized modification of critical system files, configuration files, or content files; and configure the software to perform critical file comparisons at least weekly"

As such, since the term 'file integrity monitoring' is only mentioned in requirement 11.5, one could be forgiven for concluding that this is the only part FIM has to play within the PCI DSS.

In fact, the application of FIM is and should be much more widespread in underpinning a solid secure posture for an IT estate. For example, other key requirements of the PCI data security standard are all best addressed using file integrity monitoring technology such as "Establish firewall and router configuration standards" (Req 1), "Develop configuration standards for all system components" (Req 2), "Develop and maintain secure systems and applications" (Req 6), "Restrict access to cardholder data by business need to know" (Req 7), "Ensure proper user identification and authentication management for nonconsumer users and administrators on all system components" (Req 8), "Regularly test security systems and processes" (Req 11).
Within the confines of Requirement 11.5 only, many interpret this requirement as a simple 'has the file changed since last week?' and, taken in isolation, this would be a legitimate conclusion to reach. However, as highlighted earlier, the PCI DSS is a network of linked and overlapping requirements, and the role for file integrity analysis is much broader, underpinning other requirements for configuration hardening, configuration standards enforcement and change management.

But this isn't just an issue with how merchants read and interpret the PCI DSS. The new wave of SIEM vendors in particular are keen to take this narrow definition as 'secure enough' and for good, if selfish, reasons.

Do everything with SIEM - or is FIM + SIEM the right solution?

PCI requirement 10 is all about logging and the need to generate the necessary security events, backup log files and analyze the details and patterns. In this respect a logging system is going to be an essential component of your PCI DSS toolset.

SIEM or Event log management systems all rely on some kind of agent or polled-WMI method for watching log files. When the log file has new events appended to it, these new events are picked up by the SIEM system, backed up centrally and analyzed for either explicit evidence of security incidents or just unusual activity levels of any kind that may indicate a security incident. This approach has been expanded by many of the SIEM product vendors to provide a basic FIM test on system and configuration files and determine whether any files have changed or not.
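
The log-watching part of that is conceptually simple. A minimal sketch of the 'tail and forward' behaviour of a log agent is shown below; the log path and the central collector call are illustrative only.

```python
import os
import time

def follow(path, poll_interval=2.0):
    """Yield new lines appended to a log file, the way a log-forwarding
    agent picks up events for central analysis."""
    with open(path) as handle:
        handle.seek(0, os.SEEK_END)            # start at the current end of file
        while True:
            line = handle.readline()
            if line:
                yield line.rstrip("\n")
            else:
                time.sleep(poll_interval)      # nothing new yet - wait and poll again

# for event in follow("/var/log/auth.log"):
#     forward_to_siem(event)                   # hypothetical central collector call
```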

A changed system file could reveal that a Trojan or other malware has infiltrated the host system, while a changed configuration file could weaken the host's inherently secure 'hardened' state making it more prone to attack. The PCI DSS requirement 11.5 mentioned earlier does use the word 'unauthorized' so there is a subtle reference to the need to operate a Change Management Process. Unless you can categorize or define certain changes as 'Planned', 'Authorized' or expected in some way, you have no way to label other changes as 'unauthorized' as is required by the standard.

So in one respect, this level of FIM is a good means of protecting your secure infrastructure. However, in practice, in the real-world, 'black and white' file integrity monitoring of this kind is pretty unhelpful and usually ends up giving the Information Security Team a stream of 'noise' - too many spurious and confusing alerts, usually masking the genuine security threats.

Potential security events? Yes.
Useful, categorized and intelligently assessed security events? No.

So if this 'changed/not changed' level of FIM is the black and white view, what is the Technicolor alternative? If we now talk about true Enterprise FIM (to draw a distinction from basic, SIEM-style FIM), this superior level of FIM provides file changes that have been automatically assessed in context - is this a good change or a bad change?

For example, if a Group Policy Security Setting is changed, how do you know if this is increasing or decreasing the policy's protection? Enterprise FIM will not only report the change, but expose the exact details of what the change is, was it a planned or unplanned change, and whether this violates or complies with your adopted Hardened Build Standard.

Better still, Enterprise FIM can give you an immediate snapshot of whether databases, servers, EPoS systems, workstations, routers and firewalls are secure - configured within compliance of your Hardened Build Standard or not. By contrast, a SIEM system is completely blind to how systems are configured unless a change occurs.

Conclusion

The real message is that trying to meet your responsibilities with respect to PCI Compliance requires an inclusive understanding of all PCI requirements. Requirements taken in isolation and too literally may leave you with a 'noisy' PCI solution, helping to mask rather than expose potential security threats. In conclusion, there are no short cuts in security - you will need the right tools for the job. A good SIEM system is essential for addressing Requirement 10, but an Enterprise FIM system will give you so much more than just ticking the box for Req 11.5.

Full color is so much better than black and white.

Wednesday 12 June 2013

File Integrity Monitoring - FIM Could Just Save Your Business

Busted! The Citadel Cybercrime Operation

No guns were used, no doors forced open, and no masks or disguises were used, but up to $500 million has been stolen from businesses and individuals around the world. Reuters reported last week that one of the world's biggest ever cybercrime rings has just been shut down. The Citadel botnet operation, first exposed in August last year, shows that anyone who wants to think big when it comes to cybercrime can make truckloads of money without even leaving home.

It's a familiar story of basic identity theft - PCs used to access on-line bank accounts were infiltrated by keylogging malware known as Citadel. This allowed security credentials to be stolen and then used to steal money from the victims' bank accounts. The malware had been in operation for up to 18 months and had affected up to 5 million PCs.

Like any malware, Citadel cannot be tackled by anti-virus technology until it has been discovered, isolated and understood. So-called 'zero day' malware can operate undetected until such time as an anti-virus definition has been formulated to recognize the malware files and remove them.

This is why file integrity monitoring software is also an essential defense measure against malware. File integrity monitoring or FIM technology works on a 'zero tolerance' basis, reporting any changes to operating system and program filesystems. FIM ensures that nothing changes on your protected systems without being reported for validation; for example, a Windows Update will result in file changes, but provided you are controlling when and how updates get applied, you can then isolate any unexpected or unplanned changes, which could be evidence of a malware infection. Good FIM systems filter out expected, regular file changes and focus attention on those system and configuration files which, under normal circumstances, do not change.
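
One simple way that filtering can be done, sketched below, is to reconcile each detected change against a set of approved file hashes published for a planned update, so that expected patch activity is recognised while anything unknown is flagged for investigation. The hash list and its source are hypothetical.

```python
# Hypothetical set of file hashes approved for a planned update (for example,
# the replacement files a Windows Update or package upgrade is expected to deliver).
APPROVED_HASHES = {
    "<sha256 of the approved file version>",   # placeholder digest
}

def triage(detected_changes):
    """Split detected file changes into 'expected' (hash matches an approved
    release) and 'unexpected' (investigate as possible malware)."""
    expected, unexpected = [], []
    for path, new_hash in detected_changes:
        (expected if new_hash in APPROVED_HASHES else unexpected).append(path)
    return expected, unexpected
```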

A victimless crime? Maybe not if you're a business that has been affected

In a situation like this, banks will usually try and unravel the problem between themselves - bank accounts that have been plundered will have had money moved to another bank account and another bank account and so on, and attempts will be made to recover any misappropriated funds. Inevitably some of the cash will have been spent but there is also a good chance that large sums can be recovered.
Generally speaking, individuals affected by identity theft or credit card fraud will have their funds reimbursed by their bank and the banking system as a whole, so it often feels like a victimless crime has been perpetrated.

Worryingly though, in this case, an American Bankers Association spokesman has been reported as saying that 'banks may require business customers to incur the losses'. It isn't clear as to why the banks may be seeking to place blame on business customers in this case. It is reported that Citadel was present in illegally pirated copies of Windows, so the victims may well be guilty of using counterfeit software, but who is to blame, and how far down the line can the blame be passed? The business customer, their supplier of the pirated software, the wholesaler who supplied the supplier?

Either way, any business user of on-line banking technology (and the consensus of estimates suggests that around half of businesses do at least 50% of their banking on-line, and that this is increasing year on year) should take protecting access to their bank account seriously. It could well be that nobody else is looking out for you.

Conclusion

It may still be the case that 'Crime doesn't pay' but it seems that Cybercrime can pay handsomely. But for cybercrime to work, there needs to be a regular supply of victims and in this case, victims not using any kind of file integrity monitoring are leaving themselves exposed to zero-day malware which is currently invisible to anti-virus systems.

Good security is not just about installing AV software or even operating FIM but should be a layered and integrated approach. Leveraging security technology such as AV, FIM, firewalling, IDS and IPS should be done in conjunction with sound operating procedures to harden and patch systems regularly, verified with a separate auditing and governance function.
The biggest security threat is still complacency.


Wednesday 29 May 2013

File Integrity Monitoring - Is FIM Better Than AV? Is a Gun Better Than a Knife?

Is a gun better than a knife?

I've been trying hard for an analogy, but this one kind of works. Which is better? A gun or a knife?
Both will help defend you against an attacker. A gun may be better than a knife if you are under attack from a big group of attackers running at you, but without ammunition, you are left defenseless. The knife works without ammunition and always provides a consistent deterrent, so in some respects, gives better protection than a gun.

Which is not a bad way to try and introduce the concept of FIM versus Anti-Virus technology. Anti-Virus technology will automatically eliminate malware from a computer, usually before it has done any damage. Both at the point at which malware is introduced to a computer, through email, download or USB, and at the instant at which a malware file is accessed, the AV will scan for known malware. If identified as a known virus, or even if the file exhibits characteristics that are associated with malware, the infected files can be removed from the computer.

However, if the AV system doesn't have a definition for the malware at hand, then like a gun with an empty magazine, it can't do anything to help.

File Integrity Monitoring by contrast may not be quite so 'active' in wiping out known malware, but - like a knife - it never needs ammo to maintain its role as a defense against malware. A FIM system will always report potentially unsafe filesystem activity, albeit with intelligence and rules to ignore certain activities that are always defined safe, regular or normal.

AV and FIM versus the Zero Day Threat

The key point to note from the previous description of AV operation is that the virus must either be 'known', i.e. the virus has been identified and categorized by the AV vendor, or the malware must 'exhibit characteristics associated with malware', i.e. it looks, feels and acts like a virus. Anti-virus technology works on the principle that it has a regularly updated 'signature' or 'definition' list containing details of known malware. Any time a new file is introduced to the computer, the AV system has a look at the file and if it matches anything on its list, the file gets quarantined.
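
Stripped right down, the signature model looks like the sketch below: a file is only quarantined if its fingerprint is already on the definition list. Real AV products use far richer signatures and heuristics than a plain hash, but the dependence on prior knowledge of the threat is the same - which is exactly the zero day gap.

```python
import hashlib

# A tiny, illustrative definition list: hashes of files already known to be malware.
KNOWN_MALWARE_HASHES = {
    "<sha256 of a previously identified sample>",   # placeholder digest
}

def av_scan(path):
    """Signature-based check: only files already on the definition list are caught."""
    with open(path, "rb") as handle:
        digest = hashlib.sha256(handle.read()).hexdigest()
    if digest in KNOWN_MALWARE_HASHES:
        return "quarantine"
    return "allow"   # a brand-new ('zero day') sample falls straight through
```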

In other words, if a brand new, never-been-seen-before virus or Trojan is introduced to your computer, it is far from guaranteed that your AV system will do anything to stop it. Ask yourself - if AV technology was perfect, why would anybody still be concerned about malware?

The lifecycle of malware can be anything from 1 day to 2 years. The malware must first be seen - usually a victim will notice symptoms of the infection and investigate before reporting it to their AV vendor. At that point the AV vendor will work out how to counteract the malware in the future, and update their AV system definitions/signature files with details of this new malware strain. Finally the definition update is made available to the world, individual servers and workstations around the world will update themselves and will thereafter be rendered immune to this virus. Even if this process takes a day to conclude then that is a pretty good turnaround - after just one day the world is safe from the threat.

However, up until this time the malware is a problem. Hence the term 'Zero Day Threat' - the dangerous time is between 'Day Zero' and whichever day the inoculating definition update is provided.

By contrast, a FIM system will detect the unusual filesystem activity - either at the point at which the malware is introduced or when the malware becomes active, creating files or changing server settings to allow it to report back the stolen data.

Where is FIM better than AV?

As outlined previously, FIM needs no signatures or definitions to try and second guess whether a file is malware or not and it is therefore less fallible than AV.

Where FIM provides a distinct advantage over and above AV is in prevention. Anti-Virus systems are based on a reactive model, a 'try and stop the threat once the malware has hit the server' approach to defense.

An Enterprise FIM system will not only keep watch over the core system and program files of the server, watching for malware introductions, but will also audit all the server's built-in defense mechanisms. The process of hardening a server is still the number one means of providing a secure computing environment and prevention, as we all know, is better than cure. Why try and hope your AV software will identify and quarantine threats when you can render your server fundamentally secure via a hardened configuration?
Add to this that Enterprise FIM can be used to harden and protect all components of your IT Estate, including Windows, Linux, Solaris, Oracle, SQL Server, Firewalls, Routers, Workstations, POS systems etc. etc. etc. and you are now looking at an absolutely essential IT Security defense system.

Conclusion

This article was never going to be about whether you should implement FIM or AV protection for your systems. Of course, you need both, plus some good firewalling, IDS and IPS defenses, all wrapped up with solid best practices in change and configuration management, all scrutinized for compliance via comprehensive audit trails and procedural guidelines.

Unfortunately there is no real 'making do' or cutting corners when it comes to IT Security. Trying to compromise on one component or another is a false economy and every single security standard and best practice guide in the world agrees on this.

FIM, AV, auditing and change management should be mandatory components in your security defenses.

Tuesday 7 May 2013

File Integrity Monitoring – Database Security Hardening Basics

The Database – The Mother Lode of Sensitive Data
Being the heart of any corporate application means your database technology must be implemented and configured for maximum security. Whilst the desire to ‘get the database as secure as possible’ appears to be a clear objective, what does ‘secure as possible’ mean?

Whether you use Oracle 10g, Oracle 11g, DB2, Microsoft SQL Server, or even MySQL or PostgreSQL, a contemporary database is at least as complex as any modern server operating system. The database system will comprise a whole range of configuration parameters, each with security implications, including
  • User accounts and password settings
  • Roles and assigned privileges
  • File/object permissions
  • Schema structure
  • Auditing functions
  • Networking capabilities
  • Other security defense settings, for example, use of encryption
Hardened Build Standard for Oracle, SQL Server, DB2 and others
Therefore, just as with any Windows or Linux OS, there is a need to derive a hardened build standard for the database. This security policy or hardened build standard will be derived from collected best practices in security configuration and vulnerability mitigation/remediation, and just as with an operating system, the hardening checklist will comprise hundreds of settings to check and set for the database.
Depending on the scale of your organization, you may then need hardening checklists for Oracle 10g, Oracle 11g, SQL Server, DB2, PostgreSQL and MySQL, and maybe other database systems besides.

Automated Compliance Auditing for Database Systems
Potentially, there will be a requirement to verify that all databases are compliant with your hardened build standard - hundreds of checks across potentially hundreds of database systems - so automation is essential, not least because the hardening checklists are complex and time-consuming to verify. There is also a conflict to manage, in that the user performing the checklist tests will necessarily require administrator privileges to do so. In other words, to verify that the database is secure, you potentially need to loosen security by granting admin rights to the user carrying out the audit. This is a further driver for moving the audit function to a secure, automated tool.
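As a rough illustration of what an automated check might gather, the sketch below queries an Oracle host for its password-profile limits and account statuses. This is a sketch only - the sqlplus connect string, the read-only audit account and the choice of checks are purely illustrative:

    # Pull the password-related profile limits and account statuses for review
    sqlplus -S audit_ro/example_password@ORCL <<'SQL'
    SELECT profile, resource_name, limit
      FROM dba_profiles
     WHERE resource_name LIKE 'PASSWORD%';
    SELECT username, account_status, profile FROM dba_users;
    SQL

An automated tool simply runs checks like these on a schedule, compares the results with the hardened build standard and reports any drift.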

In fact, given that security settings could be changed at any time by any user with privileges to do so, verifying compliance with the hardened build standard should also become a regular task. Whilst a formal compliance audit might be conducted once a year, guaranteeing security 365 days a year requires automated tracking of security settings, providing continuous reassurance that sensitive data is being protected.

Insider Threat and Malware Protection for Oracle and SQL Server Database Systems
Finally, there are also malware and insider threats to consider. A trusted developer will naturally have access to system and application files, as well as the database and its filesystem. Governance of the integrity of configuration and system files is essential in order to identify malware or an insider-generated application 'backdoor'. Part of the answer is to operate tight scrutiny of the change management process for the organization, but automated file integrity monitoring is also essential if disguised Trojans, zero day malware or modified bespoke application files are to be detected.

File Integrity Monitoring – A Universal Solution to Hardening Database Systems
In summary, the most comprehensive measure to securing a database system is to use automated file integrity monitoring. File integrity monitoring or FIM technology serves to analyze configuration files and settings, both for vulnerabilities and for compliance with a security best practices-based hardened-build standard.
The FIM approach is ideal because it provides a snapshot audit capability for any database, delivering a report within a few seconds that shows where security can be improved. This not only automates the process, making a wide-scale estate audit simple, but also de-skills the hardening exercise to an extent. Since the best practice knowledge of which vulnerabilities to look for and which files to inspect is built into the FIM tool and its reports, the user gets an expert assessment of their database security without needing to fully research and interpret hardening checklist materials.

Finally, file integrity monitoring will also identify Trojans and zero-day malware that may have infected the database system, and also any unauthorized application changes that may introduce security weaknesses.
Of course, any good FIM tool will also provide file integrity monitoring functions to Windows, Linux and Unix servers as well as firewalls and other network devices, performing the same malware detection and hardening audit reporting as described for database systems.

For fundamentally secure IT systems, FIM is still the best technology to use.

Wednesday 10 April 2013

FIM for PCI DSS - Card Skimmers Still Doing the Business After All These Years

Card Skimming - Hardware or Software?
Simplest is still best - whether software-based (as in the so-called 'Dexter' or 'VSkimmer' Trojans - Google them for more information) or a classic hardware interception device, card skimming is still a highly effective means of stealing card data.
The hardware approach can be as basic as inserting an in-line card data capture device between the card reader and the EPOS system or Till. This sounds crude but in more advanced cases, the card skimming hardware is cunningly embedded within the card reader itself, often with a cell phone circuit to relay the data to the awaiting fraudster.
Software skimmers are potentially far more powerful. First of all, they can be distributed globally and, unlike their hardware equivalents, are not physically detectable. Secondly, they provide access to both 'card present' transactions (i.e. POS) and 'card not present' transactions, for example by tapping into payments via an eCommerce website.

EMV or Chip and PIN - Effective up to a Point
Where implemented - which, of course, excludes the US at present - EMV technology (supporting 'Chip and PIN' authorizations) has resulted in big reductions in 'cardholder present' fraud. A card skimmer needs not just the card details but also the PIN (Personal Identification Number) to unlock them. Embedded card skimming technology can grab the PIN as it is entered too, hence the emphasis on requiring only approved PIN entry devices with anti-tampering measures built in. Alternatively, the fraudster can simply use a video camera to record the user entering the PIN and write it down!
By definition, the EMV chip security and PIN entry requirement is only effective for face-to-face transactions where a PED (PIN Entry Device) is used. As a consequence, 'card not present' fraud is still increasing rapidly all over the world, proving that card skimming remains a potentially lucrative crime.
In a global market, easily accessible via the internet, software card skimming is a numbers game. It is also one that relies on a constantly renewing stream of card numbers since card fraud detection capabilities improve both at the acquiring banks and card brands themselves.

Card Skimming in 2013 - The Solution is Still Here
Recently reported research in SC Magazine suggests that businesses are subject to cyber attacks every 3 minutes. The source of the research is FireEye, a sandbox technology provider, and they are keen to stress that these malware events are ones that would bypass what they refer to as legacy defences - firewalls, anti-virus and other security gateways. In other words, zero day threats: typically mutated or modified versions of Trojans or other malware, delivered via phishing attacks.
What is frustrating to the PCI Security Standards Council and the card brands (and no doubt software companies like Tripwire, nCircle and NNT!) is that the six-year-old PCI DSS advocates a range of perfectly adequate measures to prevent any of these newly discovered Trojans (and buying a FireEye scanner isn't on the list!). All eCommerce servers and EPOS systems should be hardened and protected using file integrity monitoring. While firewalls and anti-virus are also mandatory, FIM is used to detect the malware missed by these devices which, as the FireEye report shows, is as common as ever. A Trojan like VSkimmer or Dexter will manifest as file system activity and, on a Windows system, will always generate registry changes.
Other means of introducing skimming software are also blocked if the PCI DSS is followed correctly. Card data storing systems should be isolated from the internet where possible, USB ports should be disabled as part of the hardening process, and any network access should be reduced to the bare minimum required for operational activities. Even then, access to systems should be recorded and limited to unique usernames only (not generic root or Administrator accounts).
The PCI DSS may be old in Internet Years, but fundamentally sound and well-managed security best practices have never been as relevant and effective as they are today.

Monday 11 March 2013

Linux Server Hardening


For today's computing platforms, ease of access and openness are essential for web-based communications and for leanly resourced IT Management teams.
This is directly at odds with the increased necessity for comprehensive security measures in a world full of malware, hacking threats and would-be data thieves.
Most organizations will adopt a layered security strategy, providing as many protective measures for their IT infrastructure as are available – firewalls, sandboxes, IPS and IDS, anti-virus – but the most secure computing environments are those with a ‘ground up’ security posture.
If data doesn’t need to be stored on the public-facing Linux web server, then take it off completely – if the data isn’t there, it can’t be compromised.
If a user doesn't need access to certain systems or parts of the network - for example, where your secure Ubuntu server farm sits - then revoke their privileges: attackers need access to systems in order to steal data, so stop them getting anywhere near it in the first place.
Similarly, if your CentOS server doesn't need FTP or Web services then disable or remove them. You reduce the potential vectors for security breaches every time you reduce the means of access.
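For example, on a CentOS host that has no need to serve web pages or FTP (the service and package names below are typical examples and will vary by build):

    service httpd stop            # stop the web server now
    chkconfig httpd off           # and prevent it starting at boot
    service vsftpd stop           # likewise for the FTP daemon
    chkconfig vsftpd off
    yum remove vsftpd             # or remove the package entirely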
To put it simply, you need to harden your Linux servers.

Linux Hardening Policy background
The beauty of Linux is that it is so accessible and freely available that it is easy to get up and running with very little training or knowledge. The web-based support community provides all the tips and tutorials you'll ever need to carry out any Linux set-up task or troubleshoot any issues you may experience.
Finding and interpreting the right hardening checklist for your Linux hosts may still be a challenge so this guide gives you a concise checklist to work from, encompassing the highest priority hardening measures for a typical Linux server.

And, if you want to make life simpler…
NNT Change Tracker Enterprise provides an automated tool for auditing servers, firewalls, routers and other network devices for compliance with a full range of hardened build checklists. Once a hardened build baseline has been established, any drift from compliance with the required build standard will be reported. To enhance security protection further, Change Tracker also provides system-wide, real-time file integrity monitoring to detect any Trojan, backdoor or other malware infiltrating a secure server. Request a trial or demonstration here www.newnettechnologies.com/enterprise-change-and-configuration-management.html

Account Policies
  • Enforce password history – 365 days
  • Maximum Password Age - 42 days
  • Minimum password length – 8 characters
  • Password Complexity - Enable
  • Account Lockout Duration - 30 minutes
  • Account Lockout Threshold – 5 attempts
  • Reset Account Lockout Counter - 30 minutes
Edit the /etc/pam.d/common-password file to define password policy parameters for your host.
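To show roughly how these settings map onto a Linux host, the excerpt below is a sketch only - module names, option values and file locations vary by distribution (for example pam_cracklib versus pam_pwquality, and common-password on Debian/Ubuntu versus system-auth on Red Hat):

    # /etc/pam.d/common-password - complexity, minimum length and password history
    password required pam_cracklib.so retry=3 minlen=8 ucredit=-1 lcredit=-1 dcredit=-1 ocredit=-1
    password required pam_unix.so use_authtok sha512 remember=24

    # /etc/pam.d/common-auth - lock the account after 5 failures, for 30 minutes
    auth required pam_tally2.so deny=5 unlock_time=1800

    # /etc/login.defs - password ageing
    PASS_MAX_DAYS 42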

Access Security
  • Ensure SSH version 2 is in use
  • Disable remote root logons
  • Enable AllowGroups to permitted Group names only
  • Allow access to valid devices only
  • Restrict the number of concurrent root sessions to 1 or 2 only
Edit /etc/ssh/sshd_config to define SSH daemon policy parameters for your host, and /etc/hosts.allow and /etc/hosts.deny to control access. Use /etc/securetty to restrict root access to tty1, or tty1 and tty2, only.
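An illustrative set of directives follows - the group name and permitted subnet are examples only and should be adjusted to suit your environment:

    # /etc/ssh/sshd_config
    Protocol 2
    PermitRootLogin no
    AllowGroups sshadmins

    # /etc/hosts.allow - only the management subnet may reach sshd
    sshd: 10.1.2.0/255.255.255.0

    # /etc/hosts.deny - deny everything not explicitly allowed
    ALL: ALL

    # /etc/securetty - restrict direct root logon to the first console(s) only
    tty1
    tty2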

Secure Boot Only
Remove the options to boot from CD or USB devices, and password protect the BIOS to prevent the boot options from being edited.
Password protect the /boot/grub/menu.lst file, then remove the rescue-mode boot entry.
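For a host using GRUB legacy (the menu.lst style referred to above), a password entry can be generated and added as sketched below; GRUB 2 systems use grub-mkpasswd-pbkdf2 and the /etc/grub.d scripts instead:

    grub-md5-crypt                     # prompts for a password and prints an MD5 hash
    # then add a line near the top of /boot/grub/menu.lst:
    #   password --md5 <hash-output-from-grub-md5-crypt>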

Disable All Unnecessary Processes, Services and Daemons
Each system is unique so it is important to review which processes and services are unnecessary for your server to run your applications.
Assess your server by running the ps -ax command to see what is currently running.
Similarly, assess the startup status of all processes by running chkconfig --list.
Disable any unnecessary services using sysv-rc-conf <service-name> off.

Restrict Permissions on Sensitive Files and Folders to root Only
Ensure the following sensitive programs and files are executable by root only:
  • /etc/fstab
  • /etc/passwd
  • /bin/ping
  • /usr/bin/who
  • /usr/bin/w
  • /usr/bin/locate
  • /usr/bin/whereis
  • /sbin/ifconfig
  • /bin/nano
  • /usr/bin/vi
  • /usr/bin/which
  • /usr/bin/gcc
  • /usr/bin/make
  • /usr/bin/apt-get
  • /usr/bin/aptitude
Ensure the following folders are accessible by root only (a quick way to review their current ownership and permissions is sketched after this list):
  • /etc
  • /usr/etc
  • /bin
  • /usr/bin
  • /sbin
  • /usr/sbin
  • /tmp
  • /var/tmp
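Before tightening anything, it is worth recording how the items above are currently set; the paths below are taken from the lists above and the stat format shows owner, group, mode and name:

    for f in /etc/fstab /etc/passwd /bin/ping /usr/bin/who /usr/bin/gcc /etc /tmp; do
        stat -c '%U %G %a %n' "$f"
    done
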
Disable SUID and SGID Binaries
Identify SUID and SGID files on the system: find / \( -perm -4000 -o -perm -2000 \) -print
Render these files safe by removing the SUID or SGID bits using chmod -s <filename>.
You should also restrict access to all compilers on the system by adding them to a new ‘compilers’ group.
  • chgrp compilers *cc*
  • chgrp compilers *++*
  • chgrp compilers ld
  • chgrp compilers as
Once added to the group, restrict permissions using chmod 750 on each compiler binary, as in the example below.
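Putting the above together for a single compiler binary (the group name and path are examples - repeat for each compiler found on the host):

    groupadd compilers                 # create the restricted group
    chgrp compilers /usr/bin/gcc       # assign the compiler binary to it
    chmod 750 /usr/bin/gcc             # root and group members only; no access for other users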

Implement Regular/Real-Time FIM on Sensitive Folders and Files
File integrity should be monitored for all sensitive files and folders to ensure that permissions and contents do not change without approval.
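On Linux, real-time change detection can be illustrated with the inotify facility. The sketch below uses inotifywait from the inotify-tools package (the directory list is an example) and simply prints events as they happen; an enterprise FIM product adds hashing, baselines, 'who made the change' data and reporting on top:

    inotifywait -m -r -e modify,create,delete,attrib /etc /bin /sbin /usr/bin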

Configure Auditing on the Linux Server
Ensure key security events are being audited and are forwarded to your syslog or SIEM server. Edit the syslog.conf file accordingly.
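Two illustrative fragments follow (the server name is an example): a syslog forwarding rule, and a file-watch rule for the Linux audit daemon where it is installed:

    # /etc/syslog.conf (or /etc/rsyslog.conf) - forward authentication events to the SIEM
    authpriv.*    @siem.example.com

    # auditd - watch /etc/passwd for writes and attribute changes
    auditctl -w /etc/passwd -p wa -k passwd_changes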

General Hardening of Kernel Variables
Edit the /etc/sysctl.conf file to set all kernel variables to secure values in order to help prevent spoofing, SYN flood and DoS attacks.
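Typical entries in /etc/sysctl.conf (a representative selection rather than a complete policy), applied with sysctl -p:

    net.ipv4.conf.all.rp_filter = 1                 # reverse-path filtering against spoofed packets
    net.ipv4.conf.all.accept_source_route = 0       # ignore source-routed packets
    net.ipv4.conf.all.accept_redirects = 0          # ignore ICMP redirects
    net.ipv4.icmp_echo_ignore_broadcasts = 1        # do not answer broadcast pings
    net.ipv4.tcp_syncookies = 1                     # SYN flood protection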

Tuesday 19 February 2013

Agentless FIM – Why File Integrity Monitoring Without Agents Is The Same, and Better, and Worse than using Agents

Introduction
Agent versus Agentless is a perennial debate for any monitoring requirement and is something that has been written about previously.
The summary of the previous assessment was that agent-based FIM is usually better due to the real-time detection of changes, negating the need for repeated full baseline operations, and due to the agent providing file hashing, even though there is an additional management overhead for the installation and maintenance of agent software.
But what about Agentless systems that purport to provide hashing - seemingly able to cover all requirements and deliver the functionality of an agent-based FIM solution, but still without using an agent?

What Is So Scary About Agents Anyway?
The problem with all agents is one of maintenance. First, the agent itself needs to be deployed and installed on the endpoint. Usually this will also require other components, such as Java or Mono, to be enabled at the endpoint, and these have maintenance requirements of their own.
WSUS/Windows Update Services and Ubuntu's Update Manager make it much easier to maintain packaged programs, but it is accepted that introducing more components to any system will only ever increase the range of 'things that can go wrong'.
So we’ll make that 1-0 to Agentless for ease of implementation and maintenance, even though both functions can be automated to a greater or lesser degree – good FIM solutions will automatically update their agent components if new versions are released.

System Resources – Which Option Is More Efficient?
No agent means the agentless system must operate on a polled basis, and operating on a polled basis means the monitoring system is blind to any security events or configuration changes that occur until the next poll. This could mean that security threats go undetected for hours, and in the case of rootkit malware, irreparable damage could have been done before anybody knows that there is a problem.
Poll intervals can be reduced, but the nature of an agentless system is that every attribute of every object or file being monitored must be gathered on every poll because, unlike with an agent-based FIM solution, there is no means of tracking and recording changes as they happen.
The consequence of this is that agentless polls are as heavy in terms of system resources as the initial baselining operation of an agent-based system. Every single file and attribute must be recorded for every poll, regardless of whether changes have occurred or not.
Worse still, all the data collected must be dragged across the network to be analyzed centrally, and again, this load is repeated for every single poll. This also makes agentless scans slow to operate.
By contrast, an agent-based FIM solution will work through the full baseline process once only, and then use its vantage point on the endpoint host to record changes to the baseline in real-time as they occur. Being host-based also gives the agent access to the OS as changes are made, thereby enabling capture of ‘Who made the Change’ data too.
The agent gives a much more host- and network-efficient solution, operating on a changes-only basis. If there are no changes to record, no host resources are used and no network capacity either. The agentless poll, by contrast, will always use a full baseline's worth of resources for every scheduled scan. This also makes running a report significantly slower than using an agent that already holds up-to-date baselines of the information needed for the report.
This easily levels the scores up at 1-1.

Security Considerations of Agentless versus Agent-Based FIM Solutions
Finally, there is a further consideration for the agentless solution that doesn't apply to the agent-based FIM option. Because the agentless solution must log in and execute commands on the server to gather baseline information, the agentless solution's server needs an account with network access to the host. The account provisioned will need sufficiently high privileges to access the folders and files that need to be tracked and, by definition, these are typically the most sensitive objects on the server in terms of security governance. Private keys can help restrict access to a degree, but an agentless solution will always carry an additional inherent security risk over and above that posed by agent-based technology.
I would call that a clear 2-1 to the Agent, being more efficient, faster and more effective in reporting threats in real-time.

File Hashing – What is the Advantage?
The classic approach to file integrity monitoring is to record all the file attributes for a file, then perform a comparison of the same data to see if any have changed.
For more detail on how the file's make-up or contents have changed - mainly relevant to Linux/Unix text-based configuration files or web application configuration files - the contents may be compared side-by-side to show the changes.
Using a file hash (more accurately a cryptographic file hash) is an elegant and very neat way of summarizing a file’s composition in a single, simple, unique code. This provides several key benefits -
  • Regardless of the size and complexity (text or binary) of the file being tracked, a fixed length but unique code can be created for any file – comparing hash values for files is a simple but highly sensitive way to check whether there have been any changes or not
  • The hash is unique for each file and, due to the algorithms used to generate cryptographic hashes, even tiny changes result in significant variations in the hash values returned, making changes obvious
  • The hash is portable so the same file held on different servers will return the same identical hash value, providing a forensic-level ‘DNA Fingerprint’ for the file and version
Therefore cryptographic hashing is an important dimension to file integrity monitoring, however, the standard Windows OS programs and components do not offer a readily useable mechanism for delivering this function.
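On a Linux or Unix host the property is easy to demonstrate with the standard sha256sum utility (the file path and contents below are throwaway examples) - changing a single character produces a completely different hash:

    echo "sample configuration line" > /tmp/demo.cfg
    sha256sum /tmp/demo.cfg
    echo "sample configuration line!" > /tmp/demo.cfg
    sha256sum /tmp/demo.cfg     # an entirely different value, despite a one-character change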
So a further big advantage of using an agent-based FIM solution is that cryptographic hashing can be provided on all platforms, unlike with a pure agentless solution.
3-1 to the Agent and it looks like it is going to be hard for the agentless solution to get back in the game!

When is Agentless FIM Really the Same as an Agent-Based FIM Solution?
Most vendors, Tripwire included, provide a clear-cut agent-based and an agentless option, with the pros and cons of each understood.
The third option is where the best of the agentless and agent-based approaches come together to cover all capabilities. This kind of solution is positioned as agentless and yet delivers the agent-based features. It behaves like an agentless solution, in as much as it functions on a scheduled full-scan basis, logging in to devices to track file characteristics; however, because there is no need to pre-install an agent to run FIM, the solution feels agentless.
In practice the solution requires an Administrator logon to the servers being scanned. The system logs on and executes a whole sequence of scripted command-line checks to verify file integrity, but will also pipe across a program to help perform file hashing. This program - some would say agent - is then deleted after the scan.
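The rough shape of each scan cycle looks something like the sketch below, with purely illustrative host names and a hypothetical helper program standing in for the piped-across hashing component:

    scp fim-helper root@server01:/tmp/                                   # push the temporary 'agent'
    ssh root@server01 '/tmp/fim-helper --scan /etc /bin > /tmp/fim.out; rm -f /tmp/fim-helper'
    scp root@server01:/tmp/fim.out ./server01-fim.out                    # drag the results back centrally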
So is this solution agentless? No, although it does remove the major hassle with an agent-based solution in that it automates the initial deployment of the agent.
What are the other benefits of this approach? None, really. It is less secure than an installed agent: providing an Admin logon that can be used across the whole network arguably weakens security before you even start.
It is massively less efficient than a local agent. Piping programs across the network, executing a batch of scripts and then dragging all the results back across the network is hugely inefficient compared to an agent that runs locally, performs its baselines and comparisons locally, and only sends results back when it needs to.
It is also fundamentally not a very effective way to keep your estate secure - which rather misses the point of doing it in the first place. You only discover that security has been weakened, or actually compromised, when you next run a scan - always too late. An agent-based FIM solution will detect config drift and FIM changes in real-time: you know you have a potential risk within seconds of it arising, complete with details of who made the change.

Summary
So, in summary, agentless FIM is less efficient, less secure and less able to keep your estate secure (and the most effective agentless solutions still use a temporary agent anyway). The ease of deployment of agentless is tempting, but deployment can always be automated using any one of a number of software distribution solutions. Perhaps the best approach is still to retain both options and choose the right one on balance: firewall appliances will always need to be handled using scripted, agentless interrogation, while Windows servers can only truly be audited for vulnerabilities using a cryptographic-hashing, real-time change detection agent.

Wednesday 23 January 2013

Windows Server 2008 Hardening

Prevention of security breaches is always seen as the best approach to protecting key data assets. Hardening a server in line with acknowledged best practices in secure configuration is still the most effective means of protecting your Server data.

Deriving the right checklist for your Server 2008 estate requires an iterative process, starting with an ‘off the shelf’ hardening checklist and comparing this to your current hardened build standard for Server 2008.
There will be discrepancies; for example, a decision will need to be made on settings such as password length. Some checklist authorities recommend an 8 character minimum, others 12 characters, and in fact Server 2008 R2 allows up to 24 characters to be used. Password length and complexity significantly strengthen security – a typical brute force attack on a system with a 7 character password will take on average just 4 days to crack, whereas an 8 character password with complexity, i.e. including symbols, will take on average 23 years.

Server Hardening Policy background

Any server deployed in its default state will naturally be lacking in even basic security defenses, leaving it vulnerable to compromise. A standard framework for your server security policy should include the following attributes, defining the password and local user account settings and the Windows Audit and Security policies. This sample Server 2008 hardening checklist will help to make your server more secure, but please also see the sample Server 2008 services hardening checklist and FIM policy.

And, if you want to make life simpler…

NNT Change Tracker Enterprise provides an automated tool for auditing servers, firewalls, routers and other network devices for compliance with a full range of hardened build checklists. Once a hardened build baseline has been established, any drift from compliance with the required build standard will be reported. To enhance security protection further, Change Tracker also provides system-wide, real-time file integrity monitoring to detect any Trojan, backdoor or other malware infiltrating a secure server. Request a trial or demonstration here
Account Policies
  • Enforce password history – 24
  • Maximum Password Age - 42 days
  • Minimum Password Age –2 days
  • Minimum password length - 8 characters
  • Password Complexity - Enable
  • Store Password using Reversible Encryption for all Users in the Domain - Disable
  • Account Lockout Duration - 30 minutes
  • Account Lockout Threshold – 5 attempts
  • Reset Account Lockout Counter - 30 minutes
  • Enforce User Logon Restrictions - Enable
  • Maximum Lifetime for Service Ticket - 600 minutes
  • Maximum Lifetime for User Ticket - 8 hours
  • Maximum Lifetime for User Ticket Renewal - 7 days
  • Maximum Tolerance for Computer Clock Synchronization - 5 minutes
Windows Audit Policy and Advanced Security Audit Policy (Group Policy)
All Event Log files must be set to 2048KB and configured to overwrite events as needed.
  • Audit account logon event - Success, Failure
  • Audit account management - Success, Failure
  • Audit directory service access - Failure
  • Audit logon events - Success, Failure
  • Audit object access - Success, Failure
  • Audit policy change - Success, Failure
  • Audit process tracking - Not configured
  • Audit privilege use - Success, Failure
  • Audit system events - Success, Failure
  • Audit Authentication Policy Change - Success
  • System: System Integrity - Success, Failure
  • Security System Extension - Success, Failure
  • Security State Change - Success, Failure
  • Logoff - Success, Failure
  • Logon - Success, Failure
  • Special Logon - Success, Failure
  • File System - Success, Failure
  • Registry - Success, Failure
  • Sensitive Privilege Use - Success, Failure
Windows Local Security Policy / Group Policy - User Rights Assignment Settings
  • Access Credential Manager as a Trusted Caller - <no one>
  • Allow Access to this Computer from the Network - (Restrict the Access this computer from the network user right to only those users and groups who require access to the computer) Example: Administrators, Domain Administrators
  • Act as Part of the Operating System - <no one>
  • Add Workstations to Domain – Administrators
  • Adjust Memory Quotas for a Process - Administrators, Local Service and Network Service only
  • Allow Log on Locally - Administrators
  • Allow log on through Remote Desktop Services/Terminal Services - Remote Desktop Users, Administrators
  • Back up Files and Directories - Administrators
  • Bypass Traverse Checking - Restrict the Bypass traverse checking user right to only those users and groups who require access to the computer – for example, Users, network service, local service, Administrators
Windows Local Security Policy / Group Policy - User Rights Assignment Settings
- Contd.
  • Change the System Time – Administrators, Domain Administrators
  • Change the Time Zone - Users
  • Create a Page File - Administrators
  • Create a Token Object - <no one>
  • Create Global Objects - Administrators
  • Create Permanent Shared Objects - <no one>
  • Create Symbolic Links - Administrators
  • Debug Programs - <no one>
  • Deny Access to this Computer from the Network – ANONYMOUS LOGON, Built-in local Administrator account, Local Guest account, All service accounts
  • Deny Log on as a Batch Job - <no one>
  • Deny Log on as a Service - <no one>
  • Deny Log on Locally - ASPNET account on computers that are configured with the Web Server role
  • Deny log on through Terminal Services/RDP – Local Guest account, All service accounts
  • Enable Computer and User Accounts to be Trusted for Delegation - <no one>
  • Force Shutdown from a Remote System – Administrators
  • Take Ownership of Files or other Objects - Administrators
  • Generate Security Audits - Local Service and Network Service only
  • Impersonate a Client after Authentication – Administrators, Local Service and Network Service only
  • Increase a Process Working Set - <no one>
  • Increase Scheduling Priority - <no one>
  • Load and Unload Device Drivers – Administrators
  • Lock Pages in Memory - <no one>
  • Manage Auditing and Security Log – Local Administrator only
  • Modify an Object Label - <no one>
  • Modify Firmware Environment Values - Local Administrator only
  • Perform Volume Maintenance Tasks - Local Administrator only
  • Profile Single Process - Local Administrator only
  • Profile System Performance - Local Administrator only
  • Replace a Process Level Token - Local Service and Network Service only
  • Restore Files and Directories - Local Administrator only
  • Shut Down the System – Administrators
  • Synchronize Directory Service Data - <no one>
Windows Local Security Policy / Group Policy - Security Options
  • Administrator Account Status - Disabled
  • Guest Account Status - Disabled
  • Limit Local Account Use of Blank Passwords to Console Logon Only - Enabled
  • Rename Administrator Account – Must be set to something other than Administrator
  • Rename Guest Account - Must be set to something other than Guest
  • Audit the Access of Global System Objects - Disabled
  • Audit the use of Backup and Restore Privilege - Enabled
  • Force Audit Policy Subcategory Settings to Override Audit Policy Category Settings – Enabled
  • Shut Down System Immediately if Unable to Log Security Audits - Enabled
  • Prevent Users from Installing Printer Drivers when connecting to Shared Printers – Enabled
  • Machine Access Restrictions in Security Descriptor Definition Language (SDDL) – Bespoke for each environment
  • Machine Launch Restrictions in Security Descriptor Definition Language (SDDL) – Bespoke for each environment
  • Allowed to Format and Eject Removable Media – Administrators
  • Prevent Users from Installing Printer Drivers – Enabled
  • Allow Server Operators to Schedule Tasks - Disabled
  • Digitally Encrypt or Sign Secure Channel Data (Always) - Enabled
  • Digitally Encrypt or Sign Secure Channel Data (when possible) - Enabled
  • Disable Machine Account Password Changes - Disabled
  • Maximum Machine Account Password Age - 30 days
  • Require Strong (Windows 2000 or later) Session Key – Enabled
  • Interactive Logon: Display User Information when the Session is Locked - Enabled
  • Interactive logon: Do Not Display Last User Name - Enabled
  • Interactive logon: Do Not Require CTRL+ALT+DEL - Disabled
  • Interactive logon: Message Text for Users Attempting to Log On – For example, ‘By using this computer system you are subject to the 'Computer Systems Policy' of New Net Technologies. The policy is available on the NNT Intranet and should be checked regularly for any updates’
  • Interactive logon: Message Title for Users Attempting to Log On - For example, ‘Warning – Authorized Users Only – Disconnect now if you are not authorized to use this system’
  • Number of Previous Logons to Cache (in case domain controller is not available) – 0
  • Interactive Logon: Prompt User to Change Password before Expiration – 14 days
  • Interactive Logon: Require Domain Controller Authentication to Unlock Workstation - Enabled
  • Microsoft Network Client: Digitally Sign Communications (always) – Enabled
  • Microsoft Network Server: Digitally Sign Communications (always) - Enabled
  • Microsoft Network Client: Digitally Sign Communications (if server agrees) - Enabled
  • Microsoft Network Server: Digitally Sign Communications (if client agrees) – Enabled
Windows Local Security Policy / Group Policy - Security Options – Contd.
  • Microsoft network client: Send Unencrypted Password to Connect to Third-party SMB servers - Disabled
  • Microsoft Network Server: Amount of Idle Time required before Suspending a Session - 15 minutes
  • Microsoft Network Server: Disconnect clients when Logon Hours Expire – Enabled
  • Microsoft Network Server: Server SPN Target Name Validation Level – Accept if Provided by Client or Required from Client
  • Microsoft Network Server: Digitally Sign Communications (always) – Enabled
  • Network Access: Allow anonymous SID/name translation – Disabled
  • Network Access: Do not allow anonymous enumeration of SAM accounts – Enabled
  • Network Access: Do not allow storage of passwords or credentials for network authentication – Enabled
  • Network Access: Let Everyone Permissions Apply to Anonymous Users - Disable
  • Network Access: Named Pipes that can be Accessed Anonymously – Set to Null, review system functionality
  • Network Access: Remotely Accessible Registry Paths and Sub-paths - Set to Null, review system functionality
  • Network Access: Shares that can be Accessed Anonymously - <no one>
  • Network Access: Sharing and Security Model for Local Accounts – For network servers, ‘Classic – local users authenticate as themselves’; on end-user computers, ‘Guest only – local users authenticate as guest’
  • Network Security: Allow Local System NULL session fallback – Disabled
  • Network Security: Allow Local System to use computer identity for NTLM – Enabled
  • Network Security: Allow PKU2U authentication requests to this computer to use online identities - Disabled
  • Network Security: Do not store LAN Manager Hash value on Next password Change – Enabled
  • Network Security: Force Logoff when Logon Hours Expire - Enabled
  • Network Security: LAN Manager authentication level - Send NTLMv2 response only\refuse LM & NTLM
  • Network Security: LDAP Client Signing Requirements - Negotiate Signing
  • Network security: Minimum session security for NTLM SSP based (including secure RPC) clients - Require NTLMv2 session security
  • Network security: Minimum session security for NTLM SSP based (including secure RPC) servers - Require NTLMv2 session security
  • Domain controller: LDAP server signing requirements - Require signing
  • Domain controller: Refuse machine account password changes - Disabled
Windows Local Security Policy / Group Policy - Security Options – Contd.
  • MSS: (DisableIPSourceRouting) IP source routing protection level (protects against packet spoofing) - Highest protection, source routing is completely disabled
  • MSS: (EnableICMPRedirect) Allow ICMP redirects to override OSPF generated routes – Disabled
  • MSS: (ScreenSaverGracePeriod) The time in seconds before the screen saver grace period expires (0 recommended) – 0
  • System Objects: Require case insensitivity for non-Windows subsystems – Enabled
  • System Cryptography: Force strong key protection for user keys stored on the computer - User must enter a password each time they use a key
  • System Cryptography: Use FIPS compliant algorithms for encryption, hashing, and signing - Enable
  • System objects: Default owner for objects created by members of the Administrators group - Object Creator
  • System objects: Strengthen default permissions of internal system objects (e.g., Symbolic Links) - Enable
  • System settings: Optional subsystems – Null value
  • Recovery Console: Allow automatic administrative logon – Disabled
  • Recovery Console: Allow floppy copy and access to all drives and all folders - Disabled
  • Domain Controllers Policy- if present in scope - Domain controller: Allow server operators to schedule tasks – Disabled
  • System settings: Use Certificate Rules on Windows Executables for Software Restriction Policies - Enable
  • User Account Control: Admin Approval Mode for the Built-in Administrator account – Enable
  • User Account Control: Allow UIAccess applications to prompt for elevation without using the secure desktop - Disable
  • User Account Control: Behavior of the elevation prompt for administrators in Admin Approval Mode - Prompt for consent
  • User Account Control: Behavior of the elevation prompt for standard users - Prompt for credentials
  • User Account Control: Detect application installations and prompt for elevation – Enable
  • User Account Control: Only elevate executables that are signed and validated – Enable
  • User Account Control: Only elevate UIAccess applications that are installed in secure locations – Enable
  • User Account Control: Run all administrators in Admin Approval Mode – Enable
  • User Account Control: Switch to the secure desktop when prompting for elevation – Enable
  • User Account Control: Virtualize file and registry write failures to per-user locations – Enable