Wednesday, 7 September 2011

PCI Compliance Server Hardening doesn’t have to be Hard

Harden Server Configuration to remove Vulnerabilities

"PCI DSS Version 2.0 Requirement 2: Do not use vendor-supplied defaults for system passwords and other security parameters"

From the moment a server is powered up, it becomes vulnerable to attack. Since leaving your key application servers switched off is not an option, it will be necessary to implement the security measures advocated by the PCI DSS.

PCI Requirement 2 calls for configuration hardening of servers, EPoS PCs and network devices. The headline measures are the removal of default usernames and passwords and the disabling of any unnecessary services. Beyond these initial steps, however, there is a vast number of additional configuration changes recommended by 'best practice' authorities such as the SANS Institute, the Center for Internet Security (CIS) and NIST, all of which help to mitigate security threats. If you haven't already adopted a hardened configuration standard, any of these organizations can assist, although a good configuration auditing and config change tracking system will typically come pre-packaged with a server hardening checklist you can adopt. This type of system automates not just the initial hardening assessment but repeats it on a continuous basis, alerting you whenever configuration drift occurs.
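The checklist-audit idea can be reduced to a simple pattern: compare each configuration setting against a compliance rule and report the failures. The following is a minimal sketch, assuming a hypothetical checklist; the settings and thresholds are illustrative, not an official CIS, SANS or NIST benchmark.

```python
# Illustrative hardening checklist: setting name -> compliance test.
# These rules are examples only, not an official benchmark.
HARDENING_CHECKLIST = {
    "min_password_length": lambda value: value >= 12,       # enforce strong passwords
    "telnet_enabled": lambda value: value is False,         # insecure service must be off
    "guest_account_enabled": lambda value: value is False,  # disable built-in guest account
}

def audit(config):
    """Return the checklist items a device's configuration fails.

    A missing setting counts as a failure, since it cannot be verified.
    """
    return [setting for setting, compliant in HARDENING_CHECKLIST.items()
            if setting not in config or not compliant(config[setting])]

# Example: a freshly built server that has not yet been hardened.
server = {"min_password_length": 8, "telnet_enabled": True,
          "guest_account_enabled": False}
print(audit(server))  # -> ['min_password_length', 'telnet_enabled']
```

Running the same audit on a schedule, and diffing the results against the previous run, is essentially what a drift-detection product automates.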

As with most elements of the PCI DSS Requirements, there are a number of checks and balances that provide evidence that adequate hardening measures have been applied. In common with the overall ethos of the PCI DSS, there is a deliberate degree of overlap to ensure comprehensive coverage.

Similarly, event log management and file integrity monitoring measures provide additional, continuous checks that security measures have not been changed or compromised.

Active Testing of PCI DSS Security Measures - Pen Testing and Vulnerability Scanning
PCI Requirement 11 covers Penetration Testing and Vulnerability Scanning - we'll discuss these in turn.

Pen Testing / Penetration Testing
Any internet-facing devices are exposed to somewhere in excess of 2 billion potential attackers (source: ITU website - 'Key Facts'), and while firewalls and intrusion detection technologies help to let good users in and keep bad traffic out, the fact remains that an 'open' website is always going to be vulnerable to attack. Penetration testing takes the form of an active assessment of whether the internet-facing devices and servers can be compromised. Typically a 'blended' approach is used, combining automated, scripted scans and tests for common hacking vectors with manually orchestrated hacking techniques.

Vulnerability Scanning / ASV Scans
Alongside penetration testing, PCI DSS Requirement 11.2 mandates quarterly vulnerability scans. External scans must be performed by an ASV (Approved Scanning Vendor - a PCI Security Standards Council term for any organization or individual validated to perform scans), while internal, network-connected devices are covered by internal vulnerability scans, which the merchant may run in-house.

This is typically a more intensive active assessment than a Pen Test, examining operating system and application patch levels. The vulnerability scan can be either fully automated using an on-site appliance or take a blended, part-automated, part-manually orchestrated approach.
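The very first step of any scan, automated or manual, is discovering which services are reachable at all. The toy probe below illustrates only that first step, assuming plain TCP; a real vulnerability scan goes much further, fingerprinting service versions and testing them against known vulnerabilities.

```python
import socket

def port_open(host, port, timeout=1.0):
    """Toy reachability probe: True if a TCP connection to host:port succeeds.

    This illustrates only service discovery, the opening move of a scan;
    it says nothing about whether the service behind the port is patched.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("127.0.0.1", 443) tells you whether anything answers on 443.
```

Only ever point probes like this at systems you are authorized to test.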

PCI DSS Hardening Methodologies - How do I harden my Server?
For Windows servers, numerous security features and best-practice measures are implemented via the server's Local Security Policy. Group Policy is a convenient way to update the security policy of multiple devices in bulk, but one common way to enhance security is to isolate servers from the domain so that access permissions can be precisely 'hand-picked', in which case the Local Security Policy will need to be configured directly.

In addition, unnecessary services should be disabled, built-in accounts should be renamed and passwords changed from defaults, drive and folder permissions should be restricted - the list of actual and potential vulnerabilities is extensive and always growing.
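The "change passwords from defaults" item in that list is easy to check mechanically: compare each account's stored password hash against the hashes of known vendor defaults. A minimal sketch, assuming a hypothetical defaults list and SHA-256 hashed passwords (real systems use salted, purpose-built password hashes):

```python
import hashlib

# Illustrative vendor-default credentials; a real check would use a
# vendor-specific database, and production systems store salted hashes.
DEFAULT_CREDENTIALS = {"admin": "admin", "root": "toor", "guest": ""}

def still_on_defaults(accounts):
    """accounts maps username -> SHA-256 hex digest of its current password.

    Returns the usernames whose password still matches a vendor default -
    exactly the condition PCI Requirement 2 says must not exist.
    """
    flagged = []
    for user, default_pwd in DEFAULT_CREDENTIALS.items():
        default_hash = hashlib.sha256(default_pwd.encode()).hexdigest()
        if accounts.get(user) == default_hash:
            flagged.append(user)
    return flagged
```

For example, an estate where 'admin' was never changed but 'root' was would be reported as `['admin']`.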

It should be mentioned that there is a whole other area of vulnerability management relating to patches and application updates. Whilst these carry the potential to be just as harmful as configuration-based vulnerabilities, patch or application-based vulnerabilities are inherently easier to manage, given that Windows Update and most major applications have automated, self-updating capabilities. Furthermore, operating system and application-based vulnerabilities can be remediated - that is to say, eliminated permanently - whereas configuration-based vulnerabilities can only ever be mitigated: a configuration-based vulnerability can be re-introduced at any time just as easily as it was removed in the first place.

In conclusion, this is yet another area of the PCI DSS that is ideally handled using intelligent, automated technology. The best products available combine comprehensive 'best practice' security policy and hardening checklists with continuous vulnerability assessments. Any configuration drift is identified immediately and alerted, while summary reports can be produced to give an 'at a glance' reassurance that nothing has changed.
A continuous configuration change tracking technology also provides a natural vantage point from which to implement file integrity monitoring, ensuring system and application files do not change and that malware cannot be introduced onto the server without detection. Likewise, SIM or SIEM (Security Information and Event Management), or plain old event log management technology, provides a full audit trail of security events for in-scope devices.
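At its core, file integrity monitoring is a baseline-and-compare operation: record a cryptographic digest of each monitored file, then periodically re-hash and diff. A minimal sketch (real FIM products also track permissions, ownership and registry values, and run continuously):

```python
import hashlib

def snapshot(paths):
    """Record a SHA-256 digest for each file: the integrity baseline."""
    digests = {}
    for path in paths:
        with open(path, "rb") as f:
            digests[path] = hashlib.sha256(f.read()).hexdigest()
    return digests

def detect_changes(baseline, current):
    """Compare two snapshots and report added, removed and modified files."""
    return {
        "added":    sorted(set(current) - set(baseline)),
        "removed":  sorted(set(baseline) - set(current)),
        "modified": sorted(p for p in set(baseline) & set(current)
                           if baseline[p] != current[p]),
    }
```

Any entry in the resulting report is either an approved change to acknowledge or a potential compromise to investigate.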
The good news is you don't need to turn off your servers just to keep them secure.

Tuesday, 6 September 2011

How Logging And File Integrity Monitoring Technologies Can Augment Process and Procedure

Documentation Of PCI Compliance Processes? No Thanks!
Small Company PCI Compliance
For many Merchants subject to the PCI DSS, September is a significant deadline for proving that compliance with the security measures of the PCI DSS has been met.

Unless you are a Tier 1 merchant (transacting in excess of 6 million card sales each year) and therefore audited by a PCI Security Standards Council QSA (Qualified Security Assessor), you will be using the self-assessment route. SAQ D is the Self-Assessment Questionnaire most commonly used by medium to large merchants.

Regardless of which type of Merchant your organization is classified as, the task is firstly to put measures in place to meet the requirements (either install some security technology, e.g. a file integrity monitor, or define and document security procedures), and secondly to prove that those measures are effective.

For smaller merchants, processes are typically not documented because there has previously been no need to do so. It stands to reason that in a small-scale IT Department, processes are commensurately simple to explain and operate, and as such won't have needed to be documented. By the same token, however, documenting those processes, and proving that they work, should also be very simple.

For instance, the change management process may be as simple as 'if any of us need to make a change, we discuss it or just send an email to the others for their information, then enter details onto a shared spreadsheet document'.

Clearly there is ample potential for human error in a process like this, and for an 'inside man' attack to be perpetrated, even if the risk is low and identifying the perpetrator afterwards would be straightforward.

So in this case, documenting the process is easy, but proving it infallible is another matter. There are too many scenarios where the process can fail, principally through human error, which also makes it inadequate as a means of ensuring changes cannot be made without detection. This is why many small companies lose sleep over PCI compliance, worrying how far measures need to be taken and just how much security is enough.

Process Checks and Balances - Automated
PCI DSS Requirement 10 mandates the logging of all significant security events from the PCI estate, while PCI DSS Requirement 11.5 mandates the use of File Integrity Monitoring technology. For many organizations taking a 'checkbox' approach to PCI Compliance, the implementation of both technologies is seen as just another hassle to get through for the sake of the PCI DSS.

However, take a step back and look at the PCI DSS as a whole. The emphasis is on good security measures underpinned by sound best practices: for each dimension of security advocated by the PCI DSS there is a need to document and test the related processes.

It therefore becomes clear that logging and FIM are not just overlay technologies to plug gaps left by firewalling, hardening and antivirus measures, but integral means of verifying that your overall security stance remains effective.

Any file change or configuration change reported should be investigated, verified and then acknowledged as an approved change. The process is automated, yet simple and robust.
Similarly, a new account or newly assigned privilege will be reported via your log management system, prompting an investigation and ultimately a record of the acknowledgment. As such, implementing event log management and file integrity checking technologies can actually provide the processes needed for PCI DSS compliance. You could have a whole shelf full of change management processes and procedures, or alternatively simply refer to your log management and File Integrity Monitoring reporting system.
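The detect, investigate, acknowledge loop described above can be sketched in a few lines. This is a hypothetical illustration of the workflow, not any product's actual API; a real system would persist the records and drive alerting and reports from them.

```python
from datetime import datetime, timezone

class ChangeLog:
    """Minimal sketch of the detect -> investigate -> acknowledge workflow."""

    def __init__(self):
        self.events = []

    def record(self, description):
        """Called when FIM or log management detects a change."""
        self.events.append({"change": description, "acknowledged": False,
                            "detected_at": datetime.now(timezone.utc).isoformat()})

    def acknowledge(self, description, approver):
        """Mark an investigated change as approved, recording who signed off."""
        for event in self.events:
            if event["change"] == description and not event["acknowledged"]:
                event["acknowledged"] = True
                event["approver"] = approver

    def unacknowledged(self):
        """Outstanding changes: what a summary report should surface first."""
        return [e["change"] for e in self.events if not e["acknowledged"]]

log = ChangeLog()
log.record("new local account 'svc_backup'")   # hypothetical detected change
log.record("/etc/passwd modified")             # hypothetical detected change
log.acknowledge("new local account 'svc_backup'", approver="ops-manager")
print(log.unacknowledged())  # -> ["/etc/passwd modified"]
```

The acknowledged entries form exactly the audit trail an assessor asks for; the unacknowledged ones are the alerts that need investigating.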