Townsend Security Data Privacy Blog

Three Things to Know about PGP Encryption & the IBM z


Pretty Good Privacy (PGP) Encryption is a solid path to provable and defensible security, and PGP Command Line sets the standard for IBM enterprise customers.

Pretty Good Privacy (PGP) encryption is one of the most widely deployed whole file encryption technologies that has stood the test of time among the world’s largest financial, medical, industrial, and services companies. It works on all of the major operating system platforms and makes it easy to deploy strong encryption to protect data assets and file exchange. PGP is also well recognized and accepted across a broad number of compliance regulations as a secure way to protect sensitive data as it is in transit to your trading partners. PGP encryption can help businesses meet PCI-DSS, HIPAA/HITECH, SOX, and FISMA compliance regulations.

Here are three key things to know about PGP encryption for your IBM System z Mainframe, and how to discuss them with your technology providers:

1) Always encrypt and decrypt sensitive data on the platform where it is created. This is the only way to satisfy regulatory security and privacy notification requirements.

Moving data to a PC for encryption and decryption tasks greatly increases the chances of loss and puts your most sensitive data at risk. To avoid defeating your data security goals, encrypt and decrypt data directly on the platform where the data originates.

2) The best PGP encryption solutions manage PGP keys directly on the platform without the need for an external PC system, or key generation on a PC.

Using a PC to generate or manage PGP keys exposes the keys on the most vulnerable system. The loss of PGP keys may trigger expensive and time-consuming privacy notification requirements and force the change of PGP keys with all of your trading partners.
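
To make the point concrete, here is a minimal sketch of on-platform key generation, illustrated with GnuPG in batch mode (the commercial Symantec PGP Command Line uses different options, and a recent GnuPG is assumed). The identity values are placeholders; the important part is that the key pair is created on the server that owns the data, so the private key never has to be copied to a PC.

    # Generate a PGP key pair directly on the host (illustrated with GnuPG).
    import os
    import subprocess
    import tempfile

    # Batch-mode key parameters; the identity values are placeholders.
    params = "\n".join([
        "%no-protection",
        "Key-Type: RSA",
        "Key-Length: 3072",
        "Name-Real: Acme Transfer Service",
        "Name-Email: transfers@example.com",
        "Expire-Date: 1y",
        "%commit",
    ]) + "\n"

    # Write the batch parameters to a temporary file on the server.
    fd, params_path = tempfile.mkstemp(suffix=".params")
    with os.fdopen(fd, "w") as f:
        f.write(params)

    # Generate the key pair in unattended (batch) mode on the host itself;
    # the private key lands in the server's keyring and never leaves it.
    subprocess.run(["gpg", "--batch", "--gen-key", params_path], check=True)
    os.unlink(params_path)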

3) The best data security solutions will provide you with automation tools that help minimize additional programming and meet your integration requirements.

Most enterprise customers find that the cost of the software for an encryption solution is small compared to the cost of integrating the solution into their business applications. Data must be extracted from business applications, encrypted using PGP, transmitted to a trading partner, archived for future access, and tracked for regulatory audit. When receiving an encrypted file from a trading partner, the file must be decrypted, transferred to an IBM z library, and processed into the business application. All of these operations have to be automated to avoid expensive and time-consuming manual intervention.
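
As a rough illustration of what one automated step might look like, here is a minimal Python sketch that encrypts an extract with PGP and delivers it to a trading partner over SFTP, keeping an encrypted archive copy. It assumes GnuPG and the paramiko SSH library are available; the host names, recipient key, and paths are placeholders, and a production job would add error handling and audit logging.

    # One automated outbound step: encrypt, transmit, archive.
    import shutil
    import subprocess
    import paramiko

    EXTRACT = "/data/outbound/payments.csv"
    ENCRYPTED = EXTRACT + ".pgp"
    RECIPIENT = "partner-key@example.com"   # trading partner's public key (placeholder)

    # 1. Encrypt on the platform where the data was created.
    subprocess.run(
        ["gpg", "--batch", "--yes", "--encrypt",
         "--recipient", RECIPIENT, "--output", ENCRYPTED, EXTRACT],
        check=True,
    )

    # 2. Transmit the encrypted file to the trading partner over SFTP.
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect("sftp.partner.example.com", username="acme",
                key_filename="/etc/keys/id_rsa")
    ssh.open_sftp().put(ENCRYPTED, "/inbound/payments.csv.pgp")
    ssh.close()

    # 3. Archive the encrypted copy for audit and future access.
    shutil.copy(ENCRYPTED, "/archive/outbound/")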

While the IBM System z Mainframe has always had a well-earned reputation for security, IBM recently modernized and extended this high-end enterprise server with the new z13 model. With full cross-platform support, you can encrypt and decrypt data on the IBM Mainframe regardless of its origination or destination.

For over a decade Townsend Security has been bringing PGP encryption to Mainframe customers to help them solve some of the most difficult problems with encryption. As partners with Symantec we provide IBM enterprise customers running IBM System z and IBM i (AS/400, iSeries) with the same strong encryption solution that runs on Windows, Linux, Mac, Unix, and other platforms.

With the commercial PGP implementation from Symantec comes full support for the OpenPGP standard, which really makes a difference for enterprise businesses. Here are just a few of the things we’ve done with PGP to embrace the IBM System z Mainframe architecture:

    • Native z/OS Batch operation
    • Support for USS operation
    • Text mode enhancements for z/OS datasets
    • Integrated EBCDIC to ASCII conversion using built-in IBM facilities
    • Simplified IBM System z machine and partition licensing
    • Support for self-decrypting archives targeting Windows, Mac, and Linux!
    • A rich set of working JCL samples
    • As always we offer a free 30-day PGP evaluation on your own IBM Mainframe

PGP Command Line is the gold standard for whole file encryption, and you don’t have to settle for less. When you base your company reputation on something mission-critical like PGP encryption, you deserve the comfort of knowing that there’s a support team there ready to stand behind you.

Listen to the podcast for more in-depth information and a discussion on how PGP meets compliance regulations, and how Townsend Security, the only Symantec partner providing PGP Command Line 9 on both the IBM i (AS/400) and IBM z Mainframe platforms, can help IBM enterprise customers with defensible data security!

 

Download the Podcast for PGP z



Introducing Key Connection for Encryptionizer


Easier Encryption and Key Management for SQL Server Standard & Web Editions

Your IT environment is ever changing. As technologies evolve, you constantly have to upgrade systems and migrate to new platforms to accommodate these changes. This often results in complex IT environments composed of multiple platforms and operating systems, with data located in disparate locations. Protecting sensitive data in a multi-platform environment that’s made up of both old and new technologies can be one of the most frustrating aspects of data security.

One of Townsend Security’s core missions is to help customers protect sensitive data, regardless of where that data resides, in a cohesive way. One area where we’ve seen the need to improve upon this is around encrypting data in Standard versions of Microsoft SQL Server. Because Microsoft does not provide transparent data encryption (TDE) capability on SQL Server Standard and Web editions, customers using these editions struggle to implement easy and fast transparent encryption.

In order to simplify the encryption process for Standard and Web Edition users, we have partnered with NetLib, a database encryption solution provider that supports easy and automatic folder, file, and application encryption in Windows as well as database and column level encryption for all Microsoft SQL Server editions. NetLib Encryptionizer easily protects data in SQL Server (2000-2014) Standard, Web, Workgroup and Express Editions. NetLib also provides column level encryption using triggers and views, without the need to write SQL statements or do any other development.

NetLib’s customers choose NetLib Encryptionizer for their FIPS 140-2 compliant encryption solution, for which they require FIPS 140-2 compliant encryption key management as well. As a critical step in our partnership, Townsend Security built Key Connection for Encryptionizer, a no-cost plug-in module that allows Encryptionizer customers to easily secure encryption keys using Townsend Security’s Alliance Key Manager.  

System audits and logging are critical to detecting and alerting you to malicious activity in your systems. Key Connection for Encryptionizer allows users to fully audit the encryption and key management process. Encryptionizer audit logging of user-file interaction as well as audit logging of the key life cycle in Alliance Key Manager provides a comprehensive logging service. Additionally, encrypting data on SQL Server Standard and Web editions typically involves some level of development. With NetLib Encryptionizer, users can encrypt data using a simple point-and-click configuration.

Townsend Security is proud to partner with NetLib to provide an easier way to encrypt data in Microsoft SQL Server Standard and Web editions. To learn more about managing encryption keys for data encryption in complex environments, download the white paper, Key Management in the Multi-Platform Environment.


Overcome Security Challenges with Your VMware Environment


Prioritize Your Data Security Plan and Encryption Strategy

For many businesses, moving to the cloud means storing or processing credit card numbers, financial information, health care data, and other personally identifiable information (PII) in a virtual, shared environment. When using an environment that is inherently less secure, how does an organization meet industry data security requirements and prevent unwanted access to sensitive data?

In order to achieve a comprehensive data security plan in a VMware environment, organizations should consider the following steps:

Take Inventory of Your Sensitive Data

Every data security project should start by making an inventory of sensitive data in your IT environment. If you do not know where to start, first consider the compliance regulations you fall under. For example, do you process credit cards? If so, you must locate and encrypt primary account numbers (PAN), expiration dates, cardholder names, and service codes wherever they are processed, transmitted, or stored in order to meet PCI compliance. If your company is a financial institution, include Non-Public Information (NPI) about consumers, and if you are in the medical segment, you must also locate all Protected Health Information (PHI) for patients. Finally, locate all data that is considered Personally Identifiable Information (PII), which is any information that can uniquely identify an individual (social security number, phone number, email address, etc.). Business plans, computer source code, and other digital assets should make the list, too.

Once you have a list of the kinds of information that you should protect, find and document the places this information is stored. This will include databases in your virtual machines, unstructured data in content management systems, log files, and everywhere else sensitive data comes to rest or can be found in the clear.

After you have a full inventory of your sensitive data, prioritize your plan of attack to secure that information with encryption and protect your encryption keys with a key management solution. The most sensitive information, such as credit card numbers, medical or financial data, is more valuable to cyber criminals and should be encrypted first. Creating this map of where your sensitive data resides and prioritizing which data to encrypt is not only a requirement for many compliance regulations, but will help to focus your resources as well.  
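
As a simple illustration of the automated side of that inventory, the sketch below scans flat files for one class of sensitive data, candidate credit card numbers (PANs), validated with the Luhn check. The paths are placeholders, and a real inventory would also cover databases, log files, and the other data types listed above.

    # Scan export files for candidate primary account numbers (PANs).
    import re
    from pathlib import Path

    PAN_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

    def luhn_ok(digits: str) -> bool:
        # Standard Luhn check: double every second digit from the right.
        total, parity = 0, len(digits) % 2
        for i, ch in enumerate(digits):
            d = int(ch)
            if i % 2 == parity:
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    for path in Path("/var/exports").rglob("*.txt"):   # placeholder location
        for line_no, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for match in PAN_CANDIDATE.finditer(line):
                digits = re.sub(r"[ -]", "", match.group())
                if luhn_ok(digits):
                    print(f"possible PAN in {path}:{line_no}")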

What to do:

  • Define sensitive data for your organization.
  • Using manual and automated procedures, make an inventory of all of the places you process and store sensitive data.
  • Create a prioritized plan on how you will encrypt the sensitive information affected by compliance regulations.

Implement Encryption and Encryption Key Management

While encryption is critical to protecting data, it is only half of the equation. Your key management solution will determine how effective your data security strategy ultimately is. When encrypting information in your applications and databases, it is crucial to protect encryption keys from loss. Storing encryption keys with the data they protect, or using non-standard methods of key storage, will not protect you in the event of a data breach.

For businesses who are already encrypting data, the most common cause of an audit failure is improper storage and protection of the encryption keys. Doing encryption key management right is often the hardest part of securing data. For this reason, it is paramount to choose a key management solution that is compliant and tested against the highest standards:

  • Your VMware key management solution should be based on FIPS 140-2 compliant key management software (you can verify whether your key management vendor offers FIPS 140-2 compliant key management by looking it up on the NIST website).
  • A key management solution should also conform to the industry standard Key Management Interoperability Protocol (KMIP) as published by OASIS. Ask for the KMIP Interoperability Report from the KMIP testing process.

Encrypting sensitive data on your virtual machine protects your data at the source, and is the only way to definitively prevent unwanted access to sensitive data. With VMware environments, businesses that need to protect sensitive data can use encryption and encryption key management to secure data, comply with industry security standards, protect against data loss, and help prevent data breaches.
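
The sketch below illustrates the separation described above: data is encrypted locally with AES, but the key that wraps the data key lives only in an external key manager and is never stored beside the data. It uses the Python cryptography library for AES-GCM; the fetch_kek() function is a stand-in for your key manager's API (for example a KMIP client), not a real library call.

    # Envelope encryption with the key-encryption key held by a key manager.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def fetch_kek(key_id: str) -> bytes:
        """Placeholder: retrieve the key-encryption key from the key manager
        over a mutually authenticated TLS connection."""
        raise NotImplementedError("wire this to your key management service")

    def encrypt_record(plaintext: bytes, kek_id: str) -> dict:
        data_key = AESGCM.generate_key(bit_length=256)   # per-record data key
        nonce = os.urandom(12)
        ciphertext = AESGCM(data_key).encrypt(nonce, plaintext, None)

        # Wrap the data key with the externally held KEK, then discard it.
        kek, wrap_nonce = fetch_kek(kek_id), os.urandom(12)
        wrapped_key = AESGCM(kek).encrypt(wrap_nonce, data_key, None)

        # Only the ciphertext and the *wrapped* key are stored with the data;
        # the KEK itself never leaves the key manager's protection.
        return {"kek_id": kek_id, "nonce": nonce, "ciphertext": ciphertext,
                "wrap_nonce": wrap_nonce, "wrapped_key": wrapped_key}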

What to look for:

  • Use industry standard encryption algorithms such as AES to protect your sensitive data. Avoid non-standard encryption methods.
  • Your encryption solution should support installation in any application workgroup that you define for your trusted applications. Be sure your encryption vendor explains any limitations in the VMware deployment.
  • Your encryption key management solution should support deployment in a separate VMware security workgroup. Ideally, the key management solution will include internal firewall support to complement the VMware virtual firewall implementation.
  • Your key management solution is a critical part of your VMware security implementation. It should support active collection and monitoring of audit logs and operating system logs. These logs should integrate with your log collection and SIEM active monitoring systems.

As your IT environment evolves, make sure your key management evolves with you. In addition to support for VMware, be sure your key management solution is available as a hardware security module (HSM), as a Cloud HSM subscription, and as a native cloud application on major cloud service provider platforms such as Amazon Web Services and Microsoft Azure. Even if you do not have these non-VMware platforms today, it is important to consider that the evolution of your IT infrastructure is inevitable. The encryption and key management solutions you deploy today in your VMware data center should be prepared to move to cloud or hosted platforms quickly and seamlessly. A merger, acquisition, rapid growth, competitive challenges, and technology advances can force the need to migrate your solutions to new platforms.

For more detailed information, check out our eBook on VMware Encryption – 9 Critical Components of a Defensible Encryption Strategy:

VMware Encryption eBook


Understanding Encryption and Key Management for VMware


How to implement solutions that are based on compliance standards and meet security best practices.

As more and more Enterprise businesses move into virtual and cloud environments, they face challenges and security issues in these multi-tenancy situations. VMware customers benefit from the many operational and cost efficiencies provided by VMware virtualization technologies both in traditional IT infrastructure and in cloud environments. As VMware customers deploy data encryption solutions as a part of their defense-in-depth strategy, the need for compliant encryption key management can present barriers to a good encryption implementation. It is possible to deploy a proper encryption key management solution within the VMware infrastructure without the need for traditional hardware security modules (HSMs) when this approach is appropriate to the security needs of the organization.

Here is some high level guidance on how to deploy and protect a solid encryption and key management solution for VMware within your virtual or cloud environment. While these recommendations are general in nature (actual VMware deployments will use different VMware applications and architectures to meet specific user, application, and security needs) they can provide a good roadmap.

Seven General VMware Recommendations

1. Identify and Document Trusted and Un-Trusted Applications

Properly identifying application groups based on the level of trust is critical for a secure implementation of virtualized applications and encryption key management services. Create and isolate a management cluster for your core VMware applications such as vSphere, vShield, etc. Identify application groups and their associated level of trust, and isolate applications into appropriate workgroups. Avoid mixing trusted and untrusted applications in a workgroup.

You should consider creating a security workgroup to contain your third party security applications such as encryption key management, authentication services, active directory, system logging, and other applications whose primary function is to assist in securing your applications in your VMware environment.

In preparation for properly securing these environments, create an inventory of all Virtual Machines managed in each workgroup. For each workgroup and virtual machine, identify the security controls that will be required for each one (network segmentation, storage segmentation, system logging, active monitoring, etc.). VMware flow tools can assist with this documentation.

2. Restrict Physical Access

Fundamental to all IT security implementations is proper security of the physical environment. This means proper physical security controls and physical monitoring of the data center as well as good auditing and procedural controls. These physical controls should also apply to access of VMware management and security applications. You can look to the PCI Data Security Standards and guidance for information on appropriate physical controls. You can also refer to standard security guidance in SOC 2 and SOC 3 assessments for information on physical controls. When deploying on a cloud platform it is always a good idea to ask the Cloud Security Provider (CSP) for a copy of the PCI letter of attestation, or an SOC 2 / SOC 3 report.

3. Isolate Security Functions

Because security applications are often a target of cyber-criminals, you should isolate them into their own security workgroup and implement the highest level of VMware security. Only trusted VMware administrators should have access rights to the encryption key management solution, system logs, and audit reports. Be sure to actively monitor access to and use of all encryption key management, key retrieval, and encryption services.

4. Change VMware Default Passwords

Review all VMware applications used to secure and manage your VMware environment and change the default passwords as recommended by VMware. The failure to change default passwords is one of the most common causes of security breaches.

5. Implement Network Segmentation

Network segmentation is easy to accomplish with VMware network management and security applications, and you should use it to isolate applications that process sensitive information from applications that do not require as high a level of trust. Additionally, you should provide network segmentation for all third party security applications such as your encryption and key management solution. Network segmentation should include all high availability and business recovery infrastructure. Do not rely on virtual network segmentation alone; use firewalls that are capable of properly securing virtual networks.

6. Implement Defense in Depth

The VMware management and security applications provide for a high level of security and monitoring. They also provide hooks and integration with third party security applications that provide system log collection, active monitoring, intrusion detection, etc. Encryption is a critical part of a defense-in-depth strategy, and protecting encryption keys is the most important part of an encryption strategy. Regardless of the operating systems in your application Virtual Machines, your solution should provide encryption key management, key retrieval, and encryption services for your business applications and databases running in your VMware infrastructure.

7. Monitor VMware Administrative Activity

Use an appropriate SIEM solution to collect VMware application and ESXi hypervisor system logs and perform active monitoring. The log collection and SIEM active monitoring solutions should be isolated into a security workgroup that contains other third party security applications such as Townsend Security’s Alliance Key Manager.
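
As a minimal illustration, the sketch below ships security events from a host in the security workgroup to a central collector over standard syslog, where the SIEM can correlate them. The collector address and the sample events are placeholders; VMware and ESXi hosts would normally be pointed at the same collector through their own syslog settings.

    # Forward security events to a central syslog collector for SIEM correlation.
    import logging
    from logging.handlers import SysLogHandler

    siem = logging.getLogger("security-audit")
    siem.setLevel(logging.INFO)
    # Placeholder collector address; syslog over UDP port 514 by default.
    siem.addHandler(SysLogHandler(address=("siem.internal.example.com", 514)))

    # Example events from the security workgroup worth correlating centrally.
    siem.info("key-manager: key retrieval key_id=billing-2015 client=10.0.4.17")
    siem.warning("vsphere: admin login user=root source=10.0.9.2")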

For additional information on securing Alliance Key Manager for VMware, our encryption key management solution, request the VMware Resource Kit containing the Guidance Document and other valuable resources:

Resource Kit: Encryption and Key Management in VMware

As solutions and implementations vary a great deal, always consult with a security specialist and compliance auditor for specific guidelines for your industry and environment! Just contact us to get started!


Basics of the EU Data Protection Working Party


Article 29 Security Guidelines on Data Protection



The Article 29 Working Party is composed of representatives of the national data protection authorities (DPA), the European Data Protection Supervisor (EDPS), and the European Commission. It is a very important platform for cooperation, and its main tasks are to:

  1. Provide expert advice from the national level to the European Commission on data protection matters.
  2. Promote the uniform application of Directive 95/46 in all Member States of the EU, as well as in Norway, Liechtenstein and Iceland.
  3. Advise the Commission on any European Community law (the so-called first pillar) that affects the right to protection of personal data.


Download the EU Data Privacy White Paper

Under EU law, personal data can only be gathered legally under strict conditions, for a legitimate purpose. Furthermore, persons or organisations which collect and manage personal information must protect it from misuse and must respect certain rights of the data owners which are guaranteed by EU law.

Every day within the EU, businesses, public authorities and individuals transfer vast amounts of personal data across borders. Conflicting data protection rules in different countries would disrupt international exchanges. Individuals might also be unwilling to transfer personal data abroad if they were uncertain about the level of protection in other countries.

Therefore, common EU rules have been established to ensure personal data enjoys a high standard of protection everywhere in the EU. The EU's Data Protection Directive also foresees specific rules for the transfer of personal data outside the EU to ensure the best possible protection of sensitive data when it is exported abroad.

In order to help address these EU objectives, Patrick Townsend, Founder and CEO of Townsend Security recommends the following data protection best practices:

  • Encrypt Data at Rest
    Make a full inventory of all sensitive personal information that you collect and store. Use strong encryption to protect this data on servers, PCs, laptops, tablets, mobile devices, and on backups.
  • Use Industry Standard Encryption
    Advanced Encryption Standard (AES, also known as Rijndael) is recognized world-wide as the leading standard for data encryption.
  • Use Strong Encryption Keys
    Always use cryptographically secure 128-bit or 256-bit AES encryption keys and never use passwords as encryption keys or as the basis for creating encryption keys.
  • Protect Encryption Keys from Loss
    Encryption keys must be stored away from the data they protect. Keys must be securely managed and should be compliant with industry standards such as NIST FIPS 140-2, which is recognized and accepted worldwide.
  • Change Encryption Keys Regularly
    Change your encryption keys on a quarterly or semi-annual basis. Using one encryption key for a long period of time can expose you to a breach notification for historical data.
  • Use Strong, Industry Standard Hash Algorithms
    Never use MD5 or other weaker hash methods. Use the SHA-256 or SHA-512 methods for your hash requirements.
  • Use Keys or Salt with Your Hashes
    You can use the Hashed Message Authentication Code (HMAC) method with an encryption key, or use a strong encryption key under the protection of a key manager as the salt for the hash method; a short example follows this list.
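
Here is a minimal sketch of those last two recommendations, using Python's standard hmac and hashlib modules to produce a keyed SHA-256 hash. The key shown is a placeholder; in practice it would be retrieved from a key manager rather than hard-coded.

    # Keyed SHA-256 hash (HMAC) instead of a bare MD5/SHA-1 digest.
    import hmac
    import hashlib

    secret_key = b"replace-with-a-key-from-your-key-manager"   # placeholder
    email = b"user@example.com"

    digest = hmac.new(secret_key, email, hashlib.sha256).hexdigest()
    print(digest)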

For more detailed information on these recommendations, download the white paper on the "EU Data Privacy Protections and Encryption":

Click to Request the EU Data Privacy White Paper

Data Protection in the Cloud & PCI DSS - Logs and Log Monitoring (Part 3)


This is the third part in our series looking at recent announcements by Amazon, Microsoft and other cloud service providers regarding new encryption and key management services. Let’s talk about log collection and active monitoring as a security best practice, and as a requirement to meet PCI DSS security requirements. Since the PCI DSS guidelines implement common security best practices, they are a good starting point for evaluating the security of any application and platform that processes sensitive data. Following the practice of the first part of this series we will use the PCI document “PCI DSS Cloud Computing Guidelines, Version 2.0” as our reference point, and add in some other sources of security best practices. Even if you don’t have to meet PCI data security requirements, this should be helpful when evaluating your security posture in the cloud.

Download Whitepaper on PCI Data Security

Collecting system logs and actively monitoring them is a core component of every cyber security recommendation. Cybercriminals often gain access to IT systems and go undetected for weeks or months. This gives them the ability to work on compromising systems and stealing data over time. Active monitoring is important in the attempt to detect and thwart this compromise.

Here is what PCI says about active monitoring in Section 10 of the PCI DSS (emphasis added):

Review logs and security events for all system components to identify anomalies or suspicious activity.

Many breaches occur over days or months before being detected. Checking logs daily minimizes the amount of time and exposure of a potential breach. Regular log reviews by personnel or automated means can identify and proactively address unauthorized access to the cardholder data environment. The log review process does not have to be manual. The use of log harvesting, parsing, and alerting tools can help facilitate the process by identifying log events that need to be reviewed.

In recognition of the importance of ongoing, active monitoring the National Institute of Standards and Technology (NIST) provides this guidance in their Special Publication 800-137 “Information Security Continuous Monitoring (ISCM)” guidance:

The Risk Management Framework (RMF) developed by NIST, describes a disciplined and structured process that integrates information security and risk management activities into the system development life cycle. Ongoing monitoring is a critical part of that risk management process. In addition, an organization’s overall security architecture and accompanying security program are monitored to ensure that organization-wide operations remain within an acceptable level of risk, despite any changes that occur. Timely, relevant, and accurate information is vital, particularly when resources are limited and agencies must prioritize their efforts.

And active monitoring is a component of the SANS Top 20 security recommendations:

Collect, manage, and analyze audit logs of events that could help detect, understand, or recover from an attack.

Deficiencies in security logging and analysis allow attackers to hide their location, malicious software, and activities on victim machines. Even if the victims know that their systems have been compromised, without protected and complete logging records they are blind to the details of the attack and to subsequent actions taken by the attackers. Without solid audit logs, an attack may go unnoticed indefinitely and the particular damages done may be irreversible.

Because of poor or nonexistent log analysis processes, attackers sometimes control victim machines for months or years without anyone in the target organization knowing, even though the evidence of the attack has been recorded in unexamined log files.

Deploy a SIEM (Security Incident and Event Management) or log analytic tools for log aggregation and consolidation from multiple machines and for log correlation and analysis.

This is why actively collecting and monitoring system and application logs is critical for your security strategy.

Implementing this critical security control in a cloud environment presents some special challenges. Here is what the PCI cloud guidance says:

Additionally, the ability to maintain an accurate and complete audit trail may require logs from all levels of the infrastructure, requiring involvement from both the CSP and the client. For example, the CSP could manage system-level, operating-system, and hypervisor logs, while the client configures logging for their own VMs and applications. In this scenario, the ability to associate various log files into meaningful events would require correlation of client-controlled logs and those controlled by the CSP.

It is not enough to collect logs from a few selected points in your cloud application environment. You need to collect all of the logs from all of the components that you deploy and use in your cloud application. This is because the effectiveness of active monitoring depends on the correlation of events across your entire application, database, and network, and this includes the cloud provider's systems and infrastructure. Here is what ISACA says about security event correlation:

Correlation of event data is critical to uncover security breaches because security incidents are made up of a series of events that occur at various touch points throughout a network--a many-to-one process. Unlike network management, which typically is exception-based or a one-to-one process, security management is far more complex. An attack typically touches a network at multiple points and leaves marks or breadcrumbs at each. By finding and following that breadcrumb trail, a security analyst can detect and hopefully prevent the attack.

Your encryption key management system is one of those critical system components that must be monitored and whose events should be aggregated into a unified view. Key management logs would include encryption key establishment and configuration, encryption key access and use, and operating system logs of every component of the key management service. You should be able to collect and monitor logs from all parts of your applications and cloud platform.
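
To make the correlation idea concrete, here is a minimal sketch that flags key retrievals in a key manager's audit log that have no matching application-side event within a short time window. The event records are illustrative only; a real SIEM does this at much larger scale and across many more sources.

    # Correlate key-manager audit events with application events.
    from datetime import datetime, timedelta

    key_log = [  # from the key manager's audit log (illustrative records)
        {"time": datetime(2015, 2, 10, 3, 12), "key_id": "billing", "client": "10.0.4.99"},
    ]
    app_log = [  # from the application servers (illustrative records)
        {"time": datetime(2015, 2, 10, 9, 0), "key_id": "billing", "host": "10.0.4.17"},
    ]

    WINDOW = timedelta(minutes=5)
    for k in key_log:
        matched = any(a["key_id"] == k["key_id"] and abs(a["time"] - k["time"]) <= WINDOW
                      for a in app_log)
        if not matched:
            print(f"ALERT: unexplained key retrieval of {k['key_id']} by {k['client']}")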

Unfortunately, current key management services from cloud providers only provide a very limited level of access to critical component logs. You might have access to a limited audit trail of your own access to encryption keys, but no access to the key service system logs, HSM access logs, HSM audit logs, or HSM operating system logs. Without access to the logs in these components it is not possible for you to implement an effective log collection and active monitoring strategy. You are working in the dark, and without full access to all logs on all components of your cloud key management service you can’t comply with security best practices for log collection, correlation, and active monitoring.

Since key management systems are always in scope for PCI audit and are extensions of your application environment, it is difficult to see how these new cloud key management services can meet PCI DSS requirements for log collection and monitoring as currently implemented.

Does this mean you can’t implement security best practices for key management in the cloud? I don’t think so. There are multiple vendors, including us (see below), who offer cloud key management solutions that provide full access to key management, configuration, key usage, application, and operating system logs.  You can deploy a key management service that fully supports security best practices for log collection and monitoring.

In part 4 of this series we’ll look at the topic of key custody and multi-tenancy and how it affects the security of your key management solution in the cloud.

Patrick


Resources

Alliance Key Manager for AWS

Alliance Key Manager for Azure

Alliance Key Manager for VMware and vCloud

Alliance Key Manager for Drupal

Alliance Key Manager for IBM Power Systems

Alliance Key Manager Cloud HSM

Download the Whitepaper: Meet the Challenges of PCI Compliance


Understanding the Challenges of Data Protection in AWS


An excerpt from the latest white paper “How to Meet Best Practices for Protecting Information in AWS” by Stephen Wynkoop, SQL Server MVP, Founder & Editor of SSWUG.org

Working in the cloud presents several challenges unique to that environment, including significant growth and change in the area of data protection and encryption. There is much confusion about what is, and is not, encrypted and protected. The encryption of information, and the management of the associated keys and access controls, is a core objective of this paper. If you can render information useless when it is accessed illegitimately, you have successfully addressed a whole host of regulations, compliance requirements, and best practices.

The very definition of protection by cloud providers is an important part of understanding the requirements and challenges of your configurations and information protection. AWS approaches data protection in several ways that impact your systems. The first is the configuration and design of your infrastructure. This consideration includes establishing Virtual Private Clouds (VPC) and providing for encryption of some information stores. The challenge exists in understanding the protection of these information stores and determining what you need to do to bring these protections in line with your requirements and compliance areas.

As you consider your systems, data protection will come down to several important areas:

  • Physical access controls – This refers to the doors, secure access controls and other protections at the physical server and server room level.
  • Logical access controls for your systems – These are the controls you put in place to prevent unwanted access to information.
  • Data access – Data access controls are typically enforced at the information stores level.
  • Protection of data in case of a breach – This is addressed by making the information in your systems unusable if accessed in a way that is unwanted.

Stephen’s white paper also covers the impact on data protection in public vs. private clouds, security fundamentals in AWS, and the best practices for deploying an encryption key management solution including:

  • Segregation of Duties
  • Dual Control and Split Knowledge
  • Key Creation (and understanding strong keys)
  • Key Rotation
  • Protection of Keys
  • Access Controls and Audits (Logging)

In his white paper, Stephen also discusses cloud-provider-based key management services and some of the important features, options, questions, and concerns that should be considered before selecting a service or a key management solution. Some important aspects to understand are:

  • Control, Ownership, and Access - By managing your own encryption services and providing for industry-compliant key management and data protection practices, you help ensure that your data remains managed by your own secure keys.
  • Multi-Tenancy and Key Management - In a worst case scenario it’s possible that keys could be compromised.
  • Access to Keys - Many systems and architectures are based on hybrid solutions. Cases where there are systems on-premises combined with systems in the cloud are areas that will be problematic with the AWS services. Systems not on the AWS hosted services will not have access to the key management services on AWS.

There are many different considerations when thinking about the choices in your key management solution. Be sure to fully understand logs, key management, backups and other elements that provide the utility you require. Finally, be sure you’re checking for proper compliance and certification of the solutions you are considering. It is important that any solution you choose has been through a FIPS 140-2 validation, and that you have a full understanding of any PCI, HIPAA or other regulatory body requirements.

Please download the full document to learn more about protecting information in Amazon Web Services and how Townsend Security’s Alliance Key Manager for AWS provides a FIPS 140-2 compliant encryption key manager to AWS users who need to meet data privacy compliance regulations and security best practices.

How to Meet Best Practices for Protecting Information in AWS by Stephen Wynkoop


Data Protection in the Cloud & PCI DSS - Segmentation (Part 2)


This is the second part in our series looking at recent announcements by Amazon, Microsoft and others regarding new encryption and key management services. Let’s talk about the concept of segmentation as a security best practice, and as a strong recommendation by PCI DSS security standards. Since the PCI DSS guidelines implement common security best practices they are a good jumping off point for evaluating the security of any application and platform. Following the practice of the first part of this series we will use the PCI document “PCI DSS Cloud Computing Guidelines, Version 2.0” as our reference point. Even if you don’t have to meet PCI data security requirements, this should be helpful when evaluating your security posture in the cloud.

Download Whitepaper on PCI Data Security

Segmentation as a security concept is very simple and very fundamental. Better security can be achieved by not mixing trusted and untrusted applications, data, and networks. This concept of trusted and untrusted applications extends to the value of the data assets – when applications process highly sensitive and valuable data assets they need to be separated into trusted and secure environments. We expend more effort and resources to protect what is valuable from criminals. Conversely, when there are no valuable data assets in an environment there is no need to take the same level of effort to secure them.

This is the core reason that PCI DSS recommends segmentation of applications that process payments from non-payment applications. Here is what PCI says about non-cloud applications:

Outside of a cloud environment, individual client environments would normally be physically, organizationally, and administratively separate from each other.

So, how do the PCI DSS security requirements relate to cloud platforms? Here is what PCI says (emphasis added):

Segmentation on a cloud-computing infrastructure must provide an equivalent level of isolation as that achievable through physical network separation. Mechanisms to ensure appropriate isolation may be required at the network, operating system, and application layers; and most importantly, there should be guaranteed isolation of data that is stored.

Proper segmentation is difficult to achieve even when you have complete control over all aspects of your environment. When you add the inherently shared and multi-tenant architecture of cloud platforms this becomes a high hurdle to get over. Here is what PCI says about this challenge:

Client environments must be isolated from each other such that they can be considered separately managed entities with no connectivity between them. Any systems or components shared by the client environments, including the hypervisor and underlying systems, must not provide an access path between environments. Any shared infrastructure used to house an in-scope client environment would be in scope for that client’s PCI DSS assessment.

This brings us exactly to the concern about new cloud key management services in Azure and AWS. These new services are inherently multi-tenant, from the key management services down to the hardware security modules (HSMs) that provide the ultimate security for encryption keys. You have no idea who you are sharing the service with.

The PCI guidance tells us what this segmentation looks like in a cloud environment:

A segmented cloud environment exists when the CSP enforces isolation between client environments. Examples of how segmentation may be provided in shared cloud environments include, but are not limited to: 

  • Traditional Application Service Provider (ASP) model, where physically separate servers are provided for each client’s cardholder data environment.
  • Virtualized servers that are individually dedicated to a particular client, including any virtualized disks such as SAN, NAS or virtual database servers.
  • Environments where clients run their applications in separate logical partitions using separate database management system images and do not share disk storage or other resources.

There is no cloud service provider implementation of key management services that meet these basic requirements.

The PCI DSS guidance takes a pretty strong view about inadequate segmentation in cloud environments:

If adequate segmentation is not in place or cannot be verified, the entire cloud environment would be in-scope for any one client’s assessment. Examples of “non-segmented” cloud environments include but are not limited to:

  • Environments where organizations use the same application image on the same server and are only separated by the access control system of the operating system or the application.
  • Environments where organizations use different images of an application on the same server and are only separated by the access control system of the operating system or the application.
  • Environments where organizations’ data is stored in the same instance of the database management system’s data store.

Since key management systems are always in scope for PCI audit, are extensions of your application environment, and depend entirely on the access control system of the cloud provider, it is difficult to see how these new cloud key management services can meet PCI DSS requirements as currently implemented.

Here’s the last comment by PCI on segmentation in cloud environments:

Without adequate segmentation, all clients of the shared infrastructure, as well as the CSP, would need to be verified as being PCI DSS compliant in order for any one client to be assured of the compliance of the environment. This will likely make compliance validation unachievable for the CSP or any of their clients.

Does this mean you can’t implement security best practices for key management in the cloud? I don’t think so. There are multiple vendors, including us (see below), who offer cloud key management solutions that we believe can be effectively isolated and segmented on cloud platforms, or even hosted outside of the cloud.

In our part 3 of this series we’ll look at the topic of logging and active monitoring and how it affects the security of your key management solution in the cloud.

Patrick


Resources

Alliance Key Manager for AWS

Alliance Key Manager for Azure

Alliance Key Manager for VMware and vCloud

Alliance Key Manager for Drupal

Alliance Key Manager for IBM Power Systems

Alliance Key Manager Cloud HSM

Download the Whitepaper: Meet the Challenges of PCI Compliance



Data Protection in the Cloud & PCI DSS - Encryption and Key Management (Part 1)


Public and private organizations of all sizes are rapidly adopting cloud platforms and services as a way of controlling costs and simplifying administrative tasks. One of the most urgent concerns is addressing the new security challenges inherent in cloud platforms, and meeting various compliance regulations. Data protection, especially encryption and encryption key management, are central to those security concerns.

Download Whitepaper on PCI Data Security

The recent announcements by Amazon, Microsoft and others regarding new encryption and key management services make it more urgent to understand the security and compliance implications of these cloud security services.

There are a number of sources for security best practices and regulatory rules for data encryption including the National Institute of Standards and Technology (NIST), the Cloud Security Alliance (CSA), the Payment Card Industry Security Standards Council (PCI SSC), the EU Data Protection Directive, and others.

Because so many organizations fall under the PCI Data Security Standards (PCI DSS) regulations, and because the PCI guidelines are mature and often referenced by security auditors, we will use the PCI recommendations and guidance as the basis for our discussion in this multi-part series.

For securing information in the cloud, the PCI Security Standards Council published the document “PCI DSS Cloud Computing Guidelines, Version 2.0” in February of 2013. This is the current guidance for PCI DSS compliance and provides recommendations and guidance for any organization that needs to meet PCI data security requirements. It is also a common benchmark for organizations who do not need to meet PCI DSS standards, but who want to meet security best practices for protecting sensitive data in the cloud. 

Disclaimer: Townsend Security, Inc. is not a Qualified Security Auditor (QSA) and the opinions in this article are not intended to provide assurance of PCI DSS compliance. For PCI DSS compliance validation, please refer to an approved QSA auditor or request a referral from Townsend Security.

First things first: Let’s tackle the most fundamental question - Who is responsible for data security in the cloud?

This one is easy to answer. You are! Not your Cloud Service Provider (CSP), not your QSA auditor, and no one else. You are ultimately responsible for ensuring that you meet security best practices and PCI DSS guidance in the cloud platform.

This can be confusing when you are new to cloud computing. Cloud Service Providers often make this more confusing by claiming to be PCI compliant, and you might infer that you are PCI compliant as a result of moving to their cloud. This is wrong. The PCI SSC makes it clear that you bear full and ultimate responsibility for ensuring PCI DSS compliance. No major cloud service provider can make you PCI compliant just by implementing on their cloud platform. You will have to work with your CSP to ensure compliance, and that is your responsibility under these standards.

Now let’s look at the PCI cloud guidance in a bit more detail. Here is what they say in the PCI DSS cloud guidance (emphasis added):

Much stock is placed in the statement “I am PCI compliant”, but what does this actually mean for the different parties involved?

Use of a PCI DSS compliant CSP does not result in PCI DSS compliance for the clients. The client must still ensure they are using the service in a compliant manner, and is also ultimately responsible for the security of their CHD—outsourcing daily management of a subset of PCI DSS requirements does not remove the client’s responsibility to ensure CHD is properly secured and that PCI DSS controls are met. The client therefore must work with the CSP to ensure that evidence is provided to verify that PCI DSS controls are maintained on an ongoing basis—an Attestation of Compliance (AOC) reflects a single point in time only; compliance requires ongoing monitoring and validation that controls are in place and working effectively.

Regarding the applicability of one party’s compliance to the other, consider the following:
a) If a CSP is compliant, this does not mean that their clients are.
b) If a CSP’s clients are compliant, this does not mean that the CSP is.
c) If a CSP and the client are compliant, this does not mean that any other clients are.

The CSP should ensure that any service offered as being “PCI compliant” is accompanied by a clear and unambiguous explanation, supported by appropriate evidence, of which aspects of the service have been validated as compliant and which have not.

Great, now you know that you are responsible for PCI DSS compliance and that you have to work with your cloud service provider to ensure that their services and components are PCI DSS compliant and that you have deployed them in a compliant way.

Are the new cloud key management services PCI DSS compliant?

No.

At the time I am writing this (February 2015) there has been no claim of PCI DSS compliance by Microsoft for the new Azure Key Vault service, nor by Amazon for the new AWS Key Management Service, and there is no Attestation of Compliance (AOC) available for either service.

So what should you do?

In this case the PCI cloud guidance is very clear (emphasis added):

CSPs that have not undergone a PCI DSS compliance assessment will need to be included in their client’s assessment. The CSP will need to agree to provide the client’s assessor with access to their environment in order for the client to complete their assessment. The client’s assessor may require onsite access and detailed information from the CSP, including but not limited to:

  • Access to systems, facilities, and appropriate personnel for on-site reviews, interviews, physical walk-throughs, etc.
  • Policies and procedures, process documentation, configuration standards, training records, incident response plans, etc.
  • Evidence (such as configurations, screen shots, process reviews, etc.) to show that all applicable PCI DSS requirements are being met for the in-scope system components
  • Appropriate contract language, if applicable

More from the PCI cloud guidance (note that cloud encryption key management is a Security-as-a-Service):

Security as a Service, or SecaaS, is sometimes used to describe the delivery of security services using a SaaS-based delivery model. SecaaS solutions not directly involved in storing, processing, or transmitting CHD may still be an integral part of the security of the CDE. As an example, a SaaS-based anti-malware solution may be used to update anti-malware signatures on the client’s systems via a cloud-delivery model. In this example, the SecaaS offering is delivering a PCI DSS control to the client’s environment, and the SecaaS functionality will need to be reviewed to verify that it is meeting the applicable requirements.

This means that you, or your security auditor, will have to perform the full PCI data security assessment of the cloud service provider’s encryption and key management service and that the CSP will have to grant you full access to their facilities, staff, and procedures to accomplish this.

For both practical and logistical reasons that is not likely to happen. Most CSPs do not allow their customers unfettered access to their staff and facilities. Even if you could negotiate this, it may not be within your budget to hire a qualified auditor to perform the extensive initial and ongoing reviews that this requires.

The only reasonable conclusion you can draw is that the new encryption and key management services are not PCI DSS compliant at the present time, and you are not likely to achieve compliance through your own efforts.

In the next part of this series we will look at other security and compliance concerns about these new cloud key management services. We still have some distance to go on this topic, but if cloud security is a concern I think you will find this helpful!

Download the Whitepaper: Meet the Challenges of PCI Compliance


Is Drupal Ready for the Enterprise?


Drupal is growing up. Currently, over one million websites run on Drupal. It is the CMS of choice for the Weather Channel, American Express and the White House. And with Drupal 8 right around the corner, promising to bring new features and capabilities, it is a very attractive CMS for agencies who need to build solutions for their clients.

While these are some major wins for the platform, there is still a large segment of enterprises not quite ready to adopt Drupal. With headlines like “Drupal Sites, Assume You’ve Been Hacked” (after Drupalgeddon), it is easy to understand why there may be some hesitation. Security is a top concern for enterprises, and they are scrutinizing anything that collects and stores personally identifiable information (PII) on behalf of their brand.

Increased security requirements are now trickling down via RFPs to agencies and developers who need to choose a CMS. As far as these enterprises are concerned, they don’t care what CMS is used on their project, as long as it can: (1) help manage their risk of a data breach and (2) meet compliance requirements. Enterprises know that the regulations they fall under (PCI DSS, HIPAA, FISMA, etc.) can be unforgiving in the event of a breach, especially if the proper technology was not in place.

So, is Drupal ready for the enterprise?

As a platform, yes-ish. Enterprises have security requirements that go beyond what is available in Drupal core. Fortunately, there are members within the Drupal community that understand this and have developed modules and services that easily integrate into Drupal installations. It is now up to developers to use these tools to build secure, enterprise-ready web sites and applications.

In order to win bids, developers need to not only know how to code, but also need to know security best practices. Concepts like dual control and separation of duties are now considerations when planning for web site security. Developers are also learning what compliance regulations consider sensitive information – data that they previously didn’t think twice about leaving unprotected. Beyond the obvious – credit card and social security numbers – email addresses, phone numbers, and zip codes can constitute PII that needs to be encrypted.

Slide from Hugh Forrest’s (Director, SXSW Interactive Festival) DrupalCon 2014 keynote on the importance of encryption.

The importance of encryption and key management cannot be overstated. Encryption is the hardest part of data security, and key management is the hardest part of encryption. Storing encryption keys within the Drupal database, settings file, or even a protected file on the server, will never pass an enterprise security team’s sniff test or compliance audit. Hackers don’t break encryption, they find keys, and without key management, developers are leaving their private data open to any hacker that cares to take a look.

“We recently began talks with a Fortune 100 company regarding their platform,” said Chris Teitzel, CEO of Cellar Door Media. “Encryption was a requirement for this project. The primary question the security team brought up was how we are able to manage the keys.”

With the importance of encryption and key management realized, who is responsible for implementing it? Unfortunately, no major Drupal hosting providers currently offer it within their environments. However, it is not difficult to deploy using modules like Encrypt, Key, Form Encrypt, Field Encrypt, Key Connection, etc. These modules all have integrations with services that allow for encryption and key management outside of the Drupal installation.

It is also important to note that, while hosting providers can claim they are compliant with your specific regulation, their compliance does not extend to you, the developer. Hosting providers can attain an Attestation of Compliance (AOC) for their platform, however, it does not extend to what their customers do within their environments. Additionally, regulations like PCI DSS make it clear that in the event of a breach, it is the enterprise, not the hosting provider or development agency, that is responsible.

For Drupal to truly be a contender in the enterprise space, members of the Drupal community must understand how to secure data within their deployments, who is responsible in the event of a breach (their clients), and what they need to do to protect sensitive data in Drupal and make it a viable option.

What Data Needs To Be Encrypted In Drupal?
