Townsend Security Data Privacy Blog

Basics of the EU Data Protection Working Party


Article 29 Security Guidelines on Data Protection



The Article 29 Working Party is composed of representatives of the national data protection authorities (DPA), the European Data Protection Supervisor (EDPS), and the European Commission. It is a very important platform for cooperation, and its main tasks are to:

  1. Provide expert advice from the national level to the European Commission on data protection matters.
  2. Promote the uniform application of Directive 95/46 in all Member States of the EU, as well as in Norway, Liechtenstein and Iceland.
  3. Advise the Commission on any European Community law (so-called first pillar) that affects the right to protection of personal data.


Download the EU Data Privacy White Paper

Under EU law, personal data can only be gathered legally under strict conditions, for a legitimate purpose. Furthermore, persons or organisations which collect and manage personal information must protect it from misuse and must respect certain rights of the data owners which are guaranteed by EU law.

Every day within the EU, businesses, public authorities and individuals transfer vast amounts of personal data across borders. Conflicting data protection rules in different countries would disrupt international exchanges. Individuals might also be unwilling to transfer personal data abroad if they were uncertain about the level of protection in other countries.

Therefore, common EU rules have been established to ensure personal data enjoys a high standard of protection everywhere in the EU. The EU's Data Protection Directive also foresees specific rules for the transfer of personal data outside the EU to ensure the best possible protection of sensitive data when it is exported abroad.

In order to help address these EU objectives, Patrick Townsend, Founder and CEO of Townsend Security, recommends the following data protection best practices:

  • Encrypt Data at Rest
    Make a full inventory of all sensitive personal information that you collect and store. Use strong encryption to protect this data on servers, PCs, laptops, tablets, mobile devices, and on backups.
  • Use Industry Standard Encryption
    Advanced Encryption Standard (AES, also known as Rijndael) is recognized world-wide as the leading standard for data encryption.
  • Use Strong Encryption Keys
    Always use cryptographically secure 128-bit or 256-bit AES encryption keys, and never use passwords as encryption keys or as the basis for creating encryption keys.
  • Protect Encryption Keys from Loss
    Encryption keys must be stored away from the data they protect. Keys must be securely managed and should comply with industry standards such as NIST FIPS 140-2, which is recognized and accepted worldwide.
  • Change Encryption Keys Regularly
    Change your encryption keys on a quarterly or semi-annual basis. Using one encryption key for a long period of time can expose you to a breach notification for historical data.
  • Use Strong, Industry Standard Hash Algorithms
    Never use MD5 or other weaker hash methods. Use the SHA-256 or SHA-512 methods for your hash requirements.
  • Use Keys or Salt with Your Hashes
    You can use the Hash-based Message Authentication Code (HMAC) method with an encryption key, or use a strong encryption key under the protection of a key manager as the salt for the hash method; see the sketch after this list.
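
As a concrete illustration of several of these recommendations together, here is a minimal Python sketch using the widely deployed cryptography package. It generates a cryptographically secure 256-bit AES key (rather than deriving one from a password), encrypts data with standard AES, and computes an HMAC-SHA-256 over a record. The sample data and in-memory key handling are illustrative only; in production the keys would live under the protection of a key manager.

    import hmac
    import hashlib
    import secrets
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Strong key: 256 random bits from a cryptographically secure source.
    key = AESGCM.generate_key(bit_length=256)

    # Industry-standard AES encryption (GCM mode shown as one common choice).
    aesgcm = AESGCM(key)
    nonce = secrets.token_bytes(12)  # unique nonce for each encryption operation
    ciphertext = aesgcm.encrypt(nonce, b"sensitive personal data", None)

    # Keyed hash: HMAC-SHA-256 with its own random key, never MD5.
    hmac_key = secrets.token_bytes(32)  # in production, fetch from a key manager
    digest = hmac.new(hmac_key, b"record to protect", hashlib.sha256).hexdigest()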

For more detailed information on these recommendations, download the white paper on the "EU Data Privacy Protections and Encryption":

Click to Request the EU Data Privacy White Paper

Data Protection in the Cloud & PCI DSS - Logs and Log Monitoring (Part 3)


This is the third part in our series looking at recent announcements by Amazon, Microsoft and other cloud service providers regarding new encryption and key management services. Let’s talk about log collection and active monitoring as a security best practice, and as a requirement of the PCI DSS. Since the PCI DSS guidelines implement common security best practices, they are a good starting point for evaluating the security of any application and platform that processes sensitive data. Following the practice of the first part of this series, we will use the PCI document “PCI DSS Cloud Computing Guidelines, Version 2.0” as our reference point, and add in some other sources of security best practices. Even if you don’t have to meet PCI data security requirements, this should be helpful when evaluating your security posture in the cloud.

Download Whitepaper on PCI Data Security

Collecting system logs and actively monitoring them is a core component of every cyber security recommendation. Cybercriminals often gain access to IT systems and go undetected for weeks or months, which gives them time to compromise systems and steal data. Active monitoring is important for detecting and thwarting this kind of compromise.

Here is what PCI says about active monitoring in Section 10 of the PCI DSS (emphasis added):

Review logs and security events for all system components to identify anomalies or suspicious activity.

Many breaches occur over days or months before being detected. Checking logs daily minimizes the amount of time and exposure of a potential breach. Regular log reviews by personnel or automated means can identify and proactively address unauthorized access to the cardholder data environment. The log review process does not have to be manual. The use of log harvesting, parsing, and alerting tools can help facilitate the process by identifying log events that need to be reviewed.

In recognition of the importance of ongoing, active monitoring, the National Institute of Standards and Technology (NIST) provides this guidance in Special Publication 800-137, “Information Security Continuous Monitoring (ISCM)”:

The Risk Management Framework (RMF) developed by NIST, describes a disciplined and structured process that integrates information security and risk management activities into the system development life cycle. Ongoing monitoring is a critical part of that risk management process. In addition, an organization’s overall security architecture and accompanying security program are monitored to ensure that organization-wide operations remain within an acceptable level of risk, despite any changes that occur. Timely, relevant, and accurate information is vital, particularly when resources are limited and agencies must prioritize their efforts.

And active monitoring is a component of the SANS Top 20 security recommendations:

Collect, manage, and analyze audit logs of events that could help detect, understand, or recover from an attack.

Deficiencies in security logging and analysis allow attackers to hide their location, malicious software, and activities on victim machines. Even if the victims know that their systems have been compromised, without protected and complete logging records they are blind to the details of the attack and to subsequent actions taken by the attackers. Without solid audit logs, an attack may go unnoticed indefinitely and the particular damages done may be irreversible.

Because of poor or nonexistent log analysis processes, attackers sometimes control victim machines for months or years without anyone in the target organization knowing, even though the evidence of the attack has been recorded in unexamined log files.

Deploy a SIEM (Security Information and Event Management) or log analytic tools for log aggregation and consolidation from multiple machines and for log correlation and analysis.

This is why actively collecting and monitoring system and application logs is critical for your security strategy.
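
As a small sketch of what centralized log collection can look like at the application level, the following Python fragment uses only the standard library to forward events to a syslog-based collector or SIEM. The host name, port, logger name, and message format are placeholder assumptions, not a reference to any particular product.

    import logging
    import logging.handlers

    # Forward application security events to a central collector/SIEM over syslog.
    # "siem.example.com" and port 514 are placeholders for your aggregation endpoint.
    handler = logging.handlers.SysLogHandler(address=("siem.example.com", 514))
    logger = logging.getLogger("payments-app")
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)

    # Events such as key retrievals can then be correlated with other system logs.
    logger.info("key-retrieval user=svc-web key_id=K-2041 result=success")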

Implementing this critical security control in a cloud environment presents some special challenges. Here is what the PCI cloud guidance says:

Additionally, the ability to maintain an accurate and complete audit trail may require logs from all levels of the infrastructure, requiring involvement from both the CSP and the client. For example, the CSP could manage system-level, operating-system, and hypervisor logs, while the client configures logging for their own VMs and applications. In this scenario, the ability to associate various log files into meaningful events would require correlation of client-controlled logs and those controlled by the CSP.

It is not enough to collect logs from a few selected points in your cloud application environment. You need to collect all of the logs from all of the components that you deploy and use in your cloud application, because the effectiveness of active monitoring depends on the correlation of events across your entire application, database, and network, including the cloud provider’s systems and infrastructure. Here is what ISACA says about security event correlation:

Correlation of event data is critical to uncover security breaches because security incidents are made up of a series of events that occur at various touch points throughout a network--a many-to-one process. Unlike network management, which typically is exception-based or a one-to-one process, security management is far more complex. An attack typically touches a network at multiple points and leaves marks or breadcrumbs at each. By finding and following that breadcrumb trail, a security analyst can detect and hopefully prevent the attack.

Your encryption key management system is one of those critical system components that must be monitored and whose events should be aggregated into a unified view. Key management logs would include encryption key establishment and configuration, encryption key access and use, and operating system logs of every component of the key management service. You should be able to collect and monitor logs from all parts of your applications and cloud platform.

Unfortunately, current key management services from cloud providers only provide a very limited level of access to critical component logs. You might have access to a limited audit trail of your own access to encryption keys, but no access to the key service system logs, HSM access logs, HSM audit logs, or HSM operating system logs. Without access to the logs in these components it is not possible for you to implement an effective log collection and active monitoring strategy. You are working in the dark, and without full access to all logs on all components of your cloud key management service you can’t comply with security best practices for log collection, correlation, and active monitoring.

Since key management systems are always in scope for a PCI audit and are extensions of your application environment, it is difficult to see how these new cloud key management services, as currently implemented, can meet PCI DSS requirements for log collection and monitoring.

Does this mean you can’t implement security best practices for key management in the cloud? I don’t think so. There are multiple vendors, including us (see below), who offer cloud key management solutions that provide full access to key management, configuration, key usage, application, and operating system logs.  You can deploy a key management service that fully supports security best practices for log collection and monitoring.

In part 4 of this series we’ll look at the topic of key custody and multi-tenancy and how it affects the security of your key management solution in the cloud.

Patrick


Resources

Alliance Key Manager for AWS

Alliance Key Manager for Azure

Alliance Key Manager for VMware and vCloud

Alliance Key Manager for Drupal

Alliance Key Manager for IBM Power Systems

Alliance Key Manager Cloud HSM

download the Whitepaper: Meet the Challenges of PCI Compliance


Understanding the Challenges of Data Protection in AWS


An excerpt from the latest white paper “How to Meet Best Practices for Protecting Information in AWS” by Stephen Wynkoop, SQL Server MVP, Founder & Editor of SSWUG.org

How to Meet Best Practices for Protecting Information in AWS by Stephen Wynkoop

Working in the cloud presents several challenges unique to that environment, including significant growth and change in the area of data protection and encryption. There is much confusion about what is - and is not - encrypted and protected. The encryption of information, and the management of the keys and access controls, is a core objective of this paper. If you can render information useless when it is accessed illegitimately, you have successfully addressed a whole host of regulations, compliance requirements, and best practices.

The very definition of protection by cloud providers is an important part of understanding the requirements and challenges of your configurations and information protection. AWS approaches data protection in several ways that impact your systems. The first is the configuration and design of your infrastructure. This consideration includes establishing Virtual Private Clouds (VPC) and providing for encryption of some information stores. The challenge exists in understanding the protection of these information stores and determining what you need to do to bring these protections in line with your requirements and compliance areas.

As you consider your systems, data protection will come down to several important areas:

  • Physical access controls – This refers to the doors, secure access controls and other protections at the physical server and server room level.
  • Logical access controls for your systems – These are the controls you put in place to prevent unwanted access to information.
  • Data access – Data access controls are typically enforced at the information stores level.
  • Protection of data in case of a breach – This is addressed by making the information in your systems unusable if accessed in a way that is unwanted.

Stephen’s white paper also covers the impact on data protection in public vs. private clouds, security fundamentals in AWS, and the best practices for deploying an encryption key management solution including:

  • Segregation of Duties
  • Dual Control and Split Knowledge
  • Key Creation (and understanding strong keys)
  • Key Rotation
  • Protection of Keys
  • Access Controls and Audits (Logging)

In his white paper, Stephen also discusses cloud-provider-based key management services and some of the important features, options, questions, and concerns that should be considered before selecting a service or a key management solution. Some important aspects to understand are:

  • Control, Ownership, and Access - By managing your own encryption services and providing for industry-compliant key management and data protection practices, you help ensure that your data remains managed by your own secure keys.
  • Multi-Tenancy and Key Management - In a worst-case scenario, it is possible that keys could be compromised.
  • Access to Keys - Many systems and architectures are based on hybrid solutions. Cases where there are systems on-premises combined with systems in the cloud are areas that will be problematic with the AWS services. Systems not on the AWS hosted services will not have access to the key management services on AWS.

There are many different considerations when thinking about the choices in your key management solution. Be sure to fully understand logs, key management, backups and other elements that provide the utility you require. Finally, be sure you’re checking for proper compliance and certification of the solutions you are considering. It is important that any solution you choose has been through a FIPS 140-2 validation, and that you have a full understanding of any PCI, HIPAA or other regulatory body requirements.

Please download the full document to learn more about protecting information in Amazon Web Services and how Townsend Security’s Alliance Key Manager for AWS provides a FIPS 140-2 compliant encryption key manager to AWS users who need to meet data privacy compliance regulations and security best practices.

How to Meet Best Practices for Protecting Information in AWS by Stephen Wynkoop


Data Protection in the Cloud & PCI DSS - Segmentation (Part 2)


This is the second part in our series looking at recent announcements by Amazon, Microsoft and others regarding new encryption and key management services. Let’s talk about the concept of segmentation as a security best practice, and as a strong recommendation of the PCI DSS security standards. Since the PCI DSS guidelines implement common security best practices, they are a good jumping-off point for evaluating the security of any application and platform. Following the practice of the first part of this series, we will use the PCI document “PCI DSS Cloud Computing Guidelines, Version 2.0” as our reference point. Even if you don’t have to meet PCI data security requirements, this should be helpful when evaluating your security posture in the cloud.

Download Whitepaper on PCI Data Security

Segmentation as a security concept is very simple and very fundamental. Better security can be achieved by not mixing trusted and untrusted applications, data, and networks. This concept of trusted and untrusted applications extends to the value of the data assets – when applications process highly sensitive and valuable data assets they need to be separated into trusted and secure environments. We expend more effort and resources to protect what is valuable from criminals. Conversely, when there are no valuable data assets in an environment there is no need to take the same level of effort to secure them.

This is the core reason that PCI DSS recommends segmentation of applications that process payments from non-payment applications. Here is what PCI says about non-cloud applications:

Outside of a cloud environment, individual client environments would normally be physically, organizationally, and administratively separate from each other.

So, how do the PCI DSS security requirements relate to cloud platforms? Here is what PCI says (emphasis added):

Segmentation on a cloud-computing infrastructure must provide an equivalent level of isolation as that achievable through physical network separation. Mechanisms to ensure appropriate isolation may be required at the network, operating system, and application layers; and most importantly, there should be guaranteed isolation of data that is stored.

Proper segmentation is difficult to achieve even when you have complete control over all aspects of your environment. When you add the inherently shared and multi-tenant architecture of cloud platforms this becomes a high hurdle to get over. Here is what PCI says about this challenge:

Client environments must be isolated from each other such that they can be considered separately managed entities with no connectivity between them. Any systems or components shared by the client environments, including the hypervisor and underlying systems, must not provide an access path between environments. Any shared infrastructure used to house an in-scope client environment would be in scope for that client’s PCI DSS assessment.

This brings us exactly to the concern about the new cloud key management services in Azure and AWS. These new services are inherently multi-tenant, from the key management service itself down to the hardware security modules (HSMs) that provide the ultimate security for encryption keys. You have no idea who you are sharing the service with.

The PCI guidance tells us what this segmentation looks like in a cloud environment:

A segmented cloud environment exists when the CSP enforces isolation between client environments. Examples of how segmentation may be provided in shared cloud environments include, but are not limited to: 

  • Traditional Application Service Provider (ASP) model, where physically separate servers are provided for each client’s cardholder data environment.
  • Virtualized servers that are individually dedicated to a particular client, including any virtualized disks such as SAN, NAS or virtual database servers.
  • Environments where clients run their applications in separate logical partitions using separate database management system images and do not share disk storage or other resources.

There is no cloud service provider implementation of key management services that meets these basic requirements.

The PCI DSS guidance takes a pretty strong view about inadequate segmentation in cloud environments:

If adequate segmentation is not in place or cannot be verified, the entire cloud environment would be in-scope for any one client’s assessment. Examples of “non-segmented” cloud environments include but are not limited to:

  • Environments where organizations use the same application image on the same server and are only separated by the access control system of the operating system or the application.
  • Environments where organizations use different images of an application on the same server and are only separated by the access control system of the operating system or the application.
  • Environments where organizations’ data is stored in the same instance of the database management system’s data store.

Since key management systems are always in scope for PCI audit and are extensions of your application environment and depend entirely on the access control system of the cloud provider, it is difficult to see how these new cloud key management services can meet PCI DSS requirements as currently implemented.

Here’s the last comment by PCI on segmentation in cloud environments:

Without adequate segmentation, all clients of the shared infrastructure, as well as the CSP, would need to be verified as being PCI DSS compliant in order for any one client to be assured of the compliance of the environment. This will likely make compliance validation unachievable for the CSP or any of their clients.

Does this mean you can’t implement security best practices for key management in the cloud? I don’t think so. There are multiple vendors, including us (see below), who offer cloud key management solutions that we believe can be effectively isolated and segmented on cloud platforms, or even hosted outside of the cloud.

In part 3 of this series we’ll look at the topic of logging and active monitoring and how it affects the security of your key management solution in the cloud.

Patrick


Resources

Alliance Key Manager for AWS

Alliance Key Manager for Azure

Alliance Key Manager for VMware and vCloud

Alliance Key Manager for Drupal

Alliance Key Manager for IBM Power Systems

Alliance Key Manager Cloud HSM

download the Whitepaper: Meet the Challenges of PCI Compliance



Data Protection in the Cloud & PCI DSS - Encryption and Key Management (Part 1)


Public and private organizations of all sizes are rapidly adopting cloud platforms and services as a way of controlling costs and simplifying administrative tasks. One of the most urgent concerns is addressing the new security challenges inherent in cloud platforms, and meeting various compliance regulations. Data protection, especially encryption and encryption key management, are central to those security concerns.

Download Whitepaper on PCI Data Security

The recent announcements by Amazon, Microsoft and others regarding new encryption and key management services make it more urgent to understand the security and compliance implications of these cloud security services.

There are a number of sources for security best practices and regulatory rules for data encryption, including the National Institute of Standards and Technology (NIST), the Cloud Security Alliance (CSA), the Payment Card Industry Security Standards Council (PCI SSC), the EU Data Protection Directive, and others.

Because so many organizations fall under the PCI Data Security Standards (PCI DSS) regulations, and because the PCI guidelines are mature and often referenced by security auditors, we will use the PCI recommendations and guidance as the basis for our discussion in this multi-part series.

For securing information in the cloud, the PCI Security Standards Council published the document “PCI DSS Cloud Computing Guidelines, Version 2.0” in February of 2013. This is the current guidance for PCI DSS compliance and provides recommendations and guidance for any organization that needs to meet PCI data security requirements. It is also a common benchmark for organizations who do not need to meet PCI DSS standards, but who want to meet security best practices for protecting sensitive data in the cloud. 

Disclaimer: Townsend Security, Inc. is not a Qualified Security Assessor (QSA) and the opinions in this article are not intended to provide assurance of PCI DSS compliance. For PCI DSS compliance validation, please refer to an approved QSA or request a referral from Townsend Security.

First things first: Let’s tackle the most fundamental question - Who is responsible for data security in the cloud?

This one is easy to answer. You are! Not your Cloud Service Provider (CSP), not your QSA, and no one else. You are ultimately responsible for ensuring that you meet security best practices and PCI DSS guidance on your cloud platform.

This can be confusing when you are new to cloud computing. Cloud Service Providers often make this more confusing by claiming to be PCI compliant, and you might infer that you are PCI compliant as a result of moving to their cloud. This is wrong. The PCI SSC makes it clear that you bear full and ultimate responsibility for ensuring PCI DSS compliance. No major cloud service provider can make you PCI compliant just by implementing on their cloud platform. You will have to work with your CSP to ensure compliance, and that is your responsibility under these standards.

Now let’s look at the PCI cloud guidance in a bit more detail. Here is what they say in the PCI DSS cloud guidance (emphasis added):

Much stock is placed in the statement “I am PCI compliant”, but what does this actually mean for the different parties involved?

Use of a PCI DSS compliant CSP does not result in PCI DSS compliance for the clients. The client must still ensure they are using the service in a compliant manner, and is also ultimately responsible for the security of their CHD—outsourcing daily management of a subset of PCI DSS requirements does not remove the client’s responsibility to ensure CHD is properly secured and that PCI DSS controls are met. The client therefore must work with the CSP to ensure that evidence is provided to verify that PCI DSS controls are maintained on an ongoing basis—an Attestation of Compliance (AOC) reflects a single point in time only; compliance requires ongoing monitoring and validation that controls are in place and working effectively.

Regarding the applicability of one party’s compliance to the other, consider the following:
a) If a CSP is compliant, this does not mean that their clients are.
b) If a CSP’s clients are compliant, this does not mean that the CSP is.
c) If a CSP and the client are compliant, this does not mean that any other clients are.

The CSP should ensure that any service offered as being “PCI compliant” is accompanied by a clear and unambiguous explanation, supported by appropriate evidence, of which aspects of the service have been validated as compliant and which have not.

Great: now you know that you are responsible for PCI DSS compliance, and that you have to work with your cloud service provider to ensure that their services and components are PCI DSS compliant and that you have deployed them in a compliant way.

Are the new cloud key management services PCI DSS compliant?

No.

At the time I am writing this (February 2015) there has been no claim of PCI DSS compliance by Microsoft for the new Azure Key Vault service, nor by Amazon for the new AWS Key Management Service, and there is no Attestation of Compliance (AOC) available for either service.

So what should you do?

In this case the PCI cloud guidance is very clear (emphasis added):

CSPs that have not undergone a PCI DSS compliance assessment will need to be included in their client’s assessment. The CSP will need to agree to provide the client’s assessor with access to their environment in order for the client to complete their assessment. The client’s assessor may require onsite access and detailed information from the CSP, including but not limited to:

  • Access to systems, facilities, and appropriate personnel for on-site reviews, interviews, physical walk-throughs, etc.
  • Policies and procedures, process documentation, configuration standards, training records, incident response plans, etc.
  • Evidence (such as configurations, screen shots, process reviews, etc.) to show that all applicable PCI DSS requirements are being met for the in-scope system components
  • Appropriate contract language, if applicable

More from the PCI cloud guidance (note that cloud encryption key management is a Security-as-a-Service):

Security as a Service, or SecaaS, is sometimes used to describe the delivery of security services using a SaaS-based delivery model. SecaaS solutions not directly involved in storing, processing, or transmitting CHD may still be an integral part of the security of the CDE. As an example, a SaaS-based anti-malware solution may be used to update anti-malware signatures on the client’s systems via a cloud-delivery model. In this example, the SecaaS offering is delivering a PCI DSS control to the client’s environment, and the SecaaS functionality will need to be reviewed to verify that it is meeting the applicable requirements.

This means that you, or your security auditor, will have to perform the full PCI data security assessment of the cloud service provider’s encryption and key management service and that the CSP will have to grant you full access to their facilities, staff, and procedures to accomplish this.

For both practical and logistical reasons that is not likely to happen. Most CSPs do not allow their customers unfettered access to their staff and facilities. Even if you could negotiate this, it may not be within your budget to hire a qualified auditor to perform the extensive initial and ongoing reviews that this requires.

The only reasonable conclusion you can draw is that the new encryption and key management services are not PCI DSS compliant at the present time, and you are not likely to achieve compliance through your own efforts.

In the next part of this series we will look at other security and compliance concerns about these new cloud key management services. We still have some distance to go on this topic, but if cloud security is a concern, I think you will find this helpful!

download the Whitepaper: Meet the Challenges of PCI Compliance


Is Drupal Ready for the Enterprise?


Drupal is growing up. Currently, over one million websites run on Drupal. It is the CMS of choice for the Weather Channel, American Express and the White House. And with Drupal 8 right around the corner, promising to bring new features and capabilities, it is a very attractive CMS for agencies who need to build solutions for their clients.

What Data Needs To Be Encrypted In Drupal?

While these are some major wins for the platform, there is still a large segment of enterprises not quite ready to adopt Drupal. With headlines like “Drupal Sites, Assume You’ve Been Hacked” (after Drupalgeddon), it is easy to understand why there may be some hesitation. Security is a top concern for enterprises, and they are scrutinizing anything that collects and stores personally identifiable information (PII) on behalf of their brand.

Increased security requirements are now trickling down via RFPs to agencies and developers who need to choose a CMS. As far as these enterprises are concerned, they don’t care what CMS is used on their project, as long as it can: (1) help manage their risk of a data breach and (2) meet compliance requirements. Enterprises know that the regulations they fall under (PCI DSS, HIPAA, FISMA, etc.) can be unforgiving in the event of a breach, especially if the proper technology was not in place.

So, is Drupal ready for the enterprise?

As a platform, yes-ish. Enterprises have security requirements that go beyond what is available in Drupal core. Fortunately, there are members within the Drupal community that understand this and have developed modules and services that easily integrate into Drupal installations. It is now up to developers to use these tools to build secure, enterprise-ready web sites and applications.

In order to win bids, developers not only need to know how to code, they also need to know security best practices. Concepts like dual control and separation of duties are now considerations when planning for web site security. Developers are also learning what compliance regulations consider sensitive information – data that they previously didn’t think twice about leaving unprotected. Beyond the obvious – credit card and social security numbers – email addresses, phone numbers, and zip codes can constitute PII that needs to be encrypted.

Slide from Hugh Forrest’s (Director, SXSW Interactive Festival) keynote at DrupalCon 2014 on the importance of encryption.

The importance of encryption and key management cannot be overstated. Encryption is the hardest part of data security, and key management is the hardest part of encryption. Storing encryption keys within the Drupal database, settings file, or even a protected file on the server will never pass an enterprise security team’s sniff test or compliance audit. Hackers don’t break encryption; they find keys. Without key management, developers are leaving their private data open to any hacker who cares to take a look.

“We recently began talks with a Fortune 100 company regarding their platform,” said Chris Teitzel, CEO of Cellar Door Media. “Encryption was a requirement for this project. The primary question the security team brought up was how we are able to manage the keys.”

With the importance of encryption and key management realized, who is responsible for implementing it? Unfortunately, no major Drupal hosting provider currently offers it within their environments. However, it is not difficult to deploy using modules like Encrypt, Key, Form Encrypt, Field Encrypt, and Key Connection. These modules all have integrations with services that allow for encryption and key management outside of the Drupal installation.

It is also important to note that, while hosting providers can claim they are compliant with your specific regulation, their compliance does not extend to you, the developer. Hosting providers can attain an Attestation of Compliance (AOC) for their platform, however, it does not extend to what their customers do within their environments. Additionally, regulations like PCI DSS make it clear that in the event of a breach, it is the enterprise, not the hosting provider or development agency, that is responsible.

For Drupal to truly be a contender in the enterprise space (and it clearly is), members of the Drupal community must understand how to secure data within their deployments, who is responsible in the event of a breach (their clients), and what they need to do to protect sensitive data in Drupal.

What Data Needs To Be Encrypted In Drupal?


Anthem Data Breach - We Are Taking the Wrong Lesson About Encryption


We are taking the wrong lesson about encryption from the Anthem data breach. Several “experts” are weighing in with the opinion that encryption would not have prevented the breach, and even that Anthem should not bother implementing encryption to protect patient information! We don’t have a lot of information about the breach, but apparently the credentials of one or more system administrators were acquired by the attackers and used to access servers with sensitive patient data. So, if the attackers have the credentials of privileged users, it’s game over, right?

The Encryption Guide eBook

Well, hold on, Cowboy: you are taking the wrong lesson from this data breach!

Let’s start from the top. We have to use ALL of the tools at hand to deploy a defense-in-depth approach to protect our data. This means we need firewalls, intrusion detection, active monitoring, data leak prevention, anti-virus, two-factor authentication, and everything else available to our security team to protect that information. Further, it would be irresponsible not to consider encryption an essential component of a defense-in-depth strategy.

I am sure that Anthem already has a large number of these tools and defenses deployed in their environment. Should they just unplug them all and throw up their hands? Is surrender the best approach given the intelligence and persistence of dedicated attackers? 

Of course not, surrender should not even be in our vocabulary!

Encryption and related encryption key management tools are critical for any company that wants to protect the sensitive information of its customers (or patients, in the case of Anthem), employees, and business partners. Encryption is mandated by many compliance regulations, such as PCI DSS, which requires merchants and payment processors to encrypt credit card account numbers. It is highly recommended for protecting patient information by anyone who, like Anthem, is a Covered Entity under HIPAA regulations (any bets on how soon that will move from “recommended” to “required” status?). All serious security professionals know that encryption is a critical security component and recommend it as part of an organization’s security strategy.

Does this mean encryption is the perfect defense? Of course not. With sufficiently privileged access to sensitive data, even encryption may not be able to prevent a breach.

Encryption raises the bar for the attacker: it narrows the attack surface and makes an attack more difficult. Unlike the situation at Anthem, in many cases an attacker compromises a non-privileged account and steals the database full of sensitive information. If the sensitive data is not encrypted, the data is lost. If the data is encrypted and you’ve protected the encryption key, the data is safe. Effective defenses involve a layered approach and constant vigilance. If we use all of our tools effectively, including encryption, we have a very good chance of detecting an attack early and thwarting it.

A few months ago Adobe suffered a breach and lost millions of records. But the most sensitive data was encrypted. That story basically went away in a couple of days. Target and Sony also suffered large data breaches – do you think they wish they had been encrypting their data? You bet they do! Their stories never seem to go away.

Delay, hopelessness, and surrender are not going to help and are not justified.

This is the lesson we need to learn about encryption.

Patrick

The Encryption Guide eBook

VMware Encryption - 9 Components of a Defensible Encryption Strategy


VMware Encryption eBook

We all know that encrypting sensitive data such as customer, employee, and business-critical data is crucial to protecting your company’s assets; encryption is also required by industry regulations such as PCI DSS and GLBA/FFIEC. Today businesses are turning to VMware virtual machines and the cloud to reduce cost and complexity within their IT environments. When companies set out to encrypt sensitive data that is stored or processed in VMware, meeting industry regulations is top of mind. Businesses also sometimes assume that meeting the encryption requirements of a regulation will protect them from a data breach as well. Unfortunately, passing a data security audit does not always guarantee a strong defense against a data breach. Where and how data is encrypted is often subjective to the auditor; one auditor might give your encryption solution a passing grade while another might fail you. If you are only looking for a passing grade, you may be implementing the bare minimum requirements. When you consider the possible deviation from one auditor to the next, it becomes clear that meeting compliance is often a low bar.

At Townsend Security we help our customers not only meet compliance, but achieve a level of security in their VMware environment that will protect them in the event of a data breach. Our new eBook, “VMware Encryption: 9 Critical Components of a Defensible Encryption Strategy,” discusses nine strategies for ensuring your VMware encryption strategy is strong enough to protect your business in the event of a data breach.

Download this eBook to learn more about these critical components and more:

1. Establish a VMware Security Roadmap
The first step in securing your VMware environment is to establish a security roadmap. Determine how encryption and key management in VMware fit into a holistic security plan, and assess the security requirements that compliance regulations mandate. Assess your level of risk tolerance for the types of data you want to protect. It’s important to keep in mind that compliance regulations may not mandate the protection of some data, such as email addresses and passwords; however, you may want to encrypt this data in order to protect your brand and reputation should it be breached. At an IT level, as with other security applications that perform intrusion detection/prevention and active monitoring, you should deploy your encryption key management virtual machine in a separate security workgroup and provide administrative controls in the same way as for other VMware and third-party security applications. [Download the eBook to read more]

2. Inventory and Prioritize Sensitive Data
Every encryption project should start by making an inventory of sensitive data in your IT environment. The first step is to define “sensitive data.” Sensitive data is any customer or internal data that you must protect in order to meet compliance requirements or protect your customers, employees, and yourself from data theft and fraud. The scope of what is considered “sensitive data” and how hackers use data to commit fraud is growing. However, if you do not know where to start, first consider the compliance regulations you fall under. [Download the eBook to read more]

3. Use Industry Standard AES Encryption
Encryption protects your data at the source and is the only way to definitively prevent unwanted access to sensitive data. Academic and professional cryptographers have given us a number of encryption algorithms that you can use to protect sensitive data. Some have interesting names like Twofish, Blowfish, Serpent, Homomorphic, and GOST; however, it is critical in any professional business to use encryption algorithms accepted as international standards. Many compliance regulations require the use of standard encryption, such as AES, a globally recognized encryption standard, for encrypting data at rest. [Download the eBook to read more]

4. Encryption Key Management

Many organizations that encrypt sensitive data fail to implement an adequate encryption key management solution. While encryption is critical to protecting data, it is only half of the solution. Your key management will determine how effective your encryption strategy ultimately is. When encrypting information in your applications and databases, it is crucial to protect encryption keys from loss. Storing encryption keys with the data they protect, or using non-standard methods of key storage, will not protect you in the event of a data breach. For businesses that are already encrypting data, the most common cause of an audit failure is improper storage and protection of the encryption keys. [Download the eBook to read more]

Download “VMware Encryption: 9 Critical Components of a Defensible Encryption Strategy,” to learn 5 more critical components! Learn how to protect your customers, secure your business assets, avoid regulatory fines, and protect your brand.

VMware Encryption eBook


PGP on IBM System z Mainframes


With the new z13 model, IBM announced another round of enhancements and improvements to the venerable IBM System z Mainframe. Focusing on mobile and social media integration, IBM is yet again modernizing and extending this high-end enterprise server.

PGP Encryption Trial IBM i

While the IBM System z Mainframe has a well-earned reputation for security, how do Mainframe customers protect their data as they move towards more open, internet-based mobile and social media integration?

Pretty Good Privacy (PGP) is one path to provable and defensible security, and PGP Command Line is the de facto standard for enterprise customers.

PGP is one of the most commonly accepted and widely deployed whole file encryption technologies that has stood the test of time. It works on all of the major operating system platforms and makes it easy to deploy strong encryption to protect data assets. And it runs on the IBM System z Mainframe!

For about a decade we at Townsend Security have been bringing PGP encryption to Mainframe customers to help them solve some of the most difficult problems with encryption. As partners with Symantec, we provide IBM enterprise customers running IBM System z and IBM i (AS/400, iSeries) with the same strong encryption solution that runs on Windows, Linux, Mac, Unix, and other platforms.

Incorporating the OpenPGP standard, PGP Command Line from Townsend Security, backed by Symantec, is compatible with a variety of open source PGP encryption solutions while adding features to warm the heart of IBM Mainframe customers. And this is the same PGP whose underlying PGP SDK has been through multiple FIPS 140-2 validations and is FIPS 140-2 compliant today.

While retaining the core functions of PGP and the standards-based approach to encryption, we’ve been busy extending PGP specifically for the IBM Mainframe customer. Here are just a few of the things we’ve done with PGP to embrace the IBM Mainframe architecture:

  • Native z/OS Batch operation
  • Support for USS operation
  • Text mode enhancements for z/OS datasets
  • Integrated EBCDIC to ASCII conversion using built-in IBM facilities
  • Simplified IBM System z machine and partition licensing
  • Support for self-decrypting archives targeting Windows, Mac, and Linux!
  • A rich set of working JCL samples
  • Free evaluation on your own IBM Mainframe

IBM Mainframe customers never have to transfer data to a Windows or Linux server to perform encryption, exposing data to loss on those platforms in the process. With full cross-platform support you can encrypt and decrypt data on the IBM Mainframe regardless of its origination or destination.
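
For batch-style automation, PGP Command Line is typically driven from scripts. Here is a short, hypothetical Python sketch that shells out to the pgp binary to encrypt a file for a recipient; the file name and recipient are invented for illustration, and the exact options should be verified against the PGP Command Line documentation for your release.

    import subprocess

    # Hypothetical example: encrypt a batch output file to a partner's public key.
    # Verify option names against your PGP Command Line release documentation.
    subprocess.run(
        ["pgp", "--encrypt", "daily-claims.txt", "--recipient", "partner@example.com"],
        check=True,
    )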

PGP Command Line is the gold standard for whole file encryption, and you don’t have to settle for less.

Patrick

PGP Encryption for IBM


Securing Web Sites and Applications with Encryption & Key Management


Web site and application data security can be greatly enhanced by encrypting sensitive data. An encryption strategy is only as good as the protection of the encryption keys. Poor protection for encryption keys will lead to compliance audit failures, regulatory failures, and brand damage due to poor security practices.

The Encryption Guide eBook

The following topics discuss how encryption and key management improve web application security:

Separation of Encryption Keys from Data
The separation of encryption keys from the data they protect is a core security best practice. Cybercriminals may steal sensitive data, but if that data is encrypted and the keys are not readily available, the data remains protected. The separation of keys from the data they protect is also fundamental to implementation of Separation of Duties and Dual Control. Townsend Security's Alliance Key Manager provides the mechanism by which keys are separated from the data they protect.

Separation of Duties
For critical systems, security is always improved by dividing responsibility among multiple administrators. In data protection, this concept means that people who have access to the data (users, DBAs, etc.) should not be the people who have access to the encryption keys. And the reverse is true. In order to achieve Separation of Duties you must separate the system, network, and database functions from the encryption key management functions. This is a core concept in PCI-DSS, HIPAA, GLBA/FFIEC, and other regulations. Alliance Key Manager provides for Separation of Duties by allowing different people to manage the web application data and the management of the encryption keys.

Dual Control
All critical business operations that can impact the health and existence of an organization should be managed with Dual Control. Dual Control means that it takes two individuals to perform the critical operation. Because encryption keys are the crucial secret that must be protected, Dual Control means that at least two people must authenticate to create and manage encryption keys. Alliance Key Manager implements Dual Control in the security console to meet this security best practice and regulatory requirement.

Limited Access
Security best practices require that as few people as possible have access to encryption keys, to minimize the risk of loss. By managing encryption keys in a key manager designed for this purpose, keys can be used by the applications that need them but managed by a small number of security administrators. Alliance Key Manager allows you to grant access to only those security administrators who need to manage the encryption keys.

Secure Key Retrieval
Encryption keys and the Encryption Services available with Alliance Key Manager are always accessed via encrypted TLS connections. Secure connections help prevent capture of encryption keys across public and private networks, memory scraping routines, etc. Unencrypted access to Alliance Key Manager is not allowed.

Authenticated Key Retrieval
Unlike normal web servers which provide access to anyone with a certificate signed by a public certificate authority, Alliance Key Manager creates its own private CA unique to you, creates client-side certificates and private keys signed by that CA, and restricts access to only those clients who present a known certificate. This prevents outsiders from accessing the key server using publicly available certificates and keys.
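
As a rough sketch of what authenticated key retrieval looks like from the client side, the following Python fragment opens a mutually authenticated TLS connection using the key server's private CA certificate and the client certificate and key it issued. The host name, port, file names, and the request line are placeholder assumptions for illustration, not Alliance Key Manager's actual wire protocol.

    import socket
    import ssl

    # Trust only the key server's private CA, and present our client credentials.
    context = ssl.create_default_context(cafile="key-manager-ca.pem")
    context.load_cert_chain(certfile="client-cert.pem", keyfile="client-key.pem")

    with socket.create_connection(("keymanager.example.com", 6003)) as sock:
        with context.wrap_socket(sock, server_hostname="keymanager.example.com") as tls:
            # The server rejects clients whose certificates were not signed by its
            # private CA, so certificates from public CAs will not work here.
            tls.sendall(b"retrieve-key AES256-KEY-01\n")  # hypothetical request line
            response = tls.recv(4096)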

Protection of Credentials
Because certificates and private keys are used as credentials for access to Alliance Key Manager, they must be protected in the Web application server. Credentials should be stored outside of the web root directory and access permission should only be granted to the web application user. For a Drupal installation, the same precautions should be taken.

Active Monitoring
Active monitoring is a core security requirement and applies to all encryption key management activity. Alliance Key Manager provides real-time audit and system logging of all key retrieval, encryption services, and key management tasks. This helps meet regulatory requirements and security best practices for all key management activity.

For more information on encryption, download the eBook:

The Encryption Guide eBook
