Excerpt from the eBook "2016 Encryption Key Management: Industry Perspectives and Trends."
While organizations are now committed to implementing encryption, they are still struggling with getting encryption key management right. With all major operating systems, cloud platforms, and virtualization products now supporting encryption, it is relatively easy to make the decision to activate encryption to protect sensitive data. But an encryption strategy is only as good as the method used to protect encryption keys. Most audit failures for customers already using encryption involve the improper storage and protection of encryption keys.
Ignorance and fear are the driving reasons for this core security failure. Many IT professionals are still not versed in best practices for encryption key management, and IT managers fear that the loss of encryption keys, or the loss of access to a key manager, will render their data unusable. This leads to insecure storage of encryption keys in application code, in unprotected files on the data server, and in poorly protected local key stores.
Most encryption key management solutions have evolved over the last decade to provide unparalleled reliability and redundancy. This has largely removed the risk of key loss in critical business databases and applications. But the concern persists and inhibits the adoption of defensible key management strategies.
- Protect encryption keys with single-purpose key management security solutions.
- Never store encryption keys on the same server that houses sensitive data.
- Only deploy encryption key management solutions that are based on FIPS 140-2 compliant technology.
- Only deploy encryption key management solutions that implement the KMIP industry standard for interoperability.
- Avoid cloud service provider key management services where key management and key custody are not fully under your control.
Cloud Migration and Key Management Challenges
Cloud migration continues to be a high priority for organizations large and small. The benefits of migrating to the cloud are clear. Reduced cost for computing power and storage, leverage of converged infrastructure, reduced IT administrative costs, on-demand scalability, and many other benefits will continue to drive the rapid migration to cloud platforms. As cloud platforms such as Amazon Web Services (AWS), Microsoft Azure, Google App Engine and Compute Engine, and IBM SoftLayer mature, we can expect the pace of cloud migration to accelerate.
While cloud service providers now offer some encryption key management capabilities, this area will continue to be a challenge. The question of who has control of the encryption keys (key custody), and the shared resources of multi-tenant cloud service providers, will continue to be headaches for organizations migrating to the cloud. The ability to exclusively manage encryption keys and deny access to the cloud service provider or any other third party will be crucial to a good cloud key management strategy and to end-customer trust. Attempts by governments and law enforcement agencies to reach encrypted data through access to encryption keys will make this issue even more difficult going forward.
Unfortunately, most cloud service providers have not adopted common industry standards for encryption key management. This leaves customers unable to easily migrate from one cloud platform to another, resulting in cloud service provider lock-in. Given the rapid evolution and relative infancy of cloud computing, customers will have to work hard to avoid this lock-in, especially in the area of encryption key management. This is unlikely to change in the near future.
- Avoid hardware-only encryption key management solutions prior to cloud migration. Make sure your key management vendor has a clear strategy for cloud migration.
- Ensure that your encryption key management solution runs natively in cloud, virtual and hardware platforms.
- Ensure that your encryption key management solution provides you with exclusive management of and access to encryption keys. Neither your cloud service provider nor your encryption key management vendor should have administrative or management access to keys. Backdoor access through common keys or key management code is unacceptable.
- Avoid cloud service provider lock-in to proprietary key management services. The cloud is still in its infancy and retaining your ability to choose and migrate between cloud platforms is important.
"This article was originally posted on Drop Guard’s blog. Drop Guard automates Drupal updates to ensure security of your site within minutes after a security release."
Encryption has gone mainstream. Thanks to numerous data breaches (781 during 2015 in the U.S. alone), data security is a top priority for businesses of all sizes. Semi-vague language like "we ensure our data is protected" from IT teams is no longer good enough to satisfy the concerns of business executives and their customers. CEOs are losing their jobs, and companies are suffering financial losses and fines that reach into the millions of dollars, when poorly encrypted or unencrypted data is lost.
Fortunately, the recent 2016 Global Encryption Trends Study shows a growing shift toward businesses developing an encryption strategy – Germany (61%) and the U.S. (45%) are in the lead, with the primary drivers including:
- To comply with external or data security regulations
- To protect enterprise Intellectual Property (IP)
- To protect information against specific identified threats
- To protect customer personally identifiable information (PII)
Before getting much deeper, let's first clear up a common misperception. Yes, the big brands are in the headlines, but they aren't the primary target of hackers. A recent threat report from Symantec found that three out of every five cyber-attacks target small and midsize companies. These businesses are targeted because they have weaker security controls and have not taken a defense-in-depth approach to security. Even though some of these businesses may be encrypting their data, chances are they are not doing it correctly.
How do you then define what properly encrypting data involves? First, it means using an industry standard encryption algorithm such as the Advanced Encryption Standard (AES), alternatively known as Rijndael. Standards are important in the encryption world. Standard encryption algorithms receive the full scrutiny of the professional cryptographic community. Further, many compliance regulations such as PCI-DSS, HIPAA/HITECH, FISMA, and others are clear that only encryption based on industry standards meet minimal regulatory requirements. Standards bodies such as NIST, ISO, and ANSI have published standards for a variety of encryption methods including AES.
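To make the first requirement concrete, here is a minimal Python sketch of standards-based encryption with AES-256-GCM. It uses the widely deployed third-party cryptography package; the package choice and the function names are my own illustration, not from the original article:

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_record(key: bytes, plaintext: bytes) -> bytes:
    """AES-256-GCM encrypt; the random 96-bit nonce is prepended to the result."""
    nonce = os.urandom(12)  # never reuse a nonce under the same key
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_record(key: bytes, blob: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

key = AESGCM.generate_key(bit_length=256)  # a 256-bit AES key
blob = encrypt_record(key, b"jane.doe@example.com")
assert decrypt_record(key, blob) == b"jane.doe@example.com"
```

GCM is an authenticated mode, so tampering with the ciphertext causes decryption to fail outright rather than silently returning garbage.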
But wait, my web sites don’t collect credit cards or social security numbers. Why should I care about encryption?
You would be surprised at what can be considered Personally Identifiable Information and should be encrypted. PII now includes information such as (but not limited to):
- Email address
- Login name
- Date of birth
- IP address
For a moment, pause and consider how many websites you have built that collect this type of information. As you will quickly realize, even the most basic marketing websites that collect a name and email address should deploy encryption.
The second part of properly encrypting data involves key management and understanding the important role that it plays. Your encryption key is all that stands between your data and those that want to read it. So then, what good is a locked door if the key is hidden underneath the “Welcome” mat?
Security best practices and compliance regulations say that encryption keys should be stored and managed in a different location than the encrypted data. This can present a challenge for Drupal developers, as the standard practice has been to store the key on the server outside the web root. While there is a suite of great encryption modules (Encrypt, Field Encryption, Encrypted Files, etc.), they unfortunately all stash the key in similarly insecure locations – in the Drupal database, in the settings file, or in a protected file on the server. Once hackers compromise a site, they can take the encryption keys and gain access to all the sensitive data.
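The underlying principle is envelope encryption: the data-encryption key never sits unprotected next to the data, and the key that wraps it lives in an external key manager. A hypothetical Python sketch, using the third-party cryptography package's Fernet recipe (all names illustrative):

```python
from cryptography.fernet import Fernet

# The key-encryption key (KEK) lives only in the external key manager.
kek = Fernet.generate_key()
# The data-encryption key (DEK) is what actually encrypts site data.
dek = Fernet.generate_key()

# What may sit near the Drupal database: ciphertext plus the *wrapped* DEK.
ciphertext = Fernet(dek).encrypt(b"date of birth: 1990-01-01")
wrapped_dek = Fernet(kek).encrypt(dek)

# Decryption forces a round trip to the key manager to unwrap the DEK,
# so a stolen database alone is useless to an attacker.
plaintext = Fernet(Fernet(kek).decrypt(wrapped_dek)).decrypt(ciphertext)
assert plaintext == b"date of birth: 1990-01-01"
```

The trade-off is that losing the KEK means losing everything, which is exactly why dedicated key managers emphasize mirroring, backup, and high availability.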
“Let it Key”
Let’s now bring this back to one of my favorite songs by the Beatles to paraphrase a line (excuse the pun), “Speaking words of wisdom, let it key.” The Key module provides the ability to manage all keys on a site from a singular admin interface. It can be employed by other modules (like the aforementioned encryption modules) to offload key management so the module is not responsible for protecting the key. It gives site administrators the ability to define how and where keys are stored, which, when paired with the proper key storage modules, allows the option of a high level of security and the ability to meet security best practices and compliance requirements. Drupal developers can now easily integrate enterprise level key management systems such as Townsend Security’s Alliance Key Manager or Cellar Door Media’s Lockr (both of these companies have been heavy sponsors of encryption module development).
Enterprise level key managers do more than just store keys outside of the Drupal installation. In addition to safeguarding encryption keys, they manage them through the entire lifecycle—from creation to destruction. Further, a key manager will allow for:
- Key naming and versioning
- Key change and rotation
- Secure key retrieval
- Key mirroring
- Key import and export
- Password and passphrase protection
- User and group control for key access
In addition to providing NIST-compliant AES encryption and FIPS 140-2 compliant key management, an encryption key manager also allows administrators to store their API keys outside of Drupal. Private API keys are frequently used within Drupal by services like Authorize.net, PayPal, MailChimp, etc., and all these modules store their keys in the database in the clear. If your site gets hacked, so does access to the services that you have integrated into your site. For example, if your Amazon S3 API key were in your stolen database, hackers would have access to your entire offsite S3 storage. Consider how detrimental it would be for a client to find out that a hacker has gained access to their PayPal account - after all, they were using PayPal because they didn’t want to deal with the security risks and liability of hosting their own payment processing.
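The remediation pattern is simple: application code references a secret by name only and resolves the value at runtime from a store outside the application database. A minimal, hypothetical Python sketch (the key-store interface here is invented for illustration; in production it would be a client for an external key manager):

```python
import os

def get_secret(name, store=None):
    """Resolve a secret by name from an external store -- never from app tables.

    `store` defaults to environment variables; a real deployment would query
    an external key manager instead (this interface is hypothetical).
    """
    store = os.environ if store is None else store
    try:
        return store[name]
    except KeyError:
        raise RuntimeError("secret %r is not provisioned" % name) from None

# Application config holds only the secret's name; no values live in the
# codebase or the database.
external_store = {"PAYPAL_API_KEY": "sk-example-not-real"}
assert get_secret("PAYPAL_API_KEY", external_store) == "sk-example-not-real"
```

A dump of the application database then yields key names, not key values.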
By routing all keys within Drupal to a central key management location, it is now possible to have total control over all encryption keys, API keys, external logins and any other value needing to remain secret. This gives developers and site builders a useful dashboard to quickly see the status of every key, and control where it is stored.
By now, I have hopefully conveyed the importance of encryption and key management. Everyone from developers to site owners needs to understand the importance of data security and the proper steps they can take to keep their sites, and their businesses, safe. More secure encryption also affects the Drupal platform as a whole. Whether Drupal is being used to create the most basic brochure site or an advanced enterprise web application, encryption and API key management are critical to ongoing success. Many businesses don't care whether their web site is built with Drupal, WordPress, Joomla, or even Microsoft Word (yes, this is still possible) as long as it is secure. By integrating the proper security controls, including encryption and key management, Drupal will continue to proliferate and be the tool of choice for enterprises looking for the highest level of data and site security.
Excerpt from the eBook "2016 Encryption Key Management: Industry Perspectives and Trends."
Virtualization Will Continue to Dominate IT Strategy & Infrastructure
Large and small enterprises will continue to grow their virtualization footprints at the same time that they are looking to migrate applications to the cloud. The cost reductions provided by market leader VMware will ensure that its customer base continues to consolidate applications and servers on its virtualization technology, and that VMware remains a powerful player in the IT infrastructure space for many years.
While VMware is the dominant technology provider for virtualization, we will see Microsoft attempt to increase their footprint with Hyper-V, and OpenStack solutions will also expand. We expect that all of the virtualization solution providers will attempt to define a clear path to the cloud based on their technologies. VMware is already moving in this direction with their vCloud Air initiative, and Microsoft uses Hyper-V as the foundation for the Azure cloud.
Encryption key management solutions that only run in hardware, or that only run on cloud platforms, present substantial obstacles for businesses with virtualized data centers. The rich set of virtualization management and security tools cannot work effectively with solutions that sit outside the virtualization boundary. Customers are looking to reduce their hardware footprint, not increase it. And solutions that can't be managed or secured in the usual way represent additional risk and cost. Encryption key management solutions should be able to run within the virtualization boundary as an approved security application. Key management vendors vary greatly in their ability to support the range of deployments from the traditional IT data center, to virtualized platforms, to the cloud. Organizations will continue to struggle with key management across these environments.
- Encryption key management solutions should be able to run as fully native virtual machines in a VMware or Hyper-V environment.
- Encryption key management solutions should be compatible with security and management functions of the virtual platform.
- To maintain maximum business flexibility, deploy a key management solution that works well in virtual, cloud, and traditional hardware platforms.
- Look for key management solutions that carry industry security certifications, such as validation under the PCI Data Security Standard (PCI DSS).
Key Management Vendor Stability Loses Ground
Mergers and acquisitions in the security community continue at a rapid pace. Encryption key management vendors are being absorbed into larger organizations, and this trend will likely continue. The public relations around such mergers and acquisitions are always accompanied by glowing prognostications and happy talk. Unfortunately, as often happens with any merger, key management vendors may experience disruption in their organizations as a result of a merger or acquisition. A key management solution may not be strategically important to an acquirer, and this can result in disinvestment in the solution, negatively impacting customer support. Key management is a part of an organization's critical infrastructure, and these changes can be disruptive.
Organizations can work to minimize the potential impact of key management vendor consolidation by understanding the vendor's organizational structure, corporate history, and financial basis. Venture-backed organizations can be expected to seek an exit through a merger, acquisition, or public offering. Vendors whose solutions are not strategically important to their product mix can also experience change and disruption. Using care in key management vendor selection may be one of the most important efforts you can make. This will be a continuing challenge in the years ahead.
- Understand your key management vendor’s equity foundation and the likelihood of a merger or acquisition. If the key management vendor is largely funded by venture capital it is almost certain that the company will experience a merger or acquisition event.
- Understand your key management vendor’s management team. Have key employees been with the company for a longer period of time? This is one good indicator of organizational stability.
Vendor Customer Support is a Growing Concern
As mentioned previously, encryption key management vendors continue to be absorbed into larger organizations, and this trend will likely continue. Unfortunately, as can happen with any merger, key management vendors may experience disruption in their organizations as a result of a merger or acquisition. This can directly affect the customer support organization and your ability to get timely and reliable technical support for your encryption key management solution. Deteriorating customer support can put your organization at risk. Key management solutions are a part of your critical infrastructure, and proper customer support is crucial to operational resilience.
Another side effect of reduced or under-funded customer support is the inability of your organization to expand and invest in new applications and systems. These impacts on customer support may not present short-term problems, but they can impair long-term resilience and growth flexibility. Many organizations will continue to experience inadequate customer support from key management vendors.
- Understand the customer support organization of your key management vendor. Does the vendor demonstrate a strong investment in customer support? Is there adequate management of the customer support team?
- Review the Service Level Agreement (SLA) provided by your key management vendor. Be sure you understand the expected response times provided by the vendor customer support team.
- How do other organizations experience customer support from your key management vendor? Be sure to talk to reference accounts who use the key management product and who have interacted with the vendor's customer support team.
The news has been heavy recently with stories of Ransomware attacks on hospitals, businesses, and even police departments. The basic Ransomware attack usually starts with a user clicking on a poisoned link or opening an infected document or file. A trojan program installed on a user PC or server then runs and denies access to that data until a ransom is paid.
The denial of access to data is often done through the encryption of all of the files on a PC. CryptoLocker is an example of this type of access denial, but there are now many variants of encrypted access denial. Other methods of access denial have been used, but encryption is now the most common method. Strong encryption is readily available to anyone including cybercriminals, and unless the attack uses poor encryption methods there is no way to unlock the data without paying the ransom to receive the decryption key.
A disturbing trend has developed with Ransomware over the last few months. In addition to encrypting a user's PC or a single server, Ransomware has taken to encrypting network and mounted drives, even drives that mirror cloud storage services. The mounted drives might even include your backup storage! The encryption of network drives affects a much larger group of users and can be devastating to the organization as a whole. And when the backup network drive is affected, there is no way to restore from that backup. Many organizations can afford to lose a single user PC - but imagine losing all of the company's information on a central server!
Monetarily, ransoms are usually not very large, but there are exceptions. Cyber criminals know that a smaller ransom is more likely to be paid and they can then move on to the next victim. Ransom payments are usually done in Bitcoin to avoid tracking the payment through the normal banking system. While not a perfect strategy for cyber criminals, it usually works pretty well.
So, what can you do to avoid the catastrophic loss of your data from a Ransomware attack?
Old style, off-site, disconnected backup is back in fashion!
Whatever is connected to your network is at risk in a ransomware attack. Backup cartridges stored off-site at an archival service like Iron Mountain, or even stored at your local bank, can't be damaged by Ransomware. I know that many organizations have migrated to cheaper online and virtual tape backup systems, but these may be accessible to a dedicated attacker. If your internal systems can "see" the backup storage, so can an attacker. You need backups that are not accessible to the attacker - put an air gap between your backups and the cyber criminals!
Tape, cartridge and disk-based backup systems have been around for quite some time, are reasonably priced, and can be quick to deploy. Here are some things to look for in backup systems:
- Tape or disk physical media can be stored off-site
- Backups should be encrypted - don’t risk the loss of an unencrypted tape or cartridge
- Don’t share keys across individual backups - every backup should have a unique key.
- Create documented procedures for backups.
- Create documented procedures for restoring from backup.
- Test your restore! Your backup strategy is only as good as your proven ability to restore from the backup; you don't truly have a backup strategy until you've tested it with an actual restore.
- Schedule your backups so that they are automatic.
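The "unique key per backup" rule above can be sketched in a few lines: each backup run generates a fresh data key, and only a wrapped copy of that key travels with the archive, while the wrapping key stays with your key manager. A hypothetical Python illustration using the third-party cryptography package (all names invented for this sketch):

```python
from cryptography.fernet import Fernet

master_key = Fernet.generate_key()  # held off-host by the key manager

def make_backup(payload):
    """Encrypt one backup under its own fresh key; keep only the wrapped key."""
    backup_key = Fernet.generate_key()  # unique key for THIS backup only
    return {
        "archive": Fernet(backup_key).encrypt(payload),
        "wrapped_key": Fernet(master_key).encrypt(backup_key),
    }

def restore(backup):
    backup_key = Fernet(master_key).decrypt(backup["wrapped_key"])
    return Fernet(backup_key).decrypt(backup["archive"])

monday = make_backup(b"monday's files")
tuesday = make_backup(b"tuesday's files")
assert restore(monday) == b"monday's files"
assert monday["wrapped_key"] != tuesday["wrapped_key"]  # no key reuse
```

With this layout, a compromised backup key exposes only that one archive, not your entire backup history.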
Because old-style off-site backup has been around for a while you will find good documentation and best practices about backup and recovery. You don’t have to reinvent the wheel here. Mature and proven solutions are available right now.
Addressing off-site backup may seem old-fashioned to you right now. You won't think so if your organization falls victim to a Ransomware attack! Here at Townsend Security we use a cartridge backup solution from Quantum, one of our partners, but you have lots of choices. Get started now!
Those of us in the data security industry often wear technology blinders as we go about the business of trying to secure the sensitive data of the organizations we serve. Every organization has limited resources and it is hard to compete with line of business needs in terms of budget and human resources. It’s an ongoing struggle that comes with the territory.
Of course, any organization that has suffered a severe data breach quickly changes its attitude towards investing in security. The internal attitudes at Target, Anthem and Sony are different today than they were in the past, and for good reasons.
For those who’ve not experienced a data breach, the organizational costs remain vague and theoretical. I thought you might like to read how an attorney views the impacts of a data breach that involves the loss of credit card information. David Zetoony, an attorney with the legal firm Bryan Cave, has written several white papers discussing aspects of security. These are very readable works and well worth the time. Even if you are not processing credit card payments, I think this article is relevant to the loss of any sensitive data.
Here is his paper on the impacts of a data breach involving credit cards.
There is a bonus section in this paper about cyber insurance. In my eBook on Key Management Trends and Predictions I mention Cyber Insurance as an evolving industry. This paper by David Zetoony delves much deeper into the issues related to Cyber Insurance. He provides some very practical advice on how to think about Cyber Insurance and how to evaluate potential coverage. If you are new to the topic, or if you’ve not reviewed your Cyber Insurance policy for more than a year, you need to read the second part of David’s paper.
Neither I nor Townsend Security has any relationship with David Zetoony or the legal firm of Bryan Cave. I stumbled on David's work and thought you might find it informative. For those of you making the case for increased security, you might consider sharing David's paper with your management team and legal counsel.
This is a special blog post written for Townsend Security by the Drupal Drop Guard team.
While developing a system to automate Drupal updates and using that technology to fulfill our Drupal support contracts, we ran into many issues and questions about the workflows that integrate the update process into our overall development and deployment cycles. In this blog post, we’ll outline the best practices for handling different update types with different deployment processes – as well as the results thereof.
The general deployment workflow
Most professional Drupal developers work in a dev-stage-live environment. Using feature branches has become a valuable best practice for deploying new features and hotfixes separately from the other features developed in the dev branch. Feature branches foster continuous delivery, although they do require additional infrastructure to test each feature branch in a separate instance. Let us sum up the development activity of the different branches.
The dev branch is where the development of new features happens and where the development team commits their code (or merges in a derived feature branch). When using feature branches, the dev branch is considered stable; features can be deployed forward separately. Nevertheless, the dev branch is there to test the integration of your locally developed changes with the code contributions of other developers, even if the current code of the dev branch hasn't passed quality assurance. Before going live, the dev branch will be merged into the stage branch to be ready for quality assurance.
The stage branch is where code that’s about to be released (merged to the master branch and deployed to the live site) is thoroughly tested; it’s where the quality assurance happens. If the stage branch is bug-free, it will be merged into the master branch, which is the code base for the live site. The stage branch is the branch where customer acceptance happens.
The master branch contains the code base that serves the live site. No active changes happen here except hotfixes.
Hotfixes are changes applied to different environments without passing through the whole dev-stage-live development cycle. Hotfixes are handled in the same way as feature branches, but with one difference: whereas feature branches start from the HEAD of the dev branch, a hotfix branch starts from the branch of the environment that requires the hotfix. In terms of security, a highly critical security update simply comes too late if it needs to go through the complete development cycle from dev to live. The same applies if there's a bug on the live server that needs to be fixed immediately. Hotfix branches need to be merged back into the branch from which they were derived and into all upstream branches (e.g. if the hotfix branch was created from the master branch, it needs to be merged back into master to bring all commits to the live site, and then into the stage and dev branches as well, so that all code changes are available to the development team).
Where to commit Drupal updates in the development workflow?
To answer this question, we need to consider the different types of updates: security updates (classified by criticality) and non-security updates (bug fixes and new features).
If we group them by priority, we can derive the branches to which they need to be committed and also the duration of a deployment cycle. If you work in a continuous delivery environment, where you ship code continuously, the best way is to use feature branches derived from the dev branch.
Low (<=1 month):
- Bug fix updates
- Feature updates
These updates should be committed by the development team and analysed for side effects. It's still important to process these low-priority updates, because high-priority updates build on all code changes from earlier updates; otherwise you risk skipping important quality assurance when a high-priority update arrives for a module that hasn't been updated in a long time.
Medium (<5 days):
- Security updates that are not critical and not highly critical
These updates should be applied in due time, as they relate to the site's security. Since they're not highly critical, we might decide to commit them to the stage branch and send a notification to the project lead, the quality assurance team, or directly to your customer (depending on your SLA). Then, as soon as they've confirmed that the site works correctly, these updates will be merged into the master branch and back to stage and dev.
High (<4 hours):
- Critical and highly critical security updates
For critical and highly critical security updates we follow a "security first" strategy, ensuring that all critical security updates are applied immediately and as quickly as possible to keep the site secure. If there are bugs, we’ll fix them later! This strategy instructs us to apply updates directly to the master branch. Once the live site has been updated with the code from the master branch, we merge the updates back to the stage and dev branch. This is how we protected all our sites from Drupalgeddon in less than two hours!
Updates automation options
There are only a few ways to ensure that updates will be applied just in time, depending on the type of update. Each approach has positive and negative sides, and it's up to you to choose what suits you best:
- Monitoring for updates manually, or via one of the available services or custom scripts, and, once a security update is detected, processing it according to the workflow defined in your organization. This approach works in most cases, but it requires someone to be ready to take action 24/7;
- Building a completely custom solution, which will not only detect updates, but also take care of applying them when it’s time. The only obvious drawback of this is that you have to spend a lot of time building and maintaining your custom tool.
- Using an update automation service, such as Drop Guard, which will integrate seamlessly into your workflow and process updates in exactly the way you want. You don't have to worry about being alerted all the time, or about spending too much time building your own solution, but be prepared to spend a few dollars on the third-party solution.
Requirements for automation
If you want to automate your Drupal security updates with the Drop Guard service, all you need is the following:
- Code deployment with GIT
- Trigger the update of an instance by URL using e.g. Travis CI, Jenkins CI, DeployHQ or other services to manage your deployment, or alternatively execute SSH commands from the Drop Guard server.
Also to keep in mind:
- Know what patches you’ve applied and don't forget to re-apply them during the update process (Drop Guard helps with its automated patch detection feature)
- Automated tests reduce the time you spend on quality assurance
Where to commit an update depends on its priority and on the speed with which it needs to be deployed to the live site. Update continuously to ensure the ongoing quality and security of your project and to keep it future-proof. Feature and bug fix updates are less critical but also important to apply in due time.
There are many ways of ensuring the continuous security for your website, and it’s up to you whether to go with a completely manual process, try to automate some things, or opt-in for a fully automated solution such as Drop Guard.
As the importance of encryption increases across the regulatory spectrum, it is heartening to see database vendors implementing encryption and encryption key management that meet security best practices. MongoDB Enterprise is one of the most popular modern databases and has given its customers the ability to quickly and effectively implement encryption and good key management at the level of the database itself. This means that MongoDB Enterprise customers can transparently implement encryption directly in the database without deploying third party file or volume based encryption products to accomplish this task. MongoDB customers enjoy rapid deployment of encryption, a unified management interface to the database, high performance encryption, and integrated encryption key management based on the industry standard Key Management Interoperability Protocol (KMIP).
By basing key management on the OASIS KMIP standard, MongoDB provides real benefits to its customers. These include:
- Rapid implementation of key management with no need for developer resources.
- No requirement for server-side software libraries, updates, and patching.
- Secure encryption key retrieval services based on TLS.
- Vendor neutrality for encryption key management services.
- Vendor portability and avoidance of vendor lock-in.
- Seamless migration from one key management solution to another.
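The "secure key retrieval based on TLS" point above rests on mutual authentication: the database client verifies the key manager's certificate, and the key manager verifies the client's. A minimal stdlib sketch of the client side follows; the certificate file names mirror the Alliance Key Manager examples later in this article but are assumptions, and the commented-out connection code is illustrative only:

```python
import ssl

# Sketch of the client side of a KMIP-style key-retrieval connection.
KMIP_HOST, KMIP_PORT = "akmmongo", 5696   # 5696 is the IANA-registered KMIP port

def make_kmip_client_context(ca_file: str, cert_file: str, key_file: str) -> ssl.SSLContext:
    """TLS context that authenticates the server AND presents a client certificate."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)  # enables hostname + cert checks
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # require modern TLS
    ctx.load_verify_locations(cafile=ca_file)      # trust only the key manager's CA
    ctx.load_cert_chain(certfile=cert_file, keyfile=key_file)  # mutual authentication
    return ctx

# ctx = make_kmip_client_context("AKMRootCACertificate.pem",
#                                "AKMClientCertificate.pem",
#                                "AKMClientPrivateKey.pem")
# with socket.create_connection((KMIP_HOST, KMIP_PORT)) as sock:
#     with ctx.wrap_socket(sock, server_hostname=KMIP_HOST) as tls:
#         ...  # speak KMIP over the authenticated channel
```

Because both sides present certificates, a stolen client certificate alone is not enough to retrieve keys from an unknown host, and a spoofed key manager cannot serve keys to legitimate clients.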
MongoDB Enterprise is a case study in the benefits of using open standards for encryption and key management. MongoDB’s customers can choose the key management solution that makes the most sense for them, and having choices reduces the cost of encryption key management.
Townsend Security's Alliance Key Manager solution works out of the box with MongoDB Enterprise to help customers meet compliance regulations that require encryption of sensitive data. It takes just a few minutes to deploy Alliance Key Manager and to configure MongoDB for key management. On first boot, Alliance Key Manager creates a set of strong encryption keys, including a 256-bit AES key that can be used with MongoDB, and it creates a set of PKI certificates used for TLS authentication. The PKI certificates are copied to the MongoDB server to secure the connection to the key manager. A quick command or two from the MongoDB command-line console and, voilà, your MongoDB encryption key is now fully protected by a key encryption key on Alliance Key Manager. You have an encrypted MongoDB database and a highly redundant and recoverable encryption key management infrastructure. From the technical team at MongoDB:
“The encryption occurs transparently in the storage layer; i.e. all data files are fully encrypted from a file system perspective, and data only exists in an unencrypted state in memory and during transmission.”
After installing the key manager certificates on the server hosting the MongoDB database, here is an example of the command to enable encryption:
mongod --enableEncryption --kmipServerName akmmongo --kmipServerCAFile /etc/mongodb-kmip/AKMRootCACertificate.pem --kmipClientCertificateFile /etc/mongodb-kmip/AKMClientCertificate.pem --dbpath /var/lib/mongodb/
It’s that simple.
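The phrase "protected by a key encryption key" describes envelope encryption: the database's data encryption key (DEK) is stored only in wrapped form, and the key-encryption key (KEK) that unwraps it lives on the key manager. The sketch below illustrates the concept with a stdlib-only one-time-pad XOR wrap; this is a teaching device, not a production scheme (real deployments use an algorithm such as AES Key Wrap):

```python
import secrets

# Conceptual illustration of envelope encryption: the data encryption key
# (DEK) never rests on disk in the clear. It is stored "wrapped" under a
# key-encryption key (KEK) held on the key manager. A one-time-pad XOR
# stands in for a real wrap algorithm here.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Key manager side: a 256-bit KEK, generated once and kept off the DB server.
kek = secrets.token_bytes(32)

# Database side: a 256-bit DEK that encrypts the data files.
dek = secrets.token_bytes(32)

wrapped_dek = xor_bytes(dek, kek)          # stored alongside the database
recovered = xor_bytes(wrapped_dek, kek)    # at startup, the key manager unwraps it
```

Because only the wrapped form ever touches the database server, compromising the server's disks yields nothing usable without a separate compromise of the key manager.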
With Alliance Key Manager you have your choice of network-attached hardware security modules (HSMs), VMware virtual machines, and dedicated cloud instances in Microsoft Azure and Amazon Web Services. The key manager works the same way in all environments so you can even mix and match hardware, VMs, and cloud instances to achieve the level of security that is right for your organization. All instances of Alliance Key Manager are dedicated to you with no access by cloud service providers or third parties. Key custody remains exclusively under your control.
Many MongoDB customers deploy the database in VMware environments. Alliance Key Manager also runs in VMware as a virtual machine. You can deploy the key manager in a VMware security group and protect one or more instances of MongoDB across the entire VMware environment. If you are running on a VMware vCloud platform the same support is provided for key management.
For compliance, Alliance Key Manager is validated for PCI-DSS compliance in the VMware platform. This will help you achieve PCI-DSS compliance for your MongoDB database and applications. The PCI-DSS statement of compliance is here.
MongoDB is rapidly becoming the high-performance, scalable, NoSQL database of choice for organizations large and small. As such it becomes a part of the critical infrastructure for business applications. Alliance Key Manager also implements high availability features such as automatic, real-time encryption key mirroring and backup. For customers deploying the hardware version of Alliance Key Manager, there is redundancy at the hardware level with dual hot-swappable RAID disks, redundant power supplies, and redundant network interfaces. All of these features mean that your key management infrastructure will be resilient and reliable.
Alliance Key Manager will help you meet common compliance regulations such as PCI-DSS, HIPAA, FFIEC, FISMA, the EU General Data Protection Regulation (GDPR), and other compliance schemes. Alliance Key Manager, regardless of platform, is based on the same FIPS 140-2 compliant software found in our enterprise-level HSM. With no restrictions or license costs for client-side connections, you can protect as many MongoDB databases as you like with one Alliance Key Manager.
The product team at MongoDB did a great job of implementing encryption and key management for their customers. They deserve to be applauded for basing their solution on industry standards like KMIP. This lowers the barrier to getting encryption right in the database. With Alliance Key Manager, getting encryption right means rapid deployment, conservation of developer resources, lower administration costs, and provable compliance.
A match made in techno-heaven.
The new European Union General Data Protection Regulation (EU GDPR) has now passed both the EU Council and Parliament and replaces the earlier Data Protection Directive (Directive 95/46/EC). Unlike an EU directive, this regulation does not require individual countries to pass legislation and it goes into effect immediately. Organizations have a two-year transition period to comply with the new data protection regulations, but it would be unwise to delay. Smart organizations will start work immediately so that there are no gaps upon the arrival of the deadline, and so that their public reputation is preserved. A good overview of the regulation can be found here and it contains a link to the full regulation.
There are many aspects to the new GDPR, and if you are required to meet the regulation you should take a very close look at the entire publication. Let’s look at a few of the elements of the GDPR with a focus on data protection.
What information must be protected?
The regulation uses two terms that are important to understand. The term “data subject” means an individual person. The term “personal data” means any data that either directly identifies an individual person, or which can be used indirectly to identify an individual. A few examples of data that indirectly identify an individual would include a medical identification number, location data such as an IP address, or social identity such as an email address or Facebook account.
The definition of personal information is quite broad. It would be a mistake to narrowly focus on just a few fields of data in your database; you should look for all information about a person that you store. If any information uniquely identifies a person, or if information can be combined to identify a person, it should be protected.
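To make the breadth of that definition concrete, here is an illustrative sketch that flags record fields which may directly or indirectly identify a person. The field names and patterns are examples only, not an exhaustive or authoritative classifier:

```python
import ipaddress
import re

# Illustrative sketch: flag fields that may directly or indirectly identify
# a person under the GDPR's broad definition of personal data.

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def looks_like_personal_data(field_name: str, value: str) -> bool:
    name = field_name.lower()
    # Directly identifying field names (example keywords, not exhaustive).
    if any(k in name for k in ("name", "email", "phone", "ssn", "patient", "medical")):
        return True
    if EMAIL_RE.match(value):
        return True                       # an email address identifies a person
    try:
        ipaddress.ip_address(value)       # location data such as an IP address
        return True
    except ValueError:
        pass
    return False

record = {"order_id": "A-1001", "contact": "jane@example.com", "client_ip": "192.0.2.7"}
flagged = [f for f, v in record.items() if looks_like_personal_data(f, v)]
```

Note that even the innocuous-looking `client_ip` field is flagged: under the regulation, location data such as an IP address can indirectly identify an individual.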
What constitutes a data breach?
The definition of a data breach is much broader than defined in the US. It certainly includes the accidental loss of data or the loss of data in the course of a data breach by cybercriminals. But it also includes other activities, including the accidental or unlawful:
- Destruction of personal information.
- Alteration of personal information.
- Unauthorized disclosure of information, even without criminal intent.
- Access to personal information.
In other words, assume that the data you store about an individual belongs to them exclusively, and is valuable. You are holding it in trust, and you have a fundamental responsibility to preserve and protect that information! This will be a conceptual challenge for organizations more familiar with US data protection rules.
Non-EU organizations should pay special attention to this definition of a data breach. It goes far beyond what typical regulations in the US define as a data breach.
What are my breach notification requirements?
The data breach definition applies to all personal information that is transmitted (data in motion) or stored (data at rest) or in any other way processed by your organization. In the event you experience a data breach you must notify the appropriate authorities and the individuals who are affected. There are stringent time constraints on the notification requirements and this will require special preparation to meet those requirements.
Important note: If your data is encrypted you may be exempt from some notification requirements (from Article 32):
The communication of a personal data breach to the data subject shall not be required if the controller demonstrates to the satisfaction of the supervisory authority that it has implemented appropriate technological protection measures, and that those measures were applied to the data concerned by the personal data breach. Such technological protection measures shall render the data unintelligible to any person who is not authorised to access it.
Who is covered by the regulation?
The GDPR uses the special term “Controller” for an organization that determines the purposes and means of processing personal information. If you collect personal information and decide how it is transmitted, stored, or processed, you are a Controller.
The GDPR also uses the special term “Processor” for an organization that transmits, stores, or processes personal information on behalf of a Controller. This applies to service organizations that receive personal information in a secondary capacity and to third-party computing providers such as cloud service providers (CSPs).
Are non-EU organizations covered by the EU GDPR?
Yes, if you are located outside of the EU but are doing business in the EU or operating in the EU (you are a controller or processor of personal information of EU citizens), you fall under the requirements of the EU GDPR. This will surprise many organizations who do not have offices or employees located in the EU zone.
Are there any special categories for protection?
The EU General Data Protection Regulation establishes some special categories of individuals and information that are subject to additional controls. Information about children and the information of medical patients require special attention on the part of organizations that process this type of information.
What are the penalties for non-compliance with data protection requirements?
While there is some flexibility in how fines are levied for unintentional non-compliance with the GDPR, and the amount depends somewhat on which rules you are out of compliance with, the penalties can be quite severe. The failure to protect sensitive data with encryption or other appropriate technical controls is considered a severe violation. No one should ignore the potential impact of these fines. For example, an enterprise that fails to protect data can be subject to fines of up to 1,000,000 EUR (1 Million Euro) or up to 2 percent of annual worldwide revenue. You can see why this new regulation is getting a lot of attention in the European Union!
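To see why the revenue-based ceiling matters, here is the arithmetic for the figures quoted above, reading the "or" as whichever amount is greater (an assumption; how a supervisory authority applies the ceiling in a given case may differ):

```python
# Worked example of the fine ceiling described above: up to 1,000,000 EUR
# or up to 2 percent of annual worldwide revenue, taken as whichever is greater.
def max_fine_eur(annual_worldwide_revenue_eur: float) -> float:
    return max(1_000_000, 0.02 * annual_worldwide_revenue_eur)

# For an enterprise with 500 million EUR in annual worldwide revenue,
# the 2 percent ceiling dominates: a potential 10 million EUR exposure.
large_firm_exposure = max_fine_eur(500_000_000)
small_firm_exposure = max_fine_eur(10_000_000)  # flat 1M EUR ceiling applies
```

For any organization with more than 50 million EUR in worldwide revenue, the percentage ceiling exceeds the flat amount, which is why large enterprises are paying close attention.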
See Article 79 of the GDPR for more information about fines and penalties.
Is encryption a mandate?
This is from the GDPR recitals:
(23) The principles of protection should apply to any information concerning an identified or identifiable person. To determine whether a person is identifiable, account should be taken of all the means likely reasonably to be used either by the controller or by any other person to identify the individual. The principles of data protection should not apply to data rendered anonymous in such a way that the data subject is no longer identifiable.
The most common way of making data anonymous is encryption with good encryption key management.
And you should know this from Article 30 of the GDPR:
1. The controller and the processor shall implement appropriate technical and organisational measures to ensure a level of security appropriate to the risks represented by the processing and the nature of the personal data to be protected, having regard to the state of the art and the costs of their implementation.
2. The controller and the processor shall, following an evaluation of the risks, take the measures referred to in paragraph 1 to protect personal data against accidental or unlawful destruction or accidental loss and to prevent any unlawful forms of processing, in particular any unauthorised disclosure, dissemination or access, or alteration of personal data.
It is likely that in almost all cases the only appropriate technical measure to ensure anonymization and security appropriate to the risk of loss is encryption with appropriate key management controls. When encryption is not specifically required we sometimes call this a “backdoor” mandate: you are not required to implement encryption, but in the context of a data breach anything else will be deemed inadequate and will subject the organization to fines. You don’t want that to happen to you.
I hope this helps you understand the basic data protection requirements of the new EU General Data Protection Regulation. I know that the regulation is complex and there remain some ambiguities. In future blog posts I will go into more detail on various aspects of the GDPR and how our solutions at Townsend Security are helping EU organizations meet the data protection requirements.
Capturing administrative credentials as a path to stealing sensitive credit card data is becoming a more common method used by cybercriminals. It is not surprising, then, that the PCI Security Standards Council would address this rising threat in the new version of the PCI Data Security Standard (PCI-DSS). For some time now the PCI council has been telling merchants, service providers, and banks that it would more aggressively respond to emerging threats, and version 3.2 of the PCI-DSS standard reflects this.
One of the most effective ways of countering this threat is to implement two factor authentication (2FA or TFA). This is also sometimes called multi-factor authentication (MFA), and the two terms are used interchangeably. If you use Google, Facebook, Yahoo, or any number of other Internet services you are probably already aware of two factor authentication as a security mechanism. With two factor authentication you no longer provide just a password to log in as an administrator of an account, or to make administrative changes to your systems. You must also supply a five- or six-digit PIN code to complete the login sequence. The PIN code is generated separately from your sign-on prompt and is thus harder for cybercriminals to capture.
A password is something you know, so the second factor for authentication must be something you have, such as a cell phone or token, or something you are, such as a fingerprint or iris image. Secret questions don’t qualify as a second factor, as they are also something you know, like the password. A general description of two factor authentication can be found here.
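The short PIN codes described above are commonly generated with the time-based one-time password algorithm (TOTP, RFC 6238), which derives a code from a shared secret and the current 30-second time step; SMS- and voice-delivered PINs work differently, so this is one common mechanism, not a description of any particular vendor's product. A minimal stdlib sketch:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, at_time=None, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238): HOTP over a time-step counter."""
    t = time.time() if at_time is None else at_time
    return hotp(secret, int(t) // step, digits)
```

Because the code changes every 30 seconds and is computed from a secret the attacker never sees, a captured PIN is useless moments later, which is exactly why it defeats credential-capture attacks that passwords cannot.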
Prior to version 3.2 of the PCI Data Security Standard, remote users were required to use two factor authentication for access to all systems processing, transmitting, or storing credit card data. With version 3.2 this is now extended to include ALL local users performing administrative functions in the cardholder data environment (CDE). This makes sense as user PCs can be infected with malware that leads to the compromise of administrative credentials. It hardly matters anymore if the user is local or remote.
Why is this a big deal?
First, many companies processing credit card data do not have remote workers, and 2FA will be new technology. Even if you have remote administrators, they are probably authenticating with 2FA via a VPN session, which will not work for internal administrators. This means evaluating new 2FA solutions, deploying them in all of your CDE environments, training employees on how to use the technology, and implementing new HR procedures to manage employees and access to 2FA.
Second, many 2FA solutions require deployment on hardware servers that must be deployed and maintained. There may be impacts on the company network and firewalls, and it means a new technology ramp-up. This includes addressing hybrid environments that may encompass traditional IT data centers, virtualized environments, and cloud applications. If the 2FA solution is based on hardware tokens that employees have to carry, you will have to manage the distribution, revocation, and replacement of tokens.
Third, many merchants have complex cardholder data environments and a 2FA project can be daunting. Think about a large box store retailer. Besides the normal check-out Point-of-Sale systems, they might have pharmacy, optical, automotive, and many other departments under the same roof. The CDE might be complex and extensive and the 2FA effort may be large depending on how much administrative work is performed locally.
Last, it is not enough to deploy a 2FA solution. It must be properly monitored in real time. An attacker may attempt to guess or brute-force attack the PIN code. A good 2FA solution will log both successful and unsuccessful PIN validation requests. The logged failures should be monitored by a SIEM solution so that the security team can react to the threat quickly.
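The monitoring requirement above can be made concrete with a small sliding-window check on failed PIN validations. The thresholds and in-memory structure here are illustrative assumptions; a real deployment would emit each failure event to a SIEM and alert from there:

```python
from collections import defaultdict, deque

# Illustrative sketch: flag a possible brute-force attempt when a user
# accumulates too many failed PIN validations inside a sliding window.
MAX_FAILURES = 5        # example threshold
WINDOW_SECONDS = 300    # example window: five minutes

failures = defaultdict(deque)  # user -> timestamps of recent failed attempts

def record_pin_failure(user: str, now: float) -> bool:
    """Record a failed PIN validation; return True if the threshold is crossed."""
    window = failures[user]
    window.append(now)
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()                   # drop failures outside the window
    return len(window) >= MAX_FAILURES

# Five rapid failures for one account should trip the alert on the fifth.
alerts = [record_pin_failure("admin", t) for t in range(5)]
```

Logging successes as well as failures, as noted above, lets the security team distinguish a fat-fingered administrator from a systematic guessing attack.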
Here at Townsend Security we provide a solution to IBM i (AS/400, iSeries) customers that is based on mobile, voice, and optional email delivery of PIN codes. Alliance Two Factor Authentication integrates directly with the global Telesign network to deliver PIN codes. A customer who needs to deploy two factor authentication can install, configure and verify a 2FA PIN code sequence in less than 30 minutes. There is no hardware to install or maintain, and there are no individual tokens to distribute and manage. We think that this solution will help many IBM i customers quickly achieve compliance with version 3.2 of PCI-DSS and PA-DSS. More information here:
In future blogs I will talk more specifically about our solution.
Excerpt from the eBook "2016 Encryption Key Management: Industry Perspectives and Trends."
Encryption and key management should move from an IT project to an integrated and seamless part of the IT infrastructure. Organizations need to be able to deploy encryption with ready-to-use infrastructure so that encryption ceases to be a barrier. In order to accomplish this, encryption and key management solutions must be embedded in the IT infrastructure and enabled by policy. Key management solutions must implement the automation infrastructure that enables this type of deployment. All aspects of provisioning an encryption key server, from network configuration, system logging, user administration, generation and distribution of credentials, key mirroring, and backup and restore to encryption key management itself, must become API-driven through standard web services.
Unfortunately, standards bodies and vendors have been slow to address this critical aspect of key management. While there is some movement to define some aspects of encryption key management through web services or add-on solutions like Chef, the network and services aspects of key managers have not been adequately addressed. This will continue to make it difficult to move key management into the realm of seamless and invisible critical infrastructure.
- Ask your key management vendor how they implement APIs for server configuration, deployment and management.
- Understand the key management vendor’s roadmap and plans for key management automation.
- Ask the key management vendor for examples of customers using their Web services features.
- Understand any vendor licensing restrictions for installing management utilities.
There is No Single Source for Best of Breed Security
Understandably, customers long for a single vendor who can solve all of their security needs. Currently the process of deploying best of breed security involves working with multiple vendors whose products do not interoperate. It means spending a lot of IT resources managing a large variety of vendor products and services. While there are a handful of larger vendors attempting to provide a complete set of products, their marketing language does not match reality and there is no indication that it will for some time to come.
Looking ahead, organizations should expect to work with a number of security vendors in order to deploy best of breed security for their sensitive data. It is unlikely that this will change in the near future. Smart organizations will identify best of breed applications that are easy to use, and make the resource investments needed to acquire and manage these solutions.
- Always try to deploy the best of breed security solutions and understand that this means dealing with multiple vendors.
- Prioritize your security needs according to risk, and tackle the highest priority items first.
- Understand and empower your IT organizations to acquire and deploy the best solutions. It is always more cost effective to prevent a problem than remediate it after the fact.