As a security architect, security administrator, or database administrator, one of the first big questions you face with encryption of data at rest is how to organize, plan, and implement encryption keys to protect that data. Should you use one key for everything? Should you use a different key for each application? A different key for every table and column? A different key for each department? It is hard to find good security best practice guidance on this topic, so let’s focus on this question and see if we can arrive at some general principles and guidance.
First, I like to start by identifying any applications or databases that contain highly sensitive information such as credit card numbers, social security numbers, or other personally identifiable information. These sources will be the high-value targets of cybercriminals, so you will want to protect them with your best security. For each of these applications and databases, assign encryption keys that are not used by any other application or database, and carefully monitor the use of these keys. Your encryption key management solution should help you with monitoring key usage. The objective is to protect the highly sensitive data and the related encryption keys from unauthorized access. If you have multiple sensitive applications and databases, assign each its own unique key.
Second, identify all of your major applications that are used across a broad set of departments within your company. Since these applications span multiple departments and have a broad set of users with different needs, you should assign each of them its own specific encryption keys. In the event one application or database is compromised, the others will not be affected.
Third, the remaining applications and databases are probably those that are used by one specific department within your organization. You will probably find that most departments in the organization have a number of specialized applications that help them get their work done. In terms of raw numbers, this might be the largest category of applications. Assign each department its own set of encryption keys that are not used by other departments. You may find that you need to sub-divide the department and assign keys for each sub-group, but the goal is to use encryption keys for the department that are not shared with other departments.
Lastly, cloud implementations are a special category and should always have separate keys. In the event that a Cloud Service Provider experiences a security breach, you will want to be sure that your internal IT systems are not affected. Assign specific encryption keys for your cloud applications and do not share the keys with internal, non-cloud applications.
Over the years I’ve occasionally seen organizations create and use a very large number of keys. In one case a unique key was used for every column and row in a table. In another case a different key was used for every credit card transaction. Large numbers of keys present management problems and probably lower overall security. Keep the number of encryption keys to a manageable level.
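To make the pattern concrete, here is a minimal sketch of the one-key-per-scope idea in Python. This is a hypothetical illustration, not a real key manager: a production solution would generate and hold keys in an HSM or key server, never in application memory, and the scope names are made up for the example.

```python
import secrets

class KeyRegistry:
    """Toy registry illustrating one-key-per-scope key assignment.

    Each scope (sensitive app, shared app, department, cloud app) gets
    its own 256-bit AES key, and keys are never shared across scopes.
    """

    def __init__(self):
        self._keys = {}  # scope name -> key material

    def key_for(self, scope):
        # Lazily create a unique key the first time a scope is seen.
        if scope not in self._keys:
            self._keys[scope] = secrets.token_bytes(32)
        return self._keys[scope]

registry = KeyRegistry()
payroll = registry.key_for("dept:payroll")
cloud = registry.key_for("cloud:crm")
assert payroll != cloud                              # distinct scopes, distinct keys
assert payroll == registry.key_for("dept:payroll")   # stable within a scope
```

The important property is that compromising one scope’s key exposes only that scope’s data, while the total number of keys stays proportional to the number of scopes rather than the number of rows or transactions.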
The above guidelines should help you protect your sensitive data and easily manage your encryption keys. Here is a summary table of the guidelines:
| Category | Key management guidance |
| --- | --- |
| Highly sensitive data and applications | Assign and use unique, non-shared encryption keys. Do not share keys across application and database boundaries. Carefully monitor encryption key usage. |
| Broadly used applications and databases | Assign and use unique, non-shared encryption keys. Do not share keys across application and database boundaries. |
| Departmental applications and data | Assign and use departmental encryption keys. Do not share keys among departments. |
| Cloud applications | Assign and use unique encryption keys. Do not share encryption keys with non-cloud, internal IT applications. |
There are always exceptions to general rules about how to deploy encryption keys for the best security. The above comments may not be appropriate for your organization, and you should always adjust your approach to your specific implementation. Hopefully the above will be helpful as you start your encryption project.
Our Alliance LogAgent customers often ask us which IBM i security events we transmit from the IBM security audit journal QAUDJRN to their log collection server or SIEM solution. There are several factors that affect which security events get collected by the IBM i operating system, and even which events are collected by Alliance LogAgent for transmission to your SIEM server. Let’s take a look at these:
When your new IBM i server is delivered, it is not configured to collect any security events. As a first step you must create the QAUDJRN journal and its journal receiver. Then you must change some system values in order to activate security event collection. This is the beginning of the answer to the question of which security events Alliance LogAgent transmits: it can only transmit the events you enable, and you enable them with system values.
The first system value you must set is QAUDCTL. When you receive your new IBM i platform this system value is set to *NONE, meaning that no security events are collected. You should probably change this to *AUDLVL, and add *OBJAUD if you also plan to audit access to specific objects.
You now need to set the QAUDLVL and QAUDLVL2 system values to specify the types of events you want to collect. On a new IBM i server these system values are blank. IBM makes it easy to collect security events through a special value named *SECURITY. If you set the QAUDLVL system value to *SECURITY you will collect only the security-related events on the IBM platform. Of course, there are other events you might like to collect; press the F1 help key to view a complete list. If they won’t all fit in the QAUDLVL system value, add them to the QAUDLVL2 system value and specify *AUDLVL2 in the QAUDLVL list.
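Putting these first steps together, the initial setup might look like the following from a command line. This is a hedged sketch: the receiver name AUDRCV0001, the THRESHOLD size, and the QAUDLVL2 values are illustrative placeholders, and your own audit policy may call for different values.

```
CRTJRNRCV JRNRCV(QSYS/AUDRCV0001) THRESHOLD(100000)
CRTJRN    JRN(QSYS/QAUDJRN) JRNRCV(QSYS/AUDRCV0001)
CHGSYSVAL SYSVAL(QAUDCTL)  VALUE('*AUDLVL *OBJAUD')
CHGSYSVAL SYSVAL(QAUDLVL)  VALUE('*SECURITY *AUDLVL2')
CHGSYSVAL SYSVAL(QAUDLVL2) VALUE('*AUTFAIL *SAVRST')
```

The first two commands create the journal receiver and the QAUDJRN journal; the CHGSYSVAL commands then turn on event collection and select the event categories.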
You can now use the Change User Audit (CHGUSRAUD) command to audit users. I would suggest you turn on full user auditing for any security administrator, any user with All Object (*ALLOBJ) authority, and any user with audit (*AUDIT) authority.
You can also turn on object level logging with the Change Object Auditing (CHGOBJAUD) command. Be sure to specify all libraries and files that contain sensitive data. Do the same for IFS directories using the Change Audit (CHGAUD) command.
You’ve completed the first step in configuring security event collection. Alliance LogAgent can only report what you configure the system to collect, and this first step defines those events.
Alliance LogAgent can also be configured to filter security events. The default is to report all of the events collected in the system audit journal QAUDJRN, but you can narrow these to a defined set of events. In the Alliance LogAgent configuration menu you will see an option to Work With Security Types. This will list all of the event types collected in the QAUDJRN journal. You can use function key F13 to set group patterns, or change each event. The F13 option is nice because it has a *SECURITY option that will let you set all security events on for reporting. Or, you can edit an individual security event to change its reporting status. For example, to turn off reporting of Spool File actions, edit the SF event and change the reporting option to No:
Send to log server . . . . . . . 2 1=Yes, 2=No
When you make this change Alliance LogAgent will no longer send spool file action information to your SIEM solution.
It is not wise to turn off the reporting of security events in Alliance LogAgent! You will always want to collect and report these events.
Setting the system values and configuring Alliance LogAgent security events are the primary ways you determine which events are transmitted to your log collection server. There are additional filtering options in Alliance LogAgent to include or exclude objects, IFS files and libraries and these can help you further refine the events that are transmitted.
Our customers often ask how they can manage the amount of data that Alliance LogAgent sends to their SIEM active monitoring solution. It’s an important question because most SIEM solutions license their software based on the number of Events Per Second (EPS) or the number of gigabytes per day (GBD). Managing the volume of data has an important cost benefit, as long as you don’t undermine the effectiveness of the security monitoring!
There are some things Alliance LogAgent inherently does to help with the volume of data, and there are some things you can do, too. Let’s look at both of these areas.
First, Alliance LogAgent reduces the amount of data sent from the IBM security audit journal QAUDJRN by extracting only the information from each journal entry that has security relevance. Each journal entry has a 610-byte header, and most of the information in the header has no security relevance. The actual event information that follows can be several hundred bytes in length. The average journal entry is about 1,500 bytes. Alliance LogAgent extracts and formats the important information into one of the syslog formats, and the result is an event with an average size of 380 bytes.
That is a 75% reduction in the amount of data sent to your SIEM solution!
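The arithmetic behind that figure is straightforward, using the average sizes quoted above:

```python
# Average sizes quoted in the text above (bytes).
avg_journal_entry = 1500   # raw QAUDJRN journal entry
avg_syslog_event = 380     # after extraction and syslog formatting

# Fraction of data eliminated before transmission to the SIEM.
reduction = 1 - avg_syslog_event / avg_journal_entry
print(f"{reduction:.0%}")  # roughly 75%
```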
Alliance LogAgent also gives you the ability to meter the number of transactions per second that you are sending. The IBM i server can generate a large number of events and throttling the transactions with this configuration option can help you reduce and control SIEM costs. Additionally, it can also help minimize the impact on your network capacity. This is a great option if your SIEM solution is licensed based on the number of Events Per Second (EPS).
In the second category are things you can do to minimize the number of events that are processed using various Alliance LogAgent configuration settings. Let’s take them one at a time:
Selectively send journal entry types
Send to log server . . . . . . . 2 1=Yes, 2=No
The IBM security audit journal QAUDJRN collects security events and general system information. Some of the general system information may have no security relevance, and Alliance LogAgent allows you to suppress the transmission of these events. For example, the security audit journal may have information about printed reports (journal entry type SF for spool files) that have been produced on your system. If this information is not needed for security monitoring, you can turn off the event reporting in Alliance LogAgent. From the configuration menu take the option to Work With Security Types. You can change the option Send To Log Server to No:
Hint: You can also use function key F13 to select all IBM Security (*SECURITY) level events for reporting, and turn all other events off.
Filter library objects
You may have many libraries on your IBM i server that are not used for production data or which do not contain any information that has security relevance. From the configuration menu you can create an object exclusion list to exclude individual libraries, or you can exclude all libraries and objects. If you take the latter approach be sure to define libraries in the inclusion list that you want to monitor and report. By excluding non-relevant libraries and objects you can minimize the number of events that are transmitted.
Filter IFS objects
Like library exclusion and inclusion, you can define IFS file system filters. From the configuration menu you will see options for IFS exclusion and inclusion rules. You can even exclude all IFS directories (exclude the “/” root directory) and then add in the IFS directories you want to include. IFS filtering lets you define individual files or entire directories and subdirectories. The “/tmp” directory is a working directory, and you may wish to exclude events from it if there are no relevant security-related events there.
Filter users
Alliance LogAgent also gives you the ability to filter certain users from reporting. You should use caution when implementing this type of filtering, and never filter highly privileged users. Alliance LogAgent provides a list of IBM user profiles that you might consider for exclusion, but you should review these with your IBM i security administrator before filtering them. You can also add your own users to this list.
Filter QHST messages
The QHST message files contain important logon and logoff event information along with other messages that may not be as important. Alliance LogAgent lets you filter QHST messages to only include logon and logoff events if you wish.
Filter system values
Some of the IBM i system values have a low security value and can be suppressed by Alliance LogAgent. Alliance LogAgent provides a list of system values for your consideration and you can disable reporting changes if you decide they do not have security relevance. You can also add your own system values to the filter list.
These data reduction, metering, and filtering options give you a lot of control over the amount of information that Alliance LogAgent sends to your log collection server and SIEM solution. They can help you control costs and minimize the impact on your network. The original information remains in your IBM security audit journal and system history message file if needed for research or forensics.
In this age of “Bring Your Own”, we see the acronyms BYOD (device), BYOE (encryption), and BYOK (key) showing up all over the blogosphere. BYOK is a cloud computing security model that allows cloud service customers to use the provider’s server-side encryption software while bringing, and managing, their own encryption keys.
The idea of encryption (cryptography) is almost as old as the concept of written language: if a message might fall into enemy hands, then it is important to ensure that the enemy will not be able to read it. Most typically, encryption relies on some sort of “key” to unlock and make sense of the message, and that moves the problem of security to a new level: now that the message is secure, the focus shifts to protecting the key. In the case of access to cloud services, if we are encrypting data because we are worried about its security in an unknown cloud, then why would we trust the same cloud to hold the keys without using a key management solution?
BYOK can help an organization that wishes to take advantage of cloud services to address both regulatory compliance and data privacy concerns in a third-party multi-tenant environment. This approach allows a customer to use the encryption technology that best suits the customer's needs, including the cloud provider's underlying IT infrastructure. For example, Amazon’s Simple Storage Service (S3) protects data at rest using server-side encryption with customer-provided encryption keys (SSE-C), or “BYOK”. With the encryption key you provide as part of your data request, Amazon S3 manages the encryption (as it writes to disks) and decryption (when you access your data). You don't need to maintain any code to perform data encryption and decryption in S3. The only thing you do is manage the encryption keys you provide to the Amazon Simple Storage Service. When you upload an object, Amazon S3 uses the encryption key you provide to apply AES-256 encryption to your data and then removes the encryption key from memory. When you retrieve data, you must provide the same encryption key as part of your data request. Amazon S3 first verifies that the encryption key you provided matches, and then decrypts the data before returning it to you.
Important to Note: Amazon S3 does not store the encryption key you provide. Instead, they store a randomly salted HMAC value of the encryption key in order to validate future requests. The salted HMAC value cannot be used to derive the value of the encryption key or to decrypt the contents of the encrypted object. That means, if you lose the encryption key, you lose the object.
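The general pattern of validating a key without storing it can be sketched in a few lines of Python. To be clear, this is not Amazon's actual implementation (those details are not public); it simply illustrates how a salted HMAC lets a service recognize the correct key later while keeping nothing that could recover the key or decrypt the data.

```python
import hashlib
import hmac
import os

def register_key(customer_key: bytes):
    """Store only a salted HMAC of the customer's key, never the key itself."""
    salt = os.urandom(16)
    digest = hmac.new(salt, customer_key, hashlib.sha256).digest()
    return salt, digest  # safe to persist; cannot be reversed into the key

def verify_key(customer_key: bytes, salt: bytes, digest: bytes) -> bool:
    """Check a key presented on a later request against the stored HMAC."""
    candidate = hmac.new(salt, customer_key, hashlib.sha256).digest()
    return hmac.compare_digest(candidate, digest)

key = os.urandom(32)                       # the customer's AES-256 key
salt, digest = register_key(key)
assert verify_key(key, salt, digest)       # the correct key is accepted
assert not verify_key(b"\x00" * 32, salt, digest)  # a wrong key is rejected
```

Note the use of `hmac.compare_digest`, a constant-time comparison that avoids leaking information through timing differences.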
Any time a company decides it wants to host its applications in the cloud, or use a SaaS application where the company’s data will be stored in the cloud, their IT security professionals have to ask a series of questions.
- Can we encrypt the data? If so, who will have access to the keys?
- How will we perform key rotation and manage the lifecycle of the encryption keys?
- Is the cloud vendor using a proprietary encryption technology that prevents us from moving our data to another vendor?
- If we use 10 SaaS applications, will we have to administer 10 different key management solutions?
These questions are tough enough to answer when the data and encryption technologies are in a company’s own data center, where it has complete control over everything. In many cases, if encryption is provided, the cloud provider holds or has access to the keys, which creates another set of problems for the end user. For one thing, a third party having access to data in the clear is a violation of regulations such as PCI DSS, HIPAA, GLBA and others. Also, customers have yet to establish trust in cloud platforms or SaaS providers to protect their data. There have been many high-profile data breaches that make end users nervous. Customers also fear the U.S. government will subpoena access to their data without their knowledge or permission. For companies outside the U.S. that choose to use a U.S.-hosted cloud or app, there are data privacy and residency concerns. Instinctively, it feels a lot more secure to manage your own key and use BYOK instead of leaving it to the cloud provider.
A few things have become crystal clear:
- You need to know where your sensitive information is. Period.
- You need to know who has access to it. Not who you think has access to it, but who really has access to it.
- Wherever you put your sensitive information, you need to protect it. The most critical thing you need to do is to apply a strong defense in depth approach to data security, including the use of encryption and access controls.
- You need to be able to document, through audit logs and reports, who has actually accessed your information. This is true (and important) for sensitive data, as well as for compliance-regulated data.
- If you think that having your cloud service provider encrypt your data provides adequate security for your information, you probably need to rethink this.
It all boils down to this: When encrypted data is stored or processed in the cloud, the data and the keys must be kept separate and only the end-user should control the encryption keys.
Cloud storage options bring new economies and business efficiencies, but security can’t be ignored, and it can’t be simply outsourced to some other party. We believe that it is fundamental to good security to control all access to your data, including managing your own encryption keys. Managing encryption keys may sound daunting, but it doesn’t need to be. Our technology makes data security and encryption key management simple and straightforward. Our key management solution addresses all of the issues described above and can protect your data everywhere you have it stored.
"This article was originally posted on Pantheon’s blog. Pantheon is a website management platform for Drupal and WordPress."
“There are only two types of companies: those that have been hacked, and those that will be. Even that is merging into one category: those that have been hacked and will be again.” – Robert Mueller, Former FBI Director
Your website will be hacked. Your defense in depth security strategy will determine how severe the damages are.
This was the basis of “Defense in Depth: Lessons Learned from Securing 100,000 Drupal Sites”– a session presented by Nick Stielau (Pantheon), Chris Teitzel (Cellar Door Media), and myself (Townsend Security) at DrupalCon 2015.
Securing data is important (and required)
No company wants to see their name in the headlines for a data breach. A breach can mean loss of money (lots!), loss of customers, and loss of jobs. Data breaches are a very real thing and aren’t a matter of if, but when. As a Drupal developer, building security into web sites and applications needs to be a priority from the beginning, not something that can be “saved for phase two.”
If the business risks aren’t convincing enough, we found that nearly everyone in our DrupalCon 2015 session fell under one compliance regulation or another – sometimes multiple. Take colleges and universities for example (a group that represented a large segment of the room). They often fall under PCI DSS because they process payments with credit cards; HIPAA because they have a student wellness center; and FERPA simply because they are an educational institution.
Sensitive data includes more than social security numbers
As a security company, one problem that we often observe is that developers don’t always know what information needs to be protected (or that they need to protect anything at all). Sensitive data extends beyond the obvious credit card or social security number. Personally Identifiable Information (PII) now includes information such as (and not limited to):
- Email address
- Login name
- IP address
And hackers are great aggregators, so even losing what seems like trivial information can have an outsized impact. By knowing your first pet’s name or your mother’s maiden name, hackers are well on their way to hacking your account or ultimately breaching your web site.
Developers need to think about security, even if the client isn’t
“My client isn’t asking for security.” They might not be, but a good developer would inform their client of their risks and requirements (and budget impacts) and put all the proper security controls in place. In the event of a breach, the client is ultimately responsible but you can be sure that they will be pointing fingers at you and asking why their site wasn’t secure. As the developer, you don’t want to have a breached site tarnishing your reputation. When in doubt, err on the side of more security rather than less.
In the past, security has had a reputation for being difficult but things are getting easier. Still, there is no “silver bullet” and developers need to take a Defense in Depth approach to securing their Drupal sites. This means that multiple layers of security controls are in place.
Here are a few essential security tips that were discussed in our session at DrupalCon 2015.
1) Back It Up
Backups are going to save you. If something catastrophic happens to your site, you need to be able to roll back to the latest functioning version. (Depending on your situation prior to backup, there may be additional steps that you must take.) Every organization should have a backup process as part of their site operation guidelines. Additionally, the backups should be stored securely on a different server – if your server is breached, you can no longer trust any data contained on it and you want to be confident that you are restoring your web site from a secured backup. Services like NodeSquirrel can help.
2) Use Version Control
Use a source code management tool like Git so that, in the event of a breach, you can view any files in your source that may have been altered and revert your Git repo if needed. Git gives you a detailed record of which files have been changed, where they were changed, and how. While this may clear up many of your issues temporarily, you will want to follow procedure as if the site is still infected. Without source control, you would have to go line by line through the entire Drupal core and contributed/custom modules to find what changes the attacker made.
3) Use Secure Passwords & Two Factor Authentication (2FA)
Do not repeatedly use the same password. When your email gets hacked, you don’t want that to be the same password you use for logging in to your financial institution. Instead, use a tool like 1Password, LastPass, or KeePassX to create and manage unique passwords for all of your logins. Additionally, use Two Factor Authentication (2FA) whenever possible. Two Factor Authentication combines something you know (a password) with something you have (like a unique number sent to a cell phone or key fob). While it can be more cumbersome, it is easier to deal with than a data breach due to stolen credentials. Just ask Target.
4) Use Encryption
With nearly every compliance regulation calling for encryption, it is no longer an optional control. Luckily, there are several modules available that will leave you with less gray hair. Encrypt, Encrypt User, and Field Encrypt have made encrypting sensitive information easier than ever. The important thing to remember is, never leave your encryption key on the same server as your encrypted data, which leads us to…
5) Key Management
Encryption is said to be the hardest part of security and key management the hardest part of encryption (hackers don’t break encryption, they find your keys).
However, times are changing and key management doesn’t need to be difficult. Encryption, as well as API keys (PayPal, Authorize.net, MailChimp, etc.) should never reside on the same server as your Drupal installation. Rather, use an external key manager to manage your encryption and API keys. With modules like Key and Key Connection, key management is now almost “plug and play.”
There are more security tools available than ever, but it is up to the Drupal community at large to embrace best practices and take a defense in depth approach to data security. Just because a client didn’t ask for it, doesn’t make it optional. Breaches are not a matter of if, but when. What are you doing to prepare your site for the inevitable hack?
I recently watched a movie that really made me think about how the cryptographic landscape has evolved. Eighty years ago encryption was almost entirely the domain of military organizations. Now it is ingrained in nearly every business transaction that takes place every day. The average person hardly takes notice. Will strong encryption, secure key management, and complex passphrases be enough to stop the attacks of the future?
A Chink in the Armor
We can scarcely avoid them these days. The “smart phone” seems to have been the catalyst that blew our (at the very least my) cozy concept of privacy right out of the water. Most people trust that their data is secured by whatever cell service they use or by the social media site they frequent. Few people take responsibility for their own sensitive data management. Perhaps they do not feel there is a need, or perhaps they do not consider it sensitive.
I feel that this is not the right attitude. Consider, for instance, the webcam and mic. Fifteen years ago I needed to go to an electronics store to purchase a golf-ball-sized orb on a clip to use video chat, or spend upwards of $300 if I wanted to film my friends and me skiing. Those devices needed to be plugged in or turned on to work.
Now, just in my house alone, I have at least six HD cameras in the form of old smartphones, laptops, and gaming devices. Most of those devices are always on by design, and vulnerable to breach. Suppose there was sensitive information within view of one of those cameras, even if it’s just a calendar. It’s worth thinking about, especially considering that today just about every device comes with an integrated camera. Video game systems can listen to our conversations and respond to verbal cues (and in some cases movement). Software can now turn speech into text accurately and reliably. Taking this into account, sensitive data now goes far beyond a credit card or social security number. Everything you say or do in your own home is now, quite possibly, sensitive data.
Rising to Meet Future Threats
Very soon the smartphone will be among the least of our worries. Things like computerized smart glasses, smart watches, and other smart appliances will start to invade our workplaces and homes. This raises a very real security concern when you think about it. All it would take is one compromised smartwatch to capture a password from a whiteboard. In fact, it may not even be as sneaky as all that. I recently read a funny article that detailed three or four data security slip-ups. In each instance there was a photo of a news anchor with sensitive data, such as a password, visible in the shot behind them. These were photos deliberately taken, but without regard for what was captured in the background. Responsibility for the photos falls on the photographer in that case.
That article did make me think, though. Would crafty attackers be inclined to hack the cameras of personal devices? A smartphone that’s in your pocket most of the time might pose little threat, but what about a smart watch? Could a particularly determined attacker gain access to a database administrator's home appliances? What if they were able to learn a passphrase or record business conversations by hacking an entertainment system? It would be worth the attempt if it meant the keys to the kingdom.
Surely you’ve implemented, or at the very least heard of, the following security steps. These are the basics, the steps you take to prevent a conventional attack:
- Deploy strong encryption wherever possible, and adopt a strong key management solution.
- Do not keep passwords written down, especially on whiteboards.
- Use strong passwords; passphrases that include dashes or numbers are great.
- Develop and enforce policies regarding security best practices on employees’ personal and home devices.
Finally, let’s make the safe assumption that attackers are thinking outside of the box. It follows that we too must think creatively to stop data breaches. Now let’s pretend that an attacker has hacked a smartwatch or webcam and acquired a password to your database. That attacker has just bypassed most of the security measures you’ve put in place. The only thing that will stop an attack at this stage is a strong two-factor authentication solution. If one is deployed on the breached system, then when the attacker enters the stolen passphrase, instead of gaining access the screen displays an alert: “A text message has been sent to your phone, please enter the 6 digit PIN to continue.” Two-factor authentication saves the day. As more and more digital devices flood the workplace, the need for another line of defense becomes very real.
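The one-time codes behind that text message are typically generated with the HOTP algorithm from RFC 4226 (or its time-based variant, TOTP). Here is a compact Python implementation, shown against the published RFC 4226 test vectors:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    # The moving factor (counter) is an 8-byte big-endian value.
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    # Dynamic truncation: the low nibble of the last byte picks an offset.
    offset = digest[-1] & 0x0F
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

secret = b"12345678901234567890"  # the RFC 4226 test secret
assert hotp(secret, 0) == "755224"
assert hotp(secret, 1) == "287082"
```

Because the server and the user's device share only the secret and the counter, a stolen password alone is useless without the current one-time code.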
The Life Cycle of an Encryption Key
As more companies choose virtual storage environments and move data to the cloud, protection of encryption keys becomes an even more important part of a solid data security strategy.
Let’s take a look at how data protection can be achieved with a solid key management solution and the proper handling of the actual encryption keys. From the creation of a key, through its use, and eventually to its deletion, an encryption key management solution allows an administrator to securely and efficiently handle encryption keys throughout their life cycle. The key management administrator is responsible for performing a number of functions and must also follow industry best practices in order to secure the data they need to protect. There is far more to encryption key management than just storing the encryption key somewhere. Generally, a key storage device only provides storage of the encryption key, and you need to create the key elsewhere. There is a very real need, and there are very specific guidelines, that require you to store and manage your encryption keys away from the encrypted data. With an encryption key manager, there is a whole set of management capabilities and a suite of functions that provide dual control, create separation of duties, implement two factor authentication, generate system logs, and perform audit activities, along with managing the key life cycle.
This encryption key life cycle, which is defined by the National Institute of Standards and Technology (NIST), also requires that a crypto period be defined for each key. A crypto period is the length of time that a key should be used, and it is determined by a number of factors based on how much data is being protected and how sensitive that data is. NIST has defined some parameters for establishing crypto periods and provided guidance on best practices (see Special Publication 800-57, which has three parts). Using those guidelines, each key management administrator needs to determine how long a particular encryption key should be actively used before it is rotated or retired.
These are a few of the factors that go into establishing the crypto period for a key (which may be a few days, a few weeks, or anywhere up to one or two years, depending on the data you are trying to protect):
- How is the data being used?
- How much data is there?
- How sensitive is the data?
- How much damage will be done if the data is exposed or the keys are lost?
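As a rough illustration of how these factors come together, a key manager can compare a key’s age against its configured crypto period to decide when rotation is due. The sketch below is a hypothetical Python example; the two-year period and the function name are illustrative choices, not values mandated by NIST.

```python
from datetime import datetime, timedelta

# Illustrative crypto period; the real value depends on the factors above.
CRYPTO_PERIOD = timedelta(days=730)

def needs_rotation(activation_date: datetime, now: datetime) -> bool:
    """Return True once a key has been active longer than its crypto period."""
    return now - activation_date > CRYPTO_PERIOD

activated = datetime(2013, 1, 1)
print(needs_rotation(activated, datetime(2015, 6, 1)))  # True: past two years
print(needs_rotation(activated, datetime(2014, 6, 1)))  # False: still inside
```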
Now, let’s take a closer look at the phases of a key life cycle and what a key management solution should do during these phases:
The Encryption Key Life Cycle
Key Creation (Generation & Pre-Activation):
First, the encryption key is created and stored on the key management server (which can be a hardware security module (HSM), a virtual environment (VMware), or a true cloud instance). The key can be created by a sole administrator or through dual control by two administrators. The key manager creates the AES key through the use of a cryptographically secure random bit generator and stores the key, along with all its attributes, in the key storage database (which should be encrypted). The attributes stored with the key include its name, activation date, size, instance, the ability for the key to be deleted, as well as its rollover, mirroring, and key access attributes. The key can be activated upon its creation, or set to be activated automatically or manually at a later time. The key manager should be able to create keys of three different sizes: 128, 192, or 256 bits. The encryption key manager should track current and past instances, or versions, of the encryption key. You need to be able to choose whether the key can be deleted, whether it is mirrored to a failover unit, and which users or groups can access it. Your key manager should allow the administrator to change many of the key’s attributes at any time.
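The creation step can be sketched in a few lines: generate the key material with a cryptographically secure random source and record its attributes alongside it. This is a minimal illustration, assuming a hypothetical record layout; the attribute names mirror those described above but do not reflect any specific product’s schema.

```python
import secrets
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical key record; attribute names are illustrative only.
@dataclass
class ManagedKey:
    name: str
    bits: int                                       # 128, 192, or 256
    deletable: bool = True
    mirrored: bool = True
    activation_date: Optional[str] = None           # None = pre-activation
    material: bytes = field(default=b"", repr=False)  # never log key material

def create_key(name: str, bits: int = 256) -> ManagedKey:
    if bits not in (128, 192, 256):
        raise ValueError("AES keys must be 128, 192, or 256 bits")
    # secrets draws from the OS cryptographically secure random bit generator
    return ManagedKey(name=name, bits=bits, material=secrets.token_bytes(bits // 8))

key = create_key("hr-payroll-key", 256)
```

Note that `repr=False` keeps the key material out of logs and debug output, one small piece of protecting the key at rest.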
Key Use and Rollover (Activation through Post-Activation):
The key manager should allow an activated key to be retrieved by authorized systems and users for encryption or decryption processes. It should also seamlessly manage current and past instances of the encryption key. For example, if a key is rolled every year and the version is updated, the key manager should retain previous versions of the key but dispense only the current instance for encryption processes. Previous versions can still be retrieved in order to decrypt data encrypted with those versions of the key. The key manager should use transport layer security (TLS) connections to securely deliver the encryption key to the system or user requesting it, which protects the key from compromise in transit. The encryption key manager should also roll the key on a previously established schedule, or allow an administrator to roll it manually.
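The versioning behavior described above can be sketched as follows: only the latest instance is dispensed for encryption, while every past instance remains retrievable for decrypting older data. The class and method names here are hypothetical, a minimal illustration rather than any product’s API.

```python
import secrets

class KeyVersions:
    """Illustrative sketch of key instance (version) management."""

    def __init__(self, name: str):
        self.name = name
        self.versions: list[bytes] = [secrets.token_bytes(32)]

    def roll(self) -> None:
        """Scheduled or manual rollover: add a new current instance."""
        self.versions.append(secrets.token_bytes(32))

    def for_encryption(self) -> tuple[int, bytes]:
        """Dispense only the latest instance, tagged with its version number."""
        return len(self.versions) - 1, self.versions[-1]

    def for_decryption(self, version: int) -> bytes:
        """Ciphertext records its key version; retrieve that past instance."""
        return self.versions[version]

k = KeyVersions("sales-db-key")
v0, _ = k.for_encryption()   # version 0 before any rollover
k.roll()                     # e.g. annual rotation
v1, _ = k.for_encryption()   # now version 1 is dispensed for encryption
```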
Key Revocation and Back Up (Escrow):
An administrator should be able to use the key manager to revoke or deactivate a key so that it is no longer used for encryption requests. In certain cases the key can continue to be used to decrypt data previously encrypted with it, like old backups, but even that can be restricted. A revoked key can, if needed, be reactivated by an administrator, although this would be more an exception to the rule than common practice.
Key Deletion (Destruction):
If a key is no longer in use or if it has somehow been compromised, an administrator can choose to delete the key entirely from the key storage database of the encryption key manager. The key manager will remove it and all its instances, or just certain instances, completely and make the recovery of that key impossible (other than through a restore from a backup image). This should be available as an option if sensitive data is compromised in its encrypted state. If the key is deleted, the compromised data will be completely secure since it would be impossible to recreate the encryption key for that data.
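The phases above can be summarized as a small state machine: a key moves from pre-activation to active, may be revoked and exceptionally reactivated, and destruction is terminal. The transition table below is a simplified illustration of the life cycle described in this post, not the exact model from any standard.

```python
from enum import Enum

class KeyState(Enum):
    PRE_ACTIVATION = "pre-activation"
    ACTIVE = "active"
    REVOKED = "revoked"      # deactivated; decrypt-only or fully restricted
    DESTROYED = "destroyed"  # deleted; recovery impossible

ALLOWED = {
    KeyState.PRE_ACTIVATION: {KeyState.ACTIVE, KeyState.DESTROYED},
    KeyState.ACTIVE: {KeyState.REVOKED, KeyState.DESTROYED},
    # Reactivation of a revoked key is the exception, not common practice.
    KeyState.REVOKED: {KeyState.ACTIVE, KeyState.DESTROYED},
    KeyState.DESTROYED: set(),   # terminal: no transitions out
}

def transition(current: KeyState, target: KeyState) -> KeyState:
    if target not in ALLOWED[current]:
        raise ValueError(f"cannot move from {current.value} to {target.value}")
    return target
```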
Auditing and active monitoring of key management systems is a fundamental security practice for protecting critical assets such as the data a key management solution guards. The key management administrator also needs to implement access controls to be sure that only the users and applications who should be accessing encryption keys are actually doing so. A general practice of separating encryption keys across different departments or applications should be in place. For example, you may need to protect employee data in your HR system using an encryption key, but you wouldn’t want to use that same encryption key to protect sales data or credit card numbers. You need to segment the usage of encryption keys to particular data so that employees in HR access HR data using one key and salespeople access sales data using a different key.
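That segmentation practice boils down to a mapping from keys to the groups that own them, enforced at key-retrieval time. The sketch below uses hypothetical department, user, and key names purely for illustration.

```python
# Hypothetical assignment of keys to departments: each key is usable
# only by the group it was assigned to, never shared across departments.
KEY_ASSIGNMENTS = {
    "hr-employee-data-key": "human-resources",
    "sales-data-key": "sales",
    "cardholder-data-key": "payments",
}

USER_GROUPS = {
    "alice": "human-resources",
    "bob": "sales",
}

def may_use_key(user: str, key_name: str) -> bool:
    """Allow key retrieval only when the user's group owns the key."""
    return USER_GROUPS.get(user) == KEY_ASSIGNMENTS.get(key_name)

may_use_key("alice", "hr-employee-data-key")   # HR user, HR key: allowed
may_use_key("alice", "sales-data-key")         # cross-department: denied
```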
You can see that there is a significant difference between a key storage device and an encryption key management solution. Remember to always look for NIST and FIPS 140-2 compliant solutions to meet compliance requirements and follow security best practices!
To learn more about encryption key management download our ebook on Encryption Key Management Simplified.
Last week, Townsend Security CEO Patrick Townsend and I made the trip to Anaheim, CA for the IBM COMMON User Group Annual Meeting and Exposition, a meeting that brought about one thousand IBM users from around the world together to learn and network. Both Patrick and I gave classes on IBM i security. This was a great opportunity for us to learn what the top security concerns of IBM i users are today, and what strategies are most common for implementing defense-in-depth security on the IBM i.
First, it was great to learn that most IBM i users with sensitive data are encrypting. FIELDPROC, the field procedure exit point available on V7R1/V7R2, has made column-level encryption easier than ever, and many users are moving towards FIELDPROC-based encryption solutions. There was also greater interest in encryption key management, which is a critical part of any encryption solution.
One of the top questions we received regarding encryption and key management was, what are the benefits and challenges of IBM i native encryption libraries and key management? The IBM i native encryption and key management capabilities can be an easy way of protecting sensitive data on your IBM i. However, companies that must encrypt and decrypt large amounts of data in short periods of time often run into performance issues with the native encryption libraries, and companies that must meet compliance regulations such as PCI-DSS or FFIEC can run into compliance issues if those regulations require a NIST-compliant key management solution. If a user needs to manage encryption keys in a multi-platform environment, then using a third-party key management solution that can manage keys across multiple operating systems and platforms is critical.
Greater interest in system logging was also evident. A strong system logging solution will collect security events in real time and detect a data breach as it happens. Many IBM i users were already using a log collection solution such as Splunk, AlienVault, or IBM’s QRadar SIEM solution; however, many users were also facing the challenge of collecting security events that are generated in many different formats and need to be converted to a common format for collection, analysis, and alert management. The ability to convert these events and manage them in a cohesive way depends entirely on the capabilities of your system logging solution. We recommend IBM users focus on solutions, such as our Alliance LogAgent, that can convert logs from multiple formats into standard formats that can be read by your SIEM solution.
Lastly, Patrick presented on the importance of two-factor authentication on the IBM i. The importance of two-factor authentication has become more evident since many security experts concluded that some of the largest data breaches in the past few years, the Target and Anthem breaches among them, perhaps could have been prevented using two-factor authentication. Two-factor authentication is defined as an authentication method using two factors: something you have and something you know. With two-factor authentication on the IBM i, any time a user signs on, they also receive a text or phone call providing a PIN they must enter into their sign-on client. Since hackers are becoming more and more adept at discovering a person’s password, two-factor authentication would stop a hacker from signing on as that person if they didn’t also have access to their phone. Large companies such as Google and Apple are using these technologies already, and it won’t be long before two-factor authentication is a standard across all platforms.
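The server-side mechanics of that second factor are simple to sketch: generate a short one-time PIN, deliver it out of band (text or phone call), and verify the user’s entry with a constant-time comparison. This is a generic illustration of the pattern, not the implementation of any particular IBM i product.

```python
import hmac
import secrets

def generate_pin() -> str:
    """Six-digit one-time PIN from a cryptographically secure source."""
    return f"{secrets.randbelow(1_000_000):06d}"

def verify_pin(expected: str, entered: str) -> bool:
    # compare_digest avoids leaking information through comparison timing
    return hmac.compare_digest(expected, entered)

pin = generate_pin()       # delivered to the user's phone out of band
verify_pin(pin, pin)       # correct entry succeeds
```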
Every year, COMMON gives us an opportunity to connect with IBM i users and some of our customers as well. We use this opportunity to spread the knowledge we have about the best security solutions available for the IBM i and learn from the community what new security needs are coming down the line. If you weren’t able to attend COMMON this year, check out Patrick Townsend’s presentation on two-factor authentication, available online here.
Encryption & Key Management… why that ampersand is so important!
We frequently talk about a variety of different data security measures and the difficulty of making information truly secure in a multi-tenant environment. What steps are we taking to protect the most valuable assets we have as companies, such as our customer’s Personally Identifiable Information (PII)? Are we starting with the most critical steps in the process and then building out from there? Let’s make sure we have the basics covered!
Encryption is the first step to keeping information secure from anyone who accesses it maliciously; it is also a clear compliance requirement and a critical part of protecting data in any environment. Use industry-standard encryption such as the Advanced Encryption Standard (AES, also known as Rijndael), which is recognized world-wide as the leading standard for data encryption. Never use home-grown or non-standard encryption algorithms. Make sure your security partner will supply you with all of the sample code, binary libraries, applications, key retrieval and other tools you need to implement encryption and key management quickly and easily. Whether your data resides in the cloud, in a virtual environment, or in your own data center, always make sure you are using the right type of encryption to protect it.
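To see why home-grown algorithms are so dangerous, consider a toy XOR "cipher." It looks like encryption, yet an attacker who knows (or guesses) any fragment of the plaintext recovers the entire key instantly. This sketch is purely illustrative of the failure mode; it is the kind of scheme AES exists to replace.

```python
# A toy "home-grown" XOR cipher, shown only to illustrate why non-standard
# algorithms must never substitute for AES.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"secret!!"
ciphertext = xor_cipher(b"CardNumber=4111111111111111", key)

# Knowing just the first eight plaintext bytes reveals the whole key:
recovered = xor_cipher(ciphertext[:8], b"CardNumb")
assert recovered == key
```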
The second step to the security solution is Encryption Key Management. While encryption is critical to protecting data, it is only half of the equation. Most regulations require that encryption keys be stored and managed away from the data they protect, because storing encryption keys with the data they protect, or using non-standard methods of key storage, will not protect you in the event of a data breach. When encrypting information in your applications and databases, it is crucial to protect encryption keys from loss and to manage them securely from key creation through distribution, archival, and destruction (the full key lifecycle). In the past, key management was a complex and difficult task that required hardware and a team of security specialists to implement. Our key manager is available as a ready-to-use, easy-to-deploy solution that is compliant with the NIST FIPS 140-2 standard in a variety of instances:
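The principle of storing keys away from the data can be sketched simply: the application database holds only a key identifier next to the ciphertext, and the key material lives in the key manager, fetched at runtime. In the illustration below, an in-memory dictionary stands in for the remote key manager, and all names are hypothetical; a real deployment would retrieve the key over a mutually authenticated TLS connection.

```python
import secrets

# Stand-in for a remote key management server (illustrative only).
key_manager = {
    "hr-key-v3": secrets.token_bytes(32),
}

# The application database stores only the key ID alongside the
# ciphertext, never the key material itself.
db_row = {"ssn_ciphertext": b"\x9a\x11\x42\x07", "key_id": "hr-key-v3"}

def fetch_data_key(key_id: str) -> bytes:
    """Stand-in for a key-retrieval call to the key management server."""
    return key_manager[key_id]

data_key = fetch_data_key(db_row["key_id"])
```

Because the database row never contains key material, a stolen backup of the database alone is not enough to decrypt the data.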
In the Cloud - If you're running in Microsoft Azure or Amazon Web Services (AWS), the encryption key manager can run as a true cloud instance in a standard cloud, or deploy in a virtual private cloud for added data protection for sensitive applications.
VMware - Businesses are able to move their VMware infrastructure beyond traditional data centers and into the cloud with VMware’s vCloud. By using the same FIPS 140-2 compliant software found in physical appliances, enterprises can provably meet compliance requirements with a VMware-based encryption key manager running in the cloud.
A Cloud HSM is a physical appliance hosted in a secure cloud with real-time encryption key and access policy mirroring. Dedicated HSMs are hosted in geographically dispersed data centers under an ITIL-based control environment and are independently validated for compliance against PCI DSS and SOC frameworks. No access is available to the cloud vendor or any unauthorized user.
A Hardware Security Module (HSM) is a physical appliance or security device that is protected and tamper-evident. Built for high resiliency and redundancy, it has hot-swappable RAID (Redundant Array of Independent Disks) disk drives, dual power supplies, and dual network interfaces, and is deployed in your IT data center. Cloud applications can connect to a remote HSM over a secure, encrypted connection.
Do you have the basics covered? If you are unsure about the status of your defense-in-depth strategy to data security, contact one of the experts on the Townsend Security team. We have a variety of resources to help you answer your most pressing questions and a variety of solutions to make sure you are protecting your data the best way possible. At Townsend Security we also take a very different philosophy and approach:
- We think that when you buy an encryption key manager, you should be able to easily deploy the solution, get all your encryption projects done properly, and have very affordable and predictable costs.
- We understand that we live in a world where budget matters to our customers, so we do not charge client-side application or connection fees.
- We know that IT resources are limited and have done a huge amount of work to make our solutions easy with out-of-the-box integrations and simplified deployments. We also provide ready-made client-side applications, encryption libraries, source code samples, as well as SDKs for developers who need them.
Check out this eBook for more information:
Pretty Good Privacy (PGP) Encryption is a solid path to provable and defensible security, and PGP Command Line sets the standard for IBM enterprise customers.
Pretty Good Privacy (PGP) encryption is one of the most widely deployed whole file encryption technologies that has stood the test of time among the world’s largest financial, medical, industrial, and services companies.
It works on all of the major operating system platforms and makes it easy to deploy strong encryption to protect data assets and file exchange. PGP is also well recognized and accepted across a broad number of compliance regulations as a secure way to protect sensitive data as it is in transit to your trading partners. PGP encryption can help businesses meet PCI-DSS, HIPAA/HITECH, SOX, and FISMA compliance regulations.
Here are three key things to know about PGP encryption for your IBM System z Mainframe, and how to discuss them with your technology providers:
1) Always encrypt and decrypt sensitive data on the platform where it is created. This is the only way to satisfy regulatory security and privacy notification requirements.
Moving data to a PC for encryption and decryption tasks greatly increases the chances of loss and puts your most sensitive data at risk. To avoid defeating your data security goals, it is important to encrypt and decrypt data directly on the platform where it is created.
2) The best PGP encryption solutions manage PGP keys directly on the platform without the need for an external PC system, or key generation on a PC.
Using a PC to generate or manage PGP keys exposes the keys on the most vulnerable system. The loss of PGP keys may trigger expensive and time-consuming privacy notification requirements and force the change of PGP keys with all of your trading partners.
3) The best data security solutions will provide you with automation tools that help minimize additional programming and meet your integration requirements.
Most enterprise customers find that the cost of the software for an encryption solution is small compared to the cost of integrating the solution into their business applications. Data must be extracted from business applications, encrypted using PGP, transmitted to a trading partner, archived for future access, and tracked for regulatory audit. When receiving an encrypted file from a trading partner, the file must be decrypted, transferred to an IBM z library, and processed into the business application. All of these operations have to be automated to avoid expensive and time-consuming manual intervention.
While the IBM System z Mainframe has always had a well-earned reputation for security, IBM recently modernized and extended its high-end enterprise server with the new z13 model. With full cross-platform support, you can encrypt and decrypt data on the IBM Mainframe regardless of its origination or destination.
For over a decade Townsend Security has been bringing PGP encryption to Mainframe customers to help them solve some of the most difficult problems with encryption. As partners with Symantec we provide IBM enterprise customers running IBM System z and IBM i (AS/400, iSeries) with the same strong encryption solution that runs on Windows, Linux, Mac, Unix, and other platforms.
With the commercial PGP implementation from Symantec comes full support for the OpenPGP standard, which really makes a difference for enterprise businesses. Here are just a few of the things we’ve done with PGP to embrace the IBM System z Mainframe architecture:
- Native z/OS Batch operation
- Support for USS operation
- Text mode enhancements for z/OS datasets
- Integrated EBCDIC to ASCII conversion using built-in IBM facilities
- Simplified IBM System z machine and partition licensing
- Support for self-decrypting archives targeting Windows, Mac, and Linux!
- A rich set of working JCL samples
- As always, we offer a free 30-day PGP evaluation on your own IBM Mainframe
PGP Command Line is the gold standard for whole file encryption, and you don’t have to settle for less. When you base your company reputation on something mission-critical like PGP encryption, you deserve the comfort of knowing that there’s a support team there ready to stand behind you.
Listen to the podcast for more in-depth information and a discussion on how PGP meets compliance regulations, and how Townsend Security, the only Symantec partner on the IBM i (AS/400) platform as well as the IBM z mainframe providing PGP Command Line 9, can help IBM enterprise customers with defensible data security!