Secure memory handling in payment systems is one of the most important PA-DSS and PCI DSS topics. While any device involved in payment transaction processing needs to consider its memory use, back-end processing and authorisation systems play an equally critical role in this story.
All payment data passing through a payment device must be held in RAM for processing. Once the data is in RAM, it can be very easy for an intruder to programmatically scan process memory and extract the desired payment data: card/account numbers, Track 2 data and payment tokens, even clear encryption keys.
Certified QSAs use their internal tools to check for sensitive-data leftovers in memory as part of the forensic stage of a PA-DSS audit. Finding such leftovers effectively means failing the audit, and the latest payment-data-exploitation stories confirm the need for this rigour.
As part of EFTlab’s product development, our team evaluated the best techniques for data protection, with the following accepted as our internal standards.
There are several commonly used strategies to protect secure data in memory; each has strengths and weaknesses, so a combination of them is recommended.
The first and most obvious is to protect the system through access restriction. While it is up to system administrators to maintain a well-protected DMZ and to implement the latest security patches and recommendations, once past this layer an attacker’s work becomes all too easy.
Imagine having a naive implementation like this to store the PAN data:
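The original snippet is not reproduced here; the following is a minimal sketch of what such a naive handler might look like (the function name and PAN value are purely illustrative):

```cpp
#include <string>

// Hypothetical naive handler: the PAN lives in a plain std::string.
// When pan goes out of scope its heap buffer is freed but NOT wiped,
// so the digits linger in memory until the page happens to be reused.
std::string processPayment() {
    std::string pan = "4111111111111111";   // sensitive: full card number
    return pan.substr(pan.size() - 4);      // e.g. for a masked receipt
}   // pan's destructor releases the buffer without zeroing it
```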
Compiling and running the snippet above doesn’t mean that the data was wiped from memory. An inexpensive scan of all system buffers reveals that the memory page persists long after the application’s variable went out of scope. A tool like hexdump quickly finds our culprit:
Rather than considering in-memory encryption, EFTlab’s solution is output data sanitisation and minimisation of the time sensitive data is exposed in memory, through the use of secure allocators. Taking advantage of C++ constructors and destructors, every memory allocation is made in a way that ensures its buffer is wiped immediately after going out of scope. This is achieved through the SecureAllocator template class, which uses the following allocator and deallocator:
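EFTlab’s actual SecureAllocator is proprietary (see the disclaimer below), so the following is only an assumption-based sketch of the idea: the essential point is that `deallocate()` zeroes the buffer *before* handing it back to the heap.

```cpp
#include <cstddef>
#include <new>

// Hedged sketch only - not EFTlab's real implementation.
template <typename T>
struct SecureAllocator {
    using value_type = T;

    SecureAllocator() = default;
    template <typename U>
    SecureAllocator(const SecureAllocator<U>&) {}

    T* allocate(std::size_t n) {
        return static_cast<T*>(::operator new(n * sizeof(T)));
    }

    void deallocate(T* p, std::size_t n) {
        // A volatile pointer discourages the compiler from eliding the
        // wipe as a "dead store"; memset_s/explicit_bzero are alternatives
        // where available.
        volatile unsigned char* v = reinterpret_cast<volatile unsigned char*>(p);
        for (std::size_t i = 0; i < n * sizeof(T); ++i) v[i] = 0;
        ::operator delete(p);
    }
};

template <typename T, typename U>
bool operator==(const SecureAllocator<T>&, const SecureAllocator<U>&) { return true; }
template <typename T, typename U>
bool operator!=(const SecureAllocator<T>&, const SecureAllocator<U>&) { return false; }
```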
For a more in-depth explanation, see the C++ allocator documentation. The new implementation then looks like this, and all memory is zeroed after use:
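A condensed, self-contained illustration of what such an implementation might look like, assuming a minimal wiping allocator plugged into `std::basic_string` (the names here are ours, not EFTlab’s API):

```cpp
#include <cstddef>
#include <new>
#include <string>

// Wiping allocator: zeroes the buffer before returning it to the heap.
template <typename T>
struct WipingAllocator {
    using value_type = T;
    WipingAllocator() = default;
    template <typename U> WipingAllocator(const WipingAllocator<U>&) {}
    T* allocate(std::size_t n) { return static_cast<T*>(::operator new(n * sizeof(T))); }
    void deallocate(T* p, std::size_t n) {
        volatile unsigned char* v = reinterpret_cast<volatile unsigned char*>(p);
        for (std::size_t i = 0; i < n * sizeof(T); ++i) v[i] = 0;  // wipe before free
        ::operator delete(p);
    }
};
template <typename T, typename U>
bool operator==(const WipingAllocator<T>&, const WipingAllocator<U>&) { return true; }
template <typename T, typename U>
bool operator!=(const WipingAllocator<T>&, const WipingAllocator<U>&) { return false; }

// The PAN's buffer is now zeroed the moment the string leaves scope.
// Caveat: short-string optimisation can keep very small values inline on
// the stack, bypassing the allocator - real deployments must account for this.
using SecureString = std::basic_string<char, std::char_traits<char>, WipingAllocator<char>>;

std::string maskedPan(const SecureString& pan) {
    // Only the masked tail ever leaves the secure buffer.
    return "**** **** **** " + std::string(pan.end() - 4, pan.end());
}
```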
Disclaimer: EFTlab’s SecureAllocator is not released under the GPL, but feel free to contact EFTlab Support to get your own copy.
In the example above we covered in-memory storage, but sensitive data stored on the host’s persistence layer (the database) also needs to be secured. As in the previous use case, two main strategies can be applied: full database encryption enforced by the database engine, and HSM-assisted data encryption. A combination of both is preferable, although full database encryption brings disaster-recovery concerns of its own.
EFTlab’s solution is to use an HSM (Thales/SafeNet) for sensitive data storage. The HSM first generates a number of random key-encryption keys (KPK/KEK), and BP-Switch then uses these keys to rotate the data-protection keys (DPK/DEK). These in turn encrypt/decrypt all sensitive data routed into the system’s persistence layer, so all the heavy cryptographic work is handled in the secure zone of the TRMS HSM.
BP-Switch still needs to handle data padding to the block size required by the data-encryption algorithm in use (8 bytes for DEA). Based on our research and experience, the following options were considered:
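The candidate list itself is not reproduced here, but the critique below targets the familiar trailing-pad schemes. As an assumed illustration of one of them, here is ISO/IEC 7816-4 style padding: append a 0x80 marker byte, then zeros, up to the 8-byte DEA block boundary.

```cpp
#include <cstdint>
#include <stdexcept>
#include <vector>

// ISO/IEC 7816-4 style trailing padding (illustrative candidate only).
std::vector<uint8_t> padIso7816(std::vector<uint8_t> data, size_t block = 8) {
    data.push_back(0x80);                                   // mandatory marker byte
    while (data.size() % block != 0) data.push_back(0x00);  // zero-fill to the boundary
    return data;
}

std::vector<uint8_t> unpadIso7816(std::vector<uint8_t> data) {
    // Unpadding must scan backwards over the trailer - this tail-handling
    // is exactly the overhead criticised below.
    while (!data.empty() && data.back() == 0x00) data.pop_back();
    if (data.empty() || data.back() != 0x80) throw std::runtime_error("bad padding");
    data.pop_back();                                        // drop the 0x80 marker
    return data;
}
```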
Unfortunately, none of the above is secure enough. For example, ECB encryption with an 8-byte trailer can still leak information across a larger number of data records. Some of these options are also not applicable to binary data storage, as the values x80 and binary zero x00 are completely acceptable data values. The final problem identified was performance: in all of the options mentioned, the application needs to handle the last byte(s), whereas sequential reading starts from the beginning. This causes unnecessary overhead for the system.
By combining all of the techniques above and adding our own internal requirements, EFTlab’s team devised a new padding mechanism which places the padding length in front of the data, followed by a number of randomly generated bytes of padding (x04 x28 x3D x23 xC9 x31 x32 x33). This approach allows fast addition and processing of padding, without suffering from the trailing-byte problem. An additional advantage is that randomly padded data always encrypts differently, even when the underlying data is identical, making it less vulnerable to known types of cryptographic attack. The following output table demonstrates and explains all applications.
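Reconstructing from the example bytes (x04 x28 x3D x23 xC9 | x31 x32 x33 for the payload “123”), a leading length byte appears to carry the count of random filler bytes, with the payload at the end of the block. The exact on-the-wire format in BP-Switch may differ; the sketch below is an assumption, and a real deployment would draw the filler from a CSPRNG rather than `std::mt19937`.

```cpp
#include <cstdint>
#include <random>
#include <stdexcept>
#include <vector>

// Assumed reconstruction of the leading-length random padding scheme:
// [filler count][random filler bytes][payload], padded to the block size.
std::vector<uint8_t> padLeading(const std::vector<uint8_t>& data, size_t block = 8) {
    size_t r = (block - ((data.size() + 1) % block)) % block;  // random filler count
    std::vector<uint8_t> out;
    out.push_back(static_cast<uint8_t>(r));
    std::mt19937 rng(std::random_device{}());                  // NOT a CSPRNG - demo only
    std::uniform_int_distribution<int> byte(0, 255);
    for (size_t i = 0; i < r; ++i) out.push_back(static_cast<uint8_t>(byte(rng)));
    out.insert(out.end(), data.begin(), data.end());
    return out;
}

std::vector<uint8_t> unpadLeading(const std::vector<uint8_t>& in) {
    if (in.empty() || in[0] + 1u > in.size()) throw std::runtime_error("bad padding");
    // One forward skip over length byte + filler: no trailing-byte scan needed.
    return std::vector<uint8_t>(in.begin() + 1 + in[0], in.end());
}
```

Because the filler is random, the same payload produces a different plaintext block on every call, so identical records never encrypt identically.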
To support implementation of our product, this functionality has been added to the BP-Tools 15.09 release, as detailed in the screens below. We welcome your feedback on this.