The OASIS Cover Pages: The Online Resource for Markup Language Technologies
Last modified: August 16, 2013
Cryptographic Key Management

Feedback request: Please send email to report errors or glaring omissions, and to suggest improvements.


Key Management: Introductions and Overviews

"Key management is the hardest part of cryptography and often the Achilles' heel of an otherwise secure system." — Bruce Schneier, Preface to Applied Cryptography, Second Edition



ANSI X9 Financial Industry Standards

"The Accredited Standards Committee X9 (ASC X9) has the mission to develop, establish, maintain, and promote standards for the Financial Services Industry in order to facilitate delivery of financial services and products. Under this mission ASC X9 fulfills its objectives to: (1) Support (maintain, enhance, and promote use of) existing standards; (2) Facilitate development of new, open standards based upon consensus; (3) Provide a common source for all standards affecting the Financial Services Industry; (4) Focus on current and future standards needs of the Financial Services Industry; (5) Promote use of Financial Services Industry standards; (6) Participate and promote the development of international standards... ASC X9 brings more than three decades of leadership in financial industry standards development. Its members include bankers, vendors, government agencies, associations, security experts, software producers, consultants and others who address technical issues, find the best solutions, and codify them into American National Standards (ANS). More than 200 organizations are members of ASC X9... ASC X9F (Data and Information Security): the Data and Information Security Subcommittee X9F drafts standards with the potential to reduce financial data security risk and vulnerability."

James Randall: "Key management in the ANSI X9 context means key generation, key distribution, key life cycle facility requirements, modes, IVs (cipher initial values), and message formats." Examples of ANSI X9 specifications in the key management space:

  • Key Management Protocols:
    • X9.69 Framework for Key Management Extensions
    • X9.70 Symmetric Key Distribution Using Public Key
    • X9.73 Cryptographic Message Syntax
    • X9.77 Public Key Infrastructure Protocols
  • Retail Key Management:
    • X9.24 Symmetric Key Management
    • X9.65 Triple Data Encryption Algorithm (TDEA) Implementation
    • X9.69 Key Extensions
    • X9.79 PKI Part 1 Framework
  • Public Key Cryptography:
    • X9.30 Public Key Cryptography Using Irreversible Algorithms (Parts 1-2)
    • X9.31 Digital Signatures Using Reversible Public Key Cryptography
    • X9.42 Agreement of Symmetric Keys Using Diffie-Hellman
    • X9.44 Key Establishment Using Integer Factorization Cryptography
    • X9.62-2005 The Elliptic Curve Digital Signature Algorithm (ECDSA)
    • X9.63 Key Agreement and Key Transport Using Elliptic Curve Cryptography
    • X9.65 TDEA Implementation Standard
    • X9.70 Symmetric Key Distribution Using Public Key
    • X9.80 Prime Number Generation, Primality Testing, and Primality Certificates
    • X9.82 Parts 1-4 Random Number Generation
    • X9.92 Digital Signature Algorithms Giving Partial Message Recovery
  • Other:
    • X9.45 Enhanced Controls
    • X9.49 Secure Remote Access
    • X9.55 Extensions
    • X9.57 Certificate Management
    • X9.68 Short Certificates
    • X9.79 Public Key Infrastructure (PKI) Policy and Practices Framework
    • X9.84 Biometric Security
    • X9.88 Enhanced Digital Signatures
    • X9.95 Trusted Time Stamp
    • X9.96 XML Cryptographic Message Syntax (XCMS) [same abstract values, security levels, cryptographic processing as in X9.73]
    • X9.102 Wrapping of Keys and Associated Data
    • X9.111 Penetration Testing
    • X9.112 Wireless Part 1: General
    • X9.113 Trusted Transaction

Example Key Management Standards available from the ANSI X9 Store:

  • ANSI/X9 X9.24-2-2006 Retail Financial Services Symmetric Key Management Part 2: Using Asymmetric Techniques for the Distribution of Symmetric Keys. Accredited Standards Committee X9 Incorporated. 13-Jan-2006. "This part of ANS X9.24 covers the management of keying material used for financial services such as point of sale (POS) transactions, automatic teller machine (ATM) transactions, messages among terminals and financial institutions, and interchange messages among acquirers, switches and card issuers. The scope of this part of X9.24 may apply to Internet-based transactions, but only when such applications include the use of a TRSM (as defined in section 7.2 of ANS X9.24 Part 1) to protect the private and symmetric keys. This part of ANS X9.24 deals with management of symmetric keys using asymmetric techniques and storage of asymmetric private keys using symmetric keys. Additional parts may be created in the future to address other methods of key management. This part of ANS X9.24 specifies the minimum requirements for the management of asymmetric keying material and TDEA keys used for ensuring the confidentiality and integrity of the private keys of asymmetric key pairs when stored as cryptograms on a database. Addressed are all components of the key management life cycle including generation, distribution, utilization, storage, archiving, replacement and destruction. Requirements for actions to be taken in the event of key compromise are also addressed. This part of ANS X9.24 presents overviews of the keys involved in the key transport and key agreement protocols, referencing other ANSI standards where applicable..."

  • ANSI/X9 TR-31-2005 Interoperable Secure Key Exchange Key Block Specification for Symmetric Algorithms. Accredited Standards Committee X9 Incorporated. 01-Sep-2005. "This document describes a method consistent with the requirements of ANS X9.24 Retail Financial Services Symmetric Key Management Part 1 for the secure exchange of keys and other sensitive data between two devices that share a symmetric key exchange key. This method may also be used for the storage of keys under a symmetric key. This method is designed to operate within the existing capabilities of devices used in the retail financial services industry. This document is not a security standard and is not intended to establish security requirements. It is intended instead to provide an interoperable method of implementing security requirements and policies...."

  • ANSI/X9 X9.24-1-2004 Retail Financial Services Symmetric Key Management Part 1: Using Symmetric Techniques. Accredited Standards Committee X9 Incorporated. 04-Feb-2004. "This part of ANS X9.24-2004 covers both the manual and automated management of keying material used for financial services such as point-of-sale (POS) transactions (debit and credit), automated teller machine (ATM) transactions, messages among terminals and financial institutions, and interchange messages among acquirers, switches and card issuers. This part of ANS X9.24-2004 deals exclusively with management of symmetric keys using symmetric techniques. Additional parts may be created in the future to address other methods of key management. This part of ANS X9.24-2004 specifies the minimum requirements for the management of keying material. Addressed are all components of the key management life cycle including generation, distribution, utilization, storage, archiving, replacement and destruction of the keying material..."

  • ANSI/X9 X9.96-2004 XML Cryptographic Message Syntax (XCMS). This Standard specifies a text-based Cryptographic Message Syntax (CMS) represented using XML 1.0 encoding that can be used to protect financial transactions and other documents from unauthorized disclosure and modification. The message syntax has the following characteristics: (1) Protected messages are represented using the Canonical XML Encoding Rules (cXER), and can be transferred as verbose markup text or in a compact, efficient binary representation using the Basic Encoding Rules (BER) or the canonical subset of BER, the Distinguished Encoding Rules (DER). (2) Messages are protected independently. There is no cryptographic sequencing (e.g., cipher block chaining) between messages. There need not be any real-time connection between the sender and recipient of the message. This makes the syntax suitable for use over store-and-forward systems, e.g., Automated Clearing House (ACH) or the Society for Worldwide Interbank Financial Telecommunication (SWIFT). Standard attributes are defined to allow applications to maintain relationships between messages, if desired. (3) The syntax is algorithm independent. It supports confidentiality, integrity, origin authentication, and non-repudiation services. Only ANSI X9-approved algorithm(s) may be used for message digest, message encryption, digital signature, message authentication, and key management. (4) Support for biometric security, enhanced certificate techniques such as compact domain certificates and key management extensions such as Constructive Key Management (CKM) are provided. (5) Selective field protection can be provided in two ways. First by combining multiple instances of this syntax into a composite message. And second by using identifier and type markup tag names to select message components to be protected in a single message, which allows reusable message components to be moved between documents without affecting the validity of the signature. (6) Precise message encoding and cryptographic processing requirements are provided...

  • ANSI/X9 X9.69-2006 Framework for Key Management Extensions. 26-Sep-2007. 31 pages. "This Standard defines methods for the generation and control of keys used in symmetric cryptographic algorithms. The Standard defines a constructive method for the creation of symmetric keys, by combining two or more secret key components. The Standard also defines a method for attaching a key usage vector to each generated key that prevents abuses and attacks against the key. The two defined methods can be used separately or in combination." (A minimal, non-normative sketch of this component-combination idea appears after this list.)

  • Guideline for Financial Services. TG-3-2006. Retail Financial Services Compliance Guideline. Online PIN Security and Key Management. 53 pages. "This guideline presents mandatory control objectives relating to general procedures and controls. The mandatory control objectives are based on requirements set forth in X9.8 (Banking, Personal Identification Number Management and Security Part 1), X9.24-2004 (Retail Financial Services Symmetric Key Management, Part 1: Using Symmetric Techniques) and X9.24-2005 (Retail Financial Services Symmetric Key Management, Part 2: Using Asymmetric Techniques for Distribution of Symmetric Keys). Sections 4.4 and 5.5 of this guideline include additional control objectives related to miscellaneous security issues, which are considered best business practices but are not covered under existing X9 standards. Each organization administering the review should evaluate the mandatory and optional control objectives for applicability... Each item within section 4's subsections addresses a different requirement. Subsection 4.1, General Security Procedures Control Objectives, involves general security practices affecting all portions of the security system. Subsection 4.2, Tamper Resistant Security Module Management Control Objectives, deals with general security controls applying to all TRSMs. Subsection 4.3, General Key Management Control Objectives, deals with general security controls affecting the management and security of encryption keys. Subsection 4.4, Additional Key Management Procedure Control Objectives, represents best business practices although not referenced to any specific ANSI mandate... Each item within section 5's subsections addresses a different requirement. Subsection 5.1, General Asymmetric Control Objectives, involves general asymmetric security practices affecting all portions of the asymmetric security system. Subsection 5.2, Asymmetric Key Management Control Objectives, addresses the general security controls affecting the management and security of asymmetric encryption keys. Subsection 5.3, Mutual Authentication Management Control Objectives, covers the requirements for ensuring all entities are fully authenticated. Subsection 5.4, Credential Management Control Objectives, deals with the security controls required for certificate authorities and other credential management. Subsection 5.5, Additional Asymmetric Management Control Objectives, represents best business practices although not referenced by any specific ANSI mandate..."

  • ANSI/X9 TG-26-1999 Technical Guideline Managing Risk and Migration Planning: Withdrawal of ANSI X9.17. Financial Institution Key Management (Wholesale). Accredited Standards Committee X9 Incorporated / 01-Dec-1999. "Based on certain attacks on 56-bit DES described in detail in chapter 6, it is the consensus of X9 that ANSI X9.17-1995 no longer provides sufficient key management security to protect the wholesale financial industry. Hence, X9.17 is being withdrawn. This Guideline discusses: (1) using new technology to provide key management in support of the wholesale financial industry; (2) transitioning from X9.17 to the new technology; (3) measures that can be taken to ameliorate the risk inherent in X9.17 during the transition period..."
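
The following sketch illustrates the general idea behind X9.69's "constructive" key creation, i.e., assembling a working key from two or more secret components and binding a usage vector to it. It is not the normative X9.69 construction; the component values, flag meanings, and the HMAC-based binding step are assumptions made purely for illustration.

    # Illustrative only: generic split-knowledge key assembly with a usage
    # vector bound into the derivation. NOT the normative ANSI X9.69 method;
    # component values and flag meanings below are invented.
    import hmac, hashlib, secrets

    def combine_components(*components: bytes) -> bytes:
        """XOR two or more equal-length secret key components into one secret."""
        assert len(components) >= 2 and len(set(map(len, components))) == 1
        out = bytes(len(components[0]))
        for c in components:
            out = bytes(a ^ b for a, b in zip(out, c))
        return out

    def bind_usage(base_secret: bytes, usage_vector: bytes) -> bytes:
        """Derive the working key so that altering the usage vector changes the key."""
        return hmac.new(base_secret, b"usage:" + usage_vector, hashlib.sha256).digest()

    # Two key components held by separate custodians (hypothetical values).
    comp_a = secrets.token_bytes(32)
    comp_b = secrets.token_bytes(32)

    # Usage vector: invented bit flags, e.g. 0x01 = encrypt, 0x02 = MAC only.
    usage = bytes([0x01])

    working_key = bind_usage(combine_components(comp_a, comp_b), usage)
    print(working_key.hex())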



DMTF Security Modeling Working Group

[Incomplete. TBD]

Security is one of the functional areas of the DMTF Server Management Working Group.

Per the presentation by Jon Hass at KMS 2008, the DMTF Security [Modeling] Working Group is creating Credential Profiles, where a Profile is a specification that normatively defines the data model interface between a WBEM Server and a WBEM Client. This includes Abstract and Specialized Profiles.

The KMS 2008 presentation "provides a brief review of the DMTF web based manageability technologies and an overview of the credential and key representation and management data models produced by the DMTF Security Working Group. In addition to defining and publishing the Simple Identity and Role Based Authorization management interface definitions, the DMTF Security WG has been working on credential management interface definitions, where credentials include key-based credentials such as public key infrastructure (PKI) X.509 certificates, as well as biometric credentials..."

Excerpt: DSP 1082 Credential Management Profile (Abstract Profile) provides a capability to represent and manage credentials in a managed system. Use case: Definition for the generic CIM model for managing different credentials. DSP 1096 Certificate Management Profile (Specialized Profile) provides a capability to represent and manage X509 certificates. Use cases: (1) Import/management of asymmetric keys; (2) Management of key stores; (3) PKCS#10 certificate signing request (CSR) generation; (4) Export/import/management of X509 certificates and certificate revocation lists (CRL)... (Examples) Import Asymmetric Key to Keystore, Import and Export X509 Certificates...
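
The DMTF profiles define CIM/WBEM data-model interfaces, not a programming API, so the sketch below only shows the kind of artifact the Certificate Management Profile's use cases revolve around: a PKCS#10 certificate signing request. It uses the third-party 'cryptography' package, and the subject name is an invented example.

    # Illustrative only: produce a PKCS#10 CSR of the sort the DSP 1096 use
    # cases manage. Requires the third-party 'cryptography' package; the
    # common name below is a made-up example.
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([
            x509.NameAttribute(NameOID.COMMON_NAME, u"managed-system.example.org"),
        ]))
        .sign(key, hashes.SHA256())
    )

    print(csr.public_bytes(serialization.Encoding.PEM).decode())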

Work planned or underway in the DMTF Security Working Group: (1) Develop CIM Schema Classes for symmetric keys, biometric credentials, and other security credentials; (2) Design Profiles for management of: symmetric keys, biometric and other security credentials; credential store authorization (this ties into DMTF DSP 1039 Role Based Authorization Profile)...



GlobalPlatform Key Management System

GlobalPlatform Key Management System Functional Requirements, Version 1.0, November 2003 (64 pages). Available from the GP Systems Specifications repository. Excerpt: "The GlobalPlatform Key Management Functional Requirements define the minimum functional requirements required for the support of keys within the Security Architecture of GlobalPlatform. The scope extends the requirements for key management to include the requirements for cryptographic services. Application specific keys may also be supported by a GlobalPlatform Key Management System. The document identifies the general requirements for robust key management. The document covers how the keys are generated, stored, how they are used and how they are exchanged between entities. At a more specific level the document covers the cryptographic keys for the Issuer Security Domain and any supplementary Security Domains, plus the services that are needed for card enablement and application loading. Furthermore, the document covers the key management services needed by the Application Provider when personalizing applications, plus the types of keys potentially needed or supported in this area... The intention of GlobalPlatform KMS Functional Requirements is to serve as input to the GlobalPlatform Profile Specification, GlobalPlatform Messaging Specification and GlobalPlatform Scripting Specification providing a means to unambiguously identify the Key Values, the cryptographic services and the messages containing cryptographic methods, used within GlobalPlatform. Furthermore it provides a GlobalPlatform view of the roles and responsibilities of the entities handling these keys... This document is intended to be read by card issuers, application providers, system developers and system integrators playing a role in the production of Key Management systems for GlobalPlatform Chip Cards..."

Key Management System (KMS) Working Group. GlobalPlatform Systems Committee: The aim of the Systems Committee is to define open specifications that can be used for the back-office infrastructure of any smart card program, creating an industry and technology neutral smart card management environment which is able to support all market requirements, with specific support for GlobalPlatform cards... The Systems Committee's Key Management System (KMS) Working Group seeks "(1) to define interfaces between Key Management Systems (KMS), enabling control of key usage and ensuring interoperability between systems that share keys during the lifecycle of GlobalPlatform cards and applications; (2) To develop, maintain and evolve the KMS Requirements Specification..."

"The GlobalPlatform Specifications are freely available and have been adopted in the Americas, Europe, Africa and Asia Pacific by many public and private bodies. As of October 2008, 90 known GlobalPlatform-based implementations had been reported worldwide and an estimated 305.7 million GlobalPlatform-based smart cards had been deployed. Additionally, two billion mid range USIM/SIM cards worldwide are estimated to use GlobalPlatform card technology to enable over-the-air (OTA) application downloads for 3G and GSM mobile networks....."

GlobalPlatform provides a universally recognized and globally implemented suite of smart card specifications, together with market and application specific configurations of those specifications and supporting documents. Covering the entire smart card infrastructure - cards, devices and systems - these technical documents offer a dynamic and complete technology platform for the development of multi-application, multi-actor and multi-business model smart card programs. Being industry neutral, they can be used by issuers across all sectors...The primary objective of GlobalPlatform is to establish, maintain and drive adoption of specifications to enable an open and interoperable infrastructure for smart cards, devices and systems to simplify and accelerate the development, deployment and management of applications across industries and geographies. GlobalPlatform develops models and conventions needed to facilitate cross-industry application loading and management, such as back-end card systems, security, key management and application deployment."



IEEE P1619.3 Security in Storage Working Group (SISWG), Key Management

The IEEE Security in Storage Working Group (SISWG) is working on standards related to encrypted storage media, including both encryption and key management. Membership is open to anyone... IEEE Project P1619.3: "Key Management" was approved in February 2007 to develop a Standard for Key Management Infrastructure for Cryptographic Protection of Stored Data. This project was chartered through December 31, 2011. The goals of the Key Management group are to: (1) Create a standard that allows secure interchange of encryption keys between devices that encrypt stored data and devices that manage keys; (2) Understand existing standards and use where possible to expedite the creation of this standard; (3) Raise public awareness of P1619.3 and encourage adoption; (4) Facilitate interchange by providing open source reference implementations.

On February 18, 2009, Matt Ball announced the availability of IEEE P1619.3/D6, edited by Bob Lockhart. The draft is available for comment through March 20, 2009. Details: IEEE P1619.3/D6. Draft Standard for Key Management Infrastructure for Cryptographic Protection of Stored Data, prepared by the IEEE Security in Storage Working Group of the Computer Society Information Assurance Committee (Unapproved IEEE Standards Draft, 90 pages). Informative Annex D provides the XML Schema Definitions.

Excerpt: "This [draft IEEE P1619.3/D6] standard specifies an architecture for the key management infrastructure for cryptographic protection of stored data, describing interfaces, methods and algorithms. It defines methods for the storage, management, and distribution of cryptographic keys used for the protection of stored data. This standard augments existing key management methodologies to address issues specific to cryptographic protection of stored data. This includes stored data protected by compliant implementations of other standards in the IEEE 1619 family.

Cryptographic mechanisms are used as a strong way to provide secure access control to stored data. Encrypting the stored data makes it accessible only to entities that have access to the key that can decrypt the data. Therefore, the proper management of cryptographic keys is essential to the effective use of cryptography for security. Mismanagement of keys may grant access to unauthorized parties or render critical information inaccessible to all. Intelligent key management adds security by selectively distributing keys according to an organization's information security policy. There are several advantages of centralizing key management: (1) Centralized policy management; (2) Centralized audit and reporting; (3) Segregation of the security administrators from the storage or IT administrators.

Therefore this standard specifies a key management infrastructure in which a set of key management servers (KM Server) provide key management services to clients. The key management services defined in this standard include distribution, generation, availability, enforcement, storage, archival and audit of keys and policies. A critical issue is how to assign encryption keys to broadly defined data sets. This standard proposes a matching system that allows the KM Server to provide keys to clients based on those attributes most easily available to the clients, namely data or storage identifiers. The messages sent between clients and KM Server are encoded in an extensible Key Exchange Structure. This structure associates a key to a data set, and lists policies governing key use.
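
The sketch below mimics this matching idea: a KM Server hands out a key and its use policy based on the identifier a storage client can most easily present. It is not the draft's Key Exchange Structure or its XML encoding; the class names, the "lun:" identifier, and the policy fields are invented for illustration.

    # Illustrative only: not the P1619.3 Key Exchange Structure. A KM Server
    # matches a client-supplied storage/data identifier to a key plus policy.
    from dataclasses import dataclass, field
    import secrets

    @dataclass
    class KeyRecord:
        key_id: str
        key_material: bytes
        policy: dict = field(default_factory=dict)   # e.g. {"algorithm": "AES-256-XTS", "export": False}

    class KMServer:
        def __init__(self):
            self._by_storage_id = {}                  # storage/data identifier -> KeyRecord

        def register(self, storage_id: str, policy: dict) -> KeyRecord:
            rec = KeyRecord(key_id=secrets.token_hex(8),
                            key_material=secrets.token_bytes(32),
                            policy=policy)
            self._by_storage_id[storage_id] = rec
            return rec

        def get_key(self, storage_id: str) -> KeyRecord:
            """Client supplies the identifier it knows; the server matches it to a key."""
            return self._by_storage_id[storage_id]

    server = KMServer()
    server.register("lun:600508b4000156d7", {"algorithm": "AES-256-XTS", "export": False})
    record = server.get_key("lun:600508b4000156d7")
    print(record.key_id, record.policy)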

Because of the complex nature of key management, several models have been developed to provide various viewpoints into the architecture of a key management infrastructure and describe the entities and objects that make up these environments. The key management models presented in the standard are as follows: (1) Key Management Architecture Model: the basic, high level model for key management infrastructures that provides a foundation for the remaining models and detailed information developed in the later sections of this standard. (2) Key Management Conceptual Model: provides the basic model for the internal and external organization of the primary components in a key management infrastructure. (3) Key Lifecycle Model: the basic state machine model that encompasses the entire set of standard key states and the transitions between these states as maintained by the key management systems that implement this standard. (4) Key Management Sequence Models: a series of models that describe the high level interactions between the primary components in the Key Management Conceptual Model. (5) Key Management Object Models. (6) Key Management Operation Models...
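
To make the Key Lifecycle Model concrete, here is a toy state machine. The excerpt above does not enumerate the draft's normative states, so the state names and allowed transitions below are assumptions drawn from common key-management practice, not the P1619.3 set.

    # Illustrative key lifecycle state machine; states and transitions are
    # assumed (typical practice), not the normative P1619.3 definitions.
    from enum import Enum, auto

    class KeyState(Enum):
        PRE_ACTIVATION = auto()
        ACTIVE = auto()
        DEACTIVATED = auto()
        COMPROMISED = auto()
        DESTROYED = auto()

    ALLOWED = {
        KeyState.PRE_ACTIVATION: {KeyState.ACTIVE, KeyState.DESTROYED},
        KeyState.ACTIVE:         {KeyState.DEACTIVATED, KeyState.COMPROMISED},
        KeyState.DEACTIVATED:    {KeyState.DESTROYED, KeyState.COMPROMISED},
        KeyState.COMPROMISED:    {KeyState.DESTROYED},
        KeyState.DESTROYED:      set(),
    }

    def transition(current: KeyState, new: KeyState) -> KeyState:
        if new not in ALLOWED[current]:
            raise ValueError(f"illegal transition {current.name} -> {new.name}")
        return new

    state = KeyState.PRE_ACTIVATION
    state = transition(state, KeyState.ACTIVE)
    state = transition(state, KeyState.DEACTIVATED)
    print(state.name)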

SISWG is sponsored jointly by the IEEE Information Assurance Standards Committee (IASC) [Jack Cole, Chair], and by IEEE Storage Systems Standards Committee (SSSC) [Curtis Anderson, Chair].

Current SISWG Officers include [2009-02] Matt Ball (Chair), Eric Hibbard (Vice Chair), and Walt Hubis (Secretary). Past Officers have included James Hughes (Chair), Serge Plotkin (Vice Chair), and Fabio Maino (Secretary). Hosting for SISWG.NET is provided by GoDaddy and is paid for by NetApp. Site maintenance is provided by Sun Microsystems.

The IEEE P1619.3 Task Group has several Standing Subcommittees and ad hoc groups [as of 2009-02]:

  • Project Management Standing Subcommittee (PM), which defines the schedule and other project management aspects - Walt Hubis (LSI Logic)
  • Architecture Standing Subcommittee (ARCH), which defines the Model diagram and the interactions of components - Walt Hubis (LSI Logic)
  • Operation and Objects Adhoc Group (OO), which defines the Application Programmers Interface (API) - Landon Noll (Cisco) and Subhash Sankuratripati (NetApp)
  • Use Cases Adhoc Group (USE), which is developing a set of use cases that represents the interests of the P1619.3 members - Luther Martin (Voltage Security)
  • Messaging and Transport Adhoc Group (MSG), which will deliver a proposal for the messaging format (e.g., XML, tag-length-value, binary) and the transport (e.g., TLS, SSL, IPSec) - Subhash Sankuratripati (NetApp)
  • Reference Implementation Standing Subcommittee (RISS), which will create an open-source reference implementation to assist interoperability testing
  • Compliance Standing Subcommittee (CpSS), which defines requirements for compliance to P1619.3 and testing for compliance - Bob Griffin
  • Conformance Standing Subcommittee (CfSS), which defines a protection profile, possibly using Common Criteria - Eric Hibbard
  • Wine Tasting Standing Subcommittee (WTSS) - Landon Noll

[Draft] PAR for a Revision to Existing Standard 1619-2007. Working Group: Security in Storage Working Group (C/IA/SIS-WG). As proposed: Title: Standard for Cryptographic Protection of Data on Block-Oriented Storage Devices. Scope of Proposed Standard: This standard specifies the XTS cryptographic mode of operation for the AES block cipher and an XML-based key archive format for block-oriented storage devices. Purpose of Proposed Standard: The purpose of this standard is to expand the XTS cryptographic mode while maintaining backwards compatibility with existing implementations that are compliant with IEEE Std 1619-2007... [Unchanged] Need for the Project: "The XTS-AES cryptographic mode of operation was submitted to NIST for consideration as an approved mode of operation under FIPS 140. A number of technical issues were raised as a result of the NIST review. This project will examine the NIST review and produce a revised standard based on the feedback from the NIST public comment period."
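
For orientation, here is a minimal XTS-AES encryption/decryption round trip using the third-party 'cryptography' package. The tweak handling (a sector number packed into a 16-byte little-endian value) is a simplification chosen for illustration, not a statement of the IEEE 1619 requirements or of the XML key archive format.

    # Minimal XTS-AES sketch; requires the third-party 'cryptography' package.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(64)                       # AES-256-XTS uses two 256-bit keys
    sector_number = 42
    tweak = sector_number.to_bytes(16, "little")

    plaintext = b"example sector payload - must be at least one AES block long"

    encryptor = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
    ciphertext = encryptor.update(plaintext) + encryptor.finalize()

    decryptor = Cipher(algorithms.AES(key), modes.XTS(tweak)).decryptor()
    assert decryptor.update(ciphertext) + decryptor.finalize() == plaintext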



IEEE ICSG Privilege Management Protocols (PMP) Working Group

"There is a need to improve the foundational mechanisms we use in communication protocols to establish security relationships between devices. New mechanisms are required to efficiently authenticate devices and determine "who can do what". Public key cryptography, X.509 certificates and XML security mechanisms are already available to provide some solutions to this problem area. These existing technologies can be leveraged, but are not themselves adequate or efficient solutions when complex privileges need to be managed or when the target applications are embedded devices and wireless systems."

"The Privilege Management Protocols WG will focus on the development of protocols for efficient authentication and the secure determination of 'who can do what'. The 'who' is a cryptographic based identity that supports authentication and key establishment. The 'what' consists of the manageable attributes of a system. The enforcement decisions are based on policy rules that define the relationships of entities to the manageable attributes. The target applications are embedded devices and wireless systems that require efficient and compact implementations...

The group will develop proposals for this problem area in phases that will include, but are not limited to:

  • Cryptographic protocols to support device authentication and key establishment. Existing standards based cryptographic algorithms will be used. A 'key centric' approach will be pursued that uses public-keys as the primary identification mechanism.

  • The definition of schema mechanisms and encodings for efficient management attribute representation. An extensible encoding format will be selected or defined that supports both a canonical human readable and efficient machine readable encoding. Wherever possible semantic hints will be supported to allow the meaning of the attributes and associated policies to be readily understood by humans.

  • Policy statements will describe how entities relate to their capabilities. The policy statements will be based on the same or similar definitions as the attribute encodings[...]"
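
The PMP Working Group's actual protocols and encodings are not reproduced in the excerpt above, so the following is only a toy illustration of the "key centric" idea: the 'who' is a public key (identified here by a SHA-256 fingerprint) and the 'what' is a set of manageable attributes granted by policy rules. The attribute strings and policy structure are invented; the sketch uses the third-party 'cryptography' package.

    # Toy illustration of key-centric "who can do what"; not a PMP WG encoding.
    import hashlib
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives import serialization

    def fingerprint(public_key) -> str:
        raw = public_key.public_bytes(
            serialization.Encoding.X962,
            serialization.PublicFormat.UncompressedPoint)
        return hashlib.sha256(raw).hexdigest()

    device_key = ec.generate_private_key(ec.SECP256R1())
    device_id = fingerprint(device_key.public_key())

    # Policy rules: which manageable attributes a given key identity may touch.
    policy = {device_id: {"firmware.update", "radio.txpower.read"}}

    def authorize(requester_id: str, attribute: str) -> bool:
        return attribute in policy.get(requester_id, set())

    print(authorize(device_id, "firmware.update"))      # True
    print(authorize(device_id, "radio.txpower.write"))  # False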

About ICSG: "The IEEE-SA Industry Connections (IC) program helps incubate new standards and related services by helping like-minded organizations come together in a quick and cost-effective manner to hone and refine their thinking on rapidly changing technologies, and determine appropriate next steps. This program offers an efficient, economical, safe harbor environment for building consensus and producing shared results...

ICSG is a global group of computer security entities that have come together to pool experience and resources in combating the systematic and rapid rise in computer security threats.

In the past few years, attackers have shifted away from mass distribution of a small number of threats to micro distribution of millions of distinct threats. ICSG was established, under the umbrella of the IEEE-SA Industry Connections program, out of the desire by many in the security industry to more efficiently address these growing threats in a coordinated fashion...."



IETF Provisioning of Symmetric Keys (KEYPROV) Working Group

A proposed charter and approved charter for the IETF Provisioning of Symmetric Keys (KEYPROV) Working Group were published in January 2007.

IETF KEYPROV Working Group Description: "Current developments in deployment of Shared Symmetric Key (SSK) tokens have highlighted the need for a standard protocol for provisioning symmetric keys. The need for provisioning protocols in PKI architectures has been recognized for some time. Although the existence and architecture of these protocols provide a feasibility proof for the KEYPROV work, assumptions built into these protocols mean that it is not possible to apply them to symmetric key architectures without substantial modification. In particular, the ability to provision symmetric keys and associated attributes dynamically to already issued devices such as cell phones and USB drives is highly desirable. The working group will develop the necessary protocols and data formats required to support provisioning and management of symmetric key authentication tokens, both proprietary and standards based."

The scope of the working group is "to define protocols and data formats necessary for provisioning of symmetric cryptographic keys and associated attributes. The group is considering use cases related to use of Shared Symmetric Key Tokens. Other use cases may be considered for the purpose of avoiding unnecessary restrictions in the design and of ensuring the potential for future extensibility..."

As of 2009-02, the IETF KEYPROV Working Group was co-chaired by Phillip Hallam-Baker (Verisign) and Hannes Tschofenig (NSN - FI/Espoo) within the IETF Security Area, directed by Tim Polk (NIST) and Pasi Eronen (Nokia).

KEYPROV Working Group specification short list (latest versions as of 2010-05-26):

  • Dynamic Symmetric Key Provisioning Protocol (DSKPP). Produced by members of the IETF KEYPROV Working Group, part of the IETF Security Area. Standards Track Internet Draft, 'draft-ietf-keyprov-dskpp-11.txt'. May 13, 2010; expires November 14, 2010. 96 pages. See the diff with version -10 and the HTML format. Edited by Andrea Doherty (RSA, The Security Division of EMC), Mingliang Pei (Verisign, Inc), Salah Machani (Diversinet Corp), and Magnus Nystrom (Microsoft). See the Second Last Call Review announced by IESG on May 26, 2010.

    DSKPP is a client-server protocol for initialization (and configuration) of symmetric keys to locally and remotely accessible cryptographic modules. The protocol can be run with or without private-key capabilities in the cryptographic modules, and with or without an established public-key infrastructure. Two variations of the protocol support multiple usage scenarios. With the four-pass variant, keys are mutually generated by the provisioning server and cryptographic module; provisioned keys are not transferred over-the-wire or over-the-air. The two-pass variant enables secure and efficient download and installation of pre-generated symmetric keys to a cryptographic module.

    Background: "Symmetric key based cryptographic systems (e.g., those providing authentication mechanisms such as one-time passwords and challenge-response) offer performance and operational advantages over public key schemes. Such use requires a mechanism for provisioning of symmetric keys providing equivalent functionality to mechanisms such as CMP (RFC 4210) and CMC (RFC 5272) in a Public Key Infrastructure. Traditionally, cryptographic modules have been provisioned with keys during device manufacturing, and the keys have been imported to the cryptographic server using, e.g., a CD-ROM disc shipped with the devices. Some vendors also have proprietary provisioning protocols, which often have not been publicly documented; CT-KIP is one exception, per RFC 4758."

    Protocol Entities: A DSKPP provisioning transaction has three entities. (1) Server: The DSKPP provisioning server. (2) Cryptographic Module: The cryptographic module to which the symmetric keys are to be provisioned, e.g., an authentication token. (3) Client: The DSKPP client which manages communication between the cryptographic module and the key provisioning server. The principal syntax is XML and it is layered on a transport mechanism such as HTTP. While it is highly desirable for the entire communication between the DSKPP client and server to be protected by means of a transport providing confidentiality and integrity protection such as HTTP over Transport Layer Security (TLS), such protection is not sufficient to protect the exchange of the symmetric key data between the server and the cryptographic module and the DSKPP protocol is designed to permit implementations that satisfy this requirement. The server only communicates to the client. As far as the server is concerned, the client and cryptographic module may be considered to be a single entity. From a client-side security perspective, however, the client and the cryptographic module are separate logical entities and may in some implementations be separate physical entities as well. It is assumed that a device will host an application layered above the cryptographic module, and this application will manage communication between the DSKPP client and cryptographic module. The manner in which the communicating application will transfer DSKPP protocol elements to and from the cryptographic module is transparent to the DSKPP server..."

  • Symmetric Key Package Content Type. IETF KEYPROV Working Group, part of the IETF Security Area. Standards Track Internet Draft, 'draft-ietf-keyprov-symmetrickeyformat-07.txt'. April 26, 2010, expires October 26, 2010. Publication announced April 28, 2010 (Second Last Call). Extent: 27 pages. See the Diff with version -07 and the HTML format. Edited by Sean Turner (IECA, Inc) and Russ Housley (Vigil Security, LLC). "This document defines the symmetric key format content type. It is transport independent. The Cryptographic Message Syntax (RFC 5652) can be used to digitally sign, digest, authenticate, or encrypt this content type. The use cases that motivated this work are elaborated in "Portable Symmetric Key Container (PSKC)". Symmetric Key Package Content Type: The symmetric key package content type is used to transfer one or more plaintext symmetric keys from one party to another. A symmetric key package may be encapsulated in one or more CMS protecting content types. This content type must be Distinguished Encoding Rules (DER) encoded, per X.690... Several attributes are defined to assist those using the symmetric key package defined in this document as part of a Portable Symmetric Key Container protocol (PSKC). The attributes fall into three categories. The first category includes attributes that apply to a key package, and these attributes will generally appear in sKeyPkgAttrs. The second category includes attributes that apply to a particular key, and these attributes will generally appear in sKeyAttrs. The third category includes attributes that apply to a key policy. Of the attributes defined next, only the Key Identifier and Algorithm key attributes MUST be included. All other attributes are OPTIONAL. Like PSKC, the Symmetric Key Content Type supports extensibility. Primarily this is accomplished through the definition and inclusion of new attributes, but in some instances where the attribute contains more than one type the ASN.1 "..." extensibility mechanism is employed. A straightforward approach to conversion from XML types to ASN.1 is employed... Section 3.3 presents the Key Policy Attributes: Key policy attributes indicate a policy that can be attached to a key (e.g., Start Date, Expiry Date, Number of Transactions, Key Usage, PIN Policy). Appendix A (ASN.1 Module) provides the normative ASN.1 definitions for the structures described in the specification using ASN.1 as defined in X.680 through X.683, with the Extensible Markup Language (XML) elements and attributes as defined in PSKC..."

  • Portable Symmetric Key Container (PSKC). IETF KEYPROV Working Group. Standards Track Internet Draft, 'draft-ietf-keyprov-pskc-05.txt'. January 05, 2010, expires July 9, 2010. 62 pages. HTML format. Diff with version -04. Edited by Philip Hoyer (ActivIdentity, Inc), Mingliang Pei (Verisign, Inc), and Salah Machani (Diversinet Corp). The XML namespace URI for Version 1.0 of PSKC is xmlns:pskc="urn:ietf:params:xml:ns:keyprov:pskc". Appendix A: Use Cases; Appendix B: Requirements; Section 11: XML schema for PSKC. Changes from -04 include: updated and corrected examples, updated old reference to URI, adopted and corrected editorial notes from WG Chair (Hannes); no changes were made to the XML schema. "With increasing use of symmetric key based systems, such as encryption of data at rest, or systems used for strong authentication, such as those based on one-time-password (OTP) and challenge response (CR) mechanisms, there is a need for vendor interoperability and a standard format for importing and exporting (provisioning) symmetric keys. For instance, traditionally, vendors of authentication servers and service providers have used proprietary formats for importing and exporting these keys into their systems, thus making it hard to use tokens from two different vendors. This document defines a standardized XML-based key container, called Portable Symmetric Key Container (PSKC), for transporting symmetric keys and key related meta data. The document also specifies the information elements that are required when the symmetric key is utilized for specific purposes, such as the initial counter in the HMAC-Based One-Time Password (HOTP) algorithm. It also requests the creation of an IANA registry for algorithm profiles where algorithms, their meta-data and PSKC transmission profile can be recorded for centralised standardised reference." (A minimal HOTP computation sketch appears after this list.)

  • Additional Portable Symmetric Key Container (PSKC) Algorithm Profiles. IETF KEYPROV Working Group. Informational Internet Draft, 'draft-hoyer-keyprov-pskc-algorithm-profiles-00.txt'. December 24, 2008, expires June 27, 2009. 30 pages. HTML format. Edited by Philip Hoyer (ActivIdentity, Inc), Mingliang Pei (Verisign, Inc), Salah Machani (Diversinet Corp), and Andrea Doherty (RSA, The Security Division of EMC). "The Portable Symmetric Key Container (PSKC) contains a number of XML elements and XML attributes carrying keys and related information. Not all algorithms, however, are able to use all elements, and for other algorithms certain information is mandatory. This led to the introduction of PSKC algorithm profiles that provide further description about the mandatory and optional information elements and their semantics, including extensions that may be needed. The main PSKC specification defines two PSKC algorithm profiles, namely "HOTP" and "PIN". This document extends the initial set and specifies nine further algorithm profiles for PSKC, namely: OCRA (OATH Challenge Response Algorithm); TOTP (OATH Time based OTP); SecurID-AES; SecurID-AES-Counter; SecurID-ALGOR; ActivIdentity-3DES; ActivIdentity-AES; ActivIdentity-DES; ActivIdentity-EVENT..."
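
PSKC's role is to carry the shared secret and the initial counter between systems; the HOTP value itself is then computed by the token and the validation server. The sketch below is a minimal HOTP computation as described in RFC 4226, using only the Python standard library; the secret and counter values are the well-known RFC 4226 test vector, not data from any PSKC document.

    # Minimal HOTP (RFC 4226) computation, standard library only.
    import hmac, hashlib, struct

    def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
        msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
        digest = hmac.new(secret, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                          # dynamic truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # RFC 4226 test vector: secret "12345678901234567890", counter 0 -> "755224"
    print(hotp(b"12345678901234567890", 0))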



ISO/IEC 11770: Key Management

[Incomplete. TBD]

ISO/IEC 11770 is concerned with the management of cryptographic keys.

  • ISO/IEC 11770-1:1996 Information technology — Security techniques — Key management — Part 1: Framework. 21 pages. "Defines a general model of key management that is independent of the use of any particular cryptographic algorithm. Identifies the objective of key management, basic concepts and key management services."

  • ISO/IEC 11770-2:2008 Information technology — Security techniques — Key management — Part 2: Mechanisms using symmetric techniques. 27 pages. "ISO/IEC 11770-2:2008 specifies a series of 13 mechanisms for establishing shared secret keys using symmetric cryptography. These mechanisms address three different environments for the establishment of shared secret keys: point-to-point key establishment schemes, mechanisms using a Key Distribution Centre (KDC), and techniques that use a Key Translation Centre (KTC). ISO/IEC 11770-2:2008 describes the content of messages which carry keying material or are necessary to set up the conditions under which the keying material can be established. This second edition is a technically revised version of the first edition: Mechanism 12 has been modified to address identified security shortcomings..."

    [From IHS (Part 2 Scope)]: "The purpose of key management is to provide procedures for handling cryptographic keying material to be used in symmetric or asymmetric cryptographic algorithms according to the security policy in force. This part of ISO/IEC 11770 defines key establishment mechanisms using symmetric cryptographic techniques. Key establishment mechanisms using symmetric cryptographic techniques can be derived from the entity authentication mechanisms of ISO/IEC 9798-2 and ISO/IEC 9798-4 by specifying the use of text fields available in those mechanisms. Other key establishment mechanisms exist for specific environments; see, for example, ISO 8732. Besides key establishment, the goals of such a mechanism might include unilateral or mutual authentication of the communicating entities. Further goals might be the verification of the integrity of the established key, or key confirmation. This part of ISO/IEC 11770 addresses three environments for the establishment of keys: Point-to-Point, Key Distribution Centre (KDC), and Key Translation Centre (KTC). This part of ISO/IEC 11770 describes the required content of messages which carry keying material or are necessary to set up the conditions under which the keying material can be established. It does not indicate other information which can be contained in the messages or specify other messages such as error messages. The explicit format of messages is not within the scope of this part of ISO/IEC 11770..."

  • ISO/IEC 11770-3:2008 Information technology — Security techniques — Key management — Part 3: Mechanisms Using Asymmetric Techniques. Revises ISO/IEC 11770-3:1999. "ISO/IEC 11770-3:2008 defines key management mechanisms based on asymmetric cryptographic techniques. It specifically addresses the use of asymmetric techniques to achieve the following goals. (1) Establish a shared secret key for a symmetric cryptographic technique between two entities A and B by key agreement. In a secret key agreement mechanism, the secret key is the result of a data exchange between the two entities A and B. Neither of them can predetermine the value of the shared secret key. (2) Establish a shared secret key for a symmetric cryptographic technique between two entities A and B by key transport. In a secret key transport mechanism, the secret key is chosen by one entity A and is transferred to another entity B, suitably protected by asymmetric techniques. (3) Make an entity's public key available to other entities by key transport. In a public key transport mechanism, the public key of entity A must be transferred to other entities in an authenticated way, but not requiring secrecy. Some of the mechanisms of ISO/IEC 11770-3:2008 are based on the corresponding authentication mechanisms in ISO/IEC 9798-3...

    ISO/IEC 11770-3:2008 does not cover aspects of key management such as key lifecycle management, mechanisms to generate or validate asymmetric key pairs, mechanisms to store, archive, delete, destroy, etc. keys. While ISO/IEC 11770-3:2008 does not explicitly cover the distribution of an entity's private key (of an asymmetric key pair) from a trusted third party to a requesting entity, the key transport mechanisms described can be used to achieve this. A private key can in all cases be distributed with these mechanisms where an existing, non-compromised key already exists. However, in practice the distribution of private keys is usually a manual process that relies on technological means like smart cards, etc. ISO/IEC 11770-3:2008 does not cover the implementations of the transformations used in the key management mechanisms..." (A minimal sketch illustrating goal (1), key agreement, appears after this list.)

  • ISO/IEC 11770-4:2006 Information technology — Security techniques — Key management — Part 4: Mechanisms based on weak secrets. "ISO/IEC 11770-4:2006 defines key establishment mechanisms based on weak secrets, i.e., secrets that can be readily memorized by a human, and hence secrets that will be chosen from a relatively small set of possibilities. It specifies cryptographic techniques specifically designed to establish one or more secret keys based on a weak secret derived from a memorized password, while preventing off-line brute-force attacks associated with the weak secret. More specifically, these mechanisms are designed to achieve one of the following three goals.

    Balanced password-authenticated key agreement: Establish one or more shared secret keys between two entities that share a common weak secret. In a balanced password-authenticated key agreement mechanism, the shared secret keys are the result of a data exchange between the two entities, the shared secret keys are established if and only if the two entities have used the same weak secret, and neither of the two entities can predetermine the values of the shared secret keys.

    Augmented password-authenticated key agreement: Establish one or more shared secret keys between two entities A and B, where A has a weak secret and B has verification data derived from a one-way function of A's weak secret. In an augmented password-authenticated key agreement mechanism, the shared secret keys are the result of a data exchange between the two entities, the shared secret keys are established if and only if the two entities have used the weak secret and the corresponding verification data, and neither of the two entities can predetermine the values of the shared secret keys.

    Password-authenticated key retrieval: Establish one or more secret keys for an entity, A, associated with another entity, B, where A has a weak secret and B has a strong secret associated with A's weak secret. In an authenticated key retrieval mechanism, the secret keys, retrievable by A (not necessarily derivable by B), are the result of a data exchange between the two entities, and the secret keys are established if and only if the two entities have used the weak secret and the associated strong secret. However, although B's strong secret is associated with A's weak secret, the strong secret does not (in itself) contain sufficient information to permit either the weak secret or the secret keys established in the mechanism to be determined.
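
As referenced in the ISO/IEC 11770-3 entry above, goal (1) is key agreement: two entities A and B derive a shared secret that neither could predetermine. The sketch below illustrates that idea using X25519 plus an HKDF step; it is offered only as a convenient, well-known primitive for illustration, not as one of the standard's normative mechanisms, and it requires the third-party 'cryptography' package.

    # Illustration of key agreement (neither side predetermines the key).
    # Not a normative ISO/IEC 11770-3 mechanism.
    from cryptography.hazmat.primitives.asymmetric import x25519
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives import hashes

    a_private = x25519.X25519PrivateKey.generate()
    b_private = x25519.X25519PrivateKey.generate()

    # Each side combines its own private key with the other's public key.
    a_shared = a_private.exchange(b_private.public_key())
    b_shared = b_private.exchange(a_private.public_key())
    assert a_shared == b_shared

    # Derive the actual symmetric working key from the raw shared secret.
    kdf = HKDF(algorithm=hashes.SHA256(), length=32, salt=None, info=b"demo key agreement")
    working_key = kdf.derive(a_shared)
    print(working_key.hex())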

ISO 11770 References [Incomplete]:

  • ISO 11770-2 information from IHS
  • "ISO 11770 Part 1: A Key Management Framework." By Larry Hofer. Presentation to IEEE P1619.3. April 11, 2007. "ISO 11770 Part 1 defines a framework to: Identify the Objective of Key management; Describe a general model to base key management mechanisms on; Defines basic concepts; Defines key management services; Identifies characteristics of mechanisms; Define requirements for the key material management in its lifecycle; Describes a framework for the key material management in its lifecycle. Other parts of ISO 11770 (2-4) define general mechanisms for using symmetric techniques, using asymmetric techniques (based on weak secrets)..."
  • Note: ISO/IEC 19772:2009 Information technology — Security techniques — Authenticated Encryption. "ISO/IEC 19772:2009 specifies six methods for authenticated encryption, i.e., defined ways of processing a data string with the following security objectives: data confidentiality, i.e., protection against unauthorized disclosure of data; data integrity, i.e., protection that enables the recipient of data to verify that it has not been modified; and data origin authentication, i.e., protection that enables the recipient of data to verify the identity of the data originator. All six methods specified in ISO/IEC 19772:2009 require the originator and the recipient of the protected data to share a secret key. Key management is outside the scope for ISO/IEC 19772:2009; key management techniques are defined in ISO/IEC 11770..."


KeyGen2: Key Provisioning/Management Standards Proposal

Anders Rundgren (communication 2009-02-24): "KeyGen2 is an individual standardization initiative with the goal of creating a scheme allowing any issuer of authentication, signature, and encryption keys to use mobile phones as the primary vehicle. The long-term target of the effort is combining this system with complementary mobile phone technologies such as NFC (Near Field Communication) which can make phones work as "better smart cards". KeyGen2 has mainly been developed by former RSA engineer Anders Rundgren. KeyGen2 is currently in early beta and can be tested at the public site: http://keycenter.webpki.org. KeyGen2 is meant to be deployed as Open Source, currently with Google's Android as the initial target: Android Keystore V2. A forward-looking description of this project in the making could be something like using Android as the vehicle that will eventually thwart the current userid/password explosion on the Internet but there are several other useful, more short-term and down-to-earth targets as well, including OTP (One Time Password) generation and secure key storage. KeyGen2 is based on XML Security and Internet-browser extensions. Unlike most other efforts in this area, KeyGen2 is targeting consumers using phones for on-line banking, e-government services, and eventually, through the use of "virtual" credit-cards, also for payments..."

Subject Key Attestations in KeyGen2: "For on-line (remote) provisioning of keys to Security Elements (SEs), like Smart Cards, there is a wish by issuers to be able to securely verify that the public key part they receive for inclusion in a certificate, indeed was (together with its private counterpart), generated inside of a 'genuine' SE. In fact, for compliance with security standards like FIPS 140-2, such a facility would be a prerequisite. This document shows how key-attestations performed by an SE equipped with an embedded private key and certificate for credential 'bootstrapping' have been integrated in the KeyGen2 protocol. By 'piggybacking' secret data on attested asymmetric key-pairs, the described key attestation mechanism becomes equally applicable to downloadable symmetric keys. Below is a sample featuring a single key which is used for illustrating the key-attestation support [...] On the following pages there is authentic Java code showing how KeyGen2 key attestation signatures can be validated..."
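
The document notes that Java validation code exists for the actual KeyGen2 attestation signatures; the sketch below only illustrates the underlying idea in Python: the Security Element signs the freshly generated key's public bytes with its embedded attestation key, and the issuer verifies that signature against the attestation certificate's public key. It is not the KeyGen2 message format; curve choice and key names are assumptions, and the third-party 'cryptography' package is required.

    # Sketch of the key-attestation idea only, not the KeyGen2 wire format.
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives import hashes, serialization

    # Embedded "bootstrapping" attestation key (in reality burned into the SE).
    attestation_key = ec.generate_private_key(ec.SECP256R1())

    # Freshly generated user key pair whose public half must be attested.
    user_key = ec.generate_private_key(ec.SECP256R1())
    user_public_der = user_key.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo)

    # SE side: sign the new public key with the attestation key.
    attestation_signature = attestation_key.sign(user_public_der, ec.ECDSA(hashes.SHA256()))

    # Issuer side: verify using the attestation public key (from the SE certificate).
    attestation_key.public_key().verify(
        attestation_signature, user_public_der, ec.ECDSA(hashes.SHA256()))
    print("attestation signature verified")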

"What do KeyGen2 and KMIP have in common? Extremely little as far as I can see. KMIP appears to be a client ->server protocol; KeyGen2 is rather a web-centric server -> client protocol. The main difference between these schemes is that they build on diametrically opposed security models which I believe is just fine because they do not [at all] address the same application space in spite of the fact that they indeed both deal with Key Managent of Asymmetric and Symmetric Keys as well as other Objects. The closest description of what KMIP seems to be is a remote version of a cryptographic API. KeyGen2 is more like a replacement for WebKit's 'keygen' (used by Nokia, Google, and Apple), which was recently cast out from HTML5 as well as to Microsoft's CertEnroll which has never been suggested as a standard..."

From "Case Study of XML Based PKI Management Protocols." By Tomas Gustavsson (PrimeKey Solutions AB). Also in PDF format . Slide 14. "Keygen2: An effort creating a standard for browser-based on-line asymmetric provisioning of PKI user-certificates and keys. In addition, KeyGen2 supports an option for 'piggybacking' symmetric keys like OTP (One Time Password) seeds on PKI. Using a generic credential extension mechanism, KeyGen2 can support things like Microsoft InfoCards and downloadable code associated with a specific key. One of the core targets for KeyGen2 are mobile phones equipped with TPMs (Trusted Platform Modules), which properly applied, can securely emulate any number of smart-cards. Although TPMs definitely is not a standard utility today, it is anticipated in the future..."

KeyGen2 References:

  • Universal Provisioning. "The purpose of this site is to provide developers and other interested parties with documentation, source code, binaries, and on-line test facilities for a standards proposal called KeyGen2, also known as "Universal Provisioning". What is that? Well, quoting a typical marketing department the message would probably go something like: "The ability to securely and conveniently provision and manage all the authentication, signature and encryption keys a consumer, citizen, or employee may ever need — Any organization, any technology." For engineers it may be more appropriate just to list the core ingredients in this soup: XML, PKI, OTP, PIN, PUK, Microsoft CardSpace, TPM, MIME, HTTP, and SQL."
  • KeyGen2 - Key Provisioning/Management Standards Proposal. Posting to W3C list 'public-xmlsec'. February 18, 2009.
  • "OASIS KMIP [and KeyGen2] for Mobile Phones." Posting by Anders Rundgren to the ICF list.
  • Android Keystore V2
  • "Two-factor Authentication in the Enterprise.". Vision paper. 5 pages.
  • KeyGen2: 'Universal Provisioning'. 9 pages.
  • Using SQL Databases as Universal Keystores. By Anders Rundgren, July 2008. "The following is a short extract showing how you can use an embedded SQL database in a mobile phone as a sophisticated user keystore. How keys are protected varies but would in a perfect implementation be performed by a TPM or similar... the code is intended to be augmented by the KeyGen2 universal provisioning system which currently supports: PKI; Symmetric keys including OTP (One Time Password) 'seeds'; Issuer-specific PUKs (Personal Unlock Keys) and associated policies; Issuer-specific PIN policies as well as preset PINs; Property bags associated with provisioned keys; Platform 'negotiation' allowing for example controlled migration from RSA to ECC keys; Downloaded algorithm code for use with a provisioned key; Information Cards — formerly Microsoft CardSpace; A generic extension mechanism allowing data and corresponding applications to be added (and discovered during platform negotiation) without changing the provisioning protocol or its implementation — this is enabled by the use of a URI-based registry scheme and opaque extension 'blobs'... To facilitate life-cycle handling, provisioned objects can be remotely managed in a secure fashion. To simplify the integration with the web, KeyGen2 is designed as an extension to Internet browsers..." (A toy keystore sketch appears after this list.)
  • Attested Key-Pair Generation with 'Key Escrow'. See also the text version. [IPR Declaration: "This specification is hereby put in the public domain. To the author's knowledge it does not infringe on any existing patent."]
  • KeyGen2 - Yet Another PKI Provisioning Protocol
  • Contact: Anders Rundgren. Tel: +46 54 96 535
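
The embedded SQL keystore approach described in the reference above can be pictured with a minimal sketch. The fragment below assumes Python and its built-in sqlite3 module; the table layout, column names, and policy format are hypothetical illustrations and are not taken from the KeyGen2 code.

    # Illustrative sketch only -- not the KeyGen2 reference code. It shows the
    # general idea: an embedded SQL database acting as a key store, one row per
    # provisioned key, plus a 'property bag' table. All names are hypothetical.
    import sqlite3

    conn = sqlite3.connect("keystore.db")
    conn.executescript("""
    CREATE TABLE IF NOT EXISTS keys (
        key_id      INTEGER PRIMARY KEY,
        issuer_uri  TEXT NOT NULL,   -- who provisioned the key
        key_type    TEXT NOT NULL,   -- e.g. 'RSA', 'EC', 'OTP-SEED'
        wrapped_key BLOB NOT NULL,   -- in a real design, wrapped by a TPM or similar
        pin_policy  TEXT             -- issuer-specific PIN policy (JSON, hypothetical)
    );
    CREATE TABLE IF NOT EXISTS properties (  -- property bag associated with a key
        key_id INTEGER REFERENCES keys(key_id),
        name   TEXT NOT NULL,
        value  TEXT NOT NULL
    );
    """)

    # Store an (already wrapped) OTP seed with one associated property.
    cur = conn.execute(
        "INSERT INTO keys (issuer_uri, key_type, wrapped_key, pin_policy) VALUES (?, ?, ?, ?)",
        ("https://issuer.example.com", "OTP-SEED", b"\x00" * 20, '{"min_length": 4}'))
    conn.execute("INSERT INTO properties (key_id, name, value) VALUES (?, ?, ?)",
                 (cur.lastrowid, "friendly-name", "Example OTP credential"))
    conn.commit()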


U.S. National Institute of Standards and Technology (NIST)

U.S. National Institute of Standards and Technology (NIST) publications on security (including encryption and key management) have played a prominent role for many years, especially for government applications. FIPS Publications are issued by NIST after approval by the Secretary of Commerce pursuant to Section 5131 of the Information Technology Reform Act of 1996 (Public Law 104-106) and the Federal Information Security Management Act of 2002 (Public Law 107-347). NIST Special Publications in the 800 series present documents of general interest to the computer security community. The Special Publication 800 series was established in 1990 to provide a separate identity for information technology security publications. This Special Publication 800 series reports on ITL's research, guidelines, and outreach efforts in computer security, and its collaborative activities with industry, government, and academic organizations.

NIST's Cryptographic Key Management (CKM) Project seeks to "improve the overall key management strategies used by the public and private sectors in order to enhance the usability of cryptographic technology, provide scalability across cryptographic technologies, and support a global cryptographic key management infrastructure", as explained in the project FAQ document. See for example the reports and published papers from CKMS Workshops (2009, 2010, 2012) and publications such as: (a) A Framework for Designing Cryptographic Key Management Systems and (b) A Profile for U. S. Federal Cryptographic Key Management Systems (CKMS).

NIST Computer Security Division (CSD) is one of six divisions within NIST's Information Technology Laboratory. The CSD mission is to provide standards and technology to protect information systems against threats to the confidentiality of information, integrity of information and processes, and availability of information and services in order to build trust and confidence in Information Technology (IT) systems... CSD collaborates with a number of national and international agencies and standards bodies to develop secure, interoperable security standards. Federal agency collaborators include the Department of Energy, the Department of State, the National Security Agency (NSA), and the Communications Security Establishment of Canada, while national and international standards bodies include the Accredited Standards Committee (ASC) X9 (financial industry standards), the International Organization for Standardization (ISO), the Institute of Electrical and Electronics Engineers (IEEE) and the Internet Engineering Task Force (IETF). Industry collaborators include BC5 Technologies, Certicom, Entrust Technologies, Hewlett Packard, InfoGard, Microsoft, NTRU, Pitney Bowes, RSA Security, Spyrus, and Wells Fargo.

[2013-08-16] "A Framework for Designing Cryptographic Key Management Systems." By Elaine Barker, Miles Smid, Dennis Branstad, and Santosh Chokhani (National Institute of Standards and Technology). NIST Special Publication 800-130. August 2013. 120 pages. "This Framework for Designing Cryptographic Key Management Systems (CKMS) is a description of the topics to be considered and the documentation requirements (henceforth referred to as requirements) to be addressed when designing a CKMS. The CKMS designer satisfies the requirements by selecting the policies, procedures, components (hardware, software, and firmware), and devices (groups of components) to be incorporated into the CKMS, and then specifying how these items are employed to meet the requirements of this Framework. A CKMS consists of policies, procedures, components and devices that are used to protect, manage and distribute cryptographic keys and certain specific information, called (associated) metadata herein. A CKMS includes all devices or sub-systems that can access an unencrypted key or its metadata. Encrypted keys and their cryptographically protected (bound) metadata can be handled by computers and transmitted through communications systems and stored in media that are not considered to be part of a CKMS. This CKMS Framework provides design documentation requirements for any CKMS. In other words, it describes what needs to be documented in the CKMS design. The goal of the Framework is to guide the CKMS designer in creating a complete uniform specification of the CKMS that can be used to build, procure, and evaluate the desired CKMS." See also the presentation A Draft Framework for Designing Cryptographic Key Management Systems (DSP 800-130) given at the NIST Cryptographic Key Management Workshop 2012, September 10, 2012. [cache]

NIST Special Publication 800-57 provides cryptographic key management guidance. General Guidance, Part 1 of the Recommendation for Key Management, contains basic key management guidance for users, developers and system managers regarding the "best practices" associated with the generation and use of the various classes of cryptographic keying material. General Organization and Management Requirements, Part 2 of the Recommendation, provides a framework and general guidance to support establishing cryptographic key management within an organization, and a basis for satisfying the key management aspects of statutory and policy-based security planning requirements for the [U.S.] Federal government. Part 3, Application-Specific Key Management Guidance, provides guidance when using the cryptographic features of current systems. From the 'Introduction':

"The proper management of cryptographic keys is essential to the effective use of cryptography for security. Keys are analogous to the combination of a safe. If a safe combination becomes known to an adversary, the strongest safe provides no security against penetration. Similarly, poor key management may easily compromise strong algorithms. Ultimately, the security of information protected by cryptography directly depends on the strength of the keys, the effectiveness of mechanisms and protocols associated with keys, and the protection afforded to the keys. All keys need to be protected against modification, and secret and private keys need to be protected against unauthorized disclosure. Key management provides the foundation for the secure generation, storage, distribution, and destruction of keys. Users and developers are presented with many choices in their use of cryptographic mechanisms. Inappropriate choices may result in an illusion of security, but little or no real security for the protocol or application. This recommendation (i.e., SP 800-57) provides background information and establishes frameworks to support appropriate decisions when selecting and using cryptographic mechanisms."

On October 24, 2008, NIST announced the release of a draft of Part 3 of Special Publication 800-57, Recommendation for Key Management: Application-Specific Key Management Guidance. This Recommendation provides guidance when using the cryptographic features of current systems. It is intended to help system administrators and system installers adequately secure applications based on product availability and organizational needs, and to support organizational decisions about future procurements. The guide also provides information for end users regarding application options left under their control in the normal use of the application. Recommendations are given for a select set of applications, namely: PKI, IPsec, TLS, S/MIME, Kerberos, OTAR, DNSSEC, and Encrypted File Systems. See also Special Publication 800-57 Part 1 and Part 2 below.

Security Requirements for Cryptographic Modules. NIST Draft. Federal Information Processing Standards Publication. FIPS 140-3. Will Supersede FIPS PUB 140-2, 2001-May-25. Published July 13, 2007. Draft FIPS 140-3 is the proposed revision of FIPS 140-2. "The draft specifies five security levels instead of the four found in FIPS 140-2; has a separate section for software security; requires mitigation of non-invasive attacks when validating at higher security levels; introduces the concept of public security parameters; allows the deferral of certain self-tests until specific conditions are met; and strengthens the requirements on user authentication and integrity testing..." [source PDF]

Security Requirements for Cryptographic Modules. Information Technology Laboratory, National Institute of Standards and Technology (NIST). FIPS 140-2. Issued: May 25, 2001. See the reference page for annexes. "This standard specifies the security requirements that will be satisfied by a cryptographic module. The standard provides four increasing, qualitative levels of security intended to cover a wide range of potential applications and environments. The security requirements cover areas related to the secure design and implementation of a cryptographic module. These areas include cryptographic module specification; cryptographic module ports and interfaces; roles, services, and authentication; finite state model; physical security; operational environment; cryptographic key management; electromagnetic interference/electromagnetic compatibility (EMI/EMC); self-tests; design assurance; and mitigation of other attacks... This publication provides a standard that will be used by Federal organizations when these organizations specify that cryptographic-based security systems are to be used to provide protection for sensitive or valuable data. Protection of a cryptographic module within a security system is necessary to maintain the confidentiality and integrity of the information protected by the module."

NIST Crypto References:


OASIS Enterprise Key Management Infrastructure (EKMI) Technical Committee

The OASIS EKMI TC was chartered in December 2006 to "define symmetric key management protocols, including those for (1) Requesting a new or existing symmetric key from a server; (2) Requesting policy information from a server related to caching of keys on the client; (3) Sending a symmetric key to a requestor, based on a request; (4) Sending policy information to a requestor, based on a request; (5) Other protocol pairs as deemed necessary. In addition, the TC set out goals to document use cases, produce test suites, provide guidance on how a symmetric key-management infrastructure may be secured using asymmetric keys, and provide input on how such enterprise key-management infrastructures may be managed, operated and audited..."

The TC work was launched with the contribution of Symmetric Key Services Markup Language from StrongAuth, Inc. as draft proposal for the EKMI protocol, supported by a working implementation of this protocol. The open source StrongKey is a building block in an Enterprise Key Management Infrastructure (EKMI), with the goal of centrally managing symmetric encryption keys. The EKMI TC operates under the Royalty-Free (RF) on Limited Terms Mode of the OASIS IPR Policy.

The motivation for this TC's technical work, as expressed in the Charter 'Statement of Purpose':

"Public Key Infrastructure (PKI) technology has been around for more than a decade, and many companies have adopted it to solve specific problems in the area of public-key cryptography. Public-key cryptography has been embedded in some of the most popular tools — web clients and servers, VPN clients and servers, mail user agents, office productivity tools and many industry-specific applications — and underlies many mission-critical environments today. Additionally, there are many commercial and open-source implementations of PKI software products available in the market today. However, many companies across the world have recognized that PKI by itself, is not a solution.

There is also the perception that most standards in PKI have already been established by ISO and the PKIX (IETF), and most companies are in operations-mode with their PKIs — just using it, and adopting it to other business uses within their organizations. Consequently, there is not much left to architect and design in the PKI community.

Simultaneously, there is a new interest on the part of many companies in the management of symmetric keys used for encrypting sensitive data in their computing infrastructure. While symmetric keys have been traditionally managed by applications doing their own encryption and decryption, there is no architecture or protocol that provides for symmetric key management services across applications, operating systems, databases, etc. While there are many industry standards around protocols for the life-cycle management of asymmetric (or public/private) keys (PKCS10, PKCS7, CRMF, CMS, etc.), there is no standard that describes how applications may request similar life-cycle services for symmetric keys from a server, and how public-key cryptography may be used to provide such services.

Key management needs to be addressed by enterprises in its entirety — for both symmetric and asymmetric keys. While each type of technology will require specific protocols, controls and management disciplines, there is sufficient common ground in the discipline justifying the approach to look at key-management as a whole, rather than in parts..."

In January 2009, the EKMI TC released Symmetric Key Services Markup Language (SKSML) Version 1.0 as a Committee Specification. This specification "defines the first (1.0) version of the Symmetric Key Services Markup Language (SKSML), an XML-based messaging protocol, by which applications executing on computing devices may request and receive symmetric key-management services from centralized key-management servers, securely, over networks. Applications using SKSML are expected to either implement the SKSML protocol, or use a software library — called the Symmetric Key Client Library (SKCL) — that implements this protocol. SKSML messages are transported within a SOAP layer, protected by a Web Services Security (WSS) header and can be used over standard HTTP securely...

SKSML "uses SOAP and XML for encapsulating its requests and responses and can thus, be used on any platform that supports these two underlying protocols. Using a scheme that concatenates unique Domain identifiers (Private Enterprise Numbers issued by the IANA), unique SKS Server identifiers within a domain and unique Key identifiers within an SKS server, SKSML creates Global Key Identifiers (GKID) that can uniquely identify symmetric keys across the internet. SKSML relies on the Web Services Security (WSS) standard 1.0, which in turn supports the use of XML Signature and XML Encryption within the SOAP Header. Relying only the on the WSS profile that uses RSA cryptographic key-pairs and digital certificates, SKSML uses the digital signatures for authenticity and message-integrity, while using RSA-encryption for confidentiality. Using secure key-caching enabled through centrally-defined policies, SKSML supports the request and receipt of KeyCachePolicy elements by clients for the use of symmetric encryption keys even when the client is disconnected from the network and an SKS server. SKSML provides significant flexibility for defining policies on how symmetric encryption keys may be used by client applications. The KeyUsePolicy element allows Security Officers to define which applications may use a specific key, days and times of use, location of use, purpose of use, key-sizes, encryption algorithms, etc..."

EKMI References:


OASIS Key Management Interoperability Protocol (KMIP) Technical Committee

Essential KMIP documents:

On February 12, 2009, Brocade, EMC, HP, IBM, LSI, Seagate, and Thales announced the creation of a jointly developed specification for enterprise key management together with an intent to contribute this technical work to OASIS for standardization. The KMIP TC Charter and Call for Participation was issued on March 04, 2009. The Key Management Interoperability Protocol (KMIP) was initially developed by HP, IBM, RSA, and Thales in the 2007-2008 timeframe to meet the compelling needs of today's enterprise data centre environments; Brocade, LSI, and Seagate later joined the specification development effort. NetApp representatives were named in the KMIP TC proposal.

The KMIP Technical Committee, as chartered, "will develop specification(s) for the interoperability of key management services with key management clients. The specifications will address anticipated customer requirements for key lifecycle management (generation, refresh, distribution, tracking of use, life-cycle policies including states, archive, and destruction), key sharing, and long-term availability of cryptographic objects of all types (public/private keys and certificates, symmetric keys, and other forms of "shared secrets") and related areas."

KMIP "establishes a single, comprehensive protocol for communication between enterprise key management servers and cryptographic clients. By defining a protocol that can be used by any cryptographic client, ranging from a simple automated electric meter to very complex disk-arrays, KMIP enables enterprise key management servers to communicate via a single protocol to all cryptographic clients supporting that protocol. Through vendor support of KMIP, an enterprise will be able to consolidate key management in a single enterprise key management system, reducing operational and infrastructure costs while strengthening operational controls and governance of security policy. KMIP addresses the critical need for a comprehensive key management protocol built into the information infrastructure, so that enterprises can deploy effective unified key management for all their encryption, certificate-based device authentication, digital signature, and other cryptographic capabilities."

The problem addressed by KMIP, according to the published FAQ document, is "primarily that of standardizing communication between encryption systems that need to consume keys and the key management systems that create and manage those keys. Being able to encrypt and retain access to data requires that encryption keys be generated and stored. To date, organizations deploying encryption have not been able to take advantage of interoperability across encryption and the key management systems. By defining a low-level protocol that can be used to request and deliver keys between any key manager and any encryption system, KMIP enables the industry to have any encryption system communicate with any key management system. Through this interoperability, enterprises will be able to deploy a single enterprise key management infrastructure to manage keys for all encryption systems in the enterprise that require symmetric keys, asymmetric key pairs, certificates and other security objects..."

The KMIP protocol supports all reasonable key management system related cryptographic objects. This list currently [version 0.98] includes: (1) Symmetric Keys; (2) Split (multi-part) Keys; (3) Asymmetric Key Pairs and their components; (4) Digital Certificates; (5) Derived Keys; (6) Opaque (non-interpretable) cryptographic objects.
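
For readers who think in code, the managed-object categories listed above can be summarized as a simple enumeration. The sketch below is expository only; it does not use the KMIP specification's normative tags or enumeration values.

    # Expository enumeration of the cryptographic object types named above.
    from enum import Enum, auto

    class ManagedObjectType(Enum):
        SYMMETRIC_KEY = auto()
        SPLIT_KEY = auto()        # multi-part key
        PUBLIC_KEY = auto()       # component of an asymmetric key pair
        PRIVATE_KEY = auto()
        CERTIFICATE = auto()
        DERIVED_KEY = auto()
        OPAQUE_OBJECT = auto()    # non-interpretable cryptographic object

    for t in ManagedObjectType:
        print(t.name)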

The Charter for the OASIS KMIP TC reports that the initial TC goal is to "define an interoperable protocol for standard communication between key management servers, and clients and other actors which can utilize these keys. Secure key management for TPMs [Trusted Platform Modules] and Storage Devices will be addressed. The scope of the keys addressed is enterprise-wide, including a wide range of actors: that is, machine, software, or human participants exercising the protocol within the framework. Actors for KMIP may include: Storage Devices, Networking Devices, Personal devices with embedded storage (e.g., Personal Computers, Handheld Computers, Cell Phones), Users, Applications, Databases, Operating Systems, Input/Output Subsystems, Management Frameworks, Key Management Systems, and Agents..."

Planned deliverables from the OASIS KMIP TC include: (1) Revised KMIP Specification which defines the normative expression of the protocol, including objects, attributes, operations and other elements; (2) Updated KMIP Usage Guide which provides illustrative and explanatory information on implementing the protocol, including authentication profiles, implementation recommendations, conformance guidelines and security considerations; (3) Revised document for KMIP Use Cases and Test Cases which supplies sample use cases for KMIP, test cases for implementing those use cases, and examples of the protocol implementing those test cases; (4) Updated KMIP FAQ Document to provide guidance on what KMIP is, the problems it is intended to address, and other frequently asked questions.

KMIP TC References:


Sun Crypto Key Management System (KMS)

On February 17, 2009, Sun Microsystems announced the release of an open source key management technology, described as "the world's first generic communication protocol between a Key Manager and an encrypting device." The release terms enable partners to adopt this protocol to securely handle encryption keys without additional licensing. The protocol is implemented as a complete toolkit and is downloadable from the OpenSolaris website. According to the announcement:

"Governments, finance, healthcare, retail and other vertical markets need to comply with current regulatory laws that create mandates to protect sensitive stored data. To support these requirements, this protocol is available to customers using the Sun StorageTek KMS 2.0 Key Manager and Sun StorageTek T9840D, T10000A, T10000B Enterprise Drives, as well as Sun StorageTek HP LTO4 drives shipped in Sun libraries. A number of additional partners are developing products based on this protocol, including EMC, whose RSA security division has talked about releasing it as an option on their RKM Key Manager... By releasing the Sun protocol as Open Source, Sun is taking a major step towards unifying [key management] technology. Sun continues to work with partners in the industry and with appropriate standards bodies such as the IEEE 1619.3 Working Group and OASIS to further develop and formalize the interface as an industry standard. RSA is currently developing a solution using this protocol to work with their RKM key manager. IBM's drive division is working on supporting this protocol for their IBM LTO4 drive shipped in Sun Libraries. Additionally, Sun has shared this protocol with numerous other industry partners including computer OEMs, backup application providers, disk array and switch manufacturers..."

The Sun Crypto KMS Agent Toolkit Project Leaders include Nancy Buehmann, Ben Baron, Matt Ball, and Scott Painter.

Industry analysis and comment about Sun's key management technology is cited below.

Sun Crypto KMS References:

  • OpenSolaris Project: Crypto KMS Agent Toolkit
  • "Sun Key Management System." By Sandy Stewart (Engineering Director, Sun Microsystems). Presentation at KMS 2008.
  • Sun announcement 2009-02-17: "Sun Establishes First Open Source Standard for Storage Encryption Solutions. Customers Can Now Consolidate Sun Crypto-Key Management System and Key Manager Solutions Across Multiple Vendors to Avoid Additional Licensing Fees and More Easily Manage All Encrypted Keys"

  • Commentary:

    • "Sun Fights Storage Encryption Battle Against HP, IBM, RSA." By Joseph F. Kovar. From CRN ChannelWeb. February 17, 2009. "Sun Microsystems unveiled its new data encryption key management technology and sent it to the open-source community, thereby challenging a separate industry effort for control of storage encryption technology... KMIP was developed by HP, IBM, RSA and Thales as a joint specification for enterprise key management aimed at simplifying how companies encrypt and safeguard data. Sun's KMS Open Source API and the KMIP APIs are different methods for making it easier for multiple companies to write to the same data storage encryption keys, Polanowski said. And it is a big deal for both camps, he said. 'The major driver of KMIP is IBM,' he said. 'Whoever controls the standard controls the spoils.' Many of the vendors involved in the KMIP security initiative also work with Sun on its KMS API, including HP and IBM with their LTO-4 tape drives as well as RSA, Polanowski said. When asked why Sun is not a part of KMIP, Polanowski responded by saying one could also ask why KMIP didn't join Sun's efforts. 'We developed our API,' he said. 'Joining KMIP would require us to revamp our efforts'."

    • "Sun Jumbles Key Management Picture." By Beth Pariseau. From SearchStorage.com. February 18, 2009. "Sun Microsystems Inc. has released an open-source protocol for enterprise encryption key management, a week after a consortium led by other major vendors submitted a standards protocol to OASIS. Sun's protocol has been part of its self-encrypting tape drives for more than a year. Company executives say their protocol is more advanced than the [KMIP] specification submitted to OASIS by a group led by EMC Corp./RSA, Hewlett-Packard Co., IBM Corp. and Thales Group. A Sun spokesperson described the OASIS submission as a 'low-level binary protocol for communication, rather than [the] more advanced XML solution used in the latest OASIS and current IEEE 1619.3 discussions.' The IEEE has also drafted its own key management standard, which it released early last year.... Robert Griffin, director of solution design for the Data Security Group at RSA, The Security Division of EMC, said representatives from the consortium and Sun will work together to try and blend the two proposed standards. Because they work at different levels, it might be possible to 'nest' one into the other, he said... Sandy Stewart, engineering director for Sun's Key Management System, said the timing of that invitation was the problem. The consortium first got together more than a year ago, but Sun claims it was only in the last month and a half that it was approached by the other vendors to participate. "To be frank, this was sprung on us at the last minute and we're still going over the details," Stewart said. But he said there have already been meetings between the vendors this week, and Sun plans to work with the consortium and OASIS to sort out the protocols..."

    • "Sun Wades Into Key Management Kerfuffle. Encryption Standards Soup Thickens." By Chris Mellor. From The Register. February 17, 2009. "Sun has thrown its open source key management ideas into the key management standards giant brandy glass, offering license-free management that it hopes will become an industry standard. The generic idea everyone is agreed upon is that encrypting devices using keys should be able to interoperate with any key management system using a standard protocol. Suppliers can then compete with their own encrypting devices and key management products, either proprietary or, as in Sun's case, open source... There appear to be two main efforts devoted to producing standard protocol to link encrypting devices and key managers: The IEEE 1619.3 committee, said to be focussed on storage-related encryption, and the OASIS Enterprise Key Management Infrastructure (EKMI) technical committee..."

    • "Sun Offers Open Source Encryption Key Management Protocol." By Lucas Mearian. From InfoWorld. February 17, 2009. "Sun announced today that it is throwing its hat into the standards arena, proposing that its open source key management API be used as a universal way to allow encrypting devices to communicate with key management systems. 'This defines the way a key manager exchanges encryption keys with an encrypted device such as a tape drive or a disk drive,' said Piotr Polanowski, Sun's encryption product manager. 'The market has been pretty fractured when it comes to key management technology and we just want to be able to offer widest availability of that. We believe it benefits our customers, and so it will ultimately benefit us as well.' Sun said its API protocol is currently available to customers using the Sun StorageTek KMS 2.0 Key Manager and StorageTek T9840D, T10000A, T10000B tape drives, as well as Sun's HP LTO4 drives shipped in Sun libraries... Polanowski said Sun's standard initiative is complementary to the KMIP effort, and he noted that the other vendors included Sun in defining their API. 'At this point, we're looking at how our solution fits into the whole framework,' he said. Sun said it will work with standards bodies such as the Institute of Electrical and Electronics Engineers (IEEE) 1619.3 Working Group and OASIS' Enterprise Key Management Infrastructure technical committee to further develop and formalize the interface as an industry standard. Sun said RSA is also now developing a solution using this protocol to work with its RKM key manager. IBM's drive division is working on supporting this protocol in its IBM LTO4 drive shipped in Sun Libraries. Additionally, Sun has shared the protocol with other industry partners, including computer OEMs, backup application providers, and disk array and switch manufacturers..."

    • Proposal to use Sun Microsystems' Open Source Agent for P1619.3 Reference Implementation. By Matt Ball (Sun Microsystems). Posting to P1619.3 list. February 10, 2009. "Just recently, Sun Microsystems delivered an open-source implementation of a key management client capable of communicating with a Sun Key Management Appliance (KMA). You can see more information at these links: (1) the overview and (2) the toolkit download. Sun is interested in proposing this open source software as the reference implementation for the P1619.3 KM Client, and would like to work towards making both the standard and this reference implementation converge. You can find documentation in the Ultra2AgentToolkit-doc/html folder of the toolkit. This includes WSDL files..."

    • "Sun Releases First Protocol for Encryption Key Interoperability." By Chris Preimesberger. From eWEEK. February 18, 2009. "The first generic communication protocol between a key manager and an encrypting device enables a user of virtually any data encryption system to securely manage keys to the encrypted data across multivendor data centers, avoiding additional licensing fees and lots of hassle, Sun says. Because an increasing number of enterprises are considering encryption as an additional safeguard of their data, it's important to bear in mind that management of the keys that unlock encrypted data is as crucial as safeguarding the family jewels themselves. After all, encrypted data is just as vulnerable as unencrypted data to sophisticated outside threats, if the keys are easy to locate and use. To help make movement of these encryption keys more transferable and secure between systems, Sun Microsystems on Feb. 17 announced the open-source release of the first generic communication protocol between a key manager and an encrypting device. This XML-based protocol enables a user of virtually any current encryption system to securely manage keys to the encrypted data across multivendor data centers, avoiding additional licensing fees and lots of hassle.. The Sun protocol works in the following products: Sun StorageTek KMS 2.0 Key Manager; StorageTek T9840D, T10000A, T10000B enterprise drives; and Hewlett-Packard's StorageTek HP LTO4 drives that are shipped in Sun libraries. A number of additional Sun partners are developing products based on this protocol, including EMC, whose RSA security division is considering releasing it as an option for the RKM (RSA Key Manager)...."

    • "Sun Releases Open Source Standard for Storage Encryption." Staff. From DDJ. February 19, 2009. "Claims world's first open source generic communication protocol between a Key Manager and encrypting device... Governments, finance, healthcare, retail and other vertical markets need to comply with current regulatory laws that create mandates to protect sensitive stored data..."


Trusted Computing Group: Infrastructure Work Group and Key Management Services Subgroup

The Storage Workgroup of the Trusted Computing Group (TCG) formed a Key Management Services Subgroup (KMSS) to provide a specific method to manage keys necessary for use in the environment defined by the TCG Storage Specification. "The goals of the TCG Key Management Services Subgroup (KMSS) are to: (1) Develop a uniform approach to managing keys across a variety of storage devices; (2) Define an extensible set of key management operations to nurture and sustain encrypted data and its associated keys; (3) Define key management audit operations that may be required to securely record all key management operations; (4) Leverage existing protocols and techniques, for example [i] Support the TCG Storage Specification, [ii] Secure Communications, [iii] Authentication, [iv] Discovery, [v] Any existing and applicable standards; (5) Define procedures, protocols, and client APIs as needed to implement these goals..." Walt Hubis (Software Architect, LSI) is [2009-03] Chair of the Trusted Computing Group Key Management Services Subgroup (KMSS) — and Chair of IEEE SISWG P1619.3 'Key Management'.

KMSS is addressing: Key management services (KMS), KMS use cases, KMS device/host secure communication, KMS device/host authentication, KMS device/host capability negotiation, KMS key policy specification, and KMS audit logs. KMSS protocols will allow the host platform or application to perform the following operations and services with trusted storage devices: Requesting key generation; Key usage; Storage of keys; Retrieving keys; Modifying keys; Searching for keys; Key access rights; Disabling of keys; Destruction of keys. KMSS and P1619.3 are cooperating to avoid overlap. A number of vendors participate on both KMSS and P1619.3. The IEEE P1619.3 standard, when finished, will deal with broader key management issues..."
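
The host-to-device operations listed above amount to an abstract key-services interface. The Python sketch below is a hypothetical rendering of that list; it is not an API defined by the TCG or IEEE specifications.

    # Hypothetical interface capturing the KMSS operations listed above.
    from abc import ABC, abstractmethod

    class TrustedStorageKeyServices(ABC):
        """Operations a host or application performs against a trusted storage device."""

        @abstractmethod
        def generate_key(self, policy: dict) -> str: ...            # requesting key generation
        @abstractmethod
        def store_key(self, key_id: str, wrapped_key: bytes): ...   # storage of keys
        @abstractmethod
        def retrieve_key(self, key_id: str) -> bytes: ...           # retrieving keys
        @abstractmethod
        def modify_key(self, key_id: str, attributes: dict): ...    # modifying keys / access rights
        @abstractmethod
        def search_keys(self, criteria: dict) -> list: ...          # searching for keys
        @abstractmethod
        def disable_key(self, key_id: str): ...                     # disabling of keys
        @abstractmethod
        def destroy_key(self, key_id: str): ...                     # destruction of keys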

The KMSS Application Note Encrypting Drives in an Array Controller Version 1.0 (Draft, November 12, 2007) supplements the TCG Storage Specification and gives developers a very specific method for managing the locking and encryption of one or more storage devices, including hard drives. This application is targeted at implementers developing driver software and applications for Full Disk Encryption (FDE) drives. It is focused on enterprise-class applications including disk array systems, but is also broadly applicable to any application of FDE drives..." From the announcement "Trusted Computing Group Publishes Key Management Services Application Note, and Extends Trusted Storage to Optical Devices" (November 14, 2007): "The Trusted Computing Group, an industry organization that develops specifications to enable computing security, has published an application note to enable developers to create key management services for devices based on the organization's Trusted Storage specification. The organization also has created a second subgroup, for optical storage devices, in its Trusted Storage work group. These efforts are intended to help developers create more secure storage devices to protect data against loss and theft. The Key Management Services application note protocols will allow the host platform or application to perform the following operations and services with trusted storage devices: Levels of interaction and security; Requesting key generation; Key usage; Storage of keys; Retrieving keys; Modifying keys; Searching for keys; Key access rights; Disabling of keys; Destruction of keys. This application note gives developers a very specific method for managing the basic locking, encryption, and key management of one or more hard drives..." See bibliographic details below, and commentary in "Trusted Storage for Enterprise-class Hard Disk Drives," by Michael Willett (Seagate).

From TCG KMSS Response to IEEE P1619.3 Use Cases: "The TCG Key Management Services Subgroup (KMSS) [hereby provides] guidance on what the KMSS group will be focusing on so that duplicate effort can be avoided. In general, the TCG KMSS group has agreed to focus on key management issues with respect to the TCG Storage Specification. Thus, any device that implements the TCG storage specification will be supported by the work of the KMSS. The TCG Storage Specification provides a variety of mechanisms that can be used to define and manage keys; the goals of the KMSS are to provide a specific method to manage those keys necessary for use in the environment defined by the TCG Storage Specification..." [archive/cache]

Key Management Infrastructure: "The TCG model for establishing trust in TCG technology may require manufacturers to augment manufacturing processes. Services to create and maintain a database of records, one record for each part manufactured, may be needed. Records need to be delivered to end customers through some means to be determined by the manufacturer. Manufacturers may need to establish public-key signing facilities suitable for signing records in low and high volumes. Some records may contain privacy-sensitive information, which it may be prudent for the manufacturer to protect. Keys used to sign records should be protected from unauthorized use or disclosure through reasonable but credible IT procedures..."

From TCG Storage Specifications and Key Management: "The Trusted Computing Group has published specifications for trusted storage. Storage manufacturers have announced and shipped products designed to those specifications, including self-encrypting drives (SED), both hard (rotating) drives and solid state drives, for laptops and data centers. Not only is cryptographic key management simplified and available for SEDs today, but recent work on key management holds great promise for unifying and standardizing key management for SEDs and other cryptographic systems across the enterprise... For laptop SEDs, many workstation security software vendors are providing local SED management systems with sophisticated capabilities and user-friendly interfaces. Such vendors are further tying this local key management support to centralized servers to support automation and other key management life cycle services across an enterprise of end-user laptops. Data center storage system vendors are also providing local (e.g., RAID controller-based) but automated SED key management that is tied back to centralized servers. Complete systems are available today for both SED laptops and data centers in the enterprise. Recall that key management must deal with the full life-cycle of keys: generation, exchange, storage, safeguarding, use, vetting, replacement and finally, destruction of a key. In addition to numerous integrated key management systems, subsets of the full enterprise key management life-cycle have been defined, built, and occasionally standardized over the years. The subset may be based on: life-cycle phase, applicable industry, grammar/syntax used, or other aspects. Recent work includes IEEE 1619.3, W3C XKMS, and the OASIS EKMI TC. Classical work includes ISO X9 10, applicable to the financial industry. Since the on-board, integrated key management for SEDs has been defined and published by the TCG, SED developers, integrators, and especially users are interested in how emerging enterprise key management systems will 'harmonize' (that is, interface) with the on-board key management. A promising exercise in this regard is the work of the OASIS Key Management Interoperability Protocol (KMIP) Technical Committee..."

See Thomas Hardjono and Greg Kazmierczak (Wave Systems), on behalf of the Trusted Computing Group (TCG), Infrastructure Work Group, in the KMS 2008 presentation: "Overview of the TPM Key Management Standard". Abstract: Today the Trusted Platform Module (TPM) is arguably one of the most successful pieces of security hardware, being cheap and widely available. Currently, over 100 million PC client laptops and desktops have shipped with a TPM (version 1.2), with analysts estimating that by 2010 this number will be over 200 million. There is today considerable interest in using the TPM in the context of storage security, in particular as a tamper-resistant medium within which to store and manage cryptographic keys pertaining to storage encryption. It is conceivable that in the future drive controllers may include the TPM as a means to increase security of those devices. In this presentation we provide an overview of the TPM key management architecture, and in particular the TPM key backup and key migration protocols. An overview of the TPM key hierarchy will also be presented.... Note "The TCG TPM falls into the definition of the Cryptographic Unit as defined in IEEE 1619.3 Draft D4... TPM Key Types: (1) Non-Migratable Key (NMK): A key which is bound to a single TPM. This is a key that is (statistically) unique to a single TPM and cannot be migrated or exported from the TPM. (2) Migratable Key (MK): A key which is not bound to a specific TPM, and with suitable authorization, can be used outside a TPM or moved to another TPM. (3) Certifiable Migratable Key (CMK): A key whose migration from a TPM is highly controlled and the TPM can attest / certify its properties. The TPM key types are defined at key creation time by the User. Migration destinations are defined and authorized by the TPM Owner... Migration Package (MP): An XML document used for the purposes of archiving or migrating one or more keys to another platform. It contains identifiers for the package, the migrating key, and an optional number of children keys of the subject migrating key. The Migration WSDL describes the set of commands related to TPM key migration and backup/vaulting... TPM is a core part of the TCG's Trusted Network Connect (TNC) value proposition..."
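
The Migration Package described above is an XML document carrying a package identifier, the migrating key, and optional child keys. The following is a rough sketch of that shape in Python; the element and attribute names are hypothetical and are not taken from the TCG schema.

    # Rough sketch of the Migration Package shape described above.
    # Element and attribute names are hypothetical, not the TCG schema.
    import xml.etree.ElementTree as ET

    mp = ET.Element("MigrationPackage", {"id": "example-package-001"})
    key = ET.SubElement(mp, "MigratingKey", {"id": "parent-key-uuid"})
    ET.SubElement(key, "ChildKey", {"id": "child-key-uuid-1"})
    ET.SubElement(key, "ChildKey", {"id": "child-key-uuid-2"})

    print(ET.tostring(mp, encoding="unicode"))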

The TCG's Storage Work Group builds upon existing TCG technologies and philosophy, and focuses on standards for security services on dedicated storage systems. One objective is to develop standards and practices for defining the same security services across dedicated storage controller interfaces, including but not limited to ATA, Serial ATA, SCSI, FibreChannel, USB Storage, IEEE 1394, Network Attached Storage (TCP/IP), and iSCSI. Storage systems include disk drives, removable media drives, flash storage, and multiple storage device systems. In early 2009, the Storage Work Group released the TCG Storage Work Group Security Subsystem Class: Opal Version 1.0, the TCG Storage Work Group Security Subsystem Class: Enterprise Version 1.0, and the TCG Storage Interface Interactions Specification, available for download from the Storage Specification Page.

From the "TCG Storage Work Group Summary: Enterprise Security Subsystem Class Specification": "The Trusted Computing Group (TCG) has identified storage as a critical element of data security. The Storage Work Group within TCG develops specifications to address the security requirements for storage. With the release of the Enterprise Security Subsystem Class (SSC) specification, the Storage Work Group addresses security for fixed-media storage devices such as hard disc drives in high performance storage systems. This enables a more secure computing environment without compromising functional integrity. The primary goal is helping users protect their information assets from compromise. Data at rest protection assures the storage owner that if their storage device is lost or stolen, their data will not be accessible without proper authentication. The Enterprise SSC specification enables the following: (1) Encrypts all user data on media (Full Disc Encryption — FDE using self encrypting storage devices); (2) Provides access control to support organizational security policies with strong authentication; (3) Employs the Trusted Computing Group as a forum for critical security review, system architecture and interoperability..." See the specification.

From the Infrastructure Work Group FAQ document: "The set of specifications consists of the Integrity Management Architecture, the application programming interface (API) specification for a measurement agent called the Platform Trust Services (or PTS) Interface, and a common XML-based data format for capturing and reporting integrity information in a client. The Integrity Management Architecture provides the common framework for defining, collecting and reporting information pertaining to the integrity of a trusted platform. Such information includes the components (software and hardware) constituting the platform, the elements that participated in its booting-up and the software that establishes the computing environment in the platform. The Platform Trust Services (PTS) specification defines the API to a measurement agent that performs the collection, measurement and reporting of the integrity information on the platform. The PTS specification has been written to be platform independent, meaning that it is applicable to the various types of platforms or devices (e.g. PC client, server, mobile phones, etc). In order for the integrity information to be meaningful and verifiable by external entities (e.g. other devices), a common XML-based data format for representing this information has been defined in the TCG Integrity Schema specifications. The Integrity Schema itself can be understood as consisting of three major pieces derived from a single XML schema. These are the data format for collecting and reporting integrity information, the format for representing reference measurements of known values, and the format for the verification results from evaluating a report..."

The Trusted Computing Group (TCG) is a not-for-profit organization formed to develop, define, and promote open standards for hardware-enabled trusted computing and security technologies, including hardware building blocks and software interfaces, across multiple platforms, peripherals, and devices. TCG specifications will enable more secure computing environments without compromising functional integrity, privacy, or individual rights. The primary goal is to help users protect their information assets (data, passwords, keys, etc.) from compromise due to external software attack and physical theft. TCG specifications and related information such as application notes are freely available on the organization's website.

As of 2007, companies participating in this work included BSI, LSI, Hewlett Packard, Seagate Technology, Hitachi, Decru, Neoscale, IBM, Sun Microsystems, Dell, STMicroelectronics, StepNexus, Emulex, Fujitsu Limited, Lenovo, Lexar, Marvell, Microsoft, Quantum, RSA Security, Symantec, Intel, SanDisk, Sinosun Technology, and others.

Trusted Computing Group References:

  • Key Management Services Subgroup (KMSS)
  • TCG Infrastructure Work Group
  • Trusted Computing Group web site
  • TCG FAQ document
  • Trusted Computing Group Members
  • TCG Glossary of Technical Terms
  • TCG Storage Specifications and Key Management. From the Trusted Computing Group (TCG). December 2009. 7 pages. See the reference page.
  • TCG Storage Security Subsystem Class: Enterprise. January 2009. "The Storage Workgroup specifications are intended to provide a comprehensive architecture for putting storage devices under policy control as determined by the trusted platform host, the capabilities of the storage device to conform with the policies of the trusted platform, and the lifecycle state of the storage device as a Trusted Peripheral."
  • TCG Storage Security Subsystem Class: Opal. Specification Version 1.0. Revision 1.0. January 27, 2009. 81 pages. This specification defines the Opal Security Subsystem Class (SSC). Any SD that claims OPAL SSC compatibility shall conform to this specification. The intended audience for this specification is both trusted Storage Device manufacturers and developers that want to use these Storage Devices in their systems.
  • Application Note 1: Encrypting Drives in an Array Controller. Prepared by members of the TCG Storage Working Group. Draft. Revision 0.39. November 12, 2007. 85 pages. Contact: Storage Subsystem Working Group (SSWG). "The use case is a storage array controller, such as a RAID controller, that is managing the locking and encryption of one or more hard drives. Encryption Keys used by the drives to encrypt and decrypt the data stored on the drive are random and generated by the drive. The Encryption keys never leave the drive and are persisted by the drive. It is assumed the Encryption key is also used for decryption of the data. While the best practice is not to escrow the Encryption Keys, escrow may be required. In this case, the keys are wrapped for transport using the NIST recommendations for key wrapping. A re-provisioning command to the array controller will cryptographically erase the data on the drive by generating a new, random Encryption Key for the drive and unlock the drive, allowing it to be used in another volume group, array, or other application, even if that application is not a secure application. In this case, the drive acts as a normal drive without any security capabilities although the data originally contained on the drive will appear as random data, since the decryption is being performed with a different Encryption Key from that used to encrypt the data." Future Application Notes may include: Key Management for Tape Systems, Key Management for Optical Storage, and Key Management for Consumer Devices. A minimal sketch of the cryptographic-erase idea appears below.
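
The cryptographic-erase behavior described in the application note can be modeled in a few lines. The sketch below is purely illustrative; real self-encrypting drives perform this entirely inside the drive, and the class and method names here are hypothetical.

    # Purely illustrative model of cryptographic erase: replacing the drive's
    # data-encryption key (DEK) with a fresh random key makes previously
    # written ciphertext unreadable.
    import secrets

    class SelfEncryptingDriveModel:
        def __init__(self):
            self.dek = secrets.token_bytes(32)   # generated by the drive, never exported

        def reprovision(self):
            """Cryptographic erase: discard the old DEK by generating a new one."""
            self.dek = secrets.token_bytes(32)

    drive = SelfEncryptingDriveModel()
    old = drive.dek
    drive.reprovision()
    assert drive.dek != old   # data written under the old key is now irrecoverable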


W3C XML Key Management (XKMS)

The W3C XML Key Management (XKMS) Activity was organized to specify protocols for distributing and registering public keys, suitable for use with the standard for XML Signatures defined by W3C and the Internet Engineering Task Force (IETF) and its companion standard for XML Encryption. The XKMS specification became a W3C Recommendation in June 2005, and has two parts: the XML Key Information Service Specification (X-KISS) and the XML Key Registration Service Specification (X-KRSS).

X-KISS allows a client to delegate part or all of the tasks required to process XML Signature 'ds:KeyInfo' elements to an XKMS service. A key objective of the protocol design is to minimize the complexity of applications using XML Signature. By becoming a client of the XKMS service, the application is relieved of the complexity and syntax of the underlying PKI used to establish trust relationships, which may be based upon a different specification such as X.509/PKIX, SPKI, or PGP. By design, the XML Signature specification does not mandate use of a particular trust policy. The signer of a document is not required to include any key information but may include a ds:KeyInfo element that specifies the key itself, a key name, X.509 certificate, a PGP key identifier etc. Alternatively, a link may be provided to a location where the full ds:KeyInfo information may be found. The information provided by the signer may therefore be insufficient by itself to perform cryptographic verification and decide whether to trust the signing key, or the information may not be in a format the client can use. For example: the key may be specified by a name only, the local trust policy of the client may require additional information in order to trust the key, and the key may be encoded in an X.509 certificate that the client cannot parse. In the case of an encryption operation, the client may not know the public key of the recipient...
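
As a rough illustration of this delegation model, the fragment below builds a much-simplified X-KISS LocateRequest with Python's ElementTree. It is not schema-complete; the service URL and key name are placeholders, and the normative message structure is defined by the XKMS 2.0 Recommendation.

    # Simplified, non-normative sketch of an X-KISS LocateRequest.
    import xml.etree.ElementTree as ET

    XKMS = "http://www.w3.org/2002/03/xkms#"
    DS = "http://www.w3.org/2000/09/xmldsig#"
    ET.register_namespace("xkms", XKMS)
    ET.register_namespace("ds", DS)

    req = ET.Element(f"{{{XKMS}}}LocateRequest",
                     {"Id": "example-request-1", "Service": "http://xkms.example.com/service"})
    ET.SubElement(req, f"{{{XKMS}}}RespondWith").text = XKMS + "KeyValue"
    qkb = ET.SubElement(req, f"{{{XKMS}}}QueryKeyBinding")
    key_info = ET.SubElement(qkb, f"{{{DS}}}KeyInfo")
    ET.SubElement(key_info, f"{{{DS}}}KeyName").text = "alice@example.com"

    print(ET.tostring(req, encoding="unicode"))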

X-KRSS describes a protocol for registration and subsequent management of public key information. A client of a conforming service may request that the registration service bind information to a public key. The information bound may include a name, an identifier or extended attributes defined by the implementation. The key pair to which the information is bound may be generated in advance by the client or on request generated by the service. The Registration protocol may also be used for subsequent management operations including recovery of the private key and reissue or revocation of the key binding. The protocol provides for authentication of the applicant and, in the case that the key pair is generated by the client, Proof of Possession (POP) of the private key. A means of communicating the private key to the client is provided in the case that the private key is generated by the registration service. This (XKMS-2) document specifies means of registering RSA and DSA keys and a framework for extending the protocol to support other cryptographic algorithms such as Diffie-Hellman and Elliptic Curve variants..."

XKMS does not require any particular underlying public key infrastructure (such as X.509) but is designed to be compatible with such infrastructures. The X-KISS protocol allows an application to delegate to a service the processing of key information associated with an XML signature, XML encryption, or other usage of the XML Signature ds:KeyInfo element. The X-KRSS protocol supports the registration of a key pair by a key pair holder, with the intent that the key pair subsequently be usable in conjunction with X-KISS or a Public Key Infrastructure (PKI) such as X.509 or PKIX.

W3C XKMS References:


General: News, Articles, Reports

  • [August 2012] Requirements and Desirable Features of U.S. Federal Cryptographic Key Management Systems. NIST Draft SP. 26 pages. An initial draft of the Profile requirements was made available for public comment and for discussion by participants of the NIST Cryptographic Key Management Workshop 2012 held September 10-11, 2012.

    Overview: "NIST is developing a NIST Special Publication 800-152 entitled A Profile for U. S. Federal Cryptographic Key Management Systems (CKMS) for use by Federal agencies and contractors when designing, implementing, procuring, installing, configuring, and operating a CKMS. This Profile will be based on the NIST Special Publication 800-130 entitled A Framework for Designing Cryptographic Key Management Systems. The Framework covers topics that should be considered by a product or system designer when designing a CKMS and specifies requirements for the design and its documentation. The Profile, however, will cover not only a CKMS design, but also its procurement, installation, management, and operation throughout its lifetime. Requirements will, therefore, be placed not only on a CKMS product or system, but also on people (procurement officials, installers, managers, and operators) while performing specific tasks involving the CKMS..."

  • [March 21, 2012] A Framework for Designing Cryptographic Key Management Systems. By Elaine Barker, Miles Smid, Dennis Branstad, Santosh Chokhani. NIST DRAFT Special Publication 800-130. April 2012. 112 pages. Abstract: "This Framework for Designing Cryptographic Key Management Systems (CKMS) contains topics that should be considered by a CKMS designer when developing a CKMS design specification. For each topic there are one or more documentation requirements that need to be addressed by the design specification. Thus, any CKMS that adequately addresses these requirements would have a design specification that is compliant with this Framework...

    [Intro] This Framework for Designing Cryptographic Key Management Systems (CKMS) is a description of the topics to be considered and the documentation requirements (henceforth referred to as requirements) to be addressed when designing a CKMS. The CKMS designer satisfies the requirements by selecting the policies, procedures, components (hardware, software, and firmware), and devices (groups of components) to be incorporated into the CKMS, and then specifying how these items are employed to meet the requirements of this Framework. A CKMS consists of policies, procedures, components and devices that are used to protect, manage, and distribute cryptographic keys and certain specific information, called (associated) metadata herein. A CKMS includes any device or sub-system that can access an unencrypted key or its metadata. Encrypted keys and their cryptographically protected (bound) metadata can be handled by computers and transmitted through communications systems and stored in media that are not considered to be part of a CKMS....

  • [November 19, 2009] "Federated Key Management as the Basis for Secure Cloud Computing." By Luther Martin (Chief Security Architect, Voltage Security). "Cloud computing creates security problems that most organizations have not yet had to face on a large scale: protecting data when the location of the data is generally unknown. Encryption is a useful tool for solving this problem, but using it in the cloud is hard because of the key management problems that this causes. Fully federated key management can provide the basis for protecting sensitive data in the cloud, and it's probably the basis for how we'll eventually protect such data... If you're careless with the combination to your safe, someone else can easily use it to open your safe, and the protection provided by the safe is compromised. Similarly, the cryptographic keys that you use to encrypt data need to be handled carefully. If you're careless with them then the protection provided by encryption can be essentially eliminated. Key management covers all the details of how to handle keys carefully enough to ensure this does not happen. It ensures that you don't do the cryptographic equivalent of writing the combination to your safe on a Post-it note and sticking it to the wall next to your desk.... Federation describes how different computer systems can work together. In the context of key management, federation includes how different applications can get keys from the same key server. This is an important aspect of key management that needs to be addressed before encryption can be used to protect sensitive data in the cloud, and the lack of the ability to do federation dramatically limits the usefulness of many encryption and key management products today... If all applications understand how to handle such information when it's included in a key identifier, then they can easily use this information as the basis for federated key management... This [federation] approach has already been used with great success in existing key management technologies. Systems that use Identity-Based Encryption (IBE), for example, already use more general key identifiers. This approach hasn't quite made it to other encryption technologies yet, although it probably will soon. The most recent draft of the IEEE P1619.3 Standard for Key Management Infrastructure for Cryptographic Protection of Stored Data uses this approach to define unique identifiers for keys from which it's easy to find the URL of the key server where the key can be obtained. Once that standard is implemented, key management for encrypting backup tapes will certainly get much easier. Once this idea is extended to other technologies, we'll have the fully federated key management that we need to protect sensitive data in the cloud. This sounds simple enough, but actually writing a standard that will be the basis for doing this in an interoperable way isn't easy. Using technologies like IBE may be as close as we can come to fully federated key management until the necessary standards are completed..."

  • [November 12, 2009] "CA Launches Mainframe-Based Encryption Key Management Software." By Beth Pariseau. From Storage Soup Blog. "Claiming its approach to enterprise data security key management will assure users of reliability, CA this week launched a new Encryption Key Manager (EKM) software offering that runs on the z/OS mainframe and can manage keys for CA Tape Encryption as well as IBM tape formats. Stefan Kochishan, director of storage product marketing for CA, said a lack of key management standards for encryption at the various points it's deployed in the enterprise has hindered encryption adoption. But, he argued, many customers are also concerned with the reliability of open-systems based encryption key managers, since without keys to access it, encrypted data can be lost... The new z/OS based product will manage IBM and CA tape encryption instances and automatically mirror keys among mainframes at up to three sites, including replication over SSL and digital certification for data integrity. This method allows keys to be re-created from an alternate location should the primary key manager fail, a key be accidentally deleted, or the primary site be lost in a disaster. Users can also back up the key store to mitigate the threat of rolling corruption in the replication system..."

    According to the CA announcement: "IT organizations face new encryption key management issues as expanding compliance mandates and growing consumer concerns about privacy drive more rigorous protection of sensitive data. These issues include: (1) The time and effort required to manage keys; (2) The accuracy with which keys must be distributed to authorized users; (3) The need to ensure the availability of all keys under any conditions; (4) The need to credibly document encryption measures to auditors. CA EKM helps customers address these issues and others by providing a single, centralized interface that can be used for any combination of IBM TS1120 and IBM TS1130 tape encryption devices, as well as CA Tape Encryption subsystems... CA EKM automatically replicates encryption keys across a set of local and dispersed hosts via SSL-encrypted TCP/IP, so that keys can quickly and transparently be recovered in case of a disaster, hardware errors or a system outage. It also automatically enforces policies regarding the change of encryption keys and digital certificates, thereby mitigating the labor and risk associated with manual administration..." See also the CA announcement: "CA Encryption Key Manager Helps Simplify Compliance While Enabling Ready Access to Protected Data. Unified Automation of Critical Storage Security Helps Reduce TCO and Mitigate Risk for Multi-Vendor Storage Environments."

  • [October 27, 2009] "2009 Encryption and Key Management Industry Benchmark Report: A Risk Management Benchmark for Data Protection." By Kimberly Getgen (Principal, Trust Catalyst). Report published October 20, 2009. 33 pages. Research sponsored by Thales Information Systems Security. "Over the next 12 months, regulation requiring the protection of data and mandatory breach notification will only continue to grow. At the same time, many organizations will continue to experience damaging, costly, and very public data breaches. As this survey shows, encryption is one of the most effective means to protect data. Using encryption with automated key management goes a long way toward helping organizations achieve their compliance and IT operations objectives... Key management concerns continue to plague organizations attempting to encrypt sensitive data. Once this data is encrypted, it must be recoverable at some point in the future, with little room for error. First and foremost, data must be available. Concerns around data availability have made planning an organization's key management strategy no easy feat. A third of survey respondents (34 percent) have been planning their key management strategy for over a year (up from 26 percent in 2008)... For most applications, encrypted data needs to be recovered in less than a day, but for business-critical applications like databases, network link encryption, and payment processing applications, data often must be recovered in less than an hour... 'Where are encryption keys stored?' The most popular response for most applications was 'don't know' -- even for the applications that needed to be recovered in less than an hour. However, for respondents who knew where keys were stored, the majority of applications that needed to be recovered in an hour were most likely to be in a hardware security module (HSM)... Without HSMs or the use of automated key management tools, we believe data availability concerns will continue to stand in the way of data protection...

    "The report identifies these key findings: (1) Unnecessary risk. The Achilles' heel of many organizations remains the same as last year: unencrypted databases and backup tapes... (2) Cost of encryption remains a top concern. Participants said cost remains the single most important factor preventing the encryption of data that should be encrypted... (3) Operational concerns delay encryption projects. Cost isn't the only barrier to encryption adoption. The decision to encrypt requires organizations to weigh other operational efficiencies against the need for data protection... (4) Lost keys disrupt business. 8 percent of organizations have experienced problems with lost encryption keys, creating security concerns (50 percent), causing data to be permanently destroyed (39 percent), or disrupting the business... (5) Key management and compliance. For the first time, participants ranked 'proving compliance requirements have been met' as the most challenging aspect of key management... (6) New encryption mandates considered helpful to data protection strategies. Regulations mandating encryption were seen as helpful in moving data protection strategies forward for an overwhelming 71 percent of survey participants... (7) Patient and credit card data protection drives encryption spending. PCI DSS, HIPAA, and the EU Data Privacy Directive are the top three data protection regulations requiring allocation of new encryption budget over the next 24 months... (8) Cloud not ready for prime time. 52 percent of participants cite data security concerns as being the number one barrier preventing their organization from adopting cloud..."

  • [September 26, 2009] "Full Disk Encryption Evolves. The Opal Standard Paves the Way for Hardware-Based Encryption." By Greg Shipley. From InformationWeek. "Earlier this month [2009-09], the Naval Hospital in Pensacola, Florida, began notifying thousands of individuals that personally identifiable information about them had been lost when a laptop disappeared. In August 2009, the National Guard announced that a laptop containing personal information on 131,000 members had been stolen. We could go on — rarely does a month go by without an organization revealing the loss or theft of a laptop brimming with sensitive data. Full disk encryption, or FDE, is the preferred mechanism to address this threat because, as the name implies, the technology lets IT encrypt the entire hard drive so that sensitive data is protected, no matter where it resides. But unfortunately, FDE adoption comes at a price: complex and costly deployments, additional licensing fees, and one more application for IT to support... In January 2009, the Trusted Computing Group released the final specification of the Opal Security Subsystem Class, a standard for applying hardware-based encryption. Moving hard-drive encryption into hardware has a number of advantages. For starters, it works with any OS. It also moves the computational overhead of the encryption process to dedicated processors, alleviating any computing load on the system's CPU. In addition, the encryption/decryption keys are stored in the hard-drive controller and never sit in the system's memory, making 'cold boot' attacks ineffective... Consider yourself warned: Without an integrated management infrastructure, enterprise deployment and support of Opal-compliant hard drives will be a nightmare... [the mentioned] use cases require a centralized management platform that can communicate with endpoints. We're aware of only one vendor (Wave Systems) that's shipping a management platform to tie all of this together. Wave uses a "pre-boot" operating system to set up admin and user accounts for unlocking the hard drive's encryption keys before the OS boots, and also has a Windows agent that can sync these accounts with Active Directory....On the manufacturing side, vendor support for hardware-based FDEs is good. In the last six months, Fujitsu, Hitachi, and Samsung have debuted Opal-compliant drives, and system vendors Dell and Lenovo are shipping laptops with Opal-based drives..."

  • [September 23, 2009] "Design and Implementation of a Key-Lifecycle Management System." By Mathias Björkqvist, Christian Cachin, Robert Haas, Xiao-Yu Hu, Anil Kurmus, René Pawlitzek, and Marko Vukolić (IBM Research - Zurich, CH-8803 Rüschlikon, Switzerland). September 23, 2009 (or later). "Key management is the Achilles' heel of cryptography. This work presents a novel Key-Lifecycle Management System (KLMS), which addresses two issues that have not been addressed comprehensively so far. First, KLMS introduces a pattern-based method to simplify and to automate the deployment task for keys and certificates, i.e., the task of associating them with endpoints that use them. Currently, the best practice is often a manual process, which does not scale and suffers from human error. Our approach eliminates these problems and specifically takes into account the lifecycle of keys and certificates. The result is a centralized, scalable system, addressing the current demand for automation of key management. Second, KLMS provides a novel form of strict access control to keys and realizes the first cryptographically sound and secure access-control policy for a key-management interface. Strict access control takes into account the cryptographic semantics of certain key-management operations (such as key wrapping and key derivation) to prevent attacks through the interface, which plagued earlier key-management interfaces with less sophisticated access control. Moreover, KLMS addresses the needs of a variety of different applications and endpoints, and includes an interface to the Key Management Interoperability Protocol (KMIP) that is currently under standardization... Our implementation of the KLMS server, together with the prototype support for KMIP, measures over 70k lines of Java code... Currently, KLMS supports the four automated deployment patterns presented in Section 3.1, yet additional patterns can be added in a modular manner. The implementation of strict access control takes into account the possible size of the object attributes dependents and readers; these grow with the system's age and may pose performance issues if implemented sub-optimally. To cope with this, our implementation uses two separate global tables in the DB layer for these two attributes. For the readers table, as for the representation of ACL, DB maintains the identity of a user determined from an LDAP directory server in the form of the string representation of the user's LDAP Distinguished Name. KLMS support for hardware-security modules (HSM) is foreseen by the architecture, but has not been implemented yet. Currently, the DB layer is based on a small-footprint Apache Derby database. The experimental integration of support for KMIP includes the portable portion of KMIP client/server code (around 28.5k lines of Java code) and the KMIP/KLMS adapter code (slightly over 4k lines). With this architecture, the support for KMIP can be easily transferred to a different key server core..." [cache/archive]
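
    As an illustration of the paper's pattern-based deployment idea, the sketch below maps endpoint names to the keys and certificates they should receive. The pattern language (simple filename-style globs) and all names here are hypothetical; the KLMS paper defines its own, richer patterns.

```python
import fnmatch

# Hypothetical deployment rules: each entry maps an endpoint-name glob to the
# named keys/certificates that matching endpoints should be provisioned with.
DEPLOYMENT_PATTERNS = [
    ("tape-*.example.com", ["tape-encryption-key"]),
    ("web-*.example.com",  ["tls-server-cert", "tls-server-key"]),
    ("*.example.com",      ["corporate-root-ca"]),
]

def keys_for_endpoint(endpoint_name):
    """Return every key/certificate whose pattern matches this endpoint."""
    wanted = []
    for pattern, key_names in DEPLOYMENT_PATTERNS:
        if fnmatch.fnmatch(endpoint_name, pattern):
            wanted.extend(key_names)
    return wanted

print(keys_for_endpoint("tape-01.example.com"))
# ['tape-encryption-key', 'corporate-root-ca']
```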

  • [September 07, 2009] "Encryption Made Easier With New Key Management Tools. New Tools Solve Problem of how to Manage Encryption Keys." By Logan G. Harbaugh. From Network World. "In response to dramatic and widely publicized losses of data over the last few years, IT execs are moving to deploy encryption in every corner of the enterprise. While encryption does reduce the chances of data loss, it can also create a management nightmare, with dozens of different encryption applications using hundreds or thousands of keys. To address that problem, vendors have developed enterprise encryption key management tools. Of the dozen vendors that we identified, three accepted our invitation -- Thales, Venafi and Vormetric. Vendors who declined were Entrust, NetApp (Decru), PGP, Protegrity, RSA (EMC), SafeNet (Ingarian) and WinMagic. The still-developing state of the market is reflected in the different types of products we received — an appliance from Thales that supports a variety of key exchange standards, software from Venafi that supports a broad range of applications, and an appliance from Vormetric that replaces existing encryption on a variety of platforms, enabling one appliance to manage encryption across a broad range of applications... There are a wide range of tasks associated with key management: issuing, renewing and revoking keys; monitoring applications; reporting and logging; setting and auditing policies; management; and in some cases, discovery of applications using keys that can be managed through the system... No single management tool will be able to perform all these tasks with every possible application using keys or certificates, or at least not without considerable custom programming. Part of the reason is that standards for key exchange (providing keys to one application by another) are still under development, and even when standards are ratified, it may take years before all enterprise applications and management solutions support them... Thales has FIPS level security that some organizations that deal with the government will require, Venafi has an easy-to-use and straightforward discovery, management and deployment system, and Vormetric can handle encryption for applications that don't offer native encryption, including versions of Windows before 2008 and backups of IBM's DB2 and IDS databases..."

  • [September 04, 2009] Recommendation for Pair-Wise Key Establishment Schemes Using Integer Factorization Cryptography. NIST Special Publication 800-56B. Edited by Elaine Barker, Lily Chen, Andrew Regenscheid, and Miles Smid. August 2009. 114 pages. Keywords: assurances; integer factorization cryptography; key agreement; key confirmation; key derivation; key establishment; key management; key recovery; key transport. Abstract: "This Recommendation specifies key establishment schemes using integer factorization cryptography, based on ANS X9.44, Public Key Cryptography for the Financial Services Industry: Key Establishment Using Integer Factorization Cryptography [published August 24, 2007], which was developed by the Accredited Standards Committee (ASC) X9, Inc... A key establishment scheme can be characterized as either a key agreement scheme or a key transport scheme. This Recommendation provides asymmetric-based key agreement and key transport schemes that are based on the Rivest Shamir Adleman (RSA) algorithm... Secret cryptographic keying material may be electronically established between parties by using a key establishment scheme, that is, by using either a key agreement scheme or a key transport scheme. During key agreement, the derived secret keying material is the result of contributions made by both parties. Key agreement schemes may use either symmetric key or asymmetric key (public key) techniques. The key agreement schemes described in this Recommendation use public key techniques. The party that begins a key agreement scheme is called the initiator, and the other party is called the responder. During key transport (where one party selects the secret keying material), encrypted secret keying material is transported from the sender to the receiver. The key transport schemes described in this Recommendation use either public key techniques or a combination of public key and symmetric key techniques. The party that sends the secret keying material is called the sender, and the other party is called the receiver. The security of the Integer Factorization Cryptography (IFC) schemes in this Recommendation is based on the intractability of factoring integers that are (divisible by) products of (two or more) sufficiently large, distinct prime numbers... Many U.S. Government Information Technology (IT) systems need to employ strong cryptographic schemes to protect the integrity and confidentiality of the data that they process. [Several] algorithms have been standardized to facilitate interoperability between systems. However, the use of these algorithms requires the establishment of shared secret keying material in advance. Trusted couriers may manually distribute this secret keying material, but as the number of entities using a system grows, the work involved in the distribution of the secret keying material grows rapidly. Therefore, it is essential to support the cryptographic algorithms used in modern U.S. Government applications with automated key establishment schemes..." See also: Recommendation for Pair-Wise Key Establishment Schemes Using Discrete Logarithm Cryptography (Revised), NIST Special Publication 800-56A, March, 2007.
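
    A minimal sketch of the key-transport idea described above: one party selects the secret keying material and sends it encrypted under the receiver's RSA public key. It uses the third-party pyca/cryptography package and plain RSA-OAEP; SP 800-56B specifies particular schemes with additional assurances and key-confirmation steps that are not shown here.

```python
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Receiver generates an RSA key pair and publishes the public key.
receiver_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
receiver_public = receiver_private.public_key()

# Sender (key transport): selects the secret keying material and encrypts it
# under the receiver's public key with RSA-OAEP.
secret_keying_material = os.urandom(32)            # e.g., a 256-bit AES key
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped = receiver_public.encrypt(secret_keying_material, oaep)

# Receiver recovers the keying material with its private key.
recovered = receiver_private.decrypt(wrapped, oaep)
assert recovered == secret_keying_material
```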

  • [August 20, 2009] "Lack of Data Encryption Standards Hampering Storage Security." By Beth Pariseau. From TechTarget.com. "While organisations are encrypting data in more places, the lack of standard encryption key management is a stumbling block to securing data. The options for encrypting data have expanded in recent years. Encryption started as a feature inside network devices sold by companies such as NetApp/Decru and NeoScale Systems Inc. (now owned by nCipher Plc). It didn't catch on there, but began showing up in backup software. Disk makers Fujitsu and Seagate Technology now sell self-encrypting hard drives, Brocade and Cisco offer encryption in Fibre Channel (FC) switches, and encryption is native in enterprise tape libraries from IBM and Sun and in LTO-4 tape drives. EMC, Fujitsu, Hitachi Data Systems, IBM and LSI support disk-based encryption within storage arrays. However, these vendors have separate systems for managing the keys needed to read encrypted data. Those keys must be stored, protected, backed up and tracked -- a process that becomes more unwieldy as an organisation adds encryption in different places... Key management remains the stumbling block: Key management becomes more important as encryption becomes more commonly implemented. It also becomes more of a stumbling block with encryption happening in multiple devices from different vendors, and no single standard for managing the keys... Earlier this year, a coalition of vendors led by Hewlett-Packard (HP), IBM, EMC/RSA Security and Thales Group submitted a standard for interoperability between key management systems and encryption devices to the Organisation for the Advancement of Structured Information Standards (OASIS). The spec is called the Key Management Interoperability Protocol (KMIP), and the collaborating vendors would like to see it become an industry-wide standard by the end of this year. If adopted, KMIP would allow users to attach almost any encrypting device to one preferred key management system, regardless of the vendors involved. Brocade, LSI and Seagate are also in the KMIP group..."

  • [July 30, 2009] "Pervasive Encryption and Key Management. How Pervasive Encryption Can Protect Your Data." By Jon Oltsik (Principal Analyst, Enterprise Strategy Group [ESG]). From NetworkWorld 'Networking Nuggets and Security Snippets'. "At ESG, we have this concept called 'Pervasive Encryption.' It goes something like this: The Demand Side: Privacy and data breach laws are driving increased use of encryption technologies up and down the technology stack. The Supply Side: Cryptographic processors keep getting cheaper so they are being actively embedded into devices like disk and tape drives. More applications, databases, operating systems, and file systems are also being enhanced with encryption capabilities. Ultimately this means that more encryption technologies are implemented across the network — thus pervasive encryption... Yes, data confidentiality and integrity will improve but managing encryption keys isn't easy or well understood. According to a recent ESG Research survey, only 12% of enterprise security managers believe that their organizations are very familiar with key management concepts, technologies, and best practices. Alarmingly, 34% said they were not very familiar or not at all familiar with key management concepts, technologies, and best practices..."

  • [July 23, 2009] "Encrypted Storage and Key Management for the Cloud." By James Hughes. "What does it mean to have secure storage in the cloud? This: (1) Only I can boot my virtual machine; (2) Unauthorized tampering of my virtual machine will be detected; (3) My data is accessed solely by my virtual machine; (4) The system should not require me to enter a key or passphrase. These seemingly simple goals are surprisingly elusive... The Cloud Security Alliance has issued their 'guidance'. Some may argue this guidance simply argues against using the cloud since they don't offer any advice on how to meet their requirements. That said, there are important areas that can be addressed with technical solutions. This posting is an attempt to address one, data security. I paraphrase their requirement for data security to state: encrypt your data, don't store your keys at your provider, and ensure your provider uses standard encryption by contract. I believe they got most of the goals right, but they missed one: ensure your provider destroys all key material once it is no longer needed... Contracts with the cloud provider should include a 'no key storage' clause that states, 'Any keys provided for use will not be retained any longer than absolutely necessary'. This kind of clause is not unprecedented. The Payment Card Industry Security Standards state that merchants must not save the CVV number that is on the customer's credit card even if they use it to authenticate the customer... Using cryptography in storage is well understood. There are a plethora of products and now even standards like P1619 that address this. Why is the cloud so far behind? It may be because of a common belief that key management is complicated. Later in this document I will describe a key storage mechanism that can only be described as simple... The safeguards described here are to protect your machine while it is powered off and provide a key management scheme that allows the transition from off to on and on to off in a secure manner. I've demonstrated that key management and key storage can (and should) be separate layers, and have described a simple web based key storage API. I've discussed what to do in the case of a known breach, the residual vulnerabilities, and obvious extensions to this scheme. Next steps could be to work on a prototype. It will require changes to the boot process in Xen or KVM as well as some more formal crypto work. Sounds like a great Master's or even PhD project? There are papers to be written and even the possibility of standardization. Anyone want to create a standards group for this?"
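
    Hughes describes, but does not reproduce, a simple web-based key storage API. The client sketch below is purely hypothetical (the endpoint, URL scheme, and encoding are invented for illustration) and only shows the separation he argues for: the service stores wrapped key blobs, while unwrapping happens elsewhere.

```python
import base64
import requests

KEY_STORE_URL = "https://keystore.example.com/keys"   # hypothetical endpoint

def store_wrapped_key(key_id, wrapped_key):
    """Upload an already-encrypted key blob to the key storage service."""
    resp = requests.put(
        f"{KEY_STORE_URL}/{key_id}",
        data=base64.b64encode(wrapped_key),
        headers={"Content-Type": "application/octet-stream"},
        timeout=10,
    )
    resp.raise_for_status()

def fetch_wrapped_key(key_id):
    """Retrieve the blob; it is unwrapped only inside the virtual machine and
    never stored in cleartext at the provider (the 'no key storage' goal)."""
    resp = requests.get(f"{KEY_STORE_URL}/{key_id}", timeout=10)
    resp.raise_for_status()
    return base64.b64decode(resp.content)
```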

  • [June 08, 2009] Cryptographic Key Management Workshop. NIST Focus Paper. Prepared by members of the Computer Security Division, U.S. National Institute of Standards and Technology, Gaithersburg, Maryland, USA. Apropos of the NIST Key Management Workshop, June 8-9, 2009. "[...] What is a key management framework? A key management framework is a basic conceptual structure that is used to specify the high-level issues and requirements for secure key management and will be the initial product of the CKM [Cryptographic Key Management] workshop. The framework will provide a structure for defining key management architectures from which key management systems can be built. The CKM framework is intended to define the components of a seamless set of technologies that will automatically create, establish, supply, store, protect, manage, update, replace, verify, lock, unlock, authenticate, audit, backup, destroy, and oversee all cryptographic keys needed for applications in the computing and communicating environments of the future. The framework will define the requirements for secure key management; the topics to be addressed include security policies, trust issues, cryptographic algorithms and key sizes for generating, distributing, storing, and protecting keys, key distribution, interoperable protocols, archiving, key recovery, key lifecycles, transparent user interfaces, etc... CKM includes policies for selecting appropriate key generation/establishment algorithms and key sizes, protocols to utilize and support the distribution of keys, protection and maintenance of keys and related data, and integration of key management with cryptographic technology to provide the required type and level of protection specified by the overall security policy and specifications... Some large-scale applications to be addressed [in the workshop] include the protection of critical infrastructure information, uniform (if not universal) health care, international finance, real-time national voting systems, integrated electronic commerce, international multimedia communications, long term information archives, Federal and State social services, and automatic data conversion conforming to technology changes, etc. New processing paradigms include Cloud/Web 2.0 and secure electronic personal data assistants capable of easily interfacing to personally private domains, such as family, finances, health, politics, religion, education, and professions. Special considerations include provisioning (i.e., providing needed parameters whenever and wherever needed) and maintenance of cryptographic keys for cryptography-based security mechanisms, and the long term integrity, availability, and confidentiality assurance of the keys used to protect data-at-rest (in short and long term storage)... NIST intends to post a draft CKM framework for public comment in the fall of 2009. NIST will use the framework to develop key management guidance and to prepare possible workshops to discuss CKM-related activities. Comments on this draft framework will be solicited from the public..."

  • [June 05, 2009] "How to Protect Sensitive Data Using Database Encryption." By Christian Kirsch. From eWEEK (June 05, 2009). "Database encryption has gradually worked its way up the priority list for today's IT director. Firewalls and application security are no longer enough to protect businesses and data in the modern-day, open and complex IT environment. Mitigating this risk and complying with numerous emerging regulations are two principal drivers that are forcing database encryption onto the IT director's agenda. Here, eWEEK Knowledge Center contributor Christian Kirsch explains how these challenges can be overcome and advises on best practices for database encryption... Advanced security through database encryption is required across many different sectors and is increasingly needed to comply with regulatory mandates. The public sector, for example, uses database encryption to protect citizen privacy and national security. Initiated originally in the United States, many governments now have to meet policies requiring Federal Information Processing Standard (FIPS) validated key storage. For the financial services industry, it is not just a matter of protecting privacy but also complying with regulations such as the Payment Card Industry Data Security Standard (PCI DSS). This creates policies that not only define what data needs to be encrypted and how, but also places some strong requirements on keys and key management. In fact, Requirement 3 of PCI version 1.2 (that is, to protect stored cardholder data) seems to be one of the more difficult aspects with which to comply... It is important that database encryption is accompanied by key management; however, this is also the main barrier to database encryption. It is well-recognized that key use should be restricted and that key backup is extremely important. However, with many silos of encryption and clusters of database application servers, security officers and administrators require a centralized method to define key policy and enforce key management. Yet, just a relatively small number of Hardware Security Modules (HSMs) in the same security world can manage keys across a large spectrum of application servers, physical servers and clusters. Such a centralized strategy reduces total operational costs due to the simplification of key management. With data retention policies in some industries requiring storage for seven years or more, retaining encrypted data means that organizations need to be certain that they are also managing the storage of the key that encrypted that data..."
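
    To make the application-level side of database encryption concrete, here is a minimal sketch of encrypting a single column value before it is written to the database, using AES-GCM from the pyca/cryptography package. In practice the data-encryption key would be obtained from a central key manager or HSM, as the article recommends; it is generated locally here only to keep the example self-contained.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# In a real deployment this key would come from a central key manager or HSM.
column_key = AESGCM.generate_key(bit_length=256)

def encrypt_column_value(plaintext):
    """Encrypt one column value with AES-GCM before writing it to the database."""
    nonce = os.urandom(12)
    return nonce + AESGCM(column_key).encrypt(nonce, plaintext.encode("utf-8"), None)

def decrypt_column_value(stored):
    """Decrypt a value read back from the database."""
    return AESGCM(column_key).decrypt(stored[:12], stored[12:], None).decode("utf-8")

token = encrypt_column_value("4111-1111-1111-1111")
assert decrypt_column_value(token) == "4111-1111-1111-1111"
```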

  • [April 13, 2009] NIST Key Management Workshop. Workshop date: June 8-9, 2009. Venue: Workshop to be held at the U.S. National Institute of Standards and Technology, Gaithersburg, Maryland, USA. Contact: Elaine Barker (technical and program questions) or Sara Caswell (administrative questions). Registration is required by May 18, 2009. For registration, send email to keymanagementworkshop@nist.gov. Overview: "Key management is a fundamental part of cryptographic technology and is considered the most difficult aspect associated with its use. Of particular concern are the scalability of the methods used to distribute keys and the usability of these methods. NIST is undertaking an effort to improve the overall key management strategies used by the public and private sectors in order to enhance usability of cryptographic technology, provide scalability across all cryptographic technologies, and support a global cryptographic key management infrastructure. The first step in achieving this goal is to conduct a workshop to identify: (1) the various obstacles in using the key management methodologies currently in use; (2) the alternative technologies that need to be accommodated; (3) alternative strategies useful in achieving the stated goal; and, (4) approaches for transitioning from the current methodologies to the most desirable method... There will be no registration fee for this workshop. Participation includes: [a] physically attending the workshop at NIST; [b] viewing the workshop presentations via WebCast at remote locations; [c] presentations; [d] discussion; [e] providing written comments and recommended relevant topics of interest..."

  • [April 08, 2009] "HTML5 WG Key Management Standardization Effort." By Anders Rundgren (PrimeKey Solutions AB). Posting to the IETF KEYPROV Working Group List archive (i.e., KEYPROV: Provisioning of Symmetric Keys). "[With respect to the keygen element (per section 4.10.11 of the HTML 5 Draft Recommendation):] Since the PKI community at large seems to ignore the client-side of PKI in browsers, the HTML 5 designers apparently didn't find any other solution but adopting the 15 year old Netscape hack known as keygen..." Anders notes in a separate communication that KeyGen2: Key Provisioning/Management Standards Proposal "essentially started where keygen finished, which means that there is a 10 to 1 difference in functionality and complexity. One of the more fundamental enhancements is the ability to tell the issuer in a secure manner (before it has issued a key NB), if the key will be stored in secure container like a smart card or is residing on the hard-disk. This is done by adopting TrustedComputingGroup's 'device attestations' where an embedded device key vouches for various data it has control of..." Excerpt from Section 4.10.11: "The keygen element represents a key pair generator control. When the control's form is submitted, the private key is stored in the local keystore, and the public key is packaged and sent to the server. The challenge attribute may be specified. Its value will be packaged with the submitted key. The keytype attribute is an enumerated attribute. The following table lists the keywords and states for the attribute — the keywords in the left column map to the states listed in the cell in the second column on the same row as the keyword... The invalid value default state is the unknown state. The missing value default state is the RSA state. The user agent may expose a user interface for each keygen element to allow the user to configure settings of the element's key pair generator, e.g. the key length... The reset algorithm for keygen elements is to set these various configuration settings back to their defaults... This specification does not specify how the private key generated is to be used. It is expected that after receiving the SignedPublicKeyAndChallenge (SPKAC) structure, the server will generate a client certificate and offer it back to the user for download; this certificate, once downloaded and stored in the key store along with the private key, can then be used to authenticate to services that use SSL and certificate authentication..."

  • [March 12, 2009] "Self-Encrypting Hard Drives." By Andrew (Yehuda) Lindell (Aladdin Knowledge Systems). Blog. "Self-encrypting hard drives are becoming a reality. One standard, led by the Trusted Computing Group, has been adopted by a number of vendors; see the press release here. I like this initiative a lot and really consider it a win-win situation. The cost of encryption is virtually nil because the encryption itself takes place in hardware on the drive. This means that everything is encrypted by default, without compromising performance. Note that this is a huge advantage. We may remember to encrypt our most sensitive files, but at the same time forget to encrypt our email archive, previous versions of the sensitive file, and of course the swap and hibernate files which can contain everything. Encrypting everything by default protects us from these omissions... How secure are these drives? Well, the encryption keys are generated and stored internally on the drive. Thus, the security of the system depends on the security of the key inside the drive... It is worth noting that highly sensitive files should probably still be encrypted on a higher level (using an encryption key that is stored in a separate smartcard that you take with you). Keeping the encryption key in a completely separate place is always the best practice and prevents even the most concerted efforts to decrypt. On a usability note, since the encryption keys are internal to the drive there is no key management issue. This is good because key management is often the biggest hurdle to adoption..."

  • [March 06, 2009] "TCG Sets the Drive Encryption Standard." By Roger A. Grimes. From InfoWorld. "Trusted Computing Group's Opal spec for interoperable, self-encrypting drives isn't complete, but it's a great start. If you're not already a huge fan of the Trusted Computing Group, you should be. It is one of the few groups coming up with open-standard, long-term, real security solutions. Anything it does has my full support. If you've heard of the Trusted Platform Module (TPM), the cryptographic chip that is built into many motherboards and used by vendors all over the world, you've seen what the group can do. The latest handiwork of the TCG, by way of the Storage Device Working Group, is a full drive encryption standard called Opal [TCG Storage Security Subsystem Class: Opal]. The 81-page specification — full of new storage device logical specs, protocols, and data structures — might even put a system engineer to sleep, but I will try to distill some of the essential points. Critical reviews? (1) Most of the important details of implementing Opal are left up to each vendor and software manager. (2) A second important point is that key management is not defined, but left to the implementer to determine. It's good that the standard doesn't lock a manufacturer or software management vendor into a particular key management scheme. But it's bad that key management isn't required or even discussed. Where are the keys stored? How can they be extracted (either legitimately or not so legitimately)? These questions need answers before someone should rely on a vendor's crypto product. Low-cost vendors might skimp on good key management, and without good key management the user is sure to suffer. On a good note, some of the software vendors ... are pretty open about their key management policies... I encourage users to thoroughly understand the key management aspects before implementing any crypto product..."

  • [February 27, 2009] "A Secure Cryptographic Token Interface." By Christian Cachin (IBM Zurich Research Laboratory) and Nishanth Chandran (University of California, Los Angeles, Department of Computer Science — work done at IBM Zurich Research Laboratory). IBM Zurich Research Labs Research Report. 24 pages. "Cryptographic keys must be protected from exposure. In real-world applications, they are often guarded by cryptographic tokens that employ sophisticated hardware-security measures. Several logical attacks on the key management operations of cryptographic tokens have been reported in the past, which allowed to expose keys merely by exploiting the token API in unexpected ways. This paper proposes a novel, provably secure, cryptographic token interface that supports multiple users, implements symmetric cryptosystems and public-key schemes, and provides operations for key generation, encryption, authentication, and key wrapping. The token interface allows only the most important operations found in real-world token APIs; while flexible to be of practical use, it is restricted enough so that it does not expose any key to a user without sufficient privileges. An emulation of the security policy in the industry-standard PKCS #11 interface is demonstrated... We introduce a cryptographic token interface that supports multiple users and provides operations for key generation, encryption, authentication, and key wrapping. Our token model defines an access control list (ACL) for every key. We show how to implement our token securely from a set of cryptographic primitives and prove that it respects the security policy expressed by the ACL. Our proposal constitutes the first formal model of a universal token interface with an explicit security policy, and it defines security in a strong and cryptographically sound way. It supports common key management operations and cryptographic functions, including encryption and key wrapping, which may interact in unexpected ways and were a common source of problems in earlier token interfaces. It models multiple users and goes beyond most previous token interfaces, which distinguish only between two user roles (application and administrator). We intend this model to give a basis for the key management interfaces of future hardware tokens and beyond that, for future networked key management servers. Key management in enterprises is moving towards key servers accessible online, and several emerging standardization efforts address key management interfaces. In particular, the recently published OASIS Key Management Interoperability Protocol contains all relevant key management operations of a cryptographic token interface. Available to multiple users, such key managers must implement sophisticated access-control policies and provide all relevant key management functions addressed here, including key generation, wrapping, and key derivation. At the same time, our model also isolates a subset of the features in existing cryptographic token interfaces, such that they have a clearly defined security policy and offer provable cryptographic security... In practice, a security infrastructure usually contains many distributed cryptographic tokens, and a central administrator synchronizes all relevant data among them. Future work should therefore address multiple tokens that share a common set of keys. In our model with only one token, attribute changes are instantaneous and the abstraction of the log can be implemented efficiently. Distributed tokens are not automatically synchronized; the way to a secure distributed token infrastructure lies in considering communication between tokens and refining our one-token security policy... We hope that our model lays the foundation for the design of future cryptographic interfaces in the industry, which will not allow interface attacks by design..."
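
    The following toy sketch conveys the flavor of a token interface with a per-key access-control list, in which every operation (including key wrapping) is checked against the caller's privileges. It is not the interface defined in the paper; all class, method, and permission names are hypothetical, and wrapping is done with AES-GCM purely for illustration.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class TinyToken:
    """Toy token: every key carries an ACL mapping user -> permitted operations."""

    def __init__(self):
        self._keys = {}   # handle -> (key bytes, acl)

    def create_key(self, handle, acl):
        self._keys[handle] = (AESGCM.generate_key(bit_length=128), acl)

    def _authorize(self, user, handle, op):
        key, acl = self._keys[handle]
        if op not in acl.get(user, set()):
            raise PermissionError(f"{user} may not {op} key {handle}")
        return key

    def encrypt(self, user, handle, plaintext):
        key = self._authorize(user, handle, "encrypt")
        nonce = os.urandom(12)
        return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

    def wrap_key(self, user, wrapping_handle, target_handle):
        wrapping_key = self._authorize(user, wrapping_handle, "wrap")
        target_key, _ = self._keys[target_handle]
        nonce = os.urandom(12)
        return nonce + AESGCM(wrapping_key).encrypt(nonce, target_key, None)

token = TinyToken()
token.create_key("k1", {"alice": {"encrypt", "wrap"}})
token.create_key("k2", {"alice": set()})
token.encrypt("alice", "k1", b"secret")
token.wrap_key("alice", "k1", "k2")        # allowed by k1's ACL
# token.encrypt("bob", "k1", b"x")         # would raise PermissionError
```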

  • [February 25, 2009] IEEE P1619.3 Weekly Minutes 2009-02-25. By Matt Ball. New Business: Work on P1619.3 plan/roadmap, with regard to KMIP... P1619.3 RoadMap Discussion (Ball). The group generally agreed to the following list of items concerning the P1619.3 roadmap and integration with OASIS KMIP. (1) We should '#include' the mandatory KMIP binary encoding in its entirety for our binary protocol. As needed, providing a mapping description [Plenary Item]. (2) Next, create an XML WSDL that is a mechanical translation of a subset of the KMIP binary protocol, and add on P1619.3-specific extensions. I don't think this is too hard, but will take a little time. The KMIP binary primitives have a clean mapping into standard XML-Schema objects. (3) Add in any P1619.3-specific extensions that were left out of KMIP — Need to research this and identify features in P1619.3 that are not in KMIP [Part of mapping effort]. (4) Add in the P1619.3 Namespace work. KMIP does not appear to define any namespaces, but relies on the users to hopefully create identifiers that are unique — actually, the only requirement is that they are unique in the local server context. (5) Define concrete default port bindings for the P1619.3/KMIP services. I didn't see this in the KMIP proposal. We could register ports with IANA. Maybe allow changing the ports, if needed. (6) Define an enrollment protocol. KMIP doesn't do this, but assumes that you've already whitelisted the certificates used for the SSL/TLS channel. I'm hoping to propose the Sun KMS Agent Toolkit enrollment as one option, and could include others as needed. (7) Define a discovery protocol. I didn't see this in KMIP, and again, I'd like to propose the Sun KMA discovery protocol as a starting point. (8) Deferred until next version: define a Server-to-Server communication. This may creep the scope too much, but this is another part that KMIP punted on, and would make sense to do in XML only instead of binary; KM servers are generally beefy. Ask OASIS KMIP about this, whether they want to add. (9) Review our architecture and make sure it fits well on top of the KMIP effort; KMIP doesn't really talk about architecture much, but instead jumps into the guts of it... The group reviewed these actions, and approved of the general approach. Matt Ball and Walt Hubis will work together to provide this as a roadmap of the group. The KMIP Consortium is currently looking into a mapping between P1619.3 and KMIP, and will be done within the next 2-3 weeks..."
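
    Item (1) of the roadmap refers to the KMIP binary (TTLV) encoding. As a rough illustration of why its primitives map cleanly onto other representations, the sketch below lays out a single Integer field in the published tag/type/length/value style: a 3-byte tag, a 1-byte type, a 4-byte big-endian length, and a value padded to an 8-byte boundary. The tag number shown is hypothetical.

```python
import struct

KMIP_TYPE_INTEGER = 0x02   # KMIP item-type code for Integer

def ttlv_integer(tag, value):
    """Encode one TTLV Integer: 3-byte tag, 1-byte type, 4-byte length,
    4-byte big-endian value, padded with zeros to an 8-byte boundary."""
    encoded = tag.to_bytes(3, "big") + bytes([KMIP_TYPE_INTEGER])
    encoded += struct.pack(">I", 4)                      # value length
    encoded += struct.pack(">i", value) + b"\x00" * 4    # value + padding
    return encoded

# Hypothetical tag number, purely for illustration:
print(ttlv_integer(0x420001, 8).hex())
```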

  • [February 18, 2009] IEEE P1619.3 Weekly Minutes 2009-02-18: Discuss group reaction/response to OASIS KMIP announcement. From Matt Ball. "Questions: How does the KMIP announcement impact the work within IEEE 1619.3? What is IEEE 1619.3's stance? Matt (Sun): I think that the KMIP effort can work well with IEEE 1619.3 and that we can find ways to map the two standards onto each other. Landon (Cisco): KMIP is a complementary effort. It would be a mistake to think of them as competing standards. Bob (Thales): Thales also sees the efforts as complementary. There was some discussion about the Intellectual Property rules of IEEE. The KMIP proposal is under the 'Royalty-Free under RAND' mode of OASIS. IEEE just offers RAND (Reasonable and Non-Discriminatory) terms. What about establishing a Liaison? Matt will check into this. The general consensus was that we should maintain a liaison between IEEE 1619 and OASIS KMIP. Bob Lockhart mentioned that there is an ongoing effort within the KMIP consortium to do a comparison between IEEE P1619.3 and KMIP, and show the differences. This work should be completed within a month or so. New Action Items: Matt Ball to check into establishing a Liaison between IEEE 1619 and OASIS KMIP..." See Matt Ball's comment: "Overall, I think we had a productive conversation, and found good ways for IEEE P1619.3 to collaborate with the new KMIP effort..."

  • [February 13, 2009] "OASIS KMIP for Mobile Phones." Posting by Anders Rundgren to the ICF list. "May I make you aware of another 'KM' effort not listed in the current KMIP charter? KeyGen2. Mobile Phones - A Difficult Target. KeyGen2 is a complete user-credential provisioning and management system, with mobile phones as the primary (but not only) target. You are writing that the scope of KMIP is enterprise-wide. KeyGen2 is not limited to enterprises but is intended to work over the entire spectrum of users which requires slightly 'unusual' security solutions since there is no single trusted entity, but rather a range of more or less trustworthy parties. Mobile phone-based credentials will IMO be more important in the consumer-space since we are long-term talking about virtual credit-cards as well. KeyGen2 was thus designed from the ground up to support end-users having multiple credentials from different issuers. Due to the fact that mobile phones typically have a common keystore, new approaches to key-management were needed. KeyGen2 also exploits TPM-like concepts including key-attestations and enhanced machine- and human-oriented enrollment methods, taking advantage of device certificates. KMIP, based on the recently published documents at coverpages, does not seem to address these features, which in my opinion comes as no surprise since enterprise storage solutions and consumers with mobile phones have fairly little in common. KeyGen2 would OTOH probably be a totally useless storage KM-solution :-)..."

  • [February 11, 2009] "Security Means Something Different to a Targeted Retailer." By David Taylor. "Enterprise Key Management: When Heartland's management started talking (post breach) about the need to go 'beyond PCI', they spoke specifically about end-to-end encryption. In such a scenario, data is encrypted at the initial point of capture (card swipe, Web site form or mobile payment) and remains encrypted as it travels via internal network and is stored in temp files. The data is only decrypted under very specific, controllable circumstances and by an extremely narrow set of persons and systems. We have talked with organizations that have implemented end-to-end encryption, and the real problem is enterprise key management. For companies that "grow into" end-to-end encryption, key management can rapidly become a nightmare. Based on our research with these leading companies, I'd argue that a tactical approach to key management is very counter-productive. On the other hand, enterprise key management packages tend to be expensive, often north of $500,000 for those firms with enough confidential data to be considered a target. Recommendation: Although it's possible to satisfy PCI requirements 3.4 and 3.6 without enterprise key management, I'd strongly recommend it for targeted businesses..."

  • [February 9, 2009] "Keeping Stored Data Safe Within Company Walls. Storage Professionals Protect Data With Encryption and Key Management." By Stacy Collett. From ComputerWorld. "Protecting stored information is the next wave in data security. 'We're starting to see more emphasis on data at rest,' says Robert Rosen, former president of IBM user group Share and CIO at the National Institute of Arthritis and Musculoskeletal and Skin Diseases in Bethesda, MD... As companies upgrade their storage equipment, many are taking advantage of technological advances such as tape drive encryption, tape library encryption and enhancements in the way encryption keys are managed. There has also been progress in adopting the disk and tape encryption specifications of the IEEE P1619 standard, says James Damoulakis, chief technology officer at storage services provider GlassHouse Technologies Inc. Gartner Inc. has found that companies that encrypt stored data do so because they have to, not because they want to. 'There are regulatory compliance pressures — PCI or HIPAA,' says Gartner analyst Eric Ouellet, referring to the Payment Card Industry Data Security Standard and the Health Insurance Portability and Accountability Act. 'Or it's the fear that the tape will fall off the back of the truck and you'll have a disclosure issue.' Looking for an ultracheap approach? Ouellet suggests buying a hard drive with built-in encryption. Seagate, Toshiba and Hitachi are among the vendors introducing self-encrypting drives. 'It costs only a few bucks more to buy a drive with encryption,' Ouellet says. 'The applications aren't even aware there's any encryption. It's all in the background at the low-level driver level.' But keep in mind that self-encrypting drives address only storage issues, Ouellet warns... For years, encryption users have been calling on security and storage vendors to offer better interoperability when it comes to managing the keys that actually control the encryption. In response, companies such as Microsoft Corp. now allow users to store the encryption keys for data held on other vendors' key management systems. But key management will become more complex, experts say, as encryption finds its way into more and more storage devices, creating an avalanche of keys to manage. Some industry standards are being developed, such as IEEE P1619, but they address tape encryption and not the storage environment. 'We're seeing that move over to the self-encrypting drive [systems], but as far as the databases are concerned, they don't quite have a standard,' says Ouellet. For now, companies such as IBM and RSA Security Inc. provide some form of key management for external services, Ouellet says. Industry watchers say that although companies aren't clamoring for encryption and storage security, adoption will remain steady. 'There's a finite amount of resources available,' Robert Rosen says. 'There won't be a huge rush to it — but with new hardware, everything is going to be encrypted'..."

  • [January 29, 2009] "New Storage Security Specs Promote Hardware-Based Encryption." By George Hulme. From Byte and Switch. "The Trusted Computing Group (TCG) unveiled three specifications for full-disk encryption for use in all types of storage devices and encryption key management schemes... The three specifications include: (1) Storage Interface Interactions: This specification details how all of the TCG's specifications interact with storage connections and interface specifications, including ATA, ATAPI, SCSI, Fibre Channel, and others. (2) Opal: This specification details requirements for fixed storage media PCs and notebooks. (3) The Enterprise Security Subsystem Class: This specification is aimed at drives in data centers and high-volume applications, where typically there is a minimum security configuration at installation. Backers of the TCG and the new specifications include Fujitsu, Hitachi GST, IBM, LSI Corp., Seagate Technology, Samsung, Toshiba, Wave Systems, and Western Digital... Many of the hard drive manufacturers such as Fujitsu, Hitachi, and Seagate have incorporated parts of the standard in certain versions of their drives, and some businesses have adopted encryption to protect their data. Some security vendors that make encryption management software, such as Wave Systems and WinMagic, already have announced their applications are certified to the standard..."

  • [January 12, 2009] "Building Trust into Demanding Data Center Environments." By Thomas Coughlin (for the Trusted Computing Group). From Computer Technology Review. "[Other] articles have exposed the vulnerability of software-based encryption to recovery of the encryption keys from host DRAM. These vulnerabilities point out the value of hardware-based encryption to computer users and managers of data centers. The Trusted Computing Group (TCG) has created standards for hardware-based encryption on individual storage devices used in data centers, such as hard disk drives, tape and even optical disks. The group has also created key management technologies that can be used to manage this protection in the data center and throughout an enterprise. Disk drives with built-in encryption provide data security with no use of host system resources and independent of current or future applications. Since the encryption key never leaves the disk drive, the drive provides a Trusted Platform (TP). With TCG-based drives, encryption security is based upon authentication of the user. To provide this authentication, after each power-up, but during pre-boot, the user must enter a password in order to gain access to the disk drive... The Trusted Computing Group has a committee working on key management standards called the Key Management Services Subgroup (KMSS). This group is defining best practices for key and/or access control management, providing a uniform way to manage keys for a variety of storage devices and easing the development of products by producing Key Management Application Notes... Besides the TCG standards, there are other standards that deal with encryption on computers and enterprise environments. The IEEE P1619 specifications deal with encryption modes for storage devices as well as key management. There is commonality between the engineering experts leading the TCG key management standards and the IEEE P1619.3 key management standards, which will do much to make sure that these standards are consistent and support each other. Note that NIST and OASIS have also created recommendations for key management, which have been studied by both the IEEE and TCG groups... Channel encryption is handled by the various interface standards groups such as ANSI T10 (SCSI), T11 (Fibre Channel), T13 (ATA) and IETF (Internet). SNIA and DMTF provide management services that are used by key management standards. Besides the TCG KMSS and IEEE P1619.3 key management, there are also key management elements in the OASIS specifications that flow into applications..."

  • [December 09, 2008] "Trusted Storage for Enterprise-class Hard Disk Drives." Part 2 of 2. By Michael Willett (Seagate Research; Co-chair of the Trusted Computing Group Storage Work Group). "To address keys in an enterprise environment, the Key Management Services Subgroup (KMSS) of TCG released TCG Storage Work Group Application Note 1: Encrypting Drives in an Array Controller. This document covers the historical issues and challenges in life-cycle key usage and management. For developers, the Application Note provides a detailed method and uniform approach for managing the locking and encryption of a variety of storage devices, including hard drives. The Application Note describes secure communication and authentication between the storage device and the host by addressing the operations between the host platform, an application, and trusted storage devices. In addition to establishing compliance with existing security regulations, the specification has the flexibility to meet future state and federal legislation. KMSS used a bottom-up approach to the secure storage hierarchy. For the data center, the bottom-up approach avoids proprietary solutions across the whole hierarchy. Both the storage devices and the key management of storage directly on the drive (the bottom two layers) are therefore standardized, which is a benefit for end users because it simplifies the requirements and the cost for these layers by opening the sourcing possibilities to many suppliers. Storage devices with the same interface from a variety of suppliers drive competition to improve products for simplified implementation, lower cost, and easier migration... One of the more important aspects of encryption includes managing the encryption and authentication keys (or passwords). The authentication key unlocks the FDE drive. The encryption keys for the FDE drives are established in the factory by on-board random number generators. For security, the drive only stores the hash value of the authentication key for comparison during authentication... Adding drives is quite simple, since each drive comes with its own encryption key and encryption engine. To add a drive, the same authentication key can be used and an automated process can handle the key assignment... [Using] separate authentication and encryption keys solves the problem of losing keys and consequently losing access to encrypted data and certainly avoids the need for administrators to periodically change encryption keys... Figure 2 shows the many locations and the encryption keys as well as the central, single location for authorization key management... [To support] interoperability, TCG's Storage Work Group works in cooperation with the IEEE P1619 'Security in Storage' Working Group to develop its specifications. The activities of these two organizations have minimal overlap, if any. In contrast to IEEE P1619.3, which deals with the higher-level key management protocols, TCG addresses the controller to the drive interaction. The two are compatible. The encryption algorithms are the same in either case. TCG's standard establishes how the encryption and authentication keys are managed..." See also Part 1.
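
    The point that "the drive only stores the hash value of the authentication key for comparison during authentication" can be shown in a few lines. The sketch below uses a plain SHA-256 comparison purely for illustration; actual drives follow the TCG storage specifications rather than this simplification.

```python
import hashlib
import hmac

def enroll(authentication_key):
    """At provisioning time, keep only a hash of the authentication key."""
    return hashlib.sha256(authentication_key).digest()

def unlock(stored_hash, presented_key):
    """Unlock succeeds only if the presented key hashes to the stored value."""
    return hmac.compare_digest(stored_hash, hashlib.sha256(presented_key).digest())

stored = enroll(b"correct horse battery staple")
assert unlock(stored, b"correct horse battery staple")
assert not unlock(stored, b"wrong passphrase")
```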

  • [December 02, 2008] "Trusted Storage for Enterprise-class Hard Disk Drives." Part 1 of 2. By Michael Willett (Seagate Research; Co-chair of the Trusted Computing Group Storage Work Group). "Large corporations and government organizations, as well as medium and even small companies, have increasingly recognized the need to protect sensitive data on storage devices. To achieve trusted storage, data encryption, including full-disk encryption (FDE), requires implementation decisions and a strategy to avoid system problems, including performance degradation. Working in an industry standards group, storage manufacturers have developed a standards-based approach to prevent data theft in the enterprise... The advantages of encrypting directly on the drive are easily demonstrated with de-duplication and decompression, but there are several other system advantages; perhaps an even greater system issue is complexity. Figure 1 shows a comparison of key management with encryption on the drive versus outside the drive [Performing encryption outside of the storage system increases complexity, whereas automatic encryption in the drive simplifies key management for authorization (A keys) and encryption]. Four encryption keys are eliminated from the data center when the encryption is performed at the storage system with self-encrypting drives... Encryption directly on the storage device provides the simplest and most effective means to obtain a trusted storage system. An essential component of the data protection process involves key management and self-encrypting drives. Self-encrypting drives simplify the key management process..." See also Part 2.

  • [November 27, 2008] "Twenty Rules for Amazon Cloud Security." By George Reese. From O'Reilly Community. "Is the Amazon Cloud secure? Anyone not asking that question is not doing their due diligence... The short answer is: Yes! The Amazon Cloud is secure and you can securely deploy web applications into the cloud. There are definitely concerns unique to the cloud when you examine an EC2 deployment against other options. By following these twenty rules, however, you should find yourself securely deploying web applications into EC2... (1) Encrypt all network traffic; (2) Use only encrypted file systems for block devices and non-root local devices; (3) Encrypt everything you put in S3 using strong encryption; (4) Never allow decryption keys to enter the cloud — unless and only for the duration of an actual decryption activity; (5) Include NO authentication credentials in your AMIs except a key for decrypting the file system key; (6) Pass in your file system key encrypted at instance start-up..."
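    Rules (3) and (4) above amount to client-side encryption: only ciphertext reaches S3 and the decryption key stays on premises. A minimal sketch, assuming the Python "cryptography" package; the upload helper is hypothetical and shown only as a placeholder.

```python
# Minimal sketch of rules (3)/(4): encrypt before upload so only ciphertext ever
# enters the cloud and the key never does. Illustrative only.
from cryptography.fernet import Fernet

key = Fernet.generate_key()                       # generated and kept outside the cloud
token = Fernet(key).encrypt(b"customer record 42")
# upload_to_s3("my-bucket", "record-42.enc", token)   # hypothetical upload helper
plaintext = Fernet(key).decrypt(token)            # decryption happens on premises only
```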

  • [November 2008] Alliance AES Key Management. Prepared by Patrick Townsend Security Solutions. "Key Management is as important to your security strategy as the encryption software you use. The loss of an encryption key can compromise your security and lead to expensive privacy notification procedures and the need to re-encrypt historical data. This paper discusses what you should know about Key Management... Encryption keys should be protected with special applications that are designed to prevent unauthorized access to the keys, and to allow the use of the encryption key by authorized users and applications. These special applications are called Key Management systems. Key management systems provide a number of functions including: Creation of new encryption keys; Secure storage of encryption keys; Definition of authorized users; Definition of key expiration dates; The changing of keys (key roll-over); Definition of encryption key policies; Distribution of keys to end points where needed; Periodic save of keys to secure backup... There are six basic levels of key management ranging from no key management (a really bad idea) to the use of high-end key management infrastructure systems. A key management strategy always involves a balance between security and system reliability and usability. Very secure key management systems such as those used in military applications may impose unreasonable constraints that would be unacceptable or unnecessary in commercial applications. Finding the right solution will mean balancing your security needs with your application needs. The following table shows the general relationship between key management approaches, security, and application reliability: [details]... The highest level of secure key management is to use a FIPS-140 certified key management solution that resides on a completely separate platform from the one where you encrypt sensitive data. FIPS-140 certification is a testing process implemented by the National Institute of Standards and Technology (NIST) and provides more assurance of the security of the key management solution. In the event a file containing encrypted information is lost, the encryption keys remain protected even if the entire system is lost. This separation of the keys onto a physically separate system provides a very high level of security. It should be noted, however, that application reliability is reduced by this approach. A failure in the network or a failure in the key management server hardware can make the encryption keys inaccessible for a period of time. This can have a major impact on certain business applications..."
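    The functions listed above (key creation, expiration dates, key roll-over) can be modeled in a few lines; the field and function names below are assumptions for illustration, not taken from the Alliance product.

```python
# A minimal, illustrative model of key creation, expiration, and roll-over.
from dataclasses import dataclass
from datetime import datetime, timedelta
import os, uuid

@dataclass
class ManagedKey:
    key_id: str
    material: bytes
    created: datetime
    expires: datetime
    state: str = "active"          # becomes "retired" after roll-over

def create_key(lifetime_days: int = 365) -> ManagedKey:
    now = datetime.utcnow()
    return ManagedKey(str(uuid.uuid4()), os.urandom(32), now,
                      now + timedelta(days=lifetime_days))

def roll_over(old: ManagedKey) -> ManagedKey:
    """Key roll-over: retire the old key (still needed to decrypt existing data)
    and issue a fresh key for new encryption."""
    old.state = "retired"
    return create_key()
```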

  • [October 22, 2008] "2008 Encryption and Key Management Benchmark Report." By Kimberly Getgen (Principal, Trust Catalyst). 28 pages. [PDF] "According to the Privacy Rights Clearinghouse, more than 234 million credit card accounts have been compromised since 2005, and the reality is that hundreds of millions more are at risk... Thales and Trust Catalyst conducted an in-depth survey on encryption and key management trends to better understand how global businesses and government organizations are securing data using encryption, and more importantly, how these organizations are protecting their encryption keys. In particular, we were interested in four main issues: (1) What's being encrypted? (2) How are encryption keys being managed, stored, and protected? (3) Is there a concern about the cost to business when encryption keys are mismanaged? (4) How are key management challenges impacting the adoption of encryption? We received 330 responses from individual respondents... Despite all the encryption in use, key management issues continue to plague organizations. When it comes to what's keeping organizations from encrypting databases, respondents answered they were most concerned about key management issues... Section Three: Key Management Challenges. Encrypting data and storing encryption keys securely are only parts of a thorough data protection strategy. Encryption key management is also crucial. The ways encryption keys are managed can be the difference between recovering data in an acceptable timeframe and business disruption caused by lost, compromised, or hard-to-access keys. We asked participants what would concern them the most if encryption keys were compromised versus if encryption keys were lost; survey respondents' highest concerns are lost business (33.8%) and the cost of data recovery (46.5%)... Encrypting data turns out to be a small part of an enterprise's encryption strategy. Just as much, if not more, planning goes into how to make encrypted data accessible in an acceptable amount of time to avoid disruption and business costs. This requires good key management. We asked respondents how much time their organization spent preparing for key management issues [...] they ranked the following nine aspects of key management on a scale of 1 to 5, from least to most challenging: (a) Rotating keys, decrypting and re-encrypting data; (b) Revoking/Terminating keys — so data can't be accessed; (c) Meeting compliance requirements; (d) Keeping track of keys — having the right key at the right time; (e) Backing up and recovering keys; (f) Proving compliance requirements have been met; (g) Making keys accessible to disaster recovery site; (h) Long term key archival; (i) Preparing for publicity and impact of data breach... The aspect rated least challenging was 'Preparing for publicity and impact of data breach' compared with the more difficult issues of revoking and terminating keys, backing up and recovering keys, and making keys accessible to the disaster recovery site. Without getting these key management challenges right, organizations could lose valuable data and cause business disruptions that negatively impact the bottom line... We predict key management will be a growing concern for organizations because inaccessible data results in lost business and data recovery costs. Here we can see that encryption, while a good data protection strategy, comes at a heavy price if keys are not managed properly. We believe that many organizations are at risk of having a significant incident that could cause them to lose data because of improper key management. Consequently, we expect to see centralized key management solutions become more widely adopted. We also expect a move away from manual key management processes based around spreadsheets, software, and disks..."

  • [October 4, 2008] "Data Center Encryption Is Key To Security." By Avi Baumstein. From InformationWeek. "Why bring encryption into the glass house? To paraphrase bank robber Willie Sutton, because that's where the data is. To date, most data center security efforts have been focused on protecting against Internet threats. However, IT can no longer ignore physical security: Thieves recently broke into the Chicago data center of managed Web hosting provider C I Host and stole server hardware — for the fourth time. Meanwhile, backup tapes are frequent targets for theft because they're often out of IT's direct possession. The Privacy Rights Clearinghouse Web site documents more than 40 cases of tape theft since 2005, and it's likely that far more were never reported. In our 2008 Strategic Security Survey, the theft of computers or storage systems was among the top five breaches seen as most likely to occur in the coming year. Clearly, encrypting hard drives and tapes is vital to protect data. So why aren't organizations rushing to sign on? The complexity of managing keys is a top deterrent to ubiquitous encryption. After all, there are many ways to encrypt, but key management is where all these projects succeed or fail. And failure is most likely to occur several years out, after the hole has been dug quite deep. Some information must be kept for decades, after all, and storing the keys needed to access that data securely for 10 or 20 years is a challenge. Fortunately, advances in managing keys as well as new options for encrypting data at each step within the backup process make it much less likely lost keys will come back to haunt you. Most of the vendors we spoke with understand the problem and are working to solve it. RSA's Key Management Suite, for example, works with encryption products from RSA partners to give IT a single management point for all encryption keys. Encryption vendors also have started to build key management into their products or offer these capabilities as options for companies with modest requirements... As with tapes, there are choices and trade-offs in disk storage encryption. While not strictly limited to the data center, PGP's NetShare is an elegant option for companies that can easily wrap their arms around users with sensitive data — for instance, a research group or credit department. These users' computers can be equipped with NetShare, and any time content is written to an encrypted folder or by a specified application, the files are encrypted with the public keys of the authorized users. This sounds similar to Microsoft's Encrypting File System, but it takes the concept further. Rather than only remaining encrypted while on the intended file system, NetShare-encrypted files can be copied to other folders, servers, or even portable media, and still retain their encryption. This is especially helpful for companies with a diverse server environment or where files are frequently transferred... Seagate recently introduced enterprise-grade disk drives with hardware encryption. By populating an array with these drives, a storage vendor can offer media encryption with no additional overhead. Key management is still an issue, but vendors such as IBM are integrating these devices into their key management software. This approach requires the least changes to a company's server or storage architecture, because it occurs after all other storage optimization, such as RAID, virtualization, compression, and deduplication. 
Finally, encrypting Ethernet link-layer traffic may seem like overkill, but that's exactly what the IEEE 802.1AE specification does. Cisco's TrustSec initiative uses 802.1AE as the basis for a sophisticated role-based access control system in which the network can tag data packets with user identity information that it can use to make access control decisions..."

  • [October 01, 2008] "PCI Security Standards Council Releases Version 1.2 Of PCI Data Security Standard." — The PCI Security Standards Council (PCI SSC), a global, open industry standards body providing management of the Payment Card Industry Data Security Standard (PCI DSS), PIN Entry Device (PED) Security Requirements and the Payment Application Data Security Standard (PA-DSS), today announces general availability of version 1.2 of the PCI DSS. This latest version is the culmination of two years of feedback and suggestions from its industry stakeholders and is designed to clarify and ease implementation of the foremost standard for cardholder account security. Version 1.2 is effective immediately and version 1.1 of the standard will sunset on Dec. 31, 2008. The updated standard and supporting documentation is available on the Council's Web site... The PCI Security Standards Council was formed by the major payment card brands American Express, Discover Financial Services, JCB International, MasterCard Worldwide and Visa Inc. to provide a transparent forum in which all stakeholders can provide input into the ongoing development, enhancement and dissemination of the PCI Data Security Standard (DSS), PIN Entry Device (PED) Security Requirements and the Payment Application Data Security Standard (PA-DSS). Merchants, banks, processors and other vendors are encouraged to join as Participating Organizations..."

  • [September 26, 2008] IEEE Key Management Summit 2008. By Scott Guthery (HID Global). From Keystrokes (Blog). "[At KMS 2008] Matt Ball did a great job of attracting speakers from across the entire key management spectrum... The major take-away was nobody has more than about a tenth of a clue. Some of the gathered gurus said to roll keys often. Others said roll keys never. Some of the gathered gurus said to encrypt high in the application. Others said to encrypt low in the fabric. Almost all of these opinions flowed from whatever the speaker's hammer thought was a nail. On the standards side the hoary saw about standards being nice because there are so many to choose from had lots of wood to cut. IMHO OASIS EKMI and IEEE P1619.3 are simply kiddies having fun with XML. IETF KEYPROV and the key management parts of DMTF CIM are credible efforts and ones to track closely. Both are informed by front-line experience and being built by folks who understand how to write standards. Only the speaker from IBM touched on the really hard problem: finding all the keys in the enterprise; not only the keys in current use but the keys used to encrypt all the email and data in the Iron Mountain archives..."

  • [September 24, 2008] "A More Holistic Approach to Key Management." By Paul Turner (VP Product and Customer Solutions, Venafi). Paper presented at the KMS 2008 Conference. "With the expanding use of symmetric keys comes the increasing need for organizations to institute rigorous security measures and management procedures across the entire lifecycle of those keys. Traditional approaches to key lifecycle management, however, are proving limited, especially when these keys are deployed across various systems and applications. This presentation will contrast traditional views of key lifecycle management with key learnings from large telecom and financial services organizations to present an expanded perspective of the key management operations required to holistically approach this important problem. This more expanded perspective includes the need to automate the creation and management of both keys and certificates, configuring the applications that use them and providing comprehensive tools to monitor and report on the status of every component being managed. This results in improved data security, critical system uptime, operational efficiency and audit readiness..."

  • [September 24, 2008] IEEE Key Management Summit 2008. Baltimore, Maryland, USA. September 23-24, 2008. Held in conjunction with MSST 2008 (25th IEEE Symposium on Massive Storage Systems and Technologies). See the Agenda, with details for the Tuesday Program and Wednesday Program. Slide presentations are available individually online, or in a complete ZIP file. The recordings are also available. Program Chair: Matt Ball (MV Ball Tech Consulting / Sun Microsystems). The Program Committee included: Jack Cole (US Army), Robert Griffin (RSA Security - EMC), Eric Hibbard (Hitachi Data Systems), Larry Hofer (Emulex), Walt Hubis (LSI Logic), Jim Hughes (Sun Microsystems), Bob Lockhart (nCipher), Fabio Maino (Cisco Systems), Michael Marcil (Vormetric), Luther Martin (Voltage), Landon Noll (Cisco Systems), Arshad Noor (StrongAuth), Subhash Sankuratripati (NetApp), and Hannes Tschofenig (Nokia/Siemens Networks). Background: "With recent legislation, such as California's SB 1386 or Sarbanes-Oxley, companies now have to publicly disclose when they lose unencrypted personal data. To meet this new need for encryption, many companies have developed solutions that encrypt data on hard disks and tape cartridges. The problem is that these data storage vendors need a solution for managing the cryptographic keys that protect the encrypted data. The IEEE Key Management Summit brought together the top companies that develop cryptographic key management for storage devices with the standards organizations that make interoperability possible and the customers that rely on key management to secure their encrypted data. The aim of this summit was to provide clarity on key management by showing how existing products and standards organizations address the problem of interoperability and security." Key Goals of KMS 2008 [Matt Ball]: Understand the current key management standards, understand current vendor key management solutions, and understand the needs of the customer.

  • [September 23, 2008] "Overview of Key Management and Key Management Standards." By Landon Noll (Cisco Systems). Presented at KMS 2008, Program Tuesday, September 23, 2008. 8:30am. The MP3 recording is also available. Overview: What is Key Management? Key Management Service (KMS) Architecture. KMS km Namespace. Cloud Computing Use Case. Key Management is the complete set of operations necessary to nurture and sustain encrypted data and its associated keys during the key life-cycle. A Key Management System is an implementation of all or parts of Key Management Operations. The Key Management Policy translates business security requirements into Key Management Operations which are then executed by a Key Management System. Key Management Audit securely records all Key Management operations associated with keys under its control. Key Management is not PKI (Public Key Infrastructure), though it is likely that digital certificates will play a part. It is not HSM (Hierarchical Storage Manager), though Hierarchical Storage Managers will use Key Management. It's more than just a HSM (Hardware Security Module), though HSMs may provide some Key Management services. Key management is not SSS (Single Sign-on System), though SSS may use some Key Management services. It is more than just encrypting and decrypting of data. Key Management is more about managing keys and how those keys are used. Key management consists of (partial list): I. Management operations: backup and restore of key material; archival and retention of key material; distribution of key material; expire, delete, destruction of key material; audit of a given key's life cycle; reporting of events and alerts. II. Policy operations: set and manage key policies; policy notification; policy enforcement; reporting of policy enforcement. Terms: Key: A handle for encrypted data; without the key, the encrypted data is 'lost'. Key Management Services: A multivendor service that maintains keys throughout their life cycle (Creation, Distribution, Archival, Sharing, Recovery, Deletion), Establishes policy on the use of keys (key type and strength, key lifetime, re-keying, etc.), and maintains an audit trail for all Key Management actions. Key Management Clients: actors involved in cryptographic operations that use Key Management Services via the Client API... This presentation describes the P1619.3 KMS Architecture and a KMS km Namespace, and km SOGUID Namespace (km://domain/object/path). SOGUID = Security Object Global Unique Identifier...

  • [September 23, 2008] "Key Management and Mobile Computers in the Government." By Bill Burr (NIST). Presented at KMS 2008. 15 slides. The MP3 recording is also available. Interest: How we're managing keys in the U.S. government. NIST Security Technology Group creates standards (FIPS - sensitive, unclassified data) and recommendations for crypto. Background: OMB M-06-16 (from VA laptop incident) and OMB M-07-16 ("Safeguarding Against and Responding to the Breach of Personally Identifiable Information") articulate requirements for encryption. Full Disk Encryption (FDE). Many agencies are asking for full disk encryption in response to M-06-16; user has to enter a password to unlock the data. Key or password backup is a big issue. Lose the key or password - then you often loose all the data on the disk. Many FDE solutions have some provision for managing and backing up keys and passwords. A good FDE system gives protection when a system/drive is lost or stolen; it may also provide instant erase or sanitization. But FDE is subject to network or malware attacks; if the drive is in use then malware has access too. Activated, sleeping or recently turned off software systems may be vulnerable to attack (Ed Felton "Lest We Remember: Cold Boot Attacks on Encryption Keys")...Don't leave your encrypted laptop in 'sleep mode'. All software FDE: may fall to a password dictionary attack if a drive/system is lost, stolen or imaged; attacker has everything needed for an offline password attack (so you need a high entropy password). FDE: at least five variants: (1) Pure software - key is on the encrypted laptop drive. (2) Store key on separate device - key can be removed from laptop. (3) Trusted hardware to secure keys (TCG/TPM) - trusted chip on motherboard to secure keys and boot process. (4) Removable crypto module - key is never in laptop memory, module can be separated from the data; slow. (5) Encryption built into disk drive controller - No performance hit for encryption, key is never in laptop memory, but the key is always bundled with the data on the drive, and extracting raw ciphertext from drive is a laboratory scale problem..."

  • [September 22, 2008] "Cryptographic Interoperability Strategy for the U.S. Department of Defense." By Sue (Sandi) Roddy (National Security Agency). Presented at KMS 2008, Program Tuesday, September 23, 2008. The MP3 recording is also available. "Based on the most recent Defense Science Board report (2008), the U.S. DoD cryptographic interoperability needs span the traditional Armed Services, our long-standing Allied Partners, the ever-expanding NATO Alliance and Non-Governmental Organizations (NGOs). Each of these partners has differing regulatory, social, technical and sovereign requirements. In finding common ground, the U.S. DoD is proposing a suite of algorithms based on open standards. However, security goes well beyond just the selection of algorithms. This talk will discuss the additional key management aspects of ensuring interoperability... The ability to get the right encryption key to the right person is a challenge. Need standards for interoperability..." [Ms. Roddy is the Information Assurance (IA) Infrastructure Development and Operations Technical Leader for the Information Assurance Directorate, NSA. Over the last 12 years, she's held a variety of technical leadership positions in DoD's cryptographic key management environment. She currently chairs a NATO Security Management Working Group and has worked in design and development of interoperable infrastructures with Combined Communications Electronics Board (CCEB) nation partners. Her previous experiences include cryptographic equipment repair and maintenance for the U.S. Naval Security Group.]

  • [August 13, 2008] "Victoria's Secret Key." By Scott Guthery (HID Global). From Keystrokes (Blog). "Arkajit Dey and Stephen Weis rolled out Keyczar, a changing room for keys. Besides being a cryptographic toolkit and application programming interface, Keyczar is the distilled essence of a key management system. To a first approximation, Keyczar satisfies the GlobalPlatform Key Management System Functional Requirements. Here are the key management features of Keyczar: (1) Keys are organized into and generated inside of keysets. (2) Each keyset is associated with a profile describing the keys in the keyset. Profiles include specification of the intended use of the keys in the keyset — signing, encryption, etc. (3) Each key in a keyset is in one of three key lifecycle states: Active, Primary, Inactive. Keys are created in the Active lifecycle state. There is at most one key in the Primary state in each keyset. (4) The keyset profile also keeps historical track of the versions of each key. Right now the version is just a number, but date and reason for rolling could easily be added to the description of each version. The key lifecycle states are defined in Keyczar as follows: [1] Primary: Verify or decrypt existing data and can sign or encrypt new data; [2] Active: Only verify or decrypt existing data; [3] Inactive: Only verify or decrypt existing data and may be revoked at any time. It seems to me that Revoked is another lifecycle state, but that's a small point..." See also: the Keyczar Google Project, and the Keyczar Discuss List. Guthery wrote to that list (August 13, 2008), 'Keyczar as a Key Management System Starter Kit': "In addition to innovative cryptographic and usability features, Keyczar provides rudimentary key management capabilities. This I believe is to its credit and distinguishes it from other cryptographic APIs and toolkits. I do not use the term 'rudimentary' in any pejorative sense. Key management is a territory that has just begun to be mapped, so it is wholly appropriate that first steps be modest and exploratory. Lighting up the key management features of Keyczar lets one do some exploration of this frontier on one's own. The few key management standards and specifications that do exist are not in complete agreement with respect to basic definitions, let alone about criteria, objectives, and requirements for key management systems. For those that travel such paths, I would call attention to the following: IEEE P1619.3 Work Group, GlobalPlatform Key Management System Functional Requirements, and NIST Special Publication 800-57, Recommendations for Key Management, Parts 1 and 2..."
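    The three lifecycle states summarized in the post can be captured in a few lines. This mirrors the blog's description, not Keyczar's actual source code.

```python
# Keyczar-style key lifecycle states as described above (illustrative).
from enum import Enum

class KeyStatus(Enum):
    PRIMARY = "verify/decrypt existing data and sign/encrypt new data"
    ACTIVE = "verify/decrypt existing data only"
    INACTIVE = "verify/decrypt existing data only; may be revoked at any time"

def can_create_new_data(status: KeyStatus) -> bool:
    return status is KeyStatus.PRIMARY   # at most one PRIMARY key per keyset
```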

  • [June 2008] "Key Management Infrastructure for Protecting Stored Data." By Luther Martin (Chief Security Architect, Voltage Security). From IEEE Computer (June 2008). A new standard from the IEEE P1619 Security in Storage Working Group (SISWG) will make it easier to manage the keys used to encrypt data in storage. This standard will greatly simplify key management and finally make interoperable key management possible. Products that implement the standard should be available by next year [2009]... Key management covers everything that's done with cryptographic keys and other related security parameters during the keys' entire life cycle. It includes how keys are generated, stored, used, and destroyed, as well as the policies that define how these things must be done... The P1619.3 standard's ambitious goal is to eliminate all [myriad KMS] problems and make interoperable key management possible. To do this, the standard abstracts the components of a cryptographic system into a key management server, a key management client, and a cryptographic unit. Components interact: In this [example] model, a key management server creates and distributes keys as well as the policies covering their use. Key management clients get keys and policies from a key management server on behalf of a cryptographic unit. These units perform the actual encryption and decryption operations with the keys the key management clients manage. Any product that complies with the P1619.3 standard will support a standard set of operations between these components. In addition, the P1619.3 standard also defines operations between key management servers. Any compliant implementation will also support a standard set of operations that let key management servers work together by securely exchanging both cryptographic keys and policies. This means that future key management systems will be able to interoperate in ways that aren't possible today, and that users of storage encryption will no longer be locked into single-vendor solutions..."
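    A skeletal rendering of the three-component abstraction Martin describes (key management server, key management client, cryptographic unit) is sketched below. The class and method names are illustrative assumptions, not taken from the P1619.3 draft, and the toy cipher stands in for real encryption.

```python
# Minimal sketch of the server / client / cryptographic-unit abstraction above.
import os

class KeyManagementServer:
    """Creates keys and the policies covering their use."""
    def __init__(self):
        self._keys = {}
    def create_key(self, key_id: str, policy: str) -> None:
        self._keys[key_id] = (os.urandom(32), policy)
    def get_key(self, key_id: str):
        return self._keys[key_id]

class KeyManagementClient:
    """Fetches keys and policies from a server on behalf of a cryptographic unit."""
    def __init__(self, server: KeyManagementServer):
        self.server = server
    def fetch(self, key_id: str):
        return self.server.get_key(key_id)

class CryptographicUnit:
    """Performs the actual encryption with keys obtained via the client."""
    def encrypt(self, key: bytes, data: bytes) -> bytes:
        # toy XOR 'cipher' purely for illustration; a real unit would use AES
        return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

server = KeyManagementServer()
server.create_key("tape-42", policy="backup-encryption")
key, policy = KeyManagementClient(server).fetch("tape-42")
ciphertext = CryptographicUnit().encrypt(key, b"payload")
```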

  • [May 2008] "Enterprise Key Management." Whitepaper. From the BITS Security Working Group, BITS Financial Services Roundtable. May 2008. 24 pages. [BITS (originally): "Banking Industry Technology Secretariat"] Excerpt: "Financial institutions recognize the necessity to maintain secure, confidential and lawful treatment of data, including personally identifiable information as defined by the country in which the business is conducted. Encryption is increasingly important as a tool for protecting such data, particularly personally identifiable information, from disclosure to unauthorized parties. For encryption to be effectively utilized across large enterprises, the encryption keys must be managed with the same care as given to the confidential data they protect, for the duration of their entire lifetime, to ensure that they are not easily guessed, disclosed or lost, and so that the data they encrypt can be recovered by authorized individuals. This paper describes a general framework, best practices, and additional considerations regarding key management in the enterprise. It offers high-level definitions and examples of critical components of an enterprise-wide key management system, including key generation, distribution, archival and storage. Appendix One provides an overview of Encryption and Key Management, while Appendix Two provides a list of Industry Standards and References relating to the topics of Encryption and Key Management... The control of cryptographic keys is critical to the integrity of cryptographic systems. Commercial products do not always address all aspects of the key management lifecycle. As a result, businesses are forced to supplement commercial products with additional technical solutions or manual processes to fill the gaps in the lifecycle management. The additional effort of 'supplementing' commercial products results in increased cost and complexity of the data protection efforts. To ensure effective and efficient deployment of key management across the enterprise, an 'Enterprise Key Management Program' should be developed and maintained. Such a program should consist of the following elements: [I] Governance and Business Management Oversight: (1) Meet business, regulatory and legal requirements; (2) Define, maintain and enforce policies, standards and common practices. [II] Well Defined Key Management Lifecycle: (A) Key Generation, (B) Key Distribution, (C) Key Usage, (D) Key Storage (Operational Storage, Backup Storage, Archive Storage); (E) Key Recovery, (F) Key Reissue, (G) Key Escrow, (H) Key Retirement (Key Replacement [Rollover, Update and Renewal, Key Deregistration], Key Revocation, Key Deletion)..."

  • [March 09, 2008] "Seagate Includes IEEE P1619.3 in an FDE Whitepaper." By Matt Ball. From Heisencoder: Matt Ball's Blog on Coding and the Heisenberg Uncertainty Principle. Blog. "Seagate recently published a white paper depicting the IEEE 1619.3 key management protocol used in a system containing Seagate Full Disk Encryption (FDE) hard disks. It's an interesting read if you're into the hardware encryption scene. The white paper mentions using existing key management systems, like IBM's EKM (Enterprise Key Management) system, with storage systems that include Seagate FDE hard disks. The FDE encrypts the hard disk data using an AES-128 encryption key (NIST's Advanced Encryption Standard), and stores the only copy of this encryption key on the hard disk in encrypted form. To decrypt the encryption key, you need an 'authentication key'. The FDE also stores a cryptographic hash of the authentication key, which is used to verify whether the user entered the correct authentication key. The beauty of this setup is that it is possible to perform a fast secure-erase of the hard disk by simply erasing the encrypted encryption key. Also, if an attacker were able to open the hard disk or compromise the firmware, the only available information is the encrypted encryption key and the hash of the authentication key. Without the authentication key, it is impossible to get any data off the hard disk. There are a few caveats here, however: (1) In the absence of a key management server, the authentication key is likely a password entered by the user, which makes the strength of the encryption only as strong as the weaker of the entropy of the password (which is typically very low) or the physical security of the hard disk (which is unknown)... (2) Neither the white paper nor any other source I've seen describes the AES encryption mode used for protecting the data and the encryption key in the FDE. Just using AES-128 is not sufficient to ensure a high level of security — you need to use AES in a secure mode of operation..."

  • [January 30, 2008] "Storage Standards Part Three: Key Management." By Rick Cook. "Three separate standards groups are working on encryption key management standards. The IEEE Security in Storage Working Group (SISWG) 1619 committee, called P1619.3, is focused on storage management. A second key management standard, the Enterprise Key Management Infrastructure (EKMI), is being developed by the Organization for the Advancement of Structured Information Standards (OASIS) consortium and is more general. The third, IETF's Keyprov, is based around a list of best practices for key management and will eventually evolve into a standard according to IETF documents. As storage security becomes widespread, managing encryption keys — especially those from different vendors' products — has become important. Standards for key management have lagged behind this need..."

  • [January 5, 2008] "Key Management: The Key to Secure Storage". By Walt Hubis (LSI). 13 slides. Presented at the Storage Visions 2008 Conference, January 5-6, 2008, Flamingo Hotel, Las Vegas, NV, USA. "Storage systems can be made secure only to the extent that the keys to the data can be protected from disclosure, modification, or loss. The data and the keys protecting the data may be needed for years, or may be required for only a fraction of a second. At the same time, management of the keys has to be foolproof and transparent to the users of the data, especially in consumer content management systems. This session delves into the issues around securing data and the importance of key management for data storage systems including disk drives. The threat models associated with the data keys are examined, and the current techniques for generating, managing, and transmitting keys will be explored. An up-to-the-minute description of key management standards organizations will be provided, along with approaches to make key management easy to use. The topics covered in this session include: (1) The Key Management Problem; (2) Types of Keys; (3) Securely Exchanging Keys; (4) Threats to Key Management; (5) Key Management Systems; (6) Current Key Management Standardization Efforts. The session will provide an introduction to key management for electronic storage devices. Attendees can expect to gain a basic knowledge of the issues, terminology, and technologies associated with managing keys for secure storage..." Specifications are being prepared at several levels [lowest to highest]: (1) Device/Media Encryption, (2) Channel Encryption, (3) Management Interface, (4) Key Management, and (5) Application/Data. Standards Organizations working on encryption and key management: DMTF, IEEE, IETF, INCITS (T10/T11/T13), OASIS, SNIA, Trusted Computing Group (TCG).

  • [December 03, 2007] "Key Management for Enterprise Data Encryption." By Ulf T. Mattsson (Protegrity Corporation [WWW]). December 3, 2007. Published through SSRN. 7 pages. "One of the essential components of encryption that is often overlooked is key management - the way cryptographic keys are generated and managed throughout their life. Since cryptography is based on keys which encrypt and decrypt data, your database protection solution is only as good as the protection of those keys. Security depends on several factors including where the keys are stored and who has access to them. When evaluating a data privacy solution, it is essential to include the ability to securely generate and manage keys. This can be achieved by centralizing all key management tasks on a single platform, and effectively automating administrative key management tasks, providing both operational efficiency and reduced management costs. Data privacy solutions should also include an automated and secure mechanism for key rotation, replication, and backup. The difficulty of key distribution, storage, and disposal has limited the wide-scale usability of many cryptographic products in the past. Automated key distribution is challenging because it is difficult to keep the keys secure while they are distributed, but this approach is finally becoming secure and more widely used. Standards for key management have been developed by the government and by organizations such as ISO, ANSI, and the American Bankers Association (ABA). The key management process should be based on a policy. This paper will exemplify different elements of a suggested policy for a Key Management System used for managing the encryption keys that protect secret and confidential data in an organization... Protecting data only sometimes — such as sending sensitive information over wireless devices over the Internet or within your corporate network as clear text — defeats the point of encrypting information in the database. It's far too easy for information to be intercepted in its travels, so the sooner the encryption of data... A major problem with encryption as a security method is that the distribution, storage, and eventual disposal of keys introduce an expensive and onerous administrative burden. Historically, cryptographic keys were delivered by escorted couriers carrying keys or key books in secure boxes. An organization must follow strictly enforced procedures for protecting and monitoring the use of the key, and there must be a way to change keys... [cache/archive]"

  • [November 16, 2007] "Self-Encrypting Hard Disk Drives in the Data Center. Data is Instantaneously Secured the Moment the Drive Leaves the Data Center." Seagate Technology Report. 6 pages. "[...] At least 35 U.S. states now have data privacy laws that state if you encrypt data-at-rest, you don't have to report breaches of that data. U.S. Congressional bills have similar provisions. The Payment Card Industry Data Security Standard, which requires rendering sensitive cardholder data unreadable anywhere it is stored, lists strong cryptography as an acceptable method of doing so... The beauty of encryption is that from the moment the drive or system is removed from the data center, whether intentionally or otherwise, the data on the drive is secure. No advance action or thought is required on the part of the data center administrator to secure this data. There is no way for the data to be breached should the drive be mishandled... Technology Overview: This technology consists of three components: self-encrypting hard drives, a key management service that stores, manages and serves authentication keys (i.e., passwords), and a storage system that passes these authentication keys to the correct drive. The self-encrypting drives perform full disk encryption. When a write is performed, clear text enters the drive and, before being written to the disk, is encrypted using an encryption key embedded within the drive. When a read is performed, the encrypted data on the disk is decrypted before leaving the drive... The drive requires an authentication key (otherwise known as a password) from an outside source before the drive will unlock for read/write operations. In addition to its traditional functions, the storage system defines secure volume groups, gets the authentication keys from the key management service, and passes the key to the correct drive... The storage system makes the encryption function transparent to the hosts and applications... The key management service may include software- or hardware-secure key stores. At the request of the storage system, it will create and assign keys transparently to hosts and applications. The key management service can leverage existing security management policies to define keys and restrict access to keys. Key management includes backup and synchronization, key life-cycle management, auditing, and long-term retention. The key management service can employ existing high-availability and disaster-recovery configurations. The key material can be automatically included within the server backup data and stored offsite... This technology is designed to be standards-based and to be part of an interoperable solution. The Trusted Computing Group (TCG) Storage Work Group has developed a security communication protocol for self-encrypting drives. Supporting transport commands in T10 and T13 are ratified. IEEE 1619.3 is developing a standard authentication key management protocol. All hard drive vendors and major storage vendors are participating in the Trusted Computing Group. Leading storage system vendors and key management vendors are participating in IEEE 1619.3. Ultimately, this technology will apply across the entire data center... Self-encrypting drives may be in storage arrays, on SANs, NAS, and servers, in data centers, branch offices and small businesses. A unified key management service will support the key management requirements for all forms of storage, as well as other security applications..." [cache/archive]

  • [September 2007] "Key Management Standards Hit the Fast Track." By Greg Goth. From IEEE Distributed Systems Online Volume 8, Number 9 (2007). It might appear that the technology industry just discovered encryption-key management in 2007. Since the beginning of the year, data-security product vendors, enterprise customers, and standards bodies have embraced efforts to standardize methods for managing encryption keys across disparate encrypted-data storage and exchange systems. Three standards bodies — the IEEE, the Internet Engineering Task Force (IETF), and OASIS — have recently chartered working groups on key management. For enterprise technologists, navigating the landscape of vendor-specific key-management solutions and emerging standards efforts might prove to be a daunting task. Bob Griffin, technical marketing director for RSA Security, sees two prevailing industry trends precipitating the urgency to create a key-management standard. First is the proliferation of endpoint devices that can share keys to access encrypted data. The second, following naturally from the first, is the increased number of vendors homing in on this market niche. A third factor, just as important as the technical nuts and bolts, is a regulatory climate that's becoming ever more security-conscious. Numerous laws, such as California's Breach Disclosure Law, and US federal regulations, such as the U.S. Health Insurance Portability and Accountability Act (HIPAA), as well as the Payment Card Industry's Data Security Standard (PCI), have spelled out strict requirements for protecting customer and patient data. As a result, security experts increasingly recommend encrypting data stored on any device, not just data in transit. And those devices must be able to share keys efficiently. For now, RSA has staked most of its key-management effort on the IEEE process. The key-management group, IEEE P1619.3, is a subgroup of the IEEE 1619 Security in Storage Working Group. Griffin is a member of 1619.3, which is focusing on storage encryption. He's also serving as an observer and liaison in the OASIS key-management effort, known as Enterprise Key Management Infrastructure (EKMI). Griffin characterizes the OASIS effort as "an extremely, extremely large project." It aims to enable universal encryption and decryption at the application layer. Because this would require every imaginable application to adhere to the same key management standard, both Steve Norall (Taneja Group) and Griffin see results at least five years away..."

  • [July 2007] "A Proposal of Key Management Scheme and Its Operation Using Anonymous Biometrics on ID-based Infrastructure." By Akitoshi Izumi, Yoshifumi Ueshige, and Kouichi Sakurai. From International Journal of Security and Its Applications Volume 1, Number 1 (July 2007). 12 pages. "In the information exchange through network, the security risks always exist, that is eavesdropping, defacing, and spoofing by the attacker. PKI (Public Key Infrastructure) will prevent such attacks. But key management is very serious problem in PKI. The public key certificate is issued and distributed by certificate authority, but we think that the updating of expired certificate etc. are very costly for users. And secret key management is more serious problem. In order to solve above problems, we propose the scheme that stores protected secret key which is made by combination of biometrics and secret key in the smartcard in IDbased cryptography system. The user can restore the secret key from protected secret key by presenting his fingerprint to smartcard that has protected secret key and helper data. In our scheme, the template is not need for authentication. So, the problem of the template leakage won't arise. Lastly, we proposed the concrete operation scheme in which our scheme is used and how to make signature or authentication by applying our scheme. We show that the cost of the public key and secret key management will be reduced by using this operation scheme..."

  • [July 2007] Constructive Key Management (CKM) Framework. Edited by Jan Wack (and others). Prepared by TecSec; see the TecSec Reference papers. Copyright © 2007 TecSec. Updated November 2008 (or later). 18 pages. "CKM is a key management framework that includes architecture for administering keys and for an encryption schema. The framework results in an encrypted message that users share through a common set of keying components. Security through encryption of information may be applied at the transport or network layer, or encryption may be applied to the information content. There is often a need for a mix between Transport and Content encryption protection... There is often a need for a mix between a technical encryption solution and a business representation through content. In a client bank exchange, for example, it could be decided to encrypt only some parts of the message depending on the business need for confidentiality and access control. Protecting content can offer more flexibility to combine business models with security through encryption. Signing for non-repudiation can be considered an adjunct to a content protection process. The message may be protected through the network, as in the example of a channel in which there is no direct binding of encryption to the message, or may be protected at the content level, for which there can be a direct binding of encryption to the content. The CKM schema may apply to a network encryption implementation or to a content protection implementation. For content protection the CKM schema can apply to on-line and off-line communications environments... CKM is exactly what the name implies: A key is constructed per message. A message-encrypting key (or working key) is constructed as needed by the originator of the message, and can only be re-constructed by the appropriate entities as necessitated by their respective roles and relationship to the content... CKM is embodied in numerous standards (X9.69, X9.73, X9.84, X9.96) published by the American National Standards Institute (ANSI) and is being incorporated into ISO 22895, which includes reference to the cited ANSI standards... To implement an encryption schema, the fundamental challenge is: how to maintain a secret associated with that schema so that vastly dispersed usage can be accomplished. In sum, how to balance distributing the secret, in the form of a key, to many users. The mechanism is a framework for a key management architecture. CKM is built on a key management framework: the architecture is designed for a large-scale enterprise solution or even for personal use; the CKM schema results in a dynamic creation of a message encrypting key on-the-fly; there are multiple inherent integrity checks; and the schema has been certified and is contained in national and international standards... ANSI X9.69 (Framework for Key Management Extensions) identifies Constructive Key Management (CKM); a multi-tier architecture is illustrated that can offer a scalable enterprise solution. The elements of the CKM administration are described in the standard. The Key Establishment portion of key management is defined so that designated groups of users can establish keys for data encryption via a key derivation procedure, thus avoiding key distribution in the usual sense..."
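    The "constructed key" idea above (a working key built per message from shared keying components) can be sketched with a generic HMAC-based derivation. This is only an illustration of key derivation from shared components, not the actual X9.69 CKM combiner, and the component names are assumptions.

```python
# Illustrative per-message working-key construction from shared keying components.
import hashlib, hmac, os

def working_key(domain_key: bytes, label_keys: list[bytes], message_nonce: bytes) -> bytes:
    """Combine the shared components and fresh per-message material into a
    message-encrypting (working) key."""
    material = b"".join(label_keys) + message_nonce
    return hmac.new(domain_key, material, hashlib.sha256).digest()

nonce = os.urandom(16)                           # generated by the originator per message
wk = working_key(b"enterprise-domain-key", [b"label:finance", b"label:eu"], nonce)
# recipients holding the same components (and the nonce) re-construct the same key
```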

  • [April 30, 2007] "Analysis: Enterprise Key Management." By Jordan Wiens and Steven Hill. Analysis of key management offerings from NeoScale, Decru, nCipher, RSA, and Sun Microsystems. "Layer upon layer of encryption management, replete with multiple key-management challenges. It's time to stop and think: Will a piecemeal approach to encryption and key management cost us big? Maybe, but you have little choice. Even though large enterprises have been fighting this problem for years, the hard truth is, there are no standards to enable unified management of keys from disparate systems. RSA's Public-Key Cryptography Standard 11 and Microsoft's CryptoAPI are helpful, but they're not standards. The OpenSSL project is a standards-based toolkit for crypto implementation, but it doesn't address key management. Java JCA/JCE (Java Cryptography Architecture/Java Cryptography Extensions) is akin to Microsoft's CryptoAPI: If you're using Java and JCE, it might meet EKM (enterprise key management) requirements for those apps only. Sun's SKIP (Simple Key Management for Internet Protocols) provides key sharing only, no management. Our analysis of available key-management offerings ("Review: Enterprise Key Management Software") revealed that even without settled standards, a few vendors are taking baby steps in the right direction. For the most part, though, vendors using encryption are keeping the R&D close to home, focused on improving key management for their own offerings... As a rule, key management has been integrated as a part of each encryption platform, and in the absence of industrywide guidelines, each vendor has developed its own methodology for operation of its key systems. As of 2007, there are draft standards before the Internet Engineering Task Force and the National Institute of Standards and Technology designed to standardize protocols for key generation and transfer across multiple platforms, but until vendors come to an agreement, it will continue to be a challenge for companies to obtain a global key-management solution that encompasses all types of encryption systems. The gold standard for high-security key management today comes directly from NIST. Federal Information Processing Standard (FIPS) 140-2 establishes specific rules for the generation, security, encryption, storage, recovery and auditing of passwords. It also prescribes four security levels for the physical protection of password control systems, with Level 3 being the highest level required for most corporate applications. FIPS 140-2 Level 3 includes requirements for identity-based authentication, internal encryption, physical separation of secure/nonsecure ports and strong protections against physical tampering. Although FIPS 140-2 clearly establishes the security requirements of key systems, it doesn't specifically dictate how those requirements are accomplished, leaving the details up to individual vendors. In response, several encryption companies have recognized the need for key-management systems..."

  • [February 2007] "Symmetric Key Management Systems." By Arshad Noor. From ISSA Journal (February 2007). "Most security professionals are familiar with symmetric key-based cryptography when presented with terms such as Data Encryption Standard (DES), Triple DES (3DES) and the Advanced Encryption Standard (AES). Some are also familiar with Public Key Infrastructure (PKI) as an enterprise-level solution for managing the life-cycle of digital certificates used with asymmetric-key cryptography. However, the term Symmetric Key Management System (SKMS) — which refers to the discipline of securely generating, escrowing, managing, providing access to, and destroying symmetric encryption keys — will almost always draw blank stares. This is not surprising, because symmetric encryption key management has traditionally been buried in applications performing encryption. These applications primarily focused on business functions, but managed encryption keys as an ancillary function. Consequently, there was no reason to emphasize key management. This article advances the notion that the time has come for the infosec community to address SKMS as an application-independent, enterprise-level defense mechanism that is more effective when addressed separately... Why is symmetric key management a problem? After all, applications seem to have addressed the problem within the applications for decades, and appear to be continuing to do so. The problem becomes obvious if you are in IT Operations. As an illustration, if you are responsible for managing a point-of-sale (POS) application that accepts credit cards for payment, an e-commerce application that requires credit cards for payment, a payment processing application that communicates with the credit card network for settling transactions, a back-office database that consolidates transactions, and a business analytics application for determining retail fraud, you have five applications that require encryption... IT Operations staff are [often] forced to manage at least 8 to 10 distinct symmetric key-management infrastructures, each with its own technology, training, documentation, procedures and audits... Presented with the problem in this perspective, the logical solution springs to clarity: the key-management capability needs to be abstracted from the applications that use it. Such a solution is not unlike the Domain Name System (DNS) for hostname-IP address resolution, or a Relational Database Management System (RDBMS) for data management. [Solution?] The SKMS architecture also allows for business continuity in the face of network failures, massive scalability and the use of many well-understood technical standards. It is architected along the lines of DNS; the completely free software abstracts symmetric key-management functions from applications and consolidates them on one or more centralized Symmetric Key Services (SKS) servers on the network... While symmetric encryption has been in use for decades within general computing, we have reached a confluence of inflection points in technology, the Internet and in regulatory affairs, that require IT organizations to implement Symmetric Key Management Systems (SKMS) as independent infrastructures..."
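    Noor's DNS analogy suggests the shape of such a system: applications resolve a key identifier against a central Symmetric Key Services (SKS) server rather than managing keys themselves. A minimal sketch under assumed names and interfaces (authorization, escrow policy, and transport security are omitted):

```python
# DNS-like central symmetric key service (illustrative only).
import os

class SKSServer:
    """Central generation and escrow of symmetric keys, addressed by a global ID."""
    def __init__(self):
        self._escrow = {}
    def request_key(self, key_id: str) -> bytes:
        return self._escrow.setdefault(key_id, os.urandom(32))   # generate on first use

class SKSClient:
    """Thin resolver an application links against, analogous to a DNS stub resolver."""
    def __init__(self, server: SKSServer):
        self._server = server
    def key_for(self, key_id: str) -> bytes:
        return self._server.request_key(key_id)

server = SKSServer()
pos, analytics = SKSClient(server), SKSClient(server)
# both applications resolve the same identifier to the same escrowed key
assert pos.key_for("retail/card-numbers") == analytics.key_for("retail/card-numbers")
```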

  • [October 30, 2006] "Location-Aware Key Predistribution Scheme for Wide Area Wireless Sensor Networks." By Katerina Simonova, Alan C. H. Ling, and X. Sean Wang (University of Vermont, Burlington, VT, USA). Paper presented at the Fourth ACM Workshop on Security of ad hoc and Sensor Networks (Alexandria, Virginia, USA, 2006). 12 pages (15 references). Key predistribution in wireless sensor networks refers to the problem of distributing secret keys among sensors prior to deployment. Solutions appeared in the literature can be classified into two categories: basic schemes that achieve fixed probability of sharing a key between any pair of sensors in a network and location-aware schemes that use a priori knowledge about sensors' communication needs, such as location information, to guarantee connectivity only among sensors that need to and can talk. Location-aware schemes achieve performance enhancement over the basic schemes by using resources efficiently. However, existing location-aware solutions are not compatible with combinatorial methods that use a set of key groups to generate sensors' key rings. Combinatorial methods are appealing as they achieve deterministic performance close to optimal. Besides, existing location-aware solutions do not have enough flexibility in terms of trade-off between connectivity and resilience. In this paper we propose a general key predistribution framework that can use any key predistribution method as its underlying scheme, including combinatorial ones. The proposed framework provides the user with options on how to allocate available resources to achieve desired performance based on the needs of the application. We also consider heterogeneous sensor networks consisting of nodes with different amount of memory and communication ranges and show that special treatment of this case results in substantial performance improvement. We confirm the good performance of our framework by providing experimental and analytical results..." [archive/cache]

  • [June 2005] "Guidelines for Cryptographic Key Management. By Steven M. Bellovin (Department of Computer Science, Columbia University) and Russell Housley (Vigil Security, LLC). IETF Network Working Group, Request for Comments #4107. BCP 107. June 2005. Source: http://tools.ietf.org/rfc/rfc4107.txt. I-D Reference: Internet Draft draft-bellovin-mandate-keymgmt-03. See the I-D Tracker. "The question often arises of whether a given security system requires some form of automated key management, or whether manual keying is sufficient. This memo provides guidelines for making such decisions. When symmetric cryptographic mechanisms are used in a protocol, the presumption is that automated key management is generally but not always needed. If manual keying is proposed, the burden of proving that automated key management is not required falls to the proposer..."

    The term "key management" refers to the establishment of cryptographic keying material for use with a cryptographic algorithm to provide protocol security services, especially integrity, authentication, and confidentiality. Automated key management derives one or more short-term session keys. The key derivation function may make use of long-term keys to incorporate authentication into the process. The manner in which this long-term key is distributed to the peers and the type of key used (pre-shared symmetric secret value, RSA public key, DSA public key, and others) is beyond the scope of this document. However, it is part of the overall key management solution. Manual key management is used to distribute such values. Manual key management can also be used to distribute long-term session keys. Automated key management and manual key management provide very different features. In particular, the protocol associated with an automated key management technique will confirm the liveness of the peer, protect against replay, authenticate the source of the short- term session key, associate protocol state information with the short-term session key, and ensure that a fresh short-term session key is generated. Further, an automated key management protocol can improve interoperability by including negotiation mechanisms for cryptographic algorithms. These valuable features are impossible or extremely cumbersome to accomplish with manual key management..."


