ICSA Guide to Cryptography
Randall Nichols
 $69.95  0-07-913759-8
© 1998 The McGraw-Hill Companies, Inc. All rights reserved.
Any use of this Beta Book is subject to the rules stated in the Terms of Use.

CHAPTER 2

FIRST PRINCIPLES and OVERVIEW

Summary

The explosive growth of commercial computer systems and networks over the last decade has brought with it the challenge of protecting them from unauthorized access to their contents. One of the most important tools available for computer security is cryptography, and a wide variety of products using encryption technologies have become available commercially. The Cryptography Products Consortium (CPC), facilitated by the International Computer Security Association (ICSA), has been working toward the joint security goals of enhanced interoperability and communication among its members' products. The OSI standard is cited as a common framework for the design of secure networks, and common industry practices for implementing cryptography in commercial computer systems are identified within that OSI framework.

Classical Cryptosystems

Cryptography is the science of writing messages that no one except the intended receiver can read. Cryptanalysis is the science of reading them anyway. "Cryptography" derives from the Greek kryptos, meaning hidden, and graphein, meaning to write.

Steganography also comes from the Greek, meaning "covered writing," and will be considered in the next chapter. The term cryptographia, meaning secrecy in writing, was used in 1641 by John Wilkins, a founder, with John Wallis, of the Royal Society; the word "cryptography" itself was coined in 1658 by Thomas Browne, the famous English physician and writer. The aim of cryptography is to render a message incomprehensible to an unauthorized reader: ars occulte scribendi. One speaks of overt secret writing: overt in the sense of being obviously recognizable as secret writing.

The words, characters or letters of the original intelligible message constitute the Plain Text. The words, characters or letters of the secret form of the message are called Cipher Text or constitute a Cryptogram.

The process of converting plain text into cipher text is encipherment or encryption. The reverse process of reducing cipher text into plain text is decipherment or decryption. A cryptosystem is defined as the associated items of crypto-material and the methods and rules by which these items are used as a unit to provide a means of encryption and decryption. A cryptosystem embraces both the general enciphering-deciphering method and the specific keys essential to the employment of the system.

Cipher systems are divided into two basic classes: substitution and transposition. A substitution cipher is a cryptogram in which the original letters of the plain text, taken either singly or in groups of constant length, have been replaced by other letters, figures, signs, or a combination of them in accordance with a definite system and key.

A transposition cipher is a cryptogram in which the original letters of the plain text have been rearranged according to a definite system. Modern cipher systems use extremely complex mathematical forms of both substitution and transposition to protect sensitive messages.
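As a toy illustration of the two classes, a Caesar-style substitution and a simple columnar transposition can be sketched in a few lines of Python (function names are illustrative; these classical ciphers are, of course, insecure and shown only to make the distinction concrete):

```python
# Toy illustrations of the two basic cipher classes: substitution
# replaces letters; transposition rearranges them.

def caesar_encipher(plain: str, shift: int) -> str:
    """Monoalphabetic substitution: each letter is replaced by the
    letter `shift` positions later in the alphabet."""
    out = []
    for ch in plain.upper():
        if ch.isalpha():
            out.append(chr((ord(ch) - 65 + shift) % 26 + 65))
    return "".join(out)

def columnar_encipher(plain: str, key_width: int) -> str:
    """Simple transposition: write the text in rows of `key_width`
    columns, then read it off column by column. Every original letter
    survives; only the order changes."""
    text = [c for c in plain.upper() if c.isalpha()]
    cols = ["".join(text[i::key_width]) for i in range(key_width)]
    return "".join(cols)

print(caesar_encipher("ATTACK AT DAWN", 3))     # DWWDFNDWGDZQ
print(columnar_encipher("ATTACK AT DAWN", 4))   # ACDTKATAWATN
```

Note that the transposition output is an anagram of the plain text, while the substitution output preserves letter positions but not identities; modern ciphers combine both effects many times over.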

Purposes of Encryption

In a cryptosystem, plain text is transformed into cipher text by a known algorithm (a set of mathematical rules) under the control of a key. In a system using a key, the message cannot be transformed without that key. Two types of key systems exist: symmetric (private-key) systems, in which sender and receiver use the same key, and asymmetric (public-key) systems, in which sender and receiver use different keys. In an asymmetric system the sender uses a public key, available to anyone, to encipher the message, but only the receiver, using a unique private key, can decipher it.

The basic purpose of encryption is to protect sensitive information from unauthorized disclosure. When computer systems are involved, this information can be stored within the system or transmitted across insecure public carriers.

Modern encryption methods are used to prevent the exposure defined above and offer desirable features against other types of exposure as well, such as:

Data confidentiality, or secrecy, since messages must be decrypted in order for information to be understood.

Data integrity, because some algorithms additionally protect against forgery or tampering.

Authentication of the message originator, since successful use of the key demonstrates that it has not been compromised and remains secret. Authentication of a system user occurs when the user performs a cryptographic function with a cryptographic key unique to that user.

Electronic certification and digital signature, using cryptographic algorithms to protect against unauthorized modification and forgery of electronic documents.

Non-repudiation, using a) secret key technology whereby a trusted third party (TTP) can hold a copy of the secured transaction or b) public key technology where users impose non-repudiation on the originator by virtue of the digital signature. Public key technology can also provide non-repudiation of the recipient by requiring an acknowledgment signed by the recipient before a "contract" is formed. Thus, neither sender nor receiver can deny the document. The signed acknowledgment includes the signature from the original message. This is very important in the making of electronic contracts on such media as the Internet.

Cryptosystems represent a powerful countermeasure to computer intrusions.

Cryptographic product vendors have a mission to reduce the risk in the electronic marketplace.

A Glimpse at Commercial Cryptography

The Cryptography Products Consortium (CPC) is a group of forty talented companies specializing in the delivery of a wide variety of cryptographic products for protection of commercial computer systems and networks. In 1997, the International Computer Security Association (ICSA) initiated certification programs for many different categories of security products and services (file encryptors, virtual private networks (VPNs), cryptographic toolkits, and smartcards). ICSA's overall goal is to improve commercial security systems by improving the implementation, sales, and use of appropriate security products, services, policies, techniques and procedures.

The CPC as a joint entity defined four values supporting their mission to produce quality cryptographic products for their customer facilities. These values are Quality, Service, Innovation, and Collaboration. Figure 2-1 shows the linkage between CPC performance and their corporate values.

Note that the critical success factors include both quality goals and customer satisfaction.

Customer satisfaction is most important in the commercial security market because of the unequal tradeoffs among risk, security, cost and productivity. Commercial firms and government departments have different reasons and priorities for justifying the purchase of security systems, and the security concerns of commercial facilities are not the same as those of government entities. The more difficult an encryption product is to understand, the less likely it is to be added to customer inventory. Customers are satisfied when they are comfortable with the supporting quality controls used by the vendor. Quality is the key issue in the commercial security market. In some firms, the technical superiority of the encryption system is of secondary importance in the decision to purchase; compatibility of the product line with current computer systems may have significant influence on the final decision.

How do we measure the quality of a cryptosystem? One answer is to provide products that conform to international standards of excellence.

Cryptographic Standards

To customers, quality and interoperability of encryption products are essential. Standards facilitate widespread use of cryptographically sound techniques and interoperability of systems and system components. The main standards organizations--ISO, CCITT, ANSI, IEC, and ECMA--are described by Stallings. Appendix Tables A-1 through A-7 list standards addressing particular areas. Menezes presents a detailed overview of these standards.

Table A-1 presents international (ISO and ISO/IEC) application-independent standards on cryptographic techniques. Tables A-2 and A-3 summarize banking security standards, subdivided into ANSI and ISO standards. Table A-4 considers international security architectures and frameworks (ISO and X.509). Table A-5 summarizes security-related standards for use by US federal government departments. Table A-6 addresses selected Internet specifications, while Table A-7 notes selected de facto industry standards.

Importance of Standards to the Commercial Market

The telecommunications industry embraced standards to govern the physical, electrical and procedural characteristics of its communication equipment. Historically, the computer industry has not embraced this view. Computer vendors tended to bind their customers with proprietary products and protocols and were slow to push for standardization of interfaces. The CPC realized that computers from different vendors must communicate with each other; with the ongoing evolution of protocol standards, customers would no longer accept special-purpose protocol-conversion software development. From the potential customer's standpoint, there are three key advantages of standardization:

1. Standards assure that there will be a large market for a particular piece of encryption equipment or software, encouraging economies of scale in production.

2. Standards allow products from multiple vendors to communicate, giving the purchaser more flexibility in equipment selection and use.

3. Standards facilitate competition, leading to better products at lower prices.

The principal disadvantage of standards is that they tend to "freeze" technology. By the time a standard is developed, subjected to review and compromise, and promulgated, more efficient technologies may have been developed. Products developed under these systems may be delayed for acceptance into the market.

It should be noted that many cryptographic standards are voluntary. Manufacturers voluntarily implement a product that conforms to a standard if they perceive a benefit to themselves; there is no legal requirement to conform. The CPC embraced standardization because: 1) standards have been developed on a basis of a broad consensus and 2) customer demand for standardized products encourages continuous improvement and implementation by consortium members.

Open Systems Interconnect (OSI) Model

How many ways can data require protection in a computer system? Conventional wisdom says three: 1) data at rest, 2) data in motion, and 3) data in use (being processed). Data protection is accomplished by means of hardware, software, or a combination of both.

Currently, communications are usually analyzed using the International Organization for Standardization's Open Systems Interconnection Reference Model (ISO OSI-RM). The purpose of the OSI-RM was to provide a framework for developing communication protocol and service standards that would allow interworking of equipment from many different vendors. The OSI model breaks communications down into seven layers. Refer to Table 2-1 for the OSI model layers and their information processing/transfer functions. Table 2-2 presents OSI data processing and transfer protocols and equivalents for each layer [DATA]. Within this architecture, standards have been developed at all seven layers to support distributed computing. With the development of the OSI model, vendors were expected to quickly provide standardized communications facilities. This did not happen; the last decade has seen slow acceptance of standardized communications products.

Truly interoperable, distributed, standardized (universal) processing requires more than the basic protocols at the seven layers. Issues concerning the choice of networks and the ability to control and manage configurations had to be addressed (e.g., ISDN protocols).

Four key areas where alternative standards are developing to provide the customer with improved functionality are:

Internetworking: Both connectionless and connection-mode internetworking standards have been developed. Routing issues are being addressed.

WAN (wide area network): New standards for ATM (asynchronous transfer mode), SONET (synchronous optical network) and frame relay are being revised.

LAN and MAN (local and metropolitan area networks): FDDI (fiber distributed data interface) and 802.6 MAN standards are evolving.

Network Management and Security: The nub of complex networking is effective network management and security.

OSI-related standards have reached a level of maturity and functionality that makes standardized network management and security products practical. On the quality-assurance side, in the last five years the number of commercial firms registering to ISO 9000 standards has quadrupled in both Europe and the United States. Another useful result of international standards is international standardized profiles (ISPs). These profiles provide specifications that allow multiple vendors to build products that work together for specific application areas.

TABLE 2-1

OSI Model Communication Layers & Their Information Processing/Transfer Functions

7 - APPLICATION: Provides the interface for applications to access the OSI environment through the lower layers. Supports functions such as file transfer, virtual terminal, electronic mail, establishing the authority to communicate, and systems and applications management.

6 - PRESENTATION: Formats data received from Layer 7: character code conversion, terminal standards, data compression, display rules.

5 - SESSION: Negotiation and establishment/termination of connections with other nodes. Manages and synchronizes the direction of data flow. Coordinates interaction among applications.

4 - TRANSPORT: Provides end-to-end data transfer between applications, data integrity, and service quality. Assembles data for routing by Layer 3.

3 - NETWORK: Routes and relays data across multiple networks.

2 - DATA LINK: Transfers data from one network node to another over a transmission circuit. Performs data integrity functions.

1 - PHYSICAL: Transmits the bit stream over a communication medium.

TABLE 2-2

ISO Data Processing, Transfer Protocols & Equivalents

7 - Application
  ISO:   8571 (FTAM), 10021 (MHS), 9041 (VT), 10026 (DTP), 9594 (DS), 8613 (ODA), 9579 (RDA), 9596 (CMIP)
  ITU-T: X.400 (MHS), X.500 (DS), T.410/T.73 (ODA)
  ANSI:  --
  ECMA:  ECMA-101 (ODA)

6 - Presentation
  ISO:   8823 (connection), 9576 (connectionless)
  ITU-T: X.226
  ANSI:  --
  ECMA:  --

5 - Session
  ISO:   8327 (connection), 9548 (connectionless)
  ITU-T: X.225
  ANSI:  X3.153
  ECMA:  ECMA-75

4 - Transport
  ISO:   8073 (TP0-TP4, connection), 8602/8072 (connectionless)
  ITU-T: X.224
  ANSI:  X3.140
  ECMA:  ECMA-72

3 - Network
  ISO:   8208 (layers 1-3), 8878 (use w/8208), 8348 (connection), 8473 (connectionless), 9542 (IS-IS), 8880 (LAN), 8881 (X.25 on LANs)
  ITU-T: X.25, X.213
  ANSI:  --
  ECMA:  ECMA-92

2 - Data Link
  ISO:   7776 (LAPB), 3309 (HDLC), 8802.2-.7 (LAN; IEEE 802.2-.7)
  ITU-T: X.25
  ANSI:  X3.66
  ECMA:  ECMA-40, ECMA-81, -82, -89, -90

1 - Physical
  ISO:   9314 (FDDI), 2110 (EIA-232D), 4902 (EIA-449), 2593, 4903
  ITU-T: V.24, V.28, V.35, X-series
  ANSI:  X3.139, X3.148, X3.166
  ECMA:  --

OSI Security

From a security standpoint, ISO 7498-2 may be the most important standard in the business. ISO 7498-2, OSI Basic Reference Model - Part 2: Security Architecture, establishes an OSI security framework. It provides a functional assignment of security services and mechanisms to OSI layers. The OSI security architecture addresses network security (protecting data in transmission from terminal to user or from computer to computer) rather than single-system security. Three concepts form the basis for the security architecture:

1. Security threats: actions that compromise the security of information owned by an organization.

2. Security mechanisms: communications mechanisms designed to detect, prevent, or recover from a security threat.

3. Security services: communications services enhancing the security of an organization's data processing systems and information transfers. These services are intended to counter security threats.

ISO has been developing standards that elaborate on the concepts in 7498-2 and specify procedures and protocols for implementation of security services. Table 2-3 lists these key ISO standards.

Security Threats

Computer and network security generally addresses secrecy, integrity and availability requirements. Secrecy requires that information be accessible only to authorized parties.

Integrity means that computer-system assets can be modified (in any form) only by authorized parties and that modifications can be detected. Availability requires that computer-system assets be available to authorized parties.

There are four categories of security threats to the normal flow of information in a network: 1) interruption (a threat to availability), 2) interception (a threat to secrecy), 3) modification (a threat to integrity) and 4) fabrication (a threat to integrity). Table 2-4 (definitions from ISO 7498-2) lists the types of threats that might be faced in the context of network security.

TABLE 2-3

Key OSI Security Standards

ISO 7498-2     OSI Basic Reference Model - Part 2: Security Architecture

ISO 8649 AM 1  Service Definition for the Association Control Service Element - Amendment 1: Authentication during Association Establishment

ISO 8650 AM 1  Protocol Specification for the Association Control Service Element - Amendment 1: Authentication during Association Establishment

ISO 9160       Data Encipherment - Physical Layer Interoperability Requirements

DIS 9796       Security Techniques: Digital Signature Scheme Giving Message Recovery

ISO 9797       Data Cryptographic Techniques: Data Integrity Mechanism Using a Check Function Employing a Block Cipher Algorithm

DIS 9798-1     Security Techniques: Entity Authentication Mechanisms - Part 1: General Model

CD 9798-2      Security Techniques: Entity Authentication Mechanisms - Part 2: Entity Authentication Using Symmetric Techniques

CD 9798-3      Security Techniques: Entity Authentication Mechanisms - Part 3: Entity Authentication Using Public Key Algorithms

DIS 10116      Modes of Operation for an n-Bit Block Cipher

CD 10181-1     Security Frameworks - Part 1: Overview

CD 10181-2     Part 2: Authentication Framework

CD 10181-3     Part 3: Access Control

CD 10181-4     Part 4: Non-Repudiation

CD 10181-5     Part 5: Integrity

CD 10181-6     Part 6: Confidentiality

CD 10181-7     Part 7: Secure Audit Framework

DIS 10736      Transport Layer Security Protocol

CD 10745       OSI Upper Layers Security Model

Refer also to Appendix, Table A-1 that details additional ISO Standards for generic cryptographic techniques, for example, digital signatures.

TABLE 2-4

ISO 7498-2 Security Threats

Threat: A potential violation of security.

  • Accidental: A threat with no premeditation, such as software bugs or malfunctions.

  • Intentional: A premeditated threat; when realized, an attack.

  • Passive: Unauthorized disclosure of information without changing the state of the system.

  • Active: Deliberate unauthorized change to the state of the system.

Release of Message Contents: Data transmission read by an unauthorized user.

Traffic Analysis: Inference of information from the flow of traffic (presence, absence, amount, direction, frequency).

Masquerade: Pretense by an entity to be another entity.

Replay: Occurs when a message, or part of one, is repeated to produce an unauthorized effect.

Modification: Alteration of data without detection.

Denial of Service (DOS): The prevention of authorized access to resources or the delaying of time-critical operations.

Security Mechanisms

ISO 7498-2 also discusses security mechanisms that are implemented in a specific layer of the OSI architecture and those that are used at any layer of the model. (Refer to Tables 2-5 and 2-6). The OSI security architecture distinguishes between specific security mechanisms and pervasive security mechanisms. Specific security mechanisms may be incorporated into the appropriate (N) layer in order to provide some of the OSI security services. Pervasive security mechanisms are not specific to any particular OSI layer or OSI security service.

TABLE 2-5

Specific Security Mechanisms

Encipherment

Use of mathematical algorithms to transform data into a form that is not readily intelligible. The transformation and subsequent recovery of the data depend on the algorithm and one or more encryption keys.

Traffic padding

Insertion of bits of meaningless data into a data stream to frustrate traffic-analysis attempts.

Authentication exchange

Mechanism to ensure the identity of an entity by means of information exchange.

Digital signature

Data appended to, or a cryptographic transformation of, a data unit that allows the recipient of the data unit to prove the source and integrity of the data unit. It is a means to bind a user to data.

Access control

Mechanisms used to enforce access rights to resources.

Data integrity

Mechanisms used to assure the integrity of a data unit or stream.

Routing control

Selection of secure routes for certain data; allows routing changes, especially when security has been breached.

Notarization

Trusted third party assurance of properties of data exchange.

Stallings presents several tables showing placement of security services and mechanisms in the various OSI layers.

TABLE 2-6

Pervasive Security Mechanisms

Trusted functionality

That which is perceived to be correct with respect to some criteria, e.g., as established by a security policy.

Security label

A marking bound to a resource that names or designates the security attributes of that resource.

Event detection

Detection of security-relevant events.

Security-Audit Trail

Independent data for surveillance.

Security recovery

Permits recovery actions when events require them or when security management so indicates.

Security Services

The OSI security architecture distinguishes between five classes of security services: authentication, access control, data confidentiality, data integrity, and non-repudiation.

Table 2-7 shows the relationship between ISO layers, cryptographic protocols, and security services.

TABLE 2-7

ISO Layers, Cryptographic Protocols & Security Services

7 - APPLICATION
  Protocols: X.400, MSP, PEM, S/MIME, PGP, X.509, DNS Security, S-HTTP, key management, certificate management
  Services: entity authentication, origin authentication, access control, message integrity, message stream integrity, selective field confidentiality, traffic flow confidentiality, connection integrity, selective field integrity, non-repudiation

6 - PRESENTATION
  Protocols: (none listed)
  Services: selective field confidentiality

5 - SESSION
  Protocols: SSL
  Services: (none listed)

4 - TRANSPORT
  Protocols: TLSP
  Services: entity authentication, origin authentication, access control, message integrity, message stream integrity, connection integrity

3 - NETWORK
  Protocols: NLSP, ESP, AH
  Services: entity authentication, origin authentication, access control, message integrity, message stream integrity, traffic flow confidentiality, connection integrity

2 - DATA LINK
  Protocols: SILS
  Services: message integrity, message stream integrity

1 - PHYSICAL
  Protocols: synchronous link
  Services: message stream integrity, traffic flow confidentiality

Components of Authentication Systems for Secure Networks

A) ONE-WAY HASH FUNCTIONS

One-way functions are of central importance in cryptography. Informally speaking, a one-way function is easy to compute but hard to invert. A function f: A --> B is one-way if f(x) is easy to compute for all x in A, but, given y in f(A), it is computationally infeasible to find an x in A such that f(x) = y.

A one-way function need not be invertible, and distinct input values may be mapped to the same output value. If f is a one-way hash function and it is also computationally infeasible to find two distinct x1, x2 in A such that f(x1) = f(x2), then the function is called collision-resistant. Examples of collision-resistant one-way hash functions are MD4 (Rivest, 1992), MD5 (Rivest and Dusse, 1992), and the Secure Hash Standard (SHS) proposed by NIST under FIPS 180 (NIST, 1993).
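As a quick sketch of these properties (in Python, whose standard library happens to include the MD5 and SHA functions named above):

```python
import hashlib

# Forward direction: digests are cheap to compute, and the output size
# is fixed regardless of the input size.
msg = b"attack at dawn"
md5_digest = hashlib.md5(msg).hexdigest()    # 128 bits -> 32 hex chars
sha_digest = hashlib.sha1(msg).hexdigest()   # 160 bits -> 40 hex chars
assert len(md5_digest) == 32 and len(sha_digest) == 40

# Because arbitrarily many inputs map into a fixed output space,
# collisions must exist; collision resistance means no one can
# feasibly exhibit one. Even a one-word change in the input yields
# a completely different digest.
assert hashlib.md5(b"attack at dusk").hexdigest() != md5_digest
```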

B) SYMMETRIC KEY CRYPTOGRAPHY

In symmetric or secret key cryptography, a secret key is established and shared between communicating parties, and this key is used to subsequently encrypt and decrypt messages. Nichols and Bauer are two excellent references on the algorithms supporting secret key cryptography.

Examples of secret-key cryptosystems in widespread use today are the Data Encryption Standard, DES (NIST, 1977); triple DES; the International Data Encryption Algorithm (IDEA) (Lai, 1992); and RC2, RC4, and RC5 (Rivest, 1995). Other well-known algorithms, like FEAL, have fallen out of use because of the invention of differential cryptanalysis.
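The defining property of a symmetric system is that one shared key serves both directions. A minimal sketch, substituting a toy hash-derived XOR keystream for a real algorithm such as DES or IDEA (the function names and construction below are illustrative assumptions, not any standardized cipher):

```python
import hashlib
from itertools import count

def keystream(key: bytes):
    """Derive an endless byte keystream from the shared secret by
    hashing key || counter. A teaching sketch only, not a vetted
    cipher design."""
    for i in count():
        yield from hashlib.sha1(key + i.to_bytes(8, "big")).digest()

def crypt(key: bytes, data: bytes) -> bytes:
    """XOR the data against the keystream. The same call both
    enciphers and deciphers -- the defining trait of a symmetric
    (secret-key) system."""
    ks = keystream(key)
    return bytes(b ^ next(ks) for b in data)

shared = b"the shared secret"   # established between the two parties
ciphertext = crypt(shared, b"wire transfer: $500")
assert crypt(shared, ciphertext) == b"wire transfer: $500"
```

The whole burden of security here rests on `shared` staying secret, which is exactly the key-distribution problem that public-key cryptography, below, addresses.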

C) PUBLIC-KEY CRYPTOGRAPHY

The idea of one-way functions with trapdoors led to the invention of public-key cryptography (Diffie and Hellman, 1976). Public-key cryptosystems employ pairs of mathematically related keys: a public key and a private key. It is computationally infeasible to derive the private key from the public key. The most widely deployed public-key cryptosystem is RSA, invented by Rivest, Shamir, and Adleman at the Massachusetts Institute of Technology (MIT) in 1978. Other public-key systems in use and referred to in the standards include Elliptic Curve Cryptosystems (ECC), Diffie-Hellman, and discrete-log systems.

Public-key cryptography is more convenient than secret-key cryptography because the two parties need not first share a secret key; hence, the key-distribution problem is not as complex. Public-key cryptography also makes it possible to place authentication information under the direct control of the system user. This is especially helpful for access control, since secret information need not be distributed throughout the system. The application of public-key cryptography requires an authentication framework that binds users' public keys to users' identities. A public-key certificate is a certified proof of such a binding, vouched for by a trusted third party called a certification authority (CA). The CA removes the need for individual users to verify directly the correctness of other users' public keys.

One of the more important aspects of public-key cryptography is that it enables digital signatures. Section G discusses digital signature schemes. The historical drawback of slow performance on microprocessors has eased in the last five years, helped by the reengineering of operations between the authentication system and the host computer system; algorithms such as ECC provide improved performance as well. Hybrid approaches are widely used, in which public-key cryptography distributes keys for use by secret-key cryptosystems. Pretty Good Privacy (PGP) uses RSA and MD5 for digital signatures, IDEA and RSA for message encryption, ZIP for compression, radix-64 conversion for email compatibility, and segmentation for large messages. Messages are encrypted using IDEA with a one-time session key generated by the sender. The session key is encrypted using RSA with the recipient's public key and is included with the message. A hash code of the message is created using MD5; this message digest is encrypted using RSA with the sender's secret key and included with the message. Menezes and Schneier are two excellent references on the algorithms comprising public-key cryptography.
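The hybrid pattern described above can be sketched with textbook numbers. Everything below is an illustrative assumption: the RSA parameters are toy-sized, the message is invented, and a hash-derived XOR stream stands in for the session cipher role that IDEA plays in PGP. It shows only the wrap-the-session-key idea, not PGP itself:

```python
import hashlib
from itertools import count

# Textbook RSA with toy parameters (p=61, q=53); real keys are
# hundreds of digits long. d satisfies e*d = 1 mod (p-1)(q-1).
n, e, d = 61 * 53, 17, 2753

def crypt(key: bytes, data: bytes) -> bytes:
    """Stand-in symmetric cipher for the session stage: XOR against a
    hash-derived keystream. A sketch only, not a vetted design."""
    ks = (b for i in count()
            for b in hashlib.sha256(key + i.to_bytes(8, "big")).digest())
    return bytes(x ^ next(ks) for x in data)

# Sender: pick a one-time session key, encrypt the message with it,
# and wrap the session key using the recipient's RSA public key (n, e).
session_key = 1234                    # must be < n in this toy setting
wrapped = pow(session_key, e, n)      # RSA encryption of the session key
ct = crypt(str(session_key).encode(), b"meet at the bridge")

# Recipient: unwrap with the private exponent d, then decipher.
recovered = pow(wrapped, d, n)
pt = crypt(str(recovered).encode(), ct)
assert recovered == session_key and pt == b"meet at the bridge"
```

The slow public-key operation is applied only to the short session key, while the fast symmetric cipher carries the bulk of the message, which is precisely why hybrid designs dominate in practice.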

D) AUTHENTICATION SYSTEMS

Authentication refers to the process of verifying someone’s claimed identity. Techniques are divided into three categories, depending on whether a technique is based on:

  1. Something the claimer knows (proof of knowledge),
  2. Something the claimer possesses (proof of possession), or
  3. Some biometric characteristic of the claimer (proof of property).

Examples of the first category are personal identification numbers (PINs), passwords and transaction authentication numbers (TANs), whereas examples of the second category are keys, identification cards and personal tokens. Fingerprints, retinal images, voice patterns and DNA are biometric characteristics that may be used in the third category.
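Proof of knowledge, the first category, can be sketched as a salted password check: the verifier stores only a salted digest, and the claimant proves knowledge by reproducing the password. The routine and names below are illustrative, using a modern key-derivation function from Python's standard library as a stand-in for whatever a given product would use:

```python
import hashlib, hmac, os

def enroll(password: str):
    """Verifier side: store a random salt and the derived digest,
    never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    """Claimant proves knowledge by reproducing the stored digest.
    compare_digest avoids leaking information through timing."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, stored = enroll("correct horse")
assert verify("correct horse", salt, stored)
assert not verify("wrong guess", salt, stored)
```

The salt ensures that identical passwords enrolled by different users yield different stored digests, frustrating precomputed-dictionary attacks.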


There are several well-known authentication and key-distribution systems, among them Kerberos, NetSP, SPX, TESS, SESAME, and OSF DCE. These systems differ in various respects. In the following tables, AU means authentication services, DC data confidentiality, DI data integrity, AC access control, NR non-repudiation, and OWHF one-way hash function; SKC and PKC refer to secret- and public-key cryptosystems.

Table 2-8 shows a comparison of security services provided by the systems, Table 2-9 shows the cryptographic techniques used by the systems and Table 2-10 shows the standards to which these systems conform.

TABLE 2-8

Security Services Provided by the Systems

System     AU   DC   DI   AC   NR
Kerberos   X    X    X    -    -
NetSP      X    X    X    -    -
SPX        X    X    X    -    -
TESS       X    X    X    -    X
SESAME     X    X    X    X    -
OSF DCE    X    X    X    X    -

TABLE 2-9

Cryptographic Techniques Used by the Systems

System     OWHF   SKC   PKC
Kerberos   X      X     -
NetSP      X      X     -
SPX        X      X     X
TESS       X      X     X
SESAME     X      X     X
OSF DCE    X      X     X

TABLE 2-10

Standards to Which the Systems Conform

Kerberos   RFC 1510, GSS-API
NetSP      GSS-API
SPX        RFC 1507, ITU-T X.509, GSS-API
TESS       RFC 1824
SESAME     ITU-T X.509, GSS-API, ECMA
OSF DCE    GSS-API

E) KEY MANAGEMENT AND DISTRIBUTION

Key management involves the secure generation, distribution, storage, journaling, and eventual disposal of encryption keys. Keys can be distributed by escorted courier, on magnetic media, or via master keys that are then used to generate additional keys.

Cryptographically protected data are dependent on the protection of the encryption keys.

The theft, loss or compromise of a key can compromise the entire system.

ISO, ANSI, the federal government and the American Bankers Association have developed standards for key management. Key management is crucial to maintaining good, cost-effective, and secure communications among a large number of users.

Most of the security services associated with the OSI security architecture are based on cryptographic mechanisms, and the use of these mechanisms requires key management, which is carried out by protocols. Unfortunately, the security of many of these protocols depends not on the underlying cryptographic algorithms but on the structure of the messages themselves. The IEEE 802.10 standard supports three classes of key-distribution systems: manual key distribution, center-based key distribution, and certificate-based key distribution. Oppliger details these classes.

F) KEY RECOVERY

Key management techniques exist to manage the lifecycle of cryptographic keys, including the creation, distribution, validation, update, storage, usage, and expiration of keys. Should keys become forgotten, damaged, or otherwise unavailable to authorized parties, information recovery techniques are necessary to allow recovery of plaintext, typically by first recovering the key through reconstruction or retrieval. Such techniques are an integral part of commercial key management. An independent motivating factor for key recovery technologies is that some countries regulate the use, export, or import of strong encryption, with governments citing law enforcement needs to access encrypted data. The use of particular key recovery technologies favored by government authorities may thus be necessary to sell or use products in such countries. Several techniques have been proposed to provide for key recovery. Related terms include key escrow, commercial key recovery, cryptographic backup and recovery, and trusted third party (TTP) techniques. These terms are defined differently by various communities of interest, and the associated schemes have overlapping and sometimes poorly delineated differences.

All techniques for key recovery may be viewed as having a position on a wide continuum. Because of this broad spectrum, the only way to clearly identify each key recovery technique is by listing its specific characteristics, rather than using terms with many definitions. Different characteristics have advantages in different environments, so there is no "best" key recovery technique.

 

Selected Techniques

Key recovery enables authorized persons to access the plaintext of encrypted data when the decryption key is not available. Key recovery is a broad term that applies to many different techniques. The following are selected techniques identified by their characteristics.

Key Escrow

Key escrow involves storing keys or key parts directly with one or more escrow agents. Information recovery requires that the escrow agent(s) facilitate plaintext recovery by providing the necessary key or key parts or by actually decrypting the information using the escrowed key. Advantages of this technique include user selection of the escrow agent(s) and dispersion of key parts among multiple agents, which avoids a single point of attack.

The disadvantages include dialog overhead with a third party during cryptographic key initiation, and storage requirements for escrow agent(s).
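The dispersion of key parts among escrow agents can be illustrated with a simple XOR secret-splitting scheme. The sketch below is purely illustrative (the function names and key sizes are our own, not drawn from any particular product); each escrow agent holds one share, and all shares must be combined to rebuild the key:

```python
import secrets

def split_key(key: bytes, n_agents: int) -> list[bytes]:
    """Split a key into n XOR shares; every share is needed to rebuild it."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n_agents - 1)]
    last = key
    for share in shares:
        last = bytes(a ^ b for a, b in zip(last, share))
    shares.append(last)
    return shares

def recover_key(shares: list[bytes]) -> bytes:
    """XOR all escrowed shares back together to recover the original key."""
    key = bytes(len(shares[0]))
    for share in shares:
        key = bytes(a ^ b for a, b in zip(key, share))
    return key

key = secrets.token_bytes(16)          # e.g. a 128-bit session key
shares = split_key(key, 3)             # one share per escrow agent
assert recover_key(shares) == key      # all three agents must cooperate
```

A threshold scheme such as Shamir's secret sharing is often preferred in practice, since it allows recovery when only k of n agents cooperate rather than requiring unanimity.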

TTP: Key Distribution Center

One type of TTP scheme involves creation and distribution of encryption session keys by a trusted third party other than the principals involved in the secured communications. Such a scheme was defined in the banking standard ANSI X9.17, where the TTP is called a Key Distribution Center (KDC). If a KDC is adapted to store a copy of the session key for later retrieval, it may serve as an escrow agent. One disadvantage of such a scheme is the requirement for user interaction with the on-line KDC for each session key; i.e., additional supporting infrastructure is required. A second disadvantage is that escrow-agent storage of each session key may result in a large storage requirement and cost. Both these issues impact scalability. A third concern, particularly if the TTP is an external party, is the potential for key compromise at the TTP.

Commercial Key Backup (Long-term Keys)

Another type of TTP scheme involves an "internal" third party run by the organization with which the end-user is affiliated. This party stores (and/or creates) a backup copy of each user’s private asymmetric decryption key. Session keys are generated by end-users and are made available to another user by encrypting such keys under that user’s public key. The commercial key backup scheme is configured to return a copy of private keys to the end users, if necessary, but does not have interfaces available to allow itself access to these keys. Advantages of this scheme include:

    1. Interaction with the third party is not required on a per-session basis;
    2. If an end-user loses a private key, a single interaction allows the user to recover data from a large number of files, rather than a key recovery transaction for each stored enciphered message (e.g., mail message);
    3. Concerns about trusting an external party do not arise (an entity in a corporate environment is assumed to trust its affiliated organization); and
    4. The same infrastructure used for on-going key life-cycle management (e.g., key update) may be used for key recovery.

External-Party Session Key Access

The commercial key backup scheme may be altered to enable an authorized external party to have access to (but not actually possess) all user session keys. In this case, the encryption system of each user would additionally encrypt the session key under the public key of the external party. The intended recipient can still gain access to the session key (should it lose its own private key), by providing ciphertext containing the extra encrypted key (but not necessarily the encrypted user data itself) to the external party, together with proper authentication. The external party can gain access to the session key to allow decryption of any available ciphertext.
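The session-key wrapping described above can be sketched with a toy RSA example. The tiny primes and the integer "session key" below are illustrative assumptions only, never a real implementation; the point is that the same session key is encrypted under two public keys, so either the intended recipient or the authorized external party can unwrap it:

```python
# Toy RSA key pairs (tiny illustrative primes -- never use at this size).
def make_keypair(p, q, e=17):
    n, phi = p * q, (p - 1) * (q - 1)
    d = pow(e, -1, phi)                 # private exponent
    return (e, n), (d, n)

recipient_pub, recipient_priv = make_keypair(61, 53)
external_pub, external_priv = make_keypair(67, 71)

session_key = 42                        # stands in for a real symmetric key

# The sender wraps the same session key under both public keys:
wrapped_for_recipient = pow(session_key, *recipient_pub)
wrapped_for_external = pow(session_key, *external_pub)

# Either authorized holder of a private key can unwrap it:
assert pow(wrapped_for_recipient, *recipient_priv) == session_key
assert pow(wrapped_for_external, *external_priv) == session_key
```

The extra wrapped copy travels with the ciphertext, so the external party never stores any per-session material; it only uses its private key on demand.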

Key Encapsulation

In this type of scheme, neither a session key nor a long-term key is stored or escrowed with an outside party. Instead, recovery information is associated with each encrypted message (or file to be archived) to support key recovery at a later time. Here, the TTP or ‘key recovery agent’ is required to process or unpack the recovery information and to provide it to the person authorized to recover the key or plaintext. The key recovery agent, however, cannot derive the decryption key on its own; the authorized requester possesses additional, essential information for key recovery, which limits the liability of the agent. An advantage here, as with commercial key backup and external-party session key access, is minimal overhead during normal operations. Another advantage is the avoidance of any outside party as a single point of attack, provided two or more key recovery agents are used in each jurisdiction.

Key Recovery Stages

Several distinct stages may be identified for a generic key recovery scheme, beyond the usual cryptographic preparations. These include:

    1. Selection stage: selecting an entity or entities (if any), as a trusted third party, to be involved in the key recovery process.
    2. Parameter set-up stage: each principal or end-user obtains any necessary cryptographic parameters and/or keying material to allow preparation of recovery parameters for later use (e.g., per message or session) in key recovery. Some administrative and/or address information may also be required to allow subsequent communication with entities in the recovery process.
    3. Preparation stage: preparation of parameters allowing subsequent recovery of the key, on a per-message or per-session basis. It is desirable that this stage not involve communication with the parties selected in the first stage or any party other than the principals in the communication (i.e., be self-contained).
    4. Recovery stage: recovery of the key from available parameters, at some indefinite future time. This stage typically involves communication with the parties selected in the selection stage.

Since the communicating principals originally know the key, one option is to pre-authorize these parties to invoke key recovery, in order to allow their recovery of destroyed or mismanaged keys. Such pre-authorization parameters could be embedded in the preparation and recovery processes. To scale to the dimensions of the envisioned global information infrastructure, key recovery techniques should be analyzed with respect to potential scalability. For example, since the preparation stage will be invoked far more frequently than the recovery stage, the computational burden/overhead should be minimized in the preparation stage.

Advanced Features

Features emphasizing robustness and immunity to cryptanalytic attack may enhance key recovery schemes. Related concepts include dispersion, collusion-resistance, integrity checks, and residual work factor.

Dispersion

Dispersion refers to the property that joint (but not necessarily unanimous) participation by multiple, designated entities be an essential step in key recovery. Here the key recovery preparation stage may involve multiple pieces of information; each associated with an independent key recovery agent (a TTP). Ideally, this information is associated with but not physically passed to the agent; this allows modularity (the preparation stage may be added to existing cryptographic applications with minimal effort) and self-containment (infrastructure overhead required to support key recovery is reduced).

Collusion-Resistance

While dispersion thwarts collusion among the (supposedly) trusted key recovery agents, an additional goal in some environments might be collusion resistance of key recovery; even collusion among the key recovery agents is not sufficient to allow their own recovery of the original key. With collusion resistance, no keys are exposed outside the end-user’s system, except to the authorized requester.

Residual Work Factor

A residual work factor allows the communicating parties to retain a variable amount of information needed for key recovery, which must then be recovered through cooperation or trial-and-error techniques. A residual work factor option can be used to increase the overall work effort involved in recovery, to discourage "casual" recovery requests and to keep part of the overall security of key recovery in the hands of the user. Use of the residual work factor is optional, depending on user or regulatory requirements. For example, a residual work factor may be implemented by escrowing the 16-bit difference between 56-bit DES and the 40-bit exportable version.
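The residual work factor idea can be demonstrated at toy scale. In the sketch below the sizes and the hash-based check value are our own assumptions (24-bit key, 8 escrowed bits, rather than the 56 and 16 of the DES example): the escrow agent holds only the top bits, and recovery must brute-force the residual bits:

```python
import hashlib
import secrets

KEY_BITS, ESCROWED_BITS = 24, 8        # toy sizes; the DES example uses 56 and 16
RESIDUAL_BITS = KEY_BITS - ESCROWED_BITS

key = secrets.randbits(KEY_BITS)
# A known check value standing in for a known-plaintext test on real ciphertext.
check = hashlib.sha256(key.to_bytes(3, "big")).digest()

# The escrow agent holds only the top 8 bits of the key.
escrowed_part = key >> RESIDUAL_BITS

def recover(escrowed: int) -> int:
    """Recover the full key from the escrowed bits by exhaustive search."""
    base = escrowed << RESIDUAL_BITS
    for residual in range(2 ** RESIDUAL_BITS):
        candidate = base | residual
        if hashlib.sha256(candidate.to_bytes(3, "big")).digest() == check:
            return candidate
    raise ValueError("key not found")

assert recover(escrowed_part) == key   # 2**16 trials instead of 2**24
```

The escrowed bits shrink the search space by a factor of 2**8 here; in the DES example, escrowing 16 bits reduces the authorized party's work from 2**56 to the 2**40 effort of the exportable version.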

Integrity Checks

Inherent process integrity checks may be built into the key recovery technology to allow users, observers, and involved third parties to ascertain that the technology is being correctly invoked and to guarantee that key recovery preparation has not been circumvented. The moment of key recovery is too late to learn that the recovery technology has been improperly applied or bypassed.

Even though integrity checks may be needed for the ‘confidence’ of the commercial user, such checks are primarily required by law enforcement or national security interests to guarantee that the key recovery technology is not being circumvented.

G) DIGITAL SIGNATURES AND NOTATIONS

RSA and the Digital Signature Algorithm (DSA) are the best-known digital signature algorithms. The latter was designed by the NSA and approved by NIST for government use. NIST has supported the DSA because its digital signature operation is separated from encryption. Both are tools for authenticating the origin and integrity of a message and the identity of its sender.

A digital signature (a mathematical algorithm) verifies the signer, is not reusable, cannot be forged or repudiated, and proves that the sender signed an unaltered document. The DSA is based on SHA (the Secure Hash Algorithm), which is described in FIPS PUB 180, "Secure Hash Standard". The DSA builds upon the T. ElGamal signature system and later work by C. Schnorr. All these systems rely for their security on a problem known as the discrete log: given a prime p, a base g, and an exponent a, it is easy to compute g^a mod p; but given a value n, it is computationally infeasible to find an a such that g^a mod p = n. That is, it is hard to take the discrete log of n. ECDSA is the elliptic curve analog of the DSA. It offers performance, bandwidth, and space advantages over the DSA and comprises the ANSI X9.62 standard.
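The asymmetry of the discrete log problem is easy to demonstrate. In the Python sketch below the small prime is illustrative only; real systems use primes of hundreds of digits, which makes the exhaustive search hopeless while leaving the forward computation fast:

```python
# A small prime and base, for illustration only; real systems use primes
# of hundreds of digits, making the search below infeasible.
p, g = 2039, 7
a = 811                        # the secret exponent

n = pow(g, a, p)               # forward direction: one fast modular exponentiation

def discrete_log(g, n, p):
    """Recover a from g**a mod p by exhaustive search, the only generic option."""
    for x in range(p):
        if pow(g, x, p) == n:
            return x
    return None

assert discrete_log(g, n, p) == a
```

The forward call costs a handful of multiplications even for huge p; the reverse search grows with the size of the group, which is the gap these signature schemes exploit.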

In general, a digital signature scheme consists of three algorithms: a key generation algorithm that randomly selects a public/private key pair; a signature algorithm that takes as input a message and a private key and generates as output a digital signature for the message; and a signature verification algorithm that takes as input a digital signature and a public key and outputs a bit signifying whether the signature is consistent with some valid message for the private key corresponding to the given public key.

In general, the bandwidth limitation of public key cryptography is unimportant, because signatures are computed over short one-way hash values rather than full messages. However, in environments where long key lengths are used, bandwidth may still be a limiting factor.

H) DISCRETE LOG SIGNATURE SCHEMES

One of the more significant signature schemes draws its strength from the discrete log problem, and many of the digital cash systems on the Internet use this algorithm. No one knows an efficient way to reverse the computation g^a mod p if p is a large prime, g is a generator, and a is an integer. Reversing the computation means receiving g^a mod p and determining a.

The classic failure in many security systems comes when the attacker learns the password. The discrete log problem can also be used to build protocols that reveal "zero knowledge" to an attacker: a user proves knowledge of a secret without ever transmitting it, so even an eavesdropper who records every exchange learns nothing useful. Such a system is also known as a challenge-and-response protocol. It is described in plain terms in the next segment.
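A challenge-and-response round of this kind can be sketched in the Schnorr identification style. The parameters below are toy values chosen for illustration (in particular, q divides p - 1 and g generates the subgroup of order q); the prover demonstrates knowledge of the secret x without ever sending it:

```python
import secrets

# Toy public parameters (illustration only): q divides p - 1, and g = 4
# generates the subgroup of order q in the multiplicative group mod p.
p, q, g = 2039, 1019, 4

x = secrets.randbelow(q - 1) + 1       # prover's long-term secret ("password")
y = pow(g, x, p)                       # public value registered with the verifier

# One round of the identification protocol:
k = secrets.randbelow(q - 1) + 1       # fresh per-round randomness
t = pow(g, k, p)                       # 1. prover commits
c = secrets.randbelow(q)               # 2. verifier issues a random challenge
s = (k + c * x) % q                    # 3. prover responds; x itself is never sent

# The verifier checks the response against the commitment and public value.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
```

The check works because g^s = g^(k + cx) = t * y^c (mod p); an eavesdropper who sees t, c, and s still faces the discrete log problem to learn x.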

I) SIMPLE CRYPTOGRAPHIC NETWORKS

To form a cryptographic network, each network user should be provided with the same algorithm but with different keys so that messages sent by one node in the network can be deciphered only by the intended recipient node. Figures 2-2 to 2-4 show three different cryptographic networks. Each Kn represents a different key.

When end-to-end encryption is used, both the sender and receiver must be equipped with compatible hardware. After authenticating each other, the two units exchange encrypted data. Messages are encrypted by the sender and decrypted only at the final destination.

Link encryption involves a series of nodes, each of which decrypts, reads, and then re-encrypts the message as it is transmitted through the network. With link encryption, both the source and the destination remain private, and no synchronization of special equipment is required. However, more nodes mean more opportunities for the message to be intercepted and/or modified.
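The hop-by-hop exposure of link encryption can be seen in a toy sketch. The XOR "cipher" below is purely illustrative, not a real cipher; the point is that every intermediate node recovers the plaintext while re-encrypting it for the next link:

```python
import secrets

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """Toy stream cipher: XOR with a repeating key (illustration only)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

message = b"wire funds at 0900"
link_keys = [secrets.token_bytes(8) for _ in range(3)]   # one key per hop

# Link encryption: each intermediate node decrypts, sees the plaintext,
# then re-encrypts under the next link's key.
ciphertext = xor_crypt(message, link_keys[0])
for inbound, outbound in zip(link_keys, link_keys[1:]):
    plaintext_at_node = xor_crypt(ciphertext, inbound)   # exposed at every node
    ciphertext = xor_crypt(plaintext_at_node, outbound)

assert xor_crypt(ciphertext, link_keys[-1]) == message
```

Under end-to-end encryption, by contrast, only the final destination would hold a key capable of producing `plaintext_at_node`.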

In a hybrid network, a large number of secondary stations communicate with a single main station, each using a separate master key. A few stations also intercommunicate with each other.

It would seem preferable to use a public-key system for cryptography because of its versatility, but it is slower than equivalent private-key cryptosystems by a significant margin. A hybrid system uses the best of both kinds: the speed advantage of private-key cryptography is used for encrypting and transmitting the bulk data, while public-key operations are reserved for the smaller transmissions. A typical hybrid combination employs the public-key system for distribution of the private (symmetric) keys and the private-key system for bulk data.
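A hybrid scheme can be sketched as follows. The toy RSA numbers and the hash-based stream cipher are illustrative assumptions, not a real design; the pattern itself, public-key wrapping of a small session key plus fast symmetric encryption of the bulk data, is what matters:

```python
import hashlib
import secrets

def stream_crypt(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a SHA-256-derived keystream."""
    stream, counter = b"", 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# Toy RSA key for the recipient (illustrative sizes only: p=61, q=53).
e, n = 17, 3233                         # public key
d = 2753                                # private key

bulk_data = b"quarterly sales figures " * 100   # large payload
session_key = secrets.token_bytes(16)

# Fast symmetric encryption for the bulk data...
ciphertext = stream_crypt(bulk_data, session_key)
# ...and slower public-key encryption for the small session key only.
wrapped = [pow(byte, e, n) for byte in session_key]

# The recipient unwraps the session key, then decrypts the bulk data.
recovered_key = bytes(pow(c, d, n) for c in wrapped)
assert stream_crypt(ciphertext, recovered_key) == bulk_data
```

Only 16 bytes ever pass through the expensive public-key operation, regardless of how large the bulk payload grows.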

J) IMPLEMENTATION CONSIDERATIONS

Media

Cryptography can be implemented in software, hardware, or firmware. Software is the cheapest medium but generally the least efficient and least secure.

Configurations

In-line, off-line, embedded, and stand-alone are four different configurations, each with its own requirements, which need to be considered when implementing cryptosystems. The four configurations are:

1. In-line. The communications equipment is external to the cryptosystem; the hand-off to the communications device occurs after encryption.

2. Off-line. The source controls all encryption, storage, and communications facilities.

3. Embedded. Configurations may be off-line or on-line. The main requirement is that the cryptographic module be embedded or contained within the host computer and interface directly with it.

4. Stand-alone. These require that the cryptographic module be separately enclosed outside of the host and physically secured.

NIST FIPS PUB 140-1, entitled "Security Requirements for Cryptographic Modules", describes four levels of security ranging from commercial-grade security to penetration/tamper resistance.

K) CRYPTOGRAPHIC ALGORITHMS

Chapter 8 will introduce some key algorithms to the study of modern cryptography - of special interest is the RSA algorithm. The bibliography presents many excellent references that describe in detail all the cryptographic algorithms examined in this book.

Table 2-11 shows a sampling of cryptographic algorithms and implementations.

TABLE 2-11

Cryptographic Algorithms & Implementations

ALGORITHMS: DES, 3DES, IDEA, RSA, DSA, Skipjack, REDOC, RCn, SEAL, Elliptic Curve (ECC, ECAES, ECDSA), A5, PKZIP, N-HASH, SHA-1, MDn

IMPLEMENTATIONS: IBM Secret-Key Management Protocol, ISDN, STU-III, Kerberos, KryptoKnight, Sesame, Common Cryptographic Architecture, X.509, Privacy-Enhanced Mail (PEM), TIS/PEM, MSP, PGP, PKCS, Clipper, Capstone

L) PROTOCOLS

Table 2-12 shows that the interface protocols and implementations of cryptographic algorithms are quite varied.

TABLE 2-12

OSI Layers, Algorithms, LAN/WAN Interface Protocols & Implementations

7 APPLICATION: file systems, mail, remote login, FTP, directory systems, print services, gateways, network management, network applications

6 PRESENTATION: network applications, network management, gateways

5 SESSION: session control, gateways, sockets

4 TRANSPORT (algorithms: Kerberos): TCP, Novell, SNA, NT, Decnet transport, sequenced packet exchange, gateways, terminal programs

3 NETWORK (algorithms: Kerberos): Decnet routing, IP, SNA, internet packet exchange, X.25, routers, gateways, terminal programs

2 DATA LINK: gateways, frame relay, SMDS, ATM, modems, bridges, switches, token-rings, Ethernet, ARCnet, FDDI, ISDN, PPP, SLIP

1 PHYSICAL: FDDI, Ethernet, ARCnet, token-rings, ATM, ISDN, repeaters, PPP, SLIP

M) ISO CONFORMANCE TESTING

A key element in the practical implementation of ISO is conformance testing, which is intended to assure that a given implementation conforms to an OSI specification. ISO standards for conformance testing are shown in Table 2-13.

TABLE 2-13

ISO Conformance Testing Standards

Title                                  ISO      CCITT

General Concepts                       9646-1   X.290
Abstract Test Suite Specification      9646-2   X.291
Tree and Tabular Combined Notation     9646-3   X.292
Test Realization                       9646-4   X.293
Conformance Assessment                 9646-5   X.294
Protocol Profile Test Specification    9646-6   (none)

International Standardized Profiles

The practical use of the ISO protocols and services has two requirements. First, implementation of ISO protocols and services must conform to relevant standards. Verifying this is the task of conformance testing. Second, any two separate implementations, if they are to participate in a cooperative application, must interwork and interoperate correctly. The two implementations must support compatible options and parameters associated with the protocols at each layer of the OSI model. ISO addresses this latter problem with the concept of International Standardized Profiles (ISP's). This is an ongoing process. The CPC is participating in this process.

Special Topics: Web Security Countermeasures

The Web represents a chaotic and exciting technology and has become the security balancing act of the '90s. McCarthy published a tiered approach to commercial Web security in 1997. He suggests the following procedure for commercial Web users: 1) identify what applications the Web will be used for; 2) based on this stated use for a company Web site, identify the crucial threats; and 3) map these threats to the appropriate protection technologies. He divided commerce on the Web into three basic application types - advertising, secure Internet/intranet (further subdivided into informational and transactional categories), and electronic commerce. He identified nine basic threats to Web security (see Table 2-14) and six safeguards that counter these threats (see Table 2-15).

TABLE 2-14

Nine Basic Threats to Web Sites

  1. Data Destruction: Loss of data on Web Site through accident or malice and interception of traffic (encrypted and unencrypted) both going to/from the Web Site.
  2. Interference: The intentional rerouting of traffic or the flooding of a local Web server with inappropriate traffic in an attempt to cripple or crash the server.
  3. Modification/replacement: Altering of data on either the send or receive side of a Web transmission. The changes, whether they are accidental or not, can be difficult to detect in large transmissions.
  4. Misrepresentation/false use of data: Offering false credentials, passwords, or other data. Also included is posting of a bogus or counterfeit home page to intercept or attract traffic away from its intended destination.
  5. Repudiation: An after-the-fact denial that an on-line order or transaction took place (Especially for 1-800 or 1-900 services.)
  6. Inadvertent misuse: Accidental and inappropriate actions by approved users.
  7. Unauthorized altering/downloading: Any writing, updating, copying, etc., performed by a person who has not been granted permission to conduct such activity.
  8. Unauthorized transactions: Any use by a non-approved party.
  9. Unauthorized disclosure: Viewing of Web information by an individual not given explicit permission to have access to this information.

TABLE 2-15

Six Best Weapons Against Security Threats

  1. User ID/Authentication: Range from simple passwords and callback systems to secure one-time passwords and challenge-response tokens (either hardware cards or software resident).
     Usage: All Web users.
  2. Authorization: The network confirms identity and grants access. Typical approaches include access control lists, authorization certificates, and directory services.
     Usage: Secondary-level protection to prevent data modification.
  3. Integrity control: Aimed at the data, not the user. The two key methods are encryption and message authentication, which can ensure that a message has not been altered on its way to the receiver and has not been read by anyone else.
     Usage: Excellent for validating secure Internet electronic commerce transactions.
  4. Accountability: Web managers use various tools to monitor responsibility and ownership. Methods include audit trails, Web server logs, and receipts.
     Usage: Accountability is the backbone of enforceable and traceable security policies and practices.
  5. Confidentiality: The keystone of most Web security policies. The technology is aimed at preventing unauthorized disclosure, interception, or both. Encryption is the central safeguard; this can mean end-to-end encryption on the network as well as layered encryption of files, protocols, and secured links.
     Usage: These techniques are geared toward data content that must be held strictly off-limits to certain users.
  6. Availability controls: Protect the integrity of the Web site itself. Technology includes virus protection software and backup/redundancy features.
     Usage: Protection of the Web site and its associated data.
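The message-authentication method named under integrity control can be illustrated with a standard HMAC check. The shared key and message below are invented for illustration; the sender and receiver are assumed to already share the secret:

```python
import hashlib
import hmac

secret = b"shared-secret-key"           # assumed shared between sender and receiver
message = b"ship 200 units to acct 7734"

# The sender computes a MAC over the message and transmits both.
tag = hmac.new(secret, message, hashlib.sha256).hexdigest()

# The receiver recomputes the MAC; any alteration in transit changes the tag.
def verify(msg: bytes, received_tag: str) -> bool:
    expected = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_tag)

assert verify(message, tag)
assert not verify(b"ship 900 units to acct 7734", tag)
```

Note that a MAC provides integrity and origin authentication but not confidentiality; encryption must be layered on separately if the content itself is sensitive.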

Security decisions will only become tougher as companies continue to exploit the power of the Web. Electronic commerce will especially aggravate the difficulties of setting just the right security policy. Two possible challenges include the use of Secure Electronic Transactions (SET) and high-level digital certification.

A Peek at the Future - Integrated Network Security

The convergence of technologies used in corporate computer systems, such as intranets and extranets, along with the Internet means that corporations will have a greater number of options for providing connectivity to different classes of network sites or users. The ISO standards provide new alternatives for increased use of the public Internet transport as well as extranet services for tying remote users or business partners back into the corporate headquarters. The technology will be based on the same frameworks. This means that hybrid internet/intranet/extranet solutions will be feasible.

The key to this hybrid approach is an integrated network security solution. Figure 2-5 shows what an integrated security model might include. The process of supporting trusted users across the hybrid corporate network by establishing secure encryption tunnels through untrusted networks is called "membership". The CPC is working toward providing customers with the hardware and software to accomplish these membership options.

Membership is really nothing new to the CPC. It consists of three stages. First, the target network must establish the identity of users through strong authentication. Then, each party must obtain secret encryption keys so that any data sent over the network can be kept private. And finally, to avoid unnecessarily complex routing gateways, the remote workstation or branch router must join the corporate routing scheme so that the secure tunnel appears to the rest of the network and to the remote devices as a simple direct network connection.

Wrap-up

The need for computer security and encryption products is increasing in the commercial theater. This need is especially great for firms doing business on the Internet or maintaining Web access. The most effective safeguard is cryptography. The most common practice for implementing cryptographic solutions is in layered formats using the OSI model; in practice, the theoretical layers are combined into three or four layers. The CPC stresses that no safeguard will work unless a recognized system of international standards and procedures, such as those of the ISO, is in place and enforced.

We have introduced many interrelated concepts in this chapter. These are some of the various concerns which must be addressed when implementing cryptographic countermeasures in a commercial theater of operations.
