
The Puzzling Science of Information Integrity

  • Gustavus J. Simmons
Chapter
Part of the IFIP Advances in Information and Communication Technology book series (IFIPAICT)

Abstract

The science of information integrity is concerned with preventing deception and/or cheating in information-dependent systems, or, failing that, with at least detecting deception and assigning responsibility when it does occur, where the means of deception is solely the manipulation of information. In other words, information integrity is supposed to make it possible to trust the correct functioning of the system even though (some of) the inputs may be untrustworthy. Typical deceptions include gaining unauthorized access to files or facilities, impersonating another user or forging his digital signature, disavowing a message that was actually sent (and received), or falsely attributing a message to a transmitter who did not originate it. Solutions to problems of this sort, while important in the classical two-party communications setting, are crucial in a multiparty network setting where the number of participants may be essentially unlimited, as are the types and objectives of deception.

Systems and protocols devised to protect against deception are fundamentally different from all others. For example, the specifications for a piece of communications gear might describe the natural environment in which the equipment is supposed to operate: the voltage and temperature extremes, the shock, vibration, and noise environments it must tolerate, and so on. The equipment can then be tested to verify that it meets these specifications and certified as doing so. An information integrity protocol, however, can never be certified in this same way, since the hostile environment is not nature but rather an intelligent opponent (or opponents) who can be expected to exploit his knowledge of the system, and all information he may acquire about it and about the actions of the other participants, to maximize his chances of success in cheating the system. He may act in ways not anticipated by the designer, or join forces with other participants to form cabals not planned for in the design. Nature, while it may present a hostile environment, is unknowing and impartial. The human opponent is knowledgeable and capable of finding and exploiting any weaknesses the system may have.

In all disciplines a difficult and as yet unsolved problem can properly be described as puzzling; in most disciplines, however, once a solution is found the puzzle is solved. Information integrity is not so tidy. For the two-man control protocol, for instance, one must still ask: can one shareholder initiate the controlled action without the other shareholder's concurrence? And if he can, can he also produce a bona fide arbiter's certificate fraudulently indicating that the other had concurred? This is by no means an exhaustive list of the conceivable types of deception in this simple protocol, but it should give the reader some feeling for what is involved when the trustworthiness of everyone involved must be considered suspect. In a real-world application, all possible deceits need to be recognized and considered, but fortunately (for the sanity of the designer) many can be dismissed either as too unlikely to occur to be of concern, or else ruled out as deceits that a particular protocol cannot deal with.

The two-man control protocol also provides an example of how the information integrity primitives we will introduce later can be combined to construct protocols. Two-man control is a simple example, indeed the simplest possible example, of a shared secret scheme. In spite of this apparent simplicity, things may not be so simple. If an unconditionally trusted authority exists to generate the two shares and to distribute them in secrecy to the two shareholders, and if there is no need for the shareholders to be able to prove to themselves or to anyone else that they have been given bona fide shares, then the solution is indeed simple. If, however, the shareholders demand that they be protected from the issuing authority either exposing their shares or misusing them to initiate the controlled action and then blaming them, the problem becomes very difficult. Ingemarsson and Simmons devised a protocol with which parties who mistrust each other can set up a shared secret scheme that they must logically trust, without the aid of a mutually trusted issuer. The problem here is one level more difficult than the one solved by Ingemarsson and Simmons, in that a third party who has the authority to delegate the distributed capability to initiate the controlled action must also be involved. In the Ingemarsson and Simmons scheme the shareholders end up being certain that they each hold a bona fide share of a secret which they do not know and which none of the other shareholders know either, but which they know was jointly determined by them. In the present example, the shareholders want to be certain that the shares are completely indeterminate to the issuing authority, even though they are not free to determine (jointly or in combination) the secret itself. Simmons devised a key distribution protocol whose main feature was that two parties, say A and B, would interact to determine a "random" number whose value was totally indeterminate to each of them in advance and which only A would know when the protocol was completed. B, however, even though he did not know the number they had jointly generated, could verify that A was using it as the key in a cryptographic protocol. The present problem is similar, but more complex, in that each of the shareholders must end up in possession of a random number (a share) which only he knows, but which must be related to the secret being shared, and hence to the other parties' shares, and whose (joint) use can later be verified by others.
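As a concrete illustration of the simplest case, the following sketch (in Python, not part of the original chapter) shows the elementary two-out-of-two splitting that underlies two-man control when an unconditionally trusted issuer is assumed: the secret is XOR-ed with a uniformly random pad, so either share alone is statistically independent of the secret and both shareholders must concur to recover it. The function names and the example secret are illustrative only.

```python
import secrets

def split_two_of_two(secret: bytes) -> tuple[bytes, bytes]:
    """Split `secret` into two shares; either share alone reveals nothing."""
    share_a = secrets.token_bytes(len(secret))                 # uniformly random pad
    share_b = bytes(a ^ s for a, s in zip(share_a, secret))    # pad XOR secret
    return share_a, share_b

def recombine(share_a: bytes, share_b: bytes) -> bytes:
    """Both shareholders must concur: XOR of the two shares recovers the secret."""
    return bytes(a ^ b for a, b in zip(share_a, share_b))

if __name__ == "__main__":
    key = b"enable-code for the controlled action"   # illustrative secret
    a, b = split_two_of_two(key)
    assert recombine(a, b) == key                     # joint action recovers the secret
    assert a != key and b != key                      # neither share is the secret itself
```

Note that this sketch captures only the secrecy of the shares; it says nothing about the harder problems raised above, such as protecting the shareholders from a dishonest issuer or letting them verify that their shares are bona fide.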

The shared secret scheme is itself often more complicated than a simple two-out-of-two concurrence, typically requiring that any pair of shareholders, out of several, be able to initiate the action: a k-out-of-m threshold scheme where k < m. Clearly each shareholder must keep his share secret. If the information content of a share is small enough for the shareholders to recall it from memory, then memorization may suffice; however, this limits security to roughly one chance in a trillion. If higher security is needed, then one must find a cryptographic technique amenable to mnemonic key storage and of adequate security. If it is necessary for the shareholders to be able to prove to themselves and to others that they hold bona fide shares, without eroding the security of the controlled action, then a difficult extension of the notion of zero-knowledge proof to zero-knowledge distributed proofs is required. A simple example of what we are talking about here would be to devise a protocol with which A and B can be given "shares" with which they can prove to others that they jointly possess all of the information needed to factor a very large composite integer, although neither of them alone has any better chance of factoring the integer than an outsider who only knows the large integer in question. The reader should recognize that two types of shared capability are being discussed: the shareholders share a secret (a piece of information), but they also share a function, in this case the functional ability to prove that they could do something which they have not done. The two-man rule can also require other primitives in the construction of a protocol. The arbitration function, whose purpose is to assign responsibility (for initiating the controlled action), depends on a distributed signature being produced by the shareholders when they exercise the capability they have been given. There are also aspects of authentication, notarization, time stamping, and so on. The point of this discussion is to illustrate both how much is needed in the way of primitive building blocks to construct information integrity protocols, and to suggest what some of these primitives might be.
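For the general k-out-of-m case, one standard construction (the chapter does not prescribe a particular one) is Shamir's polynomial threshold scheme: the dealer hides the secret as the constant term of a random polynomial of degree k - 1 over a prime field and gives each shareholder one point on the curve; any k points determine the polynomial, and hence the secret, while fewer reveal nothing. The following is a minimal sketch under that assumption; the chosen prime and parameter names are arbitrary.

```python
import secrets

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a short secret

def make_shares(secret: int, k: int, m: int) -> list[tuple[int, int]]:
    """Deal m shares of `secret`; any k of them suffice to reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, m + 1)]

def recover(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 over the prime field recovers the secret."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME          # numerator: product of (0 - xj)
                den = den * (xi - xj) % PRIME      # denominator: product of (xi - xj)
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total

if __name__ == "__main__":
    secret = secrets.randbelow(PRIME)
    shares = make_shares(secret, k=2, m=5)          # any pair of five shareholders suffices
    assert recover(shares[:2]) == secret
    assert recover([shares[0], shares[4]]) == secret
```

As with the two-out-of-two sketch, this illustrates only the threshold property itself; the distributed proofs, distributed signatures, and arbitration functions discussed above require additional primitives layered on top of such a scheme.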

Having set the stage for a discussion of the science of information integrity, all that is possible in this abridged introduction is to sketch the essentials of the three points mentioned at the beginning. Obviously, the most important thing to understand clearly is what functions information integrity protocols are designed to achieve. As we said earlier, every such function has as its mirror image at least one deception it is intended to thwart. The following table summarizes some of the principal information integrity functions. We tabulate the functions rather than the deceptions, because the former can be described in telegraphic style, while the latter cannot.

Keywords

Share Secret, Information Integrity, Shared Secret Scheme, Multimedia Security, Integrity Protocol

Copyright information

© IFIP International Federation for Information Processing 1995

Authors and Affiliations

  • Gustavus J. Simmons
    1. Sandia Park, USA
