Wednesday, December 21, 2011

451

***********
***451's***
***********
                               
An ezine with too little time to make a good header.
                               
THE INFORMATION FOUND HERE IS FOR INFORMATIONAL PURPOSES ONLY . . .
BLAH, BLAH, BLAH, BLAH . . . THE WORLD HAS TOO MANY LAWYERS . . . BLAH,
BLAH, BLAH.
***Table of Contents***
1st   : Credits.
2nd   : Introduction.
3rd   : Review of Some Wardialers.
4th   : How to Use a Wardialer to da Max.
5th   : Movie Review (of a hacker movie you may have never seen.)
***Credits***
===Design===
Citizen0
===Articles===
The 451 Team
===Ownership of===
(In the Binary Ezine version)
All graphics were either made by me or . . .
The SS sign was found on the web.
Jolt Cola is owned by Jolt Inc. and has no part in the making of this zine!
===Spell Checking===
Blue Heart

***451 H&P***
Content of Introduction
(I) What we are about.
(II) Future sections
(III) FAQs
This zine is about raw information.  The idea is to supply info that the major hacking zines, on hard
copy or otherwise, don't supply.  We don't pretend to be elite hackers.  We do, however, have an
ambition to learn anything about computers that interests us.  If in order to understand Big Brother
we (or I) must penetrate the beast, then so be it.  I (just me) don't believe in hacking to free
information but to protect those who cannot protect themselves from corporate and government entities
who wish to act as Big Brother.  I don't write about radical security cracking in this issue because I
wouldn't want to implicate myself more than I already have.
***Future sections***
We'd like to have fictional stories on hacking; we will call it hackfiction.  If you have any, send
them our way.  Also, any phreaking and hacking articles will be put in a section we hope will become
a constant.
Note: I am currently trying to encourage my brother to do his own zine, so half my effort is there.
However, I hope this turns out well.
***FAQ***
I hope these answer your questions.
Q: Why did you make this zine?
A: I want to contribute something to the hacker culture.  You see, 2600 is so close-mindedly liberal
and some articles are beneath them (not to mention antireligious), 411 is not a pure hacker zine, and
PHRACK is snobby.  Instead of whining, I chose to do something about it.
Q: How do I submit an article?
A: Write to Citizen0@netexecutive.com.  The article must be good.  No unnecessary swearing or
articles on how to do lame acts of terrorism.
Q: What is a good submission?
A: Anything to do with the hacker culture: hackfiction, news, how-tos, and good info sources.
Q: Do you get 2600?
A: Out of the dumpster from now on!
Q: How do I get the next issue?
A: To get it soon, write me an article or write on how we can do better.
Q: Who are you?
A: It's a 'we', really.  Me and some friends.
Q: Okay, who are you guys?
A: You Feds think you're smart!

***********
***451's***
***********
Guide to . . .
War dialers
Wardialers are a great tool for looking up unlisted numbers such as unlisted BBSs, company
system numbers, colleges, and gov't systems (better leave those alone).  But I didn't start war
dialing at first because I did not want to get stuck with a lame one, or one that would stab me in
the back and dial 911.  Also, I was worried about the good old Orwellian phone system.  Here are
reviews of these wardialers.  These wardialers were downloaded from
http://www.asan.com/users/mmendez/home.htm except for A-dialer.
X-dialer
This one is really nice despite a few big drawbacks.  The author does mention that the numbers are
stored in the cfg file, between the config values.  When he says don't mess with the cfg file,
DON'T.  I have told you its few drawbacks, but it is fast!  I like it.
A-dialer
Is there a pattern to the names?  Anyway, this one is really slow; when you find a modem number
it gets stuck for a long time.  Next!
Man hunter
This one must be written in C.  Set the time between calls to a much lower number than the one
it gives you, something less than 3.

***********
***451's***
***********
Guide to . . . 
Using your Wardialer to da max!!!
OK, you got, programmed, or otherwise own a wardialer.  Great, now all you have to do is find where
to start.  It's actually easier than you think.  First you need to know what you want to target.
===You need . . . ===
1 A phone book
2 Or a list of numbers, like the school's
3 To know what numbers not to dial, such as fire, government hotlines, police, etc.
===When===
Good times are weekends, when fewer people are home.  On weekdays, try the early morning hours when
people have left home for work and when housewives are at the store avoiding the rush before
lunch, that is, roughly between 8:00 and 11:20.
===How===
Use school or familiar numbers.
If you are at school and notice that the numbers to offices side by side are 0001, 0002, 0003, that
is a PBX.
The phone book . . .
OK, you're looking for a company that's big enough to have a line which might have a number your
computer can dial up.  Here is the method I've used.  The number was something like xxx-0025.  I
automatically assume that a big company has a PBX, so I decide to start dialing at xxx-0000
and go all the way up to 0100.  This has worked big time; I got two numbers by this method.
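Here's a quick sketch of that counting in Python, just to show the idea; the "555" prefix and the
0000-0100 range are made-up placeholders, and you'd feed the resulting list to whatever dialer you
already use.

    # Sketch only: list a hypothetical PBX block xxx-0000 through xxx-0100 so it
    # can be fed to a wardialer's number-list option.  "555" is a placeholder.
    def pbx_block(prefix, start=0, end=100):
        return [f"{prefix}-{n:04d}" for n in range(start, end + 1)]

    if __name__ == "__main__":
        for number in pbx_block("555"):
            print(number)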
===How Not To===
Some of these wardialers have an option to wait a few seconds between dials.  Chances are it's
okay to do rapid dials in your area, but if you wish to do serious wardialing, have it wait at least
two seconds between calls; otherwise you will get calls between your dialing asking 'did anyone
call?'  However, if you really are afraid of the Orwellian phone company, my suggestion is this: do
rapid dials of no more than 25 numbers, then stop.  If anyone bothers you, say you were looking for
BBSs.


===Troubleshooting===
* OK, you got a dial-up but you don't know why you get garbage.
1st: It may be that you need to lower the baud rate of your modem.
2nd: It may not be a computer modem.
* I get a login prompt, then garbage.
This is a PPP account.

===Closing===
It is not illegal to make a phone call; don't let anyone stop you.  If you do this when most people
are not at home, you should not bother too many people, if any at all.  Remember, wardialing is
only looking for dial-ups, and there's nothing illegal about that!

***********
***451's***
***********
Review of . . .
The Greatest Hacker Movie!
My computer class is getting onto the subject of computer crime, and we are going to see a movie.
The teacher vaguely described it, and I realized that it was a movie I once saw on PBS.  It is a
movie about a computer nerd who got caught in the middle of a hacker spy ring.  To top that, the
Feds are shown to be inept, hapless techno weenies (the whois command was used to compromise CIA
security).
The chase begins when a novice computer nerd finds 75 cents unaccounted for.  He continues to
explore this and soon finds that hackers are behind it.  Along the way he uses numerous techniques
and dauntless determination to find them, as well as a great deal of help from his friends.  Instead
of making hacking look fantastic, this movie shows it for what it truly is: at times shockingly easy.
These hackers used the basic 'newbie guide stuff' to get information that they sold for thousands to
the KGB.  However, this movie does not dwell on the victory of the good guy but ends with the
tragic death of one of the hackers the good guy caught.
****½ (out of 5)

Sunday, December 18, 2011

10,000 Monkeys and a Webpage

(originally published on NewOrder Newsletter, #12)
---[ 10,000 Monkeys and a Webpage . by Izik <izik@tty64.org> ]
A lot has been said about the Peer2Peer structure and how flexible and useful
it could really be.  But in practice the only concept that has proven to
work on top of it is file sharing.  The main advantage, and disadvantage, of
the Peer2Peer structure is the lack of a central server which
acts as an authority figure. In this article I will explain a concept, a
theory of how one can implement a trust system within a Peer2Peer
structure, without any authority figure or prior assumptions about the
peers in the network.
To implement this concept we will take a goal: that goal will be to surf to
a given webpage from within the Peer2Peer network, using the peers as
proxies, thus providing the anonymity aspect. Each peer in our theoretical
network is equipped with a simple plugin that accepts a GET request,
processes it and then returns the data. This situation is a bit
tricky, as we rely on peers to give us back a piece of data which we have
never encountered before. This could easily be abused by evil peers, which
could return false or modified content to mislead us. So how can we trust a
given peer to give us back the actual data without modifying or faking it?
The answer is by applying democracy.
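As a rough sketch (not part of the original article), the peer-side plugin could be as small as the
function below; the base URL, timeout, and names are placeholder assumptions.

    # Sketch of the peer "plugin": fetch the requested path on behalf of another
    # peer and hand back the raw body.
    import urllib.request

    def handle_get(path, base_url="http://example.org", timeout=10):
        with urllib.request.urlopen(base_url + path, timeout=timeout) as resp:
            return resp.read()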
Democracy in our case means making a poll on the given GET request
(e.g. GET /index.html) and sampling the results. If all the peers were
telling the truth we should see only one kind of result data; if for some
reason a few peers decided to be evil and fake the data or return it
modified, the poll will let us know about it. To compare one result
with another we will use a hash function like MD5, and will go with the MD5
hash that has been returned most often.
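A minimal sketch of that poll, assuming we already hold each peer's reply as bytes (the variable
names are made up for illustration):

    # Hash every reply with MD5 and return the data whose digest won the poll.
    import hashlib
    from collections import Counter

    def majority_reply(replies):          # replies: dict of peer id -> bytes
        if not replies:
            raise ValueError("no replies to poll")
        digests = {peer: hashlib.md5(data).hexdigest() for peer, data in replies.items()}
        winner, _ = Counter(digests.values()).most_common(1)[0]
        for peer, digest in digests.items():
            if digest == winner:
                return replies[peer]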
Of course this method isn't bulletproof, as a massive number of evil peers
returning the same MD5 will poison the poll and lead us into thinking that
their data chunk/reply is the right one. But this too can be dealt
with. We can perform a polygraph test: access a dummy site, which can
be any site, sample different parts of it, keep the MD5s to ourselves,
then ask the peers to go to the same site and see first hand who's
telling the truth and who's not. Another method could be the Human Factor,
as in some cases it is easy to spot content spoofing such as a
'Wrong Picture' or 'Broken Text', and based on human judgement issue
individual trust levels for peers and increase their weight in the next
poll.
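Extending the same sketch, trust levels earned from such checks can simply weight the poll; the
default weight of 1.0 for unknown peers is an assumption, not something the article specifies.

    # Pick the MD5 digest whose supporting peers carry the most total trust.
    import hashlib
    from collections import defaultdict

    def weighted_majority(replies, trust):    # trust: dict of peer id -> weight
        if not replies:
            raise ValueError("no replies to poll")
        score = defaultdict(float)
        for peer, data in replies.items():
            digest = hashlib.md5(data).hexdigest()
            score[digest] += trust.get(peer, 1.0)
        return max(score, key=score.get)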
To conclude, I would say it's possible to implement a trust system within a
Peer2Peer structure without having a well-defined authority server. It's
just a matter of how much one is willing to risk.

Saturday, December 17, 2011

C Tech. Rep 79-91 #1

C TECHNICAL REPORT 79-91

                                                                                    Library No. S-237,254

                                                                                    (IDA PAPER P-2316)

September 1991

INTEGRITY IN AUTOMATED
INFORMATION SYSTEMS

Prepared for

National Computer Security Center (NCSC)

by

Terry Mayfield

J. Eric Roskos

Stephen R. Welke

John M. Boone

INSTITUTE FOR DEFENSE ANALYSES

1801 N. Beauregard Street, Alexandria, Virginia 22311





FOREWORD

This NCSC Technical Report, ``Integrity in Automated Information Systems,'' is
issued by the National Computer Security Center (NCSC) under the authority of and in
accordance with Department of Defense (DoD) Directive 5215.1, ``Computer Security
Evaluation Center.''  This publication contains technical observations, opinions, and
evidence prepared for individuals involved with computer security.

Recommendations for revision to this publication are encouraged and will be
reviewed periodically by the NCSC.  Address all proposals for revision through
appropriate channels to:

                                       National Computer Security Center

                                       9800 Savage Road

                                       Fort George G. Meade, MD 20755-6000

                                       Attention: Chief, Standards, Criteria & Guidelines Division



Reviewed by:_________________________________ September 1991

RON S. ROSS, LTC (USA)

Chief, Standards, Criteria & Guidelines Division

Released by:_________________________________ September 1991

THOMAS R. MALARKEY

Chief, Office of Computer Security Publications and Support

TABLE OF CONTENTS

        1.  INTRODUCTION
          1.1   PURPOSE
          1.2   BACKGROUND
          1.3   SCOPE
        2.  DEFINING INTEGRITY
          2.1   DATA INTEGRITY
          2.2   SYSTEMS INTEGRITY
          2.3   INFORMATION SYSTEM PROTECTION GOALS
          2.4   INTEGRITY GOALS
              2.4.1   Preventing Unauthorized Users From Making Modifications
              2.4.2   Maintaining Internal and External Consistency
              2.4.3   Preventing Authorized Users From Making Improper Modifications
          2.5   CONCEPTUAL CONSTRAINTS IMPORTANT TO INTEGRITY
              2.5.1   Adherence to a Code of Behavior
              2.5.2   Wholeness
              2.5.3   Risk Reduction
        3.  INTEGRITY PRINCIPLES
          3.1   IDENTITY
          3.2   CONSTRAINTS
          3.3   OBLIGATION
          3.4   ACCOUNTABILITY
          3.5   AUTHORIZATION
          3.6   LEAST PRIVILEGE
          3.7   SEPARATION
          3.8   MONITORING
          3.9   ALARMS
          3.10  NON-REVERSIBLE ACTIONS
          3.11  REVERSIBLE ACTIONS
          3.12  REDUNDANCY
          3.13  MINIMIZATION
              3.13.1  Variable Minimization
              3.13.2  Data Minimization
              3.13.3  Target Value Minimization
              3.13.4  Access Time Minimization
          3.14  ROUTINE VARIATION
          3.15  ELIMINATION OF CONCEALMENT
          3.16  ACCESS DETERRENCE
        4.  INTEGRITY MECHANISMS
          4.1   POLICY OF IDENTIFICATION AND AUTHENTICATION
              4.1.1   Policy of User Identification and Authentication
              4.1.2   Policy of Originating Device Identification
                    4.1.2.1  Mechanism of Device Identification
              4.1.3   Policy of Object Identification and Authentication
                    4.1.3.1  Mechanism of Configuration Management
                    4.1.3.2  Mechanism of Version Control
                    4.1.3.3  Mechanism of Notarization
                    4.1.3.4  Mechanism of Time Stamps
                    4.1.3.5  Mechanism of Encryption
                    4.1.3.6  Mechanism of Digital Signatures
          4.2   POLICY OF AUTHORIZED ACTIONS
              4.2.1   Policy of Conditional Authorization
                    4.2.1.1  Mechanism of Conditional Enabling
                    4.2.1.2  Mechanism of Value Checks
              4.2.2   Policy of Separation of Duties
                    4.2.2.1  Mechanism of Rotation of Duties
                    4.2.2.2  Mechanism of Supervisory Control
                    4.2.2.3  Mechanism of N-Person Control
                    4.2.2.4  Mechanism of Process Sequencing
          4.3   POLICY OF SEPARATION OF RESOURCES
              4.3.1   Policy of Address Separation
                    4.3.1.1  Mechanism of Separation of Name Spaces
                    4.3.1.2  Mechanism of Descriptors
              4.3.2   Policy of Encapsulation
                    4.3.2.1  Mechanism of Abstract Data Types
                    4.3.2.2  Mechanism of Strong Typing
                    4.3.2.3  Mechanism of Domains
                    4.3.2.4  Mechanism of Actors
                    4.3.2.5  Mechanism of Message Passing
                    4.3.2.6  Mechanism of the Data Movement Primitives
                    4.3.2.7  Mechanism of Gates
              4.3.3   Policy of Access Control
                    4.3.3.1  Mechanism of Capabilities
                    4.3.3.2  Mechanism of Access Control Lists
                    4.3.3.3  Mechanism of Access Control Triples
                    4.3.3.4  Mechanism of Labels
          4.4   POLICY OF FAULT TOLERANCE
              4.4.1   Policy of Summary Integrity Checks
                    4.4.1.1  Mechanism of Transmittal Lists
                    4.4.1.2  Mechanism of Checksums
                    4.4.1.3  Mechanism of Cryptographic Checksums
                    4.4.1.4  Mechanism of Chained Checksums
                    4.4.1.5  Mechanism of the Check Digit
              4.4.2   Policy of Error Correction
                    4.4.2.1  Mechanism of Duplication Protocols
                    4.4.2.2  Mechanism of Handshaking Protocols
                    4.4.2.3  Mechanism of Error Correcting Codes
        5.  INTEGRITY MODELS AND MODEL IMPLEMENTATIONS
          5.1   INTEGRITY MODELS
              5.1.1   Biba Model
                    5.1.1.1  Discussion of Biba
                         5.1.1.1.1  Low-Water Mark Policy
                         5.1.1.1.2  Low-Water Mark Policy for Objects
                         5.1.1.1.3  Low-Water Mark Integrity Audit Policy
                         5.1.1.1.4  Ring Policy
                         5.1.1.1.5  Strict Integrity Policy
                    5.1.1.2  Analysis of Biba
              5.1.2   GOGUEN AND MESEGUER MODEL
                    5.1.2.1  Discussion of Goguen and Meseguer
                         5.1.2.1.1  Ordinary State Machine Component
                         5.1.2.1.2  Capability Machine Component
                         5.1.2.1.3  Capability System
                    5.1.2.2  Analysis of Goguen and Meseguer
              5.1.3   SUTHERLAND MODEL
                    5.1.3.1  Discussion of Sutherland
                    5.1.3.2  Analysis of Sutherland
              5.1.4   CLARK AND WILSON MODEL
                    5.1.4.1  Discussion of Clark and Wilson
                    5.1.4.2  Analysis of Clark and Wilson
              5.1.5   BREWER AND NASH MODEL
                    5.1.5.1  Discussion of Brewer and Nash
                    5.1.5.2  Analysis of Brewer and Nash
              5.1.6   SUMMARY OF MODELS
          5.2   INTEGRITY MODEL IMPLEMENTATIONS
              5.2.1   LIPNER IMPLEMENTATION
                    5.2.1.1  Discussion of Lipner
                    5.2.1.2  Analysis of Lipner
              5.2.2   BOEBERT AND KAIN IMPLEMENTATION
                    5.2.2.1  Discussion of Boebert and Kain
                    5.2.2.2  Analysis of Boebert and Kain
              5.2.3   LEE AND SHOCKLEY IMPLEMENTATIONS
                    5.2.3.1  Discussion of Lee and Shockley
                    5.2.3.2  Analysis of Lee and Shockley
              5.2.4   KARGER IMPLEMENTATION
                    5.2.4.1  Discussion of Karger
                    5.2.4.2  Analysis of Karger
              5.2.5   JUENEMAN IMPLEMENTATION
                    5.2.5.1  Discussion of Jueneman
                         5.2.5.1.1  Subject Integrity Label
                         5.2.5.1.2  Data File Integrity Label
                         5.2.5.1.3  Program Integrity Label
                    5.2.5.2  Analysis of Jueneman
              5.2.6   GONG IMPLEMENTATION
                    5.2.6.1  Discussion of Gong
                    5.2.6.2  Analysis of Gong
              5.2.7   SUMMARY OF MODEL IMPLEMENTATIONS
          5.3   GENERAL ANALYSIS OF MODELS AND MODEL IMPLEMENTATIONS
              5.3.1   Hierarchical Levels
              5.3.2   Non-hierarchical Categories
              5.3.3   Access Control Triples
              5.3.4   Protected Subsystems
              5.3.5   Digital Signatures/Encryption
              5.3.6   Combination of Capabilities and ACLs
              5.3.7   Summary of General Analysis
        6.  CONCLUSIONS
          6.1   SUMMARY OF PAPER
          6.2   SIGNIFICANCE OF PAPER
          6.3   FUTURE RESEARCH
        REFERENCE LIST
        APPENDIX - GENERAL INTEGRITY PRINCIPLES
        1.  TRADITIONAL DESIGN PRINCIPLES
          1.1   ECONOMY OF MECHANISM
          1.2   FAIL-SAFE DEFAULTS
          1.3   COMPLETE MEDIATION
          1.4   OPEN DESIGN
          1.5   SEPARATION OF PRIVILEGE
          1.6   LEAST PRIVILEGE
          1.7   LEAST COMMON MECHANISM
          1.8   PSYCHOLOGICAL ACCEPTABILITY
        2.  ADDITIONAL DESIGN PRINCIPLES
          2.1   WORK FACTOR
          2.2   COMPROMISE RECORDING
        3.  FUNCTIONAL CONTROL LEVELS
          3.1   UNPROTECTED SYSTEMS
          3.2   ALL-OR-NOTHING SYSTEMS
          3.3   CONTROLLED SHARING
          3.4   USER-PROGRAMMED SHARING CONTROLS
          3.5   LABELLING INFORMATION
        ACRONYMS
        GLOSSARY

LIST OF FIGURES

        Figure 1.  Integrity Framework
        Figure 2.  Cascade Connection of Capability System

LIST OF TABLES

        TABLE 1.  Integrity Mechanisms Grouped by Policy and SubPolicy

EXECUTIVE SUMMARY

As public, private, and defense sectors of our society have become increasingly
dependent on widely used interconnected computers for carrying out critical as well as
more mundane tasks, integrity of these systems and their data has become a significant
concern. The purpose of this paper is not to motivate people to recognize the need for
integrity, but rather to motivate the use of what we know about integrity and to
stimulate more interest in research to standardize integrity properties of systems.

For some time, both integrity and confidentiality have been regarded as inherent
parts of information security. However, in the past, more emphasis has been placed on
the standardization of confidentiality properties of computer systems. This paper shows
that there is a significant amount of information available about integrity and integrity
mechanisms, and that such information can be beneficial in starting to formulate
standardizing criteria.   We have gone beyond the definition of integrity and provided
material that will be useful to system designers, criteria developers, and those
individuals trying to gain a better understanding of the concepts of data and systems
integrity. This paper provides foundational material to continue the efforts toward
developing criteria for building products that preserve and promote integrity.

We begin by discussing the difficulty of trying to provide a single definition for the
term integrity as it applies to data and systems. Integrity implies meeting a set of defined
expectations. We want a system that protects itself and its data from unauthorized or
inappropriate actions, and performs in its environment in accordance with its users'
expectations. We also expect internal data and any transformations of that data to
maintain a correct, complete and consistent correspondence to itself and to what it
represents in the external environment. Addressing these multiple views in a single
definition is difficult. We conclude that a single definition is not needed.   An operational
definition, or framework, that encompasses various views of the issue seems more
appropriate.   The resulting framework provides a means to address both data and
systems integrity and to gain an understanding of important principles that underlie
integrity. It provides a context for examining integrity preserving mechanisms and for
understanding the integrity elements that need to be included in system security
policies.

We extract a set of fundamental principles related to integrity. These are based on
our framework, a review of various written material on the topic of integrity, and an
investigation of existing mechanisms deemed to be important to preserving and
promoting integrity. These principles underlie the wide variety of both manual and
automated mechanisms that are examined. The mechanisms have been categorized to
show that they serve a relatively small set of distinct purposes or policies. Some
mechanisms that promote integrity are not documented in traditional literature and not
all of the mechanisms addressed here are implemented in computer systems. All of these
do, however, provide insight into some of the controls necessary and the types of threats
that automated integrity mechanisms must counter.   We also provide an overview of
several models and model implementations (paper studies) of integrity. These models
are still rather primitive with respect to the range of coverage suggested by examining
both data and systems integrity. The model we found to be receiving the most attention
at this time is the Clark-Wilson model. Although this is not a formal mathematical
model, it provides a fresh and useful point of departure for examining issues of
integrity.

From this study, we conclude that it is possible to begin to standardize data and
systems integrity properties. Principles exist, trial policies can be formulated and
modelled, and mechanisms can be applied at various layers of abstraction within a
system.   The Institute for Defense Analyses (IDA) has initiated a follow-on study to look
at the allocation and layering of mechanisms. We also conclude that there are gaps in our
information and that the standardization process could help guide certain studies. Such
studies should include the analysis of existing interfaces and protocols to determine the
appropriate integrity interfaces or the need to design new protocols. Other
demonstration/validation studies should be conducted to show that mechanisms are
workable, interfaces are well understood, protocol concepts are valid, and standardized
criteria are testable. We conclude that criteria development efforts can occur
concurrently with the protocol and demonstration/validation studies.

ACKNOWLEDGMENTS

The National Computer Security Center extends special recognition to the principal
authors from the Institute for Defense Analyses (IDA): Terry Mayfield (Task Leader), Dr.
J. Eric Roskos, Stephen R. Welke, John M. Boone, and Catherine W. McDonald, as well
as the Project Leader (NSA C81), Maj. Melvin De Vilbiss (USA).

We wish to thank the external reviewers who provided technical comments and
suggestions on earlier versions of this report. Their contributions have caused this
document to evolve significantly from the original efforts. We wish also to express
appreciation to the principal reviewers at IDA, Dr. Karen Gordon and Dr. Cy Ardoin, for
their technical support. A special thanks goes to Katydean Price for her tremendous
editorial support during the course of this project.

The principal authors have dedicated this document to the memory of their close friend,
Dr. J. Eric Roskos, a talented computer scientist and colleague who performed much of
the original research for this effort. His tragic death left a tremendous gap in the research
team. Eric is often thought of and very much missed.

1 INTRODUCTION

1.1 PURPOSE

This paper provides a framework for examining integrity in computing and an
analytical survey of techniques that have potential to promote and preserve computer
system and data integrity. It is intended to be used as a general foundation for further
investigations into integrity and a focus for debate on those aspects of integrity related to
computer and automated information systems (AISs).

One of the specific further investigations is the development and evolution of
product evaluation criteria to assist the U.S. Government in the  acquisition of systems
that incorporate integrity preserving mechanisms. These criteria also will help guide
computer system vendors in producing systems that can be evaluated in terms of
protection features and assurance measures needed to ascertain a degree of trust in the
product's ability to promote and preserve system and data integrity. In support of this
criteria investigation, we have provided a separate document [Mayfield 1991] that
offers potential modifications to the Control Objectives contained in the Trusted
Computer System Evaluation Criteria (TCSEC), DOD 5200.28-STD [DOD 1985]. The
modifications extend the statements of the control objectives to encompass data and
systems integrity; specific criteria remain as future work.

1.2 BACKGROUND

Integrity and confidentiality are inherent parts of information security (INFOSEC).
Confidentiality, however, is addressed in greater detail than integrity by evaluation
criteria such as the TCSEC. The emphasis on confidentiality has resulted in a significant
effort at standardizing confidentiality properties of systems, without an equivalent effort
on integrity. However, this lack of standardization effort does not mean that there is a
complete lack of mechanisms for or understanding of integrity in computing systems. A
modicum of both exists. Indeed, many well-understood protection mechanisms initially
designed to preserve integrity have been adopted as standards for preserving
confidentiality. What has not been accomplished is the coherent articulation of
requirements and implementation specifications so that integrity property
standardization can evolve. There is a need now to put a significant effort on
standardizing integrity properties of systems. This paper provides a starting point.

The original impetus for this paper derives from an examination of computer
security requirements for military tactical and embedded computer systems, during
which the need for integrity criteria for military systems became apparent. As the
military has grown dependent on complex, highly interconnected computer systems,
issues of integrity have become increasingly important. In many cases, the risks related
to disclosure of information, particularly volatile information which is to be used as soon
as it is issued, may be small. On the other hand, if this information is modified between
the time it is originated and the time it is used (e.g., weapons actions based upon it are
initiated), the modified information may cause desired actions to result in failure (e.g.,
missiles on the wrong target).   When one considers the potential loss or damage to lives,
equipment, or military operations that could result when the integrity of a military
computer system is violated, it becomes more apparent why the integrity of military
computer systems can be seen to be at least as important as confidentiality.

There are many systems in which integrity may be deemed more important than
confidentiality (e.g.,   educational record systems, flight-reservation systems, medical
records systems, financial systems, insurance systems, personnel systems). While it is
important in many cases that the confidentiality of information in these types of systems
be preserved, it is of crucial importance that this information not be tampered with or
modified in unauthorized ways. Also included in this categorization of systems are
embedded computer systems. These systems are components incorporated to perform
one or more specific (usually control) functions within a larger system. They present a
more unique aspect of the importance of integrity as they may often have little or no
human interface to aid in providing for correct systems operation. Embedded
computer systems are not restricted to military weapons systems. Commercial
examples include anti-lock braking systems, aircraft avionics, automated milling
machines, radiology imaging equipment, and robotic actuator control systems.

Integrity can be viewed not only in the context of relative importance but also in the
historical context of developing protection mechanisms within computer systems. Many
protection mechanisms were developed originally to preserve integrity. Only later were
they recognized to be equally applicable to preserving confidentiality.   One of the
earliest concerns was that programs might be able to access memory (either primary
memory or secondary memory such as disks) that was not allocated to them. As soon as
systems began to allocate resources to more than one program at a time (e.g.,
multitasking, multiprogramming, and time-sharing), it became necessary to protect the
resources allocated to the concurrent execution of routines from accidentally modifying
one another.   This increased system concurrency led to a form of interleaved sharing of
the processor using two or more processor states (e.g., one for problem or user state and
a second for control or system state), as well as interrupt, privilege, and protected
address spaces implemented in hardware and software. These ``mechanisms'' became
the early foundations for ``trusted'' systems, even though they generally began with the
intent of protecting against errors in programs rather than protecting against malicious
actions. The mechanisms were aids to help programmers debug their programs and to
protect them from their own coding errors. Since these mechanisms were designed to
protect against accidents, by themselves or without extensions they offer little protection
against malicious attacks.

Recent efforts in addressing integrity have focused primarily on defining and
modelling integrity. These efforts have raised the importance of addressing integrity
issues and the incompleteness of the TCSEC with respect to integrity. They also have
sparked renewed interest in examining what needs to be done to achieve integrity
property standardization in computing systems. While a large portion of these efforts
has been expended on attempting to define the term integrity, the attempts have not
achieved consensus. However, many of these definitions point toward a body of
concepts that can be encompassed by the term integrity. This paper takes one step
further in that it not only proposes an operational definition of integrity, but also
provides material for moving ahead without consensus. This is done through an
examination of various integrity principles, mechanisms, and the policies that they
support, as well as an examination of a set of integrity models and model
implementations.

1.3 SCOPE

Our examination of integrity takes several viewpoints. We begin in Section 2 by
looking at the issue of defining integrity. Here we build a framework or operational
definition of integrity that will serve our purpose in analyzing mechanisms that provide
integrity. This framework is derived from a number of sources, including: (1) what
people generally say they mean when they discuss having a system provide integrity, (2)
dictionary definitions, and (3) other writings on the topic that we have interpreted
to provide both specific integrity goals and a context for data and system integrity.

In Section 3, we extract a set of fundamental principles from these goals and
contextual interpretations. Principles are the underlying basis on which policies and
their implementing mechanisms are built.   An additional set of basic protection design
principles, extracted from Saltzer & Schroeder's tutorial paper, The Protection of
Information in Computer Systems [Saltzer 1975], has been provided as an appendix for
the convenience of the reader.   These design principles apply to the general concept of
protection and, thus, are important additional considerations for standardizing integrity
preserving properties in computer systems.

Next, in Section 4, we examine a wide variety of manual and automated mechanisms
that address various problems related to integrity. Most of these mechanisms, evolving
over the course of many years, remain in use today. Several of the mechanisms intended
to promote integrity are not documented in traditional computer security literature.  
Not all of the mechanisms we examine are implemented in computer systems, although
they give insight into the types of controls that need to be provided and the types of
threats that must be countered by automated integrity mechanisms. Some of the
mechanisms we examine appear primarily in embedded systems and others are found in
more familiar application environments such as accounting. The mechanisms have been
categorized to show that they serve a relatively small set of distinct purposes. We use the
term policy to describe the higher-level purpose (categorization) of a mechanism since
such a purpose generally reflects administrative courses of action devised to promote or
preserve integrity.

Independent of the mechanisms, a small number of formal models has been
established with differing approaches to capturing integrity semantics. In Section 5, we
examine several models that have been proposed in the last decade to address issues of
integrity. Several paper studies have suggested implementations of these models as
possibilities for real systems. We also look at a number of these model implementations
intended to promote or preserve integrity. This examination provides us with a better
understanding of the sufficiency of coverage provided by the proposed models and
model implementations.

Finally, in Section 6, we present our study conclusions and recommend a set of
further studies that should be performed to enhance our understanding of integrity and
better enable us to standardize integrity protection properties in systems.

A reference list is provided at the end of the main body; a list of acronyms and a
glossary are provided after the appendix.

2 DEFINING INTEGRITY

Integrity is a term that does not have an agreed definition or set of definitions for use
within the INFOSEC community. The community's experience to date in trying to define
integrity provides ample evidence that it doesn't seem to be profitable to continue to try
and force a single consensus definition. Thus, we elect not to debate the merits of one
proposed definition over another. Rather, we accept that the definitions generally all
point to a single concept termed integrity.

Our position is reinforced when we refer to a dictionary; integrity has multiple
definitions [Webster 1988]. Integrity is an abstract noun. As with any abstract noun,
integrity derives more concrete meaning from the term(s) to which it is attributed and
from the relations of these terms to one another. In this case, we attribute integrity to two
separate, although interdependent, terms, i.e., data and systems. Bonyun made a similar
observation in discussing the difficulty of arriving at a consensus definition of integrity
[Bonyun 1989]. He also recognized the interdependence of the terms systems and data in
defining integrity, and submitted the proposition that ``in order to provide any measure
of assurance that the integrity of data is preserved, the integrity of the system, as a
whole, must be considered.''

Keeping this proposition in mind, we develop a conceptual framework or
operational definition which is in large part derived from the mainstream writing on the
topic and which we believe provides a clearer focus for this body of information. We
start by defining two distinct contexts of integrity in computing systems: data integrity,
which concerns the objects being processed, and systems integrity, which concerns the
behavior of the computing system in its environment. We then relate these two contexts
to a general integrity goal developed from writings on information protection. We
reinterpret this general goal into several specific integrity goals. Finally, we establish
three conceptual constraints that are important to the discussion of the preservation and
promotion of integrity. These definitions, specific goals, and conceptual constraints
provide our framework or operational definition of integrity from which we extract
integrity principles, analyze integrity mechanisms and the policies they implement, and
examine integrity models and model implementations. A diagram of this framework is
found in Figure 1 at the end of this section.

2.1 DATA INTEGRITY

Data integrity is what first comes to mind when most people speak of integrity in
computer systems. To many, it implies attributes of data such as quality, correctness,
authenticity, timeliness, accuracy, and precision. Data integrity is concerned with
preserving the meaning of information, with preserving the completeness and
consistency of its representations within the system, and with its correspondence to its
representations external to the system. It involves the successful and correct operation of
both computer hardware and software with respect to data and, where applicable, the
correct operations of the users of the computing system, e.g., data entry. Data integrity is
of primary concern in AISs that process more than one distinct type of data using the
same equipment, or that share more than one distinct group of users. It is of concern in
large scale, distributed, and networked processing systems because of the diversity and
interaction of information with which such systems must often deal, and because of the
potentially large and widespread number of users and system nodes that must interact
via such systems.

2.2 SYSTEMS INTEGRITY

Systems integrity is defined here as the successful and correct operation of
computing resources. Systems integrity is an overarching concept for computing
systems, yet one that has specific implications in embedded systems whose control is
dependent on system sensors. Systems integrity is closely related to the domain of fault
tolerance. This aspect of integrity often is not included in traditional discussions of
integrity because it involves an aspect of computing, fault tolerance, that is often
mistakenly relegated to the hardware level. Systems integrity is only superficially a
hardware issue, and is equally applicable to the AIS environment; the embedded system
simply has less user-provided fault tolerance. In this context, it also is related closely to
the issue of system safety, e.g., the safe operation of an aircraft employing embedded
computers to maintain stable flight. In an embedded system, there is usually a much
closer connection between the computing machinery and the physical, external
environment than in a command and control system or a conventional AIS. The
command and control system or conventional AIS often serves to process information
for human users to interpret, while the embedded system most often acts in a relatively
autonomous sense.

Systems integrity is related to what is traditionally called the denial of service
problem.   Denial of service covers a broad category of circumstances in which basic
system services are denied to the users. However, systems integrity is less concerned
with denial of service than with alteration of the ability of the system to perform in a
consistent and reliable manner, given an environment in which system design flaws can
be exploited to modify the operation of the system by an attacker.

For example, because an embedded system is usually very closely linked to the
environment, one of the  fundamental, but less familiar, ways in which such an attack
can be accomplished is by distorting the system's view of time. This type of attack is
nearly identical to a denial-of-service attack that interferes with the scheduling of time-
related resources provided by the computing system.   However, while denial of service
is intended to prevent a user from being able to employ a system function for its
intended purpose, time-related attacks on an embedded system can be intended to alter,
but not stop, the functioning of a system. System examples of such an attack include the
disorientation of a satellite in space or the confusing of a satellite's measurement of the
location of targets it is tracking by forcing some part of the system outside of its
scheduling design parameters. Similarly, environmental hazards or the use of sensor
countermeasures such as flares, smoke, or reflectors can cause embedded systems
employing single sensors such as infrared, laser, or radar to operate in unintended ways.

When sensors are used in combination, algorithms often are used to fuse the sensor
inputs and provide control decisions to the employing systems. The degree of
dependency on a single sensor, the amount of redundancy provided by multiple
sensors, the dominance of sensors within the algorithm, and the discontinuity of
agreement between sensors are but a few of the key facets in the design of fusion
algorithms in embedded systems. It is the potential design flaws in these systems that
we are concerned with when viewing systems from the perspective of systems integrity.
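
As an illustration only, the following minimal sketch (in Python; the sensor names,
weights, and disagreement threshold are hypothetical) shows one simple way such concerns
surface in a fusion algorithm: readings from redundant sensors are combined, and a
discontinuity of agreement between them is flagged rather than silently averaged away.

    # Minimal sketch of a redundant-sensor fusion check (illustrative only).
    # Sensor names, weights, and the disagreement threshold are hypothetical.
    def fuse_readings(readings, weights, max_disagreement):
        """Combine redundant sensor readings; flag discontinuity of agreement."""
        values = list(readings.values())
        spread = max(values) - min(values)
        if spread > max_disagreement:
            # Sensors disagree beyond design parameters: a fault, an attack,
            # or a countermeasure (flare, smoke, reflector) may be present.
            raise ValueError("sensor disagreement exceeds %r" % max_disagreement)
        total_weight = sum(weights[name] for name in readings)
        return sum(readings[name] * weights[name] for name in readings) / total_weight

    # Example: infrared and radar range estimates (metres), radar weighted higher.
    estimate = fuse_readings({"infrared": 1510.0, "radar": 1498.0},
                             {"infrared": 0.4, "radar": 0.6},
                             max_disagreement=50.0)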

2.3 INFORMATION SYSTEM PROTECTION GOALS

Many researchers and practitioners interested in INFOSEC believe that the field is
concerned with three overlapping protection goals: confidentiality, integrity, and
availability. From a general review of reference material, we have broadly construed
these individual goals as having the following meanings:

1.Confidentiality denotes the goal of ensuring that information is protected
from improper disclosure.

2.Integrity denotes the goal of ensuring that data has at all times a proper
physical representation, is a proper semantic representation of informa-
tion, and that authorized users and information processing resources
perform correct processing operations on it.

3.Availability denotes the goal of ensuring that information and information
processing resources both remain readily accessible to their authorized us-
ers.

The above integrity goal is complete only with respect to data integrity. It remains
incomplete with respect to systems integrity. We extend it to include ensuring that the
services and resources composing the processing system are impenetrable to
unauthorized users. This extension provides for a more complete categorization of
integrity goals, since there is no other category for the protection of information
processing resources from unauthorized use, the theft of service problem. It is
recognized that this extension represents an overlap of integrity with availability.
Embedded systems require one further extension to denote the goal of consistent and
correct performance of the system within its external environment.

2.4 INTEGRITY GOALS

Using the goal previously denoted for integrity and the extensions we propose, we
reinterpret the general integrity goal into the following specific goals in what we believe
to be the order of increasing difficulty to achieve. None of these goals can be achieved
with absolute certainty; some will respond to mechanisms known to provide some
degree of assurance and all may require additional risk reduction techniques.

2.4.1 Preventing Unauthorized Users From Making Modifications

This goal addresses both data and system resources. Unauthorized use includes the
improper access to the system, its resources and data. Unauthorized modification
includes changes to the system, its resources, and changes to the user or system data
originally stored including addition or deletion of such data. With respect to user data,
this goal is the opposite of the confidentiality requirement: confidentiality places
restrictions on information flow out of the stored data, whereas in this goal, integrity
places restrictions on information flow into the stored data.

2.4.2 Maintaining Internal and External Consistency

This goal addresses both data and systems. It addresses self-consistency of
interdependent data and consistency of data with the real-world environment that the
data represents. Replicated and distributed data in a distributed computing system add
new complexity to maintaining internal consistency. Fulfilling a requirement for
periodic comparison of the internal data with the real-world environment it represents
would help to satisfy both the data and systems aspects of this integrity goal. The
accuracy of correspondence may require a tolerance that accounts for data input lags or
for real-world lags, but such a tolerance must not allow incremental attacks in smaller
segments than the tolerated range. Embedded systems that must rely only on their
sensors to gain knowledge of the external environment require additional specifications
to enable them to internally interpret the externally sensed data in terms of the
correctness of their systems behavior in the external world.

It is the addition of overall systems semantics that allows the embedded system to
understand the consistency   of external data with respect to systems actions.

1.As an example of internal data consistency, a file containing a monthly
summary of transactions must be consistent with the transaction records
themselves.

2.As an example of external data consistency, inventory records in an ac-
counting system must accurately reflect the inventory of merchandise on
hand. This correspondence may require controls on the external items as
well as controls on the data representing them, e.g., data entry controls.
The accuracy of correspondence may require a tolerance that accounts for
data input lags or for inventory in shipment, but not actually received.

3.As an example of systems integrity and its relationship to external consist-
ency, an increasing temperature at a cooling system sensor may be the re-
sult of a fault or an attack on the sensor (result: overcooling of the space)
or a failure of a cooling system component, e.g., freon leak (result: over-
heating of the space). In both cases, the automated thermostat (embedded
system) could be perceived as having an integrity failure unless it could
properly interpret the sensed information in the context of the thermostat's
interaction with the rest of the system, and either provide an alert of the ex-
ternal attack or failure, or provide a controlling action to counter the attack
or overcome the failure. The essential requirement is that in order for the
system to maintain a consistency of performance with its external
environment, it must be provided with an internal means to interpret, and the
flexibility to adapt to, the external environment.
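
As a concrete illustration of example 2 above, the following minimal sketch (Python; the
record fields, tolerance, and drift limit are hypothetical) compares inventory records
against a physical count, allowing a tolerance for goods in shipment while rejecting a
cumulative drift that could hide an incremental attack in segments smaller than the
tolerated range.

    # Minimal external-consistency check (illustrative only).
    # Record fields, tolerance, and drift limit are hypothetical.
    def check_inventory(recorded, counted, in_shipment, cumulative_drift, drift_limit):
        """Compare recorded inventory with a physical count.

        Returns the updated cumulative drift, or raises if the books and the
        real world no longer correspond within the tolerated range."""
        tolerance = in_shipment            # goods shipped but not yet received
        discrepancy = recorded - counted
        if abs(discrepancy) > tolerance:
            raise ValueError("inventory record inconsistent with physical count")
        # Small discrepancies are tolerated, but they must not be allowed to
        # accumulate in segments smaller than the tolerated range.
        cumulative_drift += discrepancy
        if abs(cumulative_drift) > drift_limit:
            raise ValueError("cumulative drift suggests incremental manipulation")
        return cumulative_drift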

2.4.3 Preventing Authorized Users From Making Improper Modifications

The final goal of integrity is the most abstract, and usually involves risk reduction
methods or procedures rather than absolute checks on the part of the system.  
Preventing improper modifications may involve requirements that ethical principles not
be violated; for example, an employee may be authorized to transfer funds to specific
company accounts, but should not make fraudulent or arbitrary transfers. It is, in fact,
impossible to provide absolute ``integrity'' in this sense, so various mechanisms are
usually provided to minimize the risk of this type of integrity violation occurring.

2.5 CONCEPTUAL CONSTRAINTS IMPORTANT TO INTEGRITY

There are three conceptual constraints that are important to the discussion of
integrity. The first conceptual constraint has to do with the active entities of a system.
We use the term agents to denote users and their surrogates. Here, we relate one of the
dictionary definitions [Webster 1988] of integrity, adherence to a code of behavior, to
actions of systems and their active agents. The second conceptual constraint has to do
with the passive entities or objects of a system. Objects as used here are more general
than the storage objects as used in the TCSEC. We relate the states of the system and its
objects to a second of Webster's definitions of integrity, wholeness. We show that the
constraint relationships between active agents and passive entities are interdependent.  
We contend that the essence of integrity is in the specification of constraints and
execution adherence of the active and passive entities to the specification as the active
agent transforms the passive entity. Without specifications, one cannot judge the
integrity of an active or passive entity. The third system conceptual constraint deals with
the treatment of integrity when there can be no absolute assurance of maintaining
integrity. We relate integrity to a fundamental aspect of protection, a strategy of risk
reduction. These conceptual constraints, placed in the context of data integrity and
systems integrity and the previous discussions on integrity goals, provide the
framework for the rest of the paper.

2.5.1 Adherence to a Code of Behavior

Adherence to a code of behavior focuses on the constraints of the active agents under
examination. It is important to recognize that agents exist at different layers of
abstraction, e.g., the user, the processor, the memory management unit. Thus, the focus
on the active agents is to ensure that their actions are sanctioned or constrained so that
they cannot exceed established bounds. Any action outside of these bounds, if
attempted, must be prevented or detected prior to having a corrupting effect. Further,
humans, as active agents, are held accountable for their actions and held liable to
sanctions should such actions have a corrupting effect. One set of applied constraints is
derived from the expected states of the system or data objects involved in the actions.
Thus, the expected behaviors of the system's active agents are conditionally constrained
by the results expected in the system's or data object's states. These behavioral
constraints may be statically or dynamically conditioned.

For example, consider a processor (an active agent) stepping through an application
program (where procedural actions are conditioned or constrained) and arriving at the
conditional instruction where the range (a conditional constraint) of a data item is
checked. If the program is written with integrity in mind and the data item is ``out of
range,'' the forward progress of the processor through the applications program is halted
and an error handling program is called to allow the processor to dispatch the error.
Further progress in the application program is resumed when the error handling
program returns control of the processor back to the application program.
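
A minimal sketch of this conditional constraint follows (Python; the range bounds and
the handler's corrective behavior are hypothetical, for illustration only). The forward
progress of the application is interrupted when the data item is out of range, an
error-handling routine dispatches the error, and control then returns to the application.

    # Minimal sketch of a conditionally constrained action (illustrative only).
    # The bounds and the handler's corrective value are hypothetical.
    def handle_range_error(value, low, high):
        """Error handler: dispatch the out-of-range condition."""
        print("range error: %r not in [%r, %r]" % (value, low, high))
        return max(low, min(value, high))      # one possible corrective action

    def apply_update(value, low=0, high=100):
        if not (low <= value <= high):
            # Forward progress halts; the error handler dispatches the error,
            # then control returns to the application program.
            value = handle_range_error(value, low, high)
        return value * 2                       # the constrained action proceeds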

A second set of applied constraints is derived from the temporal domain. These
may be thought of as event constraints. Here, the active agent must perform an action or
set of actions within a specified bound of time. The actions may be sequenced or
concurrent, and their performance may be constrained by rates (i.e., actions per unit of
time), activity time (e.g., start and stop), elapsed time (e.g., start + 2 hrs), and
discrete time (e.g., complete by 1:05 p.m.).
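
The temporal form of these constraints can be sketched as follows (Python; the
elapsed-time bound and function names are hypothetical). The active agent is obliged to
complete its action within an elapsed-time bound, and a violation is detected rather
than silently ignored.

    import time

    # Minimal sketch of an elapsed-time (event) constraint (illustrative only).
    def run_within(action, elapsed_limit_seconds):
        """Run an action; report whether it met its elapsed-time bound."""
        start = time.monotonic()
        result = action()
        elapsed = time.monotonic() - start
        if elapsed > elapsed_limit_seconds:
            # The constraint was violated; the result may still be usable,
            # but the violation must be detected and reported.
            raise RuntimeError("action exceeded its %ss bound" % elapsed_limit_seconds)
        return result

    # Example: a two-hour bound, i.e., "start + 2 hrs".
    # result = run_within(some_action, 2 * 60 * 60)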

Without a set of specified constraints, there is no ``code of behavior'' to which the
active agent must adhere and, thus, the resultant states of data acted upon are
unpredictable and potentially corrupt.

2.5.2 Wholeness

Wholeness has both the sense of unimpaired condition (i.e., soundness) and being
complete and undivided (i.e., completeness) [Webster 1988]. This aspect of integrity
focuses on the incorruptibility of the objects under examination. It is important to
recognize that objects exist at different layers of abstraction, e.g., bits, words, segments,
packets, messages, programs. Thus, the focus of protection for an object is to ensure that
it can only be accessed, operated on, or entered in specified ways and that it otherwise
cannot be penetrated and its internals modified or destroyed. The constraints applied
are those derived from the expected actions of the system's active agents. There are also
constraints derived from the temporal domain. Thus, the expected states of the system or
data objects are constrained by the expected actions of the system's active agents.

For example, consider the updating of a relational database with one logical update
transaction concurrently competing with another logical update transaction for a portion
of the set of data items in the database. The expected actions for each update are based
on the constraining concepts of atomicity, i.e., that the actions of a logical transaction
shall be complete and that they shall transform each involved individual data item from
one unimpaired state to a new unimpaired state, or that they shall have the effect of not
carrying out the update at all; serializability, i.e., the consistent ordering of all actions in the
logical transaction schedule; and mutual exclusion, i.e., exclusive access to a given data
item for the purpose of completing the actions of the logical transaction. The use of
mechanisms such as dependency ordering, locking, logging, and the two-phase commit
protocol enable the actions of the two transactions to complete leaving the database in a
complete and consistent state.
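
A minimal sketch of these constraining concepts follows (Python; the two-account
transfer and the lock discipline are hypothetical and stand in for a real DBMS). Mutual
exclusion is provided by a lock, and atomicity by undoing the partial update when the
whole logical transaction cannot complete.

    import threading

    # Minimal sketch of atomicity and mutual exclusion (illustrative only).
    # A real DBMS would add serializability via dependency ordering or
    # two-phase locking, plus logging and two-phase commit.
    balances = {"A": 100, "B": 50}
    lock = threading.Lock()

    def transfer(src, dst, amount):
        with lock:                             # mutual exclusion on the data items
            before = dict(balances)            # saved state for rollback
            try:
                if balances[src] < amount:
                    raise ValueError("insufficient funds")
                balances[src] -= amount
                balances[dst] += amount        # both updates or neither
            except Exception:
                balances.update(before)        # atomicity: undo the partial update
                raise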

2.5.3 Risk Reduction

Integrity is constrained by the inability to assure absoluteness. The potential results
of actions of an adversarial attack, or the results of the integrity failure of a human or
system component, place the entire system at risk of corrupted behavior. This risk could
include complete system failure. Thus, a strategy that includes relatively assured
capabilities provided by protection mechanisms, plus measures to reduce the exposure of
humans, system components, and data to loss of integrity, should be pursued. Such a risk
reduction strategy could include
the following:

a)Containment to construct ``firewalls'' to minimize exposures and opportuni-
ties to both authorized and unauthorized individuals, e.g., minimizing, sep-
arating, and rotating data, minimizing privileges of individuals, separating
responsibilities, and rotating individuals.

b)Monitors to actively observe or oversee human and system actions, to con-
trol the progress of the actions, log the actions for later review, and/or
alert other authorities of inappropriate action.

c)Sanctions to apply a higher risk (e.g., fines, loss of job, loss of professional
license, prison sentence) to the individual as compared to the potential
gain from attempting, conducting, or completing an unauthorized act.

d)Fault tolerance via redundancy, e.g., databases to preserve data or proces-
sors to preserve continued operation in an acknowledged environment of
faults. Contingency or backup operational sites are another form of redun-
dancy. Note: layered protection, or protection in depth, is a form of redun-
dancy to reduce dependency on the impenetrability of a single protection
perimeter.

e)Insurance to replace the objects or their value should they be lost or dam-
aged, e.g., fire insurance, theft insurance, and liability insurance.



(Figure 1. Not available for electronic version.)

Figure 1. Integrity Framework

3 INTEGRITY PRINCIPLES

``There is a large body of principles from among which those pertinent to any
application environment can be selected for incorporation into specific policy
statements. There is a need to identify as many as possible of those principles as might
be of sufficiently general benefit to warrant their inclusion in a list of such principles
from which the formulators of policy can select, cafeteria-style, those appropriate to their
needs'' [Courtney 1989].

In this section we discuss important underlying principles that can be used in the
design of integrity policies and their supporting or implementing mechanisms. These
principles involve not only those that we believe are fundamental to integrity, but also
those which underlie risk reduction with respect to integrity. These principles were
developed from a review of various written material on the topic of integrity, from our
framework formulated in the previous section, and by an investigation of existing
mechanisms deemed to be important to preserving and promoting integrity.

3.1 IDENTITY

The principle of identity is fundamental to integrity in that it defines ``sameness in all
that constitutes the objective reality of a thing: oneness; and is the distinguishing
character of a thing: individuality'' [Webster 1988]. Identity allows one to distinguish
and name or designate an entity. It is through identity that relationships are attributed
and named. It is through identity that functions are distinguished and named.
Identification of users, programs, objects, and resources includes both their
classification, i.e., their membership in classes of entities that will be treated in the same
or similar manner, and their individuation, i.e., their uniqueness that will allow the
individual entities to be addressed separately. It is through the process of identity that
one can establish the specification of wholeness and a specification of behavior.

All protected systems requiring authorization and accountability of individuals
depend on the unique identification of an individual human user. User identities need to
be protected from being assumed by others. User identities need to be authenticated to
confirm that the claimed identity has been validated by a specific protocol executed
between the system and the unique user. Further, to ensure traceability throughout the
system, the individual identity must be maintained for its entire period of activity in the
system.
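
As an illustration only, the following minimal sketch (Python; the user table, salting
scheme, and session record are hypothetical) shows a claimed identity being authenticated
by a simple secret-verification protocol, with the validated identity then carried with
the session for traceability.

    import hashlib, hmac, os

    # Minimal sketch of identification and authentication (illustrative only).
    # The user table, salting scheme, and session record are hypothetical.
    users = {}        # identity -> (salt, hash of salt + secret)

    def register(identity, secret):
        salt = os.urandom(16)
        users[identity] = (salt, hashlib.sha256(salt + secret.encode()).digest())

    def authenticate(identity, secret):
        """Validate the claimed identity; return a session bound to it."""
        if identity not in users:
            return None
        salt, stored = users[identity]
        claimed = hashlib.sha256(salt + secret.encode()).digest()
        if not hmac.compare_digest(claimed, stored):
            return None
        # The authenticated identity is kept for the whole period of activity
        # so that actions remain traceable to the individual.
        return {"identity": identity}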

Identity, through the use of conventions for naming, attributing, labelling,
abstracting, typing, and mapping, can provide for separation and control of access to
entities. Objects created within the system may require additional attribution to expand
the dimensional scope of their identity to meet specific system objectives such as
confidentiality, proof of origin, quality, or timeliness.

Another fundamental dimension of both subject and object identity is the
conveyance of identity attributes via the relationships of inheritance or replication.
Inheritance relationships include part-whole, parent-child, and type instantiation.
Attributes of interest include privileges conveyed by users to other users or to surrogate
subjects (processes acting on behalf of users), and authenticity of origin conveyed to
object copies. This aspect of identity is important to most identity-based policies for
access control, especially with respect to the propagation, review, and revocation of
privileges or object copies.

3.2 CONSTRAINTS

The principle of constraints is fundamental to integrity. A constraint denotes the state
of an active agent being checked, restricted, or compelled to perform some action. This is
central to the conceptual constraint of adherence to a code of behavior, or to what others
have termed ``expected behavior.'' Constraints establish the bounds of (integrity)
actions. When viewed from the context of objects, constraints are the transformation
restrictions or limitations that apply in transforming an object from an initial state to a
new specified (constrained) state. Constraints establish the bounds of (integrity) states.

3.3 OBLIGATION

The binding, constraining, or commitment of an individual or an active agent to a
course of action denotes the principle of obligation. Obligation is another fundamental
principle of integrity. It is reflected in the terms duty (required tasks, conduct, service,
and functions that constitute what one must do and the manner in which it shall be
done) and responsibility (being answerable for what one does). The bound course of
action, or constraint set, is generally interpreted as always being required or mandatory
and not releasable until the course of action comes to a natural conclusion or specified
condition. However, the sense of obligation is lost should the individual or active agent
become corrupted, i.e., the binding is broken rather than released. In this sense, an active
agent within a system, once initiated, is bound to proceed in its specified actions until it
reaches a natural or specified termination point or until the state of the system reaches a
failure or corruption point that drives the active agent away from the course of action to
which it is bound. This failure or corruption point could be the result of an individual
yielding to the temptation to perform an unauthorized action either alone or in collusion
with others. It also could be the result of faulty contact with the external environment
(e.g., undetected input error at a sensor), loss of support in the internal environment
(e.g., hardware failure), contact with corrupted objects (e.g., previously undetected
erroneous states), or contact with another corrupted active agent (e.g., improper
versioning in the runtime library).

There is also a temporal dimension to the course of action to which an active agent
becomes bound. This dimension binds sequencing, sets deadlines, and establishes
bounds of performance for the active agent. Obligation is then thought of in terms of
initiation or completion timing, e.g., eventually starting or completing, beginning or
finishing within an elapsed time, initiating or ending at a specified clock time, initiating
or completing in time for a new course of action to begin, or completing a specified
number of action cycles in a specified time. System designers, especially those involved
in real-time or deadline-driven systems, use the temporal dimension of obligation to
develop time slices for concurrent processes. Significant obligation issues in time slicing
include interprocess communication synchronization and the access of concurrent
processes to shared data.


One example of obligation is the concept of protocols, which are obligatory
conventions or courses of action for external and/or internal active entities to follow in
interacting with one another. Protocols can constrain the states of data or information to
be exchanged, a sequence of actions, or the mutual exclusion or synchronization of
concurrent asynchronous actions sharing resources or data objects.

3.4 ACCOUNTABILITY

Integrity, from the social and moral sense, implies that an individual has an
obligation to fulfill and that the individual is answerable to a higher (legal or moral)
authority who may impose sanctions on the individual who fails to adhere to the
specified code of action. Holding the individual answerable is the principle of
accountability, from which requirements are derived to uniquely identify and
authenticate the individual, to authorize his actions within the system, to establish a
historical track or account of these actions and their effects, and to monitor or audit this
historical account for deviations from the specified code of action. The enforcement
strength of sanctions may impact some individuals more than others; simply a reminder
of what is expected and the consequences of not meeting those expectations may prove
useful in promoting and preserving integrity.

3.5 AUTHORIZATION

One aspect of binding the active entity to a course of action is that of authorization. In
essence, authorization is the right, privilege, or freedom granted by one in authority
upon another individual to act on behalf of the authority. Employing the principle of
authorization provides one means of distinguishing those actions that are allowed from
those which are not. The authority may be the leader of an organization, an
administrator acting on behalf of that leader, or the owner of a particular asset who may
grant another individual access to that asset. The authority may not only grant access to
a particular asset, but may also prescribe a specific set of constrained actions that ensue
from the access authorization. Thus, there is a binding between the individual, the
course of action, and the asset(s) to be acted upon. Attempting to perform outside of
these privilege bounds without additional authority is an integrity violation.

Authorizations may be granted for a particular action or for a period of time;
similarly, authorization may be revoked. Authorized actions may be further constrained
by attributes of the authority, the recipient, and the object to be acted upon. For example,
in many systems, the creator of a data object becomes its owner, gaining discretionary
authority to grant access, revoke granted accesses, and restrict modes of access to that
data object. Such access authorization is identity based. However, access to that object
may be constrained by certain of its attributes (identified by labels). These constraints
may reflect an augmenting rules-based access policy requiring that mandatory checking
of corresponding attributes of the individual be accomplished in accordance with
specified rules prior to completing the access authorization. These attributes could
include National Security Classification Markings, other organizational sensitivity
hierarchies or compartmentation, or label attributes related to quality, e.g., the lower
quality ``initial draft'' associated with document transcribers vs. the higher quality
``final edited draft'' associated with document editors.
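
A minimal sketch of such an augmented authorization check follows (Python; the label
lattice, access-control list, and mode names are hypothetical). Identity-based
discretionary access is checked first, and the result is then further constrained by a
rules-based comparison of the individual's and the object's attributes.

    # Minimal sketch of identity-based plus rules-based authorization
    # (illustrative only; labels, ACL layout, and modes are hypothetical).
    LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2}

    def authorized(user, obj, mode):
        """Grant access only if both the ACL and the label rules allow it."""
        # Identity-based (discretionary) check: the owner granted this mode.
        if mode not in obj["acl"].get(user["identity"], set()):
            return False
        # Rules-based (mandatory) check on corresponding attributes.
        return LEVELS[user["clearance"]] >= LEVELS[obj["label"]]

    # Example use.
    doc = {"acl": {"alice": {"read", "write"}}, "label": "confidential"}
    alice = {"identity": "alice", "clearance": "secret"}
    assert authorized(alice, doc, "read")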

There may be a requirement in certain systems to provide for the dynamic enabling
or overriding of authorizations. Whether the conditions for enabling or override are
predetermined or left to the judgement of the user, explicit procedures or specific
accountable actions to invoke an enabling or bypass mechanism should be provided.

3.6 LEAST PRIVILEGE

Privileges are legal rights granted to an individual, role, or subject acting on
behalf of a user that enable the holder of those rights to act in the system within the
bounds of those rights. The question then becomes how to assign the set of system
privileges to the aggregates of functions or duties that correspond to a role of a user
or subject acting on behalf of the user. The principle of least privilege provides the
guidance for such assignment. Essentially, the guidance is that the active entity should
operate using the minimal set of privileges necessary to complete the job. The purpose
of least privilege is to avoid giving an individual the ability to perform unnecessary
(and potentially harmful) actions merely as a side-effect of granting the ability to
perform desired functions. Least privilege provides a rationale for where to install the
separation boundaries that are to be provided by various protection mechanisms.
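
The assignment guidance can be sketched as follows (Python; the roles and privilege sets
are hypothetical). Each role carries only the minimal privilege set needed for its
duties, and any attempted action outside that set is refused.

    # Minimal sketch of least-privilege role assignment (illustrative only).
    # Role names and privilege sets are hypothetical.
    ROLE_PRIVILEGES = {
        "data_entry": {"append_record"},
        "auditor":    {"read_record", "read_audit_log"},
        "operator":   {"read_record", "append_record"},
    }

    def perform(role, action):
        """Allow an action only if it is within the role's minimal privilege set."""
        if action not in ROLE_PRIVILEGES.get(role, set()):
            raise PermissionError("role %r lacks privilege %r" % (role, action))
        return "performed %s" % action

    # A data-entry clerk can append records but cannot read the audit log.
    perform("data_entry", "append_record")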

Least privilege will allow one individual to have different levels of privilege at
different times, depending on the role and/or task being performed. It also can have the
effect of explicitly prohibiting any one individual from performing another individual's
duties. It is a policy matter as to whether additional privileges are ``harmless'' and thus
can be granted anyway. It must be recognized that in some environments and with some
privileges, restricting the privilege because it is nominally unnecessary may
inconvenience the user. However, granting of excess privileges that potentially can be
exploited to circumvent protection, whether for integrity or confidentiality, should be
avoided whenever possible. If excess privileges must be granted, the functions requiring
those privileges should be audited to ensure accountability for execution of those
functions.

It is important that privileges and accesses not persist beyond the time that they are
required for performance of duties. This aspect of least privilege is often referred to as
timely revocation of trust. Revocation of privileges can be a rather complex issue when it
involves a subject currently acting on an object or who has made a copy of the object and
placed it in the subject's own address space.

3.7 SEPARATION

Separation refers to an intervening space established by the act of setting or keeping
something apart, making a distinction between things, or dividing something into
constituent parts. The principle of separation is employed to preserve the wholeness of
objects and a subject's adherence to a code of behavior. It is necessary to prevent objects
from colliding or interfering with one another and to prevent actions of active agents
from interfering or colluding with one another. Further, it is necessary to ensure that
objects and active agents maintain a correspondence to one another so that the actions of
one agent cannot affect the states of objects to which that agent should not have
correspondence, and so that the states of objects cannot affect the actions of agents to
which they should not have correspondence.

One example of separation is the concept of encapsulation, which is the surrounding
of a set of data, resources, or operations by an apparent shield to provide isolation (e.g.,
isolation from interference or unspecified access). With encapsulation, the protection
perimeter has well-defined (often guarded) entry and exit points (interfaces) for those
entities which have specified access. Encapsulation, when applied in the context of
software engineering, generally incorporates other separation concepts associated with
principles of software design, e.g., modularity and information hiding, and employs the
mechanism of abstract data types found in many modern programming languages.
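
Encapsulation can be sketched as follows (Python; the counter object and its interface
are hypothetical). The internal state is reachable only through the defined entry points,
which themselves enforce the object's constraints.

    # Minimal sketch of encapsulation as separation (illustrative only).
    # (Python hides internals only by convention; languages with abstract
    #  data types enforce the boundary.)
    class BoundedCounter:
        """State is hidden behind guarded entry points that enforce its bounds."""

        def __init__(self, limit):
            self._limit = limit          # internal: not part of the interface
            self._count = 0

        def increment(self):             # guarded entry point
            if self._count >= self._limit:
                raise ValueError("counter limit reached")
            self._count += 1

        def value(self):                 # read-only exit point
            return self._count

    c = BoundedCounter(limit=3)
    c.increment()
    assert c.value() == 1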

Other separation concepts include time or spatial multiplexing of shared resources,
naming distinctions via disjunctive set operators (categorical or taxonomic classification,
functional decomposition, hierarchical decomposition), and levels of indirection (virtual
mapping). All these separation concepts can be supported by the incorporation of the
principle of least privilege.

3.8 MONITORING

The ability to achieve an awareness of a condition or situation, to track the status of
an action, or to assist in the regulation of conditions or actions is the essence of the
principle of monitoring. Conceptually, monitoring combines the notion of surveillance
with those of interpretation and response. This ability requires a receiver to have
continuous or discrete access to specified source data through appropriate forms of
sensors. It also requires a specification of the condition, situation, event, or sequence of
events that is to be checked, observed, or regulated, and a specification of the response
that should be provided. This response specification generally includes invocation
linkages to alarms and to a family of handler processes, such as resource or device
handlers and exception or error handling processes. In some cases, monitors will
require more privilege than other subjects within the system.

The principle of monitoring is key to enforcement of constrained actions in that the
actions must be observed, understood, and forced to comply with the imposed
constraints. When the actions are not compliant, either additional system-provided
corrective actions or alarms to request external corrective actions are invoked.
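
As an illustration only, a monitor of this kind can be sketched as follows (Python; the
observed value, acceptable range, and handlers are hypothetical). The monitor observes a
condition, interprets it against its specification, and responds by invoking a corrective
handler or an alarm.

    # Minimal sketch of the monitoring principle (illustrative only).
    # The observed value, range specification, and handlers are hypothetical.
    def monitor(read_value, low, high, correct, alarm):
        """Observe, interpret against the specification, and respond."""
        value = read_value()
        if low <= value <= high:
            return value                     # compliant: no response needed
        if correct(value):                   # system-provided corrective action
            return value
        alarm("value %r outside [%r, %r]" % (value, low, high))
        return value

    # Example wiring for a thermostat-style check.
    monitor(lambda: 27.5, low=18.0, high=24.0,
            correct=lambda v: False,
            alarm=lambda msg: print("ALARM:", msg))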

The principle of monitoring is used in mutual exclusion schemes for concurrent
processes sharing data or resources (e.g., Hoare's Monitors) and in the operation of
interprocess communications of asynchronous processes to provide process
synchronization. Monitoring is the basis for auditing and for intrusion detection. Other
examples employing this principle include range, value, or attribute checking
mechanisms in the operating system, database management systems (DBMS), or in an
applications program; an embedded feedback-loop control system, such as a
thermostat-driven cooling system; and the security ``reference'' monitor in trusted
systems.

3.9 ALARMS

Whenever systems encounter an error or exception condition that might cause the
system to behave incorrectly with respect to the environment (an integrity failure), the
system designer should incorporate the principle of alarms to alert the human operator
or individuals in the external environment to the unmanageable condition. This
requirement mandates a careful analysis not only of the internal aspects of the system,
but also of possible influences from the external environment. Further, the designer
must not only consider the alarms, but also their sufficiency.

Alarms must be designed such that they are sufficient to handle all possible alarm
conditions. For example, if a small field on a display is allocated to displaying all alarm
conditions, and only one alarm condition may be displayed at once, a minor alarm (such
as a low-power alarm) may hide a major alarm (such as indication of intrusion). Thus, if
an intruder could artificially generate a low-power condition, he could hide the alarm
indicating an unauthorized access.

Alarm sufficiency is a technical design issue which, if overlooked, can have serious
impact. It must be required that alarms not be able to mask one another. While there
may not be room for all alarm messages to be displayed at once, an indicator of the
distinct alarm conditions must be given so that the user does not mistakenly believe that
an ``alarm present'' indicator refers to a less severe condition than the alarm actually
involved. In general, a single indicator should not group several events under the same
alarm message. The central concepts here are that alarms must always reflect an accurate
indication of the true status of events and alarm messages must always be visible.
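
A minimal sketch of a non-masking alarm indicator follows (Python; the condition names
and display routine are hypothetical). Distinct alarm conditions are accumulated in a set
rather than written into a single field, so a minor alarm cannot hide a major one.

    # Minimal sketch of non-masking alarms (illustrative only).
    # Condition names and the display routine are hypothetical.
    active_alarms = set()

    def raise_alarm(condition):
        active_alarms.add(condition)       # never overwrites other alarms
        display()

    def clear_alarm(condition):
        active_alarms.discard(condition)
        display()

    def display():
        # Every distinct condition remains visible; a low-power alarm cannot
        # hide an intrusion alarm.
        print("ALARMS(%d): %s" % (len(active_alarms),
                                  ", ".join(sorted(active_alarms)) or "none"))

    raise_alarm("low power")
    raise_alarm("intrusion detected")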

3.10 NON-REVERSIBLE ACTIONS

Non-reversible actions can prevent the effect of an action from later being hidden or
undone.   Non-reversible actions support the principle of accountability as well as
address a unique set of problems, i.e., emergency revocations or emergency destruction.
Non-reversible actions are, in general, simply a type of restriction on privilege. Thus, the
principle can often be implemented using mechanisms intended for granting privileges.
For example, a non-reversible write operation can be provided by giving a user write
access but no other access to an object. Likewise, an emergency destruction operation
can be provided, at least in the abstract, by giving a user ``destroy'' permission but not
``create'' permission on an object.

``Write-once'' media provide one example of the use of this principle. These media
are useful when the integrity concern is that the users not be able to later modify data
they have created. Creation of audit records is another example employing this principle
in which users may be allowed to write data, but then not modify the written data to
prevent users from erasing evidence of their actions. Disposable locks used on shipping
containers (which can only be locked once and cannot be reused) are yet another
example of this principle's use.
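
As an illustration only, the audit-record example can be sketched as follows (Python;
the log object and record format are hypothetical). Users may append records, but the
interface offers no way to modify or delete what has already been written.

    import time

    # Minimal sketch of a non-reversible (append-only) audit log
    # (illustrative only; the record format is hypothetical).
    class AuditLog:
        def __init__(self):
            self._records = []

        def append(self, actor, action):
            # "Write" permission only: there is deliberately no interface for
            # modifying or deleting a record once it has been appended.
            self._records.append((time.time(), actor, action))

        def read_all(self):
            return tuple(self._records)    # a copy that cannot alter the log

    log = AuditLog()
    log.append("alice", "transferred funds")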

3.11 REVERSIBLE ACTIONS

The ability to recognize an erroneous action or condition that would corrupt the
system if actions that depend on the erroneous conditional state were allowed to
continue often establishes the need to back out the erroneous action or ``undo'' the
condition. This is the principle of reversible actions. System designers most often
incorporate this principle at the user interface, e.g., in text editors, where a user may
readily notice keying errors or command errors and reverse them prior to their having a
detrimental and not easily reversible or non-reversible effect on the object state. This
principle is also used to support atomicity in database transaction processing through
the protocol of ``rollback,'' which undoes the portion of a transaction already
accomplished when the entire transaction cannot be accomplished. Such reversible
actions are key to leaving the database in a complete and unimpaired state.
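
A minimal sketch of reversible actions follows (Python; the document buffer and undo
stack are hypothetical). Each edit records enough information to be undone, so an
erroneous action can be backed out before it has a lasting effect.

    # Minimal sketch of reversible actions via an undo stack (illustrative only).
    class Editor:
        def __init__(self):
            self.text = ""
            self._undo = []                 # saved states for reversal

        def insert(self, fragment):
            self._undo.append(self.text)    # record how to reverse the action
            self.text += fragment

        def undo(self):
            if self._undo:
                self.text = self._undo.pop()

    e = Editor()
    e.insert("fire missile on coordinates 12, 34")
    e.undo()                                # the erroneous command is backed out
    assert e.text == ""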

3.12 REDUNDANCY

Redundancy in computer systems is a risk-reducing principle that involves the
duplication of hardware, software, information, or time to detect the failure of a single
duplicate component and to continue to obtain correct results despite the failure
[Johnson 1989]. Redundant processing is commonly used in fault-tolerance applications.
The same processing is performed by more than one process, and the results are
compared to ensure that they match. The need for redundancy varies depending on the
application. Redundant processing is commonly used in the implementation of critical
systems in which a need for high reliability exists. Examples include multiply redundant
processors in avionics systems, and traditional accounting systems in which auditors
reproduce the results of accountants to verify the correctness of their results. In
situations where a system may be subjected to adverse conditions, such as on the
battlefield or in a hazardous environment, or in systems which may be subject to an
adversarial attack that is attempting to disable operations controlled by the system,
redundancy may be essential. Thus, it may be desirable to require it for certain systems.

Hardware redundancy is the most familiar type of redundancy, and involves
duplicating hardware components.   Software redundancy involves adding software
beyond what is necessary for basic operation to check that the basic operations being
performed are correct. N-version programming, in which different teams independently
provide unique versions of the same application program rather than replicating a
single version, is one example of
software redundancy. The efficacy of software redundancy to support correct operations
remains an open issue. For example, it has been shown that n-version programming
teams tend to have difficulty with the identical hard problems of an application [Knight
1986].

Information redundancy involves duplication of information. Duplicate copies of
information are maintained and/or processed, so that failures can be detected by
comparing the duplicated information. To further assist in detection of failures, the two
copies of information may be represented in different ways (e.g., parity bits or cyclic
redundancy codes). By exchanging bit positions of individual data bits in a byte or word,
or by complementing the bits of all data, failures such as those that modify a specific bit
position in a byte or word, or which force specific bits to always be zero or one, can be
detected.
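
Information redundancy can be sketched as follows (Python; the payload and the use of a
32-bit CRC are hypothetical). The redundant check value is stored with the data, and a
later mismatch reveals that some bits were modified or forced.

    import zlib

    # Minimal sketch of information redundancy using a CRC (illustrative only).
    def protect(data: bytes):
        return data, zlib.crc32(data)           # store the redundant check value

    def verify(data: bytes, crc: int) -> bool:
        return zlib.crc32(data) == crc          # mismatch reveals modified bits

    payload, crc = protect(b"fire type X missile on coordinates 12, 34")
    corrupted = b"fire type Y missile on coordinates 12, 34"
    assert verify(payload, crc)
    assert not verify(corrupted, crc)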

Time redundancy involves repeating an operation at several separated points in
time (e.g., resending a message that was transmitted with errors). While this approach
will not detect constant, persistent failures that always cause an operation to fail in the
same way, it can often detect intermittent or transient failures that only affect a
subset of the repeated operations.

3.13 MINIMIZATION

Minimization is a risk-reducing principle that supports integrity by containing the
exposure of data or limiting opportunities to violate integrity. It is applicable to the data
that must be changed (variable minimization and the more general case, data
minimization), to the value of information contained in a single location in the system
(target value minimization), to the access time a user has to the system or specific data
(access time minimization), and to the vulnerabilities of scheduling (scheduling
regularity minimization). Each application is discussed in more detail in the
following sections.

3.13.1 Variable Minimization

The ability of a subject to violate integrity is limited to that data to which a subject
has access. Thus, limiting the number of variables which the user is allowed to change
can be used to reduce opportunities for unauthorized modification or manipulation of a
system. This principle of variable minimization is analogous to least privilege. Least
privilege is usually used to describe restrictions on actions a subject is allowed to
perform, while variable minimization involves limiting the number of changeable data
to which a subject has access.

For example, a subject may be authorized to transmit messages via a
communications system, but the messages may be in a fixed format, or limited to a small
number of fixed messages in which the subject can fill in only specific fields. Thus, a
subject may be allowed to say ``fire type X missile on coordinates __, __'' but may not be
allowed to substitute missile type Y for missile type X.

3.13.2 Data Minimization

Variable minimization generalizes to the principle of data minimization, in which the
standardized parts of the message or data are replaced by a much shorter code. Thus
``Fire missile'' might be replaced with the digit ``1'', and ``on coordinates'' might be
eliminated altogether, giving a message of the form

1 X ___ ___

where ___ ___ is replaced by the coordinates. The shortened forms of standardized
messages or phrases are sometimes called brevity codes. When implemented in a
computer system, integrity can be further enhanced by providing menu options or
function keys by which the operator specifies the standardized message, thus reducing
the potential for error in writing the code. On the other hand, as these codes become
shorter, there is an increased likelihood of spurious noise or errors generating an
unintentional valid message.
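
A minimal sketch of a brevity-code scheme follows (Python; the code table and message
format are hypothetical). Standardized phrases are replaced by short codes chosen from a
menu, and the shortened message is expanded back at the receiver; the shorter the codes,
the more care is needed to reject spurious input.

    # Minimal sketch of brevity codes for data minimization (illustrative only).
    # The code table and message layout are hypothetical.
    CODES = {"1": "Fire missile"}                # menu of standardized messages

    def encode(action_code, missile_type, x, y):
        if action_code not in CODES:
            raise ValueError("unknown brevity code")
        return "%s %s %s %s" % (action_code, missile_type, x, y)

    def decode(message):
        code, missile_type, x, y = message.split()
        if code not in CODES:
            # Very short codes raise the risk that noise forms a valid message,
            # so anything unrecognized must be rejected, not guessed at.
            raise ValueError("unknown brevity code")
        return "%s type %s on coordinates %s, %s" % (CODES[code], missile_type, x, y)

    assert decode(encode("1", "X", "12", "34")) == "Fire missile type X on coordinates 12, 34"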

3.13.3 Target Value Minimization

The threat of attack on a system can be reduced by minimizing target value. This
practice involves minimizing the benefits of attack on a given system, for example, by
avoiding storing valuable data on exposed systems when it can be reasonably retrieved
from a protected site on an as-needed basis. Distributing functionality among subjects is
another means of minimizing target value and, thus, reduces vulnerability. Highly
distributed systems use this approach, in which any one processing element is of little
importance, and the system is sufficiently widely distributed that access to enough
processing elements to have major impact is not feasible.

3.13.4 Access Time Minimization

Access time minimization is a risk-reducing principle that attempts to avoid
prolonging access time to specific data or to the system beyond what is needed to carry
out requisite functionality. Minimizing access time reduces the opportunity for abuse.
Timeouts for inactivity, rotation, and binding of access to ``normal'' or specified working
hours are variations of this approach. This principle can serve distinct integrity
functions, particularly in the case of more analytically oriented users of data.

3.14 ROUTINE VARIATION