
Information Flow/Multi-level security (11/14/02)

Info flow in a program

Program P(v, s): v is a public input, s is a secret input; P may use randomness.

Output includes every behavior the adversary can see

Does P leak info about s? The adversary can see EVERYTHING except s.
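
A minimal sketch of the two cases (the program names and bodies are my illustration, not from lecture): one program whose output depends on s, one whose output is independent of s.

    import secrets

    def p_leaky(v, s):
        # The output depends on the secret: an adversary who picks
        # s_0 = 0, s_1 = 1 can read the coin flip off the low bit.
        return v + (s % 2)

    def p_safe(v, s):
        # The output is independent of s; randomness is fine, as long
        # as its distribution doesn't depend on the secret.
        return v + secrets.randbelow(2)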

Play a game with the adversary:

  1. Tell the adversary P.
  2. The adversary chooses v, s_0, s_1 (secret 0 and secret 1).
  3. We flip a coin to get a bit b, which is 0 or 1.
  4. We give the adversary the output P(v, s_b).
  5. The adversary guesses b.

To win, the adversary must guess right more than 50% of the time, i.e. do better than random guessing (see the simulation sketch below).
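
The game is easy to phrase as code. A sketch under my own naming (run_game and win_rate are not from lecture):

    import secrets

    def run_game(program, choose, guess):
        v, s0, s1 = choose()            # step 2: adversary picks inputs
        b = secrets.randbelow(2)        # step 3: our secret coin flip
        out = program(v, (s0, s1)[b])   # step 4: adversary sees P(v, s_b)
        return guess(out) == b          # step 5: did the guess hit b?

    def win_rate(program, choose, guess, trials=10_000):
        # Estimate the adversary's success probability; anything
        # significantly above 1/2 means the program leaks.
        return sum(run_game(program, choose, guess) for _ in range(trials)) / trials

    # Against p_leaky above, choose = lambda: (0, 0, 1) and
    # guess = lambda out: out % 2 win every single round.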

P leaks info iff there exists an adversary who can win.

P leaks info iff the distributions P(v, s_0) and P(v, s_1) are distinguishable.

In other words: to avoid leakage, s should have no distinguishable impact on output.

How do you enforce this?

In practice, you can't demonstrate non-leakiness

So, assume all programs leak their inputs. "Contagion model": when a program sees an input, it's "infected," and its output is assumed to reveal something about its inputs.
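
The contagion model is essentially taint tracking. A minimal sketch (the Tainted class is my illustration, not a lecture artifact):

    class Tainted:
        # A value plus a flag recording whether a secret ever touched it.
        def __init__(self, value, tainted=False):
            self.value = value
            self.tainted = tainted

        def __add__(self, other):
            # Contagion: the result is infected if either operand was.
            if isinstance(other, Tainted):
                return Tainted(self.value + other.value,
                               self.tainted or other.tainted)
            return Tainted(self.value + other, self.tainted)

    secret = Tainted(42, tainted=True)
    public = Tainted(7)
    print((secret + public).tainted)   # True: the output is infected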

Multi-Level Security

A failed model? A general method for specifying and enforcing information-flow policy.

Assumes contagion model

Relies on: a lattice of security labels, built on a partial order.

A partial order is a relation <= that is:

  1. reflexive: for all x, x <= x
  2. transitive: for all x, y, z, if x <= y and y <= z, then x <= z
  3. antisymmetric: for all x, y, if x <= y and y <= x, then x = y
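
These three axioms are cheap to brute-force over a finite label set. A checker sketch (is_partial_order is my name for it):

    from itertools import product

    def is_partial_order(elems, leq):
        reflexive     = all(leq(x, x) for x in elems)
        transitive    = all(leq(x, z) or not (leq(x, y) and leq(y, z))
                            for x, y, z in product(elems, repeat=3))
        antisymmetric = all(x == y or not (leq(x, y) and leq(y, x))
                            for x, y in product(elems, repeat=2))
        return reflexive and transitive and antisymmetric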

Example of lattice labeling

Labels are sets of compartments; the ordering relation is subset: L_1 <= L_2 iff L_1 ⊆ L_2.
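
Concretely, using is_partial_order from the sketch above (the compartment names "crypto" and "nuclear" are my illustration):

    labels = [frozenset(), frozenset({"crypto"}), frozenset({"nuclear"}),
              frozenset({"crypto", "nuclear"})]
    subset = lambda a, b: a <= b          # <= on Python sets is the subset test

    print(is_partial_order(labels, subset))   # True

    # It's also a lattice: any two labels have a least upper bound
    # (union) and a greatest lower bound (intersection).
    print(labels[1] | labels[2])   # frozenset({'crypto', 'nuclear'})
    print(labels[1] & labels[2])   # frozenset()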

Bell-LaPadula model (the theorem comes below): assume security labels drawn from a lattice. Every file and every program has a label.

"No read up": program P can read file f only if L(f) <= L(P). This is the idea of "clearance."

"No write down": program P can modify file f only if L(P) <= L(f). Remember the contagion model: I can't write to a less-restricted file because I might leak info that could then be read by less privileged users.
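
The two rules make a tiny reference monitor. A sketch reusing the subset ordering from above (can_read and can_write are my names):

    def can_read(label_p, label_f, leq):
        # "No read up": P may read f only if L(f) <= L(P).
        return leq(label_f, label_p)

    def can_write(label_p, label_f, leq):
        # "No write down": P may write f only if L(P) <= L(f).
        return leq(label_p, label_f)

    subset = lambda a, b: a <= b
    p = frozenset({"crypto"})
    print(can_read(p, frozenset(), subset))                        # True: reading down is fine
    print(can_write(p, frozenset(), subset))                       # False: would leak down
    print(can_write(p, frozenset({"crypto", "nuclear"}), subset))  # True: writing up is allowed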

Theorem: if L(f_1) < L(f_2) (strictly), and the two rules are enforced, then information from f_2 cannot leak into f_1. Why: any program P that carries the info must read on one side and write on the other, so a flow from f_2 to f_1 forces L(f_2) <= L(P) <= L(f_1), and by transitivity L(f_2) <= L(f_1), contradicting the strict ordering.

Example: a workstation that keeps track of what's classified and what's unclassified.