Delegation & Agreement Based Policy
In my previous post I suggested a Byzantine contract paradigm for
certificate policy, especially concerning delegation. Instead of
giving "permission" to delegate, assume that any entity (and, in particular, the
beneficiary of an authorization) will do whatever it likes unless (a) it promises not
to (or the specification promises that it won't), and (b) this promise is enforceable
by the verifier, or at least non-performance on this promise is verifiable.
Assume as well that third parties will do whatever they like unless prevented.
These are the pessimistic assumptions used in distributed computing theory (the
Byzantine generals problem in particular comes to mind); they apply all the more
when dealing with the security of distributed systems. "Fraud", poorly defined
in other security methodologies, is well defined in the contractual
paradigm as being a breach of a specific agreement.
In some cases authorization policies are self-enforcing, and we can rely on
bearer certificates. Otherwise, verification and enforcement rely on observation
of the behavior of an object, often persistence of behavior across many
authorizations to it. One method of facilitating this observation is via public
key-object bindings. These bindings may be certified by some authority, but
this is a weak (not self-enforcing, and not even strongly verifiable) kind of
authorization due to a variety of identification frauds: identity lending, identity
theft, man-in-the-middle, etc. To be useful these bindings must be
further associated with "reputation" information, via observing the persistence
of behavior associated with the public key. With such bindings we can have
non-bearer, or identified, certificates. We can then make authorizations which are
non-delegable up to the non-delegability of the public key-object binding.
To partake of such a non-delegatory service one must hold not only an
authorization certificate, but also an identity certificate. If either the
specific authorization or the authentication via identity certificate fails,
the authorization fails. This combination gives an identified authorization certificate.
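As a rough sketch of this combined check (all names are hypothetical, and an HMAC tag stands in for a real digital signature purely for illustration), an identified authorization might be verified like so: both certificates must check out, and both must name the same holder key, or the whole authorization fails.

```python
import hmac, hashlib

def sign(key: bytes, message: bytes) -> bytes:
    """Stand-in for a real signature scheme: an HMAC tag (illustrative only)."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(sign(key, message), tag)

def check_identified_authorization(auth_cert, id_cert, issuer_key, ca_key, holder):
    """Succeeds only if BOTH the authorization certificate and the identity
    certificate verify AND both bind to the same holder key. A failure of
    either check fails the authorization as a whole."""
    auth_ok = verify(issuer_key, auth_cert["body"], auth_cert["sig"])
    id_ok = verify(ca_key, id_cert["body"], id_cert["sig"])
    same_holder = auth_cert["holder"] == id_cert["holder"] == holder
    return auth_ok and id_ok and same_holder

# Hypothetical issuance of the two certificates:
issuer_key, ca_key = b"issuer-secret", b"ca-secret"
holder = "alice-pubkey"
auth = {"body": b"may-use-service", "holder": holder}
auth["sig"] = sign(issuer_key, auth["body"])
ident = {"body": b"alice <-> alice-pubkey", "holder": holder}
ident["sig"] = sign(ca_key, ident["body"])
```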
With identified certificates we've reduced a wide variety of possible delegation frauds
down to identification fraud. Of course, solving the several varieties of identification
fraud is still difficult, but at least we've reduced a bunch of nebulous delegation
problems down to a well-defined one. Besides the weakness of solutions to
identification fraud, and the need to further bind to reputation information
for identity to be useful, identity certificates also introduce a major source of
confidentiality loss, especially due to traffic analysis. But many consider
these prices worth paying due to the lack of bearer certificate solutions to
security problems.
Non-bearer certificates require the verifier to have trustworthy verification chains
for the identity certificates of all entities to be verified, as well as the chain(s) for
the particular authorization itself. The identification certificates cannot, as
far as we know, ever be made as strongly secure as delegable
authorizations.
In many situations we don't really care about delegation. If for example we are
only interested in limiting the usage of a service or associated resources, the
identity of the user is irrelevant. In these cases we can issue bearer certificates,
usable N times. This is strongly enforceable via clearing lists, with no delegation
problems.
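A minimal sketch of such a clearing list (the class and method names are hypothetical): the certificate carries only a serial number, never an identity, and the clearing house simply decrements the remaining uses on each redemption, so over-use is caught without any delegation question ever arising.

```python
class ClearingHouse:
    """Enforces N-times-usable bearer certificates via a clearing list.
    The holder's identity is never consulted -- only the certificate's
    serial number and its remaining use count."""

    def __init__(self):
        self._remaining = {}  # serial -> uses left

    def issue(self, serial: str, n: int) -> dict:
        self._remaining[serial] = n
        return {"serial": serial}  # the bearer certificate: no identity inside

    def redeem(self, cert: dict) -> bool:
        left = self._remaining.get(cert["serial"], 0)
        if left <= 0:
            return False  # exhausted or unknown: over-use caught here
        self._remaining[cert["serial"]] = left - 1
        return True

clearing = ClearingHouse()
ticket = clearing.issue("ticket-1", 2)  # usable N=2 times
```

Whoever presents the certificate may use it; delegation is a non-issue because the service never asks who the bearer is.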
I suggest the following principles for determining when certificate chains should
be automatically followed:
* "Trust" should be cast in narrow, very specific terms: "trusted to do specifically X"
* These specific terms should be converted into proactive, self-enforcing protocols
where possible
* When a self-enforcing protocol is not possible, strong verification of these
specific terms through unforgeable auditing trails and frequent verification
checks should be instituted.
* Otherwise, the "chain of trust" is in computer security terms very weak, relying
on ill-defined human trust and institutions rather than on the security properties
of the software. The user interface should prominently make this known to the users,
and allow the users to input their judgements regarding these people and
institutions. Confusing human with computer security makes great deceptions possible.
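One way to operationalize these principles (the labels and strength ranking here are my own hypothetical encoding, not part of the original proposal): rate each link in a chain by its basis of trust, treat the chain as no stronger than its weakest link, and only follow it automatically when even the weakest link rests on a self-enforcing protocol or strong auditing, surfacing anything weaker to the user.

```python
# Ordering follows the principles above: self-enforcing protocols are
# strongest, audited verification next, bare human/institutional trust last.
STRENGTH = {"self_enforcing": 3, "audited": 2, "human_trust": 1}

def chain_strength(links):
    """Return (weakest basis, auto_follow). A chain is only as strong as
    its weakest link; chains resting on human trust must not be followed
    silently, but shown to the user for their own judgement."""
    weakest = min(links, key=lambda link: STRENGTH[link["basis"]])
    auto_follow = STRENGTH[weakest["basis"]] >= STRENGTH["audited"]
    return weakest["basis"], auto_follow
```

A verifier using this would follow all-audited or all-self-enforcing chains automatically, and prompt the user otherwise.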
More articles about the contractual paradigm, blacklisting (negative reputation
systems), enforcement, verification, and related topics can be found on my web page.
Nick Szabo
szabo@best.com
http://www.best.com/~szabo/