Some comments on draft-ietf-spki-cert-theory-02.txt
- To: firstname.lastname@example.org
- Subject: Some comments on draft-ietf-spki-cert-theory-02.txt
- From: "Frank O'Dwyer" <email@example.com>
- Date: Thu, 12 Mar 1998 21:23:27 -0000
- Sender: firstname.lastname@example.org
Some comments on the latest cert theory draft:
>7.1 Key and certificate storage
>The common practice which has evolved is that of the requester
>supplying any and all certificates which the verifier needs in order
>to permit the requested action. In this model, there may be no need
>for certificate servers or if there are servers, it is likely that
>they will be accessed by the requester (possibly under access
>control) rather than the verifier.
This works in a lot of cases, but I'm not sure that it works in general.
In many scenarios it is difficult for a requester to know a priori all of
the certificates which will be acceptable and/or useful to a verifier.
In general this would require that each verifier divulge information
about which entities it regards as authoritative for various assertions,
or else this would have to be somehow obvious to the requester.
For example, in a PGP-style model, the requester would require intimate
knowledge of the trust settings on the verifier's key ring, which is not
normally public information and might even be regarded as confidential.
Even if the verifier were prepared to offer this information to the
requester, in unidirectional networks (e.g. broadcast) there may not
be the connectivity for the requester to receive it. Moreover,
in broadcast or multicast networks, there may simply be too many
verifiers for transmission of the necessary (possibly large) set of
certificates to be feasible or efficient. In such networks the set of
verifiers that are tuned in may not even be known to the requester,
which may make the task of determining appropriate certificates
impossible. Lastly, where bandwidth is a concern, 'pushing' certificates
may be wasteful, which in turn raises fairly nasty issues of certificate
caching and distributed cache consistency (something that would
have to be re-invented for each higher-layer protocol).
In general it seems more robust for the verifier to make the determination
of which certificates it requires, with the requester perhaps supplying
some certificates as hints for the verifier in the hope that they will be
sufficient (for example to facilitate verifiers without connectivity
to certificate servers).
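To make the distinction concrete, here is a toy sketch of the verifier-driven model I have in mind. All of the names here (Cert, fetch_from_server, verify_chain, the one-entry directory) are illustrative inventions, not anything from the draft: the point is only that the verifier applies its own policy, consulting requester-supplied certificates as hints before falling back to its own lookup.

```python
# Hypothetical sketch: verifier-driven certificate resolution with
# requester-supplied hints. Not from the draft; names are made up.

from typing import Optional


class Cert:
    """A stand-in for a certificate binding an issuer to a subject."""

    def __init__(self, issuer: str, subject: str):
        self.issuer = issuer
        self.subject = subject


def fetch_from_server(issuer: str, subject: str) -> Optional[Cert]:
    """Stand-in for a certificate-server lookup made by the verifier."""
    directory = {("CA", "Alice"): Cert("CA", "Alice")}
    return directory.get((issuer, subject))


def verify_chain(trusted_issuer: str, subject: str, hints: list) -> bool:
    """The verifier decides what it needs.  Hints pushed by the
    requester are tried first; if none suffice, the verifier pulls
    what it requires from its own server."""
    for cert in hints:
        if cert.issuer == trusted_issuer and cert.subject == subject:
            return True  # a hint was sufficient; no server round trip
    fetched = fetch_from_server(trusted_issuer, subject)
    return fetched is not None and fetched.subject == subject


# The requester pushes a hint it happens to have; the hint is useless
# here, but the verifier still succeeds via its own lookup.
print(verify_chain("CA", "Alice", hints=[Cert("OtherCA", "Alice")]))
```

The useful property is that the requester's guesswork is best-effort only; a wrong or missing hint degrades to an extra lookup by the verifier rather than a failed request.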
>7.2 Protection of Private Keys
> For any public key cryptosystem to work, it is essential that a
> keyholder keep its private key to itself.
Although the above seems like a natural assumption, there is
at least one important instance where it may not be true. A
private key might be shared with a trusted intermediary such
as a firewall, forgoing end-to-end security in order to facilitate
content inspection (e.g. virus scanning) or access control. This
could also be done by sharing session keys, but in some cases
it's simpler just to share the private keys used for key agreement
(e.g. as done in the SKIP protocol). Yes, I know it sounds like an
abomination, but many organisations are uncomfortable with
allowing end-to-end encrypted data (e.g. SSL) through their firewalls.
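A toy Diffie-Hellman sketch shows why sharing the long-term private key (rather than per-session keys) is enough for the intermediary. The parameters below are deliberately tiny and insecure, and the arrangement is only a sketch of the SKIP-style idea, not SKIP itself: once the firewall holds Bob's private value, it can derive every session secret from the peer's public value alone.

```python
# Toy Diffie-Hellman with tiny, insecure parameters, purely to
# illustrate the point; real systems use large primes.

P, G = 2087, 5  # toy prime modulus and generator

alice_priv = 123
bob_priv = 456  # Bob's long-term private key
alice_pub = pow(G, alice_priv, P)
bob_pub = pow(G, bob_priv, P)

# The end-to-end shared secret Alice and Bob agree on
secret_alice = pow(bob_pub, alice_priv, P)
secret_bob = pow(alice_pub, bob_priv, P)
assert secret_alice == secret_bob

# If Bob's organisation hands bob_priv to its firewall, the firewall
# recovers the same secret from Alice's public value and can decrypt
# the traffic for virus scanning or access control.
secret_firewall = pow(alice_pub, bob_priv, P)
assert secret_firewall == secret_bob
```

Sharing session keys instead would achieve the same inspection, but then the endpoint has to export a fresh key per session, which is why just sharing the key-agreement private key is sometimes the simpler arrangement.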