
Re: Some comments on draft-ietf-spki-cert-theory-02.txt



[...]
>>In general it seems more robust for the verifier to make the determination
>>of which certificates it requires, with the requester perhaps supplying
>>some certificates as hints for the verifier in the hope that they will be
>>sufficient (for example to facilitate verifiers without connectivity
>>to certificate servers).
>
>The whole issue of finding the right certificates and getting them to the
>verifier is wide open.  The certificate discovery process might prove
>complex enough in some cases (e.g., give me the shortest path of PGP key
>signatures between me and foo@bla.com) that a machine could do that as a
>service.  OTOH, a collection of certificates about you is a dossier on you,
>so there are privacy issues here.

True - but note that in the above example the shortest path may
not be an acceptable path (it may involve signers unknown or
unacceptable to the discoverer). So the discoverer may need
to tell the service something about what it might accept - or it
may retrieve a larger set of data than necessary and whittle it
down itself.  In other words, the verifier may also have privacy
concerns when searching for certificates (and this includes
when facilitating "certificate push"). These concerns are really
orthogonal to the push/pull question - the requestor
has to reveal some certificates to the verifier (somehow), and
something has to know what certificates a verifier might be
willing to use.  (With push, "something" == requestor; with
pull, "something" == verifier; and with an intermediate
service, "something" == certificate server.) But regardless
of how this information flows, both the requestor and the verifier
may lose some privacy, and the issue is how to limit this
exposure.
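To illustrate the "shortest vs. acceptable path" point, here is a toy
sketch (the names and the signature graph are invented for
illustration) of discovery constrained by the discoverer's set of
acceptable signers:

```python
from collections import deque

def shortest_acceptable_path(graph, start, goal, acceptable):
    """Breadth-first search over a key-signature graph, visiting only
    signers the discoverer is willing to trust.  'graph' maps a key to
    the keys it has signed; 'acceptable' is the discoverer's policy."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in graph.get(node, ()):
            # Intermediate signers must be acceptable; the endpoints
            # are fixed by the query itself.
            if nxt in seen or (nxt != goal and nxt not in acceptable):
                continue
            seen.add(nxt)
            queue.append(path + [nxt])
    return None

# Invented signature graph: 'mallory' offers the shortest route to
# foo@bla.com, but the discoverer does not accept her as a signer,
# so the longer path via alice and bob is returned instead.
graph = {
    "me":      ["mallory", "alice"],
    "mallory": ["foo@bla.com"],
    "alice":   ["bob"],
    "bob":     ["foo@bla.com"],
}
print(shortest_acceptable_path(graph, "me", "foo@bla.com",
                               acceptable={"me", "alice", "bob"}))
# -> ['me', 'alice', 'bob', 'foo@bla.com']
```

Passing the acceptable-signer set to a discovery service is exactly
the kind of policy disclosure discussed above.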

My hunch is that in general some intermediate discovery
service is needed (perhaps with access controls, themselves
bootstrapped using less sensitive certificates and/or ACLs),
and that 'push' should be regarded as a special simple case
of this in the first instance. Another simplified and probably
useful case would be one where the requestor directly implements
an access-controlled discovery service for its own certificates.
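A rough sketch of that last case - a requestor-side store whose
lookup is access-controlled, with plain "push" falling out as the
unfiltered query (the interface and all names are invented for
illustration):

```python
class CertificateStore:
    """Hypothetical access-controlled certificate store.  A requestor
    could run one of these for its own certificates; 'push' is then
    just the degenerate case of querying it with no filter."""

    def __init__(self, certs, allowed_clients):
        self._certs = certs                # the dossier being protected
        self._allowed = allowed_clients    # bootstrap ACL

    def lookup(self, client, acceptable_issuers=None):
        # Access control first: only known clients may search at all,
        # which limits how much of the dossier leaks, and to whom.
        if client not in self._allowed:
            raise PermissionError("client not authorised to search")
        if acceptable_issuers is None:     # degenerate "push" case
            return list(self._certs)
        # The verifier reveals part of its policy (its acceptable
        # issuers) in exchange for a smaller, more relevant result.
        return [c for c in self._certs
                if c["issuer"] in acceptable_issuers]

# Invented example data.
certs = [{"issuer": "alice", "subject": "me"},
         {"issuer": "carol", "subject": "me"}]
store = CertificateStore(certs, allowed_clients={"verifier1"})
print(store.lookup("verifier1", acceptable_issuers={"alice"}))
```

Both sides lose a little privacy here, but each loss is bounded: the
requestor only answers authorised clients, and the verifier only
reveals its issuer policy to a store it chose to query.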

>As for push vs. pull, we assumed all along that the verifier would be more
>highly loaded than the prover.  That's certainly true for CyberCash
>machines.  Therefore, the more work we can give to the prover, the better.

That seems reasonable, but perhaps this assumption fails for
one-to-many (multicast/broadcast) transmission.  In this scenario
it may be that the transmission node has to compute a large number
of certificate paths and push a large vector of these on the forward
channel, or else it (or some agent working for it) may need to be
prepared to field a lot of certificate requests from receivers. If the
audience is large and dynamic then this requires a lot of server capacity
on the transmission (prover) side.  You get similar problems with
conferencing and/or mailing lists, too.  Here the many (the receivers)
collectively have far more processing power than the one, which may be
something you can leverage. It's also worth bearing in mind
that in such scenarios the application's connectivity is
effectively (or actually) unidirectional from prover to verifier (or
from encryptor to decryptor depending on what the public keys
are being used for). Or the bandwidth may be much larger
in the forward link than in the back link (e.g. with IP over satellite).
There probably isn't a one-size-fits-all answer to these problems,
but I think it's worth keeping these scenarios in mind.
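To make the forward-channel cost concrete, here is a back-of-envelope
model (entirely invented for illustration) of what a prover must push
when there is no usable back channel:

```python
def forward_channel_load(receiver_policies, path_for):
    """Toy model: on a unidirectional forward channel the prover
    cannot wait for certificate requests, so it must push, up front,
    a certificate path for every distinct verifier policy in the
    audience.  The pushed set grows with policy *diversity*, not raw
    audience size: a million receivers sharing three policies cost
    the prover no more than three receivers would."""
    pushed = set()
    for policy in set(receiver_policies):
        pushed.update(path_for(policy))
    return pushed

# Invented policies and per-policy certificate paths.
paths = {"pgp-policy": ["c1", "c2"], "x509-policy": ["c2", "c3"]}
audience = ["pgp-policy"] * 1000 + ["x509-policy"] * 500
print(forward_channel_load(audience, lambda p: paths[p]))
```

If the audience's policies are instead large and dynamic, this set is
recomputed as the audience changes, which is where the server-capacity
problem above comes from.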

Cheers,
Frank O'Dwyer