
Re: Some comments on draft-ietf-spki-cert-theory-02.txt


At 09:23 PM 3/12/98 -0000, Frank O'Dwyer wrote:
>Some comments on the latest cert theory draft:
>>7.1 Key and certificate storage
>>The common practice which has evolved is that of the requester
>>supplying any and all certificates which the verifier needs in order
>>to permit the requested action.  In this model, there may be no need
>>for certificate servers or if there are servers, it is likely that
>>they will be accessed by the requester (possibly under access
>>control) rather than the verifier.
>This works in a lot of cases, but I'm not sure that it works in general.  
>In many scenarios it is difficult for a requester to know a priori all of 
>the certificates which will be acceptable and/or useful to a verifier.  
>In general this would require that each verifier divulge information
>about which entities it regards as authoritative for various assertions,
>or else this would have to be somehow obvious to the requester.
>For example in a PGP-style model, the requester would require intimate
>knowledge of the trust settings on the verifiers key ring, which is not 
>normally public information and might even be regarded as confidential. 
>Even if the verifier were prepared to offer this information to the 
>requester, in unidirectional networks (e.g. broadcast) there may not 
>be the connectivity for the requester to receive it. Moreover 
>in broadcast or multicast networks, there may simply be too many 
>verifiers for transmission of the necessary (possibly large) set of 
>certificates to be feasible or efficient.  In such networks the set of 
>verifiers that are tuned in may not even be known to the requester, 
>which may make the task of determining appropriate certificates 
>impossible.  Lastly where bandwidth is a concern, 'pushing' certificates
>may be wasteful, which in turn raises fairly nasty issues of certificate 
>caching and distributed cache consistency (something that would
>have to be re-invented for each higher layer protocol).
>In general it seems more robust for the verifier to make the determination 
>of which certificates it requires, with the requester perhaps supplying 
>some certificates as hints for the verifier in the hope that they will be 
>sufficient (for example to facilitate verifiers without connectivity
>to certificate servers).

The whole issue of finding the right certificates and getting them to the 
verifier is wide open.  The certificate discovery process might prove 
complex enough in some cases (e.g., give me the shortest path of PGP key 
signatures between me and foo@bla.com) that a machine could do that as a 
service.  OTOH, a collection of certificates about you is a dossier on you, 
so there are privacy issues here.
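The "shortest path of key signatures" query above is essentially a shortest-path search over the graph whose edges are key signatures. A minimal sketch, assuming a hypothetical adjacency-map representation of the web of trust (a real service would walk actual keyrings via a PGP library):

```python
from collections import deque

def shortest_signature_path(signatures, start, target):
    """Breadth-first search for the shortest chain of key signatures
    linking two keys.  `signatures` maps each key ID to the set of
    key IDs that key has signed (an illustrative representation,
    not any real keyserver's data model)."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in signatures.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no chain of signatures connects the two keys

# Hypothetical web of trust: me -> alice -> foo@bla.com
web = {"me": {"alice", "bob"}, "alice": {"foo@bla.com"}, "bob": set()}
print(shortest_signature_path(web, "me", "foo@bla.com"))
# -> ['me', 'alice', 'foo@bla.com']
```

Since BFS explores by path length, the first path that reaches the target is guaranteed to be a shortest one. Note that running this as a third-party service means the service sees exactly the dossier of relationships mentioned above.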

As for push vs. pull, we assumed all along that the verifier would be more 
highly loaded than the prover.  That's certainly true for CyberCash 
machines.  Therefore, the more work we can give to the prover, the better.

SET tried to save bandwidth by sending "thumbs" -- certificate hashes -- 
from verifier to prover, so that the prover could optionally send fewer 
certificates.

I have yet to see that in operation or to do a performance analysis, but my 
gut feel is that it wouldn't win.  I do know that a certificate-result 
certificate sent from the verifier back to the prover, provided it lives long 
enough, beats the thumbs model.  Hashes are small, but a verifier could have
a lot of certificates.  Meanwhile, the list of certificate thumbs a verifier
happens to hold is an intelligence leak.
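The thumbs mechanism amounts to simple set subtraction: the verifier advertises hashes of the certificates it already holds, and the prover transmits only the ones not covered. A sketch under those assumptions (the function names and use of SHA-1 here are illustrative, not SET's actual wire format):

```python
import hashlib

def thumb(cert_bytes):
    # A "thumb" is just a hash of the certificate's encoding.
    return hashlib.sha1(cert_bytes).hexdigest()

def certs_to_send(prover_certs, verifier_thumbs):
    """Return only the certificates whose thumbs the verifier did
    not advertise -- the bandwidth saving the thumbs model is after."""
    held = set(verifier_thumbs)
    return [c for c in prover_certs if thumb(c) not in held]

certs = [b"cert-A", b"cert-B", b"cert-C"]
already = [thumb(b"cert-A"), thumb(b"cert-C")]
print(certs_to_send(certs, already))  # only cert-B needs to travel
```

This also makes the two costs above concrete: the verifier's advertisement grows with every certificate it holds, and that advertisement itself reveals which certificates the verifier has seen.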

>>7.2 Protection of Private Keys
>>   For any public key cryptosystem to work, it is essential that a
>>   keyholder keep its private key to itself.  
>Although the above seems like a natural assumption, there is 
>at least one important instance where it may not be true. A
>private key might be shared with a trusted intermediary such
>as a firewall, forgoing end-to-end security in order to facilitate 
>content inspection (e.g. virus scanning) or access control.  This 
>could also be done by sharing session keys, but in some cases 
>it's simpler just to share the private keys used for key agreement 
>(e.g. as done in the SKIP protocol). Yes, I know it sounds like an 
>abomination, but many organisations are uncomfortable with 
>allowing end-end encrypted data (e.g. SSL) through their firewalls.

I don't buy loaning a private key to anyone, under any circumstances, and 
don't believe anyone else should buy into such behavior.

If you have a corporate policy that the firewall needs to scan content, then 
you need to include the firewall's own key as a crypto-recipient.  This 
gives the firewall only one private key rather than a few thousand.  It's 
almost no extra work for the user.  If that's the corporate policy and a 
message tries to go past without a firewall key, then you dump it on the floor.
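That policy reduces to a single recipient-list check at the firewall: pass the message only if the firewall's own key is among its crypto-recipients, otherwise drop it. A minimal sketch, where the key identifier and the message's dict representation are stand-ins for parsing real recipient-info structures:

```python
FIREWALL_KEY_ID = "fw-key-01"  # hypothetical identifier for the firewall's key

def enforce_policy(message):
    """Pass a message only if it was also encrypted to the firewall's
    key, so the firewall can decrypt and scan the content.  `message`
    is a dict with a 'recipients' list of key IDs (illustrative)."""
    if FIREWALL_KEY_ID not in message["recipients"]:
        return None  # dump it on the floor, per the policy
    return message   # firewall can decrypt, scan, and forward

assert enforce_policy({"recipients": ["alice-key"]}) is None
assert enforce_policy({"recipients": ["alice-key", FIREWALL_KEY_ID]})
```

The point of the design is that the sender adds one extra crypto-recipient per message, while the firewall holds exactly one private key instead of copies of every user's.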


 - Carl



|Carl M. Ellison  cme@cybercash.com   http://www.clark.net/pub/cme |
|CyberCash, Inc.                      http://www.cybercash.com/    |
|207 Grindall Street  PGP 08FF BA05 599B 49D2  23C6 6FFD 36BA D342 |
|Baltimore MD 21230-4103  T:(410) 727-4288  F:(410)727-4293        |