Re: public key algorithm naming

Carl Ellison <cme@cybercash.com> writes:
> 1.	hash algorithm negotiation in SSL and maybe other protocols; and 
> 2.	strange signing mechanisms (which might show up as hashing or packing 
> algorithm names in public key structure definitions).
> I recognize (1) but I am not very sympathetic.  If we're going to allow 
> negotiation of hash algorithm, why not of public key algorithm?
I agree that we should. But why not allow them to be negotiated
independently when possible? (It's not possible for DSA, but it
is for RSA).
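
To make the point concrete, here's a minimal sketch of what independent negotiation might look like; the names and the negotiation function are hypothetical, not any actual protocol. DSA as specified is bound to SHA-1, while a PKCS#1 RSA signature carries a DigestInfo and so works with any digest, which is why the two choices can be made independently for RSA:

```python
# Hypothetical sketch of negotiating public-key and hash algorithms
# independently rather than as fixed (pubkey, hash) packages.
# All names here are illustrative assumptions, not a real protocol.

def compatible(pubkey, hash_alg):
    """DSA is tied to SHA-1; RSA (PKCS#1) accepts any digest."""
    if pubkey == "DSA":
        return hash_alg == "SHA-1"
    return True

def negotiate(client_pubkeys, client_hashes, server_pubkeys, server_hashes):
    """Pick the first mutually supported, compatible combination."""
    for pk in client_pubkeys:
        if pk not in server_pubkeys:
            continue
        for h in client_hashes:
            if h in server_hashes and compatible(pk, h):
                return (pk, h)
    return None

print(negotiate(["RSA"], ["MD5", "SHA-1"], ["RSA", "DSA"], ["SHA-1"]))
# -> ('RSA', 'SHA-1')
```

With packages you'd need one identifier per combination; negotiated independently, each side just advertises two short lists.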

>  I suspect 
> the reason we see hash algorithm negotiation is because we have a bunch and 
> we can't tell how good they are, so some protocol designer prefers to leave 
> the binding to the last minute, complicating the protocol.
That's one reason. Another is that the algorithms are subtly
different: even if both were perfect, MD5 would still be faster than
SHA-1, but less secure due to its shorter digest.

> It doesn't 
> bother me a great deal to do negotiation of whole packages (e.g., in the PGP 
> case, you have public key algorithm, hash algorithm and symmetric algorithm 
> flexibility).  If you can negotiate whole packages, then you can bind those
> together into key structure definitions.
You can, but you wouldn't want to. Binding algorithm choice to
keys makes for unpleasant restrictions on protocols. 

> Let me modify my statement here.  Part of me likes the flexibility and 
> negotiation of everything could be just great.  It could maximize the number 
> of people you connect to.  However, it could also lead to the annoyance in 
> SSL of making a "secure" connection that got negotiated down to 40-bit 
> symmetric crypto.
The way to stop this is with user policy, not by certificate
restrictions. Taking your suggestion to its logical conclusion,
a server would have to have between 5 and 10 different certificates
just to do SSL, in order to accommodate all the different
symmetric algorithms. Doesn't this strike you as problematic?

> Your (2) suggests that a single key with a single certificate might be used 
> in some unusual protocol that just came up at the spur of the moment.
That's not what I meant to suggest. What I meant was that new protocols
have a tendency to design new signing primitives based on current
algorithms and that SPKI will be forced into designing identifiers
for each of these primitives.

I'd observe that TLS uses a different (though similar) client auth
signature from SSLv3, which in turn uses a totally different one from
SSLv2. As a client, am I going to need 3 different certificates to
accommodate each possible server?

>  It might even be a 
> security weakness to use one key too many places.  I don't know of any, but 
> there is the general rule of thumb that the more keys you use, the better.
I don't know of any weaknesses, and I don't consider this rule of thumb
to be particularly applicable to asymmetric algorithms. Especially
when you can generate as many RSA plaintext/ciphertext pairs as you 
wish without possession of the private key.
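
To illustrate that last point with textbook (and deliberately insecure, toy-sized) RSA: the public key alone lets anyone mint valid plaintext/ciphertext pairs by encrypting chosen plaintexts, so such pairs give an attacker nothing they couldn't compute themselves:

```python
# Toy textbook RSA with a classic small key (p=61, q=53): the public
# pair (n, e) is all that's needed to generate plaintext/ciphertext
# pairs; the private exponent d is shown only to verify they decrypt.
n, e = 3233, 17
d = 2753  # private exponent, satisfies e*d == 1 (mod lcm stuff); held only by the key owner

pairs = [(m, pow(m, e, n)) for m in (2, 3, 42)]
print(pairs)

# Sanity check: each ciphertext decrypts back to its plaintext.
assert all(pow(c, d, n) == m for m, c in pairs)
```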

> Even if I were to use the same key for multiple purposes, each purpose would 
> need a new certificate.  SPKI, at least, doesn't generate one-size-fits-all 
> certificates.  When you generate that new certificate, you can attach the 
> appropriate verification algorithm name.
Aah, but what's a purpose? I'd tend to say that 'SSL client auth'
is a purpose, but as I've just observed, this actually refers to
several algorithms.

It seems to me that you're trading off a lot of suffering for the
users and the protocol designers in order to save SPKI implementors
a very modest amount of work. I've worked on systems that had
digest algorithm flexibility for signatures (X.509 and PKCS-7
come to mind) and while those systems were no picnic to implement,
dealing with the different digest algorithms was only a very
modest headache.


[Eric Rescorla                             Terisa Systems, Inc.]
		"Put it in the top slot."
