Re: fast re-keying

Hugo,

>The use of MD5 for deriving the keys is fine with me (as usual this should
>be replaceable by any other "pseudorandom" function, e.g. DES-MAC-CBC).

Yeah, but is the additional complexity worth it? Seems to me that
since both the hash output and (part of) the input are secret, the
strength of the hash here is not as important as it might be
elsewhere. Remember also that we're trying to come up with an interoperable
baseline, which means making some choices and sticking with them.
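
To make the discussion concrete, here's a minimal sketch of the kind of
derivation I mean, assuming the shared DH secret and the two exchanged
nonces as inputs (the input ordering and the counter used to stretch the
128-bit MD5 output are illustrative, not the actual Photuris layout):

    import hashlib

    def derive_key(shared_secret: bytes, nonce_i: bytes, nonce_r: bytes,
                   nbytes: int) -> bytes:
        # Iterate MD5 over the secret DH value plus the two nonces
        # (here, the randomly chosen SAIDs), with a counter byte to
        # extend the output past one 16-byte digest.
        out = b""
        counter = 0
        while len(out) < nbytes:
            h = hashlib.md5()
            h.update(shared_secret)
            h.update(nonce_i)
            h.update(nonce_r)
            h.update(bytes([counter]))
            out += h.digest()
            counter += 1
        return out[:nbytes]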

>Since you are going through an interactive process to refresh the key (as
>you explain below) then there is no reason to impose this special role to
>the SAID (aka SPI these days). We better have no assumption on the SAID
>(not even that it does not repeat through the life of a DH key; this should be
>completely free for the implementation to decide).

>The right way that I see to implement the fast re-key is to exchange
>nonces between the parties and use them as arguments to MD5 for the derivation
>of the keys.  These nonces ensure the freshness and authenticity of the
>exchanged key. The protocol remains simple and very cheap (as explained below).

In effect, I am using randomly-chosen SAIDs as nonces.  Random SAIDs
are a good idea for other reasons. For one, they make it harder for an
"off path" attacker to guess what SAIDs might be valid at a particular
destination.

Second, and perhaps of more immediate importance, randomly chosen
SAIDs are an effective way to deal with the collisions that would
otherwise occur when one system in a security association crashes,
reboots and attempts to reuse a SAID that the other side still thinks
is valid. I quickly ran into this in my prototype and decided that
randomizing the SAIDs was a more general and elegant solution to the
problem than coming up with an ad-hoc scheme to handle collisions.
After all, although I opposed the 32-bit SAID as being unnecessarily
large, now that we have it I might as well take advantage of it.
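
A minimal sketch of the allocation, assuming a locally tracked set of
in-use SAIDs (the names are illustrative):

    import os

    def allocate_said(in_use: set) -> int:
        # Draw 32 random bits until we find a SAID not already in use
        # locally.  Randomness makes valid SAIDs hard for an off-path
        # attacker to guess, and makes a collision with a stale SAID
        # held by the peer after a crash/reboot vanishingly unlikely.
        while True:
            said = int.from_bytes(os.urandom(4), "big")
            if said not in in_use:
                in_use.add(said)
                return said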

I can explain all this in more detail after I arrive in Danvers
tomorrow night.

>Not only does it give the freshness referred to above, it also saves a lot of
>bandwidth. Just sending the g^x,g^y as dummy values as you suggest seems
>to me unjustifiably wasteful (1000 bits each and if you're sending also

If I were really worried about bandwidth in the protocol (which I'm
not), I'd start elsewhere.  E.g., the cookies don't really need to be
128 bits each; I used that size only because it's the size of MD5's
output. I could probably knock them down to 64 bits each, saving 128
bits in every Photuris packet except the original cookie request,
where it would save 64 bits.
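
A sketch of what a truncated cookie would look like, assuming the cookie
is an MD5 hash over a local secret and the peer's address (the exact
inputs are illustrative, not the actual Photuris computation):

    import hashlib

    def make_cookie(local_secret: bytes, peer_addr: bytes,
                    nbytes: int = 8) -> bytes:
        # MD5 gives 16 bytes; keep only the first nbytes.  With
        # nbytes=8 the cookie drops from 128 to 64 bits.  Any mix of
        # a local secret and the peer's identity that the peer can't
        # predict would do.
        digest = hashlib.md5(local_secret + peer_addr).digest()
        return digest[:nbytes]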

Besides, sending the DH exponentials every time is a simple way to let
the other end see if they've changed without having to create yet another
ad-hoc mechanism to do so.
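
On the receiving side the check reduces to a comparison against a cached
value (a sketch, with illustrative names):

    def check_exponential(cache: dict, peer: str, received_gx: int) -> bool:
        # Because g^x rides along in every exchange, noticing that the
        # peer has rekeyed is a plain comparison against what we last
        # saw -- no separate "my key changed" message type is needed.
        if cache.get(peer) != received_gx:
            cache[peer] = received_gx
            return True   # recompute the shared secret from the new g^x
        return False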

>to me unjustifiably wasteful (1000 bits each and if you're sending also
>g and p it will come to 3000 bits that are essentially unused).

I'm not sending g and p anymore. I have a table of predefined g's and
p's that currently includes exactly one pair: a 1024-bit p that is a
strong prime, and a g that is a primitive root for that particular
p. It's my understanding that you can't do any better than this at
making discrete logs hard for a field of this size, and generating
equivalently good moduli takes a half hour of CPU time, so why bother
with unnecessary generality?  (I will have a facility to index a few
other predefined g/p pairs of different sizes for those with a
different performance/security tradeoff point.)
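
Structurally, the table looks something like this sketch (the toy values
p=23, g=5 -- 5 is a primitive root mod 23 -- stand in for the real
1024-bit strong prime and its primitive root, just so the example runs):

    # Only the small group index ever goes on the wire, never g and p.
    GROUPS = {
        1: {"p": 23, "g": 5},
    }

    def dh_public(group_id: int, private_x: int) -> int:
        # g^x mod p via square-and-multiply (Python's 3-argument pow).
        grp = GROUPS[group_id]
        return pow(grp["g"], private_x, grp["p"])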

>The need for cookies here is not clear to me. In particular, fast re-key
>does not need to require cookies since only fast to compute and verify
>functions are used. But again, I see this as a secondary detail.

Every Photuris function that involves a modular exponentiation (i.e.,
most of them) requires a cookie to guard against off-path swamping attacks.
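
A sketch of the ordering that matters here, assuming the same
truncated-MD5 cookie as above (the message fields are illustrative):

    import hashlib

    def handle_exchange(cookie: bytes, peer_public: int, my_private: int,
                        p: int, local_secret: bytes, peer_addr: bytes):
        # Cheap check first: recompute the cookie we would have issued
        # to this address and compare.  An off-path attacker can't see
        # our cookies, so forged requests die here without costing us
        # a modular exponentiation.
        expected = hashlib.md5(local_secret + peer_addr).digest()[:8]
        if cookie != expected:
            return None                        # drop silently, no state kept
        return pow(peer_public, my_private, p)  # the expensive step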

To summarize my design philosophy for Photuris: a small, general-purpose
message set is more interesting to me than one that is overly
optimized, with lots of ad-hoc messages to save bits on the wire,
because bits on the wire are just not that important here. Photuris
packets will remain relatively rare compared with data packets even
when rekeying is frequent, so they're not worth a lot of optimization.

What effort I can expend on optimization I'd rather invest in speeding up
the modular exponentiation and symmetric encryption functions and in
implementing per-packet data compression.

Phil