
Simplifying IKE (was RE: Reliable delete notifies)



It has been my observation that just about everything we have done for the
sake of some type of optimization (memory, latency, throughput) has made
IKE a more complex and ambiguous protocol.

Examples: aggressive mode, dangling phase 2s, 3 message quick mode, 2 byte
CPIs, arcane attribute encoding schemes (which only save a byte or two).

Why isn't there a message counter in the ISAKMP header (i.e. MM3)? I know
it's redundant, but it would sure make the state machine easier on lossy
networks.
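To make the idea concrete, here is a rough sketch of the standard RFC 2408 header layout plus a hypothetical trailing counter byte. The extra field's name, size, and placement are my own invention for illustration, not anything any draft specifies:

```python
import struct

# RFC 2408 ISAKMP header: initiator cookie (8), responder cookie (8),
# next payload (1), version (1), exchange type (1), flags (1),
# message ID (4), length (4) -- 28 bytes total.
ISAKMP_HDR = "!8s8sBBBBII"

def pack_isakmp_header(icookie, rcookie, next_payload, version,
                       exchange_type, flags, message_id, length):
    """Pack a standard ISAKMP header (RFC 2408 layout)."""
    return struct.pack(ISAKMP_HDR, icookie, rcookie, next_payload,
                       version, exchange_type, flags, message_id, length)

# Hypothetical variant: append an explicit per-exchange message counter
# (e.g. 3 for MM3) so a receiver on a lossy network can tell which
# message of the exchange arrived without consulting its state machine.
ISAKMP_HDR_CTR = ISAKMP_HDR + "B"

def pack_with_counter(*hdr_fields, counter):
    """Pack the same header with the illustrative counter byte appended."""
    return struct.pack(ISAKMP_HDR_CTR, *hdr_fields, counter)
```

The counter is redundant with the state machine, as noted above; the point is only that one byte buys an unambiguous position marker.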

Forget aggressive mode as an optimization and think only of the security
properties it gives you: no identity protection (so you can use preshared
keys) and DoS resistance (trading one attack for another, actually). Base
mode would cause a lot fewer interoperability problems than aggressive mode
because it works just like main mode except without the KE. Or we could go a
step further, as Bill has suggested, and tweak main mode. (However, I think
that keeping Base Mode as a separate exchange mode is actually less
complex.)

I'm not sure that merging the big 3 documents will actually make IKE easier
to understand. ISAKMP is fine as it is, as far as I'm concerned. Merging IKE
and DOI seems more sensible, although I'm worried that they will end up
being shorter; IKE is too terse already. It should be more restrictive...
recommend a payload ordering, restrict vendor IDs to MM1 and MM2 only, put
some kind of limitation on rekeying, use a counter for the message ID.

I doubt this is a popular belief, but I would like to see a fourth document,
something that was hinted at in the Schneier/Ferguson analysis of IKE: an
explanation of what properties the IKE protocol is meant to have -- a
derivation of the formulas, e.g. SKEYID derivation, key refresh, cookie
calculation, what PFS will accomplish, what "uniqueness" of the message id
means. ;-)

I don't understand the argument about extra algorithms causing
interoperability problems. Public key encryption has never caused an
interoperability problem for us because we simply don't support it. I can
understand how some people might want a non-repudiation property in their
authentication algorithm, and I think the protocol shouldn't limit that, but
those aren't the kinds of people who buy our products.

And I don't understand this whole "derive a pre-shared key from a
self-signed certificate" thing. The whole point of using certificates (IMHO
only, I have discovered) is that they have extra properties that preshared
keys do not have (e.g. the fact that they can be publicly distributed
without revealing the key). If you use the hash of a self-signed certificate
as a preshared key (in which case you cannot distribute the certificate
publicly) then you inherit the worst of both worlds.

TimeStep has supported certificates for 6 years or so, and for intra-domain
communication they are great. But believe it or not, I have seen situations
where preshared keys are much easier to deploy than certificates. One
example is inter-domain communication where the two gateways have different
trusted roots (and restrictive policy rules). Yes, I know you *can* solve
this problem using certificates (by cross-certifying the gateways or by
creating a new domain exclusively for the connection), but I hardly think
that is easier than using a single preshared key.

At the heart of this discussion, I think, is an effort to legislate the way
in which IPsec is used. The IETF overlords can't directly impose their whims
on customers, so instead they place that burden on us, the implementors.
This smacks of the whole IPSRA problem. I happen to prefer protocols that
don't come with a political agenda.

Andrew
--------------------------------------
Beauty without truth is insubstantial.
Truth without beauty is unbearable.


