
RE: Choosing between IKEv2 and JFK



I think we need to get out of the habit of thinking about certificates when
at the protocol level we really mean public keys. Although certificates may
be the mechanism delivering the key, there may well be another mechanism in
use, in particular XKMS or DNS-SEC.

We should also get away from thinking that the IPSEC stack is necessarily
the place where the trust decision is going to be made. Trust relationships
are relationships between enterprises and people; they are never
relationships between devices. It is unfortunate that the 'Alice sends Bob
an email' scenario became the certificate paradigm. The end-to-end
principle, properly understood in the network context, is about where you
put complexity: put complexity where it can be managed. In the security
context 'end to end' is the antithesis of link-by-link encryption. If,
however, the trust relationship is between enterprises and the network
communication is between devices, it may be the case that the 'ends' of the
trust relationship are not the same as the 'ends' of the network
communication.

I am, however, somewhat skeptical about the need for, or utility of,
introducing separate QoS levels at the packet layer. An application that
really cares about QoS should be doing the trust path processing itself
rather than pushing it down to the O/S level. If you care about different
QoS levels you almost certainly care about non-repudiation at some point,
and that is not a problem that has a pretty solution at the packet level.

It does not seem likely to me that anyone will implement IKEv2 or JFK
without providing support for AES (particularly if we make it mandatory). I
don't see any reason why someone should be anxious to use anything less
strong than AES, and if they are coerced into using something weaker, the
issue of encryption level will not arise.


So I do not believe that QoS etc. is going to give rise to a performance
issue in practice in IPSEC.



Phillip Hallam-Baker FBCS C.Eng.
Principal Scientist
VeriSign Inc.
pbaker@verisign.com
781 245 6996 x227


> -----Original Message-----
> From: Angelos D. Keromytis [mailto:angelos@cs.columbia.edu]
> Sent: Thursday, March 07, 2002 1:15 PM
> To: Jan Vilhuber
> Cc: Eric Rescorla; ipsec@lists.tislabs.com
> Subject: Re: Choosing between IKEv2 and JFK 
> 
> 
> 
> Jan,
> Let me point out that, in the test scenario you are 
> describing, different
> certificates would be used for the different QoS levels, even 
> though it
> is the same two peers (hosts) establishing multiple SAs. I ran into
> the exact same situation in a different context: per-user (or 
> per-socket)
> keying using distinct SAs for each TCP connection. Since 
> certificates are
> exchanged only during Phase 1 (in both IKE and IKEv2), you 
> end up running
> complete Phase 1/Phase 2 exchanges for each such connection.
> 
> As for dealing with the cost of cert-chain verification when 
> re-establishing an
> expiring SA: on any crypto protocol where this is a 
> consideration (and I'm not
> very convinced it is -- I just saw a crypto card that does a 
> few thousand RSA
> verifications/sec), one can simply cache the result of a 
> verification. Here's a
> simple scheme: hash the contents of the JFK ID payload, keep the 
> hash for slightly longer than
> the lifetime of the SA that was established by that session, 
> and check it the next
> time an ID is received; you also have to keep track of how 
> long this is
> going to be valid (wrt CRLs or OCSP status results), but you 
> have to make the
> same decision when establishing a Phase 1 SA in IKE/IKEv2.
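[A minimal sketch of the verification cache described above, for illustration
only: the class and method names and the use of SHA-256 as the lookup key are
assumptions on my part, and a real implementation would live inside the
key-management daemon.]

```python
import hashlib
import time


class CertVerificationCache:
    """Cache of peer identities whose cert chains have already been
    verified, keyed by a hash of the JFK ID payload."""

    def __init__(self):
        # digest -> expiry timestamp (seconds since the epoch)
        self._cache = {}

    def _key(self, id_payload: bytes) -> str:
        return hashlib.sha256(id_payload).hexdigest()

    def remember(self, id_payload: bytes, sa_lifetime: float,
                 status_valid_for: float, slack: float = 60.0) -> None:
        # Keep the result slightly longer than the SA lifetime, but
        # never past the point where the CRL/OCSP status result that
        # backed the verification stops being valid.
        ttl = min(sa_lifetime + slack, status_valid_for)
        self._cache[self._key(id_payload)] = time.time() + ttl

    def is_verified(self, id_payload: bytes) -> bool:
        # On a hit, the expensive cert-chain verification is skipped;
        # on a miss or an expired entry, the caller must verify again.
        expiry = self._cache.get(self._key(id_payload))
        if expiry is None:
            return False
        if time.time() > expiry:
            del self._cache[self._key(id_payload)]
            return False
        return True
```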
> 
> Cheers,
> -Angelos
> 
> In message 
> <Pine.LNX.4.33.0203061749030.9043-100000@janpc-home.cisco.com>, Jan 
> Vilhuber writes:
>  >
>  >I'm not sure I would say that (I do think I see lots of need to have
>  >multiple Sa's between peers), I can offer some other arguments:
>  >
>  >A) Consider an aggregator
>  >b) Consider that you really do need one IPsec SA per QoS level
>  >b') Have a look at IP Storage which, I'm told, calls for one SA per
>  >     flow (??)
>  >
>  >(There are others, most (or all?) of which are, of course, debatable
>  >as to whether they are relevant or sane.)
>  >
>  >Now the aggregator is likely to have quite a large number of SA's
>  >created to it from peers. Having to redo all 
> cert-chain-validation and
>  >public key operations for each SA seems prohibitive 
> (reusing the DH is
>  >really only one part of the computational complexity of a phase 1;
>  >especially if the exchange only supports RSA authentication).
>  >
>  >How many SA's do we expect to be created between peers/per 
> Qos-level?
>  >I don't know. Video and Voice are quickly becoming more 
> prevalent, so
>  >this needs to definitely be considered. Also, be sure to 
> multiply the
>  >number of SA's between peers by the number of peers you can have
>  >(which is likely to be large for an aggregator).
>  >
>  >
>  >On a personal note, I prefer the 2-phase approach because it does
>  >offer a way to amortize the cost of both the ephemeral DH and the
>  >authentication (which, if it's certs, can be quite substantial) over
>  >multiple phase 2's, and I believe that there WILL be more 
> than a pair
>  >of SA's between hosts. It's not much of an issue if you think purely
>  >of end-to-end encryption, but if there's any kind of 
> aggregator in the
>  >picture, that aggregator is quickly going to be brought to its
>  >knees. Unless you can guarantee that we will NEVER(*) need a fair
>  >number of SA's between two peers (which I don't believe for a
>  >second), I'd rather have the tiny bit of added complexity of 2
>  >phases.
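[The amortization argument above can be put in back-of-envelope form. The
cost figures below are made-up illustrative units, not measurements of any
implementation.]

```python
def handshake_cost(n_sas: int, phase1_cost: float, phase2_cost: float,
                   two_phase: bool) -> float:
    """Rough cost model: a two-phase protocol pays the expensive
    Phase 1 (ephemeral DH plus cert-chain validation) once and
    amortizes it over n cheaper Phase 2 exchanges; a one-phase
    protocol pays the full price for every SA."""
    if two_phase:
        return phase1_cost + n_sas * phase2_cost
    return n_sas * phase1_cost


# Illustrative numbers only: Phase 1 at 100 cost units, Phase 2 at 5.
# With 10 SAs per peer pair, two phases cost 150 units versus 1000,
# and the gap widens with the number of peers an aggregator serves.
```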
>  >
>  >In summary: Cookie crumbs have calories, too.
>  >
>  >jan
>  >
>  >(*) where NEVER is defined as being the lifetime of son-of-ike, and
>  >I'd hope that it doesn't become obsolete the day we standardize it
>  >because our assumptions were wrong.
> 
