
Re: Last Call: Combined DES-CBC, HMAC and Replay Prevention Security Transform to Proposed Standard




> >> I assume the IV needs to be a multiple of 64 bits (to ensure that the
> >> data starts on a 64-bit boundary for IPv6)? If so, the draft should
> >> state this explicitly (it doesn't seem to).
> 
> >Interesting point. I assumed that the IV was a full DES block or 64
> >bits.
> 
> Oh, right. The IPv6 alignment requirement falls out naturally, but
> isn't the driving motivation.  What is a "full DES block"? Wouldn't it
> be 64 bits as well?

Yes.


> >> >Appendix A
> >> >
> >> >   This is a routine that implements a 32 packet window. This is
> >> >   intended on being an implementation sample.
> >> 
> >> This code in fact assumes that the replay counter always starts at
> >> 0. It should state this, since this assumption is not made elsewhere
> >> in the draft.

From the text, the replay counter starts at:

	RP_key_I is the initial value and wrap point for the replay
	prevention field for traffic from the initiator -> responder.

The Appendix A code is relative to 0, not the actual starting value;
therefore, before calling the routine in Appendix A, RP_key_I must be
subtracted (unsigned) from the count of a given received packet.
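A minimal sketch of that normalization step, assuming the Appendix A
routine takes a 0-based count. The names RP_key_I and chkReplayWindow()
are illustrative, not taken from the draft's actual code:

	#include <stdint.h>

	/* Appendix A style window check, expects a 0-based count (assumed name). */
	extern int chkReplayWindow(uint32_t relative_count);

	int check_received_count(uint32_t received_count, uint32_t RP_key_I)
	{
	    /* Unsigned subtraction wraps modulo 2^32, so counts that pass the
	     * wrap point still map onto increasing 0-based offsets. */
	    uint32_t relative_count = received_count - RP_key_I;

	    return chkReplayWindow(relative_count);
	}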

> On another point that others have brought up, I don't see the value of
> having a *negotiated* replay window. I think it's reasonable for the
> receiver to have one, but for the sender to know what its size is
> implies that the sender would (for some reason) find it useful to send
> (slightly) out of sequence packets and derive some sort of benefit. I
> don't have a clear sense of why a sender would do that and what
> benefit it would derive.

If you are sending bridged packets, the sender may want to force the
receiver to have a window size of 0, so that it does not accept any out-of-order
packets. It was added after a vote at a meeting. I did not propose it.

>> >      1. (Optional step) Decrypt the first block of data using the
>> >      appropriate DES_key_ and IV_key_ (or IV) and then do a quick
>> >      "sanity check" of the count.

Let's now take a -fresh- look at the optimization and forget what the draft
says or what the previous email said.

If someone wanted to perform a SYN-type (clogging) attack on an IPsec
device, they could take any open SPI and send in packets with random data
substituted for the encrypted data. To detect such a bogus packet, the
decrypting device would have to run DES over the entire packet and MD5 over
the whole packet as well. If the network the device is attached to is faster
than the packet-processing (DES/MD5) engine, then the queues will fill,
packets will be tossed, and denial of service may result.
 
Since the count is encrypted (and assuming the attacker does not have the
DES key and therefore cannot create known non-duplicate counts), someone
trying to clog can be detected by decrypting only the first block and
seeing -if- the decrypted count is reasonably valid (since the attacker
cannot control what the data looks like after the decryption step).
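Here is a sketch of that check under two assumptions of my own: that a
des_decrypt_block() routine is available, and that the replay count sits in
the first 32 bits of the first block in network byte order. REPLAY_WINDOW
and REPLAY_LIMIT_X are illustrative values, not the draft's definitions:

	#include <stdint.h>
	#include <string.h>
	#include <arpa/inet.h>

	#define REPLAY_WINDOW   32u      /* packets accepted behind the highest count */
	#define REPLAY_LIMIT_X  65536u   /* "X": how far ahead a count may jump */

	/* Single-block DES-CBC decryption, assumed to exist in the implementation. */
	extern void des_decrypt_block(const uint8_t in[8], uint8_t out[8],
	                              const uint8_t key[8], const uint8_t iv[8]);

	int count_is_plausible(const uint8_t first_block[8], const uint8_t des_key[8],
	                       const uint8_t iv[8], uint32_t highest_seen)
	{
	    uint8_t  plain[8];
	    uint32_t count, ahead, behind;

	    /* Only one DES block of work before deciding whether to go on. */
	    des_decrypt_block(first_block, plain, des_key, iv);

	    memcpy(&count, plain, sizeof(count));
	    count = ntohl(count);

	    /* A forged packet decrypts to an effectively random count, so
	     * requiring the count to fall near the highest count seen so far
	     * rejects almost all bogus packets cheaply. */
	    ahead  = count - highest_seen;   /* wraps if the count is behind */
	    behind = highest_seen - count;   /* wraps if the count is ahead  */

	    return ahead < REPLAY_LIMIT_X || behind <= REPLAY_WINDOW;
	}

The full DES/MD5 verification still has to run before the packet is
trusted; this check only decides whether it is worth spending that effort.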

Next, if we 

1) Assume that the packets are always decrypted and processed by the same
decryption device (so that changes in topology do not create different
decryption locations) and that all valid decryptions are performed by this
single device (more succinctly, all valid received counts are processed by
a single device).

2) Assume this is not multicast with shared keys. (If it is multicast with
shared keys, then I can see some reason to allow the traffic to go away and
come back, but this is not really intended to be a shared-key multicast
protocol?)

3) Assume that complete connectivity loss (topology changes that are fatal
to traffic) for extended periods of time (greater than X packets) is
tantamount to source or destination failure.

Then

> Under what conditions would it be correct to reject a packet with a
> higher sequence number than seen so far?

When the count has moved up by X or more packets beyond the highest count
seen so far.

My initial assumption was that an X of 64K packets was enough. An X of 16M
packets would be OK too. An X of 4 billion would be illogical; it is beyond
the key lifetime.
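For illustration, an acceptance rule combining a 32-packet bitmask window
(in the spirit of Appendix A) with the forward limit X might look like the
sketch below. The variable names, the bitmask representation, and the
particular value of X are my own choices, not the draft's:

	#include <stdint.h>

	#define REPLAY_WINDOW   32u
	#define REPLAY_LIMIT_X  65536u     /* 64K packets; 16M would be OK too */

	static uint32_t highest_seen = 0;  /* highest 0-based count accepted so far */
	static uint32_t window_bits  = 0;  /* bit i set => (highest_seen - i) was seen */

	int accept_count(uint32_t count)   /* count already made 0-based */
	{
	    uint32_t ahead, behind;

	    if (count > highest_seen) {
	        ahead = count - highest_seen;
	        if (ahead >= REPLAY_LIMIT_X)
	            return 0;              /* moved up by X or more: reject */
	        /* Slide the window forward and mark this count as seen. */
	        window_bits  = (ahead < REPLAY_WINDOW) ? (window_bits << ahead) | 1u : 1u;
	        highest_seen = count;
	        return 1;
	    }

	    behind = highest_seen - count;
	    if (behind >= REPLAY_WINDOW)
	        return 0;                  /* older than the window: reject */
	    if (window_bits & (1u << behind))
	        return 0;                  /* duplicate: already seen */
	    window_bits |= (1u << behind); /* out of order but inside the window */
	    return 1;
	}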

This is my logic on this. I have no problem with removing the section.
Comments?

jim




