Re: SPD Cache in 2401bis
At 9:27 PM +0200 3/8/04, Tero Kivinen wrote:
>Stephen Kent writes:
>> In 2401 we did not do an adequate job of describing how to handle
>> some cases, e.g., named SPD entries and PFP entries. Even for simple
>> SPD entries the notion of going back to the SPD to lookup each
>> outbound packet is clearly something that scales poorly for bigger
>> SPDs and/or high speeds. (Of course, all of this is only a concern for
>> BITS/BITW/SG implementations. Native host implementations have an
>> intrinsic form of caching anyway.)
>
>I think most implementations do, and will continue to do, the
>processing differently from what is described in RFC 2401, regardless
>of how the processing is described there. The method used tries to be
>identical from the outside view, but may offer different extensions
>and optimizations depending on the scenario the specific IPsec
>implementation is aimed at.
That is completely consistent with what I said later re the purpose
of the model.
>A security gateway supporting 10,000 VPN clients will have completely
>different optimizations from an SGW aimed at connecting two branch
>offices with a single tunnel.
Sure.
>I do not think RFC 2401 needs to define a scalable and efficient
>procedure; it needs to describe the processing as clearly as possible.
>We could then have another document (or an appendix) describing the
>kinds of optimizations that can be used to make it scalable and
>optimal for different uses.
>
>> The processing model for IPsec is not a prescription for
>> implementation. It gives details of one way to implement IPsec. The
>> intent is that a compliant implementation should behave in an
>> identical manner, as viewed by the IPsec peer and by the user/admin,
>> no matter how it is implemented locally. We need at least one model
>> to provide a reference, and it is preferable if the model is as
>> simple as possible, to make it easy to describe and to understand.
>
>Yes. It needs to be as simple as possible; it does not necessarily
>need to be efficient or scalable. If it happens to be both, even
>better, but no extra complexity should be added to the model to make
>it more scalable or efficient.
I think we are in agreement. My feeling is that the cache-based model
meets those criteria, i.e., it is simpler to explain in detail, and
happens to scale well.
Steve
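
[Editorial sketch, not part of the original thread.] The cache-based
model discussed above can be illustrated as follows: each outbound
packet's selectors are first looked up in a cache, and only on a miss
is the ordered SPD searched linearly, with the result cached for later
packets of the same flow. This is a minimal sketch with a simplified
selector (addresses only, ignoring ports, protocol, and the named-SPD
and PFP complications the thread mentions); all names (SpdEntry, Spd,
lookup) are hypothetical, not from RFC 2401.

```python
# Hypothetical sketch of a per-flow SPD cache in front of an ordered SPD.
# A cache miss triggers an O(n) walk of the ordered SPD (first match
# wins); the matching entry is then cached under the packet's selector
# tuple so later packets with the same selectors skip the walk.

from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass
class SpdEntry:
    remote_net: str   # destination prefix this rule covers (simplified selector)
    action: str       # e.g. "PROTECT", "BYPASS", "DISCARD"

class Spd:
    def __init__(self, entries):
        self.entries = entries   # ordered SPD: first matching entry wins
        self.cache = {}          # selector tuple -> SpdEntry (the cache)

    def lookup(self, src, dst):
        key = (src, dst)         # simplified selectors: addresses only
        hit = self.cache.get(key)
        if hit is not None:
            return hit           # fast path: cached decision
        for entry in self.entries:          # slow path: ordered SPD walk
            if ip_address(dst) in ip_network(entry.remote_net):
                self.cache[key] = entry     # remember the decision
                return entry
        raise LookupError("no SPD match; default is DISCARD")

spd = Spd([
    SpdEntry("10.1.0.0/16", "PROTECT"),
    SpdEntry("0.0.0.0/0", "BYPASS"),
])
print(spd.lookup("192.0.2.1", "10.1.2.3").action)  # PROTECT (via SPD walk)
print(spd.lookup("192.0.2.1", "10.1.2.3").action)  # PROTECT (via cache hit)
```

Note that PFP-style entries, where selector values are populated from
the triggering packet, and named SPD entries are exactly the cases the
thread flags as hard to capture in such a simple cache.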