
Re: Decoupling compression and security



> When using packet-by-packet compression within IPSec, while an SA parameter
> will exist to define the compression algorithm and whether or not
> compression is enabled for a particular sender-to-receiver pair, the sender
> (as you suggest) will not want to send data when compression expands it. We
> have suggested that each compressed payload include a bit (within a
> one-byte field) which indicates whether or not the IP datagram is
> compressed. So, in
> effect, the sender operates as follows:
>          
>          compress the packet
>          if it gets smaller
>             compressed = true
>             send it compressed
>          else
>             compressed = false
>             send the packet in uncompressed form
> 
> On the receiver side, it just checks the compressed/uncompressed bit to
> decide whether or not to attempt decompression.
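(As a minimal sketch of that sender logic in C, with the hypothetical
compress() standing in for whatever algorithm the SA negotiates:

	#include <stddef.h>
	#include <stdint.h>
	#include <string.h>

	/* Hypothetical compressor: writes at most out_max bytes and
	 * returns the compressed length, or 0 if it cannot compress. */
	size_t compress(const uint8_t *in, size_t in_len,
	                uint8_t *out, size_t out_max);

	/* Build the payload: a one-byte compressed/uncompressed flag
	 * followed by the (possibly compressed) datagram. */
	size_t build_payload(const uint8_t *pkt, size_t pkt_len,
	                     uint8_t *payload, size_t payload_max)
	{
	    size_t clen = compress(pkt, pkt_len,
	                           payload + 1, payload_max - 1);

	    if (clen > 0 && clen < pkt_len) {
	        payload[0] = 1;                 /* compressed = true  */
	        return 1 + clen;
	    }
	    payload[0] = 0;                     /* compressed = false */
	    memcpy(payload + 1, pkt, pkt_len);  /* send as-is         */
	    return 1 + pkt_len;
	}

The receiver inspects payload[0] and attempts decompression only when
it is set.)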

If the compression were another IPSEC transformation, there would be no
need for such a "bit" in the actual _security_ transformation
data. Has everybody forgotten that applying transformations to a
packet is still optional? If I want the option of not applying a packet
compression transformation, I can just not apply it -- no need for any
information bits in the packet. As an example,

suppose there are two transformations defined, COMP for compression and
CRYPT for some crypto. If I get an acceptable compression ratio for a
packet, I leave the compression transformation on and pass the packet to
the rest of the transformation chain, finally giving a packet like

	CRYPT(COMP(packet))

OTOH, if the compression ratio is unacceptable, just leave the
compression layer off:

	CRYPT(packet)

Because the receiver sees only the SPIs of the various layers, it will
first decrypt and either get a regular (not compressed) packet, or a
compressed packet which then has to be decompressed to recover the
final packet.
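
(A sketch of the sender side under this model, assuming hypothetical
comp_transform() and crypt_transform() helpers that each write their own
SPI-carrying header plus the transformed payload:

	#include <stddef.h>
	#include <stdint.h>

	#define MAXPKT 65535

	/* Hypothetical transformations: each writes its header plus
	 * transformed payload into `out` and returns the new length;
	 * comp_transform() returns 0 when compression is not useful. */
	size_t comp_transform(const uint8_t *in, size_t len, uint8_t *out);
	size_t crypt_transform(const uint8_t *in, size_t len, uint8_t *out);

	size_t ipsec_output(const uint8_t *pkt, size_t len, uint8_t *out)
	{
	    uint8_t tmp[MAXPKT];
	    size_t clen = comp_transform(pkt, len, tmp);

	    if (clen > 0 && clen < len)                 /* worthwhile */
	        return crypt_transform(tmp, clen, out); /* CRYPT(COMP(packet)) */

	    return crypt_transform(pkt, len, out);      /* CRYPT(packet) */
	}

The receiver needs no special-case code at all; it just keeps peeling
off layers according to the SPI it finds in each header.)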

Of course, this would mean the overhead of an SPI versus a "bit", but I
think the generality is a clear plus. What if you later find out that the
cryptographic transformation you have embedded the compression layer in
is not secure? Also, there is no clear way to interoperate between
different transformations which share the same compression engine (you
must remember that each transformation should be self-contained from
the coding point of view).

Also, some people might want compression, and only that. I imagine
there would be plenty of situations (two computers connected by a
secure network/line) where getting "compression" out of the IPSEC layer
could not be justified, because the compression would then imply the
extra cost of (cpu-intensive) cryptographic methods.

Of course, it is another matter whether compression should be included
in the IPSEC layer at all.



PS. In the IP Authentication Header draft (4 June 1996, are there
newer versions?), chapter 3, the combination AH(AH(...)) is said
to be invalid at all times. Why? Take the following situation as an
example where AH(AH(...)) could be used:

A VPN, e.g. an intranet created by establishing a secure tunnel
between two separate networks. Typically, the VPN routers apply some
IPSEC AH+ESP transformation to a packet before sending it through the
internet. But there can be cases where a VPN router does not want to
encrypt a packet, yet still requires authentication.

If there are IPSEC-capable hosts within these two parts of the
intranet talking to each other with some AH+ESP transformations, the
VPN routers will get packets in the form AH(ESP(...)). If the VPN
routers have knowledge about the ESP in use (by acting as middlemen in
the key-management routine, for example) and deem it "secure enough",
they might not want to apply a second layer of ESP, because it would
be unnecessary.

Still, they want to apply an AH, so that the other VPN router can
confirm the authenticity of a packet coming from the internet (that
it really is coming from the other VPN router). This leads to a
situation where the packet would look like AH(AH(ESP(...))) when sent
to the internet. But this seems not to be allowed by the draft!
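
(A sketch of that outbound decision at the VPN router; all the helper
names here are hypothetical, in particular inner_esp_trusted(), which
stands for whatever knowledge the router gains from its key-management
middleman role:

	#include <stdbool.h>

	struct pkt;   /* opaque packet handle */

	/* Hypothetical helpers for this sketch. */
	bool outer_ah_present(const struct pkt *p);  /* host already applied AH */
	bool inner_esp_trusted(const struct pkt *p); /* known via key management */
	void apply_esp(struct pkt *p);
	void apply_ah(struct pkt *p);

	/* Outbound processing at the VPN router. A packet from an
	 * IPSEC-capable host arrives as AH(ESP(...)); if the inner ESP
	 * is deemed strong enough, the router adds only its own AH,
	 * yielding the AH(AH(ESP(...))) nesting in question. */
	void vpn_outbound(struct pkt *p)
	{
	    if (!(outer_ah_present(p) && inner_esp_trusted(p)))
	        apply_esp(p);  /* otherwise encrypt again as usual */

	    apply_ah(p);       /* always authenticate router-to-router */
	}

Either way, the outermost layer is the router's own AH, which the peer
router verifies on receipt.)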

--
sjpaavol@cc.Helsinki.FI          I have become death, destroyer of the worlds.


