
Re: propagation control

> From: Derek Atkins <warlord@mit.edu>
> Actually, when we were designing the PGP 3.0 data formats, we came up
> with an encoding that is linear on both encode and decode, modulo some
> fixed-size buffer (default is 4KByte blocks).  The encoding adds an
> overhead of approximately 1 byte/block.  The encoder needs to be able
> to buffer up to the block size; the decoder doesn't need to buffer at
> all.

OK, I misspoke myself :-).  BER indefinite-length string encodings can
also be segmented, as an unlimited series of definite-length segments
terminated by an end-of-contents token.  The overhead is more than one
byte per block, though - the tag/length info is 2-4 bytes per segment
for variable-length blocks of up to 64KB.
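To make the overhead concrete, here's a sketch (my own illustration, not
the PGP 3.0 format) of a BER constructed, indefinite-length OCTET STRING
emitted as definite-length segments with an end-of-contents terminator.
The encoder only ever buffers one block:

```python
def ber_len(n):
    """Encode a BER definite length: short form below 128, long form above."""
    if n < 0x80:
        return bytes([n])
    body = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return bytes([0x80 | len(body)]) + body

def encode_segmented(data, block=4096):
    # 0x24 = constructed OCTET STRING, 0x80 = indefinite length
    out = bytearray(b"\x24\x80")
    for i in range(0, len(data), block):
        seg = data[i:i + block]
        # each segment is a primitive OCTET STRING (0x04) with a definite length;
        # the tag/length header is 2 bytes for segments < 128 bytes, up to
        # 4 bytes for segments up to 64KB
        out += b"\x04" + ber_len(len(seg)) + seg
    out += b"\x00\x00"   # end-of-contents token
    return bytes(out)
```

With a 4KB block size, each full segment costs 4 bytes of tag/length
header, which is the "more than one byte per block" overhead mentioned
above.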

When using segmented encodings, it's implicit that the coded value of a
segment cannot be affected by any data that comes later.  This is a
reasonable restriction for some applications; it might not be
reasonable for others, such as convolutional error-correcting codes and
data compression - there, sliding-window schemes are needed to keep
memory requirements bounded.
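That segment-independence property is what lets a decoder stream without
buffering: each segment can be handed upward as soon as its header is
read.  A sketch of such a decoder (again my own illustration, matching
the segmented form above):

```python
import io

def decode_segmented(stream):
    """Yield payload chunks from a file-like object positioned at a
    constructed, indefinite-length OCTET STRING.  Memory use is bounded
    by the size of one segment."""
    assert stream.read(2) == b"\x24\x80"   # constructed, indefinite length
    while True:
        tag = stream.read(1)[0]
        if tag == 0x00:                    # end-of-contents (0x00 0x00)
            assert stream.read(1) == b"\x00"
            return
        assert tag == 0x04                 # primitive OCTET STRING segment
        first = stream.read(1)[0]
        if first < 0x80:
            n = first                      # short-form length
        else:
            n = int.from_bytes(stream.read(first & 0x7F), "big")
        yield stream.read(n)
```

Since each segment's length is known before its contents arrive, the
decoder never needs to look ahead.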

DER assumes that an entire certificate will fit in a single block - with
that interpretation, both encoding and decoding are "linear", provided you
don't count random accesses within the buffer as being "non-linear".
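For contrast, the DER counterpart is a single definite-length,
primitive encoding, which requires knowing the total length up front -
hence the whole value sitting in one buffer (sketch, my own
illustration):

```python
def encode_der_octets(data):
    """DER-encode a primitive OCTET STRING: one tag, one definite length,
    then the entire contents - no segmentation allowed."""
    n = len(data)
    if n < 0x80:
        length = bytes([n])                # short-form length
    else:
        body = n.to_bytes((n.bit_length() + 7) // 8, "big")
        length = bytes([0x80 | len(body)]) + body
    return b"\x04" + length + data
```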