[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: Summary of key derivation thread



> 
> Ran, I think you're mixing two independent properties: secure MAC
> which
> is proven for HMAC as long as keyed SHA is a secure MAC, and secure
> PRF.
> Basically what the HMAC proof does (as far as I understand) is linking the
> security properties of SHA-1 to chained operations (multi-block).
> 
> Now, the analysis (as far as I know) was done for the MAC case. There
> was no analysis done for PRF, 

There is:
Bellare, M., Canetti, R., and Krawczyk, H.,
"Pseudorandom Functions Revisited: The Cascade Construction",
Proc. of the 37th IEEE Symp. on Foundations of Computer Science,
1996, pp. 514-523.

> and in any case in order to use HMAC as
> a secure PRF you'd need to assume that SHA is a secure PRF. But if it
> is so (i.e. if SHA is a secure PRF) - then you don't need the extras
> that HMAC provides.

This is not correct. For the pseudo-randomness of HMAC it suffices to
assume (following the above paper) that the compression function of SHA1,
when keyed through the IV, is a prf. However, this property is not
sufficient to make other keyed versions of SHA1 secure. In particular,
prepended-key SHA1 is insecure as a prf EVEN IF the compression
function is a perfect random function.
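To make the keying distinction concrete, here is a small sketch (my own
illustration, not part of the original message) of HMAC's nested
construction, which effectively keys the hash through the IV on both the
inner and outer invocations. The key and message values are made up; the
hand-rolled result is checked against Python's hmac module.

```python
import hashlib
import hmac

def hmac_sha1_by_hand(key: bytes, msg: bytes) -> bytes:
    """HMAC-SHA1 written out as H((K^opad) || H((K^ipad) || msg))."""
    block_size = 64  # SHA-1 processes 64-byte blocks
    if len(key) > block_size:
        key = hashlib.sha1(key).digest()
    key = key.ljust(block_size, b"\x00")
    ipad = bytes(b ^ 0x36 for b in key)
    opad = bytes(b ^ 0x5C for b in key)
    inner = hashlib.sha1(ipad + msg).digest()
    return hashlib.sha1(opad + inner).digest()

# Illustrative key/message, not from the thread
key = b"example key"
msg = b"example message"
assert hmac_sha1_by_hand(key, msg) == hmac.new(key, msg, hashlib.sha1).digest()
```

Note the contrast with the prepended-key variant sha1(key || msg): there the
key enters only as ordinary message bytes, so an attacker who sees the digest
also sees the full internal chaining state and can extend the message without
the key.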

> 
> > I don't think we can allow ourselves to lose this analytical basis in a
> > standard that is intended for wide use.
> 
> I don't think that analytical basis is applicable to HMAC as PRF.
> 

It is, through the above cited work, not through the HMAC paper (Crypto'96).
In particular, the analysis for the prf case is not as
straightforward as for the case of MAC.

>  
> > BTW, regarding going over 160 "bits of security". I agree that this is a
> > non-issue from a practical point of view. But for the paranoids who insist
> > on doing it, the best way would be to use a PRF with more security, such as
> > HMAC-SHA2, or any block cipher with long enough keys and block-size.
> 
> It is an issue of convenience and of entropy loss. Regarding the
> latter - it
> doesn't make sense to expensively negotiate a kilobit of keying
> material and
> then reduce its entropy to 160 bits.
> 

It does make sense. The 1 kilobit of keying material does NOT mean that
you have 1 kilobit of security. Indeed, the discrete log problem with a
1024-bit prime is breakable with a mere 2^90 operations (or less).
Using the kilobit of key as-is means that you may be using insecure bits.
For example, in some groups some bits (e.g., the lsb) of the DH key are
predictable. I repeat: smoothing the computational entropy of the DH key
via hashing is a sound practice backed by theoretical results.
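As a sketch of that smoothing step (my illustration, in the spirit of what
later became the extract step of HKDF in RFC 5869, and not a quote from the
thread): the raw DH value, however long and however biased, is fed through a
keyed hash to produce a short, uniform-looking key. The salt and variable
names are assumptions for the example.

```python
import hashlib
import hmac
import secrets

def extract(salt: bytes, dh_secret: bytes) -> bytes:
    # HMAC used as a randomness extractor: the output is a fixed 160 bits
    # for SHA-1, regardless of the length or non-uniformity of dh_secret.
    return hmac.new(salt, dh_secret, hashlib.sha1).digest()

dh_secret = secrets.token_bytes(128)  # stand-in for a 1024-bit DH value
prk = extract(b"session salt", dh_secret)  # illustrative salt
assert len(prk) == 20  # 160 bits out, whatever went in
```

The point of the sketch is exactly Hugo's: the 1024 input bits carry well
under 1024 bits of computational entropy, and hashing concentrates what is
there into a key whose bits are all usable.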

Hugo
