
Re: Bignums




>> When a byte string is treated as a bignum (for crypto use), should the
>> first byte given be considered the most-significant byte of the number,
>> or the least-significant byte?

>Internet protocols are predominantly big-endian.  I think the same
>should be true for bignums...

I disagree, primarily for the same reason Ron gives in his
byte-alignment argument.

If you have an n-octet bignum it is most likely that you will be
processing it using an array of integers aligned on a machine-word
boundary. For example, on my AXP system I would definitely want the
LSB to be aligned on a quadword (64-bit) boundary.
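
For concreteness, here is a minimal C sketch of the layout I have in
mind; le_bytes_to_words is an invented name for illustration, not a
function from any particular library:

    #include <stdint.h>
    #include <stddef.h>

    /* Load an n-octet little-endian bignum into an array of 64-bit
     * words.  Word 0 holds the least significant quadword, so the
     * LSB of the number always lands on a quadword boundary, whatever
     * n is.  The caller supplies a zeroed array of (n + 7) / 8 words.
     * (Invented helper, for illustration only.) */
    void le_bytes_to_words(const uint8_t *bytes, size_t n, uint64_t *words)
    {
        size_t i;
        for (i = 0; i < n; i++)
            words[i / 8] |= (uint64_t)bytes[i] << ((i % 8) * 8);
    }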

This is so regardless of whether you want to interpret the number in
a big- or little-endian manner. In fact, on the AXP the distinction
makes very little sense either way, since the load and store
operations are RISC-style and 64-bit aligned. Just as bit sex has no
real relevance on byte-oriented machines, byte sex has no relevance
on machines with a larger atomic unit.

This argues for little-endian, because you can be sure that the first
digit you get will go in a definite position, whatever the total
length of the number turns out to be.
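
The point can be made with a pair of one-line helpers (again invented
for illustration): with little-endian input the destination of octet i
depends on i alone, while with big-endian input it also depends on the
total length n, which a streaming parser may not know yet:

    #include <stddef.h>

    /* Destination of the i-th octet within a quadword array whose
     * word 0 is least significant. */

    /* Little-endian input: depends only on i, so each octet can be
     * placed as soon as it arrives. */
    static size_t   le_word (size_t i)           { return i / 8; }
    static unsigned le_shift(size_t i)           { return (unsigned)(i % 8) * 8; }

    /* Big-endian input: also depends on the total length n, which may
     * not be known until the whole string has been read. */
    static size_t   be_word (size_t i, size_t n) { return (n - 1 - i) / 8; }
    static unsigned be_shift(size_t i, size_t n) { return (unsigned)((n - 1 - i) % 8) * 8; }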

This logic becomes even more compelling if you are using genuinely
variable-length arithmetic rather than very large but fixed-size
integer arithmetic.

Incidentally, this also avoids a problem Ron pointed out earlier:
there is no longer any need to insist on the number of hexadecimal
digits being even, provided all numbers are given in little-endian
form.
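
A little-endian hex parser shows why. The first digit is the least
significant nibble, so an odd digit count is unambiguous: the missing
top nibble is simply zero. A sketch, again with an invented helper
name:

    #include <stdint.h>
    #include <stddef.h>
    #include <ctype.h>

    /* Parse a little-endian hex string (first digit = least
     * significant nibble) into 64-bit words.  Assumes the string
     * contains only hex digits; 'words' must be zeroed and hold at
     * least (strlen(hex) + 15) / 16 entries.  Invented helper,
     * illustration only. */
    void le_hex_to_words(const char *hex, uint64_t *words)
    {
        size_t i;
        for (i = 0; hex[i] != '\0'; i++) {
            unsigned c = (unsigned char)hex[i];
            uint64_t v = isdigit(c) ? (uint64_t)(c - '0')
                                    : (uint64_t)((c | 0x20) - 'a' + 10);
            words[i / 16] |= v << ((i % 16) * 4);
        }
    }

For example, "1A3" read little-endian is unambiguously 0x3A1 (929);
read big-endian, an odd-length string forces a padding convention,
"01A3" or "1A30".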

Ideally I would like strictly little-endian numbering everywhere :-)
Since that goes against the human-readability point, I propose using
it only for numbers that are not meant to be human readable.



	Phill

