
RE: encoding: SPKI vs. SDSI

> From: "Phillip M. Hallam-Baker" <hallam@ai.mit.edu>
> 
> I suspect that S-expressions represent the best common ground between
> the various parties. There is widespread agreement that ASN.1 will be
> untenable for many hand-held devices.

I'm sure everyone is sick of this topic, and I wasn't going to mention
it, but ...
Philip DesAutels told me that W3C had until recently believed the
conventional wisdom that ASN.1 is too unwieldy to use on lightweight
devices, but that demonstrations of Microsoft's Authenticode technology
on Windows CE (a lightweight version of MS Windows for Net PCs, if that
can be imagined!) were convincing evidence to the contrary.

There was recent discussion here of writing some sample code that would
translate between various proposed SPKI formats and processing-friendly
internal data structures (5-tuples? DSig 6-tuples? something else?).
When that code is available (or a specification of the internal
structures is defined) I would find it interesting to compare it to
code for DER encodings of the same structures, both in a native ASN.1
definition of the SPKI certificate and with SPKI fields contained in
standard X.509 fields and extensions.  The SPKI discussion mentioned
both definite-length and indefinite-length encodings, and full-string
allocation vs. small-structure allocation.  BER, of course, allows both
indefinite- and definite-length encodings (DER is definite-length
only).

My existing X.509v3 decoding package is <1000 lines of C (including
comments), and although I haven't done any benchmarking, I believe it
is both processor- and memory-efficient.  The package accommodates both
indefinite- and definite-length encodings, and it uses the
small-structure allocation model, which I believe to be more efficient
than the full-string allocation model.
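
For concreteness, here is a minimal sketch of the length-handling logic
such a decoder needs.  It is not taken from my package; the function
name and signature are invented for illustration, and it handles only
single-byte tag numbers:

#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

/* Parse one BER tag-length header from buf.  Returns the number of
 * header octets consumed, or -1 on error.  On success *tag holds the
 * identifier octet and *len the content length, with -1 signaling the
 * indefinite form (content then runs to an end-of-contents 0x00 0x00
 * pair).  The indefinite form is legal in BER but forbidden in DER. */
static int ber_read_header(const uint8_t *buf, size_t avail,
                           uint8_t *tag, long *len)
{
    size_t pos = 0;

    if (avail < 2)
        return -1;
    *tag = buf[pos++];

    uint8_t first = buf[pos++];
    if (first < 0x80) {
        /* Short definite form: length is the low seven bits. */
        *len = (long)first;
    } else if (first == 0x80) {
        /* Indefinite form. */
        *len = -1;
    } else {
        /* Long definite form: low seven bits count the length octets
         * that follow, most significant first. */
        int nbytes = first & 0x7f;
        if (nbytes >= (int)sizeof(long) || pos + (size_t)nbytes > avail)
            return -1;
        long l = 0;
        for (int i = 0; i < nbytes; i++)
            l = (l << 8) | buf[pos++];
        *len = l;
    }
    return (int)pos;
}

int main(void)
{
    /* 30 82 01 00 = SEQUENCE, long-form definite length of 256. */
    const uint8_t der[] = { 0x30, 0x82, 0x01, 0x00 };
    uint8_t tag;
    long len;
    int n = ber_read_header(der, sizeof der, &tag, &len);
    printf("consumed %d octets, tag 0x%02x, length %ld\n", n, tag, len);
    return 0;
}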

When SPKI reference code is available, I would be happy to discuss
comparative benchmarks of memory usage and certificates processed per
second (somewhere else, of course, because most people on this list are
probably not interested).  We could definitely offer benchmark results
as input to DSig.


> I can think of many canonicalisations. Are there to be spaces before and
> after parentheses? How are literals to be handled? Etc., etc. ... ad nauseam...

> My understanding of the W3C work was that there was a rough idea of some
> canonicalisation rules, but that these needed some work, in particular for
> someone to draft a spec for them.

That's why I agree with Carl that an inherently canonical format (like
DER or an SPKI-specific binary encoding) is preferable to trying to
sign ASCII.  Since a human editing a certificate (cutting and pasting,
emailing, etc.) can't be counted on to preserve precisely the correct
number of spaces around parentheses, there will have to be a separate
canonicalization pass after editing and before signature generation and
verification.  That canonicalization pass might as well output binary
data instead of ASCII, in which case ASCII canonicalization rules are
not needed.  Particularly with international character sets, signing
the data in the form it is presented to the user sounds like a recipe
for non-canonical-form trouble.
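
To make the point concrete, here is a rough sketch of the kind of
canonical binary output I mean, in the style of Rivest's canonical
S-expressions: every atom becomes a decimal length prefix followed by
its bytes verbatim, so the spacing questions above never arise.  The
function name is invented for illustration, and a real encoder would
also need to handle binary atoms and arbitrary nesting:

#include <stdio.h>

/* Emit one atom as <decimal-length>:<verbatim-bytes>.  The length
 * prefix leaves no whitespace or quoting choices, so each atom has
 * exactly one encoding. */
static void emit_canonical_atom(FILE *out, const char *bytes, size_t len)
{
    fprintf(out, "%zu:", len);
    fwrite(bytes, 1, len, out);
}

int main(void)
{
    /* However a human spaces "(cert (issuer alice))" in ASCII, the
     * canonical form is the single byte string printed below. */
    fputc('(', stdout);
    emit_canonical_atom(stdout, "cert", 4);
    fputc('(', stdout);
    emit_canonical_atom(stdout, "issuer", 6);
    emit_canonical_atom(stdout, "alice", 5);
    fputs("))\n", stdout);
    /* Prints: (4:cert(6:issuer5:alice)) */
    return 0;
}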