
IETF spki meeting minutes



Draft minutes from spki WG, IETF-39 Munich, mon11aug97

Minutes by Stefek Zaba <stefek_zaba@hp.com>; minor edits by Steve Bellovin

-- Policymaker status update

Matt Blaze presented an update on the status of Policymaker, both 
implementation and draft. Draft is "almost ready for circulation";
will be ready for (well before - ed) the Washington IETF. Joan Feigenbaum 
and Martin Strauss have also done some analysis work on the provability of 
the Policymaker interpreter; their paper is due for circulation shortly.

Bad news: the reference implementation is *not* yet available. Playware is 
available on request against a "yes it's broken" disclaimer; send email to 
policymaker@research.att.com. Only porting problems stand in the way - a 
true yacc dependency (bison and cousins won't do). This misfeature is to be 
fixed RSN.

-- WG chair editorialises, comments from the floor:

Bellovin editorialises: comments have been received that the "S" in SPKI no 
longer stands for "simple"; possibly the name should be "SDSI", but not 
clearly. Matt Blaze agrees and suggests "X.510" as a new working name :-)

Temporary adjournment taken while Tatu struggles with the slidemaking 
capabilities of the terminal room.

Timetable? Well, the intention *did* exist to put forward a draft from this 
meeting to become part of the standards process. Carl Ellison's absence and 
Tatu's doubts mean that this has slipped.

Matt Blaze explains one important difference between SPKI and SDSI (in
answer to a comment in the spirit of "this is so complex, isn't it just
SDSI in drag") - SPKI is an authentication scheme, while SDSI was for
identification.

Clarification was requested on the status of blind signatures - the draft 
claims they'll work, but two people, including Markku-Juhani, who is doing 
an implementation from the spec (see later), note that they do not in fact 
work.

Pre-announcement: Matt Blaze and Perry Metzger will be doing a 
conference/working group on PK infrastructure and trust management in 
April 1998; position papers need to be in by Dec 97, likely location 
Boston, USENIX to sponsor; call for papers within 2-3 weeks ("watch for it 
on the net").

-- Tatu speaks on the current draft

Tatu reports major improvements over the previous draft. The canonical 
format is the ONLY one supported for SPKI data: essentially binary, 
length-prefixed. The algorithm for 5-tuple reduction is specified and 
appears in the draft. Star-forms have been added, allowing 
intersections/unions/predicates to be specified. Powerful, but admittedly 
complex (8 pages of very dense spec). A K-of-N cert (a threshold scheme) 
has been added: it allows N subjects to be identified and requires that 
K <= N keys be used to sign messages to the verifier. This increases 
security by reducing the exposure of any single key. It also has the 
following advantage: suppose a major key, e.g. the Verisign private signing 
key, is compromised - the blackmail value is then *large* indeed, whereas 
for a threshold scheme the value of a single key compromise is essentially 
nil (and the practicality of a simultaneous attack is very low). The draft 
also adds some info on on-line checks and other minor tidy-ups.
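
[Sketch added for illustration, not from the meeting: the general shape of 
5-tuple reduction, in C. The field names, the flat-string auth with an 
equality test standing in for the real auth intersection, and the main() 
example are all assumptions; the draft's algorithm is the authoritative 
one.]

  /* Two 5-tuples chain when the first tuple's subject is the second
   * tuple's issuer and the first permits delegation; the result keeps
   * the outer issuer and inner subject and intersects the rest. */
  #include <stdio.h>
  #include <string.h>

  struct five_tuple {
      const char *issuer;      /* key granting the authority           */
      const char *subject;     /* key receiving the authority          */
      int         delegate;    /* may the subject pass it on?          */
      const char *auth;        /* authorisation tag (flat string here) */
      long        not_before;  /* validity window                      */
      long        not_after;
  };

  /* Returns 0 and fills *out on success, -1 if the tuples do not chain. */
  int reduce(const struct five_tuple *a, const struct five_tuple *b,
             struct five_tuple *out)
  {
      if (strcmp(a->subject, b->issuer) != 0 || !a->delegate)
          return -1;               /* no chain, or delegation forbidden */
      if (strcmp(a->auth, b->auth) != 0)
          return -1;               /* stand-in for the real auth intersection */
      out->issuer     = a->issuer;
      out->subject    = b->subject;
      out->delegate   = b->delegate;
      out->auth       = a->auth;
      out->not_before = a->not_before > b->not_before ? a->not_before
                                                      : b->not_before;
      out->not_after  = a->not_after  < b->not_after  ? a->not_after
                                                      : b->not_after;
      return 0;
  }

  int main(void)
  {
      struct five_tuple ca  = { "CA",   "dept",  1, "spend", 100, 500 };
      struct five_tuple dep = { "dept", "alice", 0, "spend", 200, 900 };
      struct five_tuple r;
      if (reduce(&ca, &dep, &r) == 0)
          printf("%s grants %s '%s' from %ld to %ld\n",
                 r.issuer, r.subject, r.auth, r.not_before, r.not_after);
      return 0;
  }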

-- Discussion on current draft

Floor opened for discussion on the draft. One voice says "it's long". 
Other voices agree that "simple" isn't accurate any more; no-one was 
willing to put their hand up to the opinion that it is now a "simple" PKI. 
Doubt was expressed (from Eric) that the "pulling things out" method of 
simplification will work - he suggests that admitting a mistake has been 
made (perhaps rhyming with "Pudsey" :-) and maybe doing a grand reset would 
be the right thing.

Matt Blaze recounted a discussion of a year ago between him and Carl where 
Carl suggested the full power of Policymaker was too general and 
something simpler was needed - but the current accretion of features has 
created something big again. Maybe an SPKI-lite is needed. Markku also 
points out that the full implementation is too big to verify, and that the 
intersection algebra is going to be hard. Tatu notes that the S-expression 
intersection combination is further complicated by star forms - algorithms 
"aren't trivial", *probably* polynomial time. But Carl and RonR note that 
you do not need to compute the intersection: you are always computing 
whether a fully-ground TERM is a *member* of (multiple) predicate specs, 
and you can cascade the tests...
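
[Sketch added for illustration, not from the meeting: the "cascade the 
membership tests" idea, in C. Rather than materialising the intersection of 
the specs along a chain, test the fully-ground term against each spec in 
turn. The predicate representation and the two toy specs are invented for 
the example.]

  #include <stdio.h>
  #include <string.h>

  /* One "spec" = a membership predicate over fully-ground terms. */
  typedef int (*spec_member_fn)(const char *term);

  /* The chain authorises the term iff every spec along it admits the
   * term; the intersection itself is never computed. */
  int authorised(const char *term, spec_member_fn specs[], int nspecs)
  {
      for (int i = 0; i < nspecs; i++)
          if (!specs[i](term))
              return 0;            /* first rejection ends the cascade */
      return 1;
  }

  /* Two toy specs standing in for star-forms. */
  static int any_read(const char *t)  { return strncmp(t, "read ", 5) == 0; }
  static int logs_only(const char *t) { return strstr(t, "/logs/") != NULL; }

  int main(void)
  {
      spec_member_fn chain[] = { any_read, logs_only };
      printf("%d\n", authorised("read /logs/today",  chain, 2));  /* 1 */
      printf("%d\n", authorised("write /logs/today", chain, 2));  /* 0 */
      return 0;
  }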

The floor requests the WG chair to say whether an X.509 or an SDSI cert is 
harder to handle. "On advice of counsel", Bellovin replies "the 
complexities are different". SPKI is simple to parse (but the semantics are 
another matter!). Also, X.509 does not carry the authorisation idea - SPKI 
does. The problems with X.509 are not ASN.1, suggests 
GUY-IN-BLACK-SITTING-NEXT-TO-MATT: rather, the structure is static and 
only uninterpreted random octet-strings can be added.

General consensus that "the path to simplicity has been lost" - WG chair 
suggests taking this to the mailing list. 

-- Implementation status

3 people have done partial implementations - one cannot be talked about in 
any detail. Markku-Juhani Saarinen (from SSH) has an implementation under 
way - halfway through, maybe about 3 weeks to go. The intersection algebra 
is very unclear and is an unpleasant barrier to effective implementation in 
its current form. It's written in C and uses bits of reference code. The 
third implementation is by Tatu: 800 lines of code which parse the binary 
datatypes; it does not include hashing or crypto operations, and the minor 
detail of compiling the code has not yet been seen to.

-- More discussion on draft/current status

Suggestions for simplification are requested - maybe more hands to 
assist Ron, Carl, and Tatu? Markku suggests a "membership 
operator" as one possible direction. Tatu speaks a bit more on the motives 
for introducing the binary-only representation and discusses the format 
(see draft-ietf-spki-cert-structure-02.txt). Explicit lengths + type tags - 
ignore the field if you don't understand the typetag. Some common fields, 
e.g. times issued and expired, extra constraints w.r.t. CRLs/revalidation, 
and a lump of "type-specific" binary data, all followed by a signature.
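
[Sketch added for illustration, not from the meeting: the "ignore the field 
if you don't understand the typetag" loop, in C. The 1-byte tag and 4-byte 
big-endian length used here are assumptions made for the example; the 
actual encoding is defined in draft-ietf-spki-cert-structure-02.txt.]

  #include <stddef.h>
  #include <stdint.h>

  /* Walk a buffer of tagged, length-prefixed fields; handle the tags we
   * know and skip the ones we don't.  Returns 0, or -1 on truncation. */
  int parse_fields(const uint8_t *buf, size_t len)
  {
      size_t pos = 0;
      while (pos + 5 <= len) {              /* 1 tag byte + 4 length bytes */
          uint8_t  tag  = buf[pos];
          uint32_t flen = ((uint32_t)buf[pos + 1] << 24) |
                          ((uint32_t)buf[pos + 2] << 16) |
                          ((uint32_t)buf[pos + 3] <<  8) |
                           (uint32_t)buf[pos + 4];
          pos += 5;
          if (flen > len - pos)
              return -1;                    /* field runs past the buffer */
          switch (tag) {
          /* case TAG_ISSUED: ... case TAG_EXPIRES: ... handle known tags */
          default:
              break;                        /* unknown typetag: ignore it */
          }
          pos += flen;                      /* explicit length => easy skip */
      }
      return 0;
  }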

For the case of certs, the type-specific data is:
 int nsubjects
 PRINCIPAL subjects
 int k
 string doi  /* domain of interpretation */
 string auth
Note that separate documents describe the treatment of doi and auth for 
particular usages.
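
[Illustration only: the type-specific cert data above transcribed as C 
structures. Representing a PRINCIPAL as a fixed-size key hash is an 
assumption; the draft defines the real encoding.]

  #include <stdint.h>

  typedef struct {
      uint8_t key_hash[20];    /* e.g. a hash of the subject key (assumed) */
  } principal_t;

  typedef struct {
      int          nsubjects;  /* N: how many subjects are named           */
      principal_t *subjects;   /* the N subject principals                 */
      int          k;          /* K: how many of them must sign            */
      char        *doi;        /* domain of interpretation                 */
      char        *auth;       /* authorisation, interpreted per the doi   */
  } spki_cert_body;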

More on K-of-N: in the "traditional" case of a cert chain, the full 
rights-granting goes along the chain:

  CAroot -> intermed1 -> intermed2 -> subject

For K-of-N, this "chain" can now fork, and the requirement is that where 
the branches merge at least "k" separate sigs are present. The algorithm 
for checking verification is to go backwards from the subject (depth-first 
search backwards), keeping a bitvector as you search and a total count. 
Note that each branch of the depth-first search should terminate at the 
granter - each separate path which does so causes you to increment the 
total count. Success overall is when the count reaches k; failure when all 
paths have been searched and the total count is still below k. 
[Minute-taker note - Tatu may want to verify the above sketch, but that's 
about what I thought I heard him say :-)]
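
[Sketch added for illustration, not from the meeting: the backwards 
depth-first walk described above, in C. Certs are reduced to bare 
issuer->subject edges, the bitvector is omitted, and there is no cycle 
detection; the names and example data are invented, and - per the note 
above - this follows the sketch given in the room, not necessarily the 
draft's exact algorithm.]

  #include <stdio.h>

  struct cert_edge {
      int issuer;   /* key index of the signer     */
      int subject;  /* key index being certified   */
  };

  /* Count the branches, searched backwards from `node`, that terminate
   * at the granter. */
  static int count_paths(int node, int granter,
                         const struct cert_edge *certs, int ncerts)
  {
      int count = 0;
      if (node == granter)
          return 1;                       /* this branch reaches the granter */
      for (int i = 0; i < ncerts; i++)
          if (certs[i].subject == node)   /* follow every cert naming `node` */
              count += count_paths(certs[i].issuer, granter, certs, ncerts);
      return count;
  }

  static int k_of_n_verified(int subject, int granter, int k,
                             const struct cert_edge *certs, int ncerts)
  {
      return count_paths(subject, granter, certs, ncerts) >= k;
  }

  int main(void)
  {
      /* Granter 0 names subjects 1, 2, 3; subjects 1 and 2 each certify
       * subject 4, giving two independent branches back to the granter. */
      struct cert_edge certs[] = { {0,1}, {0,2}, {0,3}, {1,4}, {2,4} };
      printf("2-of-3 for subject 4: %s\n",
             k_of_n_verified(4, 0, 2, certs, 5) ? "verified" : "rejected");
      return 0;
  }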

The floor asks "what use is this?" Tatu answers with the example of a 
corporate policy requiring 2-of-3 signatures: a PC requires the 
departmental policy key, that requires 2-of-3 of the corporate keys, and 
the corporate key requires (say) 8-of-10 from "the" global roots. This 
removes a single point of compromise. Eric continues to probe the "why" - 
he suggests the current solution is both too simple and too complex. E.g., 
sharing schemes that say "must have 2 of people X/Y/Z and at least 5 of the 
other 7" - he suggests this kind of secret-sharing should be put into the 
*application policy* rather than wired into the infrastructure, and that it 
is not yet clear that the algorithm as described correctly implements the 
intent of the spec.

Observation from Ted Ts'o - one can do the more complex example suggested 
by Eric by AND'ing the 2 primitives.  CA liability risks are also 
reduced by secret-sharing, as are the costs of protecting each individual 
secret key.
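
[Sketch added for illustration, not from the meeting: Eric's "2 of X/Y/Z 
and at least 5 of the other 7" example expressed as two plain threshold 
checks AND'ed together, in C. The signer/group representation is invented 
for the example.]

  #include <stdio.h>

  /* How many members of `group` (size n) appear among the signers? */
  static int signatures_from(const int *signers, int nsigners,
                             const int *group, int n)
  {
      int hits = 0;
      for (int i = 0; i < n; i++)
          for (int j = 0; j < nsigners; j++)
              if (group[i] == signers[j]) { hits++; break; }
      return hits;
  }

  int main(void)
  {
      int officers[3] = {1, 2, 3};                 /* "X, Y, Z"           */
      int others[7]   = {4, 5, 6, 7, 8, 9, 10};    /* "the other 7"       */
      int signers[]   = {1, 3, 4, 5, 6, 7, 8};     /* who actually signed */
      int ok = signatures_from(signers, 7, officers, 3) >= 2 &&  /* 2-of-3 */
               signatures_from(signers, 7, others,   7) >= 5;    /* 5-of-7 */
      printf("policy %s\n", ok ? "satisfied" : "not satisfied");
      return 0;
  }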

The floor expresses a worry that this is a slippery slope: K-of-N may lead 
to weighted-K-of-N and then to "all other" possible auth-policies. The 
chair suggests that the *purpose* of the mechanism is unclear - as well as 
(a) do we need K-of-N, we need to consider (b) should the base draft 
specify how K-of-N is to be implemented. Tatu argues that wide utility may 
be a reason to put this into the base draft. John Schey reports challenges 
in meeting audit requirements for e.g. threshold schemes all the way down 
to a final money-issuing bank, from the Safekeeper-running big bank 
association. The WG chair says "let's move this to the mailing list".

Tatu and others suggest that complications in the spec (possible 
candidates for elimination from the base spec) include:
 - the star-forms (or at least the set-algebra operations therein)
 - K-of-N
 - anything which goes beyond "does auth" and "is simpler than X.509"
 - SDSI names
 - S-expressions :-)
of which the Chair deftly bounced reopening the S-expression religious
war and the "anything beyond" suggestions.

-- Adjournment to the corridor/staircase/bar
at about 16:50, following the traditional exhortations to sign the
blue sheet...

