
Re: comments on client auth

Brian M. Thomas wrote:
> > The SPKI notion of a SIMPLE infrastructure is already established
> > in the "v1" model of X.509, where there is just a key
> > and a name, plus some minimal management data to
> > facilitate lifecycle management where keys go into
> > systems, and get pulled out again, under the user's or
> > issuing organization's control (dates, version#).
> > If you don't want a name, don't put one in. See, it's easy!
> This is the most important issue to me as an implementor, of course:  can
> the tool I'm evaluating do the job that I want?  I am not particularly
> enamored of one encoding or another, I just want what I choose to do what
> I need done.  I welcome more discussion of that, but in the sense that the
> requirements are ostensibly the same, I'd like to hear it in the context
> of the PKIX discussion, to which I also subscribe.

I'd perhaps advise you to differentiate between PKIX and X.509. PKIX is a
tailored "profile" of X.509, based on its authors' notions of what a
"well-managed" security infrastructure and philosophy of control should be.

Many disagree that this is "all" the IETF should be involved in. I'd like to
see SPKI use X.509 (common libraries, common syntax benefits) but invent a
wholly different "profile" of use, suitable for its requirements on a "simpler"
control environment. As I've said before elsewhere, I've already
implemented 95% of the SDSI notions using the X.509 format and
the implied SDSI management/administration philosophy; the other
5% was just too hard to program in the time I had to play...

I've now participated in several profile designs of v3 certs, each
of which could not be more different in their rigour, requirements,
and costs of operation. I see no reason why the IETF should not
support multiple control philosophies, from SPKI to PKIX,
and a new one in between next year. Common interfaces, modules, formats,
etc., but radically different ideas for infrastructure security
management, perhaps.

> The problem is that the discussions I'm hearing there are so minutely
> concerned with the finest details of usage of different fields that I am
> completely lost.  It's clear that I am well out of my depth here; but
> so would many potential implementors be, many of whom are smarter than
> I, and that bothers me a lot, because it means that the group will
> not reach the IETF goal of making a clear, implementable standard of
> interoperability.

I agree. I participate mainly in order to learn what the hell
it all means. Through dialogue, I've learned a lot about the
nuances of the control objectives, and what implementors must do. Much of it
is know-how, which makes the group a bit
of an exclusive in-crowd. It's not intended to be, I do
believe, but is, nonetheless.

There may be a change coming; instead of being a get-it-right-first-time
forum, where military experts educate, it may soon be
more of a multi-proposal forum where real options are
selected. In this, the SPKI initiative has done good political
service, IMHO. The danger here is IETF incompetence
at handling the vested interests in the IETF machinery, which always
kill all actual consensus from the security forums. I don't know how to
solve this one, except perhaps to keep SPKI and PKIX separate
but "on the same track". X.509 syntax is my uniform-gauge train track, but
there are express businessmen's trains, commuter trains, and
perhaps cattle trains, each representing one of the
PKIX, SDSI/SPKI, and N=next control paradigms.

> I hate quoting at length, so I'll summarize: my earlier post mentioned my
> problem with implicit trust.  Your reply, with which I agree, said that
> some level of trust is necessary, but that it can be based upon judgement
> of acceptable levels of risk.  My point is this:  as a human, I am capable
> of that kind of judgement, and I authorize myself to take those risks, but
> my programming skills are not such as could give a program I write that
> much wisdom.  I, or the user of my program, must be able to express that
> judgement to my program in a way that it can understand and execute it
> appropriately.

This boils down to a single (human) action that allows
a machine proxy to enact the pre-assigned judgement through delegation: namely,
the admin configures in the trust point whom s/he accepts as
serving the interests of the admin's users, on account
of the admin having verified the issuing policy and
the controls of the domain, which reasonably ensure that
all parties/proxies will act in the way specified despite autonomous
actions; some "trusted" control system (enacted in the
CPS) will enforce the delegation assumptions, with,
perhaps, but not always, the support of trusted software.

(sorry if this is a bit abstract)

This is human judgement, followed by a decision on the appropriate
"delegation of authority", represented by a choice to shove a trusted
key into the key-domain's key configuration file, or not. Practical
and realistic, surely. Accountability passes to the human, who
delegates the authority to decide to the proxy.
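
The one-time human act of "shoving a trusted key into the configuration file", with the proxy thereafter deciding autonomously, might be sketched as follows (a minimal illustration in Python; the class, fingerprint format, and key values are all hypothetical, not any real toolkit's API):

```python
# Sketch: the human's judgement is recorded once by adding a key
# fingerprint to a local trust-point configuration; the machine proxy
# thereafter accepts or rejects purely on that configuration.
import hashlib

def key_fingerprint(public_key_bytes: bytes) -> str:
    """Stable identifier for a trusted key (hypothetical format)."""
    return hashlib.sha1(public_key_bytes).hexdigest()

class TrustPointConfig:
    def __init__(self):
        self._trusted = set()   # fingerprints the admin has approved

    def approve(self, public_key_bytes: bytes) -> None:
        # The human act of delegation: put the key into the config.
        self._trusted.add(key_fingerprint(public_key_bytes))

    def is_trusted(self, public_key_bytes: bytes) -> bool:
        # The proxy's autonomous decision, enacting the prior judgement.
        return key_fingerprint(public_key_bytes) in self._trusted

cfg = TrustPointConfig()
cfg.approve(b"some-root-ca-public-key")          # admin's judgement, once
print(cfg.is_trusted(b"some-root-ca-public-key"))  # proxy decision: True
print(cfg.is_trusted(b"unknown-root-key"))         # proxy decision: False
```

Accountability stays with the human who called `approve`; everything the proxy does afterwards is mechanical.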

> An example should be illustrative:  I trust VeriSign's level 3 CA
> service, knowing their level of notarial assurance of the identity of a
> keyholder.  Therefore when I get mail from someone with a certificate
> signed by their commercial CA, I feel confident of the identity of that
> person, and if something in that identity maps to something that I can
> get a legal hold on, such as a SSN or, in my business, a phone number,
> I can feel safe offering that person access to information concerning
> their phone service that I keep in my systems.

You are making a decision to rely on a cert! This is
why CAs have to be so careful, as they are therefore liable!

I suspect most non-statutory CAs will not say "here is Fred's identity"; they
will assert that they have validated the claimed id presented
by Fred, using mechanisms X, Y, and Z. You will then make
a judgement that X, Y, and Z are sufficient for your authentication
requirements, as proxied by the cert. You make a decision
in each case, or else in general by configuring a trust point
into an automated decision-taker, accepting the delegation
assumptions through that action.
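
The relying party's judgement that mechanisms X, Y, and Z are sufficient could be encoded roughly like this (a sketch only; the mechanism names and the idea of a set-containment check are my illustrative assumptions, not anything a real CA publishes in this form):

```python
# Sketch: the CA does not assert "this IS Fred"; it asserts which
# mechanisms it used to validate Fred's claimed id. The relying party
# judges whether those meet its own authentication requirements.
REQUIRED_MECHANISMS = {"photo-id-check", "address-verification"}  # hypothetical

def cert_acceptable(ca_asserted_mechanisms: set) -> bool:
    # Sufficiency = the CA's asserted mechanisms cover our requirements.
    return REQUIRED_MECHANISMS <= ca_asserted_mechanisms

print(cert_acceptable({"photo-id-check", "address-verification", "ssn-check"}))  # True
print(cert_acceptable({"email-callback"}))                                       # False
```

Configuring `REQUIRED_MECHANISMS` is the human judgement; calling `cert_acceptable` is the automated decision-taker exercising it.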
> The question is:  how do I express that confidence to my application server?
> If I merely install your CA's name and key as a trusted root, or (more
> appropriately) have my internal CA certify it, I tell my application only
> that I am willing to agree that the signer of a request has a particular
> name. 

This is the Entrust model for X.509. It makes no difference whether you or
your trusted CA "certify" the public root. Obviously the more
local the trust decision, the easier it is for you, perhaps;
but this is a choice. Many residential users desire
ADMD organizations of telematic services, versus
managing many PRMD interconnectivity graphs themselves,
which they cannot really handle. A mix of local choice
to configure trust in ADMDs is a nice compromise; nice to
see PGP has gone down that line; I've stopped
criticising PGP's inane previous design now! I even
stuffed a PGP key in my directory entry; not
sure what it means yet, though.

> I have not told it that I trust that person to access anything in
> that application.  The traditional answer to that problem is an application-
> specific access control list, because we have authenticated but not authorized
> the request.  My reaction to the email I got above would be to install that
> user's name in my application's ACL.

Your decision to approve (not authorise; I disagree about tradition) the system
to _validate_ a cert chain, through a configuration of ADMD trust
points, is distinct from the next phase: what rights
am I willing to assign to the authenticated endpoint?

Yes, there is user involvement in _approving_ the acceptable
domains of issuing authority. Then there is the choice
of: so now what do we let Fred do? In many designs, the
mere fact that Fred is known to be Fred may carry
implicit authorizations; or it may not! It is implicit when
the approval of the trust point carries with it not
only acceptance of policy, but the understanding that
existential membership of that domain is sufficient
to infer privilege X. This is particularly useful for
private certs - the analogue of private membership cards.
Not only are you known to be member 14, but you automatically
get past the bouncer at the nightclub merely
by showing a valid card. Being member # < 100 may also (by the barman's
ACL) allow you free drinks; who knows. The barman
relies on the bouncer, and only checks the number, not
the card's expiry date, for example.
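
The nightclub analogy can be made concrete (a toy sketch; the issuer name, card fields, and the "<100" rule are the hypothetical values from the analogy, nothing more):

```python
# Sketch: the bouncer checks only that the card comes from the club's
# trust point (existential membership => entry, an implicit
# authorization). The barman layers his own ACL rule on top (member
# number < 100 => free drinks) without re-checking the expiry date.
CLUB_TRUSTED_ISSUER = "club-master-key"   # hypothetical trust point

def bouncer_admits(card: dict) -> bool:
    # Implicit authorization: valid membership alone grants entry.
    return card.get("issuer") == CLUB_TRUSTED_ISSUER

def barman_gives_free_drinks(card: dict) -> bool:
    # The barman relies on the bouncer's check; his ACL consults only
    # the member number, not the expiry date.
    return card.get("member_number", 10**6) < 100

card = {"issuer": "club-master-key", "member_number": 14, "expired": True}
print(bouncer_admits(card))            # True
print(barman_gives_free_drinks(card))  # True - despite expiry; nobody rechecks
```

Note how the privilege inference lives entirely in the relying parties' rules, not in the card itself.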

By common acceptance, public certs may play the same role
for certain online activities, but certainly not all!
Online payment is one activity where public honour-bound systems have
major strategic value to all of society; this is what the
free-and-easy credit-card revolution taught us all, over bureaucratic
cheques and cash-based controls, for consumer payments.

> Well, all that's nice, but the ACL has just the same need for authentication
> as the request, and not just authentication, either; I need to know that
> the author of the ACL was someone that I authorized to update my ACL.  Hence
> the ACL that authorizes ACL authors.  Of course, all these authorizations are
> given to names, and so along with every signature there's the name of the
> signer, and the relying application (or its database daemon that builds pre-
> authenticated ACLs from authorization records for speed) has to chase a chain
> of certificates for each name to verify the signature.

Again, separate the approval of the authority-issuing mechanism from
support for authorization-authorities safely representing some
privilege. That the authorization-authority needs an authority-issuing
control system is not a given. Yes, a PAC mechanism needs
a signature to protect it, which therefore requires certs,
which therefore require users to configure approval of
trust points. However, not all authorization schemes need
signatures; so "authorization always needs such authority-oriented
and trust-oriented prior approval" is NOT a given. Here Carl and I disagree.
This (false) implication facilitates the joining of the issuing models,
but with disastrous consequences, leading to
semantic ambiguity, confusion at the UI, and therefore always
the reality of human-bound vulnerabilities. Trust, authority, approval,
authentication and authorization are not in general related;
they are linearly sequenced in a comms security design.

> What a strong faction at least of the SPKI group is pushing is to *do away*
> with names in these contexts.  Ultimately, an application doesn't care what
> the client's name is; it's just a layer of indirection that gets in the way.
> The application's job is to determine whether a request is authorized.  If
> the client can present a credential, signed by someone trusted, that the key
> he uses is authorized, then there is only one certificate to trust, and if
> the key signing the certificate is its own, there can be no key it trusts
> more, and the trust does not have to be implicit.

I don't disagree with this underlying truth; it is however only a fraction of
the security problem, as it does not facilitate accountability for those
environments which demand individual accountability. Certs, sigs and
authorization in some environments (the big companies) need always
to be able to track the interaction down to a real person, so
that person can be fired when the company becomes liable
for that person's mis-actions. In more realistic cases, voluntary
regulation (to prevent fraud in trust positions, usually)
requires one to be 101% open, and not to object to everyone knowing
who, when, where and if, so they are fully informed
of everything they could ever reasonably claim they need to know to make an
informed decision. This is commercial reality we need to address.

(I don't disagree, see later, that naming is not necessary to
authentication or authorization, though some unique id is necessary
as knowledge sufficient to correctly infer the (implied) public key required
to verify the dig sig.)

On the matter of names: if I sell SSL source code over the
net to 0x[128bytes], and later discover
through NSA/Navy monitoring, CIA spying, and US Customs
court action that the recipient is in Syria, who do you think goes to prison!?
It won't be Carl! It'll be me, as I have no basis to prove that I took due
care with export regs, for example, which require
country-based id. The entire commerce and trade world is
stuffed with these id-based controls.

> If you, Peter, missed Carl's page on the generalized certificate stuff, let
> me point you to it again, it's at http://www.clark.net/pub/cme/html/cert.html
> Yes, he bashes X.509; yes, he bashes ASN.1, but the point is that for relying
> parties that are represented by autonomous daemons without human sensibilities,
> names are only an annoyance; it's the keys that are being authorized.  It is
> so in our distributed applications toolkit, even though we use the names to
> stand for the keys, and have to chase chains every time we want to authorize
> a request.

I find his arguments about ASN.1 and X.509 syntax puerile. This is irrelevant
though, as are all arguments about formalisms and syntax.

I did read his web page once; I can't remember what it said, other than:
X.509 is broken because ASN.1 is, hence all X.509 models and uses are broken;
see me for a real set of concepts.

Carl implicates, and links, the models behind those formalisms and syntaxes
with the same disdain he has for the notation. And here I object
intellectually, as in a comparison the solution he proposes has no different a
set of ideas (from SDSI) than X.509 in general, and some specific profiles of
that generic std in practice. This is strong reasoning which counters his own
informal claims.

The particular control framework you seek, for an autonomous responder seeking
to take human-style accountable decisions, has been suggested as
a requirement, or basis, for a definition of the user's trust enforcement
function: (a) it is semantically equated to an authorization decision; (b)
all control frameworks must design in such a linkage, as authorization
and trust decisions are equivalent. As a consequence, management
of trust must be tied to the management of authorization. Hence,
abandon ids and names.

I don't accept either proposition, in general or in the case in question.
Neither do I accept the inference chain.

However, I do accept the issue you raise: to fashion an authorization
enforcement decision, whose mechanism uses authority notions, you are
required to verify the digital signature on the authorization cert/token.
This is by definition (in our CPS) a process which relates the public
signature verification key to the operational period of the corresponding
private key. This MAY OR MAY NOT require cert chain validation, as such
validation may have occurred on a previous occasion; a previous validation
is represented by a (qualified) trusted public key in the local cache, whose
existence signals localised human/proxy trust and willingness to rely on the
previous validation process. It is indeed a mere key, not a name. This may be
Carl's confusion, and the basis for promoting a key-centric idea during
verification of dig sigs on authorization or any other type of cert/token.

Most token designers use the cert name to id the verification key; here, we
could use a hash of the key if one wishes; it is just an optimization feature
whereby platforms are now forced to cache on the basis of one
extra identifier, versus another usually transferred anyway for
human use and audit controls in proxies.
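
A hash-of-the-key cache of previously validated verification keys might look like this (a minimal sketch; the cache structure and function names are my illustration, not any deployed design):

```python
# Sketch: previously validated verification keys are cached under a
# hash of the key itself, rather than the cert name. A cache hit
# signals willingness to rely on an earlier chain validation; a miss
# forces full validation. Indexing by key hash avoids the cache
# mis-lookups an ambiguous naming practice could cause.
import hashlib

validated_key_cache = {}   # key-hash (hex) -> public key bytes

def cache_validated_key(public_key: bytes) -> str:
    h = hashlib.sha1(public_key).hexdigest()
    validated_key_cache[h] = public_key
    return h

def lookup_by_key_hash(key_hash: str):
    # Key-centric lookup: unambiguous even with no naming infrastructure.
    return validated_key_cache.get(key_hash)

pk = b"some-previously-validated-public-key"
h = cache_validated_key(pk)               # recorded after chain validation
print(lookup_by_key_hash(h) == pk)        # True - no chain re-validation needed
print(lookup_by_key_hash("deadbeef"))     # None - full validation required
```

Whether the index is a name or a key hash, the cached object is the same: a mere key, qualified by a prior trust decision.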

I can see that in a world where no non-ambiguous entity naming practice
occurs, it would be more sensible not to use the cert's human name, as this
could cause vulnerable cache mis-lookups. Again, he may be
exploiting the case of the trusted-local-keying-material fact plus a
non-ambiguous naming-infrastructure option to justify an entirely new scheme.
It is a queer set of cases, which hardly justifies the design of a wholly new
authentication and authorization logic, and certainly makes no difference to
certificate data structures!  (I've never deployed a cache-oriented
public-key scheme which had other than non-ambiguous naming,
to be honest though.)

> I have no problem with the rest of your comments in general.  I specifically
> agree strongly with the points about making judgements and accepting risks.
> The point is, how do I express these judgements to my cyber-agents, the
> software that I use?  The mechanisms currently popular seem to me to be too
> coarse, with their all-or-nothing authorization semantics, and too specific
> to certain applications, with certain bits defined by the standard to mean
> something not generally applicable to all certificate users, and too complex
> for human understandability.

In X.509, the bit encodings are certainly not meant for humans to interpret.
I cannot believe a text encoding would make the representation of these
complex semantics any more understandable. It is not the bits which are the
problem, but the complexity of the signature semantics based on trust
assumptions. And in this, SPKI looks as complex as any, though constrained
for ever to a rather bizarre case of a wholly unmanaged infrastructure
of entities. I agree we should play with this, and see the outcome, though!

> What I at least am arguing for is a simple way to implement a relying-party-
> as-issuer model, what you call the 'final assertion of id'.  For at least
> one broad class of uses, one I think was not envisioned by the original X.509
> design which clearly and perhaps artificially separated authentication from
> authorization, this model is the most straightforward, efficient, and secure.

X.509 assumed the trust hypothesis for authentication. As authorization
was not required, though approval (enforcement) of trust was, it was not
acceptable to force an authorization model into the equation. It is bizarre to
force people to authorize, when all I want is your authenticated name so I can
bill you for your connection time.

> For other uses, such as email, the model includes both models currently in
> vogue(well, only one, PGP, is in wide use), since a name is effectively a
> privilege, one which a human can recognize and therefore authorize.  For yet
> another, such as non-repudiation, it may not serve well, because of the need
> for some external authority, but then it may; I have not explored that use.

I can only see (but I'll reread these mails next week, after some thought)
that we are now catering for a bizarre case caused at heart by the wholly
broken idea of a digital signature comms *service*.

Most coherent and general-purpose comms models reject this, and have a
signature mechanism merely mechanically signing a data token, whose
authentication or authorization semantics are under the control of the
security designer. Such tokens proceduralize the verification procedures for
the dig sig, which require validated certs, which resolve to the trust
hypothesis and substantiate the authentication or authorization semantics, to
which the signature itself adds no knowledge but provides procedural support
for proof of use of the private key.
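
The separation of mechanical signing from designer-assigned semantics can be sketched as follows (HMAC stands in here for a real digital-signature primitive, purely for a runnable illustration; the token field names are hypothetical):

```python
# Sketch: the signing mechanism merely binds a key to the token bytes.
# Whether the token means "authentication" or "authorization" is decided
# by the security designer via the token's content, not by the signature,
# which adds no knowledge beyond proof of use of the key.
import hashlib
import hmac
import json

def sign_token(key: bytes, token: dict) -> str:
    data = json.dumps(token, sort_keys=True).encode()
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def verify_token(key: bytes, token: dict, sig: str) -> bool:
    return hmac.compare_digest(sign_token(key, token), sig)

key = b"proxy-signing-key"
authn_token = {"semantics": "authentication", "subject": "fred"}
authz_token = {"semantics": "authorization", "privilege": "read-billing"}

# Same mechanism, different designer-assigned semantics.
for tok in (authn_token, authz_token):
    sig = sign_token(key, tok)
    print(verify_token(key, tok, sig))   # True for both
```

The verification procedure is identical either way; only the token's declared semantics differ, which is the point being made above.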

> If my comments have stirred up some real interest, I am glad.  I am sorry if
> my historical and theoretical knowledge is lacking.  In defense, however, both
> of Professor Rivest's work (boy, have I got cheek!) and Blaze's work to which
> you referred, if they have done nothing better, they have brought the light of
> past thought down to the current generation of users and implementors
> (including me) in terms we can understand, and that's worthwhile.

I agree. I only read the Blaze contribution to the world's knowledge
about distributed naming through Rivest/Lampson. I don't find
SDSI objectionable in the least. I didn't see anything in that paper
concerning the argument put forward by yourself, and Carl, I assume.

SDSI was pure X.509, with ADDMD domain theory, with a different encoding rule.
If what you say is SPKI, then SDSI != SPKI yet.

I liked SDSI a lot. I'm not sure about SPKI! I suspect I'm just
not smart enough to understand the analysis, though; this is
usually the reason.

How do I subscribe?
> Brian Thomas - Distributed Systems Architect  bt0008@entropy.sbc.com
> Southwestern Bell                             bthomas@cdmnet.com(or primary.net)
> One Bell Center,  Room 23Q1                    Tel: 314 235 3141
> St. Louis, MO 63101                           Fax: 314 331 2755