
Re: comments on client auth



From: "Brian M. Thomas" <bt0008@entropy.sbc.com>
> My issue, however, is one that I have not heard discussed at all so far on
> this list:  that of implicit trust.  There is implicit trust involved in any
> certification path that has a root that is someone else.  When I chase a
> chain of certificates to a trusted root, the thing that bothers me is *why*
> I trust that root.

To clarify here, when you speak of "I" you are referring to yourself as
the program?  That is, the software, running in a sense as the user's
agent, is what needs to trust the root?
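
To pin down what we are both describing, here is a toy sketch in Python
of chasing a chain up to a trusted root.  The certificate fields and the
"signature" check are invented for illustration; real X.509 is of course
more involved:

    import hashlib

    TRUSTED_ROOTS = {"RootCA": "root-public-key"}   # the implicit trust

    def signature_ok(cert, issuer_key):
        # Toy stand-in for real signature verification: a keyed hash
        # of the certificate body under the issuer's key.
        body = (cert["subject"] + cert["issuer"] + cert["public_key"]).encode()
        return cert["signature"] == hashlib.sha256(
            issuer_key.encode() + body).hexdigest()

    def chase_chain(chain):
        # chain[0] is the end certificate, chain[-1] the root.
        for cert, issuer in zip(chain, chain[1:]):
            if cert["issuer"] != issuer["subject"]:
                return False
            if not signature_ok(cert, issuer["public_key"]):
                return False
        root = chain[-1]
        # This is the step you are asking about: the root is trusted
        # for no better reason than that it sits in this table.
        return TRUSTED_ROOTS.get(root["subject"]) == root["public_key"]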

> Put another way, *how* can I be sure that the trust that
> I as programmer had in the root that *I* intended to install in that place of
> trust is appropriate, given the possibility of compromise along the path from
> my statement of trust to the execution of the program?

So the issue is that in writing the program, you have in mind a root
which should be trusted, and you want to make sure that when instances
of the program later run, they use that same root?  It seems to me that
later instances should be thought of not as your agents, but as the
user's.  It is his wishes and his trust which are relevant, not yours.
Now if he trusts you to choose the root, then your concern is valid,
since you are worrying about it as a proxy for him.

> The best thing that
> we could come up with in our implementation was that the programs all had the
> root name and key compiled into their execution code.  If we were able to be
> absolutely (well, all right, reasonably) certain that no one could substitute
> that key, either in the linking process, or during or after the delivery of
> the compiled executable, it would be all right, but a program doesn't know,
> does it, whether its code was appropriately protected from tampering?

Putting the root in the code is about as safe as you're going to get.  If
the root can be tampered with, the code could be tampered with as well.
Basically it is no longer "your program" if tampering has occurred.  You
have no way of controlling the behavior of other people's programs.  This
fiction where you think of yourself as the program breaks down if
tampering has occurred.  It's not "you" any more once that happens.
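
To make that concrete, the scheme you describe amounts to something like
the following sketch (the key value is of course hypothetical):

    # The root key baked into the program as a constant at build time.
    BUILTIN_ROOT_KEY = "30818902818100..."   # hypothetical hex-encoded key

    def root_is_builtin(presented_key):
        # Anyone who can patch BUILTIN_ROOT_KEY in the delivered binary
        # can just as easily patch this comparison, which is the point.
        return presented_key == BUILTIN_ROOT_KEY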

> Bear in mind that the issue is not whether the operating environment *can*
> be secured in such a way, but whether I can know at runtime that it *is* so
> secured.  If our friend A. Clueless User, for whom we are building all this,
> is fooled into running the wrong program, everything we worked for can be
> out the window (Obviously, this has always been true, and I want my level
> of paranoia to be reasonable %>, still...).

I think your second perspective here is the more useful one.  There is no
way for a program to know that it has not been altered.  Only an outsider
can check that, and that requires some other trust and verification
mechanism.
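
The outsider's check can be as simple as hashing the program file
against a value recorded out of band.  A sketch (note the regress: the
verifier and the recorded digest must be trusted by some other means):

    import hashlib

    def file_digest(path):
        # Hash the program file as it sits on disk.
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def check_program(path, known_good_digest):
        # known_good_digest must come from somewhere the attacker
        # cannot reach -- which just moves the trust problem.
        return file_digest(path) == known_good_digest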

> On the other hand, if a certificate were to be made from any statement that
> could be uttered by any human in any appropriately understandable way (such
> as X.509 certificates and PGP certificates do about names, which humans who
> send or read mail messages can authorize appropriately), then chains of
> authority, not identity, would be possible, and (this is the point, finally)
> the final arbiter, the trusted root, is the relying party {him|her|it}self.

I'm afraid I don't understand this part.  Is the main point that the user
needs to be able to specify what he trusts, rather than the program?  He
is the "final arbiter"?  I think that is very reasonable but I don't
understand the comment about "chains of authority" and certificates being
made from any statement that could be uttered.
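
If I guess at your meaning, each certificate is a signed statement like
"key X may do Y", and a chain of such grants is walked back to the
relying party's own key.  A toy sketch under that reading (signature
checks omitted, names invented):

    GRANTS = [
        # (grantor, grantee, right)
        ("my-own-key", "alices-key", "sign email"),
        ("alices-key", "bobs-key",   "sign email"),
    ]

    def authorized(key, right, root="my-own-key"):
        # The relying party's own key is the root of the chain, so he
        # is the final arbiter.  (No cycle handling in this toy.)
        if key == root:
            return True
        return any(grantee == key and r == right
                   and authorized(grantor, right, root)
                   for grantor, grantee, r in GRANTS)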

Maybe part of my problem is that you are criticizing something which I
do not accept.  The idea that a program should choose the trusted root
rather than the user seems a weak solution.  I realize that Netscape
has done this but hopefully it will not be widely copied.  What was the
application you discussed above where you built a root into the program?

> That's what we have been trying to do, but implicitly, via corruptible
> mechanisms.  All trust and authorization is therefore explicit, and the
> semantics of the authorization can be whatever the relying party wants it to
> be, since nobody else cares about it.  Obviously, standards could be evolved
> for those, but they could be specific to the applications, and not hold up
> generic standards efforts while they try to make absolutely sure they've
> figured out every nuance of every possible use of the generic standard.
> I think that's revolutionary in its simplicity and security.

So the idea would be that the user explicitly specifies whom he trusts to
do what?  And this is done on a per-application basis?  It would seem
that some commonality across applications would be useful.  If a user
switches from one email program to another it would be nice if his trust
relationships could be carried over.
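
For instance, the trust relationships could live in one user-owned file
which any mail program reads, rather than inside each program.  A
sketch, with the file name and format invented for illustration:

    import json, os

    TRUST_FILE = os.path.expanduser("~/.trusted_keys")   # hypothetical

    def load_trust():
        # e.g. {"alices-key": ["sign email"]} -- format invented here
        try:
            with open(TRUST_FILE) as f:
                return json.load(f)
        except FileNotFoundError:
            return {}

    def user_trusts(key, purpose):
        return purpose in load_trust().get(key, [])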

Sorry if my comments do not seem relevant.  I worked several years ago
on the implementation of PGP, so I am coming at this from a perspective
where as much power as possible is put in the hands of the user.  You
may be looking at a different set of issues.

Hal Finney
hfinney@shell.portal.com