
Re: three digital signature models ... for x9.59



I should point out that I now work for VeriSign; the following does not
represent VeriSign policy on the matter, however, but is a personal opinion.


I disagree.

I do not believe that there are three models of PKI infrastructure as
stated. I believe that there are problems and a set of tools for meeting
those problems. I see a single unified and comprehensive model for PKI,
support for which is rapidly evolving and completion of which is not far
off.

I believe the key is to look at the 'signing act'. I'm concentrating on
signature because I believe that it is the most important application of
PKI. If you get signature right the rest follows. It is even arguable that
in a fully mature network PKI model one would wish to preclude any use of
certificates to exchange encryption keys: if a certificated signature key is
'online', use it to request a public encryption key on a per-use basis.
This arrangement ensures perfect forward secrecy and complete independence
of communications keys.
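A toy sketch of that arrangement, purely illustrative: the `Party` class and
the use of HMAC as a stand-in for a real public-key signature are my own
assumptions, not anything specified by x9.59. The point is only that the
signing key is long-lived and certificated while encryption keys are minted
fresh for every exchange and never certificated.

```python
import hashlib
import hmac
import os

class Party:
    """Holds one long-lived, certificated signing key; encryption keys
    are generated fresh for every exchange and never certificated."""

    def __init__(self, signing_key: bytes):
        self.signing_key = signing_key          # certificated, long-lived

    def sign(self, message: bytes) -> bytes:
        # HMAC stands in for a real public-key signature in this sketch.
        return hmac.new(self.signing_key, message, hashlib.sha256).digest()

    def fresh_encryption_key(self) -> bytes:
        # A brand-new random key per request: compromise of one exchange
        # reveals nothing about any other (forward secrecy).
        return os.urandom(32)

alice = Party(signing_key=os.urandom(32))

# A correspondent asks Alice for an encryption key; Alice signs a
# one-time key for this exchange only.
k1 = alice.fresh_encryption_key()
k2 = alice.fresh_encryption_key()
assert k1 != k2                      # each exchange gets its own key
```

In a real system the per-use key would of course be a public key from an
ephemeral key-agreement pair, but the independence property is the same.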

So we are signing an object, what does this mean? Merely that we are
creating a message that we expect to be interpreted in accordance with a
given semantics by one or more recipients.

The question then is what interpretation the recipient places on the
combination of certificate and signed object. What is the strongest
statement that can be made about a certificated key? The X.509v1 concept of
'identity' does not work, as we all know by now; what does identity mean in
any case? There are a number of options:

That the entity with access to the private component of the certificated
key:
* is referred to by the issuer using the stated subject name.
* has been determined to be commonly referred to using the stated subject
name using a specified method.
* is additionally indemnified by the issuing party for certain PKI failures
in a stated sum.
* is additionally guaranteed to third parties to be the stated entity in a
certain sum.

People will note the progression from an assertion of truth to an assertion
of method to an insurance model. In practice commerce has little requirement
for absolute knowledge of identity (whatever that is). Commerce functions on
the basis of aggregated risks and costs.
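The progression above is an ordering, which suggests a simple way a relying
party might act on it. The enum names and the `acceptable` helper below are
hypothetical, a minimal sketch of matching a certificate's assurance level
against the risk a transaction carries:

```python
from enum import IntEnum

class Assurance(IntEnum):
    """Progression of claims an issuer can make about a certificated key,
    from bare assertion of truth, to assertion of method, to insurance."""
    NAMED_BY_ISSUER = 1        # issuer refers to subject by this name
    VERIFIED_BY_METHOD = 2     # name checked by a stated procedure
    INDEMNIFIED = 3            # issuer indemnifies certain PKI failures
    GUARANTEED_TO_THIRD_PARTIES = 4  # identity guaranteed in a stated sum

def acceptable(cert_level: Assurance, required: Assurance) -> bool:
    # Commerce aggregates risk: accept any certificate whose assurance
    # meets or exceeds what this transaction demands.
    return cert_level >= required

assert acceptable(Assurance.INDEMNIFIED, Assurance.VERIFIED_BY_METHOD)
assert not acceptable(Assurance.NAMED_BY_ISSUER, Assurance.INDEMNIFIED)
```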

Of course we have introduced an additional object at this point, the
certificate. Certificate transport is therefore of importance. There are
only three basic means by which the recipient can obtain the certificate,
just as there are only three tenses: past, present or future. Either the
recipient has the certificate already, receives it with the signed object,
or requests it after the message has been received. In the PKIX world these
tenses roughly correspond to local storage, PKCS#7 and LDAP.

Sending the certificate with each message has obvious advantages if the
message may be opened offline, where there is no opportunity to make a
certificate request. On the other hand it is obviously wasteful to send
certificates that are not needed. One non-protocol means of avoiding such
waste in an address book based mail agent would be to keep a record of the
people to whom a certificate had been sent. That is not too difficult for a
personal address book of a few hundred folk and (say) five certificates.
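The record-keeping described above amounts to a few lines of bookkeeping.
A minimal sketch, with invented fingerprints and addresses:

```python
# recipient address -> fingerprints of certificates already sent to them
sent: dict[str, set[str]] = {}

def certificates_to_attach(recipient: str,
                           my_certs: dict[str, bytes]) -> dict[str, bytes]:
    """Return only the certificates this recipient has not yet received,
    and record them as sent."""
    already = sent.setdefault(recipient, set())
    needed = {fp: cert for fp, cert in my_certs.items() if fp not in already}
    already.update(needed)
    return needed

certs = {"fp1": b"cert1", "fp2": b"cert2"}
first = certificates_to_attach("bob@example.com", certs)
second = certificates_to_attach("bob@example.com", certs)
assert set(first) == {"fp1", "fp2"}   # first message carries both
assert second == {}                   # later messages carry none
```

For a few hundred correspondents and a handful of certificates this table
stays trivially small, which is the point being made above.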

Certificate Revocation Lists may appear to change the principles of the
system, but really they don't. The semantics of a CRL is that of a
certificate stating 'yes, I still mean what I said earlier'. The syntax of
that statement may be in the form of a negative (I hereby revoke x) but from
the receiver's point of view it is used to create a positive statement.
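That inversion is easy to make concrete. The check below is a toy (the
serials and the 24-hour freshness window are assumptions, not anything
mandated by a CRL profile): the CRL arrives as a negative list, but the
relying party uses it to assert the positive "this certificate was still
good as of the CRL's issue time".

```python
import time

def still_valid(serial: str, crl_revoked: set[str], crl_issued: float,
                max_age: float = 24 * 3600) -> bool:
    """Positive statement built from a negative list: the certificate is
    accepted only if the CRL is fresh AND does not name this serial."""
    fresh = (time.time() - crl_issued) <= max_age
    return fresh and serial not in crl_revoked

now = time.time()
assert still_valid("1001", {"1002"}, crl_issued=now)
assert not still_valid("1002", {"1002"}, crl_issued=now)            # revoked
assert not still_valid("1001", {"1002"}, crl_issued=now - 2 * 86400)  # stale
```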

So the issue of CRL transport is in fact no different from that of
certificates themselves. The problem may again be addressed in past, present
or future tenses. The past tense relates to CRL exchange via either the pull
or the push model. Present tense is when we send a CRL along with the
message to establish the continuing validity of the certificate; future
tense is online status checking using either CRL pull or something like
OCSP.

The distinction between 'pull' and 'push' models is worth some comment. In
any communication there has to be an 'initiator' and a 'respondent'. The
Client/Server paradigm makes this a major issue and leads many to the
misleading conclusion that there is some essential and fundamental divide
between the two. It is amusing to note that at least one major PC
manufacturer offers 'workstation' and 'server' machines using the exact same
motherboard and peripherals but charges almost twice as much for the SERVER
version. That is not to say that there are not major architectural
differences in other product lines (the 128 vs 256 bit memory bus
configurations of the AXP series, for example), but I digress.

The pull and push models for CRL exchange arise from the fact of a more or
less homogeneous, distributed computing network. The only distinction
between the nodes is that certain types of information originate in some
places and are consumed in others. There is no essential logical distinction
between the mechanisms; both ensure that bits are transported from one place
to another. The 'push' mechanism has the advantage of allowing rapid
dissemination of information if required. The pull model has the advantage
of being more robust in the face of network instability: the party which
will use the information is responsible for requesting it.
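A minimal sketch of that symmetry, with invented class names: the same CRL
bytes move either because the issuer pushes them to subscribers or because
the relying party pulls them when it needs them; only the initiator differs.

```python
class Issuer:
    def __init__(self) -> None:
        self.crl = b"crl-v1"
        self.subscribers: list["RelyingParty"] = []

    def publish(self, crl: bytes) -> None:
        self.crl = crl
        for party in self.subscribers:      # push: rapid dissemination
            party.receive(crl)

class RelyingParty:
    def __init__(self) -> None:
        self.crl: bytes | None = None

    def receive(self, crl: bytes) -> None:  # push endpoint
        self.crl = crl

    def pull(self, issuer: Issuer) -> None: # pull: consumer-initiated,
        self.crl = issuer.crl               # robust to network instability

issuer = Issuer()
pusher, puller = RelyingParty(), RelyingParty()
issuer.subscribers.append(pusher)

issuer.publish(b"crl-v2")   # pusher is updated immediately
puller.pull(issuer)         # puller fetches when it chooses to
assert pusher.crl == puller.crl == b"crl-v2"
```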


Each mechanism has advantages and disadvantages; each gets a job done.
Therefore in the current period, in which the task is to establish a
comprehensive PKI, it makes sense to pick one strategy for each purpose and
implement that. In time, however, there will be demand for the other
mechanisms, and standard certificate infrastructure software will support
all of them, just as today mail servers support both POP and IMAP even
though the two perform much the same function in different ways.

Similarly charging models will also change. The US govt. wants to pay for
PKI infrastructure on the basis of certificate use rather than issue.
Whenever information is exchanged between two points there is an opportunity
to make a charge.

In short I don't see a distinction between three different PKI models that
are in competition. Instead I see a single, unified and coherent framework
which allows the most appropriate mechanism to be chosen in each
circumstance.


The one case in which I see a radical disagreement is between the
centralizers and the distributors. I make no apology for being a proponent
of the distributed solution in almost every case. I have built big iron
(processing 6Tb/sec) and have spent the past five years working on the Web.

It is almost always possible to offer a simpler solution to a networking
problem by channeling every communication through a single point. What this
approach does, essentially, is avoid the problem of transport and consider
every issue in the 'present' tense only. The complexity then reappears at
the lower levels, since the fastest processor available typically sits in a
box underneath my desk. The centralized paradigm is based on the assumption
that large computing systems will be available offering ten to a hundred
times the power of ordinary machines. Such asymmetry simply does not exist
any longer.

There will always be people who present the centralized computing model as
the way forward. It is beloved of telephone companies and of some MIS
departments which still hanker after the days when they controlled the
corporate information flow. If the scaling problem bites it does not matter,
since complex solutions make for larger budgets.

The centralized model will always stop at the corporate boundary however.
The major discovery of the Internet was that communications between
companies had become almost as important as those taking place inside them.
General Motors is not going to make itself reliant on Ford's PKI server
under any circumstance. Therefore even if the scaling problem did not bite
the locus of trust problem would.

PKI infrastructure therefore needs to be engineered to support the
distributed model, since in the long term it is the only viable option. In
any case the niche market for a centralized solution is already filled, and
those who are willing to accept a single point of failure as the price of an
integrated single-vendor solution have bought it. Microsoft and Netscape
have other plans, however.


At this point I'm having difficulty understanding quite where the SPKI model
is distinctive. I know that it is possible to subset the mechanisms in more
than one way. I do not believe that restricting mechanisms is the route to
'simplicity'. Allowing the mechanism appropriate to the problem to be
selected from a plurality of options is frequently the route to
'simplicity'. At this point the syntactical differences between PKCS#7,
X.509 and SDSI are rapidly becoming moot; have people noticed that CAPI
already provides handling for these? I work on the assumption that if UNIX
fails to match Windows feature for feature it dies.


            Phill






