
Private keys and the emperor's clothes

In a private message commenting on the risk of two private keys being generated that would match, the author made some 
observations that I think are worth commenting on:

> ...a hierarchy could detect
> collisions more easily than could web-crawler-type technology searching
> globally distributed uncoordinated directories.  But once one were detected
> what do you do?  You either need to have the CAs generate the private
> keys (clearly undesirable) or you have the CA notify the principal that
> somewhere in the world there is someone he can spoof (also not desirable).

I replied:

But let me poke gently at another of your assumptions, that it would be "clearly undesirable" to have the CA generate 
the private key. This may be a sacred cow, but why is this necessarily so?

Let's consider the threat. Presumably you are concerned about the possibility that if the CA were involved in generating 
your key they might keep an unauthorized copy, and thus might be able to decrypt messages intended only 
for your eyes, or alternately, be able to forge your name to a document.

Although most of the legal analysis in the area of public key cryptography has focused on the use of digital signatures, 
as opposed to encryption, the draft ABA Digital Signature Guidelines make it clear that the CA has a fiduciary relationship 
with respect to its subscribers. Disclosing or misappropriating a subscriber's private key would leave the CA wide 
open for huge civil damages in the case of an encryption key, and for both civil and criminal (fraud and forgery) 
charges in the case of a signature key. It would be similar to a bank disclosing your PIN to someone else, or a bank employee
transferring funds from your account without your permission. It is reasonably safe to assume, therefore, that any prudent 
CA would run screaming from even the possibility of retaining any knowledge of your private key once it has been generated,
unless they are escrowing your key for you for backup purposes, with your explicit consent.

But suppose you are so careless, naive, or stupid as to select a CA that is totally ignorant of such issues, and/or one 
that is deliberately bent on mischief.  Such a CA, and in fact any CA, could impersonate you any time it wished to, just by 
issuing a certificate containing your name and/or other significant attributes, without your consent or even knowledge! It isn't
necessary to know your private key, or to attempt to find or create a duplicate -- since it is the CA that is binding your public key 
to your name, e-mail address, etc., they could issue a false certificate containing their own public key and bind it to your 
name, any day of the week.
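The attack can be sketched in a few lines. This is a toy model, not real X.509: an HMAC stands in for the CA's signature, and the names and key values are invented. The point it illustrates is that a certificate is nothing but the CA's signature over a (name, public key) binding, so a rogue CA can bind any key it likes to your name.

```python
import hmac
import hashlib

def issue_cert(ca_secret: bytes, subject_name: str, subject_pubkey: bytes) -> dict:
    """Toy certificate: the CA's MAC over the (name, pubkey) binding."""
    tbs = subject_name.encode() + b"|" + subject_pubkey
    sig = hmac.new(ca_secret, tbs, hashlib.sha256).digest()
    return {"name": subject_name, "pubkey": subject_pubkey, "sig": sig}

def verify_cert(ca_secret: bytes, cert: dict) -> bool:
    """A relying party checks only that the CA vouched for the binding."""
    tbs = cert["name"].encode() + b"|" + cert["pubkey"]
    expected = hmac.new(ca_secret, tbs, hashlib.sha256).digest()
    return hmac.compare_digest(cert["sig"], expected)

ca_secret = b"ca-signing-key"

# The CA never needs Alice's private key to impersonate her: it simply
# binds its OWN (or an attacker's) public key to her name.
forged = issue_cert(ca_secret, "Alice", b"attacker-pubkey")
assert verify_cert(ca_secret, forged)  # relying parties accept it
```

Nothing in the verification step consults Alice; the trust rests entirely on the CA behaving itself.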

This is why, back in the PEM days, we wrestled with the need for name subordination. Although the scheme 
eventually unraveled somewhat, especially in the case of the unaffiliated residential person, in the case of the 
organizational user it was a darned good way of ensuring that a rogue CA would only be able to compromise the 
security of those individuals whose certificates were beneath it in the certificate hierarchy. I.e., if XYZ Corp. sets up a 
CA to issue certificates to its employees, those employees would be beneath the CA's name in the naming tree, and 
presumably XYZ would be liable for the alleged misdeeds of its employees in any case. They would therefore be 
hung with their own rope, or hoist by their own petard, as it were.
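The PEM-era name subordination rule amounts to a simple prefix check on distinguished names. In this sketch a DN is represented as a list of attribute strings (the DNs shown are invented), and a certificate is acceptable only if the subject's name extends the issuing CA's name:

```python
def is_subordinate(subject_dn: list, ca_dn: list) -> bool:
    """Name subordination: the subject's DN must strictly extend the
    issuing CA's DN, so a CA can only certify names beneath itself."""
    return len(subject_dn) > len(ca_dn) and subject_dn[:len(ca_dn)] == ca_dn

xyz_ca   = ["C=US", "O=XYZ Corp."]
employee = ["C=US", "O=XYZ Corp.", "CN=Jane Doe"]
outsider = ["C=US", "O=Acme", "CN=John Smith"]

assert is_subordinate(employee, xyz_ca)      # XYZ may certify its employee
assert not is_subordinate(outsider, xyz_ca)  # a rogue cert for an outsider fails
```

Under this rule, a compromised XYZ Corp. CA can still damage only names under "O=XYZ Corp.", which is exactly the containment the paragraph above describes.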

I realize that the concept of a global directory is often deprecated on this list, but without such a directory, it becomes very 
difficult for a subscriber to search all possible certificate repositories for a certificate containing his name or other attributes.
And since this group also tends to deprecate the notion of a globally unambiguous name (smacks too much of X.500), even 
if you found a certificate which contained your name, the CA could say, "Oh, well, that's not you, that's the other David P. Kemp."

The X.509/X.500 paradigm provided certain mechanisms that tended to reinforce the basic trust model. If we are going to 
depart from those paradigms, we have to rethink the entire trust model, and how to achieve/enforce it.

But back to the issue of private key security. Your unstated assumption seems to be that as an individual user, 
generating your private key on your own generic hardware platform, using whatever untrusted, pond scum operating 
system you happen to have available (almost certainly not even C2 rated), in an environment in which you are 
routinely connected to the Internet, perhaps running Java programs, executing (printing) PostScript scripts and word processing
documents containing macros, and/or expanding downloaded files using convenient self-extracting utilities, and probably 
failing to make any super-conscientious effort to monitor all of the CERT alerts and rid your system of all possible 
viruses and Trojan Horse programs, that nonetheless your private key will be more secure than if a CA had generated it. Sure it will.
You'll type in 1024 random key strokes in order to generate your private key seed, and a TSR or virus will be looking over your shoulder
all the while and will export your key the next time you connect to the outside world.
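By way of contrast, here is a minimal sketch of how seed material ought to be gathered on a platform that *is* trusted: from the operating system's cryptographic random number generator rather than from keystrokes that a resident program can observe. (Python's `secrets` module is today's interface, an obvious anachronism relative to this discussion, and the key-derivation step is purely illustrative.)

```python
import secrets
import hashlib

# Draw seed material from the OS CSPRNG; unlike keystroke timings, it is
# not observable by a TSR or virus watching the keyboard.
seed = secrets.token_bytes(32)                 # 256 bits of seed material
key_candidate = hashlib.sha256(seed).digest()  # illustrative derivation step
assert len(key_candidate) == 32
```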

Pardon me, but the emperor is likely to catch a cold if he doesn't start wearing some more clothes!

In contrast to this environment, I would suggest that a reasonably prudent CA that chooses to generate public/private key 
pairs for its subscribers is going to make use of a very carefully developed and maintained system, in order to minimize their 
own risk. I would expect that any software used would be anonymously purchased in shrink-wrapped form, brought 
into a sterile, clean-room environment, and installed from the ground up on a system that had been completely purged by a 
low-level formatting of the removable hard disk (the only storage mechanism used), which is locked in a safe when not in 
actual use. Such a system would almost surely be isolated by an air gap from any network, and would make use of certified hardware
random number generators rather than risky software-based approaches that depend on lots of outside factors to achieve
randomness -- the same outside factors that tend to make the system insecure.

The more prudent CAs will probably use a system which has been rated at the B3 level of trust, in order to provide additional 
assurance that the private key won't be compromised once it is generated, and TEMPEST-certified and tamper-proof 
hardware will also be used. Assuming that the CA is a licensed CA, as envisioned by the Utah Digital Signature Act, then 
there is both state regulation and periodic auditing and inspection. I would assume that two-man, no-lone-zone kinds of rules 
would be in effect during business hours where the keys are generated and the certificates signed, and that the operation
would take place within an alarmed strong room or vault that would correspond to the DOD criteria for open storage of Top Secret material.

Of course not all CAs may be licensed, or provide such exalted levels of security. In that case, I would advise their subscribers
to insist that a very strongly worded caveat be included in their certificates, to warn potential relying parties not to put much faith in 
any document whose digital signature is to be validated by such a certificate, and likewise not to reveal anything that is terribly
important in an encrypted message that is sent to them, for these are only lightweight certificates that are not very trustworthy, 
regardless of how sophisticated the underlying encryption mechanisms might appear to be.


The original author's reply was also interesting:

> Your point about malicious CAs is well-taken, but if you use a hardware
> token for key generation / signing (even in a malicious, untrusted dos
> environment) the emperor does at least have underwear.

Good point. All of the quibbles I had regarding private key generation are also true
with respect to the actual use of a private key, of course, unless tamper-proof hardware is used.

And even then we have the garbage-in, garbage-out problem -- hardware devices sign what they 
are instructed to sign, and none that I am aware of have an independent display of what they are signing.

This just reinforces the point that it is necessary to look at the _entire_ system when evaluating the overall 
strength and weakness of an approach, not just particular components. This, of course, is a tough job. Sigh.


Robert R. Jueneman
Software Engineering Consultant
NetWare Security R&D
Internet Infrastructure Division
Novell, Inc. M/S-PRV-D241
122 East 1700 South
Provo, UT 84606