
Further comments on capabilities [long]



Bill has sent out an excellent description.  There are two parts that
may bear expansion:

	+ The issue of delegation
	+ The issue of confinement

What follows is not directly related to encryption; rather, it is a
description of some usage scenarios that ACLs do not appear
to support readily.

As an aside, we are planning an early release of EROS, a pure
capability system that runs on X86 boxes, in late June.  Information
on that system can be found at

	http://www.cis.upenn.edu/~eros

If you would like to be notified when that release happens, drop a 
piece of email to the address that appears on that web page.

As a substrate, SPKI ought to be able to handle these issues in
pretty much the same way that a straight capability model would.
In particular, note that certs are safely transferable once the
secure bootstrap conditions are established; they therefore avoid
several of the problems I have pointed out for user identities.


*** Delegation:

One thing we would like to be able to do is *selectively* delegate
authority.  Suppose you have a spreadsheet, and you want one
of your support departments to do a thought experiment with it.  
Suppose further that you work in an environment where the standards 
of practice require decent attention to privacy: documents should
be disclosed only to those parties who have a need to work on them.
Staff members are therefore not routinely given access to each
other's documents.

If you knew who would be working on the document, you could add
those people to the access control list for the document.  But this
is a big project, and what you've really done is hand off the document
to a manager in another department, who is going to delegate the
actual effort to some individual(s) in that department.

If you use an ACL system, at this point you are pretty well stuck.
You have three choices:

	1. Make your document readable to the world.
	2. Grant the manager in the other department the
	   right to modify the ACL.
	3. Have the other manager make a COPY of the
	   document, which will then be handled according
	   to the usual practices in their department.

Option (1) is obviously unsatisfactory.  Option (2) gives away far
too much authority, and Option (3) destroys the identity of the
document, and with it the audit trail of what changes were
made -- not a security issue in this environment (though it is in
some), but something you might well want to maintain.

These sorts of problems are handled more naturally in a capability
system, where the authority to modify the document can simply be
handed from one party to the next without breaking the audit trail.
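
To make the contrast concrete, here is a minimal sketch in
Python.  This is purely illustrative -- it is nothing like EROS
code, and every name in it (Document, Capability, attenuate)
is invented for the example:

class Document:
    def __init__(self, text):
        self.text = text
        self.audit_log = []        # history stays with the document

class Capability:
    """An unforgeable reference bundling a document with rights."""
    def __init__(self, doc, rights):
        self._doc = doc
        self._rights = frozenset(rights)

    def read(self):
        assert "read" in self._rights
        return self._doc.text

    def write(self, who, text):
        assert "write" in self._rights
        self._doc.audit_log.append((who, text))
        self._doc.text = text

    def attenuate(self, rights):
        # Delegation: hand a (possibly weaker) capability to the
        # next party.  No ACL is edited and no copy is made, so
        # the document's identity and audit trail are preserved.
        return Capability(self._doc, self._rights & frozenset(rights))

doc = Document("Q3 projections")
mine = Capability(doc, {"read", "write"})
theirs = mine.attenuate({"read", "write"})  # hand off to the other manager
theirs.write("analyst-7", "Q3 projections, revised")
print(doc.audit_log)    # one document, one unbroken history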

One essential difference between ACLs and capabilities is that
capabilities are document-centric, while ACLs are user-centric.

In the context of distribution, ACLs raise a further problem: different
sites administer user identities under different policies -- the name
space for users is therefore what I call a "political namespace".
In order for two machines to correctly enforce access controls, some
sort of correspondence between the two administrators' user name
conventions must be established.  This mapping is usually
constructed by a human being, who in effect acquires universal
authority by having this control.  This points up several issues:

	1. You may trust the *user* at another site without trusting
	   the *administrator* at that site.
	2. Even stipulating that you trust the administrator, the
	   mapping is difficult for a well-intentioned administrator
	   to keep straight, and is therefore prone to error.

Historically, such name translation schemes have often been buggy,
and have provided significant opportunities for security attacks.
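
As a toy illustration of the hazard (invented names, not any
real system's code), the entire cross-site access decision often
rests on a single human-maintained table:

# One stale or mistyped entry in a mapping like this silently
# grants the wrong person access; nothing in the system checks it.
remote_to_local = {
    "siteB:jsmith": "local:john.smith",
    "siteB:jdoe":   "local:jane.doe",
    # Site B reassigned "jsmith" to a new hire last month; the
    # table was never updated, so the new jsmith now inherits
    # everything the old one could touch.
}

def local_identity(remote_user):
    # The mapping *is* the authority.
    return remote_to_local.get(remote_user)

print(local_identity("siteB:jsmith"))   # -> "local:john.smith"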

In the capability model, we can draw a "Chinese wall" between the
administrator and the object.  We can require that the administrator
provide the raw storage without being told how that storage will
be used.  If we know that the remote system runs a trusted
supervisor, we know that the information will not be improperly
disclosed to the administrator.

Even if the administrator is a terrific person, this approach is significantly
less prone to error.

Finally, this approach lets *administrators* delegate authority.  One
administrator can say "I know the user pool at site XXX, and I am
willing to lend up to 10% of my computing resources to anything those
users want to do."
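
Sketched as an SPKI-style authorization cert -- written here as a
Python dict for readability (real SPKI certs are S-expressions),
and with a "cpu-share" tag vocabulary invented for the example:

lend_cert = {
    "issuer":    "key:our-admin",
    "subject":   "key:site-XXX-admin",
    "propagate": True,                   # site XXX may re-delegate
                                         # to its own users
    "tag":       ("cpu-share", "0.10"),  # up to 10% of our cycles
    "not-after": "1998-12-31",
}

def permits(cert, request):
    # A real verifier would walk the whole delegation chain,
    # checking the propagate flag at each hop and intersecting
    # tags; this stub sketches only the final resource check.
    resource, limit = cert["tag"]
    return request[0] == resource and float(request[1]) <= float(limit)

print(permits(lend_cert, ("cpu-share", "0.05")))   # -> True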

I'm fairly sure that all of this can be done in the context of an ACL
model as well, but it seems much easier to think about in the capability
model; and the easier it is to understand, the more likely we are to
get it right.



*** Confinement:

The so-called "confinement problem" really has three different 
forms:

	1. ensuring that no unauthorized information goes IN to
	   a program or subsystem.
	2. ensuring that no unauthorized information goes OUT
	   from a program or subsystem.
	3. ensuring that no unauthorized information passes in
	   either direction.

The second of these is the one that Butler Lampson called the
"confinement problem" in his note ("A Note on the Confinement
Problem", Communications of the ACM, V 16, N 10, October
1973).  See <http://www.cis.upenn.edu/~KeyKOS/Confinement.html>
for a description and discussion of its limitations.

Case (1) is important for authority control and traceability -- if you
need a reproducible result, you need to know what exactly the inputs
were that produced your output.

Case (2) is important for security -- you hand sensitive information to 
a program and need to know that it will not be disclosed to (e.g.) the
programmer.

Case (3) is important for testing -- in the software industry, we engage
in a practice called "black box" testing, but we rarely bother to verify
that the box is actually black, and we then integrate the components
by removing the boxes.

Each case has other applications as well -- I'm simply trying to give a
clear example for each so you will have a mental framework to hang
them from.

EROS, and KeyKOS before it, both provide a solution to problem (2),
and it is not unduly difficult to handle problems (1) or (3) in either system.
Sam Weber (of the University of Pennsylvania) and I have in fact just
sketched a formal mathematical proof of the solution for problem (2).
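
The flavor of the solution to problem (2) can be sketched
abstractly.  This is a much-simplified caricature with invented
names, not the real mechanism: a subsystem is accepted as
confined only if every capability it starts with is either an
explicitly authorized "hole" or transitively unable to transmit
information outward.

class Cap:
    def __init__(self, name, read_only, targets=()):
        self.name = name
        self.read_only = read_only
        self.targets = list(targets)  # caps reachable through this one

def safe(cap, holes, seen=None):
    """Safe = an authorized hole, or read-only with everything
    reachable through it also safe (a cycle of such caps is safe)."""
    if seen is None:
        seen = set()
    if id(cap) in seen:
        return True
    seen.add(id(cap))
    if cap.name in holes:
        return True
    return cap.read_only and all(safe(t, holes, seen) for t in cap.targets)

def is_confined(initial_caps, holes):
    return all(safe(c, holes) for c in initial_caps)

# Read-only code plus one authorized output channel is confined;
# the same image with no authorized holes is not.
code   = Cap("code",   read_only=True)
output = Cap("output", read_only=False)
print(is_confined([code, output], holes={"output"}))   # True
print(is_confined([code, output], holes=set()))        # False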

My personal opinion is that confinement grows steadily less interesting
from the standpoint of security; I view it more as a reliability tool.  The
problem is threefold:

	1. Covert channels have ever-increasing bandwidth.
	2. The world is ever more interconnected.
	3. The thieves are getting better at framing the really
	   important questions in yes/no terms.  These are
	   almost impossible to beat.

For reliability, though, the ability to enclose a subsystem in a black
box for testing -- and actually run it that way in the field with decent
performance -- seems rather more important.  This sort of structure
also lends itself to finer specification of the system.  It therefore
better supports the types of social processes that generate more 
reliable systems to begin with.


Apologies for the length.  Hopefully some of this is useful.


Jonathan S. Shapiro