Re: Trust vs. accountability
On Tue, 17 Feb 1998, Michael Robinson wrote:
-> I really don't understand what the point is to all these arguments about
-> defining and managing trust.
-> Maybe I'm just dim, but it seems axiomatic to me that trust can be
-> violated. Otherwise we wouldn't call it trust; we would call it certainty.
-> In the real world that I'm familiar with, people create mechanisms to enforce
-> accountability, not trust.
Linguistically, "trust" is akin to "true" and "faithful", with a usual
first dictionary meaning of "1 a : assured reliance on the character,
ability, strength, or truth of someone or something b : one in which
confidence is placed."
So, in common English usage, trust is what you place your confidence in,
or what you expect to be truthful. It is a basic concept in system
security -- it's your bastion, your stronghold. You may doubt anything in
your design, but not that which **you** designated as trusted (within the
design limits, of course). Be it your trusted private key, your trusted
Hanko, your trusted computer, whatever.
And it is called trust (and not certainty) for a good reason: it
involves your judgement and not your knowledge -- so it may also change
from person to person, or from time to time.
Now, violation of trust is just what Ben and Marc were talking about here:
how do you ascertain trust? Or, in your terms, how do you ascertain that
trust will NOT be violated? In the information-theoretic approach,
violation of trust happens when the trust you receive is tainted with
information (information = surprises, good or bad). So, it is indeed
axiomatic that it can happen -- but the question is: how can I make the
probability of it happening as low as I want it to be?
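As a small illustration of that last point (my own sketch, not from the
original message): if trust in a statement rests on several independent
anchors, each of which can fail with some probability p, and the statement
is violated only when ALL of them fail at once, then the residual
probability of violation is p to the power n -- which can be pushed below
any target you choose by adding anchors. The function names and numbers
here are illustrative assumptions.

```python
def anchors_needed(p: float, epsilon: float) -> int:
    """Smallest number n of independent trust anchors, each failing with
    probability p (0 < p < 1), such that the probability that all of them
    fail together, p**n, is at most epsilon."""
    n, prob = 0, 1.0
    while prob > epsilon:
        prob *= p
        n += 1
    return n


def residual_violation_probability(p: float, n: int) -> float:
    """Probability that all n independent anchors fail at once."""
    return p ** n


# Example: anchors that are each wrong half the time still suffice,
# if they are independent, to reach a one-in-a-thousand target.
n = anchors_needed(0.5, 0.001)
print(n)                                        # 10
print(residual_violation_probability(0.5, n))   # 0.0009765625
```

The design point is the independence assumption: the calculation only
holds when the anchors cannot fail for a shared reason, which is exactly
the kind of condition an information-theoretic treatment tries to make
explicit rather than leave to intuition.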
The objective here is to be able to make calculations, to arrive at
numbers, at evaluators. This means that questions such as evidence,
responsibility, validation, reliability, generalization, uncertainty,
consistency, truthfulness, accountability, legal reliance, liabilities,
warranties, ethics, etc. are left untouched here -- on purpose and for a
firm reason ;-)
I intend to totally exclude any avoidable inter-subjective dependence at
this level -- and introducing **any** of these issues would taint the
presented information-theoretic definition of trust (which is mostly
subjective in this treatment) with unneeded inter-subjective dependence.
Thus, avoiding accountability in the communication layer is a good thing;
indeed, accountability is a secondary concept that is not even useful at
that layer. For example, the fact that you trust your computer does not
make it accountable, no?
For a fuller discussion -- and so I don't further increase the bandwidth
here ;-) -- the following message of today may be useful:
Dr.rer.nat. E. Gerck email@example.com
--- Meta-Certificate Group member, http://www.mcg.org.br ---