Friday, January 6, 2012

Misunderstanding Trust


Last July, I blogged about “Understanding Trust”, in which I attempted to describe several properties of trust. Because I thought most of these properties were obvious, I was somewhat surprised to see someone with an interest in security authoritatively quote a well-known Microsoft software developer, in a post on a cryptography mailing list, as saying that “trust is not transitive”.

Of course, I strongly disagreed. If you are interested in the specific context, you can find the full text of my post in the crypto mailing list archives. However, the research that I did, along with this specific post, made me aware that there are still several software and security engineers who misunderstand trust. So I decided that perhaps I should attempt to clear up this misunderstanding.

Is Trust Transitive or Isn't It?

The post to the cryptography mailing list that I attempted to refute started out by citing Microsoft developer Peter Biddle: “More fundamentally, as Peter Biddle points out, trust isn't transitive”.
So, before writing a rebuttal to his response, I thought it would be a good idea to track down the source of Peter Biddle's comments. I eventually found the source in Peter Biddle's blog post titled “Trust Isn’t Transitive (or, 'Someone fired a gun in an airplane cockpit, and it was probably the pilot')”.

I, and I think most security pundits, believe that Peter Biddle is wrong about trust not being transitive. If you read carefully through Peter Biddle's blog post on this topic, you will see (as Keith Irwin so aptly pointed out in a reply to the cryptography mailing list) that Biddle is mixing contexts. In a nutshell, Biddle argues that trust in two completely different contexts equates to trust in general (i.e., in any context) and therefore concludes that trust is not transitive.

However, trust clearly is context dependent, and when considering whether or not trust is transitive, we need to keep the context the same.

Specifically, if C1 and C2 are two different contexts, it does NOT logically follow from:
    There exists a context C1 such that “Alice trusts<C1> Bob”
    There exists a context C2, where C1 != C2, such that “Bob trusts<C2> Carol”
that:
    Alice trusts<C> Carol for all contexts, C.
where trusts<C> means “trust in context C”.

That seems to be the way that Biddle is arguing about trust not being transitive. Well, if that's the way he's defining it, then of course it's not transitive.

If it is just that...well, that's the WRONG way to reason about transitivity in general, and trust being transitive in particular.

Transitivity is a mathematical property of a relationship R. For x, y, and z belonging to some well-defined set S, we call the relationship R transitive if:
    ( xRy ∧ yRz ) ⇒ xRz
for all x, y, and z that are elements of S. (See the Wikipedia article on transitive relations for a more thorough, but very comprehensible, treatment of this.)
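To make the definition concrete, here is a minimal Python sketch (my own illustration, not from the original discussion) that checks a relation, represented as a set of ordered pairs, against the textbook definition:

```python
def is_transitive(relation):
    """Return True if for every (x, y) and (y, z) in the
    relation, (x, z) is also in the relation."""
    return all((x, z) in relation
               for (x, y) in relation
               for (y2, z) in relation
               if y == y2)

# "Less than" on {1, 2, 3} is transitive...
less_than = {(1, 2), (2, 3), (1, 3)}
# ...but drop the pair (1, 3) and the property fails.
missing_pair = {(1, 2), (2, 3)}
```

Running `is_transitive(less_than)` yields True, while `is_transitive(missing_pair)` yields False, because (1, 2) and (2, 3) are present but (1, 3) is not.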

However, in Biddle's blog where he gives his examples, all the examples that he mentions involve two different contexts (e.g., flying planes and handling firearms, or working on cars and taking care of kids).

That is, Biddle is really discussing two different relationships:
    trust<flying planes>
    trust<handling firearms>
and what he is then trying to conclude is that
    ( x trust<flying planes> y ) AND ( y trust<handling firearms> z ) IMPLIES ( x trust<C> z )

for any context C. Well, duh! If you make a fallacious straw-man argument about trust being transitive in this manner, of course your conclusion is going to be that "trust is NOT transitive". But you would also, IMHO, be wrong. If we stick to a specific context / attribute, however, then I think you will find the logic concludes that trust is transitive. (But, as I'll show later, it's not really quite that simple.)
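One way to see the category error is to make the context an explicit part of the data model. In this Python sketch (the names and the dictionary layout are my own, purely for illustration), each trust relation is keyed by its context, so a chain is only well-formed when both links share the same context:

```python
# trust[context] is the set of (truster, trustee) pairs
# that hold in that specific context.
trust = {
    "flying planes": {("Alice", "Bob"), ("Bob", "Dave")},
    "handling firearms": {("Bob", "Carol")},
}

def chain(context, a, b, c):
    """Infer a -> c from a -> b and b -> c, but only
    within a single, shared context."""
    rel = trust.get(context, set())
    return (a, b) in rel and (b, c) in rel
```

Here `chain("flying planes", "Alice", "Bob", "Dave")` succeeds, but there is no context in which Alice's trust in Bob can be chained with Bob's trust in Carol, because those two links live in different contexts. The straw-man argument only works by ignoring that keying.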

Here's a really nutty case restricted to a specific context that I hope will make the point. Let's conjecture that both
    Passengers trust<flying planes> Pilot P
    Pilot P trust<flying planes> Chimpanzees
are true. (That is, “passengers trust some specific pilot P in flying planes” and “some (same) specific pilot P trusts chimpanzees in flying planes”.) So, pilot P brings his trusted chimpanzee into the cockpit and, shortly after takeoff, decides to take a little nap, so he hands the controls over to his chimp pal. And all this occurs unbeknownst to the passengers. So what do we conclude? Well, logic dictates that based on the premises, we may conclude:
    Passengers trust<flying planes> Chimpanzees

But wait! That's absurd you say. Well, perhaps. But then again, whether the passengers know it or not, the Chimp who is supposedly flying the plane is pretty much holding the lives of the passengers in his hands (or is that paws?).

On one hand, these passengers are literally (unbeknownst to them) trusting that chimp to safely fly that plane. (Of course, on the other hand, if there were a dozen parachutes on the plane, there would be a blood bath over who would get them. ;-)

Now let's make a little change in the premise. Let's substitute 'Auto Pilot System' for 'Chimpanzees'.

The conclusion is now:
    Passengers trust<flying planes> Auto Pilot System

All I've done is exchange one symbol (Chimpanzee) for another (Auto Pilot System), but all of a sudden most of us feel a whole lot better.

So what does that tell us about 'trust'? Well, for one, the human concept of trust is much more complex than the simplistic, quantifiable mathematical property we have been trying to model it as thus far. And herein lies a big problem in security. Why? Because the software systems that we construct can in no way approach the complexity of all these nuances. (Not that it matters a whole lot. History has shown that we can't even get the simpler model correct, but I digress.)

But Wait, There's More

In the post that I responded to, where the poster was arguing that trust was not transitive, they continued with this example:
When CAs [Certificate Authorities] get in the habit of delegating their power, that process is at risk of being bypassed and in any case starts to happen much less transparently. There are plenty of cases in the real world where someone is trusted with the power to take an action, but not automatically trusted with the power to delegate that power to others without external oversight. And that makes sense, because trust isn't transitive.

This statement makes sense, but NOT because 'trust isn't transitive'. Here the mistake in reasoning is not in trying to equate two different contexts. Rather, the statement makes sense because of another aspect of trust that I have discussed before in my “Understanding Trust” blog post. Specifically,
    Trust is not binary.

Trust is not black or white; it is shades of gray. As humans, for a given context, we "assign" more trust to some and less to others. This "level of trust" is largely based on our perception of experience and reputation, the latter of which we sometimes try to model in reputation-based systems.
An example...unfortunately, you need brain surgery. (If you are reading this blog, that should be proof enough. I rest my case. ;-) You have two surgeons to choose from:
    Surgeon #1: 10 years of experience and over 300 operations.
    Surgeon #2: 1 year of experience and 6 operations.

All other things being equal, who you gonna choose? Surgeon #1, right? (Well, unless in those 300 operations, s/he has had 250 malpractice results. ;-) And at least by comparison, you probably do NOT trust Surgeon #2.

So, with that in mind, let's get back to the transitivity part:
    You trust<brain surgery> Surgeon #1
    Surgeon #1 trust<brain surgery> Surgeon #2
so, obviously,
    You trust<brain surgery> Surgeon #2.

Whoa! Wait a minute. Didn't we just say that we did NOT trust Surgeon #2? Yep!

So what went wrong here? Well, what went wrong is that we are assuming that trust behaves as a binary relationship...that I either have complete trust or zero trust. But trust is not binary. It is shades of gray. That means that to more accurately model trust in the real world, we need some property of that relationship that indicates a level of trust, rather than trust just being T/F. So we need that in addition to a context.

So now we see we need (at least) something like:
    trust<level, context>
to model trust. Where before we just were (implicitly) using something like
    trust<{T,F}, context>
(which allowed us to model only complete trust or no trust), we find we now need something more like:
    trust<[0,1], context>

That is, we model level as a real number in the range 0 to 1, inclusive.
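As a toy illustration of trust<[0,1], context>, here is a Python sketch in which trust attenuates along a delegation chain. The multiplicative composition rule is my own simplifying assumption, one of many plausible choices, not a claim about how trust levels actually combine:

```python
# Trust levels in [0, 1], keyed by (truster, trustee, context).
levels = {
    ("you", "surgeon1", "brain surgery"): 0.9,
    ("surgeon1", "surgeon2", "brain surgery"): 0.5,
}

def derived_level(a, b, c, context):
    """Compose two links of a chain by multiplying their
    levels (an assumed, deliberately lossy rule)."""
    return (levels.get((a, b, context), 0.0)
            * levels.get((b, c, context), 0.0))

THRESHOLD = 0.8  # your personal bar for "trusted enough"
```

With these numbers, you trust Surgeon #1 directly (0.9 is above the threshold), while the derived trust in Surgeon #2 is 0.9 * 0.5 = 0.45, well below it. The relation still composes transitively; it is the level that attenuates, which dissolves the apparent paradox in the surgeon example.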

Cryptographer and software engineer Ben Laurie pointed out that trust modeled in this way is very similar to KeyMan, a piece of software that he and Matthew Byng-Maddick developed back in 2002 to facilitate the management of keys, certificates, and signatures used in signing software in a distributed and exportable network of trust.

So... we're done now, right? Well, not so fast Sparky. There are other important properties of trust that I already covered in my “Understanding Trust” blog post last July. If you have not already done so, I would encourage you to go back and read it.

Recasting Trust

The term “trust” is overloaded with several meanings and therefore causes a lot of confusion. On the crypto mailing list, Marsh Ray suggested that we use the term “relies on” as suggested by his former colleague Mark S. Miller.

I think in general, this is a great idea. If we say that “A relies on B” and “B relies on C”, then it is intuitively obvious that “A relies on C”, and hence transitivity immediately follows.

Using “relies on” works in many situations where we normally might use the word “trust” as a verb. I, for one, intend to start using it much more often than I do, because you have no idea how many times I have almost made an embarrassing typo and misspelled “trust” as “tryst”. But perhaps that's the true hidden cryptographic meaning of cryptographers using Alice, Bob, and Carol in their discussions. As with many things cryptographic, maybe there's more going on there than is apparent. (I'll kindly spare you the obvious pun in this case.)

P.S.- I promise I will try to be a little more consistent with my blogging in 2012. (Did I just make a New Year's resolution? ;-) But thanks to all of you who have been faithful in reading and haven't completely given up on me.


  1. Hi Kevin, this is an interesting argument and definition... however (as you point out) the term "trust" is overloaded so is quite dangerous. Consider the following passage from NIST SP800-144, Guidelines on Security and Privacy in Public Cloud Computing:

    "Cloud services that use third-party cloud providers to outsource or subcontract some of their services should raise concerns, including the scope of control over the third party, the responsibilities involved (e.g., policy and licensing arrangements), and the remedies and recourse available should problems occur. Public cloud providers that host applications or services of other parties may involve other domains of control, but through transparent authentication mechanisms, appear to a consumer to be that of the cloud provider. Trust is often not transitive, requiring that third-party arrangements are disclosed in advance of reaching an agreement with the cloud provider, and that the terms of these arrangements are maintained throughout the agreement or until sufficient notification can be given of any anticipated changes."

    Hmmm... I'd say "trust" (of the general sort you're defining -- there are a few other types) can be transitive only if everyone in a chain-of-trust has the same understanding of the "context" in which their trusting decision is being made. As pointed out by these NIST authors, the context of "cloud computing" is not narrow enough to support a fully-transitive trust (unless you make some additional assumptions about well-informed trusters).

    Anyway, I'm adding this blog-entry to my "taxonomic zoo" of trust-definitions. Thanks for the posting!

    I have not developed (and probably never will develop) a full classification tree, but your definition would be in the Axiomatic kingdom if I ever did develop such a tree. The other kingdom would be Behavioural. See if you want the gory details...

    1. Clark, thanks for your thoughtful comment. I think that in cloud services, the _intent_ often is not to promote a chain-of-trust through transitivity, but the result is that the chain-of-trust is often still there, if only implicitly. Ideally one could handle such things through contractual agreements and periodic audits, but the reality is that these things are seldom, if ever, done. And while context is important in this regard, it is not generally something that most businesses discuss with cloud service providers.

  2. I agree that trust isn't binary, but I think it goes much further than shades of gray. If you put the context into the single variable as well, then trust becomes a picture, not just an integer (not to mention a single bit)

  3. David, great comment.

    When I wrote that trust was shades of gray, I was not really implying the level of trust was necessarily a function of a single variable. Note the use of "(at least)" in my statement:

    "So now we see we need (at least) something like:
    trust<level, context>
    to model trust. Where before we just were (implicitly) using something like
    trust<{T,F}, context>
    (which allowed us to model only complete trust or no trust), we find we now need something more like:
    trust<[0,1], context>

    That is, we model level as a real number in the range 0 to 1, inclusive."

    That is, think of a function, possibly of multiple variables, that maps to some real number between 0 (representing total distrust) to 1 (representing total trust).

    And in reality, the context also is dependent on multiple variables and the context also can influence the level of trust that we assign.

    So what I meant by stating that trust is shades of gray was simply that it varies somewhere between complete distrust and complete trust; that there is a spectrum of trust that we, as humans, assign almost unconsciously. If that were not true--if trust were really a binary function--then we would not be able to assert statements such as "I trust Alice more than I trust Bob and I trust Bob more than I trust Carol".

    Hope that clarifies things for you.

  4. I believe the root cause of confusion here is the qualitative difference between trust in the sense of public key infrastructure as performed by cryptographic operations within silicon computing machines and the dynamism of mammalia trust. Combining "context" (worlds) with "levels" (properties) in tautologies is similar to how existential modalities are expressed on a metaphysical level. Worlds can be isolated, united, overlapped, etc. and multiple properties can be associated with a variable where zero or more may or may not exist depending upon existential quantifiability. Transitivity is a ternary relation that can be broken down into two binary relations such as implications. Existential logic has quantified modes, so transitivity may appear in sentential notation like this, "if iRj and jRk, then iRk." If you're interested in the details, I suggest reading "On the Plurality of Worlds" by David Lewis. The meta-physics course I used it in was a prerequisite for time travel study because it teaches you to describe hypothetical situations in parallel universes.

  5. Hi Derek. Thanks for taking the time to comment.
    I'm not sure if *the* root cause of the confusion is the qualitative difference between how computer scientists treat trust in the realm of information security and the imprecise way that the general population uses the term in more generic scenarios, but it certainly is a major contributing factor to the misunderstanding about trust. As to whether we need to involve multiple worlds in this discourse to help explain whether or not trust is transitive, let me just say that I have enough trouble understanding a single world, so I will leave that debate to the philosophers. However, I would hope that Ockham's razor would not lead us to seek such elaborate explanations in this case. I personally would prefer an approach that starts with us more precisely defining the terms we are about to use, especially when we use them with different nuances than the rest of the general populace does. At least as far as the domain of information security goes, computer scientists have been less than stellar in defining our terminology, and I am probably just as guilty of that as anyone.
    And lastly, as I am apt to joke about, I have given up my practice of time travel ever since I accidentally left my time machine in the future. I hate it when that happens.