Monday, November 12, 2012

RSA Distributed Credential Protection: Solving the Wrong Problem?

Recently, RSA Security announced RSA Distributed Credential Protection (DCP). I’ve read the literature, sat through a presentation by an RSA sales representative, and watched the YouTube videos. Most of all, I have formed an opinion.

Being a crypto geek of sorts, I’ll be the first to admit that this seems like a really cool and interesting application of secret splitting. But, as much as RSA makes it sound like the most innovative thing since sliced bread, I believe that it is fundamentally a solution to the wrong security problem. Let’s have a look at why.

As I’ve written many times, security is fundamentally about ensuring trust and managing risk. When attempting to lower risk, there is always a cost / benefit balance that needs to be studied. Just as one would not spend $10,000 for a home safe only to store $1000 in it, IT is not going to spend six figures on a solution that will not reduce the perceived cost of the risk at least that much. (Note that while I am not certain of the pricing of RSA DCP, it is not farfetched to think that an enterprise rollout would be near the six figure range.)

RSA advertises DCP as transparent to the end users, and so it is. However, that is not the only thing that matters here. Another major factor, which only came out during the live presentation by RSA, is that any application wishing to take advantage of DCP must be changed to use its API. This means, of course, that if you have N applications all authenticating users against an SQL database (or perhaps an LDAP directory store, if the API works with that), all N of those applications need to be changed. If you fail to change one application, then the user credentials must still reside in a data store where DCP is not applied, and consequently they remain unprotected. So take the cost of licensing the RSA DCP software and add to it the cost of integrating the DCP API into each of your N applications, and you will have something closer to the total cost of deployment. Of course, the operational costs are also likely to increase somewhat: whereas before you had but a single data store for said credentials, now you have two. The end result is that the total cost to incorporate RSA DCP into your environment is likely to exceed the six-figure level even if the software licensing costs are nowhere near that amount.

Well, still, that might be OK, right? After all, if the perceived benefits greatly exceed the total costs of mitigation, we still have a security win.

So what benefits does RSA DCP bring to the enterprise? According to the RSA press release as well as this YouTube video, the threat that RSA is trying to prevent is the “smash-and-grab” of credentials by an attacker. Specifically, DCP is designed to make it more difficult for an attacker who has infiltrated your company network and managed to get direct access to your database server to obtain credentials (either plaintext or hashes). DCP would also likely mitigate a rogue DBA doing a “smash-and-grab” of your company’s credential data, as long as care was taken to provide separation of duties and not give a single administrator a DBA role on both DCP servers.

So we still need to answer this question: Is this a common way for an attacker to gather user credentials?  In my opinion, it is not. By far, the biggest attack vector for adversaries stealing credential material is via SQL (or possibly LDAP) injection attacks. Will DCP do anything to mitigate SQLi attacks? The answer would appear to be “no” (at least according to the RSA sales rep that we talked to). In fact, given that one has to bolt new DCP API code into one’s application to use DCP, there is a chance that new SQLi vulnerabilities may be introduced as developers change the application code.

So is there a place where using RSA DCP would make sense? I believe so, but I think it is a niche market rather than the broad market RSA Security would like it to be. RSA DCP could be very valuable where you have an extremely high-value target (credentials or otherwise) that is difficult to replace. The perfect example that comes to my mind is protection of the RSA Security SecurID seeds. Compromise of those SecurID seeds required RSA to replace all the hard-token SecurID devices. In fact, it is not unreasonable to speculate that this product came directly out of researching ways to protect those high-value credentials from the smash-and-grab type of direct attack that resulted from that breach. If RSA Security wishes to broaden the market for their new DCP product, I believe that the best approach is for them to integrate DCP seamlessly with their other products, starting with RSA Access Manager. If you are going to make a believer of us security folk, you first have to be willing to eat your own dog food.

However, in the meantime, for your regular user passwords, salting with a sufficiently long random salt, enforcing password complexity rules when users select their passwords, and enforcing account lockout are likely to be sufficient protection. If doing those things is not sufficient, you seriously need to consider whether passwords are a strong enough form of authentication for your users.

Note that the views expressed herein are wholly my own and do not represent those of my company, of OWASP, nor any other organizations with whom I am associated.

Regards,

-kevin

Friday, January 20, 2012

USACM Policy Statement on SOPA and PIPA

For now, the immediate battle seems to have been won, but you can be sure that the MPAA and their cohorts will be back soon.

However, I wanted to point everyone to this USACM blog post that provides their policy statement regarding SOPA and PIPA. If you don't want to read their full policy statement, I would at least encourage you to take a quick look at their very approachable (by non-Geeks ;-) "Analysis of SOPA's impact on DNS and DNSSEC".

Friday, January 6, 2012

Misunderstanding Trust

Background

Last July, I blogged about “Understanding Trust”, in which I attempted to describe several properties of trust. Because I thought that most of these properties were obvious, I was somewhat surprised to see someone with an interest in security authoritatively quote a well-known Microsoft software developer in a post on a cryptography mailing list as saying that “trust is not transitive”.

Of course, I strongly disagreed. If you are interested in the specific context, you can find the full text of my post in the crypto mailing list archives. However, the research that I did and this specific post made me aware that there are still several software and security engineers who misunderstand trust. So I decided that perhaps I should attempt to clear up this misunderstanding.

Is Trust Transitive or Isn't It?

The post to the cryptography mailing list that I attempted to refute started out by citing Microsoft developer Peter Biddle, stating “More fundamentally, as Peter Biddle points out, trust isn't transitive”.
So, before writing a rebuttal to his response, I thought it would be a good idea to track down the source of Peter Biddle's comments. I eventually found the source in Peter Biddle's blog post titled “Trust Isn’t Transitive (or, 'Someone fired a gun in an airplane cockpit, and it was probably the pilot')”.

I, and I think most security pundits, believe that Peter Biddle is wrong about trust not being transitive. If you read carefully through Peter Biddle's blog on this topic, you will see (as Keith Irwin so aptly pointed out in a reply to the Randombit.net cryptography mailing list) that Biddle is mixing contexts. In a nutshell, in his blog, Biddle argues that trust in two completely different contexts equates to trust in general (i.e., any context) and therefore concludes that trust is not transitive.


However, trust clearly is context dependent and when considering whether or not trust is transitive, we need to consider the same context.

Specifically, if C1 and C2 are two different contexts, it does NOT logically follow from the premises
    There exists a context C1 such that “Alice trust<C1> Bob”
    There exists a context C2, where C1 != C2, such that “Bob trust<C2> Carol”
that
    Alice trust<C> Carol for all contexts, C.
where trust<C> means “trust in context C”.
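This non-sequitur can be made concrete with a small sketch. The code below is purely illustrative (the contexts, names, and the `trusts` helper are my own invention, not anyone's actual trust model): it represents trust as a separate relation per context, making it plain that chaining an edge from one context with an edge from another proves nothing about either.

```python
# Purely illustrative: one trust relation *per context*, stored as a
# set of (truster, trustee) pairs. All names here are hypothetical.
trust = {
    "flying planes":     {("Alice", "Bob")},
    "handling firearms": {("Bob", "Carol")},
}

def trusts(context, a, b):
    """True iff a directly trusts b in the given context."""
    return (a, b) in trust.get(context, set())

# Within one context, both links would be needed before transitivity
# could even be discussed; here the second link is missing:
same_context = (trusts("flying planes", "Alice", "Bob") and
                trusts("flying planes", "Bob", "Carol"))       # False

# Across two different contexts, both premises hold, but they are
# premises about two *different* relations, so nothing follows about
# Alice and Carol in any context:
cross_context = (trusts("flying planes", "Alice", "Bob") and
                 trusts("handling firearms", "Bob", "Carol"))  # True
```

The point of the sketch is simply that `trust<C1>` and `trust<C2>` are distinct relations; conflating them is what produces Biddle's apparent counterexample.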

That seems to be the way that Biddle is arguing about trust not being transitive. Well, if that's the way he's defining it, then of course it's not transitive.

If it is just that...well, that's the WRONG way to reason about transitivity in general, and trust being transitive in particular.

Transitivity is a mathematical property of some relationship R: for x, y, and z members belonging to some well-defined set S, we call the relationship R transitive if:
    ( xRy ∧ yRz ) ⇒ xRz
for all x, y, and z elements of S. (See the Wikipedia article on transitive relations for a more thorough, but very comprehensible, treatment of this.)
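For the programmers in the audience, the definition above translates directly into a few lines of code. This is just a sketch of the textbook definition (the function name and the sample relations are mine):

```python
def is_transitive(relation):
    """relation: a set of (x, y) pairs. The relation is transitive iff
    whenever (x, y) and (y, z) are both present, (x, z) is present too."""
    return all(
        (x, z) in relation
        for (x, y) in relation
        for (y2, z) in relation
        if y == y2
    )

# "less than" restricted to {1, 2, 3} is transitive...
assert is_transitive({(1, 2), (2, 3), (1, 3)})

# ...but drop the (1, 3) pair and transitivity is broken.
assert not is_transitive({(1, 2), (2, 3)})
```

Note that the check is always relative to one fixed relation; there is no way to even state it across two different relations, which previews the problem with Biddle's argument.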

However, in Biddle's blog where he gives his examples, all the examples that he mentions involve two different contexts (e.g., flying planes and handling firearms, or working on cars and taking care of kids).

That is, Biddle is really discussing two different relationships
    trust<flying planes>
and
    trust<handling firearms>
and what he is then trying to conclude is that
    ( x trust<flying planes> y ) AND ( y trust<handling firearms> z ) IMPLIES ( x trust<C> z )

for any context C. Well, duh! If you make a fallacious straw-man argument about trust being transitive in this manner, of course your conclusion is going to be that "trust is NOT transitive". But you would also, IMHO, be wrong. If we stick to a specific context / attribute, however, then I think you will find the logic concludes that trust is transitive. (But, as I'll show later, it's not really quite that simple.)



Here's a really nutty case restricted to a specific context that I hope will make the point. Let's conjecture that both
    Passengers trust<flying planes> Pilot P
and
    Pilot P trust<flying planes> Chimpanzees
are true. (That is, “passengers trust some specific pilot P in flying planes” and “some (same) specific pilot P trusts chimpanzees in flying planes.”) So, some pilot P brings his trusted chimpanzee into the cockpit, and shortly after takeoff, he decides to take a little nap, so he hands the controls over to his chimp pal. And all this occurs unbeknownst to the passengers. So what do we conclude? Well, logic dictates that based on the premises, we may conclude:
    Passengers trust<flying planes> Chimpanzees

But wait! That's absurd you say. Well, perhaps. But then again, whether the passengers know it or not, the Chimp who is supposedly flying the plane is pretty much holding the lives of the passengers in his hands (or is that paws?).

On one hand, these passengers are literally (unbeknownst to them) trusting that chimp to safely fly that plane. (Of course, on the other hand, if there were a dozen parachutes on the plane, there would be a blood bath to see who would get them. ;-)

Now let's make a little change in the premise. Let's substitute 'Auto Pilot System' for 'Chimpanzees'.

The conclusion is now:
    Passengers trust<flying planes> Auto Pilot System

All I've done is exchanged one symbol (Chimpanzee) with another (Auto Pilot System), but all of sudden most of us feel a whole lot better.

So what does that tell us about 'trust'? Well, for one, the human concept of trust is much more complex than the simplistic, quantifiable mathematical property we have been using to model it thus far. And herein lies a big problem in security. Why? Because the software systems that we construct can in no way approach the complexity of all these nuances. (Not that it matters a whole lot. History has shown that we can't even get the simpler model correct, but I digress.)

But Wait, There's More

In the post that I responded to, where the poster was arguing that trust was not transitive, they continued with this example:
When CAs [Certificate Authorities] get in the habit of delegating their power, that process is at risk of being bypassed and in any case starts to happen much less transparently. There are plenty of cases in the real world where someone is trusted with the power to take an action, but not automatically trusted with the power to delegate that power to others without external oversight. And that makes sense, because trust isn't transitive.

This statement makes sense, but NOT because 'trust isn't transitive'. Here the mistake in reasoning is not in trying to equate two different contexts. Rather, the statement makes sense because of another aspect of trust that I have discussed before in my “Understanding Trust” blog post. Specifically,
    Trust is not binary.

Trust is not black or white; it is shades of gray. As humans, for a given context, we "assign" more trust to some and less to others. This "level of trust" is largely based on our perception of experience and reputation, the latter of which we sometimes try to model in reputation-based systems.
An example...unfortunately, you need brain surgery. (If you are reading this blog, that should be proof enough. I rest my case. ;-) You have two surgeons to choose from:
    Surgeon #1: 10 years of experience and over 300 operations.
    Surgeon #2: 1 year of experience and 6 operations.

All other things being equal, who you gonna choose? Surgeon #1, right? (Well, unless in those 300 operations, s/he has had 250 malpractice results. ;-) And at least by comparison, you probably do NOT trust Surgeon #2.

So, with that in mind, let's get back to the transitivity part:
    You trust<brain surgery> Surgeon #1
    Surgeon #1 trust<brain surgery> Surgeon #2
so, obviously,
    You trust<brain surgery> Surgeon #2.

Whoa! Wait a minute. Didn't we just say that we did NOT trust Surgeon #2? Yep!

So what went wrong here? Well, what went wrong is that we were assuming that trust behaves as a binary relationship... that I either have complete trust or zero trust. But trust is not binary. It is shades of gray. That means that to more accurately model trust in the real world, we need some property of that relationship that indicates a level of trust, rather than trust just being T/F. So we need that in addition to a context.

So now we see we need (at least) something like:
    trust<level, context>
to model trust. Where before we just were (implicitly) using something like
    trust<{T,F}, context>
(which allowed us to model only complete trust or no trust), we find we now need something more like:
    trust<[0,1], context>

That is, we model level as a real number in the range 0 to 1, inclusive.
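To make the trust<level, context> idea concrete, here is a small sketch. The composition rule used here (multiplying levels along a chain, so derived trust can only attenuate) is one common modeling choice and an assumption of mine, not something prescribed above; the names and numbers come from the surgeon example and are made up for illustration.

```python
# Hypothetical direct trust levels, all within one fixed context
# (say, <brain surgery>); levels are reals in [0, 1].
direct = {
    ("You",        "Surgeon #1"): 0.9,
    ("Surgeon #1", "Surgeon #2"): 0.8,
}

def derived_trust(a, b):
    """Return the direct level if one exists; otherwise the best level
    obtainable through one intermediary, attenuated by multiplying the
    two links. The product never exceeds either link, so derived trust
    can only shrink along a chain."""
    if (a, b) in direct:
        return direct[(a, b)]
    return max(
        (direct[(a2, m)] * direct[(m2, c)]
         for (a2, m) in direct if a2 == a
         for (m2, c) in direct if m2 == m and c == b),
        default=0.0,
    )

# You trust Surgeon #1 directly at 0.9, but your derived trust in
# Surgeon #2 is attenuated -- transitive, yet strictly weaker.
level = derived_trust("You", "Surgeon #2")
```

Under this model the surgeon paradox dissolves: trust does propagate along the chain, but at a strictly lower level, which matches the intuition that you trust Surgeon #2 less than you trust Surgeon #1.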

Cryptographer and software engineer Ben Laurie pointed out that trust modeled in this way is very similar to KeyMan, a piece of software that he and Matthew Byng-Maddick developed back in 2002 to facilitate the management of keys, certificates, and signatures used in signing software in a distributed and exportable network of trust.

So... we're done now, right? Well, not so fast Sparky. There are other important properties of trust that I already covered in my “Understanding Trust” blog post last July. If you have not already done so, I would encourage you to go back and read it.

Recasting Trust

The term “trust” is overloaded with several meanings and therefore causes a lot of confusion. On the Randombit.net crypto mailing list, Marsh Ray suggested that we instead use the term “relies on”, an idea he credited to his former colleague Mark S. Miller.

I think in general, this is a great idea. If we say that “A relies on B” and “B relies on C”, then it is intuitively obvious that “A relies on C”, and hence transitivity immediately follows.

Using “relies on” works in many situations where we would normally use the word “trust” as a verb. I, for one, intend to start using it much more often than I do, because you have no idea how many times I almost made an embarrassing typo and misspelled “trust” as “tryst”. But perhaps that's the true hidden cryptographic meaning of cryptographers using Alice, Bob, and Carol in their discussions. As with many things cryptographic, maybe there's more going on there than is apparent. (I'll kindly spare you the obvious pun in this case.)

Regards,
-kevin
P.S.- I promise I will try to be a little more consistent with my blogging in 2012. (Did I just make a New Year's resolution? ;-) But thanks to all of you who have been faithful in reading and haven't completely given up on me.