Friday, March 28, 2014

ESAPI No Longer an OWASP Flagship Project

I read the news today oh, boy…

By now, you’ve probably heard about several of the OWASP board members (and perhaps, some bored members as well) coming to the conclusion that several of the current OWASP flagship code projects should be demoted and others should take their places. (If you’re not up on the news, you can read about it in these 3 email threads archived here, here, and here.)

Among the flagship projects suggested for demotion is OWASP ESAPI, of which I am the project owner for the Java implementation.  After hearing about the recent ESAPI Hackathon, you may be puzzled or perhaps even surprised by this news. You shouldn’t be. While it may sound like heresy coming from an ESAPI believer, and although I am not happy about it, I think this action is long overdue and is best for OWASP. So I guess I’m sorry if I disappoint those of you who are ESAPI fans because I’m not standing up for ESAPI and defending it, especially since it's not yet a done deal.

I’m not, because I can’t. I, for one, can see the writing on the wall. (Pun intended.) All of the allegations that are being made against ESAPI are spot-on:
·         Only one minor point release since July 2011.
·         164 open issues, including 4 marked Critical and 11 marked as High.
·         Far too many dependencies, something that has never been addressed despite being promised for almost 3 years.
·         Wiki page still in the old OWASP format.
·         Minimal signs of life for ESAPI 3.0 in GitHub and ESAPI 2.x for Java on Google Code. Zero signs of life for implementations in other programming languages. [Note: Discounting the SalesForce one, as I’ve not kept track of it.]
·         For ESAPI for Java, a boogered-up architecture where everything is a singleton, making some things such as mock-testing all but impossible. Less than 80% test code coverage, in part because of that.
·         Lack of any significant user documentation outside of the Javadoc and the ESAPI crypto documentation.
·         Disappointing participation at the ESAPI Hackathon.

I could go on, but I won’t.

ESAPI has dropped the baton, and I’ll take as much blame for that as anyone. I have neither the contacts nor the people skills needed to entice developers to participate in ESAPI. The 3 or 4 people that I have recruited at one time or another have not contributed even a single bug fix. And it goes beyond that. I’ve still not put together a release based on the few updates (a couple of minor bug fixes and a small enhancement in handling configuration files) that were done by those participating in the ESAPI Hackathon. I have also still yet to release a fix for CVE-2013-5960. (Aside: Hey! Could use a little help here! Ideally someone who knows a thing or three about crypto.)

It’s time that ESAPI yield the baton to some of the other worthy candidates like OWASP HTML Sanitizer, OWASP Dependency Check, and OWASP Java Encoder to name just a few.

What does this mean for ESAPI support?

Well, probably nothing. I mean, honestly, the support really hasn’t been that great in the past 3 years. I do suspect that it might well give those thinking about adopting it for a new project a reason to rethink that commitment, though.

I will personally commit to trying to fix any known bugs in the ESAPI crypto (including a few you didn’t even know were there; well, okay, maybe “annoyances” would be a better term), including a fix to CVE-2013-5960. But I am likely through with any planned feature enhancements other than the minor ones that I’ve been working on. If I do any further enhancements for the crypto features, I will split off ESAPI Crypto into its own new incubator project. No more promises of time frames though. Like most of you, I work a 40+ hour job; the ESAPI work is volunteer hours, and I have family and other commitments as well.

As for ESAPI 3, will it flourish or die out before it even gets started? Well, that’s up to you, OWASP Community. I certainly am a believer in the ESAPI concept of a common set of interfaces for common security controls, but the problem has always been the implementation, not the concept. As they say, the devil is in the details.

So if you want to volunteer to work on ESAPI, give us a shout out on the ESAPI-Dev list. If you aren’t already signed up, you can sign up here.

Best regards,
-kevin

Sunday, June 23, 2013

Appalachian Security

This is just too funny to keep to myself. This was written about 3 years ago by a former colleague of mine who was the PM for our Application Security group. He wrote it when I announced that I was leaving the Application Security team to join the Information Security team under Corporate Security.  I happened to run across it again as I was cleaning out stuff in preparation for my last day at CenturyLink (which was Friday, 6/21/2013).  It was originally posted along with a photo of The Beverly Hillbillies' character, Jed Clampett. Out of respect for Buddy Ebsen, who played Jed Clampett, I've chosen not to include the photo so as not to diminish Ebsen's legacy by being associated with me.


Naturally, this was meant to be accompanied by the original theme song from the Beverly Hillbillies. Now maybe if I can just get Gary McGraw and Where’s Aubrey to record it... :)


Enjoy,
-kevin
Appalachian Security
by Mark Hersman (July 2010)


Come and listen to a story about a man named Kev
A poor engineer, barely kept his family fed,
Then one day he was "working" on an app,
His pager started beeping, nearly woke him from his nap.


Awake, that is. Consciousness.


Well the first thing you know ol Kevs got a scare,
His judgement says "Kev get away from there"
Says "The Lavratory is the place you ought to be"
So he slips Hanbin the pager, and he goes to take a ……...


A break, that is. Quiet time.


Well now its time to say goodbye to Kev and all the guys.
And they would like to thank fer usin ClearTrust APIs.
You're all invited back again to this locality
To have a heapin helpin of app security


Kevin style that is. Set a spell, Take your shoes off.


Y'all come back now, y'hear?

Monday, November 12, 2012

RSA Distributed Credential Protection: Solving the Wrong Problem?

Recently, RSA Security announced RSA Distributed Credential Protection (DCP).  I’ve read the literature, sat through a presentation by an RSA sales representative, and watched the YouTube videos. And, most of all, I have formed an opinion.

Being a crypto geek of sorts, I’ll be the first to admit that this seems like a really cool and interesting application of secret splitting.  But, as much as RSA makes it sound like the most innovative thing since sliced bread, I believe that it is fundamentally a solution to the wrong security problem. Let’s have a look at why.

As I’ve written many times, security is fundamentally about ensuring trust and managing risk. When attempting to lower risk, there is always a cost / benefit balance that needs to be studied. Just as one would not spend $10,000 on a home safe only to store $1000 in it, IT is not going to spend six figures on a solution that will not reduce the perceived cost of the risk by at least that much. (Note that while I am not certain of the pricing of RSA DCP, it is not farfetched to think that an enterprise rollout would be near the six-figure range.)

RSA advertises DCP as transparent to the end users, and so it is. However, that is not the only thing that is important here. Another major factor, which only came out during the live presentation by RSA, is that any application wishing to take advantage of DCP needs to be changed to adapt to the DCP API. This means, of course, that if you have N applications all authenticating users against an SQL database (or perhaps an LDAP directory store, if the API works with that), all N of those applications need to be changed. If you fail to change one application, then you still need the users’ credentials stored in a single data store where DCP is not applied, and consequently they remain unprotected. So take the cost of licensing the RSA DCP software, add to it the cost of each of your N applications integrating the DCP API, and you will have something closer to the total cost of deployment. Of course, the operational costs are also likely to increase somewhat as well. Whereas before you had but a single data store for said credentials, now you have two. The end result is that the total cost to incorporate RSA DCP into your environment is likely to exceed the six-figure level even if the software licensing costs are nowhere near that amount.

Well, still, that might be OK, right? After all, if the perceived benefits greatly exceed the total costs of mitigation, we still have a security win.

So what benefits does RSA DCP bring to the enterprise? According to the RSA press release as well as this YouTube video, the threat that RSA is trying to prevent is the “smash-and-grab” of credentials by an attacker. Specifically, DCP is designed to make it more difficult for an attacker who has infiltrated your company network and has managed to get direct access to your database server to obtain credentials (either plaintext or hashes).  DCP also would likely mitigate a rogue DBA doing a “smash-and-grab” of your company’s credential data as well, as long as care was taken to provide separation of duties and not give a single administrator a DBA role on both DCP servers.

So we still need to answer this question: Is this a common way for an attacker to gather user credentials?  In my opinion, it is not. By far, the biggest attack vector for adversaries stealing credential material is via SQL (or possibly LDAP) injection attacks. Will DCP do anything to mitigate SQLi attacks? The answer would appear to be “no” (at least according to the RSA sales rep that we talked to). In fact, given that one has to bolt new DCP API code into one’s application to use DCP, there is a chance that new SQLi vulnerabilities may be introduced as developers change the application code.

So is there a place where using RSA DCP would make sense? I believe so, but I think it is a niche market rather than the broad market RSA Security would like it to be.  RSA DCP could be very valuable where you have extremely high-value targets (credentials or otherwise) that are difficult to replace. The perfect example that comes to my mind is protection of the RSA Security SecurID seeds. Compromise of those SecurID seeds required RSA to replace all the hard-token SecurID devices.  In fact, it is not unreasonable to speculate that this product came directly out of researching ways to protect those high-value credentials from the smash-and-grab type of direct attacks resulting from that breach. If RSA Security wishes to broaden the market for their new DCP product, I believe that the best approach is for them to integrate DCP seamlessly with their other products, starting with RSA Access Manager. If you are going to make believers of us security folk, you first have to be willing to eat your own dog food.

However, in the meantime, for your regular user passwords, salting with a sufficiently long random salt, enforcing password complexity rules when users select their passwords, and enforcing account lockout are likely to be sufficient protection for your customer passwords.  If doing those things is not sufficient, you seriously need to consider whether passwords are a strong enough form of authentication for your users.
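To make the salting advice above concrete, here is a minimal sketch in Python's standard library. It is an illustration, not a vetted policy: the 16-byte salt length, the PBKDF2-HMAC-SHA256 choice, and the iteration count are all assumptions for the example.

```python
import hashlib
import hmac
import os

# Illustrative work factor -- tune for your own hardware and threat model.
ITERATIONS = 100_000

def hash_password(password: str, salt: bytes) -> bytes:
    """Derive a salted hash of the password using PBKDF2-HMAC-SHA256."""
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)

def enroll(password: str) -> tuple:
    """Generate a fresh random salt per user and store it alongside the hash."""
    salt = os.urandom(16)  # a sufficiently long random salt, unique per user
    return salt, hash_password(password, salt)

def verify(password: str, salt: bytes, stored: bytes) -> bool:
    """Recompute with the stored salt; compare in constant time."""
    return hmac.compare_digest(hash_password(password, salt), stored)

salt, stored = enroll("correct horse battery staple")
print(verify("correct horse battery staple", salt, stored))  # True
print(verify("wrong guess", salt, stored))                   # False
```

Because each user gets a unique random salt, identical passwords do not produce identical stored hashes, which is exactly what defeats precomputed rainbow-table attacks after a smash-and-grab.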

Note that the views expressed herein are wholly my own and do not represent those of my company, of OWASP, nor any other organizations with whom I am associated.

Regards,

-kevin

Friday, January 20, 2012

USACM Policy Statement on SOPA and PIPA

For now, the immediate battle seems to have been won, but you can be sure that the MPAA and their cohorts will be back soon.

However, I wanted to point everyone to this USACM blog post that provides their policy statement regarding SOPA and PIPA. If you don't want to read their full policy statement, I would at least encourage you to take a quick look at their very approachable (by non-Geeks ;-) "Analysis of SOPA's impact on DNS and DNSSEC".

Friday, January 6, 2012

Misunderstanding Trust

Background

Last July, I blogged about “Understanding Trust”, in which I attempted to describe several properties of trust. Because I thought that most of these properties were obvious, I was somewhat surprised to see someone with an interest in security authoritatively quote a well-known Microsoft software developer in a post on a cryptography mailing list claiming that “trust is not transitive”.

Of course I strongly disagreed. If you are interested in the specific context, you can find the full text of my post in the crypto mailing list archives. However, the research that I did and this specific post made me aware that there are still several software and security engineers who misunderstand trust. So I decided that perhaps I should attempt to clear up this misunderstanding.

Is Trust Transitive or Isn't It?

The post to the cryptography mailing list that I attempted to refute started out by citing Microsoft developer, Peter Biddle, stating “More fundamentally, as Peter Biddle points out, trust isn't transitive”.
So, before writing a rebuttal to his response, I thought it would be a good idea to track down the source of Peter Biddle's comments. I eventually found the source in Peter Biddle's blog post titled “Trust Isn’t Transitive (or, 'Someone fired a gun in an airplane cockpit, and it was probably the pilot')”.

I, and I think most security pundits, really believe that Peter Biddle is wrong about trust not being transitive. If you read carefully through Peter Biddle's blog on this topic, you will see (as Keith Irwin so aptly pointed out in a reply to the Randombit.net cryptography mailing list) that Biddle is mixing contexts here. In a nutshell, in Biddle's blog, he is making the argument that trust in two completely different contexts equates to trust in general (i.e., any context) and therefore concludes trust is not transitive.


However, trust clearly is context dependent and when considering whether or not trust is transitive, we need to consider the same context.

Specifically, if C1 and C2 are two different contexts, it does NOT logically follow from the premises
    There exists a context C1 such that “Alice trusts<C1> Bob”
    There exists a context C2, where C1 != C2, such that “Bob trusts<C2> Carol”
that
    Alice trusts<C> Carol for all contexts C
where trusts<C> means “trust in context C”.

That seems to be the way that Biddle is arguing about trust not being transitive. Well, if that's the way he's defining it, then of course it's not transitive.

If it is just that...well, that's the WRONG way to reason about transitivity in general, and trust being transitive in particular.

Transitivity is a mathematical property of a relationship R. For x, y, and z belonging to some well-defined set S, we call the relationship R transitive if:
    ( xRy ∧ yRz ) ⇒ xRz
for all x, y, and z in S. (See the Wikipedia article on transitive relations for a more thorough, but very comprehensible, treatment of this.)
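For a finite relation represented as a set of ordered pairs, this definition can be checked mechanically. The sketch below is my own illustrative example (the relation names are hypothetical, not from the mailing-list discussion):

```python
def is_transitive(relation: set) -> bool:
    """True iff (xRy and yRz) implies xRz, for every chained pair in the relation."""
    return all((x, z) in relation
               for (x, y1) in relation
               for (y2, z) in relation
               if y1 == y2)

# "is an ancestor of" is transitive; "is a parent of" is not.
ancestor = {("Alice", "Bob"), ("Bob", "Carol"), ("Alice", "Carol")}
parent = {("Alice", "Bob"), ("Bob", "Carol")}
print(is_transitive(ancestor))  # True
print(is_transitive(parent))    # False: ("Alice", "Carol") is missing
```

Note that one relation over a set being transitive says nothing about a *different* relation over the same set, which is precisely the distinction at issue in Biddle's argument.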

However, in Biddle's blog where he gives his examples, in all the examples that he mentions he is talking about two different contexts (e.g., flying planes and handling firearms, or working on cars and taking care of kids).

That is, Biddle is really discussing two different relationships
    trust<flying planes>
and
    trust<handling firearms>
and what he is then trying to conclude is that
    ( x trust<flying planes> y ) AND ( y trust<handling firearms> z ) IMPLIES ( x trust<C> z )

for any context C. Well, duh! If you make a fallacious straw-man argument about trust being transitive in this manner, of course your conclusion is going to be that "trust is NOT transitive". But you would also, IMHO, be wrong. If we stick to a specific context / attribute, however, then I think you will find the logic concludes that trust is transitive. (But, as I'll show later, it's not really quite that simple.)
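Biddle-style context mixing can be made concrete with a toy sketch. Everything below (the names, the contexts, the `trusts` function) is my own hypothetical illustration: each trust link is tagged with a context, and chaining is only permitted through links in the *same* context.

```python
def trusts(a, context, b, trust, seen=None):
    """Direct or chained trust -- chains only through links tagged with the SAME context."""
    seen = seen or set()
    if (a, context, b) in trust:
        return True
    for (x, c, mid) in trust:
        if x == a and c == context and mid not in seen:
            if trusts(mid, context, b, trust, seen | {a}):
                return True
    return False

# Toy model: trust is a set of (truster, context, trustee) triples.
trust = {
    ("Passengers", "flying planes", "Pilot"),
    ("Pilot", "flying planes", "Copilot"),
    ("Pilot", "handling firearms", "Armorer"),
}

# Within one context, trust chains transitively:
print(trusts("Passengers", "flying planes", "Copilot", trust))      # True
# Across two different contexts, it does not chain at all:
print(trusts("Passengers", "handling firearms", "Armorer", trust))  # False
```

The second query is exactly the cross-context conclusion Biddle's argument attacks, and in a context-aware model it simply never arises.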



Here's a really nutty case restricted to a specific context that I hope will make the point. Let's conjecture that both
    Passengers trust<flying planes> Pilot P
and
    Pilot P trust<flying planes> Chimpanzees
are true. (That is, “passengers trust some specific pilot P in flying planes” and “some (same) specific pilot P trusts chimpanzees in flying planes.”) So, some pilot P brings his trusted chimpanzee into the cockpit and, shortly after takeoff, decides to take a little nap, so he hands the controls over to his chimp pal. And all this occurs unbeknownst to the passengers. So what do we conclude? Well, logic dictates that, based on the premises, we may conclude:
    Passengers trust<flying planes> Chimpanzees

But wait! That's absurd, you say. Well, perhaps. But then again, whether the passengers know it or not, the chimp who is supposedly flying the plane is pretty much holding the lives of the passengers in his hands (or is that paws?).

On one hand, these passengers are literally (unbeknownst to them) trusting that chimp to safely fly that plane. (Of course, on the other hand, if there were a dozen parachutes on the plane, there would be a blood bath to see who would get them. ;-)

Now let's make a little change in the premise. Let's substitute 'Auto Pilot System' for 'Chimpanzees'.

The conclusion is now:
    Passengers trust<flying planes> Auto Pilot System

All I've done is exchange one symbol (Chimpanzee) for another (Auto Pilot System), but all of a sudden most of us feel a whole lot better.

So what does that tell us about 'trust'? Well, for one, the human concept of trust is much more complex than the simplistic, quantifiable mathematical property we have been trying to model it as thus far. And herein lies a big problem in security. Why? Because the software systems that we construct can in no way approach the complexity of all these nuances. (Not that it matters a whole lot. History has shown that we can't even get the simpler model correct, but I digress.)

But Wait, There's More

In the post that I responded to, where the poster was arguing that trust isn't transitive, they continued with this example:
When CAs [Certificate Authorities] get in the habit of delegating their power, that process is at risk of being bypassed and in any case starts to happen much less transparently. There are plenty of cases in the real world where someone is trusted with the power to take an action, but not automatically trusted with the power to delegate that power to others without external oversight. And that makes sense, because trust isn't transitive.

This statement makes sense, but NOT because 'trust isn't transitive'. Here the mistake in reasoning is not in trying to equate two different contexts. Rather, the statement makes sense because of another aspect of trust that I have discussed before in my “Understanding Trust” blog post. Specifically,
    Trust is not binary.

Trust is not black or white; it is shades of gray. As humans, for a given context, we "assign" more trust to some and less to others. This "level of trust" is largely based on our perception of experience and reputation, the latter of which we sometimes try to model in reputation-based systems.

An example: unfortunately, you need brain surgery. (If you are reading this blog, that should be proof enough. I rest my case. ;-) You have two surgeons to choose from:
    Surgeon #1: 10 years of experience and over 300 operations.
    Surgeon #2: 1 year of experience and 6 operations.

All other things being equal, who you gonna choose? Surgeon #1, right? (Well, unless in those 300 operations, s/he has had 250 malpractice results. ;-) And at least by comparison, you probably do NOT trust Surgeon #2.

So, with that in mind, let's get back to the transitivity part:
    You trust<brain surgery> Surgeon #1
    Surgeon #1 trust<brain surgery> Surgeon #2
so, obviously,
    You trust<brain surgery> Surgeon #2.

Whoa! Wait a minute. Didn't we just say that we did NOT trust Surgeon #2? Yep!

So what went wrong here? Well, what went wrong is that we were assuming that trust behaves as a binary relationship... that I either have complete trust or zero trust. But trust is not binary. It is shades of gray. That means that to more accurately model trust in the real world, we need some property of the relationship that indicates a level of trust, rather than trust just being T/F. So we need that in addition to a context.

So now we see we need (at least) something like:
    trust<level, context>
to model trust. Where before we just were (implicitly) using something like
    trust<{T,F}, context>
(which allowed us to model only complete trust or no trust), we find we now need something more like:
    trust<[0,1], context>

That is, we model level as a real number in the range 0 to 1, inclusive.
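One naive way to realize trust<[0,1], context> in code (a toy model of my own, not anything from the mailing-list discussion) is to attenuate trust multiplicatively along a chain, so that derived trust is never stronger than its weakest link. The numeric levels below are made-up values for the two surgeons:

```python
# Toy model: direct[(truster, trustee)] = trust level in [0, 1],
# all within one fixed context (brain surgery). Levels are invented.
direct = {
    ("You", "Surgeon1"): 0.9,       # 10 years, 300+ operations
    ("Surgeon1", "Surgeon2"): 0.4,  # 1 year, 6 operations
}

def derived_trust(a: str, b: str, direct: dict) -> float:
    """Attenuate trust multiplicatively along the best chain from a to b."""
    if (a, b) in direct:
        return direct[(a, b)]
    best = 0.0
    for (x, mid), level in direct.items():
        if x == a:
            best = max(best, level * derived_trust(mid, b, direct))
    return best

# Your derived trust in Surgeon #2 is weaker than either direct link:
print(round(derived_trust("You", "Surgeon2", direct), 2))  # 0.36
```

Under this rule trust still "flows" transitively, but it decays with every hop, which matches the intuition that you do not trust Surgeon #2 nearly as much as Surgeon #1 does.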

Cryptographer and software engineer Ben Laurie pointed out that trust modeled in this way is very similar to KeyMan, a piece of software that he and Matthew Byng-Maddick developed back in 2002 to facilitate the management of keys, certificates, and signatures used in signing software in a distributed and exportable network of trust.

So... we're done now, right? Well, not so fast Sparky. There are other important properties of trust that I already covered in my “Understanding Trust” blog post last July. If you have not already done so, I would encourage you to go back and read it.

Recasting Trust

The term “trust” is overloaded with several meanings and therefore causes a lot of confusion. On the Randombit.net crypto mailing list, Marsh Ray suggested that we use the term “relies on”, an idea he attributed to his former colleague Mark S. Miller.

I think in general, this is a great idea. If we say that “A relies on B” and “B relies on C”, then it is intuitively obvious that “A relies on C”, and hence transitivity immediately follows.

Using “relies on” works in many situations when we normally might use the word “trust” as a verb. I for one intend on starting to use it much more often than I do, because you have no idea how many times I almost accidentally made an embarrassing typo and misspelled “trust” as “tryst”. But perhaps that's the true hidden cryptographic meaning of cryptographers using Alice, Bob, and Carol in their discussions. As with many things cryptographic, maybe there's more going on there than is apparent. (I'll kindly spare you the obvious pun in this case.)

Regards,
-kevin
P.S.- I promise I will try to be a little more consistent with my blogging in 2012. (Did I just make a New Year's resolution? ;-) But thanks to all of you who have been faithful in reading and haven't completely given up on me.

Sunday, August 21, 2011

Columbus OWASP presentation posted

On Thursday, 2011/08/18, I made a presentation about OWASP ESAPI and my thoughts about it to the Columbus, OH local OWASP chapter.  Bill Sempf, one of the chapter leaders, was kind enough to put the slides up to make them available to everyone.

If you are interested in what I presented, the slides are up on the main OWASP wiki, at:
https://www.owasp.org/index.php/File:OWASP_ESAPI-2011.ppt




Wednesday, July 20, 2011

Understanding Trust

It's often been said that Confidentiality, Integrity, and Availability, the so-called CIA triad, are the core principles of information security. However, I want to examine even something more fundamental than these principles; I want to look at the goals of information security. And not just goals such as preventing unauthorized access, disclosure, disruption of use, etc. which really are just extensions of the CIA triad, but the core, essential goals that are at information security's foundation.

At its core, information security is largely about the two goals of “ensuring trust” and “managing risk”. We may deal with managing risk some other time, but today I want to focus on ensuring trust.

In order to ensure trust, we first must understand not only what it is, but what its properties are.

Let's start with a definition. Merriam-Webster's dictionary defines the noun, trust as:
1 a : assured reliance on the character, ability, strength, or truth of someone or something b : one in which confidence is placed
2 a : dependence on something future or contingent : hope b : reliance on future payment for property (as merchandise) delivered : credit <bought furniture on trust>
3 a : a property interest held by one person for the benefit of another b : a combination of firms or corporations formed by a legal agreement; especially : one that reduces or threatens to reduce competition
4 archaic : trustworthiness
5 a (1) : a charge or duty imposed in faith or confidence or as a condition of some relationship (2) : something committed or entrusted to one to be used or cared for in the interest of another b : responsible charge or office c : care, custody <the child committed to her trust>

One thing that I'd like to draw your attention to is that none of these definitions (with the possible exception of #3) implies that any sort of qualitative measure for trust exists. Trust is one of those things that we think that we completely understand when we talk about it, but when we explore it a bit deeper, we discover that it has some properties that are not all that intuitive, at least not in the normal manner that we refer to trust. My intent is to help us gaze at some of these properties and in so doing, see the disconnect of how we use the word “trust” in everyday usage versus how we use it in the security world. As we will discover, it is some of these very properties that make trust so difficult to ensure in the world of information security.

Properties of Trust

Trust is not commutative

If Alice trusts Bob, it does not follow that Bob trusts Alice. To any parent, this seems pretty obvious, at least when your children are of a certain age where they still trust you. You trust your doctor, but they likely do not trust you in a similar manner. You trust your bank, but they don't trust you in the same way. Trust is not symmetrical in this way.

Trust is transitive

This means “If Alice trusts Bob, and Bob trusts Carol, then Alice trusts Carol”. This one doesn't seem quite so obvious to us. That's because in the real world where we interact with other people, we don't treat trust in this manner. If I trust my wife and my wife trusts her friend, I don't usually automatically extend my trust to her friend. But therein lies the problem. In the information security context at least, trust implicitly extends in this way.

Think of the analogy where Alice, Bob, and Carol are all names of servers and there are trust relationships established by virtue of an /etc/hosts.equiv on each of these servers. When described this way, it is easier to see how trust extends to be transitive even though Alice may not be aware of Bob's trust of Carol. (In a later blog post, I hope to illustrate how this affects trust boundaries.) Alice (usually implicitly) extends her trust to Carol via Bob's trusting Carol. The big issue here is whether or not Alice's trust of Bob is warranted in the first place, and that is muddied by the fact that rational humans (at least those who are worthy of that trust) will act in a moral way so as not to abuse it. But in computer security, we need to think beyond this. The big issue in making such trust decisions involving transitivity is that Alice may not be aware of any of the trust relationships that Bob has with any other parties. For example, if Alice knows Carol to be untrustworthy, she may rethink her position on trusting Bob. In other words, Alice's trust in Bob may be misplaced. The bottom line is that there's little that Alice can do to tell, except to go on Bob's known reputation. Where this really gets sticky, of course, is that it can extend indefinitely. Alice trusts Bob, Bob trusts Carol, Carol trusts David, David trusts Emily, so... transitivity implies that Alice trusts Emily. We can quickly get lost. Of all the aspects of trust, I think that it is this disconnect that humans have with trust being transitive that makes securing computers so difficult. We simply don't think this way intuitively when it comes to securing our systems.
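The hosts.equiv analogy can be sketched as reachability in a directed graph. The host names and edges below are hypothetical; the implicit trust relationships are just the transitive closure of the direct edges:

```python
from itertools import product

# Hypothetical direct trust edges, as if each came from a host's /etc/hosts.equiv.
direct = {("alice", "bob"), ("bob", "carol"),
          ("carol", "david"), ("david", "emily")}

def transitive_closure(edges: set) -> set:
    """Keep joining chained edges until nothing new appears (Warshall-style)."""
    closure = set(edges)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(closure.copy(), repeat=2):
            if b == c and (a, d) not in closure:
                closure.add((a, d))
                changed = True
    return closure

implicit = transitive_closure(direct)
print(("alice", "emily") in implicit)  # True: the chain extends indefinitely
```

Note that alice never configured anything about emily; the implicit edge exists purely because of relationships alice may not even know about, which is exactly the danger described above.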
Let's try to clarify this with an example. Let's say that you (Alice) trust a merchant (Bob's Bakery) with your credit card number. You do this by paying for your weekly doughnut supply with your credit card (online or in person; it really doesn't matter). Bob's Bakery relies on a credit agency (Carol's Credit Check) to do credit checks on its customers' credit cards before accepting each payment.
No problem so far, right? Now you (Alice) may or may not be aware that Bob's Bakery does a credit check. As long as they accept your payment and you get your sweets, you probably don't care. You probably are not consciously thinking about any credit agency or credit card payment service. Most people are not even aware of the connection. Regardless, you are implicitly extending your trust to this credit agency when you complete such a transaction.
No problem, still, you say. My credit card issuer will cover any fraudulent charges. But note that is a red herring. That is why you trust your bank and your credit card, not why you trust Bob's Bakery or Carol's Credit Check.
So let's change up the scenario a bit. Let's assume that Carol's Credit Check is really run by organized crime and the purpose of its existence is to perpetrate credit card fraud. Does this change your trust of Carol's Credit Check (assuming you knew of the relationship with Bob's Bakery)? Probably. Does it change your trust in Bob's Bakery? It may, if Bob's Bakery were aware that Carol's Credit Check was run by crooks. But why does your trust change? Nothing has changed except your awareness / perception. In abstract terms, it's still the same: Alice trusts Bob, Bob trusts Carol, therefore Alice trusts Carol.

Trust is not binary

Trust is a matter of degree; it is gray scale, not something that is merely black or white, on or off. If this were not true, there would be no way for us to state that we trust one party more than some other party. If trust were only a binary yes/no, and Alice separately trusted Bob and Carol, Alice would never have any reason to trust Bob more than she trusts Carol. Obviously that is not how we act in the real world. Oftentimes, even when we do not personally know someone, we grant different levels of trust based solely on reputation.

Trust is context dependent

Alice may trust Bob in some situations, but not in other situations. While this is true in computer security, it is a rather difficult property for us to model. But there is no doubt that we believe this in the real world. In real life, Alice may trust Bob to cut her hair or work on her car, but not to babysit her kids, and almost certainly not to perform brain surgery on her (even if Bob has taken the time to read Brain Surgery for Dummies).
In real life, we humans are quite adept at switching between these various contexts. So much so, that we hardly give it conscious thought. However, when we try to codify these contexts into a programmatic information model, we realize that this is a very difficult concept to formalize.

Trust is not constant over time

Alice may trust Bob at time t, but not at some later time t+Δt. In real life, Bob may do something to screw up. For instance, Alice may trust Bob while she is dating him, but after she sees Bob chasing after Carol, not so much. In computer security, we have similar analogies. For instance, we trust a session ID until after it has expired, or we trust a certificate until it has expired. It is at the basis of why we recommend that passwords expire after a time. This property of trust is one reason that authentication protocols use things like expiration times and/or freshness indicators.

Final Observations

These last two properties (trust is context dependent, trust is not constant over time) mean that when we discuss ensuring trust, we must do so within a specific context and time-frame. (Sometimes the time-frame is explicit and sometimes it is implied.) Generally, in computer security, we should strive to make all trust relationships explicit and leave nothing to chance or misinterpretation. That's one key step in defining a trust model, which I hope to discuss in a future blog post. In the meantime, I hope you will keep in mind some of the properties we discussed today when you are trying to secure your systems.

And one more thing: my apologies for not being consistent as of late in posting to this blog. Not only have I been busy with ESAPI for C++, but I've also been at a loss for interesting topics. So if you have something that you'd like to see me ramble on about, please shoot me a note. (But don't be surprised if I recommend that you get some serious counseling if you do so. ;-) Thanks!
-kevin