Sunday, August 21, 2011

Columbus OWASP presentation posted

On Thursday, 2011/08/18, I made a presentation about OWASP ESAPI and my thoughts about it to the Columbus, OH local OWASP chapter.  Bill Sempf, one of the chapter leaders, was kind enough to put the slides up to make them available to everyone.

If you are interested in what I presented, the slides are up on the main OWASP wiki, at:
https://www.owasp.org/index.php/File:OWASP_ESAPI-2011.ppt




Wednesday, July 20, 2011

Understanding Trust

It's often been said that Confidentiality, Integrity, and Availability, the so-called CIA triad, are the core principles of information security. However, I want to examine even something more fundamental than these principles; I want to look at the goals of information security. And not just goals such as preventing unauthorized access, disclosure, disruption of use, etc. which really are just extensions of the CIA triad, but the core, essential goals that are at information security's foundation.

At its core, information security is largely about the two goals of “ensuring trust” and “managing risk”. We may deal with managing risk some other time, but today I want to focus on ensuring trust.

In order to ensure trust, we first must understand not only what it is, but what its properties are.

Let's start with a definition. Merriam-Webster's dictionary defines the noun, trust as:
1 a : assured reliance on the character, ability, strength, or truth of someone or something b : one in which confidence is placed
2 a : dependence on something future or contingent : hope b : reliance on future payment for property (as merchandise) delivered : credit <bought furniture on trust>
3 a : a property interest held by one person for the benefit of another b : a combination of firms or corporations formed by a legal agreement; especially : one that reduces or threatens to reduce competition
4 archaic : trustworthiness
5 a (1) : a charge or duty imposed in faith or confidence or as a condition of some relationship (2) : something committed or entrusted to one to be used or cared for in the interest of another b : responsible charge or office c : care, custody <the child committed to her trust>

One thing that I'd like to draw your attention to is that none of these definitions (with the possible exception of #3) implies that any sort of qualitative measure for trust exists. Trust is one of those things that we think we completely understand when we talk about it, but when we explore it a bit deeper, we discover that it has some properties that are not all that intuitive, at least not in the way we normally refer to trust. My intent is to examine some of these properties and, in so doing, expose the disconnect between how we use the word “trust” in everyday conversation and how we use it in the security world. As we will discover, it is some of these very properties that make trust so difficult to ensure in the world of information security.

Properties of Trust

Trust is not commutative

If Alice trusts Bob, it does not follow that Bob trusts Alice. To any parent, this seems pretty obvious, at least when your children are of a certain age where they still trust you. You trust your doctor, but they likely do not trust you in a similar manner. You trust your bank, but they don't trust you in the same way. Trust is not symmetrical in this way.

Trust is transitive

This means “If Alice trusts Bob, and Bob trusts Carol, then Alice trusts Carol”. This one doesn't seem quite so obvious to us. That's because in the real world where we interact with other people, we don't treat trust in this manner. If I trust my wife and my wife trusts her friend, I don't usually automatically extend my trust to her friend. But therein lies the problem. In the information security context at least, trust implicitly extends in this way.

Think of the analogy where Alice, Bob, and Carol are all names of servers and there are trust relationships established by virtue of an /etc/hosts.equiv on each of these servers. When described this way, it is easier to see how trust extends to be transitive even though Alice may not be aware of Bob's trust of Carol. (In a later blog post, I hope to illustrate how this affects trust boundaries.) Alice (usually implicitly) extends her trust to Carol via Bob's trusting Carol. The big issue here is whether or not Alice's trust of Bob is warranted in the first place, and that is muddied by the fact that rational humans (at least those who are worthy of that trust) will act in a moral way so as not to abuse it. But in computer security, we need to think beyond this.

The big issue in making such trust decisions involving transitivity is that Alice may not be aware of any of the trust relationships that Bob has with other parties. For example, if Alice knew Carol to be untrustworthy, she might rethink her position on trusting Bob. In other words, Alice's trust in Bob may be misplaced, and there is little that Alice can do to tell, except to go on Bob's known reputation. Where this really gets sticky, of course, is that it can extend indefinitely. Alice trusts Bob, Bob trusts Carol, Carol trusts David, David trusts Emily, so... transitivity implies that Alice trusts Emily. We can quickly get lost. Of all the aspects of trust, I think it is this disconnect that humans have with trust being transitive that makes securing computers so difficult. We simply don't think this way intuitively when it comes to securing our systems.
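
To make this concrete, here is a small sketch (in Java, with purely hypothetical host names) that computes everything a host ends up implicitly trusting once you follow hosts.equiv-style trust relationships transitively. The point is simply that Alice only ever configured trust in Bob, yet the reachable set keeps growing.

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    public class TransitiveTrust {

        // Direct trust edges, e.g., as implied by each host's /etc/hosts.equiv.
        private final Map<String, Set<String>> directTrust = new HashMap<String, Set<String>>();

        void trusts(String truster, String trusted) {
            Set<String> set = directTrust.get(truster);
            if (set == null) {
                set = new HashSet<String>();
                directTrust.put(truster, set);
            }
            set.add(trusted);
        }

        // Everything reachable by following trust edges is implicitly trusted.
        Set<String> implicitlyTrusted(String start) {
            Set<String> seen = new HashSet<String>();
            Deque<String> toVisit = new ArrayDeque<String>();
            toVisit.push(start);
            while (!toVisit.isEmpty()) {
                Set<String> next = directTrust.get(toVisit.pop());
                if (next == null) continue;
                for (String host : next) {
                    if (seen.add(host)) {
                        toVisit.push(host);
                    }
                }
            }
            return seen;
        }

        public static void main(String[] args) {
            TransitiveTrust t = new TransitiveTrust();
            t.trusts("alice", "bob");
            t.trusts("bob", "carol");
            t.trusts("carol", "david");
            t.trusts("david", "emily");
            // Alice only configured trust in Bob, but she ends up implicitly
            // trusting Carol, David, and Emily as well.
            System.out.println(t.implicitlyTrusted("alice"));
        }
    }
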
Let's try to clarify this with an example. Let's say that you (Alice) trust a merchant (Bob's Bakery) with your credit card number. You do this by paying for your weekly doughnut supply with your credit card (online or in person, it really doesn't matter). Bob's Bakery relies on a credit agency (Carol's Credit Check) to do credit checks on its customers' credit cards before accepting each payment.
No problem so far, right? Now you (Alice) may or may not be aware that Bob's Bakery does a credit check. As long as they accept your payment and you get your sweets, you probably don't care. You probably are not consciously thinking about any credit agency or credit card payment service. Most people are not even aware of the connection. Regardless, you are implicitly extending your trust to this credit agency when you complete such a transaction.
No problem, still, you say. My credit card issuer will cover any fraudulent charges. But note that this is a red herring: it is why you trust your bank and your credit card issuer, not why you trust Bob's Bakery or Carol's Credit Check.
So let's change up the scenario a bit. Let's assume that Carol's Credit Check is really run by organized crime and the purpose of their existence is to perpetrate credit card fraud. Does this change your trust of Carol's Credit Check (assuming you knew of the relationship with Bob's Bakery)? Probably. Does it change your trust in Bob's Bakery? It may, if Bob's Bakery were aware that Carol's Credit Check was run by crooks. But why does your trust change? Nothing has changed except your awareness / perception. In abstract terms, it's still the same: Alice trusts Bob, Bob trusts Carol, therefore Alice trusts Carol.

Trust is not binary

Trust is a matter of degree; it is gray scale, not something that is merely black or white, on or off. If this were not true, there would be no way for us to state that we trust one party more than some other party. If trust were only a binary yes/no, and Alice separately trusted Bob and Carol, Alice would never have any reason to trust Bob more than she trusts Carol. Obviously that is not how we act in the real world. Oftentimes, even when we do not personally know someone, we grant different levels of trust based solely on reputation.

Trust is context dependent

Alice may trust Bob in some situations, but not in other situations. While this is true in computer security, it is a rather difficult property for us to model. But there is no doubt that we believe this in the real world. In real life, Alice may trust Bob to cut her hair or work on her car, but not to babysit her kids, and almost certainly not to perform brain surgery on her (even if Bob has taken the time to read Brain Surgery for Dummies).
In real life, we humans are quite adept at switching between these various contexts. So much so, that we hardly give it conscious thought. However, when we try to codify these contexts into a programmatic information model, we realize that this is a very difficult concept to formalize.

Trust is not constant over time

Alice may trust Bob at time t, but not at some later time t+Δt. In real life, Bob may do something to screw up. For instance, Alice may trust Bob while she is dating him, but after she sees Bob chasing after Carol, not so much. In computer security, we have similar analogies. For instance, we trust a session ID until after it has expired, or we trust a certificate until it has expired. It is at the basis of why we recommend that passwords expire after a time. This property of trust is one reason that authentication protocols use things like expiration times and/or freshness indicators.

Final Observations

These last two properties (trust is context dependent, trust is not constant over time) mean that when we discuss ensuring trust, we must do so within a specific context and time-frame. (Sometimes the time-frame is explicit and sometimes it is implied.) Generally, in computer security, we should strive to make all trust relationships explicit and leave nothing to chance or misinterpretation. That's one key step in defining a trust model, which I hope to discuss in a future blog post. In the meantime, I hope you will keep in mind some of the properties we discussed today when you are trying to secure your systems.

And one more thing: my apologies for not being consistent as of late in posting to this blog. Not only have I been busy with ESAPI for C++, but I've also been at a loss for interesting topics. So if you have something that you'd like to see me ramble on about, please shoot me a note. (But don't be surprised if I recommend that you get some serious counseling if you do so. ;-) Thanks!
-kevin

Wednesday, April 6, 2011

Please Stop Already—Links in HTML Emails Considered E-V-I-L

Phishy or Not?
OK, what's wrong with this picture... you receive an unsolicited email from a financial institution with whom you have a prior existing business relationship. The email sounds very official and has none of the usual sloppy spelling or grammatical errors that are the usual tip-offs to most phishing attempts. (Thank God that most of the idiots constructing these phishing attempts to exploit the naïve don't know how to use the spelling and grammar checkers in their word processing software! ;-)

However, the email is also written in HTML1 and it obfuscates the URL they wish you to visit with the ubiquitous “Click here to activate your account” link. Furthermore, it states that once you login with your user name and password, you will be requested to enter your Social Security Number. So the email is starting to smell somewhat phishy to me.

This particular email was purportedly from Morgan Stanley Smith Barney. (I'm sorry, but they deserve to be called out and shamed for this!) Naturally, I checked out the 'Received' email headers and noted that they did in fact come from Citigroup, who indeed now owns MSSB. (I had to run a whois on a few of the domains I wasn't sure about, but in the end all the domains listed in the Received headers checked out as being associated with Citigroup.)

Still, I wasn't about to just hand over my SSN without being a bit more cautious. After all, one never knows if Citigroup recently introduced a misconfigured open mail relay somewhere that some phisher was trying to exploit. So I sent the email with all email headers to our company spam police. After several hours, they got back to me and assured me that the email was legitimate and that it had to do with the conversion of Qwest shares to CenturyLink shares, which I thought it might and was why I didn't completely dismiss the email outright.

Do As We Say, Not As We Do
So here is the issue. Financial institutions of all kinds—whether it be banks, broker agencies, credit card companies, etc.—are constantly reminding their patrons to “not click on links from suspicious emails that appear to come from us”. If anything, they tell you to manually enter their URL into your browser's Location / Address bar. They tell us not to provide our SSN or user names and passwords to such sites. But then they turn right around and send their own customers HTML emails with obfuscated links asking them to do the very thing that they spent several earlier emails and newsletters repeatedly educating their users not to do.

I ask you, is this practice not insane? While I'm focusing on financial institutions here because of the MSSB email from them today, they are by no means the only culprit. I've seen this very same practice from Microsoft in some of their emails. In fact, I'm sure that I've seen one or two security-related emails (in HTML naturally, even though I signed up for the plain text variety) where in one paragraph they warn you about clicking on links provided in suspicious emails and a few paragraphs later they advertise a link where you can click to download some free Microsoft software like Silverlight or Security Essentials or Windows Defender. And often those links are redirected through some 3rd party mass email distribution company—just for that extra secure feeling I guess.

Argh!!! Please stop it already! Don't the people composing these emails realize that they are reinforcing the very practice that they claim to be trying to educate users to stop doing? Oh, the irony of it all. Are they trying to lampoon themselves?

So please don't be telling me how we ought to require that regular users be better educated about security matters so as to make the rest of us all safer if this is the best that we have to offer in user education. Some have gone as far as suggesting that ordinary citizens be required to get some sort of license before they are allowed to connect to the Internet. Yeah, right. Like we're doing such an excellent job educating folks now. This is like politicians talking out of both sides of their mouth at once. We can't have it both ways, so let's get our own house in order first before we start pointing out how clueless everyone else is. Perhaps, just perhaps, it's because we've been sending them mixed messages.

Let me hear your rants and/or reactions on this topic and thanks for your time.
-kevin
_________________
1. Eye candy is way more important than security, because after all, who wouldn't leave their bank or broker if they didn't send out slick looking emails? Sigh...

Sunday, April 3, 2011

Mobile Devices: Are We Repeating History?


"Those who cannot remember the past are condemned to repeat it."        – George Santayana

During the past month or so, I have moved into the “modern” era. I stopped using my 3 year old cell phone and started using a smart phone (Droid X). During the same time, I also purchased a Barnes & Noble NookColor eBook reader. Of course, being the security geek that I am, I immediately rooted1 each of them to see what makes them tick as well as to make them more useful to me. Here are my initial security thoughts on this technology.

Since both of these mobile devices are Android based, I can't speak with any degree of experience for other devices based on Apple's iOS or Microsoft Windows, but I would be surprised if things are that much different there either.

History 101
In the early days of personal computing (and I'm collectively including all personal computers here, everything from Commodore 64 to Apple Macs, not just IBM PCs and their clones), none of these were originally designed with the concept of really being multi-user devices. Instead, it was assumed that these early computers were either used only by a single person, or perhaps shared by the entire family with the assumption that there was no need for privacy or separation.

Jump ahead about ten years, to the mid-1990s, and by then all the major vendors who were left (basically Microsoft and Apple) realized that this was the wrong assumption. So, for example, we see in Windows 95, the concept of a login screen (if one desired to use it), but under the hood, still no real integrated multi-user concept—a legacy that lived on until Windows XP. However, by then, the “damage” was done; parents had been “trained” that only a single login was required and for their children's PCs, they would just provide one login for their son or daughter. More often than not, that user account was one that had an administrator role.

Lessons from the Past
The result, at least in most of the households that I observed, was chaos. Teenagers—who were frequently more technically adept than their parents, but lacking the general wisdom of adults—would download and install various malware-infected games, P2P software, etc. In short, personal computing platforms of many households were so mired in malware as to be rendered completely unusable.

During the ten year period from about 1995 up until 2005, I assisted numerous friends with malware removal. More often than not, the attack vector was something that a teenage son or daughter had deliberately downloaded to share music, pictures, or videos with their friends. (Note: nothing magical happened in 2005, other than I switched almost exclusively to using Linux so I had an “excuse” when someone asked for assistance with the latest Windows OS. In reality, I simply got wise enough to graciously say “no, I won't fix your computer” without friends taking offense. I wonder if surgeons are ever asked to perform free appendectomies by their friends. ;-)

But I think that the lesson learned by Microsoft and Apple was that compartmentalizing user data by user accounts was important, not only from a privacy perspective, but also from a security and stability perspective. Of course, Linux, growing out of UNIX roots, had the multi-user concept from the start, but in a few cases even some of the Linux distros attempted to “dumb things down” a bit (e.g., via automated login) to appeal more to casual users more familiar with Windows and MacOS environments.

The Present
Flash forward to the present. We see mobile devices—namely smart phones and tablet PCs—that are being treated by their manufacturers / distributors as single user devices.

From a vendor perspective, this makes sense... there simply is less code to develop. From a user interface perspective, it simplifies things as well. But remember that this single user approach originally seemed acceptable, for a short while at least, to the OS vendors of early personal computers. They too approached computing platforms from a single user perspective, only to realize later that it proved detrimental. In the long term, going back and having to redo things soon after they've originally been implemented improperly is always a liability because of the dreaded “backward compatibility curse” and the additional complexity required for the retrofit.

So a fair question to ask is “Is industry missing the mark with their assumption that these mobile devices will be exclusively used by a single individual?”. I'm willing to concede that for cell phones this may be a reasonable assumption, but based on how I've seen tablet PCs being used, I'd have to say the answer there is a definitive “yes, vendors have missed the mark”. My son has already used my rooted NookColor tablet and I have a friend whose entire family shares their iPad. I don't see it being that much different in other families unless those families are sufficiently well-to-do to buy each of the family members their own tablet device.

The code used to root the NookColor seems to be influenced heavily by the Motorola Xoom system, so maybe I'm jumping to conclusions here. But for both the original B&N NookColor as well as the rooted versions, there is no concept of access by different user accounts. The closest I see to a login screen for either is a 4-digit security code, and once the device is unlocked, the user is able to do anything.2 According to my friend, a similar situation exists with the iPad. It only supports one user account that connects to Apple's iTunes.

Repeating History
You might ask, why is sharing a tablet PC with your family members a problem? Well, if you are willing to risk the security of your tablet PC and all the data on it to your teenage son or daughter, it isn't. (Not to mention that a compromised tablet PC may also provide a jumping off point inside your router's firewall that allows easier access to your other computers and their data.) Most kids that I know are not too discriminating about what they might download. It's doubtful that most of them would even take the time to read through a list of permissions that an Android app is requesting, let alone fully understand the implications involved. While Apple and Google do their best to keep malware and spyware off their respective official download sites, there are other sites from which one can download iPad and Android apps that might not do this at all. And while you may never visit these sites, your kids might. (You might argue that this is not possible if your device has not been jailbroken or rooted, and you may be correct. But this is an almost trivially accomplished feat well within the technical skills of today's youth. Furthermore, returning it to the stock OS—after one downloads and installs that favorite free warez version of Angry Birds—is also fairly easy.)

The question is why does it have to be this way? Android OS not only already supports multiple user ids, but it uses them so that each of the different apps runs using a different user account. (I can't speak as to iOS as I've not yet researched it. As for Windows OS for mobile devices, I suspect that the stock OS also supports multiple user accounts under the hood, although it may not support multiple end user accounts.)

If we have learned anything over the past 30 years, it is that there are inherent dangers associated with having a user run with a single, all-powerful account. At the minimum, there should be two accounts supported and presented to end users—one an administrative account used only for installing, upgrading, and deleting apps and other systems management functions, and one a limited-user account that has no special privileges at all. Ideally, for mobile devices likely to be shared, there also ought to be separate limited user accounts as well. With multiple limited end user accounts you won't have situations where 13 year old Bobbie makes unauthorized posts of embarrassing pictures to his 16 year old sister's Facebook page. The only alternative that I see is for families not to use the automated sign-on into social networking sites. (But convenience wins over security almost every time; recall McGraw's and Felten's “dancing pigs” comment—well, at least until Bobbie posts a picture of his sister wearing her ratty bathrobe, hair up in curlers, and face covered in acne cream. ;-)

Even more importantly, consider the idyllic vision that some mobile device moguls seem to dream about, where your mobile device (usually a cell phone, but perhaps small tablets as well) contains all your credit information, enabling you to make automatic payments with your mobile device through the use of a Near Field Communication (NFC) chip. We need multiple end user accounts even more there. Surely you don't want your children to have access to Dad's credit cards simply by waving a mobile device near the POS device. True, one could protect such electronic payments with a PIN as well, but most would find this an inconvenience and would either likely disable it or choose the same PIN that they use for the device itself to avoid committing yet another PIN / password to memory.

The Not-Too-Distant Future
The good news is, I think manufacturers and sponsors of mobile devices still have time to get this right. Currently, I don't think that the target is lucrative enough to entice all the malware writers to switch gears and immediately begin targeting mobile devices rather than PCs and Macs. I predict this will change within a few years, especially if the vision of making e-payments from mobile devices comes sooner rather than later. So if mobile device manufacturers are going to do something, the time is short.

The bad news is there is absolutely no consumer outcry to entice this to happen. Nor do I see anyone in the security community discussing it. So perhaps it never will change until consumers have suffered enough from the resulting security breaches to get angry. Until then, we will have to make do with a patchwork of AV and other bolt-on security solutions.

So, the question is what should we do as security professionals? Should we call this out as an issue, or am I just a misguided old fool telling people that the sky is falling?3

You decide and let me know what you think.

-kevin
_____________

Wednesday, March 30, 2011

Signs of Broken Authentication (Part 4)


So today's post is the last of the series of “Six Signs of Broken Authentication”.

So let's review. Thus far, we have covered the following red flags:
  1. Restricting the maximum length of a password.
  2. Restricting the set of characters you are allowed to use in your password.
  3. Mapping all your password characters to digits 0 through 9 so you may enter it via an ordinary telephone keypad.
  4. No account lockout.
Today, we will be covering the last two warning signs of broken authentication:
  5. Missing or inappropriate use of SSL/TLS
  6. Password reset via poorly implemented “security questions”

Red Flag #5: Missing or Inappropriate use of SSL/TLS

You go to a web site and you notice that the login page is not using https (i.e., HTTP over “Secure Socket Layer” (SSL) or “Transport Layer Security” (TLS)). Or, for those of us who've been trained to look to see if the little lock is showing, you notice that it is conspicuously absent.

Of course, there are the obvious signs and—depending on which browser you are using, your particular browser configurations, and which browser plug-ins you may be using—the not so obvious signs that things are awry.

Let's start with the more visible ones and then we will follow up with the less obvious ones.

Note: In the ensuing discussion I will be using the term “SSL” to refer to both SSLv3 and TLSv1 unless otherwise specified. Also note that this list is not complete, but is intended to cover the most common and egregious issues. Post a reply to this blog to provide your feedback if your favorite one is missing.

The Web Site's Login Form Is Not Using SSL/TLS At All

Setting up an SSL site when the only thing that needs protecting is users' passwords seems like such a waste of time to many IT teams. And sure, there's always the argument to be made that if the rest of the site isn't using SSL at all, is there really that much to be gained? (But, that's probably not the right question to be asking to begin with; the more appropriate question is “what is your threat model?”. Only after answering that can you answer whether the rest of your site should be using SSL/TLS or not.)
One might think this is an uncommon practice and that major sites would never make such a faux pas, but until rather recently, the general login pages for Facebook did not default to using SSL.
Now this practice might have been OK in the world predating open Wi-Fi hotspots, but nowadays it is trivial for someone to snoop the ether for user name / password combinations not transported over SSL. That, combined with the fact that the average netizen (certainly not you astute folks reading this blog! :) has a tendency to reuse a small handful of passwords, makes capturing someone's password a potentially valuable exercise for those ready to exploit such things.
So while Gene Spafford's advice of “Using encryption on the Internet is the equivalent of arranging an armored car to deliver credit card information from someone living in a cardboard box to someone living on a park bench” may have been appropriate advice when LANs were over traditional Ethernet, with the advent of open Wi-Fi hotspots such advice is outdated at best.

The Web Site's Login Form Uses SSL, But the Form Is Displayed Using http

Today, it seems to be in vogue for web sites to have a link to the login form directly off their main page. This seems to be especially popular with telecommunications companies and their residential “My Account” portals; witness Qwest, Verizon, and AT&T.
Now, while these sites do post the login requests to their respective servers using https, the fact that the login forms are displayed on a non-https page is somewhat disturbing.
The issue is that a man-in-the-middle (MITM) attacker can mirror the main site, but alter the login page so that the authentication is redirected to a rogue site used to capture your password. (Depending on their sophistication, some may pass through your password to the actual intended site, while others may simply return a “login failed” indication.) Often this attack occurs using a technique known as “pharming”, whereby an attacker—usually placed conveniently near an open Wi-Fi hotspot—hijacks DNS requests for a web site. If the request is for a site that they have mirrored with a rogue copy, the hijacked DNS response directs the victim to the rogue site. The attacker may also attempt to obfuscate the URL by various means in the hope that their social engineering attack won't be noticed.
What you, as users, can do to make things safer: Most of these sites will redirect you to a more bare bones, all-SSL login form if you simply click on their “Submit” or “Login” buttons from the main (non-SSL) page. For example, if you were to click on the “Sign In” button from Qwest's main page, you would be redirected to https://www.qwest.com/MasterWebPortal/freeRange/Login_validateUserLoginAction.action, which of course is using SSL. From there, you can verify the certificate, in particular that it belongs to whom you think it should, to make sure this is the site that you intended to visit.

The Web Site's Login Form Uses SSL, But the Form Is Using Components Not Using SSL

This is similar to, but in general not quite as bad as, the above case. Here there is some component of the login page that is not using SSL whereas the rest of the page is. Generally, this is caused by the site using vanilla http to load an image, some JavaScript, or a cascading style sheet. Most modern browsers will provide some sort of warning for this (although many people have disabled this warning because they see it so often).
In theory, such a thing can still be exploited by a MITM attack, although the difficulty of this varies greatly depending on what the pages are that are loading over vanilla http, whether that page is cached in a proxy somewhere, and many other variables. This attack is usually much more difficult than simple DNS hijacking. If a proxy cache is involved and the specific page is cacheable, then sometimes a proxy cache-poisoning approach will work for an attacker. (This can be accomplished in many different ways, such as HTTP response splitting, HTTP request smuggling, DNS cache poisoning, etc. The details are beyond the scope of this blog post.)
Generally, if you see this warning in your browser, you can either choose to heed the warning or take your chances. If you decide to take your chances, you might at least wish to explore why the warning is being issued. From Firefox, you can do this by right-clicking on a page and selecting 'View Page Info'. From the 'Page Info' dialog box, select 'Media'. In the top section, right click and choose 'Select All', followed by right click 'Copy'. Then paste the copied addresses into a text editor and search through them. But caution: unless you understand HTTP, HTML, and web application security, you are better off just reporting this situation to the web master. There be dragons in these waters, so tread carefully.

The Web Site's Login Form Uses GET Rather Than POST

Login forms should submit their data using an HTTP POST, but occasionally a web developer will use an HTTP GET as the HTML form's “method”. While both work, a GET will pass the form fields as query parameters. HTTP web servers generally log all GET requests, including any query parameters that are part of the requested URL. In such cases, your user name and password will likely end up in someone's log files. Furthermore, by default these parameters will also go into your browser's history, so if you are using a web browser from a kiosk (probably a bad idea in the first place) your user name and password end up there as well.

The Web Site Uses a Dubious Certificate

[Note: This subsection could probably be called “Why Certificates and Public Key Infrastructure Do Not Work”, but that's a topic for another day. If interested, Google what CS professor and cryptographer Peter Gutmann has written about the subject.]
When a web site is configured to use SSL, the web server will be configured to use an X.509 server-side certificate. Often these certificates are not correctly configured. Generally, these misconfigurations will cause a warning in your browser. (I will not attempt to describe the wording of the warnings here because they vary greatly depending on which browser you are using and even on the browser version. However, the browser developers put these warnings there for a reason and you should generally heed them when they advise you to “run away”, the law of “dancing pigs” notwithstanding.)
Here are some of the more common certificate-related problems.
  1. Self-signed Certificate or Certificate Not Signed by Trusted CA
    Your web browser has several dozen “trusted” (by the vendor of your browser, not necessarily by you) certificates issued by various Certificate Authorities (CAs). An SSL server certificate issued by one of these trusted CAs will be trusted by your browser because it was properly signed by a trusted CA. But occasionally, in an attempt to save money (or because of general cluelessness of the IT staff administering the web site), a web site will use a self-signed (i.e., self-issued) certificate instead of one signed by a trusted CA. (A variant of this is that they will sign a certificate with one of their internal CAs that your browser does not trust.) In either case, your browser should issue a warning if you have not disabled this specific warning. (If it doesn't, it's time to switch to a different browser.)
    The problem here is that anyone can create a self-signed certificate, so unless you can trust this specific self-signed certificate (for example, by verifying this specific certificate's fingerprint beforehand and out-of-band against a trustworthy source), a MITM attack is again possible. (Not trivial mind you, but certainly possible.) If you must use such a site, you are strongly advised to choose a unique password for it. And if that site happens to be your bank..., well, then I'd advise you to find a new bank, at least until their IT staff gets it fixed.
  2. Certificate's CN Does Not Match Host Name of Web Site for Login Page
    Earlier, I mentioned Eugene Spafford's quote about how using encryption on the Internet is overkill. This never meant that SSL was useless. Indeed, in pre-open Wi-Fi times, arguably the most important purpose of SSL was the server-side authentication that your browser performed as it made an SSL connection. This server-side authentication consists of validating that the digital signature on the SSL server-side certificate is valid and was signed by one of your browser's trusted CAs, as well as ensuring that the host name on the SSL server certificate matches the host name that your browser believes it is trying to connect to. To do this, your browser compares the host name portion of the URL it is visiting to the 'CN' (Common Name) on the SSL server-side certificate. If there is a mismatch, your browser will issue a warning.
    This server-side authentication is important in preventing simple phishing attacks. Without this check, an attacker could redirect your browser to a rogue site that (say) looks like your bank's site and get you to authenticate to their site instead, thereby handing over your bank account to them. So, as users, don't get into the habit of just accepting mismatched host names on certificates or you will be setting up yourself for future phishing attacks.
  3. Site Is Using A Revoked Certificate
    Either a CA or the owner of a certificate may “revoke” a certificate because they believe that the private key associated with the public key on the certificate has been compromised. So chances are, if you encounter a revoked certificate on a web site, you are dealing with a rogue web site set up to mimic the real web site's appearance. Stay away and notify those running the site that you were originally intending to visit, as it is possible (though unlikely) that the site's legitimate owner has accidentally put the revoked certificate back up.
  4. Certificate Supports No Revocation Checking
    Your browser relies on hints in the SSL server certificate or the issuing CA's certificate to determine whether or not a certificate has been revoked. Your browser does this by examining these certificates for certain optional extensions which instruct it how and where to check for revoked certificates. (The details are again beyond this particular blog post.) Furthermore, depending on your browser and its version, this revocation checking may or may not be automatically enabled in your browser. If it isn't enabled, your browser generally won't be able to detect revoked certificates, with the exception of those revoked certificates built into the browser itself. (Check your browser's documentation for details of how to enable revocation checking for your specific browser version.)
    However, occasionally a web site will use a cheap(er) CA, and that CA might not support revocation checking. (I suspect most CAs do; perhaps even all, if they support X.509v3 certificates at all. But it certainly is a possibility. I'll leave it as an exercise for the reader to check whether all of your browser's built-in CAs support revocation checking. If any of the CAs have issued version 1 X.509 root certificates, these probably do not support revocation checking because they do not support certificate extensions.) Also, I cannot speak as to which, if any, browsers would issue a warning for such CAs. If you know, please post a comment to educate us.
Finally, you will notice that I did not include “Expired Certificate” in this list of dubious certificates. I would describe that as a “pink” flag, not a red flag. The primary reason for CAs expiring certificates to begin with is to ensure a continued revenue stream. Yes, there are some valid arguments for expiring certificates, but assuming that one takes reasonable precautions to protect the associated private keys and one is using a reasonably sized public/private key pair (1024-bits for RSA; sorry NIST!), an expiration period of 3 to 5 years should be very reasonable. But as I don't really want to get into the politics of CAs, I am not going to belabor this point. If a certificate has been expired for several years, it probably is a red flag, indicating that the IT staff is either oblivious or apathetic (as this almost always generates a huge warning in almost every browser available). Of course, it might also indicate that the site is as popular as a skunk sniffing contest and no one other than the IT staff actually visits the site. ;-)
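
If you are curious about what a given site's certificate actually says, you don't have to rely solely on browser dialogs. Here is a rough sketch (the URL is just a placeholder) that uses the standard Java JSSE classes to print the subject, issuer, expiration date, and SHA-1 fingerprint of each certificate a server presents. Note that connect() will throw an exception if the certificate fails the default validation (e.g., an untrusted self-signed certificate), which is itself informative.

    import java.net.URL;
    import java.security.MessageDigest;
    import java.security.cert.Certificate;
    import java.security.cert.X509Certificate;
    import javax.net.ssl.HttpsURLConnection;

    public class ShowServerCert {
        public static void main(String[] args) throws Exception {
            String site = (args.length > 0) ? args[0] : "https://www.example.com/";
            HttpsURLConnection conn = (HttpsURLConnection) new URL(site).openConnection();
            conn.connect();   // performs the SSL/TLS handshake and default certificate validation

            for (Certificate c : conn.getServerCertificates()) {
                X509Certificate cert = (X509Certificate) c;
                System.out.println("Subject : " + cert.getSubjectX500Principal());
                System.out.println("Issuer  : " + cert.getIssuerX500Principal());
                System.out.println("Expires : " + cert.getNotAfter());

                // A certificate whose subject and issuer match is self-issued
                // (either a self-signed server cert or a CA root).
                if (cert.getSubjectX500Principal().equals(cert.getIssuerX500Principal())) {
                    System.out.println("  (self-issued certificate)");
                }

                // SHA-1 fingerprint, useful for out-of-band verification of a
                // certificate you have been told to expect.
                byte[] digest = MessageDigest.getInstance("SHA-1").digest(cert.getEncoded());
                StringBuilder fp = new StringBuilder();
                for (byte b : digest) {
                    fp.append(String.format("%02X:", b));
                }
                System.out.println("SHA-1   : " + fp.substring(0, fp.length() - 1));
                System.out.println();
            }
            conn.disconnect();
        }
    }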

Red Flag #6: Password Reset via Poorly Implemented “Security Questions”

So is that it? Anything else? Of course; I've saved the best (or would that be the worst?) for last. The last authentication red flag on my list of authentication e-v-i-l-s is sites that allow you to reset your password based on your answer to the ubiquitous “security question”.

You know the ones... They ask you to choose a “security” question such as:
  • What is your favorite sports team?
  • Who is your favorite author?
  • What was the name of the school you attended in first grade?
  • What was your first car?
  • Where is your favorite vacation spot?

etc., and then to provide your answer that they check when you need your password reset.

Depending on whose figures you quote, one hears of help desk assisted password resets running between $50 and $150 per call. One also hears figures that between 20-40% of all help desk calls are to reset passwords. So given such costs, it is not surprising that companies have decided to automate their password resets. Unfortunately, since most companies don't have a second form of authentication that they support for all of their customers, their self-help mechanism for resetting passwords is often relegated to using “security” questions. Since this practice now includes almost every web site authenticating users with a password, one can't use this criterion alone to classify poor security practice. So instead, we will try to provide insight on how to recognize the bad from the worse.

We will take a three pronged approach in discussing this topic:
  1. How to recognize bad practice from the worst practice (aimed at both users and developers)
  2. Offer suggestions of how users can make the best of a bad practice
  3. Offer suggestions of how developers can better implement password resets

Recognizing Bad Practice

First, as noted in Good Security Questions,

“... there really are NO GOOD security questions; only fair or bad questions. 'Good' gives the impression that these questions are acceptable and protect the user. The reality is, security questions present an opportunity for breach and even the best security questions are not good enough to screen out all attacks. There is a trade-off; self-service vs. security risks.”

So, from an external perspective, how do we recognize the fair questions from the bad questions? GoodSecurityQuestions.com distinguishes four criteria. That site states that a “good” (relatively speaking) security question is one whose answer will have these four characteristics:
  1. cannot be easily guessed or researched (safe),
  2. doesn't change over time (stable),
  3. is memorable,
  4. is definitive or simple.
While some sites are getting better at formulating canned questions, few meet all four of these characteristics. Most web sites still only have questions that lead their audience to very predictable one or two word answers. On the plus side, though, more and more sites are now allowing their users to pose their own questions (such as “What is my password?” ;-) and answers.

The password reset process typically works by a user starting out by clicking on a “Forgot Password” link. Upon clicking this link, the user will be prompted for their user name. Once this is done, most sites prompt you to answer your security question(s) (sometimes you will need to answer multiple questions correctly). After you provide the correct answer(s) to the posed question(s), the web site will send an email to the email address that you used to register with that web site. That email message will typically contain either a temporary password that you can use at the main login screen, or a special link—often only valid for a short amount of time, such as a few hours—that allows you to reset your forgotten password. The better designed systems will send a special link to your email address on record that will allow you to answer your security question(s), and only then—if you answer them correctly—will allow you to proceed immediately to reset your password. This has the advantage of not allowing an adversary to see your specific questions first in order to research them.

However, a few sites still allow you to reset your password directly just by answering your security question—no email or SMS text, etc. side-channel is involved. A few others have the poor practice of displaying the email address that the new temporary password is being sent to. If a site does the latter, that opens up possible social engineering exploits against their help desk—such as an attacker claiming that they no longer use that email address and could the help desk personnel please change it to this other email address that they now use. Then all the attacker needs to do is guess the answer(s) to your security question(s) correctly.

Advice For Users

Fortunately, many ordinary citizens are getting smarter with this, and when posed with some canned question such as “What was the name of the school you attended in first grade?” will answer “spaghetti”. (In truth [seriously], “spaghetti” seems to be a favorite answer of these questions, so you may want to start trying something else, like perhaps “linguine”. :)

Unfortunately, many people don't understand how answering these questions realistically can work against them. When faced with answering the security question “What was your first car?”, they might answer “1969 Plymouth Satellite” (or in my case, that would be “1909 Ford Model T”...JK; actually it was a “Paleolithic Era Flintstones Mobile”). But seriously, which is easier for an attacker to do? To guess your 8 character password (assuming that your password isn't “password” :) or to guess the answer to your security question? Generally, it's the latter. It's not too hard for me to write a small program that will guess all reasonable permutations of year, make, and model of cars, or all the sports teams, or whatever your security question happens to be. After all, how many possible “favorite sports teams” are there? Maybe thousands at the most. Developers could go a long way to help out here by not permitting unlimited attempts at guessing the answer to these security questions, but they seldom do. So it's up to you—as users—to choose some technique to defeat this avenue of compromising your web site user account. Pick some standard technique that you will remember. For example, maybe add a common preamble to all your security answers, like “xyzzy-” or “plugh:” or “meh/” or whatever you want. Or you can always answer all such security questions with some common (but secret) pass phrase, such as “I think that all politicians should serve a term in office and then a term in jail.”, etc. Because there is no commonly recognized “best practice” for password resets, it is going to be a long time until the development community catches up. But then again, “best practice” for a “poor practice” is an oxymoron. As noted from GoodSecurityQuestions.com earlier, there are no “good” security questions, only fair or bad questions. So it is up to you, as users, to protect yourself until something better comes along.
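
To get a feel for just how small the answer space of a typical security question is, consider the back-of-the-envelope comparison below (the counts are my own rough guesses, not real statistics): even a generous estimate of possible “first car” answers is dwarfed by the space of random 8-character passwords.

    public class AnswerSpace {
        public static void main(String[] args) {
            // Rough, hypothetical estimates for "What was your first car?"
            int years  = 60;   // say, model years 1950 through 2009
            int makes  = 50;   // manufacturers someone might plausibly name
            int models = 40;   // models per make, on average

            long carAnswers = (long) years * makes * models;
            System.out.println("Candidate 'first car' answers: " + carAnswers);   // 120,000

            // Compare with a random 8-character password drawn from the
            // ~95 printable ASCII characters.
            double passwords = Math.pow(95, 8);
            System.out.printf("Random 8-character passwords : %.1e%n", passwords); // ~6.6e15

            System.out.printf("The password space is roughly %.0f billion times larger.%n",
                    passwords / carAnswers / 1e9);
        }
    }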

Lastly, if you, as users, are able to define your own security question / answer, that can provide a higher level of security than stock questions, assuming that you give your question some thought beforehand. I often advise friends to select a question that might be somewhat embarrassing to them if it were discovered. (Although, use with caution; as users, you can never assume that the developers are actually encrypting the questions and answers.) So, for example, a question like “What was the nick name that bullies used to taunt me with in grade school?” or “What is the name of the girl that I had a secret crush on in ninth grade?” might be appropriate, whereas a question such as “How much money have I embezzled from my last employer?”, not so good. Common sense should prevail here.

Advice For Developers

There is no consensus for what constitutes “best practice” for password resets using security questions / answers. Most likely, this is in part, because most security experts recognize that passwords are a weak form of authentication themselves and these password reset techniques are even weaker. But that said, from a pragmatic perspective—for the moment at least—we need to play the cards we are dealt.
There is evidence that the security industry is starting to pay attention to this issue. For instance, FishNet Security's Dave Ferguson published a white paper on this in 2010 and participated in a recent OWASP Podcast on the subject. Based on Ferguson's work, OWASP has started a “Forgot Password Cheat Sheet” (which still needs a lot of work, but it's an extraordinary start by Dave Ferguson and Jim Manico). [NOTE: The OWASP “cheat sheet” page is also a bit misnamed as it assumes that the only mechanism to reset passwords is via security questions / answers; if a site were using multi-factor authentication, it probably would be better to involve those other authentication factors in the process, but admittedly, outside the banking / finance industry and the military, multi-factor authentication is a rare practice.]
At the risk of repeating a lot of what is already spelled out in the OWASP Forgot Password Cheat Sheet, here is what I would advise. (I hope to get these comments folded into the cheat sheet in the not too distant future.)
Step 1) Gather Identity Data
Not much to disagree with here, except for the obvious: don't collect Social Security Numbers unless your site actually has a legitimate need for them. The same goes for collecting only the last 4 digits of the SSN.
Step 2) Selecting Initial Security Questions / Answers
The appropriate time to require that a user choose security questions and answers is when the user initially registers with your site. Ideally, allow them to select their own security question. If this is not possible or desirable for some reason, then allow them to select from a large set of well thought out security questions. It is a good idea to require that the answer to any security question be longer than some minimal length (say, 8 to 10 characters), otherwise brute force attempts become likely. Finally, regarding the storage of the security questions / answers: questions should be encrypted (especially if they are chosen by the user) and the answers should be hashed.
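
As a rough sketch of that storage advice (the parameter choices here are placeholders, not recommendations), the answer can be normalized, salted, and hashed with PBKDF2 so that the plaintext answer is never stored; the question itself would be encrypted separately (e.g., with an ESAPI Encryptor), which isn't shown here.

    import java.security.MessageDigest;
    import java.security.SecureRandom;
    import javax.crypto.SecretKeyFactory;
    import javax.crypto.spec.PBEKeySpec;

    public class SecurityAnswerStore {

        private static final SecureRandom RANDOM = new SecureRandom();
        private static final int ITERATIONS = 10000;   // placeholder work factor

        // Lower-case and collapse whitespace so trivial variations still match.
        static String normalize(String answer) {
            return answer.trim().toLowerCase().replaceAll("\\s+", " ");
        }

        static byte[] newSalt() {
            byte[] salt = new byte[16];
            RANDOM.nextBytes(salt);
            return salt;
        }

        // Hash the normalized answer; store only the salt and the hash.
        static byte[] hashAnswer(String answer, byte[] salt) throws Exception {
            PBEKeySpec spec =
                new PBEKeySpec(normalize(answer).toCharArray(), salt, ITERATIONS, 256);
            SecretKeyFactory skf = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
            return skf.generateSecret(spec).getEncoded();
        }

        static boolean matches(String candidate, byte[] salt, byte[] storedHash) throws Exception {
            // Recompute the hash for the candidate answer and compare to the stored one.
            return MessageDigest.isEqual(hashAnswer(candidate, salt), storedHash);
        }
    }
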
Step 3) Send a Time-Limited Token Over a Side-Channel
This follows the step to verify security questions in the OWASP cheat sheet, but I think it is better that it precede this verification so as to not even allow the possibility of answering the security questions until one has received this out-of-band token. A random 8 character token is sufficient for SMS, but using something like ESAPI's CryptoToken is better if generating an emailed link. Making this step precede the verification of the security questions increases the difficulty of a potential attacker researching the answers ahead of time (unless there are only a small set of possible questions). In addition, the token should ideally only remain valid for a limited duration after it was created, say two hours or so.
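
A minimal sketch of such a token (ESAPI's CryptoToken is a more complete option if you're emailing a link) might look like this; it just pairs a SecureRandom-generated string with an expiration timestamp, which the server would store keyed by user and then compare against whatever the user sends back.

    import java.security.SecureRandom;

    public class ResetToken {

        private static final SecureRandom RANDOM = new SecureRandom();
        // Unambiguous characters only, to keep an SMS'd token easy to type.
        private static final String ALPHABET = "ABCDEFGHJKLMNPQRSTUVWXYZ23456789";
        private static final long LIFETIME_MILLIS = 2L * 60 * 60 * 1000;   // two hours

        final String value;
        final long expiresAt;

        ResetToken(int length) {
            StringBuilder sb = new StringBuilder(length);
            for (int i = 0; i < length; i++) {
                sb.append(ALPHABET.charAt(RANDOM.nextInt(ALPHABET.length())));
            }
            this.value = sb.toString();
            this.expiresAt = System.currentTimeMillis() + LIFETIME_MILLIS;
        }

        // The token is only accepted if it matches and has not yet expired.
        boolean isValid(String presented) {
            return System.currentTimeMillis() <= expiresAt && value.equals(presented);
        }
    }
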
Step 4) Require the User to Return the Token
The user must return the token sent to her over a side-channel. So they must reply to the SMS text message or click on the link sent to their email address on record. Furthermore, they must do so within the required amount of time, otherwise the token becomes invalid.
Step 5) Redirect the User to a Secure Page to Verify Security Question(s)
If the token is valid, take the user to a page (using SSL/TLS) where they can answer the questions. Do not allow the user to select which question they wish to answer (assuming that there are multiple question / answer pairs; ideally, make them correctly answer them all). Developers should also take precautions to limit the effectiveness of guessing—for example, using CAPTCHAs to reduce the success rate of automated attacks and allowing only (say) 5 consecutive failed attempts before temporarily locking out the account from further attempts to answer the security question. (A temporary lockout of a few minutes should be sufficient.) Finally, if the account is locked out because of N consecutive failed attempts, note it in a security audit log as well as notify the user via email or an SMS text message.
Step 6) Allow User to Change Password
Once the user has correctly answered all required security questions (one or more, based on the risk of a potentially lost or compromised password), allow the user to change the password. Then (optionally) redirect the user to the login page and require that they re-login with their newly selected password. (Requiring that they re-enter their password will reinforce it in their mind. Of course, one must weigh this benefit against the inconvenience to the user experience.)

Conclusion

Well, that's enough ranting on this topic of warning signs of poorly implemented authentication practice. Tell me what your thoughts are on this. Have you seen any additional authentication red flags that I've forgotten? If so, let me know.

Regards,
-kevin

Saturday, March 19, 2011

Signs of Broken Authentication (Part 3)


Today, I'll cover two more warning signs of broken authentication. But first, a word from our sponsor. (Warning: Shameless plug ahead.)

Hey boys and girls! Do you have trouble thinking up new secure passwords for each new web site that you visit and have resorted to using passwords like “password1”, “password2”, “password3”, etc. because you know someone has told you to use different passwords for each site? Or you actually have secure passwords for your sites, but you have trouble remembering them? Well, look no further than Kevin Wall's Creating Good Passwords. You'll be glad you did.

We now return you to our regularly scheduled blog.

Red Flag #3: Mapping Password Characters to Digits for Entry via Telephone Keypad

Another red flag, which I have been running across much more frequently, is sites that allow you to enter your password via a standard telephone keypad. You can test for this on sites that allow such access. For instance, if your password was “JeHgr72w”, can you enter it as “53447729” from the numeric keypad of your phone? If you can, you know that they are very likely mapping the characters in your password to the digits 0 through 9 as arranged on a phone keypad. In such cases, this really dumbs down the entropy of your password—much worse than simply restricting which characters they allow you to use. If you run across such sites, I would advise you to choose the maximum length password that they allow.

You would think that this would not be too common with sites that handle confidential data, but just last year I discovered that a site handling our benefits was doing this. I ran across it when I called their customer service desk and they asked me to confirm who I was via my password. When I asked how to enter alphabetic characters on a telephone keypad, they incredulously answered “for A, B, or C, enter 2; for D, E, or F, enter 3; etc.”. I should have been suspicious when their web site didn't allow me to choose any special characters at all. (See Red Flag #2.) So far, the sites where I've encountered this have been limited to IVR systems associated with customer support. I guess one could argue that this is better than sites requesting things like your SSN for verification of identity—especially in cases when they shouldn't be using your SSN any longer (health care services come to mind). But the downside is that such a site is dumbing down potentially strong passwords without the knowledge of its users, so consumers beware!
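
To see how drastic the reduction is, here is a small sketch of the keypad mapping such an IVR system is presumably doing. Every letter collapses onto one of eight digits, so an 8-character mixed-case alphanumeric password (62^8, roughly 2 x 10^14 combinations) collapses into at most an 8-digit code (10^8 combinations).

    public class KeypadCollapse {

        // Standard telephone keypad letter groupings for keys 2 through 9.
        private static final String[] KEYS = {
            "abc", "def", "ghi", "jkl", "mno", "pqrs", "tuv", "wxyz"
        };

        static char toDigit(char c) {
            char lower = Character.toLowerCase(c);
            for (int i = 0; i < KEYS.length; i++) {
                if (KEYS[i].indexOf(lower) >= 0) {
                    return (char) ('2' + i);
                }
            }
            return c;   // digits (and anything else) pass through unchanged
        }

        public static void main(String[] args) {
            String password = "JeHgr72w";
            StringBuilder keypad = new StringBuilder();
            for (char c : password.toCharArray()) {
                keypad.append(toDigit(c));
            }
            System.out.println(password + " -> " + keypad);   // prints JeHgr72w -> 53447729
            System.out.printf("62^8 = %.1e possible passwords vs. 10^8 keypad codes%n",
                    Math.pow(62, 8));
        }
    }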

Red Flag #4: No Account Lockout

Another authentication red flag is sites that have no account lockout after some number of consecutive failed login attempts. If a web site allows someone unlimited attempts at guessing your password, you had better have a really strong password because there's nothing there to slow the attacker down.

So what should developers do? Well, it should be obvious that what they do NOT want to do is to permanently lock out a user account. If you thought help desk calls about password resets were expensive before, just try implementing a permanent lockout. Some hacker will come along and hit your site with something that guesses user names (in general, not very difficult to guess, especially if you have a list of first and last names for users) and just tries N intentionally incorrect passwords for each user name (where N is the failed-attempt threshold at which the site locks out the account). If you are the developer of such a site that implements this permanent lockout policy, let's just say that, for your sake, I hope you are away on vacation in the deep woods of Canada where no one can find you if / when this happens.

So what's the correct approach? Well, the idea is to sufficiently slow down an attacker who is making online guesses against your login page. So pick some reasonable number of password attempts (5 seems about right), and if there are (say) 5 consecutive failed login attempts for a specific user name, temporarily lock out that user account for some short amount of time (between 5 and 10 minutes is good). On that failed attempt, you should display an error message that says something like:

Your user account has been temporarily locked out for T minutes after N consecutive failed attempts. Please try again in T minutes.

where T and N are the lockout duration and failed attempt threshold respectively. A similar error message would be displayed if a login attempt was made while the account is temporarily locked out.

Once a user successfully authenticates after an account had been temporarily locked out, it is also a good idea to display a message that gives notice to the user that this happened and the time when it occurred. This helps the user know that someone else may have been trying to crack their account, and if so, they may wish to change their password as a result.

POLICY NOTE: If your company has a policy of not divulging whether a login attempt failed because the user name or the password was incorrect (e.g., “Login failed: Invalid user name or password” rather than “Login failed: Invalid user name” / “Login failed: Invalid password”), then you need to temporarily lock out even invalid user accounts! Otherwise, an attacker can use this account lockout information to discern whether or not a guessed user name is valid.
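For what it's worth, here is a minimal sketch of the temporary-lockout idea described above. It is my own illustration with made-up names, not production code; a real site would persist this state and fold it into its authentication flow. Note that it tracks failures by user name whether or not the account actually exists, per the policy note.

    import java.time.Duration;
    import java.time.Instant;
    import java.util.HashMap;
    import java.util.Map;

    public class TemporaryLockout {
        private static final int MAX_FAILURES = 5;                      // N
        private static final Duration LOCKOUT = Duration.ofMinutes(10); // T

        private static final class State {
            int consecutiveFailures;
            Instant lockedUntil = Instant.MIN;
        }

        // Keyed by user name, valid or not, so the lockout behavior does not
        // leak which user names actually exist.
        private final Map<String, State> states = new HashMap<>();

        public synchronized boolean isLockedOut(String userName) {
            State s = states.get(userName);
            return s != null && Instant.now().isBefore(s.lockedUntil);
        }

        public synchronized void recordFailure(String userName) {
            State s = states.computeIfAbsent(userName, u -> new State());
            s.consecutiveFailures++;
            if (s.consecutiveFailures >= MAX_FAILURES) {
                s.lockedUntil = Instant.now().plus(LOCKOUT);
                s.consecutiveFailures = 0;   // counting starts over after the lockout expires
            }
        }

        public synchronized void recordSuccess(String userName) {
            // Reset on success; this is also where the UI would tell the user
            // about any lockout that occurred since their last successful login.
            states.remove(userName);
        }
    }

The login handler would call isLockedOut() before checking the password and display the “try again in T minutes” message whenever it returns true.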

Wednesday, March 16, 2011

Builders vs. Breakers Dichotomy

I've just posted a reply to Marisa Fagan's Dark Reading blog. My reply is short (well, for me at least ;-). Marisa weighs in on the adversarial relationship between developers and security people. I've added my $.02 as to why that's not necessarily a bad thing as long as respect between the two roles is maintained. I am not going to repost it here. If you are interested, you can find it here.

-kevin

Tuesday, March 15, 2011

Response to Mark Curphey's Blog “OWASP—Has It Reached a Tipping Point”


Back on February 18, 2011, Mark Curphey, the original founder of OWASP, wrote a very thought-provoking blog post regarding the direction that OWASP seemed to be taking.

And while I hadn't originally intended on commenting on it, Rohit Sethi's post to the Secure Coding mailing list caused me to rethink this. (Pardon me for not addressing everything in the sequential order that Mark brings it up.)

Curphey's blog refers to tweets coming from the OWASP Summit in Portugal as the singular event that precipitated his reflections about OWASP's direction. He points out that the tweet that caught his eye was one where John Wilander tweeted that an OWASP Summit attendee remarked on stage that “Developers don't know shit about security”. Curphey refers to John Wilander's post “Security People vs. Developers”, which Wilander blogged about 5 days after his initial Twitter post. Wilander's post is also very thoughtful. In it, he says he later added the comment “Well, I got news. You don't know shit about development”, obviously in reference to the OWASP Summit attendee who originally made the remark.

Well, I'm going to respond in part to both Curphey's and Wilander's blog posts, and perhaps add a little (in)sanity to the mix. Why? Well, for one, I have extensive experience in both development and security. I've been a developer for 30+ years, and been involved with application security for about 12 years. So, IMNSHO, I know “shit” about both development and security, and it is from that frame of reference that I am posting this.

So Who Is Right?

So let's start out trying to answer the question: who is correct here? Which, if any, of these is the correct point of view:
  • Developers don't know jack about security.
  • Security people don't know jack about development.
Or perhaps, let's go even further and contemplate:
  • Developers don't know squat about development.
  • Security professionals don't know squat about security.
Well, I could answer “none of the above” or I could just as easily answer “all of the above”. That alone should tell us that we are trying to answer the wrong question. But I have honestly seen developers (some of them even with PhDs in computer science) who couldn't write code if their life depended on it, and architects who no longer remember how to code. I've also seen security professionals who are great pretenders. They know all the right buzzwords and have all the right certifications (e.g., ISC2's CISSP, GIAC's Information Security Professional, etc.), but ask them to write up a threat model for some system and all they can give you is a blank stare.
That is not the point here!
A former colleague and close friend of mine once told me that he thought that about 20-25% of the people in any profession are clueless @holes. And while that may be true (I think the percentage is rather high, and am in no position to judge other professions, so I will refrain from further comment), that isn't the point. Companies, professional societies, and society as a whole have to play the hand we've been dealt. Companies generally don't have the option to use another company's employees. So the real point is not “who is correct?” in this debate, but rather “how can we help IT as a whole to move in a direction of more secure code without sacrificing developer (and perhaps more importantly, business) interests?”.
Which brings us back to the original “developers don't know shit about security” comment. While I completely understand one's need occasionally to vent frustration (and a cage wrestling match between Linus Torvalds and Bruce Schneier might even prove amusing ;-), sniping at each other is seldom a productive way to reach your goals, as all it does is alienate the two parties that need very much to be working together to solve these issues. Taking an “us versus them” mentality is bound to fail, but by taking a “collective 'we'” stance, we might just make some progress.

So What Does Matter?

Well, we could start with respect, for one. Now I will be the first to admit that I do a poor job at this. All too often, I confuse the issues of “trust” vs. “respect”. I believe that it is OK to require that people earn some degree of trust. It is not OK to make them earn respect. There should be a certain amount of respect that we have for both our development and security colleagues, based simply on the common dignity of humankind and the common profession to which we belong.
What is not respectful is to assume that you clearly understand the motivations of individuals. At times, we are all probably guilty of this. For example, we might assume that all VB programmers are stupid. I must confess that I have propagated this myth by quoting Dijkstra's comment on BASIC numerous times in my email .sig, and that is disrespectful. Likewise for my .sig with Richard Clarke's quote “The reason you have people breaking into your software all over the place is because your software sucks...”. So I just want to be the first to step up and admit my guilt and to try to begin the healing process. I am unfortunately rather jaded after my 30+ years in IT, seeing the same dumb things repeated over and over again, often by the same people (myself included). But that is no reason to be disrespectful to them. So if I've offended anyone, I apologize for being so inconsiderate and ask your forgiveness. And even though the Dijkstra and Clarke quotes are two of my favorites, I will not use them again in my email .sig.

Reaching Common Ground

Wilander cites a poll that he took of 200+ developers and asked them to rate where security fell in their priority. Their rankings came out like this:
  1. Functions and features as specified or envisioned
  2. Performance
  3. Usability
  4. Uptime
  5. Maintainability
  6. Security
If you have done any development in the trenches, you probably are not surprised by this ranking. But I think Wilander left off one important item. When I talk to developers, and indeed in my former life as one, “schedule” would always come up as #1, and not overrunning the budget was always a close second or third. Perhaps meeting schedule and budget was just implied; I don't know, as I haven't seen the poll to which John refers. However, if nothing else, it does point to some other contributing forces as to why security gets ranked so low. (Surprisingly, this is often the case even when security software is involved, so these forces, probably coming from the business, seem to be universal, at least in the commercial sector.)
I have often said that I believe that development is more difficult than security. Why? Well, for one, developers are expected to deal with the security aspect of their software plus all these other items in Wilander's list.
Because of that, I think that my observation is true in the general case:
    “It's easier to teach a good developer about security than it is to teach a good security person about software development.”
So if I have to start training people about security (and I have done this with several), I always prefer to start with someone who is a good developer and teach them about security, rather than going in the other direction.

Back to Curphey's Blog

Which brings me back, albeit in a circuitous way, to Mr. Curphey's astute observations.
Mark states:
    “I had always hoped that the community would develop into a community of developers that were interested in security rather than a community of security people that were interested in software. I wanted to be part of a community that was driving WS* standards, deep in the guts of SAML and OAuth, framework (run-time) / language security and modern development practices like Agile and TDD rather than people seemingly obsessed by HTTP and web hacking techniques.”
Why hasn't this happened? Well, in my opinion, one reason is that an open source community's involvement in standards work is definitely at a disadvantage compared to special interest groups comprised of well-funded vendors who have a vested interest in developing such forward-looking work for the benefit of their respective companies. Involvement in these standards takes a lot of time, and usually the reward is low, that reward being seeing people adopt your respective standard. Look how long it is taking the various OASIS WS-* standards to gain critical mass.
Secondly, this sort of ambitious work requires a broad base of expertise. Let's take SAML, for instance. That requires expertise in XML, XSDs, encryption, and authentication protocols, and experience in writing specifications is highly recommended as well. That sort of breadth, beyond a surface level, is rare in individuals. However, a company sponsoring a specific standard need not rely on a single individual (even if they may have a single point of contact); they can afford to have several people participate. After all, why not? There frequently is a profit motive there. (This is especially true for standards that are on their 2.0 or later revisions.)
Thirdly, people typically stay with what they are comfortable with. So if their expertise is HTTP-based attacks, they stick with it until it no longer provides their meal ticket. And let's face it, when OWASP was started a decade or so ago, HTTP was pretty much all you needed to understand to get by. (Well, that plus a fundamental understanding of JavaScript.)
Mr. Curphey continues with
    “We can’t have security people who know development. We must have developers who know security. There is a fundamental difference and it is important.”
I wholeheartedly agree with this. I think it aligns well with my above observation that it's easier to take a good developer and train her about security than vice-versa. If we fail to keep this idea in the forefront of our minds, we are bound to fail. My only addendum to Mark's comment would be to rephrase it to say that “we can't only have security people who know development”. Those folks are still valuable. I think and hope Mark would concur.
Curphey continues...
    “Manage the Project Portfolio – When I look at the OWASP site today its hard to see it as anything else but a “bric-a-brac” shop of random projects. There are no doubt some absolute gems in there like ESAPI but the quality of those projects is totally undermined by projects like the Secure Web Application Framework Manifesto. When I first looked I honestly thought this project was a spoof or a joke.”
I see the same disorganization. In part, I think that's something that wikis seem to encourage. Once they evolve beyond a certain critical mass, similar things get written down many times on many different pages unless there is an overall editor / project manager to manage it all. AFAIK, OWASP does not have this, and it suffers for it. I think it also explains the lack of uniform consistency and quality across the various OWASP projects.
Also, while I appreciate Curphey's candor, I appreciate even more Rohit Sethi's non-defensive response. When I read Rohit's comment to Mark's blog, I must say that while I'm sure it was a hard pill for him to swallow, he stepped up and did so like an honorable man. However, I do disagree in part with Curphey's assessment of the OWASP “Secure Web Application Framework Manifesto”. I don't think it is a total waste. A reply posted earlier today by Benjamin Tomhave seems to express a similar sentiment when Ben writes “but I hate to see the white paper lost. Why not also look at joining efforts with something like the Rugged Manifesto movement?”. There is something to be redeemed in almost every mess. If nothing else, it is useful to examine it in more detail to see why it was deemed a failure. (I must admit I only took a quick 5 minute read through the whole manifesto; however, my overall sense was not that it lacked valuable information, but rather that it lacked appropriate organization, along with a certain degree of incompleteness.) If it were written in such a way that it could be referenced BY DEVELOPERS (your audience) as a specification document, it could prove useful, especially for new frameworks that are only now getting underway. But if such a document is not well-organized and approachable from the point of view of developers, it will not get used. Period! (And don't even ask why 'Period' demands an exclamation point; I just like the sense of irony.)
Regarding Curphey's second point on “Industry Engagement and Communications”, I have no personal basis from which to speak, so I'll just keep my mouth shut on this one. (I can hear you all saying “Thank God!” :)
To Mark's third point on “Ethics / Code of Conduct”, I think he is spot-on. Vendors use the “OWASP” name to sell their products more than ever. (“Successfully defends against the OWASP Top Ten attacks”, etc.) But unless OWASP is willing to take a stand and bite the hand that sometimes feeds it (Mark's second point), this will never change. In particular, unless OWASP is willing to litigate against those who take advantage of the OWASP name to pander their products, I don't see this changing at all. IANAL and thankfully don't even play one on TV (or YouTube, for that matter), so that is up to the OWASP Board to decide, not me. (Hey, Jeff! Are you listening? What's your $.02 on this?)
And finally to Mark's last point “Engaging Developers”. Mark writes:
    “Maybe Software Security is for developers and Application Security is for security people. The first persona is the builder and the second persona the breaker. ... Developers best understand what they need and want, security people best understand what they need and want.”
No! I don't think so. There may be a continuous spectrum from builder to breaker, but if we treat these as independent goals, we will never get to where I think we all want to be, which is “secure software”. So they had better have a common goal; they had better both “need and want” the same things. Otherwise the whole security effort will fragment and fall apart. We need each other, and breakers and builders do have different means to achieve the same goal. But it had better be the same goal (well, assuming you breakers are “white hats”), and that goal is to improve software and systems security. Don't forget that!

Wow; I've blathered on much longer than intended. My original intent was to post this as a reply to SC-L, but at 4 pages, it's a bit long for that. (Well, if you only learned one thing from this post, it's probably a side-note observation: “Now I know why Kevin is not on Twitter”. ;-)
Send me your thoughts.
-kevin