Wednesday, March 30, 2011

Signs of Broken Authentication (Part 4)


So today's post is the last in the “Six Signs of Broken Authentication” series.

So let's review. Thus far, we have covered the following red flags:
  1. Restricting the maximum length of a password.
  2. Restricting the set of characters you are allowed to use in your password.
  3. Mapping all your password characters to digits 0 through 9 so you may enter it via an ordinary telephone keypad.
  4. No password lockout.
Today, we will be covering the last two warning signs of broken authentication:
  5. Missing or inappropriate use of SSL/TLS
  6. Password reset via poorly implemented “security questions”

Red Flag #5: Missing or Inappropriate use of SSL/TLS

You go to a web site and you notice that the login page is not using https (i.e., HTTP over “Secure Socket Layer” (SSL) or “Transport Layer Security” (TLS)). Or for those of you who've been trained to look to see if the little lock is showing, you notice that it is conspicuously absent.

Of course, there are the obvious signs and—depending on which browser you are using, your particular browser configurations, and which browser plug-ins you may be using—the not so obvious signs that things are awry.

Let's start with the more visible ones and then we will follow up with the less obvious ones.

Note: In the ensuing discussion I will be using the term “SSL” to refer to both SSLv3 and TLSv1 unless otherwise specified. Also note that this list is not complete, but is intended to cover the most common and egregious issues. Post a reply to this blog to provide your feedback if your favorite one is missing.

The Web Site's Login Form Is Not Using SSL/TLS At All

Setting up an SSL site when the only thing that needs protecting is users' passwords seems like such a waste of time to many IT teams. And sure, there's always the argument to be made that if the rest of the site isn't using SSL at all, is there really that much to be gained? (But, that's probably not the right question to be asking to begin with; the more appropriate question is “what is your threat model?”. Only after answering that can you answer whether the rest of your site should be using SSL/TLS or not.)
One might think this is an uncommon practice and that major sites would never make such a faux pas, but until rather recently, the general login pages for Facebook did not default to using SSL.
Now this practice might have been OK in the world predating open Wi-Fi hotspots, but nowadays it is trivial for someone to snoop the ether for user name / password combinations not transported over SSL. That, combined with the fact that the average netizen (certainly not you astute folks reading this blog! :) has a tendency to reuse a small handful of passwords, makes capturing someone's password a potentially valuable exercise to those ready to exploit such things.
So while Gene Spafford's advice of “Using encryption on the Internet is the equivalent of arranging an armored car to deliver credit card information from someone living in a cardboard box to someone living on a park bench” may have been appropriate advice when LANs were over traditional Ethernet, with the advent of open Wi-Fi hotspots such advice is outdated at best.

The Web Site's Login Form Uses SSL, But the Form Is Displayed Using http

Today, it seems to be in vogue for web sites to have a link to the login form directly off their main page. This seems to be especially popular with telecommunications companies and their residential “My Account” portals; witness Qwest, Verizon, and AT&T.
Now, while these sites do post the login requests to their respective servers using https, the fact that the login forms are displayed on a non-https page is somewhat disturbing.
The issue is that a man-in-the-middle (MITM) attacker can mirror the main site, but alter the login page so that the authentication is redirected to a rogue site used to capture your password. (Depending on their sophistication, some may pass through your password to the actual intended site, while others may simply return a “login failed” indication.) Often this attack occurs using a technique known as “pharming”, whereby an attacker—usually placed conveniently near an open Wi-Fi hotspot—hijacks DNS requests to a web site. If the requested site is one they have mirrored, the hijacked DNS response directs the victim to their rogue copy, often hosted under a similar-looking name. The attacker may also attempt to obfuscate the URL by various means in hope that their social engineering attack won't be noticed.
What you, as users, can do to make things safer: Most of these sites will redirect you to a more bare bones, all SSL, login form if you simply click on their “Submit” or “Login” buttons from the main (non-SSL) containing page. For example, if you were to click on the “Sign In” button from Qwest's main page, you will be redirected to https://www.qwest.com/MasterWebPortal/freeRange/Login_validateUserLoginAction.action, which of course is using SSL. From there, you can verify the certificate, in particular that it belongs to whom you think it should, to make sure this is the site that you intended to visit.

The Web Site's Login Form Uses SSL, But the Form Is Using Components Not Using SSL

This is similar to, but in general not quite as bad as, the above case. Here there is some component of the login page that is not using SSL whereas the rest of the page is. Generally, this is caused by the site using vanilla http to load an image, some JavaScript, or a cascading style sheet. Most modern browsers will provide some sort of warning for this (although many people have disabled this warning because they see it so often).
In theory, such a thing can still be exploited by a MITM attack, although the difficulty of this varies greatly depending on what the pages are that are loading over vanilla http, whether that page is cached in a proxy somewhere, and many other variables. This attack is usually much more difficult than simple DNS hijacking. If a proxy cache is involved and the specific page is cacheable, then sometimes a proxy cache-poisoning approach will work for an attacker. (This can be accomplished in many different ways, such as HTTP response splitting, HTTP request smuggling, DNS cache poisoning, etc. The details are beyond the scope of this blog post.)
Generally, if you see this warning in your browser, you can either choose to heed the warning or take your chances. If you decide to take your chances, you might at least wish to explore why the warning is being issued. From Firefox, you can do this by right-clicking on a page and selecting 'View Page Info'. From the 'Page Info' dialog box, select 'Media'. In the top section, right click and choose 'Select All', followed by right click 'Copy'. Then paste the copied addresses into a text editor and search through them. But caution: unless you understand HTTP, HTML, and web application security, you are better off just reporting this situation to the web master. There be dragons in these waters, so tread carefully.
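For the more technically inclined, here is a minimal Java sketch of the same idea: fetch the page source over HTTPS and flag any src/href attributes that point at plain http:// resources. The URL is a placeholder and the regex is only a rough heuristic (it won't catch resources pulled in by CSS or scripts), so treat it as a starting point rather than a complete mixed-content audit.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Rough heuristic: fetch an HTTPS page and list any src/href attributes
// that reference plain http:// resources (possible mixed content).
public class MixedContentCheck {
    public static void main(String[] args) throws Exception {
        String page = "https://example.com/login";   // placeholder URL
        StringBuilder html = new StringBuilder();
        try (BufferedReader in = new BufferedReader(new InputStreamReader(
                new URL(page).openStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                html.append(line).append('\n');
            }
        }
        // Naive regex; a real check would parse the DOM and follow CSS/JS imports.
        Pattern p = Pattern.compile("(?:src|href)\\s*=\\s*[\"'](http://[^\"']+)[\"']",
                Pattern.CASE_INSENSITIVE);
        Matcher m = p.matcher(html);
        while (m.find()) {
            System.out.println("Non-SSL component: " + m.group(1));
        }
    }
}
```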

The Web Site's Login Form Uses GET Rather Than POST

Login forms should submit credentials with an HTTP POST, but occasionally a web developer will specify an HTTP GET as the HTML form's “method”. While both work, a GET passes the form fields as query parameters. HTTP web servers generally log all GET requests, including any query parameters that are part of the requested URL. In such cases, your user name and password will likely end up in someone's log files. Furthermore, by default these parameters also go into your browser's history, so if you are using a web browser from a kiosk (probably a bad idea in the first place) your user name and password end up there as well.

The Web Site Uses a Dubious Certificate

[Note: This subsection could probably be called “Why Certificates and Public Key Infrastructure Do Not Work”, but that's a topic for another day. If interested, Google what CS professor and cryptographer Peter Gutmann has written about the subject.]
When a web site is configured to use SSL, the web server will be configured with an X.509 server-side certificate. Often these certificates are not correctly configured. Generally, these misconfigurations will cause a warning in your browser. (I will not attempt to describe the exact wording of the warnings here because they vary greatly depending on which browser you are using and even with the browser version. However, the browser developers put these warnings there for a reason and you should generally heed them when they advise you to “run away”, the law of “dancing pigs” notwithstanding.)
Here are some of the more common certificate-related problems.
  1. Self-signed Certificate or Certificate Not Signed by Trusted CA
    Your web browser has several dozen “trusted” (by the vendor of your browser, not necessarily by you) certificates issued by various Certificate Authorities (CAs). An SSL server certificate issued by one of these trusted CAs will be trusted by your browser because it was properly signed by a trusted CA. But occasionally, in an attempt to save money (or because of general cluelessness of the IT staff administering the web site) a web site will use a self-signed (i.e., self-issued) certificate instead of one signed by a trusted CA. (A variant of this is that they will sign a certificate by one of their internal CAs that your browser does not trust.) In either case, your browser should issue a warning if you have not disabled this specific warning. (If it doesn't, it's time to switch to a different browser.)
    The problem here is that anyone can create a self-signed certificate, so unless you can trust this specific self-signed certificate (for example, by verifying the certificate's fingerprint beforehand, out-of-band, against a trustworthy source; see the sketch after this list), a MITM attack is again possible. (Not trivial mind you, but certainly possible.) If you must use such a site, you are strongly advised to choose a unique password for it. And if that site happens to be your bank..., well, then I'd advise you to find a new bank, at least until their IT staff gets it fixed.
  2. Certificate's CN Does Not Match Host Name of Web Site for Login Page
    Earlier, I mentioned Eugene Spafford's quote about how using encryption on the Internet is overkill. That never meant that SSL was useless. Indeed, in pre-open Wi-Fi times, arguably the most important purpose of SSL was the server-side authentication that your browser performed as it made an SSL connection. This server-side authentication consists of validating that the digital signature on the SSL server-side certificate is valid and was signed by one of your browser's trusted CAs, as well as ensuring that the host name on the SSL server certificate matches the host name that your browser believes it is trying to connect to. To do this, your browser compares the host name portion of the URL it is visiting to the 'CN' (Common Name) on the SSL server-side certificate. If there is a mismatch, your browser will issue a warning.
    This server-side authentication is important in preventing simple phishing attacks. Without this check, an attacker could redirect your browser to a rogue site that (say) looks like your bank's site and get you to authenticate to their site instead, thereby handing over your bank account to them. So, as users, don't get into the habit of just accepting mismatched host names on certificates or you will be setting up yourself for future phishing attacks.
  3. Site Is Using A Revoked Certificate
    Either a CA or the owner of a certificate may “revoke” the certificate because they believe that the private key associated with the public key on the certificate has been compromised. So chances are, if you encounter a revoked certificate on a web site, you are dealing with a rogue web site set up to mimic the real web site's appearance. Stay away and notify those running the site that you were originally intending to visit, as it is possible (though unlikely) that the site's legitimate owner has accidentally put the revoked certificate back up.
  4. Certificate Supports No Revocation Checking
    Your browser relies on hints in the SSL server certificate or the issuing CA's certificate to determine whether and how to check if a certificate has been revoked. Your browser does this by examining these certificates for certain optional extensions which instruct it how and where to check for revoked certificates. (The details are again beyond this particular blog post.) Furthermore, depending on your browser and its version, this revocation checking may or may not be automatically enabled. If it isn't enabled, your browser generally won't be able to detect revoked certificates, with the exception of those revoked certificates built into the browser itself. (Check your browser's documentation for details of how to enable revocation checking for your specific browser version.)
    However, occasionally a web site will use a cheap(er) CA, and that CA might not support revocation checking. (I suspect most CAs do; perhaps even all, if they support X.509v3 certificates at all. But it certainly is a possibility. I'll leave it as an exercise for the reader to check whether all of your browser's built-in CAs support revocation checking. If any of the CAs have issued version 1 X.509 root certificates, those probably do not support revocation checking because they do not support certificate extensions.) Also, I cannot speak as to which, if any, browsers would issue a warning for such CAs. If you know, please post a comment to educate us.
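As promised in item 1 above, here is a minimal Java sketch of out-of-band fingerprint verification: export the server's certificate from your browser to a file and compute its SHA-256 fingerprint, which you can then compare against a fingerprint obtained from a trustworthy source. The file name is just an example.

```java
import java.io.FileInputStream;
import java.security.MessageDigest;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

// Compute the SHA-256 fingerprint of an exported server certificate
// (DER or PEM) so it can be compared against a fingerprint published
// or communicated out-of-band by the site's owner.
public class CertFingerprint {
    public static void main(String[] args) throws Exception {
        String file = args.length > 0 ? args[0] : "server.cer"; // example path
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        X509Certificate cert;
        try (FileInputStream in = new FileInputStream(file)) {
            cert = (X509Certificate) cf.generateCertificate(in);
        }
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(cert.getEncoded());
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02X:", b & 0xff));
        }
        System.out.println("Subject:     " + cert.getSubjectX500Principal());
        System.out.println("Fingerprint: " + hex.substring(0, hex.length() - 1));
    }
}
```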
Finally, you will notice that I did not include “Expired Certificate” in this list of dubious certificates. I would describe that as a “pink” flag, not a red flag. The primary reason for CAs expiring certificates to begin with is to ensure a continued revenue stream. Yes, there are some valid arguments for expiring certificates, but assuming that one takes reasonable precautions to protect the associated private keys and one is using a reasonably sized public/private key (1024 bits for RSA; sorry NIST!), an expiration period of 3 to 5 years should be very reasonable. But as I don't really want to get into the politics of CAs, I am not going to belabor this point. If a certificate has been expired for several years, it probably is a red flag, indicating that the IT staff is either oblivious or apathetic (as this almost always generates a huge warning in almost every browser available). Of course it might also indicate that the site is as popular as a skunk sniffing contest and no one other than the IT staff actually visits it. ;-)

Red Flag #6: Password Reset via Poorly Implemented “Security Questions”

So is that it? Anything else? Of course. I've saved the best (or would that be the worst?) for last. The last authentication red flag on my list of authentication e-v-i-l-s is sites that allow you to reset your passwords based on your answer to the ubiquitous “security question”.

You know the ones... They ask you to choose a “security” question such as:
  • What is your favorite sports team?
  • Who is your favorite author?
  • What was the name of the school you attended in first grade?
  • What was your first car?
  • Where is your favorite vacation spot?

etc., and then to provide your answer that they check when you need your password reset.

Depending on whose figures you quote, one hears of help desk assisted password resets costing between $50 and $150 per call. One also hears that between 20-40% of all help desk calls are password resets. So given such costs, it is not surprising that companies have decided to automate their password resets. Unfortunately, since most companies don't have a second authentication factor that they support for all of their customers, their self-help mechanism for resetting passwords is often demoted to using “security” questions. Since this practice now extends to almost every web site that authenticates users with a password, one can't use this criterion alone to identify poor security practice. So instead, we will try to provide insight on how to recognize the bad from the worse.

We will take a three pronged approach in discussing this topic:
  1. How to recognize bad practice from the worst practice (aimed at both users and developers)
  2. Offer suggestions of how users can make the best of a bad practice
  3. Offer suggestions of how developers can better implement password resets

Recognizing Bad Practice

First, as noted in Good Security Questions,

“... there really are NO GOOD security questions; only fair or bad questions. 'Good' gives the impression that these questions are acceptable and protect the user. The reality is, security questions present an opportunity for breach and even the best security questions are not good enough to screen out all attacks. There is a trade-off; self-service vs. security risks.”

So, from an external perspective, how do we recognize the fair questions from the bad questions? GoodSecurityQuestions.com distinguishes four criteria. That site states that a “good” (relatively speaking) security question is one whose answer will have these four characteristics:
  1. cannot be easily guessed or researched (safe),
  2. doesn't change over time (stable),
  3. is memorable,
  4. is definitive or simple.
While some sites are getting better at formulating canned questions, few meet all four of these characteristics. Most web sites still only have questions that lead their audience to very predictable one or two word answers. On the plus side though, more and more sites are now allowing their users to pose their own questions (such as “What is my password?” ;-) and answers.

The password reset process typically starts with the user clicking on a “Forgot Password” link. Upon clicking this link, the user will be prompted for their user name. Once this is done, most sites prompt you to answer your security question(s) correctly (sometimes you will need to answer multiple questions correctly). After you provide the correct answer(s) to the posed question(s), the web site will send an email to the email address that you used to register with that web site. That email message will typically contain either a temporary password that you can use at the main login screen or a special link—often only valid for a short amount of time, such as a few hours—that allows you to reset your forgotten password. The better designed systems will send a special link to your email address on record that will allow you to answer your security question(s), and only then—if you answer them correctly—will allow you to proceed immediately to reset your password. This has the advantage of not allowing an adversary to see your specific questions first in order to research them.

However, a few sites still allow you to reset your password directly just by answering your security question—no email or SMS text, etc. side-channel is involved. A few others have the poor practice of displaying the email address that the new temporary password is being sent to. If a site does the latter, that opens up possible social engineering exploits against their help desk—such as an attacker claiming that they no longer use that email address and could the help desk personnel please change it to this other email address that they now use. Then all the attacker needs to do is guess the answer(s) to your security question(s) correctly.

Advice For Users

Fortunately, many ordinary citizens are getting smarter with this, and when posed with some canned question such as “What was the name of the school you attended in first grade?” will answer “spaghetti”. (In truth [seriously], “spaghetti” seems to be a favorite answer of these questions, so you may want to start trying something else, like perhaps “linguine”. :)

Unfortunately, many people don't understand how answering these questions truthfully can work against them. When faced with answering the security question “What was your first car?”, they might answer “1969 Plymouth Satellite” (or in my case, that would be “1909 Ford Model T”...JK; actually it was a “Paleolithic Era Flintstones Mobile”). But seriously, which is easier for an attacker to do? To guess your 8 character password (assuming that your password isn't “password” :) or to guess the answer to your security question? Generally, it's the latter. It's not too hard for me to write a small program that will guess all reasonable permutations of year, make, and model of cars or all the sports teams or whatever your security question happens to be. After all, how many possible “favorite sports teams” are there? Maybe thousands at the most. Developers could go a long way to help out here by not permitting unlimited attempts at guessing the answer to these security questions, but they seldom do. So it's up to you—as users—to choose some technique to defeat this avenue of compromising your web site user account. Pick some standard technique that you will remember. For example, maybe add a common preamble to all your security answers, like “xyzzy-” or “plugh:” or “meh/” or whatever you want. Or you can always answer all such security questions with some common (but secret) pass phrase, such as “I think that all politicians should serve a term in office and then a term in jail.”, etc. Because there is no commonly recognized “best practice” for password resets, it is going to be a long time until the development community catches up. But then again, “best practice” for a “poor practice” is an oxymoron. As noted from GoodSecurityQuestions.com earlier, there are no “good” security questions, only fair or bad questions. So it is up to you, as users, to protect yourself until something better comes along.
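To make the “small program” claim concrete, here is a toy Java sketch that enumerates year/make/model combinations for the “What was your first car?” question. The make and model lists are deliberately short stand-ins; even with complete lists, the search space stays tiny compared to a decent password.

```java
// Toy illustration of why realistic answers to "What was your first car?"
// are weak: the space of plausible year/make/model strings is small.
public class FirstCarGuesses {
    public static void main(String[] args) {
        String[] makes  = { "Ford", "Chevrolet", "Toyota", "Honda", "Plymouth" };
        String[] models = { "Mustang", "Impala", "Corolla", "Civic", "Satellite" };
        long count = 0;
        for (int year = 1950; year <= 2011; year++) {
            for (String make : makes) {
                for (String model : models) {
                    String guess = year + " " + make + " " + model;
                    count++;                      // a real attack would submit or hash 'guess' here
                    if (count <= 3) {
                        System.out.println("e.g., " + guess);
                    }
                }
            }
        }
        System.out.println("Candidate answers generated: " + count);
        // Even with full make/model lists the total stays in the low millions,
        // which is nothing for an attacker if the site allows unlimited attempts.
    }
}
```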

Lastly, if you, as users, are able to define your own security question / answer, that can provide a higher level of security than stock questions, assuming that you give your question some thought beforehand. I often advise friends to select a question that might be somewhat embarrassing to them if it were discovered. (Although, use this with caution; as users, you can never assume that the developers are actually encrypting the questions and answers.) So, for example, a question like “What was the nickname that bullies used to taunt me with in grade school?” or “What is the name of the girl that I had a secret crush on in ninth grade?” might be appropriate, whereas a question such as “How much money have I embezzled from my last employer?”, not so good. Common sense should prevail here.

Advice For Developers

There is no consensus for what constitutes “best practice” for password resets using security questions / answers. Most likely, this is in part, because most security experts recognize that passwords are a weak form of authentication themselves and these password reset techniques are even weaker. But that said, from a pragmatic perspective—for the moment at least—we need to play the cards we are dealt.
There is evidence that the security industry is starting to pay attention to this issue. For instance, FishNet Security's Dave Ferguson published a white paper on this in 2010 and participated in a recent OWASP Podcast on the subject. Based on Ferguson's work, OWASP has started a “Forgot Password Cheat Sheet” (which still needs a lot of work, but it's an extraordinary start by Dave Ferguson and Jim Manico). [NOTE: The OWASP “cheat sheet” page is also a bit misnamed as it assumes that the only mechanism to reset passwords is via security questions / answers; if a site were using multi-factor authentication, it probably would be better to involve those other authentication factors in the process, but admittedly, outside the banking / finance industry and the military, multi-factor authentication is a rare practice.]
At the risk of repeating a lot of what is already spelled out in the OWASP Forgot Password Cheat Sheet, here is what I would advise. (I hope to get these comments folded into the cheat sheet in the not too distant future.)
Step 1) Gather Identity Data
Not much to disagree with here, except for the obvious: don't collect social security numbers unless your site actually has a legitimate need for them. The same goes for collecting even just the last 4 digits of the SSN.
Step 2) Select Initial Security Questions / Answers
The appropriate time to require that a user choose security questions is when the user initially registers with your site. Ideally, allow them to write their own security question. If this is not possible or desirable for some reason, then allow them to select from a large set of well thought out security questions. It is a good idea to require that the answer to any security question be longer than some minimal length (say, 8 to 10 characters); otherwise brute force attempts become likely. Finally, regarding the storage of the security questions / answers: the questions should be encrypted (especially if they are chosen by the user) and the answers should be hashed.
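As a rough illustration of that last storage point, here is a minimal Java sketch using the standard JCE APIs: encrypt a user-chosen question with AES-GCM and store only a salted hash of the normalized answer. Key management is hand-waved, and in practice a deliberately slow hash such as PBKDF2 or bcrypt would be a better choice than plain SHA-256.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

// Sketch: encrypt the (possibly user-chosen) question, hash the normalized answer.
public class SecurityQuestionStore {
    public static void main(String[] args) throws Exception {
        SecureRandom rng = new SecureRandom();

        // Encrypt the question with AES-GCM (the key would normally come from a key store).
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        byte[] iv = new byte[12];
        rng.nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] encQuestion = cipher.doFinal(
                "What is the name of my first pet?".getBytes(StandardCharsets.UTF_8));

        // Normalize the answer (trim, lower-case) before salting and hashing it.
        String answer = "  Mister Whiskers ".trim().toLowerCase();
        byte[] salt = new byte[16];
        rng.nextBytes(salt);
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        sha256.update(salt);
        byte[] answerHash = sha256.digest(answer.getBytes(StandardCharsets.UTF_8));

        System.out.println("question.enc = " + Base64.getEncoder().encodeToString(encQuestion));
        System.out.println("answer.salt  = " + Base64.getEncoder().encodeToString(salt));
        System.out.println("answer.hash  = " + Base64.getEncoder().encodeToString(answerHash));
    }
}
```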
Step 3) Send a Time-Limited Token Over a Side-Channel
This follows the step to verify security questions in the OWASP cheat sheet, but I think it is better that it precede that verification, so as to not even allow the possibility of answering the security questions until one has received this out-of-band token. A random 8 character token is sufficient for SMS, but using something like ESAPI's CryptoToken is better if generating an emailed link. Making this step precede the verification of the security questions increases the difficulty of a potential attacker researching the answers ahead of time (unless there is only a small set of possible questions). In addition, the token should ideally expire some fixed time after it was created, say two hours or so.
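Here is one way such a token might look, as a minimal Java sketch: a short random value drawn from SecureRandom plus an expiration timestamp. Tying the token to the account, marking it single-use, and delivering it over SMS or email are left out.

```java
import java.security.SecureRandom;

// Sketch of a time-limited reset token sent over a side channel.
// The two-hour lifetime matches the suggestion above.
public class ResetToken {
    private static final char[] ALPHABET =
            "ABCDEFGHJKLMNPQRSTUVWXYZ23456789".toCharArray(); // no 0/O or 1/I, for SMS readability
    private static final long LIFETIME_MILLIS = 2 * 60 * 60 * 1000L;
    private static final SecureRandom RNG = new SecureRandom();

    final String value;
    final long expiresAt;

    ResetToken(int length) {
        StringBuilder sb = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            sb.append(ALPHABET[RNG.nextInt(ALPHABET.length)]);
        }
        this.value = sb.toString();
        this.expiresAt = System.currentTimeMillis() + LIFETIME_MILLIS;
    }

    boolean matches(String presented) {
        // Token is only accepted before it expires; a constant-time compare is preferable in production.
        return System.currentTimeMillis() <= expiresAt && value.equals(presented);
    }

    public static void main(String[] args) {
        ResetToken token = new ResetToken(8); // 8 characters is reasonable for SMS
        System.out.println("Send over SMS/email: " + token.value);
        System.out.println("Still valid? " + token.matches(token.value));
    }
}
```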
Step 4) Require the User to Return the Token
The user must return the token sent to them over the side-channel. That is, they must reply to the SMS text message or click on the link sent to their email address on record. Furthermore, they must do so within the required amount of time; otherwise the token becomes invalid.
Step 5) Redirect the User to a Secure Page to Verify Security Question(s)
If the token is valid, take the user to a page (using SSL/TLS) where they can answer the questions. Do not allow the user to select which question they desire to answer (assuming that there are multiple question / answer pairs; ideally, make them correctly answer them all). Developers should also take precautions to limit the effectiveness of guessing. For example, using CAPTCHAs to reduce the success rate of automated attacks and allowing only (say) 5 consecutive failed attempts before temporarily locking out the account from further attempts to answer the security question. (A temporary lockout of a few minutes should be sufficient.) Finally, if the account is locked out because of N consecutive failed attempts, note it in a security audit log as well as notifying the user via email or an SMS text message.
Step 6) Allow User to Change Password
Once the user has correctly answered all required security questions (one or more, based on the risk of a potentially lost or compromised password), allow the user to change the password. Then (optionally) redirect the user to the login page and require that they re-login with their newly selected password. (Requiring that they re-enter their password will reinforce it in their mind. Of course, one must weigh this benefit against the inconvenience of the user experience.)

Conclusion

Well, that's enough ranting on this topic of warning signs of poorly implemented authentication practice. Tell me what your thoughts are on this. Have you seen any additional authentication red flags that I've forgotten? If so, let me know.

Regards,
-kevin

Saturday, March 19, 2011

Signs of Broken Authentication (Part 3)


Today, I'll cover two more warning signs of broken authentication. But first, a word from our sponsor. (Warning: Shameless plug ahead.)

Hey boys and girls! Do you have trouble thinking up new secure passwords for each new web site that you visit and have resorted to using passwords like “password1”, “password2”, “password3”, etc. because you know someone has told you to use different passwords for each site? Or you actually have secure passwords for your sites, but you have trouble remembering them? Well, look no further than Kevin Wall's Creating Good Passwords. You'll be glad you did.

We now return you to our regularly scheduled blog.

Red Flag #3: Mapping Password Characters to Digits for Entry via Telephone Keypad

Another red flag, which I have been running across much more frequently, is sites that allow you to enter your password via a standard telephone keypad. You can check for this on sites that you know allow it. For instance, if your password were “JeHgr72w”, can you enter it as “53447729” from the numeric keypad of your phone? If you can, they are very likely mapping the characters in your password to the digits 0 through 9 as arranged on a phone keypad. This really dumbs down the entropy of your password—much worse than simply restricting which characters they allow you to use. If you run across such sites, I would advise you to choose the maximum length password that they allow. You would think this would not be common with sites that handle confidential data, but just last year I discovered that a site handling our benefits was doing this. I ran across it when I called their customer service desk and they asked me to confirm who I was via my password. When I asked how to enter alphabetic characters on a telephone keypad, they incredulously answered “for A, B, or C, enter 2; for D, E, or F, enter 3; etc.”. I should have been suspicious when their web site didn't allow me to choose any special characters at all. (See Red Flag #2.) So far, the sites where I've encountered this have been limited to IVR systems associated with customer support. I guess one could argue that this is better than sites requesting things like your SSN for verification of identity—especially in cases where they shouldn't be using your SSN any longer (health care services come to mind). But the downside is that such sites are dumbing down potentially strong passwords without the knowledge of their users, so consumers beware!
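If you are curious what that mapping looks like in code, here is a small Java illustration of the collapse: every letter in “JeHgr72w” is reduced to its keypad digit, yielding “53447729”, and any other password that collapses to the same digit string presumably authenticates just as well on the IVR side.

```java
// Illustration of the keypad mapping described above: letters collapse to digits 2-9.
public class KeypadCollapse {
    static char keypadDigit(char c) {
        char u = Character.toUpperCase(c);
        if (u >= 'A' && u <= 'C') return '2';
        if (u >= 'D' && u <= 'F') return '3';
        if (u >= 'G' && u <= 'I') return '4';
        if (u >= 'J' && u <= 'L') return '5';
        if (u >= 'M' && u <= 'O') return '6';
        if (u >= 'P' && u <= 'S') return '7';
        if (u >= 'T' && u <= 'V') return '8';
        if (u >= 'W' && u <= 'Z') return '9';
        return Character.isDigit(c) ? c : '?';   // specials usually aren't allowed anyway
    }

    public static void main(String[] args) {
        String password = "JeHgr72w";
        StringBuilder digits = new StringBuilder();
        for (char c : password.toCharArray()) {
            digits.append(keypadDigit(c));
        }
        System.out.println(password + " -> " + digits);   // prints JeHgr72w -> 53447729
    }
}
```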

Red Flag #4: No Account Lockout

Another authentication red flag is sites that have no account lockout after some number of consecutive failed login attempts. If a web site allows someone unlimited guesses at your password, you had better have a really strong password, because there's nothing there to slow the attacker down.

So what should developers do? Well, it should be obvious that what they do NOT want to do is to permanently lock out a user account. If you thought help desk calls about password resets were expensive before, just try implementing a permanent lockout. Some hacker will come along and hit your site with something that guesses user names (in general, not very difficult to guess, especially if you have a list of first and last names for users) and just try N intentionally incorrect passwords for each user name (where N is the threshold of failed attempts at which the site locks out the account). If you are the developer of a site that implements this permanent lockout policy, let's just say for your sake, I hope you are away on vacation in the deep woods of Canada where no one can find you if / when this happens.

So what's the correct approach? Well, the idea is to sufficiently slow down an attacker who is making online guesses against your login page. So pick some reasonable number of password attempts (5 seems about right), and if there are (say) 5 consecutive failed login attempts for a specific user name, then temporarily lock out that user account for some short amount of time (between 5 and 10 minutes is good). On the failed attempt that triggers the lockout, you should display an error message that says something like:

Your user account has been temporarily locked out for T minutes after N consecutive failed attempts. Please try again in T minutes.

where T and N are the lockout duration and failed attempt threshold respectively. A similar error message would be displayed if a login attempt was made while the account is temporarily locked out.

Once a user successfully authenticates after an account had been temporarily locked out, it is also a good idea to display a message that effectively gives notice to the user that this had happened and the time when it occurred. This helps the user know that someone else may have been trying to crack their account, and if so, they may wish to change their password as a result.

POLICY NOTE: If your company has a policy to not divulge whether a login attempt failed because the user name or the password was incorrect (e.g., “Login failed: Invalid user name or password” rather than “Login failed: Invalid user name” / “Login failed: Invalid password”), then you need to temporarily lock out even invalid user names! Otherwise, an attacker can use this account lockout information to discern whether or not a guessed user name is valid.
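Putting the above together, here is a minimal Java sketch of such a temporary lockout policy: five consecutive failures lock a user name for ten minutes, success clears the counter, and unknown user names are tracked the same way so the lockout behavior doesn't leak which names are valid. Persistence, audit logging, and user notification are only marked with comments.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a temporary account lockout policy (in-memory only).
public class TemporaryLockout {
    private static final int MAX_FAILURES = 5;
    private static final long LOCKOUT_MILLIS = 10 * 60 * 1000L;

    private static class State { int failures; long lockedUntil; }
    private final Map<String, State> states = new HashMap<>();

    synchronized boolean isLockedOut(String username) {
        State s = states.get(username);
        return s != null && System.currentTimeMillis() < s.lockedUntil;
    }

    synchronized void recordFailure(String username) {
        // Track both valid and invalid user names so lockout doesn't reveal which exist.
        State s = states.computeIfAbsent(username, k -> new State());
        s.failures++;
        if (s.failures >= MAX_FAILURES) {
            s.lockedUntil = System.currentTimeMillis() + LOCKOUT_MILLIS;
            s.failures = 0;          // write a security audit log entry and notify the user here
        }
    }

    synchronized void recordSuccess(String username) {
        states.remove(username);     // also tell the user about any earlier lockout at login time
    }
}
```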

Wednesday, March 16, 2011

Builders vs. Breakers Dichotomy

I've just posted a reply to Marisa Fagan's Dark Reading blog. My reply is short (well, for me at least ;-). Marisa weighs in on the adversarial relationship between developers and security people. I've added my $.02 as to why that's not necessarily a bad thing as long as respect between the two roles is maintained. I am not going to repost it here. If you are interested, you can find it here.

-kevin

Tuesday, March 15, 2011

Response to Mark Curphey's Blog “OWASP—Has It Reached a Tipping Point”


Back on February 18, 2011, Mark Curphey, the original founder of OWASP, wrote a very thought-provoking blog post regarding the direction that OWASP seemed to be taking.

And while I hadn't originally intended on commenting on it, Rohit Sethi's post to the Secure Coding mailing list caused me to rethink this. (Pardon me for not addressing everything in the sequential order that Mark brings them up.)

Curphey's blog refers to tweets coming from the OWASP Summit in Portugal as the singular event that precipitated his reflections about OWASP's direction. He points out that the tweet that caught his eye was one where John Wilander reported an OWASP Summit attendee remarking on stage that “Developers don't know shit about security”. Curphey refers to John Wilander's post “Security People vs. Developers”, which Wilander blogged about 5 days after his initial tweet and which is itself a very thoughtful post. In it, Wilander says he later added the comment “Well, I got news. You don't know shit about development”, obviously in reference to the OWASP Summit attendee who originally made the comment.

Well, I'm going to respond in part to both Curphey's and Wilander's blog posts, and perhaps add a little (in)sanity to the mix. Why? Well, for one, I have extensive experience in both development and security. I've been a developer for 30+ years, and been involved with application security for about 12 years. So, IMNSHO, I know “shit” about both development and security, and it is from that frame of reference that I am posting this.

So Who Is Right?

So let's start out trying to answer the question: who is correct here? Which, if any, of these is the correct point of view:
  • Developers don't know jack about security.
  • Security people don't know jack about development.
Or perhaps, let's go even further and contemplate:
  • Developers don't know squat about development.
  • Security professionals don't know squat about security.
Well, I could answer “none of the above” or I could just as easily answer “all of the above”. That alone should tell us that we are trying to answer the wrong question. But I have honestly seen developers (some of them even with PhDs in computer science) who couldn't write code if their life depended on it, and architects who no longer remember how to code. I've also seen security professionals who are great pretenders. They know all the right buzzwords and have all the right certifications (e.g., ISC2's CISSP, GIAC's Information Security Professional, etc.), but ask them to write up a threat model for some system and all they can give you is a blank stare.
That is not the point here!
A former colleague and close friend of mine once told me that he thought that about 20-25% of the people in any profession are clueless @holes. And while that may be true (I think the percentage is rather high, and I am in no position to judge other professions, so I will refrain from further comment), that isn't the point. Companies, professional societies, and society as a whole have to play the hand we've been dealt. Companies generally don't have the option to use another company's employees. So the real point is not “who is correct?” in this debate, but rather “how can we help IT as a whole move in the direction of more secure code without sacrificing developer (and perhaps more importantly, business) interests?”.
Which brings us back to the original “developers don't know shit about security” comment. While I completely understand one's need to occasionally vent frustration (and a cage wrestling match between Linus Torvalds and Bruce Schneier might even prove amusing ;-), sniping at each other is seldom a productive way to reach your goals, as all it does is alienate the two parties that very much need to be working together to solve these issues. Taking an “us versus them” mentality is bound to fail, but by taking a “collective 'we'” stance, we might just make some progress.

So What Does Matter?

Well, we could start with respect, for one. Now I will be the first to admit that I do a poor job at this. All too often, I confuse the issues of “trust” vs. “respect”. I believe that it is OK to require that people earn some degree of trust. It is not OK to try to make them earn respect. There should be a certain amount of respect that we have for both our development and security colleagues, based simply on the common dignity of humankind and the common profession to which we belong.
What is not respectful is to assume that you clearly understand the motivations of individuals. At times, we are all probably guilty of this. So we might make the assumption that all VB programmers are stupid. I must confess that I have propagated this myth by quoting Dijkstra's comment on BASIC numerous times in my email .sig and that is disrespectful. Likewise for my .sig with Richard Clarke's quote “The reason you have people breaking into your software all over the place is because your software sucks...”. So I just want to be the first to step up and admit my guilt and to try to begin the healing process. I am unfortunately rather jaded over my 30+ years in IT, seeing the same dumb things repeated over and over again, often by the same people (myself included). But that is no reason to be disrespectful to them. So if I've offended anyone, I apologize for being so inconsiderate and ask your forgiveness. And even though the Dijkstra and Clarke quotes are two of my favorites, I will not use them again in my email .sig.

Reaching Common Ground

Wilander cites a poll that he took of 200+ developers, asking them to rate where security fell in their priorities. Their rankings came out like this:
  1. Functions and features as specified or envisioned
  2. Performance
  3. Usability
  4. Uptime
  5. Maintainability
  6. Security
If you have done any development in the trenches, you probably are not surprised by these rankings. But I think Wilander left off one important rating. When I talk to developers, and indeed in my former life as one, “schedule” would always come up as #1, and not overrunning the budget was always a close second or third. Perhaps meeting schedule and budget was just implied; I don't know, as I haven't seen the poll to which John refers. However, if nothing else, it does point to some other contributing forces as to why security gets ranked so low. (Surprisingly, this is often the case even when security software is involved, so these forces—probably coming from the business—seem to be universal, at least in the commercial sector.)
I have often said that I believe that development is more difficult than security. Why? Well, for one, developers are expected to deal with the security aspect of their software plus all these other items in Wilander's list.
Because of that, I think that my observation is true in the general case:
    “It's easier to teach a good developer about security than it is to teach a good security person about software development.”
So if I have to start training people about security (and I have done this with several), I always prefer to start with someone who is a good developer and teach them about security rather than going in the other direction.

Back to Curphey's Blog

Which brings me back, albeit in a circuitous way, to Mr. Curphey's astute observations.
Mark states:
    “I had always hoped that the community would develop into a community of developers that were interested in security rather than a community of security people that were interested in software. I wanted to be part of a community that was driving WS* standards, deep in the guts of SAML and OAuth, framework (run-time) / language security and modern development practices like Agile and TDD rather than people seemingly obsessed by HTTP and web hacking techniques.”
Why hasn't this happened? Well, in my opinion, one reason is that an open source community's involvement in standards work is definitely at a disadvantage to special interest groups comprised of well-funded vendors who have a vested interest in developing such forward-looking work for the benefit of their respective companies. Involvement in these standards takes a lot of time, and the reward is usually low, that reward being seeing people adopt your standard. Look how long it is taking the various OASIS WS-* standards to gain critical mass.
Secondly, this sort of ambitious work requires a broad base of expertise. Let's take SAML, for instance. That requires expertise in XML, XSDs, encryption, and authentication protocols, and experience in writing specifications is highly recommended as well. That sort of breadth, beyond a surface level, is rare in individuals. However, a company sponsoring a specific standard need not rely on a single individual (even if they may have a single point of contact); they can afford to have several people participate. After all, why not? There frequently is a profit motive there. (This is especially true of standards that are on their 2.0 or later revisions.)
Thirdly, people typically stay with what they are comfortable with. So if their expertise is HTTP-based attacks, they stick with it until it no longer provides their meal ticket. And let's face it, when OWASP was started a decade or so ago, HTTP was pretty much all you needed to understand to get by. (Well, that plus a fundamental understanding of JavaScript.)
Mr. Curphey continues with
    “We can’t have security people who know development. We must have developers who know security. There is a fundamental difference and it is important.”
I wholeheartedly agree with this. I think it aligns well with my above observation that it's easier to take a good developer and train her about security than vice-versa. If we fail to keep this idea in the forefront of our minds, we are bound to fail. My only addendum to Mark's comment would be to rephrase it to say that “we can't only have security people who know development”. Those folks are still valuable. I think and hope Mark would concur.
Curphey continues...
    Manage the Project Portfolio – When I look at the OWASP site today its hard to see it as anything else but a “bric-a-brac” shop of random projects. There are no doubt some absolute gems in there like ESAPI but the quality of those projects is totally undermined by projects like the Secure Web Application Framework Manifesto. When I first looked I honestly thought this project was a spoof or a joke.”
I see the same disorganization. I think that's in part something that Wikis seem to encourage. Once they evolve beyond a certain critical mass, similar things get written down many times on many different pages unless there is an overall editor / project manager to manage it all. AFAIK, OWASP does not have this and it suffers for it. I think it also explains the lack of uniform consistency and quality across the various OWASP projects.
Also, while I appreciate Curphey's candor, I appreciate even more Rohit Sethi's non-defensive response. When I read Rohit's comment to Mark's blog, I must say that while I'm sure it was a hard pill for him to swallow, he stepped up and took it like an honorable man. However, I do disagree in part with Curphey's assessment of the OWASP “Secure Web Application Framework Manifesto”. I don't think it is a total waste. A reply posted earlier today by Benjamin Tomhave seems to express a similar sentiment when Ben writes “but I hate to see the white paper lost. Why not also look at joining efforts with something like the Rugged Manifesto movement?”. There is something to be redeemed in almost every mess. If nothing else, it is useful to examine it in more detail to see why it was deemed a failure. (I must admit I only took a quick 5 minute read through the whole manifesto; however, my overall sense was not that it lacked valuable information, but rather that it lacked appropriate organization, along with a certain degree of incompleteness.) If it were written in such a way that it could be referenced BY DEVELOPERS (your audience) as a specification document, it could prove useful...especially for new frameworks that are only now getting underway. But if such a document is not well-organized and approachable from the point of view of developers, it will not get used. Period! (And don't even ask why 'Period' demands an exclamation point; I just like the sense of irony.)
Regarding Curphey's second point on “Industry Engagement and Communications”, I have no personal basis from which to speak, so I'll just keep my mouth shut on this one. (I can hear you all saying “Thank God!” :)
To Mark's third point on “Ethics / Code of Conduct”, I think he is spot-on. Vendors use the “OWASP” name to sell their products more than ever. (“Successfully defends against the OWASP Top Ten attacks”, etc.) But unless OWASP is willing to take a stand and bite the hand that sometimes feeds it (Mark's second point), this will never change. In particular, unless OWASP is willing to litigate against those who take advantage of the OWASP name to peddle their products, I don't see this changing at all. IANAL and thankfully don't even play one on TV (or YouTube, for that matter), so that is up to the OWASP Board to decide, not me. (Hey, Jeff! Are you listening? What's your $.02 on this?)
And finally to Mark's last point “Engaging Developers”. Mark writes:
    “Maybe Software Security is for developers and Application Security is for security people. The first persona is the builder and the second persona the breaker. ... Developers best understand what they need and want, security people best understand what they need and want.”
No! I don't think so. There may be a continuous spectrum from builder to breaker, but if we treat these as independent goals, we will never get to where I think we all want to be, which is “secure software”. So they better have a common goal; they better both “need and want” the same things. Otherwise the whole security effort overall will fragment and fall apart. We need each other, and breakers and builders do have different means to achieve the same goal. But it had better be the same goal (well, assuming you breakers are “white hats”), and that goal is to improve software and systems security. Don't forget that!

Wow; I've blathered on much longer than intended. My original intent was to post this as a reply to SC-L, but at 4 pages, it's a bit long for that. (Well, if you only learned one thing from this post, it's probably a side observation: “Now I know why Kevin is not on Twitter”. ;-)
Send me your thoughts.
-kevin

Saturday, March 12, 2011

Signs of Broken Authentication (Part 2)


Red Flag #2: Restricted Character Set for Passwords

In the last post, we examined limiting password length as an authentication red flag. Today I want to look at another common red flag, that of restricting the allowed character set for passwords. As a simple example, sometimes web sites will only allow alphanumeric characters in passwords.

The scenario goes something like this... you register for an exciting new web site that all your friends are clamoring about, and you, being the clever type, try a password like 'G00d-bye!'. (One that clearly no one would ever be able to guess ;-). After confirming this as your new password, you click on the 'Submit' button, and the web site returns an error and informs you that “you have one or more invalid characters in your password; please try again”. (Thankfully, some of the more informative sites will actually tell you what characters they do accept; how helpful is that, huh?)

After trying various other unacceptable passwords, you eventually discover that this site's developers have apparently read XKCD's Exploits of a Mom and think they are preventing SQLi because they don't allow the evil “-” character in their passwords.

The really “ingenious” sites also don't allow '<', because after all, someone might create a password whose value is something like “<script>insert_evil_javascript_here</script>”, exposing an XSS vulnerability. (These same sites may reason that this is also good rationale to limit the password's length; make it short enough and no dangerous amount of script can be inserted.) They don't allow “:” for a similar reason, because after all, the really clever hacker may instead try to use “javascript:insert_evil_javascript_here” for their password.

Or you might find that the developers disallow characters like '$' or '@' or '|'. They may do this because they have implemented their web site in Perl and their password handling is using either 'eval' or `` or system() or pipes to pass your password string through to some other back-end system for the actual authentication processing and those characters are problematic in such cases. (If so, they possibly have worse issues, like command injection, but this is only a contrived example, so we'll go with it, okay?)

Pretty soon, these developers have had so much trouble with so many different special characters that they simply decide to disallow all special characters; instead they just check to make sure that you use only their benign characters, such as alphanumerics, in your password. (On the bright side, at least this approach lends itself to white-listing rather than black-listing.)

Implications

But, “why is this a problem?” you ask. Well, because it greatly reduces the number of possible passwords of a given length. If N is the number of characters in the permitted password “alphabet” and L is the maximum length of the password, then there are only O(N^L) possible passwords with that “alphabet”. (The exact number of possible passwords of up to L characters chosen from an alphabet of size N is a bit larger, since obviously we can choose passwords that are between the minimum length m and the maximum length L. Working out the exact number of possible passwords is left as an exercise for the reader.)

If alphabetic (both upper and lower case), numeric, and special characters on the typical QWERTY keyboard are permitted, the size of the “alphabet” is 95 characters (including space, but excluding tab and newline which are very difficult to enter from a web browser and assuming I counted correctly :). If you exclude all the special characters, you are left with only 62 alphanumeric characters. If you use a minimal length password, which many of you probably do (and for many sites, that is perfectly reasonable; using 'HuH75^mn43,1@#' is probably fine for my bank, but a bit overkill for the NY Times site, where clearly all passwords should be either “WSJ_rules” or “WashingtonPost”, just to protest such nonsense), then an 8 character password works out on the order of 62^8 or 218,340,105,584,896 possibilities. By comparison, if we were to allow special characters in the password, then an 8 character password has 95^8 or 6,634,204,312,890,625 possibilities, which, if I've done my math right, is about 30.4 times more. This means, for instance, if an adversary were able to brute force all the possible 8 character alphanumeric passwords in one day, it would take that adversary roughly a month to brute force a password comprised of all possible alphanumeric and special characters. (In reality, if off-line dictionary attacks are viable at all, these numbers are not too far fetched using a fairly cheaply built high-end farm of GPUs. But that's a topic left for another day.)
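If you want to check the arithmetic yourself, here it is in a few lines of Java:

```java
import java.math.BigInteger;

// The arithmetic from the paragraph above: 8-character passwords over a
// 62-character alphabet vs. a 95-character alphabet.
public class PasswordSpace {
    public static void main(String[] args) {
        BigInteger alnum = BigInteger.valueOf(62).pow(8);
        BigInteger full  = BigInteger.valueOf(95).pow(8);
        System.out.println("62^8 = " + alnum);   // 218340105584896
        System.out.println("95^8 = " + full);    // 6634204312890625
        System.out.printf("ratio ~ %.1f%n",
                full.doubleValue() / alnum.doubleValue()); // ~30.4
    }
}
```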

Remediation

So we see that restricting which characters may be used in a password is another red flag for authentication. But now you ask, "how do we fix this?". Well, first of all, we make one very important design decision. Once a user's password is submitted, we decide we will never, ever attempt to display it again. This is good on many different levels (especially for privacy reasons), but another major benefit is that we never have to worry about issues like XSS.

I will outline one simple scenario that I prefer, but obviously there are several variations of this that will work as well.

In your password handling code, which would cover not only your login page but also your change / reset password pages, you immediately convert the password string to a byte array. (A char array will do as well [in languages, such as Java, where these types are different], but byte arrays usually interface more easily with message digest or symmetric encryption APIs.) You should do this conversion as early as possible and you should use a specific—rather than the default—character set for the conversion. (Aside: I'd recommend UTF-8 since it is so widely supported; using the native default encoding is asking for trouble because if you ever change deployment architectures you could find yourself with a user store where older passwords no longer work on the new system.) Once you have converted it to a byte array, either hash the byte array (ideally using a suitably sized random salt) or encrypt it. Then encode it in some standard format for storing...base64 encoding is typical, and finally store it. All subsequent operations with the user's password are then done via this (say) base64-encoded hashed or encrypted password which you have securely stored somewhere.
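Here is a minimal Java sketch of that flow: explicit UTF-8 conversion, a random salt, a SHA-256 hash, and Base64 encoding for storage. It only shows the plumbing; a deliberately slow algorithm (PBKDF2, bcrypt, scrypt) is a better choice than a plain digest for the hashing step, and real code would handle the plaintext char[] more carefully.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Arrays;
import java.util.Base64;

// Sketch: convert the password to bytes with a fixed charset, salt it, hash it,
// and Base64-encode the result for storage.
public class PasswordStorage {
    public static String hashForStorage(char[] password) throws Exception {
        // Convert to bytes as early as possible, with an explicit charset so the
        // stored hashes survive a change of platform default encoding.
        byte[] pwBytes = new String(password).getBytes(StandardCharsets.UTF_8);

        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);

        MessageDigest md = MessageDigest.getInstance("SHA-256");
        md.update(salt);
        byte[] hash = md.digest(pwBytes);

        Arrays.fill(pwBytes, (byte) 0);          // don't leave the plaintext lying around

        Base64.Encoder b64 = Base64.getEncoder();
        return b64.encodeToString(salt) + ":" + b64.encodeToString(hash);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(hashForStorage("d0n't-ever-d0-th4t!".toCharArray()));
    }
}
```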

Final Word

One final word on this. A colleague and I have been experimenting with using a Web Application Firewall (WAF) to monitor some web sites. One thing that we noticed is that occasionally the WAF will flag a password containing certain special characters (usually single quote (') or hyphen (-), but occasionally '<') as an attempted SQLi or XSS. In almost all cases, these are false positives where end users are innocently trying to use these characters. For example, a user may try to enter "d0n't-ever-d0-th4t!" as her password, but the WAF thinks this is an SQLi attempt because of the presence of the single quote and/or hyphen. Unfortunately, the default action of the particular WAF that we've been experimenting with is to block such requests. Such default behavior, if left in place, is likely to frustrate users and your help desk, so it is easy to see how a site's developers might respond by simply not allowing such troublesome characters in passwords to start with.

But IMHO, this is the wrong tack. Rather than trying to work around the symptoms, fix the problem where it is...the broken WAF rules. A wacky WAF is no excuse for dumbing down users' passwords!

Next time I will discuss a much worse variant of the red flag that this particular blog post examined, as well as the failure of authentication systems to provide automatic account lockout.

Until then,
-kevin

Thursday, March 10, 2011

Signs of Broken Authentication (Part 1)


First a warning...this topic is one of my hot-button issues, so my tone may be a bit more over-the-top and sarcastic than usual. (But for those of you at home with small children, I promise to refrain from cussing.)

OK, quick quiz... have you ever visited a web site where they want you to register with a user name and password, and when you finally get to the password field to choose your password you find that they really make you dumb down your password choices by restricting what characters you are able to use and/or how long you are allowed to make your password?

Or maybe you have seen web sites that don't use https for their login pages???

Unless you've just crawled out from under a rock and have just discovered web browsers, you've all been there.

However, unless you suffer from security-paranoia like me (it's all the fault of those NSA folks and their blasted little black helicopters; now where'd I put my tinfoil hat), these things probably don't bother you. Over the next few days, in a multi-part blog post (thanks Matt! ;-), I'm going to tell you why these things should concern you and how to spot "broken" authentication. (But until then, just remember, we security professionals are paranoid so that the rest of you don't have to be.)

Why This Is Important

Broken authentication on a web site is akin to a bank whose vault door has rusty hinges. Your money may be secure there, but you might want to think about taking your cash to the bank down the street. Likewise, if the authentication on a web site has poor security, it is likely that the security for the rest of the site is even worse. That's because authentication is probably the one area where most developers try to pay very close attention to the security details. It is generally one of the few spots for which they may even have specific security requirements.

Red Flag #1: Maximum Password Length

Let's start with a restricted password length. Why do developers do that? The most obvious reason is that they are storing your password in some database somewhere in some fixed-size VARCHAR column. So they choose some minimum size to make their SOX-compliance people happy (typically 6 or 8 characters) and choose a maximum length based on the largest size that their database column can accommodate. Not uncommonly, that size might be something like a power of two, so maximum lengths like 16 or 32 characters are common.

“Isn't 16 or 32 characters long enough for a secure password?”, you ask. Of course it is, but that's not the point. The point is, if they are imposing a maximum length, I can almost certainly guarantee that they are not running your password through a secure, one-way hash such as SHA-1 or SHA-256 before they store it. Most likely, they are just storing your password as plaintext, although if you are lucky, they may be encrypting it and storing the ciphertext for your password. (Probably not, but one can always hope.) Not hashing passwords is bad for a lot of reasons, but the biggest reason that you as an individual should be concerned is someone stealing your password. Minimally, an attacker who captures your cleartext password can use it to access this particular site whenever he or she wants. And how many of you reuse your same passwords on multiple web sites? Uh huh. I thought so.

Well, when your passwords are stored as cleartext, if their web site is susceptible to SQL Injection (SQLi), a fairly common vulnerability, some hacker just might make off with their entire User table along with all of its cleartext passwords, which some of you (you know who you are!) might just be using for your bank or 401(k) or your Facebook page, etc. (I can hear all of you now saying “Not me; I would never do such a thing.” Liar!) And of course, even if their site is safe against SQLi because their developers were smart enough to have read the OWASP SQL Injection Prevention Cheat Sheet, your passwords are still sitting there waiting for a rogue DBA to grab them. (It happens; trust me. Google it if you don't believe me. I'll wait...)

On the other hand, if one hashes passwords, preferably with a suitably sized salt (we won't be covering the reasons for that here, but Google for "salt" and "rainbow tables" if you are interested), then the hash (or hash plus salt) will always be some fixed size regardless of the length of the original password. In such cases, there is no reason to impose a maximum length on the user's password because the hash that's stored is of fixed length.
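
If you want to see that for yourself, here is a tiny, illustrative Java snippet (mine, not from any particular site) showing that a SHA-256 digest is always 32 bytes, about 44 characters once base64-encoded, no matter how long the input password is:

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    public class FixedSizeDigest {
        public static void main(String[] args) throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            String shortPw = "abc123";
            String longPw  = "a much, much longer passphrase that rambles on well past 32 characters";
            // Both digests are exactly 32 bytes, so the storage column size
            // does not depend on the password length at all.
            System.out.println(md.digest(shortPw.getBytes(StandardCharsets.UTF_8)).length);  // 32
            System.out.println(md.digest(longPw.getBytes(StandardCharsets.UTF_8)).length);   // 32
        }
    }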

So imposing a maximum length on a user's password is generally a good indication that the web site is not hashing your password, and generally, that is a bad thing.

(to be continued...)
-kevin

Friday, March 4, 2011

ESAPI and the Padding Oracle Attack

For those of you who don't read the OWASP blog (you should), the link below is, in part, what started all of this. I started out responding in an email to Jeremiah Grossman about a tweet he had made to Jeff Williams regarding how quickly the ESAPI team had responded to the padding oracle attack that Juliano Rizzo and Thai Duong had discovered in ESAPI. What started out as a private email to Jeremiah and a select few of the ESAPI team shortly thereafter ended up on the OWASP blog.

Anyhow, without further ado, here is the link to the OWASP blog post:
ESAPI and the Padding Oracle Attack

Thursday, March 3, 2011

Answers to Hard Questions


There have been a few FOSS and work colleagues who have commented that I should start a blog. (You know who you are, so I won't embarrass you in public.) Each time I have thought about it, I found it difficult to get started, but I figured that I'd better stop procrastinating lest I become extinct. (I hear it's much more difficult to blog once you're on the other side of the grass.)
At some point, I promise to discuss ESAPI (and particularly the crypto-related packages), but I didn't want to dive right into that without some other lighter weight (says the old fat guy without any sense of irony—or perhaps just without any sense, period) material to start things off.
Well, if you are reading this blog, you either probably have insomnia and are in desperate search of a cure (look no further!), or are someone who wanted me to embarrass myself in public (likely those same “someones” who wanted me to start a blog in the first place), or you were pointed here by a link that someone sent to you. (In that last case, I apologize for your misguided friends. But like I always say sometimes, “With friends like them, who needs enemas?”)
Anyway, if you really want to know about my work history, etc., then you can find out by following the links to my LinkedIn, OWASP, and Google Code pages from my Personal Identity Portal at https://kwwall.pip.verisignlabs.com/. But if you get really bored, don't blame me; sleeping pills for all my insomniac friends.
So, after babbling for almost ¾ of a page, let me finally cut to the chase. What I really decided I wanted to do was to attempt to answer the penetrating questions posed by Michael Kassner on his IT Security blog over at TechRepublic. His post, “Brave new world: Asking hard questions about security, user rights, and trust”, poses the following five questions, which I will attempt to pontificate and wax philosophical about without sounding too much like an @hole:
  1. Should access to the Internet be a privilege, a right, or…
  2. Should qualified organizations be allowed to remove malware from Internet-facing computers without the owner’s permission or knowledge?
  3. What guarantee do I have that a piece of open-source software has been adequately vetted by qualified and honest reviewers?
  4. Should a digital/electronic signature carry the same weight as a written signature?
  5. How are we to be assured that electronic voting is trustworthy?
So ready? Here goes... 
Question #1: Should access to the Internet be a privilege, a right, or...

Yes.
 
OK, seriously. Anyone who has read any of my emails is probably saying to themselves, “I don't know who you are, but you can't be the Kevin Wall that I know, because he can't possibly answer an open-ended question with a one-word answer. For that matter, he can't even answer a T/F question with a single word.” OK, so you got me. But that's all I'm going to say about it, for now at least. Maybe more in a future blog if people are foolish enough to encourage this old dinosaur.
Question #2: Should qualified organizations be allowed to remove malware from Internet-facing computers without the owner’s permission or knowledge?
To answer a question with a question, “Who selects the 'qualified organizations'?”.
Or to answer like a typical engineer or lawyer (NB: IANAL): "It depends". In most cases, I would answer “no”, if for no other reason than that accessing someone's system without the owner's permission is unauthorized access by definition, which constitutes a federal offense. (Since IANAL, I won't bore you with the exact laws. I'll let any attorneys reading this bore you with those details, as I have plenty of other things to bore you with.)
However, the more important reason that I would answer “no” is that there is no organization that is likely to be competent to do this (especially without significant human guidance), and there is probably as much of a chance that they would hose your personal system as repair it. Plus, if they do screw it up, who is liable? In the general sense, this is just stupid and we shouldn't even be discussing it. In most cases, the cure is worse than the disease.
So when might I answer “yes”? Well, I would answer yes if your entire personal computing system, and its malware, is in the “cloud”, so that in reality you are using someone else's CPU resources and storage. I would also answer “yes” if the computing equipment was donated as part of a service / use contract. This is not as far-fetched as it may first sound. Wireless providers are already giving away free, or almost free, smart phones with a 2-year contract. (At some point, the same will likely be true for tablet computers.) So if part of the contractual EULA stipulates that the service provider is permitted to attempt to remove malware from the device without prior notification, at least legally it would be OK. But in that sense, the user has already signed over permission; they just may not have granted permission, or been given notice, for some specific incident. (Note that in this regard, it is not unlike the AV vendors who will attempt to remove malware, or, if that is not possible, to put the infected files into a quarantine area.) But even then, I think that the “qualified organization” needs to be ready to accept liability if their means of removal makes some other piece of unrelated computing equipment (e.g., a router, a printer, another computer on the user's LAN, etc.) inaccessible.
Question #3: What guarantee do I have that a piece of open-source software has been adequately vetted by qualified and honest reviewers?
HA! HA HA HA HA HA!!! Come on man; are you serious??? Have you never read the warranty disclaimers and limitations of liability that come, in some variety, with pretty much every piece of software you have ever used, whether it be proprietary or open source? You know, the sections that ARE WRITTEN IN ALL UPPERCASE SO THAT YOU WILL FEEL INTIMIDATED BY THE ATTORNEYS SHOUTING AT YOU?
If any software were properly vetted, it might be argued that developers would not need to hide behind the cloak of such legal gibberish. (And, BTW, my personal attorney told me that in the state of Ohio, one cannot simply waive tort liability regardless of what the contract says. But YMMV, so talk to your own attorney.)
Personally, I think it is sad that we are in this state (no, not Ohio!). It started out as well-intentioned attorneys trying to protect the interests of their clients, which is what we should expect, but somewhere along the way it has gotten off track, to the extent that in some cases I believe it has encouraged sloppy software development practices.
[Aside: There is some push back against this over at ruggedsoftware.org. (That's “rugged” as in “sturdy and strong”, and not “rugged” as in “roughness of surface”, although there's way too much of that sort of software out there too.) But frankly, no one (myself included!) yet has the balls to renounce the software warranty disclaimers and liability restrictions placed in every software license known to FOSS. In fact, when I even suggested it to one of the Rugged Software co-founders, he looked at me like “Are you nuts???”.]
Software correctness is one of the most difficult things to get right. In a sense, I believe programmers make a comparable number of errors (as measured by some means of “error rate”) as humans do in any other complex endeavor they attempt. However, at this point, computers are very different from humans. For the most part, humans grok the semantic meaning based on the surrounding context and thus get beyond the mistakes. As an example, if I wree to witre tihs setnecen lkei tsih yuo cna pborlaby sitll uendrsdtna it even though you find it a bit rough going. But if programmers were to make similar corresponding mistakes in their programs and that portion of the code is executed, it almost always results in disastrous consequences. We have yet to invent a computer or operating system based on “do what we mean, not what we code”. In that sense at least, humans are much more resilient than software. (We do not want our "rugged software" to become "brittle software".) However, it is because of this complexity that most serious attempts at vetting software by qualified and honest reviewers using best practices still fall seriously short of hopes and expectations. This is not an indictment against open source developers; those developing proprietary commercial software have the same issues. There are some major exceptions to this rule (such as the space shuttle software, which has been constructed in such a way as to have its correctness proven), but these exceptions are generally not cost effective and are several orders of magnitude more costly to develop than conventional code. So such vetted software is rare in the commercial sector and even rarer in open source development, where almost all of the developers contribute their efforts for free.
In part, this lack of quality in software—caused by many things, including not properly vetting—is “demanded” by consumers. For better or worse—after being trained by years of crapware from major software vendors—consumers don't scream for correctness; they scream for features. The same is true with respect to security vulnerabilities. As security researchers Ed Felten and Gary McGraw once said,
“Given a choice between dancing pigs and security, users will pick dancing pigs every time.”
Question #4: Should a digital/electronic signature carry the same weight as a written signature?
Hmmm...a trick question? In some sense, in many countries they already do, as there are circumstances where both are recognized as legally binding. For example, in the USA, the “Electronic Signatures in Global and National Commerce Act” (E-Sign) was signed into law in June 2000. See the Duke Law & Technology Review article “Are Online Business Transactions Executed By Electronic Signatures Legally Binding?” for a better explanation than I could ever hope to give. So in that case, it is irrelevant what my opinion about it is (i.e., what it should be); it's the law of the land.
[Note: I am only going to address "digital" signatures, not "electronic" ones, which could mean many different things, depending on whom you ask.]
But if what you are asking for is a professional opinion (dumb looks and all), then as an information security professional, my answer is that dsigs should NOT carry the same weight as handwritten signatures. I will try to explain my reasoning.
A cryptographic digital signature requires a cryptographic key pair. One key is the “public” key and is used [generally by others] for validating signatures and the other key is the “private” key and is used for creating (i.e., signing) signatures. (Public and private keys are also used for asymmetric encryption, but if you are aware of that, try to forget about that for now as it will only confuse you. And also don't think about pink elephants either. ;-)
The cryptographic keys are always created as a key pair and one cannot be changed independent of the other without causing the whole signature process to fail in a detectable manner.
Now, enter our cast of characters, Alice and Bob. Alice has generated a key pair and has her private key on her PC or other personal computing device. Alice has also made her public key accessible to Bob. For simplicity, let's assume that she hands it directly to him on a flash drive. (Alice might also make her public key available to others via a public key server. It is, after all, a public key.)
Alice then writes a contractual document that she digitally signs using her private key and sends both the contract and the digital signature to Bob, who validates Alice's signature via the public key that Alice provided him. If the document's signature is valid, Bob knows that 1) the document has not been tampered with after it was signed, and 2) it was signed with Alice's private key, and hence Bob has reason to believe that Alice "signed" the contract.
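For the programmers in the audience, here is a bare-bones Java sketch of roughly what that sign/verify exchange looks like using the JDK's java.security.Signature class. The contract text and class name are made up for illustration, and all of the hard parts (key distribution, certificates, key protection) are hand-waved away:

    import java.nio.charset.StandardCharsets;
    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.Signature;

    public class SignatureSketch {
        public static void main(String[] args) throws Exception {
            // Alice generates her key pair (in real life, once, not per document).
            KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
            kpg.initialize(2048);
            KeyPair aliceKeys = kpg.generateKeyPair();

            byte[] contract = "I, Alice, agree to ...".getBytes(StandardCharsets.UTF_8);

            // Alice signs the contract with her *private* key.
            Signature signer = Signature.getInstance("SHA256withRSA");
            signer.initSign(aliceKeys.getPrivate());
            signer.update(contract);
            byte[] sig = signer.sign();

            // Bob verifies with Alice's *public* key.
            Signature verifier = Signature.getInstance("SHA256withRSA");
            verifier.initVerify(aliceKeys.getPublic());
            verifier.update(contract);
            System.out.println("Signature valid? " + verifier.verify(sig));
            // "true" tells Bob the document wasn't altered and was signed with
            // Alice's private key; it does not tell him who was holding that key.
        }
    }
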
So far, so good.
Now here is the crux of the problem. Can we state without question (say to the same degree that someone witnessing a person's handwritten signature can attest to) that any document signed by Alice's private key was signed by Alice herself?
The answer is “no, we cannot”. All we really know is that it was Alice's private key that signed said document, but we don't know if it was Alice who actually initiated the signing. That is, we have a gap between Alice's identity as an individual and Alice's private key.
The usual and often implicit assumption is that, since Alice's private key is supposed to be private, Alice is the only one having access to it. Therefore if a document is found signed with Alice's private key (as verified by Alice's corresponding public key), then by golly, it must have been Alice that signed it.
But this assumption is erroneous. For example, what if Alice's PC were infected with malware and some evil black hat H4><0r used that malware to steal her private key. (What? You say it was her fault because she didn't keep the private key in an encrypted key store file that was protected by a pass phrase? My Lord! Have you never heard of keystroke loggers?)
So the digital signature analogy with handwritten signatures is actually closer to one witnessed indirectly, say via a delayed video feed, rather than one witnessed in person in real time. Just as a delayed video feed can be doctored to misrepresent someone, a private key can be stolen and abused by malware (completely unbeknownst to the user, by the way) to sign any arbitrary document.
Now it's a whole lot easier to infect an ordinary citizen's PC / Mac / miscellaneous computing device with malware than it is to doctor a delayed video feed. Indeed, the number of malware-infected PCs in the world at any given time is clearly in the millions and probably higher by an order of magnitude or so. Compare that to how many videos (or even still photographs, for that matter) are doctored each year, and you will see my point. So, IMNSHO, there is no way that we should hold an ordinary citizen, who has not been formally trained in security practices, accountable for digital signatures that she may or may not have created. For B2B, maybe, but for C2B, no.
Question #5: How are we to be assured that electronic voting is trustworthy?
Your last question (#4) seemed like a trick question (or possibly just an imprecisely worded one), so how about a “trick answer” for this one.
Answer: If the person whom you voted for won, you can trust it; otherwise, you cannot.
Seriously, in all honesty, it depends on what level of trust you are looking for. If you are one who believes in conspiracy theories, then nothing that can be done will provide you with the assurance that you are seeking. If you are one who simply prefers the convenience of electronic voting and is not that into politics, it probably won't take much to give you the warm fuzzies. The issue is that “trust” is not black or white, but rather shades of gray. Furthermore, different people prefer / demand different levels of grayness. So we can probably provide assurance for a select few, but never for all.
So having spouted that pompous-sounding bullshit, let me tell you my opinion (since you asked; beware of “dumb looks” though) of what I think it will take to at least get moving in the right direction.
First of all, having a tangible, unalterable physical audit trail is mandatory. Most security folks, myself included, believe that this needs to be something like a digitally-signed paper trail where two identical copies are printed. (Or one copy and a trusted photocopier.) One copy goes into a lock-box at your voting precinct and the other copy, the voters take home with them. In the case of a contested election, this is the chain of evidence that is relied upon and is what eventually is recounted.
Secondly, I believe that the whole electronic voting system, from voting machines to tabulators and everything in between, needs to be developed as open source. It does not have to be open source in the sense that the license provides a right to modify and alter, nor need it even be free to use; but it is important that anyone who wishes be allowed to examine the source code of the electronic voting system at any time. And for good measure, I think that whatever software versions are used should be digitally signed by at least two independent escrow agencies who will hold the digitally signed source code (as well as the signed compiled binaries) in safekeeping.
And lastly, I think there needs to be an open review process...an open RFC period, if you will, where everyone has an opportunity to comment on the requirements as well as the design.
If we do all those things, then perhaps some day we will get to a point that the majority of people will trust it. But unfortunately, the losing minority almost never will.

FWIW, -kevin