There have been a few FOSS and work colleagues who have commented that I should start a blog. (You know who you are, so I won't embarrass you in public.) Each time I thought about it, I found it difficult to get started, but I figured that I'd better stop procrastinating lest I become extinct. (I hear it's much more difficult to blog once you're on the other side of the grass.)
At some point, I promise to discuss ESAPI (and particularly the crypto-related packages), but I didn't want to dive right into that without some other lighter weight (says the old fat guy without any sense of irony—or perhaps just without any sense, period) material to start things off.
Well, if you are reading this blog, you probably either have insomnia and are in desperate search of a cure (look no further!), or are someone who wanted me to embarrass myself in public (likely those same “someones” who wanted me to start a blog in the first place), or you were pointed here by a link that someone sent you. (In that last case, I apologize for your misguided friends. But like I always say sometimes, “With friends like them, who needs enemas?”)
Anyway, if you really want to know about my work history, etc., then you can find out by following the links to my LinkedIn, OWASP, and Google Code pages from my Personal Identity Portal at https://kwwall.pip.verisignlabs.com/. But if you get really bored, don't blame me; sleeping pills for all my insomniac friends.
So, after babbling for almost ¾ of a page, let me finally cut to the chase. What I really decided I wanted to do was attempt to answer the penetrating questions posed by Michael Kassner on his IT Security blog over at TechRepublic. His post, “Brave new world: Asking hard questions about security, user rights, and trust”, poses the following five questions, which I will attempt to pontificate and wax philosophical about without sounding too much like an @hole:
- Should access to the Internet be a privilege, a right, or…
- Should qualified organizations be allowed to remove malware from Internet-facing computers without the owner’s permission or knowledge?
- What guarantee do I have that a piece of open-source software has been adequately vetted by qualified and honest reviewers?
- Should a digital/electronic signature carry the same weight as a written signature?
- How are we to be assured that electronic voting is trustworthy?
So ready? Here goes...
Question #1: Should access to the Internet be a privilege, a right, or...
OK, seriously. Anyone who has read any of my emails is probably saying right now, “I don't know who you are, but you can't be the Kevin Wall that I know, because he can't possibly answer an open-ended question with a one-word answer. For that matter, he can't even answer a T/F question with a single word.” OK, so you got me. But that's all I'm going to say about it, for now at least. Maybe more in a future blog if people are foolish enough to encourage this old dinosaur.
Question #2: Should qualified organizations be allowed to remove malware from Internet-facing computers without the owner’s permission or knowledge?
To answer a question with a question, “Who selects the 'qualified organizations'?”.
Or to answer like a typical engineer or lawyer (NB: IANAL): "It depends". In most cases, I would answer “no”, if for no other reason than that accessing someone's system without the owner's permission is unauthorized access by definition, which constitutes a federal offense. (Since IANAL, I won't bore you with the exact laws. I'll let any attorneys reading this bore you with those details, as I have plenty of other things to bore you with.)
However, the more important reason I would answer “no” is that no organization is likely to be competent to do this (especially without significant human guidance), and there is probably as much of a chance that they would hose your personal system as repair it. Plus, if they do screw it up, who is liable? In the general sense, this is just stupid and we shouldn't even be discussing it. In most cases, the cure is worse than the disease.
So when might I answer “yes”? Well, I would answer “yes” if your entire personal computing system and its malware were in the “cloud”, so that in reality you are using someone else's CPU resources and storage. I would also answer “yes” if the computing equipment was donated as part of a service / use contract. This is not as far-fetched as it may first sound. Wireless providers are already giving away free, or almost free, smart phones with a 2-year contract. (At some point, the same will likely be true for tablet computers.) So if part of the contractual EULA stipulates that the service provider is permitted to attempt to remove malware from the device without prior notification, at least legally it would be OK. But in that sense, the user has already signed over their permission; they just may not have granted permission or been given notice for some specific incident. (Note that in this regard, it is not unlike the AV vendors who will attempt to remove malware, or, if that is not possible, to put the infected files into a quarantine area.) But even then, I think the “qualified organization” needs to be ready to accept liability if their means of removal makes some other piece of unrelated computing equipment (e.g., a router, a printer, another computer on the user's LAN, etc.) inaccessible.
Question #3: What guarantee do I have that a piece of open-source software has been adequately vetted by qualified and honest reviewers?
HA! HA HA HA HA HA!!! Come on man; are you serious??? Have you never read the warranty disclaimers and limitations of liability that come, in some variety, with pretty much every piece of software you have ever used, be it proprietary or open source? You know, the sections that ARE WRITTEN IN ALL UPPERCASE SO THAT YOU WILL FEEL INTIMIDATED BY THE ATTORNEYS SHOUTING AT YOU?
If any software were properly vetted, it might be argued that developers would not have to hide behind the cloak of such legal gibberish. (And, BTW, my personal attorney told me that in the state of Ohio, one cannot simply waive tort liability regardless of what the contract says. But YMMV, so talk to your own attorney.)
Personally, I think it is sad that we are in this state (no, not Ohio!). It started out as well-intentioned attorneys trying to protect the interests of their clients, which is what we should expect, but somewhere along the way it has gotten off track, to the extent that in some cases I believe it has encouraged sloppy software development practices.
[Aside: There is some push back against this over at ruggedsoftware.org. (That's “rugged” as in “sturdy and strong”, and not “rugged” as in “roughness of surface”, although there's way too much of that sort of software out there too.) But frankly, no one (myself included!) yet has the balls to renounce the software warranty disclaimers and liability restrictions placed in every software license known to FOSS. In fact, when I even suggested it to one of the Rugged Software co-founders, he looked at me like “Are you nuts???”.]
Software correctness is one of the most difficult things to get right. In a sense, I believe programmers make a comparable number of errors (as measured by some means of “error rate”) as humans do in any other complex endeavor they attempt. However, at this point, computers are very different from humans. For the most part, humans grok semantic meaning based on the surrounding context and thus get past the mistakes. As an example, if I wree to witre tihs setnecen lkei tsih, yuo cna pborlaby sitll uendrsdtna it, even though you find it a bit rough going. But if programmers make similar mistakes in their programs and that portion of the code is executed, it almost always results in disastrous consequences. We have yet to invent a computer or operating system based on “do what we mean, not what we code”. In that sense at least, humans are much more resilient than software. (We do not want our "rugged software" to become "brittle software".)

However, because of this complexity, most serious attempts at vetting software by qualified and honest reviewers using best practices still fall seriously short of hopes and expectations. This is not an indictment against open source developers; those developing proprietary commercial software have the same issues. There are some major exceptions to this rule (such as the space shuttle software, which has been constructed in such a way as to have its correctness proven), but these exceptions are generally not cost effective and are several orders of magnitude more costly to develop than conventional code. So such vetted software is rare in commercial sectors and even rarer in open source development, where almost all of the developers contribute their efforts for free.
In part, this lack of quality in software—caused by many things, including not properly vetting—is “demanded” by consumers. For better or worse—after being trained by years of crapware from major software vendors—consumers don't scream for correctness; they scream for features. The same is true with respect to security vulnerabilities. As security researchers Ed Felten and Gary McGraw once said,
“Given a choice between dancing pigs and security, users will pick dancing pigs every time.”
Question #4: Should a digital/electronic signature carry the same weight as a written signature?
Hmmm...a trick question? In some sense, in many countries they already do, as there are circumstances where both are recognized as legally binding. For example, in the USA, the “Electronic Signatures in Global and National Commerce Act (E-Sign)” was signed into law in June 2000. See the Duke Law & Technology Review article “Are Online Business Transactions Executed By Electronic Signatures Legally Binding?” for a better explanation than I could ever hope to give. So in that case, it is irrelevant what my opinion about it is (i.e., what it should be); it's the law of the land.
[Note: I am only going to address "digital" signatures, not "electronic" ones, which could mean many different things, depending on whom you ask.]
But, if what you are asking for is one's professional opinion (dumb looks and all), then as an information security professional, my answer is that digital signatures should NOT carry the same weight as handwritten signatures. I will try to explain my reasoning for this.
A cryptographic digital signature requires a cryptographic key pair. One key is the “public” key and is used [generally by others] for validating signatures and the other key is the “private” key and is used for creating (i.e., signing) signatures. (Public and private keys are also used for asymmetric encryption, but if you are aware of that, try to forget about that for now as it will only confuse you. And also don't think about pink elephants either. ;-)
The cryptographic keys are always created as a key pair and one cannot be changed independent of the other without causing the whole signature process to fail in a detectable manner.
Now, enter our cast of characters, Alice and Bob. Alice has generated a key pair and has her private key on her PC or other personal computing device. Alice has also made her public key accessible to Bob. For simplicity, let's assume that she hands it directly to him on a flash drive. (Alice might also make her public key available to others via a public key server. It is, after all, a public key.)
Alice then writes a contractual document that she digitally signs using her private key and sends both the contract and the digital signature to Bob. Bob validates Alice's signature via the public key that Alice provided him. If the document's signature is valid, Bob knows that 1) the document has not been tampered with after it was signed, and 2) it was signed with Alice's private key; hence Bob has reason to believe that Alice "signed" the contract.
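To make those mechanics concrete, here is a minimal sketch using a toy “textbook RSA” signature. The tiny primes, the contract text, and the lack of padding are illustrative assumptions only; any real system would use a vetted cryptographic library, not hand-rolled code like this:

```python
import hashlib

# Toy "textbook RSA" signature, purely to illustrate the key-pair
# mechanics. The tiny primes and lack of padding make it hopelessly
# insecure -- real systems use a vetted crypto library.
p, q = 61, 53      # Alice's secret primes
n = p * q          # 3233: the public modulus
e = 17             # public exponent -- (n, e) is Alice's public key
d = 2753           # private exponent -- Alice's private key (e*d = 1 mod (p-1)(q-1))

def digest(msg: bytes) -> int:
    """SHA-256 of the message, reduced mod n to fit our tiny modulus."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(msg: bytes) -> int:
    """Only the holder of the private key d can compute this."""
    return pow(digest(msg), d, n)

def verify(msg: bytes, sig: int) -> bool:
    """Anyone holding the public key (n, e) can check the signature."""
    return pow(sig, e, n) == digest(msg)

contract = b"Alice agrees to pay Bob ten dollars."
sig = sign(contract)
print(verify(contract, sig))            # True: signed with Alice's private key
print(verify(contract, (sig + 1) % n))  # False: an altered signature is detected
```

Note that `verify` only proves the document was signed with Alice's *key*; it says nothing about who was at the keyboard, which is exactly the gap discussed next.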
So far, so good.
Now here is the crux of the problem. Can we state without question (say to the same degree that someone witnessing a person's handwritten signature can attest to) that any document signed by Alice's private key was signed by Alice herself?
The answer is “no, we cannot”. All we really know is that it was Alice's private key that signed said document; we don't know if it was Alice who actually initiated the signing. That is, we have a gap between Alice's identity as an individual and Alice's private key.
The usual and often implicit assumption is that, since Alice's private key is supposed to be private, Alice is the only one having access to it. Therefore if a document is found signed with Alice's private key (as verified by Alice's corresponding public key), then by golly, it must have been Alice that signed it.
But this assumption is erroneous. For example, what if Alice's PC were infected with malware and some evil black hat H4><0r used that malware to steal her private key? (What? You say it was her fault because she didn't keep the private key in an encrypted key store file that was protected by a pass phrase? My Lord! Have you never heard of keystroke loggers?)
So the digital signature analogy with handwritten signatures is actually closer to one witnessed indirectly, say via a delayed video feed, rather than one witnessed in person in real time. Just as a delayed video feed can be doctored to misrepresent someone, a private key can be stolen and abused by malware (completely unbeknownst to the user, by the way) to sign any arbitrary document.
Now it's a whole lot easier to infect an ordinary citizen's PC / Mac / miscellaneous computing device with malware than it is to doctor a delayed video feed. Indeed, the number of malware-infected PCs in the world at any given time is clearly in the millions, and probably even higher by an order of magnitude or so. Compare that to how many videos (or even still photographs, for that matter) are doctored each year, and you will see my point. So, IMNSHO, there is no way that we should hold an ordinary citizen, who has not been formally trained in security practices, accountable for digital signatures that she may or may not have created. For B2B, maybe, but for C2B, no.
Question #5: How are we to be assured that electronic voting is trustworthy?
Your last question (#4) seemed like a trick question (or possibly just an imprecisely worded one), so how about a “trick answer” for this one.
Answer: If the person whom you voted for won, you can trust it; otherwise, you cannot.
Seriously, in all honesty, it depends on what level of trust you are looking for. If you are one who believes in conspiracy theories, then nothing that can be done will provide you with the assurance you are seeking. If you are one who simply prefers the convenience of electronic voting and is not that into politics, it probably won't take much to give you the warm fuzzies. The issue is that “trust” is not black or white, but rather shades of gray. Furthermore, different people prefer / demand different levels of grayness. So we can probably provide assurance for a select few, but never for all.
So having spouted that pompous-sounding bullshit, let me tell you my opinion (since you asked; beware of “dumb looks” though) of what I think it would take to at least get moving in the right direction.
First of all, having a tangible, unalterable physical audit trail is mandatory. Most security folks, myself included, believe that this needs to be something like a digitally-signed paper trail where two identical copies are printed. (Or one copy and a trusted photocopier.) One copy goes into a lock-box at your voting precinct and the other copy, the voters take home with them. In the case of a contested election, this is the chain of evidence that is relied upon and is what eventually is recounted.
Secondly, I believe that the whole electronic voting system, from voting machines to tabulators and everything in between, needs to be developed as open source. It does not have to be open source in the sense that the license provides a right to modify and alter, nor need it even be free to use, but it is important that anyone who wishes be allowed to examine the source code of the electronic voting system at any time. And for good measure, I think that whatever particular software versions are used should be digitally signed by at least two independent escrow agencies who will hold the digitally signed source code (as well as the signed compiled binaries) in safekeeping.
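As a sketch of how that escrow check might work in practice (the file name and escrow digests below are hypothetical; only the hashing itself is real), anyone could hash the deployed software and compare it against the digest each escrow agency publishes for the version it holds:

```python
import hashlib

def sha256_file(path: str) -> str:
    """Hex-encoded SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical verification: each escrow agency publishes the digest of
# the signed source/binaries it holds, and the digest of what is actually
# deployed on the voting machines must match both.
# deployed = sha256_file("voting-machine-1.4.2.bin")   # hypothetical file
# assert deployed == digest_from_escrow_agency_a
# assert deployed == digest_from_escrow_agency_b
```

Any mismatch between the deployed digest and the escrowed ones would indicate that the fielded software is not the version that was reviewed and signed.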
And lastly, I think there needs to be an open review process... an open RFC period, if you will, where everyone has an opportunity to comment on the requirements as well as the design.
If we do all those things, then perhaps some day we will get to a point that the majority of people will trust it. But unfortunately, the losing minority almost never will.