RSA Security recently announced RSA Distributed Credential Protection (DCP). I’ve read the literature, sat through a presentation by an RSA sales representative, and watched the YouTube videos. And most of all, I have formed an opinion.
Being a crypto geek of sorts, I’ll be the first to admit that this seems like a really cool and interesting application of secret splitting. But, as much as RSA makes it sound like the most innovative thing since sliced bread, I believe that it is fundamentally a solution to the wrong security problem. Let’s have a look at why.
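For readers unfamiliar with secret splitting, here is a minimal sketch of the general idea (a simple two-way XOR split, which is my own illustration and not RSA's actual DCP implementation): a credential is split into two random-looking shares, and neither share by itself reveals anything about the original.

```python
import secrets

def split_secret(secret: bytes) -> tuple[bytes, bytes]:
    """Split a secret into two shares; neither alone reveals anything."""
    share1 = secrets.token_bytes(len(secret))               # random one-time pad
    share2 = bytes(a ^ b for a, b in zip(secret, share1))   # secret XOR pad
    return share1, share2

def recombine(share1: bytes, share2: bytes) -> bytes:
    """XOR the two shares back together to recover the original secret."""
    return bytes(a ^ b for a, b in zip(share1, share2))

s1, s2 = split_secret(b"hunter2")
assert recombine(s1, s2) == b"hunter2"
```

An attacker who "smash-and-grabs" only one of the two stores gets what amounts to random noise; both stores must be compromised together.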
As I’ve written many times, security is fundamentally about ensuring trust and managing risk. When attempting to lower risk, there is always a cost / benefit balance that needs to be studied. Just as one would not spend $10,000 for a home safe only to store $1000 in it, IT is not going to spend six figures on a solution that will not reduce the perceived cost of the risk at least that much. (Note that while I am not certain of the pricing of RSA DCP, it is not farfetched to think that an enterprise rollout would be in the six-figure range.)
RSA advertises DCP as transparent to the end users, and so it is. However, that is not the only thing that is important here. Another major factor that only came out during the live presentation by RSA was that any application wishing to take advantage of DCP needs to be changed to adapt to its API. This means, of course, that if you have N applications all authenticating users against an SQL database (or perhaps an LDAP directory store, if the API works with that), all N of those applications need to be changed. If you fail to change even one application, then you still need the users' credentials stored in a single data store where DCP is not applied, and consequently they remain unprotected. So take the cost of licensing the RSA DCP software and add to it the cost of each of your N applications integrating the DCP API, and you will have something closer to the total cost of deployment. Of course, the operational costs are also likely to increase somewhat as well. Whereas before you had but a single data store for said credentials, now you have two. The end result is that the total cost to incorporate RSA DCP into your environments is likely to exceed the six-figure level even if the software licensing cost is nowhere near that amount.
Well, still, that might be OK, right? After all, if the perceived benefits greatly exceed the total costs of mitigation, we still have a security win.
So what benefits does RSA DCP bring to the enterprise? According to the RSA press release as well as this YouTube video, the threat that RSA is trying to prevent is the “smash-and-grab” of credentials by an attacker. Specifically, DCP is designed to make it more difficult for an attacker who has infiltrated your company network and has managed to get direct access to your database server to obtain credentials (either plaintext or hashes). DCP also would likely mitigate a rogue DBA doing a “smash-and-grab” of your company’s credential data as well, as long as care was taken to provide separation of duties and not give a single administrator a DBA role on both DCP servers.
So we still need to answer this question: Is this a common way for an attacker to gather user credentials? In my opinion, it is not. By far, the biggest attack vector for adversaries stealing credential material is via SQL (or possibly LDAP) injection attacks. Will DCP do anything to mitigate SQLi attacks? The answer would appear to be “no” (at least according to the RSA sales rep that we talked to). In fact, given that one has to bolt new DCP API code into one’s application to use DCP, there is a chance that new SQLi vulnerabilities may be introduced as developers change the application code.
So is there a place where using RSA DCP would make sense? I believe so, but I think it is a niche market rather than the broad market RSA Security would like it to be. RSA DCP could be very valuable where you have an extremely high-value target (credentials or otherwise) that is difficult to upgrade. The perfect example that comes to my mind is protection of the RSA Security SecurID seeds. Compromise of those SecurID seeds required RSA to replace all the hard token SecurID devices. In fact, it is not unreasonable to speculate that this product came directly out of researching ways to protect those high-value credentials from the smash-and-grab type of direct attacks resulting from that breach. If RSA Security wishes to broaden the market for their new DCP product, I believe that the best approach is for them to integrate DCP seamlessly with their other products, starting with RSA Access Manager. If you are going to make a believer of us security folk, you first have to be willing to eat your own dog food.
However, in the meantime, for your regular user passwords, salting with a sufficiently long random salt, enforcing password complexity rules when users select their passwords, and enforcing account lockout are likely to be sufficient protection for your customer passwords. If doing those things is not sufficient, you seriously need to consider whether passwords are a strong enough form of authentication for your users.
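For what it's worth, here is a minimal sketch of that salting recommendation using PBKDF2 from the Python standard library (the iteration count and salt length here are my own illustrative choices, not a universal prescription):

```python
import hashlib
import secrets

ITERATIONS = 600_000  # illustrative work factor; tune to your hardware

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Hash a password with a fresh random 16-byte salt via PBKDF2-HMAC-SHA256."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return secrets.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```

Because each user gets a unique random salt, an attacker who does grab the credential table cannot use precomputed rainbow tables and must attack each hash individually.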
Note that the views expressed herein are wholly my own and do not represent those of my company, of OWASP, nor any other organizations with whom I am associated.
Monday, November 12, 2012
Friday, January 20, 2012
For now, the immediate battle seems to have been won, but you can be sure that the MPAA and their cohorts will be back soon.
However, I wanted to point everyone to this USACM blog post that provides their policy statement regarding SOPA and PIPA. If you don't want to read their full policy statement, I would at least encourage you to take a quick look at their very approachable (by non-Geeks ;-) "Analysis of SOPAʼs impact on DNS and DNSSEC".
Friday, January 6, 2012
Background
Last July, I blogged about “Understanding Trust”, in which I attempted to describe several properties of trust. Because I thought that most of these properties were obvious, I was somewhat surprised to see someone with an interest in security authoritatively quote a well-known Microsoft software developer in a post on a cryptography mailing list as saying that “trust is not transitive”.
Of course I strongly disagreed. If you are interested in the specific context, you can find the full text of my post in the crypto mailing list archives. However, the research I did in response to this specific post made me aware that there are still several software and security engineers who have a misunderstanding of trust. So I decided that perhaps I should attempt to clear up this misunderstanding.
Is Trust Transitive or Isn't It?
The post to the cryptography mailing list that I attempted to refute started out by citing Microsoft developer Peter Biddle, stating “More fundamentally, as Peter Biddle points out, trust isn't transitive”.
So, before writing a rebuttal to his response, I thought it would be a good idea to track down the source of Peter Biddle's comments. I eventually found the source in Peter Biddle's blog post titled “Trust Isn’t Transitive (or, 'Someone fired a gun in an airplane cockpit, and it was probably the pilot')”.
I, and I think most security pundits, really believe that Peter Biddle is wrong about trust not being transitive. If you read carefully through Peter Biddle's blog on this topic, you will see (as Keith Irwin so aptly pointed out in a reply to the Randombit.net cryptography mailing list) that Biddle is mixing contexts here. In a nutshell, in his blog, Biddle is making the argument that trust in two completely different contexts equates to trust in general (i.e., any context) and therefore concludes trust is not transitive.
However, trust clearly is context dependent and when considering whether or not trust is transitive, we need to consider the same context.
Specifically, if C1 and C2 are two different contexts, it does NOT logically follow that:
There exists a context C1 such that “Alice trusts<C1> Bob”
There exists a context C2, where C1 != C2, such that “Bob trusts<C2> Carol”
Alice trusts<C> Carol for all contexts, C.
where trusts<C> means “trust in context C”.
That seems to be the way that Biddle is arguing about trust not being transitive. Well, if that's the way he's defining it, then of course it's not transitive.
If it is just that...well, that's the WRONG way to reason about transitivity in general, and trust being transitive in particular.
Transitivity is a mathematical property of some relationship R. If x, y, and z are members belonging to some well-defined set, then we call the relationship R transitive if:
( xRy ˄ yRz ) → xRz
for all x, y, and z elements of some set S. (See the Wikipedia article on transitive relationships for a more thorough, but very comprehensible, treatment of this.)
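For the programmatically inclined, that definition can be checked mechanically. Here is a small sketch (my own illustration) that tests whether a relation, represented as a set of (x, y) pairs, satisfies ( xRy ˄ yRz ) → xRz:

```python
def is_transitive(relation: set) -> bool:
    """Return True iff (xRy and yRz) implies xRz, per the definition above."""
    return all(
        (x, z) in relation
        for (x, y1) in relation
        for (y2, z) in relation
        if y1 == y2
    )

# "less than" on {1, 2, 3} is transitive...
less_than = {(1, 2), (1, 3), (2, 3)}
assert is_transitive(less_than)

# ...but "parent of" is not: grandma is not "parent of" me.
parent_of = {("grandma", "mom"), ("mom", "me")}
assert not is_transitive(parent_of)
```

Note that the test only ever compares pairs drawn from the *same* relation; mixing two different relations, as we'll see Biddle do, is exactly what the definition forbids.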
However, in Biddle's blog where he gives his examples, every example that he mentions is talking about two different contexts (e.g., flying planes and handling firearms, or working on cars and taking care of kids).
That is, Biddle is really discussing two different relationships, such as trust<flying planes> and trust<handling firearms>, and what he is then trying to conclude is that
( x trust<flying planes> y ) AND ( y trust<handling firearms> z ) IMPLIES ( x trust<C> z )
for any context C. Well, duh! If you make a fallacious straw man argument about trust being transitive in this manner, of course your conclusion is going to be that "trust is NOT transitive". But you would also, IMHO, be wrong. If we stick to a specific context / attribute however, then I think you will find the logic concludes that trust is transitive. (But, I'll show later it's not really quite that simple.)
Here's a really nutty case restricted to a specific context that I hope will make the point. Let's conjecture that both
Passengers trust<flying planes> Pilot P
Pilot P trust<flying planes> Chimpanzees
are true. (That is, “passengers trust some specific pilot P in flying planes” and “some (same) specific pilot P trusts chimpanzees in flying planes”.) So, some pilot P brings his trusted chimpanzee into the cockpit and shortly after takeoff, he decides to take a little nap and hands the controls over to his chimp pal. And all this occurs unbeknownst to the passengers. So what do we conclude? Well, logic dictates that based on the premises, we may conclude:
Passengers trust<flying planes> Chimpanzees
But wait! That's absurd you say. Well, perhaps. But then again, whether the passengers know it or not, the Chimp who is supposedly flying the plane is pretty much holding the lives of the passengers in his hands (or is that paws?).
On one hand, these passengers are literally (unbeknownst to them) trusting that chimp to safely fly that plane. (Of course, on the other hand, if there were a dozen parachutes on the plane, there would be a bloodbath seeing who would get them. ;-)
Now let's make a little change in the premise. Let's substitute 'Auto Pilot System' for 'Chimpanzees'.
The conclusion is now:
Passengers trust<flying planes> Auto Pilot System
All I've done is exchange one symbol (Chimpanzee) for another (Auto Pilot System), but all of a sudden most of us feel a whole lot better.
So what does that tell us about 'trust'? Well, for one, the human concept of trust is much more complex than the simplistic, quantifiable mathematical property we have been trying to model it as thus far. And herein lies a big problem in security. Why? Because the software systems that we construct can in no way approach the complexity of all these nuances. (Not that it matters a whole lot. History has shown that we can't even get the simpler model correct, but I digress.)
But Wait, There's More
In the post that I responded to, where the poster was arguing that trust was not transitive, they continued with this example:
When CAs [Certificate Authorities] get in the habit of delegating their power, that process is at risk of being bypassed and in any case starts to happen much less transparently. There are plenty of cases in the real world where someone is trusted with the power to take an action, but not automatically trusted with the power to delegate that power to others without external oversight. And that makes sense, because trust isn't transitive.
This statement makes sense, but NOT because 'trust isn't transitive'. Here the mistake in reasoning is not in trying to equate two different contexts. Rather, it makes sense because of another aspect of trust that I have discussed before on my “Understanding Trust” blog post. Specifically,
Trust is not binary.
Trust is not black or white; it is shades of gray. As humans, for a given context, we "assign" more trust to some and less to others. This "level of trust" is largely based on our perception of experience and reputation, the latter which we sometimes try to model in reputation-based systems.
An example...unfortunately, you need brain surgery. (If you are reading this blog, that should be proof enough. I rest my case. ;-) You have two surgeons to choose from:
Surgeon #1: 10 years of experience and over 300 operations.
Surgeon #2: 1 year of experience and 6 operations.
All other things being equal, who you gonna choose? Surgeon #1, right? (Well, unless in those 300 operations, s/he has had 250 malpractice results. ;-) And at least by comparison, you probably do NOT trust Surgeon #2.
So, with that in mind, let's get back to the transitivity part:
You trust<brain surgery> Surgeon #1
Surgeon #1 trust<brain surgery> Surgeon #2
You trust<brain surgery> Surgeon #2.
Whoa! Wait a minute. Didn't we just say that we did NOT trust Surgeon #2? Yep!
So what went wrong here? Well, what went wrong is that we are assuming that trust behaves as a binary relationship...that I either have complete trust or zero trust. But trust is not binary. It is shades of gray. That means that to more accurately model trust in the real world, we need some property for that relationship that indicates a level of trust, rather than trust just being T/F. So we need that in addition to a context.
So now we see we need (at least) a level of trust in addition to a context to model trust. Where before we were (implicitly) using something like

Alice trusts<context> Bob

(which allowed us to model only complete trust or no trust), we find we now need something more like:

Alice trusts<context, level> Bob

That is, we model level as a real number in the range 0 to 1, inclusive.
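As a sketch, trust with both a context and a level might be modeled like this (the rule of multiplying levels when chaining trust is my own assumption, chosen so that trust decays along a chain; it is not a standard):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Trust:
    truster: str
    trustee: str
    context: str
    level: float  # 0.0 = no trust ... 1.0 = complete trust

def chain(t1: Trust, t2: Trust) -> Optional[Trust]:
    """Compose two trust relationships transitively, within a single context."""
    # Transitivity only applies when t1's trustee is t2's truster
    # AND both relationships are in the same context.
    if t1.trustee != t2.truster or t1.context != t2.context:
        return None
    # Assumption: levels multiply, so trust decays along longer chains.
    return Trust(t1.truster, t2.trustee, t1.context, t1.level * t2.level)

you = Trust("You", "Surgeon #1", "brain surgery", 0.9)
peer = Trust("Surgeon #1", "Surgeon #2", "brain surgery", 0.5)
derived = chain(you, peer)
assert derived is not None
assert abs(derived.level - 0.45) < 1e-9  # noticeably less than your 0.9
```

With this model, the surgeon paradox dissolves: you do end up "trusting" Surgeon #2 transitively, but at a much lower level than you trust Surgeon #1, which matches our intuition.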
Cryptographer and software engineer Ben Laurie pointed out that trust modeled in this way is very similar to KeyMan, a piece of software that he and Matthew Byng-Maddick developed back in 2002 to facilitate the management of keys, certificates, and signatures used in signing software in a distributed and exportable network of trust.
So... we're done now, right? Well, not so fast Sparky. There are other important properties of trust that I already covered in my “Understanding Trust” blog post last July. If you have not already done so, I would encourage you to go back and read it.
Recasting Trust
The term “trust” is overloaded with several meanings and therefore causes a lot of confusion. On the Randombit.net crypto mailing list, Marsh Ray suggested that we use the term “relies on”, as suggested by his former colleague Mark S. Miller.
I think in general, this is a great idea. If we say that “A relies on B” and “B relies on C”, then it is intuitively obvious that “A relies on C”, and hence transitivity immediately follows.
Using “relies on” works in many situations where we normally might use the word “trust” as a verb. I for one intend to start using it much more often than I do, because you have no idea how many times I almost accidentally made an embarrassing typo and misspelled “trust” as “tryst”. But perhaps that's the true hidden cryptographic meaning of cryptographers using Alice, Bob, and Carol in their discussions. As with many things cryptographic, maybe there's more going on there than is apparent. (I'll kindly spare you the obvious pun in this case.)
P.S.- I promise I will try to be a little more consistent with my blogging in 2012. (Did I just make a New Year's resolution? ;-) But thanks to all of you who have been faithful in reading and haven't completely given up on me.
Sunday, August 21, 2011
On Thursday, 2011/08/18, I made a presentation about OWASP ESAPI and my thoughts about it to the Columbus, OH local OWASP chapter. Bill Sempf, one of the chapter leaders, was kind enough to put the slides up to make them available to everyone.
If you are interested in what I presented, the slides are up on the main OWASP wiki, at:
Wednesday, July 20, 2011
It's often been said that Confidentiality, Integrity, and Availability, the so-called CIA triad, are the core principles of information security. However, I want to examine even something more fundamental than these principles; I want to look at the goals of information security. And not just goals such as preventing unauthorized access, disclosure, disruption of use, etc. which really are just extensions of the CIA triad, but the core, essential goals that are at information security's foundation.
At its core, information security is largely about the two goals of “ensuring trust” and “managing risk”. We may deal with managing risk some other time, but today I want to focus on ensuring trust.
In order to ensure trust, we first must understand not only what it is, but what its properties are. Let's start with a dictionary definition of “trust”:
1 a : assured reliance on the character, ability, strength, or truth of someone or something b : one in which confidence is placed
2 a : dependence on something future or contingent : hope b : reliance on future payment for property (as merchandise) delivered : credit <bought furniture on trust>
3 a : a property interest held by one person for the benefit of another b : a combination of firms or corporations formed by a legal agreement; especially : one that reduces or threatens to reduce competition
4 archaic : trustworthiness
5 a (1) : a charge or duty imposed in faith or confidence or as a condition of some relationship (2) : something committed or entrusted to one to be used or cared for in the interest of another b : responsible charge or office c : care, custody <the child committed to her trust>
One thing that I'd like to draw your attention to is that none of these definitions (with the possible exception of #3) implies that any sort of qualitative measure for trust exists. Trust is one of those things that we think that we completely understand when we talk about it, but when we explore it a bit deeper, we discover that it has some properties that are not all that intuitive, at least not in the normal manner that we refer to trust. My intent is to help us gaze at some of these properties and in so doing, see the disconnect of how we use the word “trust” in everyday usage versus how we use it in the security world. As we will discover, it is some of these very properties that make trust so difficult to ensure in the world of information security.
Properties of Trust
Trust is not commutative
If Alice trusts Bob, it does not follow that Bob trusts Alice. To any parent, this seems pretty obvious, at least when your children are of a certain age where they still trust you. You trust your doctor, but they likely do not trust you in a similar manner. You trust your bank, but they don't trust you in the same way. Trust is not symmetrical in this way.
Trust is transitive
This means “If Alice trusts Bob, and Bob trusts Carol, then Alice trusts Carol”. This one doesn't seem quite so obvious to us. That's because in the real world where we interact with other people, we don't treat trust in this manner. If I trust my wife and my wife trusts her friend, I don't usually automatically extend my trust to her friend. But therein lies the problem. In the information security context at least, trust implicitly extends in this way.
Think of the analogy where Alice, Bob, and Carol are all names of servers and there are trust relationships established by virtue of a /etc/hosts.equiv on each of these servers. When described this way, it is easier to see how trust extends to be transitive even though Alice may not be aware of Bob's trust of Carol. (In a later blog post, I hope to illustrate how this affects trust boundaries.) Alice (usually implicitly) extends her trust to Carol via Bob's trusting Carol. The big issue here is whether or not Alice's trust of Bob is warranted in the first place, and that is muddied by the fact that rational humans (at least those who are worthy of that trust) will act in a moral way so as not to abuse it. But in computer security, we need to think beyond this. The big issue in making such trust decisions involving transitivity is that Alice may not be aware of any of the trust relationships that Bob has with any other parties. For example, if Alice knows Carol to be untrustworthy, she may rethink her position on trusting Bob. In other words, Alice's trust in Bob may be misplaced. The bottom line is that there's little that Alice can do to tell, except to go on Bob's known reputation. Where this really gets sticky of course is that it can extend indefinitely. Alice trusts Bob, Bob trusts Carol, Carol trusts David, David trusts Emily, so... transitivity implies that Alice trusts Emily. We can quickly get lost. Of all the aspects of trust, I think that it is this disconnect that humans have with trust being transitive that makes securing computers so difficult. We simply don't think this way intuitively when it comes to securing our systems.
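The hosts.equiv analogy can be sketched as a small trust graph, where a host's effective trust is the transitive closure of the direct trust edges (the host names and edges here are just the ones from the chain above):

```python
from collections import deque

# Direct trust edges, as if read from each host's /etc/hosts.equiv
trusts = {
    "alice": {"bob"},
    "bob": {"carol"},
    "carol": {"david"},
    "david": {"emily"},
}

def effective_trust(host: str) -> set:
    """Every host reachable by following trust edges (breadth-first closure)."""
    reached, queue = set(), deque([host])
    while queue:
        for neighbor in trusts.get(queue.popleft(), set()):
            if neighbor not in reached:
                reached.add(neighbor)
                queue.append(neighbor)
    return reached

# Alice only lists bob directly, but her trust implicitly extends to emily.
assert effective_trust("alice") == {"bob", "carol", "david", "emily"}
```

The point of the sketch: nothing in alice's own configuration mentions carol, david, or emily, yet a compromise of any of them can ride the chain back to her.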
Let's try to clarify this with an example. Let's say that you (Alice) trust a merchant (Bob's Bakery) with your credit card number. You do this by paying for your weekly doughnut supply with your credit card (online or in person, it really doesn't matter). Bob's Bakery relies on a credit agency (Carol's Credit Check) to do credit checks on its customers' credit cards before accepting each payment.
No problem so far, right? Now you (Alice) may or may not be aware that Bob's Bakery does a credit check. As long as they accept your payment and you get your sweets, you probably don't care. You probably are not consciously thinking about any credit agency or credit card payment service. Most people are not even aware of the connection. Regardless, you are implicitly extending your trust to this credit agency when you complete such a transaction.
No problem, still, you say. My credit card issuer will cover any fraudulent charges. But note that is a red herring. That is why you trust your bank and your credit card issuer, not why you trust Bob's Bakery or Carol's Credit Check.
So let's change up the scenario a bit. Let's assume that Carol's Credit Check is really run by organized crime and the purpose of their existence is to perpetuate credit card fraud. Does this change your trust of Carol's Credit Check (assuming you knew of the relationship with Bob's Bakery)? Probably. Does it change your trust in Bob's Bakery? It may, if Bob's Bakery were aware that Carol's Credit Check was run by crooks. But why does your trust change? Nothing has changed except your awareness / perception. In abstract terms, it's still the same: Alice trusts Bob, Bob trusts Carol, therefore Alice trusts Carol.
Trust is not binary
Trust is a matter of degree; it is gray scale, not something that is merely black or white, on or off. If this were not true, there would be no way for us to state that we trust one party more than some other party. If trust were only a binary yes/no, and Alice separately trusted Bob and Carol, Alice would never have any reason to trust Bob more than she trusts Carol. Obviously that is not how we act in the real world. Oftentimes, even when we do not personally know someone, we grant different levels of trust based solely on reputation.
Trust is context dependent
Alice may trust Bob in some situations, but not in other situations. While this is true in computer security, it is a rather difficult property for us to model. But there is no doubt that we believe this in the real world. In real life, Alice may trust Bob to cut her hair or work on her car, but not to babysit her kids, and almost certainly not to perform brain surgery on her (even if Bob has taken the time to read Brain Surgery for Dummies).
In real life, we humans are quite adept at switching between these various contexts. So much so, that we hardly give it conscious thought. However, when we try to codify these contexts into a programmatic information model, we realize that this is a very difficult concept to formalize.
Trust is not constant over time
Alice may trust Bob at time t, but not at some later time t+Δt. In real life, Bob may do something to screw up. For instance, Alice may trust Bob while she is dating him, but after she sees Bob chasing after Carol, not so much. In computer security, we have similar analogies. For instance, we trust a session ID until after it has expired, or we trust a certificate until it has expired. It is at the basis of why we recommend that passwords expire after a time. This property of trust is one reason that authentication protocols use things like expiration times and/or freshness indicators.
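A tiny sketch of time-bounded trust, along the lines of an expiring session ID (the TTL mechanism here is my own simplified illustration, not any particular framework's API):

```python
import time

class SessionToken:
    """A token that is trusted only until its expiration time passes."""

    def __init__(self, ttl_seconds: float):
        # Record when trust in this token should lapse.
        self.expires_at = time.monotonic() + ttl_seconds

    def is_trusted(self) -> bool:
        """Trusted at time t, but not at some later time t + delta."""
        return time.monotonic() < self.expires_at

token = SessionToken(ttl_seconds=0.05)
assert token.is_trusted()        # trusted now...
time.sleep(0.1)
assert not token.is_trusted()    # ...but not after the TTL elapses
```

The same shape underlies certificate expiration dates and password aging: the trust decision is re-evaluated rather than granted once and forever.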
These last two properties (trust is context dependent, trust is not constant over time) mean that when we discuss ensuring trust, we must do so within a specific context and time-frame. (Sometimes the time-frame is explicit and sometimes it is implied.) Generally, in computer security, we should strive to make all trust relationships explicit and leave nothing to chance or misinterpretation. That's one key step in defining a trust model, which I hope to discuss in a future blog post. In the meantime, I hope you will keep in mind some of the properties we discussed today when you are trying to secure your systems.
And one more thing: my apologies for not being consistent as of late in posting to this blog. Not only have I been busy with ESAPI for C++, but I've also been at a loss for interesting topics. So if you have something that you'd like to see me ramble on about, please shoot me a note. (But don't be surprised if I recommend that you get some serious counseling if you do so. ;-) Thanks!
Wednesday, April 6, 2011
Phishy or Not?
OK, what's wrong with this picture... you receive an unsolicited email from a financial institution with whom you have a prior existing business relationship. The email sounds very official and has none of the usual sloppy spelling or grammatical errors that are the usual tip-offs to most phishing attempts. (Thank God that most of the idiots constructing these phishing attempts to exploit the naïve don't know how to use the spelling and grammar checkers in their word processing software! ;-)
However, the email is also written in HTML1 and it obfuscates the URL they wish you to visit with the ubiquitous “Click here to activate your account” link. Furthermore, it states that once you login with your user name and password, you will be requested to enter your Social Security Number. So the email is starting to smell somewhat phishy to me.
This particular email was purportedly from Morgan Stanley Smith Barney. (I'm sorry, but they deserve to be called out and shamed for this!) Naturally, I checked out the 'Received' email headers and noted that they in fact do come from Citigroup, who indeed now owns MSSB. (I had to run a whois on a few of the domains I wasn't sure about, but in the end all the domains listed in the Received headers checked out as being associated with Citigroup.)
Still, I wasn't about to just hand over my SSN without being a bit more cautious. After all, one never knows if Citigroup recently introduced a misconfigured open mail relay somewhere that some phisher was trying to exploit. So I sent the email with all its headers to our company spam police. After several hours, they got back to me and assured me that the email was legitimate and that it had to do with the conversion of Qwest shares to CenturyLink shares, which I had suspected and was why I didn't completely dismiss the email outright.
Do As We Say, Not As We Do
So here is the issue. Financial institutions of all kinds—whether banks, brokerage agencies, credit card companies, etc.—are constantly reminding their patrons to “not click on links from suspicious emails that appear to come from us”. If anything, they tell you to manually enter their URL into your browser's Location / Address bar. They tell us not to provide our SSN or user names and passwords to such sites. But then they turn right around and send their own customers HTML emails with obfuscated links asking them to do the very thing that they spent several earlier emails and newsletters repeatedly educating their users not to do.
I ask you, is this practice not insane? While I'm focusing on financial institutions here because of the MSSB email from them today, they are by no means the only culprit. I've seen this very same practice done by Microsoft in some of their emails. In fact, I'm sure that I've seen one or two security-related emails (in HTML naturally, even though I've signed up for the plain text variety) to their subscribers where in one paragraph they warn you about clicking on links provided in suspicious emails and a few paragraphs later they are advertising a link where you can click to download some free Microsoft software like Silverlight or Security Essentials or Windows Defender. And often those links are redirected through some 3rd party mass email distribution company—just for that extra secure feeling, I guess.
Argh!!! Please stop it already! Don't the people composing these emails realize that they are reinforcing the very practice that they claim to be trying to educate users to stop doing? Oh, the irony of it all. Are they trying to lampoon themselves?
So please don't be telling me how we ought to require that regular users be better educated about security matters so as to make the rest of us all safer if this is the best that we have to offer in user education. Some have gone as far as suggesting that ordinary citizens be required to get some sort of license before they are allowed to connect to the Internet. Yea, right. Like we're doing such an excellent job educating folks now. This is like politicians talking out of both sides of their mouths at once. We can't have it both ways, so let's get our own house in order first before we start pointing out how clueless everyone else is. Perhaps, just perhaps, it's because we've been sending them mixed messages.
Let me hear your rants and/or reactions on this topic and thanks for your time.
1. Eye candy is way more important than security, because after all, who wouldn't leave their bank or broker if they didn't send out slick looking emails. Sigh...
Sunday, April 3, 2011
"Those who cannot remember the past are condemned to repeat it." – George Santayana
During the past month or so, I have moved into the “modern” era. I stopped using my 3 year old cell phone and started using a smart phone (Droid X). During the same time, I also purchased a Barnes & Noble NookColor eBook reader. Of course, being the security geek that I am, I immediately rooted1 each of them to see what makes them tick as well as to make them more useful to me. Here are my initial security thoughts on this technology.
Since both of these mobile devices are Android based, I can't speak with any degree of experience for other devices based on Apple's iOS or Microsoft Windows, but I would be surprised if things are that much different there either.
In the early days of personal computing (and I'm collectively including all personal computers here, everything from Commodore 64s to Apple Macs, not just IBM PCs and their clones), none of these were originally designed with the concept of really being multi-user devices. Instead, it was assumed that these early computers were either used only by a single person, or perhaps shared by the entire family with the assumption that there was no need for privacy or separation.
Jump ahead about ten years, to the mid-1990s, and by then all the major vendors who were left (basically Microsoft and Apple) realized that this was the wrong assumption. So, for example, we see in Windows 95, the concept of a login screen (if one desired to use it), but under the hood, still no real integrated multi-user concept—a legacy that lived on until Windows XP. However, by then, the “damage” was done; parents had been “trained” that only a single login was required and for their children's PCs, they would just provide one login for their son or daughter. More often than not, that user account was one that had an administrator role.
Lessons from the Past
The result, at least in most of the households that I observed, was chaos. Teenagers—who were frequently more technically adept than their parents, but lacking the general wisdom of adults—would download and install various malware-infected games, P2P software, etc. In short, personal computing platforms of many households were so mired in malware as to be rendered completely unusable.
During the ten year period from about 1995 until 2005, I assisted numerous friends with malware removal. More often than not, the attack vector was something that a teenage son or daughter had deliberately downloaded to share music, pictures, or videos with their friends. (Note: nothing magical happened in 2005, other than I switched almost exclusively to using Linux so I had an “excuse” when someone asked for assistance with the latest Windows OS. In reality, I simply got wise enough to graciously say “no, I won't fix your computer” without friends taking offense. I wonder if surgeons are ever asked to perform free appendectomies by their friends. ;-)
But I think that the lesson learned by Microsoft and Apple was that compartmentalizing user data by user accounts was important, not only from a privacy perspective, but also from a security and stability perspective. Of course, Linux, growing out of UNIX roots, had the multi-user concept from the start, but in a few cases even some of the Linux distros attempted to “dumb things down” a bit (e.g., via automated login) to appeal to the casual users more familiar with Windows and MacOS environments.
Flash forward to the present. We see mobile devices—namely smart phones and tablet PCs—that are being treated by the manufacturers / distributors as single user devices.
From a vendor perspective, this makes sense...there simply is less code to develop. From a user interface perspective, it simplifies things there as well. But remember that this single user approach also seemed acceptable, for a short while at least, to the OS vendors of early personal computers. They too approached computing platforms from a single user perspective, only to realize later that it was detrimental. In the long term, going back and redoing things soon after they've been implemented improperly is always a liability because of the dreaded “backward compatibility curse” and the additional complexity required for the retrofit.
So a fair question to ask is “Is industry missing the mark with their assumption that these mobile devices will be exclusively used by a single individual?”. I'm willing to concede that for cell phones this may be a reasonable assumption, but based on how I've seen tablet PCs being used, I'd have to say the answer there is a definitive “yes, vendors have missed the mark”. My son has already used my rooted NookColor tablet and I have a friend whose entire family shares their iPad. I don't see it being that much different in other families unless those families are sufficiently well-to-do to buy each of the family members their own tablet device.
The code used to root the NookColor seems to be influenced heavily by the Motorola Xoom system, so maybe I'm jumping to conclusions here. But for both the original B&N NookColor as well as the rooted versions, there is no concept of access by different user accounts. The closest thing I see to a login screen for either is a 4-digit security code, and once the device is unlocked, the user is able to do anything.2 According to my friend, a similar situation exists with the iPad. It only supports one user account that connects to Apple's iTunes.
You might ask, why is sharing a tablet PC with your family members a problem? Well, if you are willing to risk the security of your tablet PC and all the data on it to your teenage son or daughter, it isn't. (Not to mention that a compromised tablet PC may also provide a jumping off point inside your router's firewall that allows easier access to your other computers and their data.) Most kids that I know are not too discriminating about what they might download. It's doubtful that most of them would even take the time to read through the list of permissions that an Android app is requesting, let alone fully understand the implications involved. While Apple and Google do their best to keep malware and spyware off of their respective official download sites, there are other sites from which one can download iPad and Android apps that might not do this at all. And while you may never visit these sites, your kids might. (You might argue that this is not possible if your device has not been jailbroken or rooted, and you may be correct. But this is an almost trivially accomplished feat well within the technical skills of today's youth. Furthermore, returning it to the stock OS—after one downloads and installs that favorite free warez version of Angry Birds—is also fairly easy.)
The question is why does it have to be this way? Android OS not only already supports multiple user IDs, but it uses them so that each of the different apps runs under a different user account. (I can't speak as to iOS as I've not yet researched it. As for Windows OS for mobile devices, I suspect that the stock OS also supports multiple user accounts under the hood, although it may not support multiple end user accounts.)
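To make this concrete, here is a minimal sketch (in Python, purely for illustration—this is not actual Android code) of the bookkeeping behind Android's per-app isolation: each installed package is assigned its own Linux-style UID, so one app cannot read another app's private files. The starting value of 10000 matches Android's documented first application UID; the class and method names here are my own invention.

```python
# Hypothetical sketch of Android-style per-app UID assignment.
# On a real device the Linux kernel enforces the isolation;
# here we only model the bookkeeping to show the idea.

FIRST_APPLICATION_UID = 10000  # Android reserves lower UIDs for system use


class PackageManagerSketch:
    """Assigns each installed package its own distinct UID."""

    def __init__(self):
        self._next_uid = FIRST_APPLICATION_UID
        self._uids = {}

    def install(self, package_name):
        # Each new package gets the next unused UID; reinstalling
        # an existing package keeps its original UID.
        if package_name not in self._uids:
            self._uids[package_name] = self._next_uid
            self._next_uid += 1
        return self._uids[package_name]

    def can_read_private_data(self, reader_pkg, owner_pkg):
        # With one UID per app, an app can read another app's private
        # files only if both run under the same UID.
        return self._uids[reader_pkg] == self._uids[owner_pkg]


pm = PackageManagerSketch()
uid_banking = pm.install("com.example.banking")
uid_game = pm.install("com.example.game")
isolated = not pm.can_read_private_data("com.example.game",
                                        "com.example.banking")
```

The point of the sketch is simply that the multi-account plumbing already exists at the app level; what's missing on these devices is exposing the same idea to multiple human users.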
If we have learned anything over the past 30 years, it is that there are inherent dangers associated with having a user run with a single, all-powerful account. At the minimum, there should be two accounts supported and presented to end users—one an administrative account used only for installing, upgrading, and deleting apps and other systems management functions, and one a limited-user account that has no special privileges at all. Ideally, for mobile devices likely to be shared, there also ought to be separate limited user accounts as well. With multiple limited end user accounts you won't have situations where 13 year old Bobbie makes unauthorized posts of embarrassing pictures to his 16 year old sister's Facebook page. The only alternative that I see is for families not to use the automated sign-on into social networking sites. (But convenience wins over security almost every time; recall McGraw's and Felten's “dancing pigs” comment—well, at least until Bobbie posts a picture of his sister wearing her ratty bathrobe, hair up in curlers, and face covered in acne cream. ;-)
Even more importantly, consider the idyllic vision that some mobile device moguls seem to dream about where your mobile device (usually a cell phone, but perhaps small tablets as well) contains all your credit information, enabling you to make automatic payments with your mobile device through the use of a Near Field Communications (NFC) chip. We need multiple end user accounts even more there. Surely no one wants their children to be able to use Dad's credit cards simply by waving a mobile device near the POS device. True, one could protect such electronic payments with a PIN as well, but most would find this an inconvenience and would likely either disable it or choose the same PIN that they use for the device itself to avoid committing yet another PIN / password to memory.
The Not-Too-Distant Future
The good news is, I think manufacturers and sponsors of mobile devices still have time to get this right. Currently, I don't think that the target is lucrative enough to entice all the malware writers to switch gears and immediately begin targeting mobile devices rather than PCs and Macs. I predict this will change within a few years, especially if the vision of making e-payments from mobile devices comes sooner rather than later. So if mobile device manufacturers are going to do something, the time is short.
The bad news is there is absolutely no consumer outcry to entice this to happen. Nor do I see anyone in the security community discussing it. So perhaps it never will change until consumers have suffered enough from the resulting security breaches to get angry. Until then, we will have to make do with a patchwork of AV and other bolt-on security solutions.
So, the question is what should we do as security professionals? Should we call this out as an issue, or am I just a misguided old fool telling people that the sky is falling?3
You decide and let me know what you think.