Saturday, October 28, 2023

Threat Modeling for Software Development Kits (SDKs)

I generally don't ask for help on my blog, but writing up my questions here is the easiest way to share them via social networks.

I was wondering if anyone could point me at an example or two of a formal threat model that was done on some Software Development Kit (SDK), that is, a software library. Not a web service API, where the attack surface is more obvious, but a traditional library.

I would like to do one for OWASP ESAPI. The closest I’ve gotten to this in the past was a document I wrote in 2010, OWASP ESAPI for JavaEE 2.0: Design Goals in OWASP ESAPI Cryptography, which had at its roots lessons learned from a proprietary cryptographic key service that I led my AppSec team in designing for a former employer, and for which I wrote up formal design goals. But only a minor part of that was an SDK.

A major reason why I’ve been thinking about this recently is that just last week, I published a new ESAPI Security Bulletin and an accompanying GitHub Security Advisory that had to do with file uploads, and in it I tried to explain why ESAPI couldn’t do more. In that simplified explanation, I tried (and most likely largely failed) to convey some of the major DoS-related threats that we couldn’t account for unless ESAPI were to enforce particular use cases on developers. And therein lies the problem. I thought, if I had a threat model to hold up and share with developers around various file upload use cases, I would have had a shot. But when you have a general purpose library that tries not to impose any specific use case, how do you go about describing a threat model for it? Even drawing a DFD for it is problematic, as it would likely be only one of several possible ones. (E.g., for file uploads, authenticated vs. anonymous file uploads are two very different use cases that, in my opinion, would have two very different threat models.)
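
To make that concrete, here is a minimal, hypothetical sketch of the kind of use-case-specific guardrails an application would have to enforce on its own for file uploads. The size cap and extension allow-list are illustrative assumptions; they are exactly the sort of policy decisions that only the application, not a general purpose library like ESAPI, can reasonably make:

    import java.util.Set;

    // Hypothetical application-side upload guard; the limits below are
    // illustrative assumptions, not values that ESAPI itself prescribes.
    public final class UploadPolicy {
        private static final long MAX_BYTES = 10L * 1024 * 1024;  // 10 MiB cap to limit DoS exposure
        private static final Set<String> ALLOWED_EXTENSIONS = Set.of("pdf", "png", "jpg");

        public static void check(String fileName, long declaredSize) {
            if (declaredSize <= 0 || declaredSize > MAX_BYTES) {
                throw new IllegalArgumentException("upload size out of policy");
            }
            int dot = fileName.lastIndexOf('.');
            String ext = (dot >= 0) ? fileName.substring(dot + 1).toLowerCase() : "";
            if (!ALLOWED_EXTENSIONS.contains(ext)) {
                throw new IllegalArgumentException("file type not allowed");
            }
        }
    }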

I think it is obvious that for a library of several general security controls like ESAPI, which tries to provide controls that can address most of the OWASP Top Ten, one would first have to start by decomposing it into basic components such as authentication, authorization, data validation, output encoding, safe logging, cryptography, etc., and likely address it as a collection of several separate, much smaller threat models. Otherwise, you would never be able to deal with all the different abstractions in a context-level DFD that would be simple enough and concrete enough to be comprehensible to anyone, much less developers. Part of the problem, in fact, is that several of the components (e.g., data validation, output encoding, and safe logging, to name just three) deal only with “strings” as input and output, and it is extremely difficult to relate a general string to any specific concrete asset.

An example of this is ESAPI’s Logger component, which attempts to provide “safe logging”, where “safe logging” is meant to 1) prevent “Log Forging” (aka “Log Injection”), that is, CWE-117, as well as 2) provide some basic protection against XSS attack strings inserted into logs that may later be viewed via an HTML browser. However, one thing that ESAPI’s Logger and its concept of “safe logging” does not address at all is preventing sensitive / confidential information from ending up in log files. Given that the tainted inputs to its methods are general Java Strings, it is not possible to examine them in the context-free manner that ESAPI sees them and know whether they contain sensitive data that should not be placed into a log. That potentially could be done with a slightly different interface or customization by application developers, but it seems a stretch for a general purpose security library.
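
To illustrate the flavor of control involved (a simplified sketch, not ESAPI’s actual Logger implementation), a log forging defense largely boils down to neutralizing CR/LF characters before a string reaches the log, plus optionally HTML-encoding it for the benefit of browser-based log viewers:

    // Simplified sketch of a CWE-117 style defense; not ESAPI's actual code.
    public final class SafeLog {
        public static String sanitize(String msg) {
            if (msg == null) return "(null)";
            // Replace CR and LF so an attacker cannot forge new log entries.
            String clean = msg.replace('\n', '_').replace('\r', '_');
            // Minimal HTML encoding in case the logs are viewed in a browser.
            return clean.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;");
        }
    }

Note that nothing in a sketch like this can tell whether msg happens to contain a password or an SSN; that is precisely the context the library does not have.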

However, I think that a well-developed overall ESAPI threat model would help the ESAPI team communicate to developers where the security guardrails are that ESAPI is able to provide and where they are on their own. I have communicated with many application developers using ESAPI who seem to struggle with these basic concepts. Sometimes it is because our Javadoc is sorely lacking or because we have few design documents. (I think we only have two: one for the ESAPI cryptography that I mentioned earlier, and a much more basic half-page one covering the Logger that was originally just an email between ESAPI team members.) Similarly, we have only one high-level user guide for any of the components (for the symmetric cryptography introduced in ESAPI 2.0). I think developers can learn a lot by reading through well-constructed threat models.

But that brings me to my main question: How does one construct a threat model for a general purpose library? What does a good one look like, and how would one for a library (which may have no specific use cases in mind) differ from threat models for applications or for specific overall systems?

And my second question (assuming someone can answer the main question above) is: would anyone be willing to assist me in developing a threat model for OWASP ESAPI?

Thursday, February 16, 2023

 

Why I don’t Myth the Old Days (or there’s no accounting for bug fix costs)


This is a commentary on a portion of Mark Curphey’s blog post “On the left, on the right, and wiggle in the middle”.

It’s not that I disagree with Curphey’s overall message that “shift-left is a dangerous urban myth”, but I think perhaps both he and Laurent Bossavit, whose Gist post he quotes, might not be considering the proper context of the presumed myth that “it's 100x cheaper to fix bugs in development than it is in production”. Specifically, I think the citation that Pressman used in his 1981 book was for a completely different era of software development, and that makes a significant difference that is not being accounted for. Ever since then—because it benefits ROI marketing hype for certain companies—it’s been constantly pulled out of context and taken on a life of its own.

However, in a nutshell, when Pressman originally wrote it—for the worst-case scenarios at least—it probably was close to being in the right ballpark. That doesn’t excuse the companies that still repeat it (whom Mark rightfully calls out), but I don’t doubt those figures were close to correct back in 1981. So I think there’s a bigger picture being missed here.

Let me explain. And for those of you who are not as old as dirt, as I am, let me give you some history.

In the early 1980s, the hardware was archaic by today’s standards and the waterfall methodology for software development was the only game in town. If you were lucky, you got to work on a machine that had 16-bit addressing and either Version 7 or PWB Unix. At (then AT&T) Bell Labs at the time, it was not uncommon for 20+ developers to share a Unix system on a DEC PDP-11/70 (or worse) and connect to it at 9600 baud using DEC VT-100 terminals. Programming was generally done in C or assembly language (or often a mixture) using the ‘ed’ text editor. Compiling the Unix kernel from scratch would take 3 or 4 hours in single-user mode, and much longer on a fully loaded system. The only debugger at the time was ‘adb’, and it only displayed code in native PDP-11 assembly language. But perhaps most importantly, all software was distributed on 9-track tape.

If you put all those things together, it’s not hard to see how catching a bug early in the development process could easily be a factor of 10 or more cheaper than catching and fixing it post-release.

The first project I was on at Bell Labs was called AMARC. (I think AMARC was an acronym for “Automatic Message Accounting Recording Center”.) Like most other projects at Bell Labs, AMARC was on a two-year release cycle. When we did a regular AMARC release, it would be written on 9-track tapes and delivered, often by special courier, to most of the Baby Bell (officially known as Regional Bell Operating Company, or RBOC) central offices. (AMARC collected long distance billing information, and charging for long distance at that time was one of the primary ways that AT&T funded Bell Labs R&D.)

But occasionally a bug surfaced that was serious enough that AMARC had to ship emergency patches. I remember one such patch. AMARC ran a proprietary custom duplex real-time operating system (this was before DMERT, for those of you old enough to remember that) on dual PDP-11/70s. There was a bug that was causing AMARC to crash, and when its paired mate rebooted the crashed processor, the pair got stuck in an infinite cycle of reboots: when the rebooted mate came back up, it caused the other one to crash, which then rebooted in turn. So there was this wicked cycle of endless reboots until both were shut down. But the bigger problem was that AMARC recorded up to 2 hours’ worth of its billing data on 9-track tapes, and when a machine crashed, that tape would get trashed. So that required an emergency patch. (If I recall correctly, the Bell System called those emergency patches Broadcast Warning Messages, or BWMs for short. Whatever the actual term was, I will henceforth refer to them as BWMs in this post.)

The BWMs were often generated using ‘adb’ in write mode (-w) to allow for patching binaries. Back in the day, AMARC would allocate patch space by writing large sections of NOP instructions, which would later be filled in with the actual fix, and then a jump instruction would jump to the proper spot. So even if the bug was in some C code (which it often was), the patch would be in assembly code, and then adb would be used to patch the actual AMARC baseline binary (all the patches were done in octal, by the way!) to create a new point release of AMARC that became part of the BWM package.

The creation of a BWM patch alone was a very error-prone and tedious process. Depending on how complex the accompanying BWM installation instructions were, AMARC management would actually put several Bell Labs engineers on a flight, along with their precious 9-track BWM cargo, which they would hand-carry to the RBOC central offices, where they would assist the RBOC in installing it.

So it wouldn’t surprise me in the least if those worst-case scenarios added another cost factor of 2 or 3 at a minimum. There were a lot of labor costs involved with those BWMs, as well as travel-related expenses. It was a different day than today, when software is delivered online. Most people forget how much of a pain it was to install software from DVDs, and some may even remember floppy disks, but 9-track tapes were worse. There were reported cases where engineers would get to a central office and couldn’t install a tape because it turned out the read/write heads on the TU10 tape drive were out of alignment, so they had to send a new tape. (They did try reading the tape on a different drive than the TU10 that had recorded it to ensure that those heads were within spec.)

Now if there had been a major design error that caused a problem like the one I described, instead of some coding error, it would have been even more costly, especially if you account for all the lost revenue when AMARC crashed and lost about 2 hours of long distance billing data.

I can’t speak to the IBM figures that Roger Pressman cited because I’m not aware of their development processes or how their revenue stream worked. But if you consider all the additional expenses and all the lost revenue in these worst-case scenarios, it’s certainly plausible that a bug found early in the design process could cost 100x or so less to fix than one discovered post-release. We live in a very different world today.

So, in my mind, if Pressman is to be blamed for anything, it’s that he never bothered to update those figures for modern-day development processes. Unfortunately, once outdated hyperbole like this gets started, it takes on a life of its own, so yeah, there’s definitely damage done.

One last thing... I remember reading and discussing a proprietary Bell Labs Technical Memorandum with a different project team somewhere around 1985-1987. The supervisor of the test team for the Network Control Point (NCP) project, Dennis L. McKiernan [yeah, the same one who has authored many fantasy novel series], brought it to our attention. The details are fuzzy, but I think the author of that Technical Memorandum studied the #5ESS project (which was Bell Labs’ biggest project at the time in terms of lines of code) and reported a more modest factor of 20 between bugs found and fixed early in the software development process and those fixed post-release. I don’t recall the TM author’s name, but if you know any old hats from Bell Labs (besides me), they might remember. (Some of the surviving dinosaurs from that era at Bell Labs had much larger brains than I. :) I don’t think the report made it into the BSTJ, but it might have; if we could find the author’s name, we might be able to dig it up.


-kevin “Mr. TL;DR Dinosaur” wall

P.S.- I wish I could say that the situation of intellectual laziness has improved for those citing statistics in IT / computer science projects, but even papers written by CS professors in academia frequently omit so many details that any experimental results are hard to verify because the experiments are seldom repeatable. (There are some exceptions, but they are far from the norm, especially when it comes to software engineering practice and metrics. Maybe that would make a nice topic for a future Crash Override blog post. I would attempt it, but this one completely wore me out, and it takes a lot longer for dinosaur batteries to recharge. ;-)

Friday, February 8, 2019

Self @deprecation -- My life as a Javadoc comment

My current job for the past 5+ years involves doing security code reviews.

During the past 2 days, we had been having a lengthy conversation about how to map a third-party assessment finding for Server-Side Request Forgery (SSRF) to one of our team's categories... essentially a task of pounding a square peg into a round hole. A mini-debate ensued when a colleague asked for an example or two of SSRF, which I offered. That colleague then decided to write up a small code snippet to test against one of our internal proprietary tools, to see if he could get it to recognize SSRF. One of the lines from his example code snippet had this gem in it:

     request.getFromKevin("url");

My Reply

For some reason—perhaps simply in an attempt to put the seemingly endless email thread to bed—I decided to poke fun at myself in a self-deprecating way. Here was my response to the email. (It's probably too long, and no one will read it though. :)

Wait, what? HttpServletRequest.getFromKevin(String) ???  I want to see the Javadoc for that one.

It probably reads something like:

getFromKevin
String getFromKevin(String url)
A promising-sounding method that in fact does nothing, much like Kevin. In fact, the url parameter is completely ignored and the contents of /dev/urandom are read for 3GB or until the application crashes, whichever comes first. This method is used to simulate reading Kevin’s random babble that he posts in response to simple Yes/No questions, forcing you instead to drink from a fire hose until your insides burst.
Parameters:
url - a String which is ignored, just like we try to do with Kevin
Returns:
a String containing random babble or a PleaseMakeHimStopException is thrown if the application runs out of memory trying to process the request

Anyway, let me know what you think. For those who are familiar with my TL;DR tendencies, you're probably thinking this fits me to a tee.

-kevin
P.S.- Follow me on Twitter @KevinWWall and RT if you enjoyed this. (Of course, others are saying "No, no. Don't encourage him or he will never shut up.")

Saturday, October 15, 2016

Crypto Humor


On October 6, 2016, I presented a talk at the Rochester Security Summit titled "Common Developer Crypto Mistakes".

When I found out that my time slot was going to be the one right after lunch, I thought perhaps a little relevant humor would be good to help wake up the audience.

So, I did a bunch of research (okay, okay, this was way too much fun to qualify as “research”, but hey) and searched the Internet for jokes related to cryptography. (Using crypto-related cartoons / drawings, of which there are a lot more, was basically out because of my company's legal department's concerns about potential copyright issues.)

The favorite joke that I found that I really wanted to use was this one, but since I was presenting the slides from a PDF slide deck, it was a bit hard to do without prematurely revealing the punch line:

Q: How many cryptographers does it take to change a light bulb?
A: ^T2u#�5�e|�Z�Lj�lz�jC#M

So instead, I ended up going with this joke:

I was going to start off by telling you a couple of good cryptography jokes, but unfortunately you can't tell the difference between them and random gibberish, so I decided against it.

Here are a few others that I found somewhat humorous but that I was not able to fit into the prezo or that I considered ill-suited for the audience. You may or may not enjoy these, depending on how warped your sense of humor is and how much of a crypto background you have:

Have you heard about the cryptographer who replaced his door with one that is 3 feet thick?
The lock on the old door could only take short keys.


Two hashes walk into a bar, one was a salted.


I was nearly arrested for SHA1 checksumming a doctor’s prescription. Luckily the hash was for medicinal purposes.


I also ran across this long(ish) but rather humorous discourse by John Gordon that you might enjoy.

And lastly, there's my email .sig that I've been using ever since the Snowden revelations:
NSA: All your crypto bit are belong to us.
which many people like, but I didn't use in the presentation because some also apparently find it offensive.

Anyhow, thanks for smiling!
-kevin

Friday, March 28, 2014

ESAPI No Longer an OWASP Flagship Project

I read the news today, oh boy…

By now, you’ve probably heard about several of the OWASP board members (and perhaps some bored members as well) coming to the conclusion that several of the current OWASP flagship code projects should be demoted and that others should take their places. (If you’re not up on the news, you can read about it in these 3 email threads archived here, here, and here.)

Among the flagship projects suggested for demotion is OWASP ESAPI, of which I am the project owner for the Java implementation. After hearing about the recent ESAPI Hackathon, you may be puzzled or perhaps even surprised by this news. You shouldn’t be. While it may sound like heresy coming from an ESAPI believer, and although I am not happy about it, I think this action is long overdue and is best for OWASP. So I guess I’m sorry if I disappoint those of you who are ESAPI fans because I’m not standing up for ESAPI and defending it, especially since it's not yet a done deal.

I’m not, because I can’t. I, for one, can see the writing on the wall. (Pun intended.) All of the allegations that are being made against ESAPI are spot-on:
·         Only one minor point release since July 2011.
·         164 open issues, including 4 marked Critical and 11 marked as High.
·         Far too many dependencies, something that has never been addressed despite being promised for almost 3 years.
·         Wiki page still in the old OWASP format.
·         Minimal signs of life for ESAPI 3.0 on GitHub and ESAPI 2.x for Java on Google Code. Zero signs of life for implementations in other programming languages. [Note: Discounting the SalesForce one, as I’ve not kept track of it.]
·         For ESAPI for Java, a boogered-up architecture where everything is a singleton, making some things such as mock-testing all but impossible (see the sketch below this list). Less than 80% test code coverage, in part because of that.
·         Lack of any significant user documentation outside of the Javadoc and the ESAPI crypto documentation.
·         Disappointing participation at the ESAPI Hackathon.

I could go on, but I won’t.
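
To illustrate that singleton complaint (with hypothetical classes, not ESAPI's actual ones): a control that is reachable only through a hard-wired getInstance() cannot be swapped for a mock in a unit test, whereas an injected interface can:

    // Hypothetical illustration of the testability problem; not actual ESAPI code.
    class SingletonAuthorizer {
        private static final SingletonAuthorizer INSTANCE = new SingletonAuthorizer();
        private SingletonAuthorizer() {}
        static SingletonAuthorizer getInstance() { return INSTANCE; }
        boolean isAuthorized(String user, String action) {
            return false;  // imagine a real check that reads config files, hits LDAP, etc.
        }
    }

    // The same control behind an interface, injected via the constructor,
    // lets a unit test pass in a trivial mock instead:
    interface Authorizer {
        boolean isAuthorized(String user, String action);
    }

    class FileService {
        private final Authorizer authz;
        FileService(Authorizer authz) { this.authz = authz; }
        void delete(String user, String path) {
            if (!authz.isAuthorized(user, "delete")) {
                throw new SecurityException("denied");
            }
            // ... delete the file ...
        }
    }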

ESAPI has dropped the baton, and I’ll take as much blame for that as anyone. I have neither the contacts nor the people skills needed to entice developers to participate in ESAPI. Of the 3 or 4 people that I have recruited at one time or another, none have contributed even a single bug fix. And it goes beyond that. I’ve still not put together a release based on the few updates (a couple of minor bug fixes and a small enhancement in handling configuration files) that were done by those participating in the ESAPI Hackathon. I have also yet to release a fix for CVE-2013-5960. (Aside: Hey! I could use a little help here! Ideally someone who knows a thing or three about crypto.)

It’s time that ESAPI yield the baton to some of the other worthy candidates like OWASP HTML Sanitizer, OWASP Dependency Check, and OWASP Java Encoder to name just a few.

What does this mean for ESAPI support?

Well, probably nothing. I mean, honestly, the support really hasn’t been that great in the past 3 years anyway. I do suspect that it might give those thinking about adopting ESAPI for a new project a reason to rethink that commitment, though.

I will personally commit to trying to fix any known bugs in the ESAPI crypto (including a few you didn’t even know were there; well, okay, maybe “annoyances” would be a better term), including a fix to CVE-2013-5960. But I am likely through with any planned feature enhancements other than the minor ones that I’ve been working on. If I do any further enhancements to the crypto features, I will split off ESAPI Crypto into its own new incubator project. No more promises of time frames, though. I work a 40+ hour-a-week job like most of you; the ESAPI work is volunteer hours, and I have family and other commitments as well.

As for ESAPI 3, will it flourish or die out before it even gets started? Well, that’s up to you, OWASP Community. I certainly am a believer in the ESAPI concept of a common set of interfaces for common security controls, but the problem has always been the implementation, not the concept. As they say, the devil is in the details.

So if you want to volunteer to work on ESAPI, give us a shout out on the ESAPI-Dev list. If you aren’t already signed up, you can sign up here.

Best regards,
-kevin

Sunday, June 23, 2013

Appalachian Security

This is just too funny to keep to myself. It was written about 3 years ago by a former colleague of mine who was the PM for our Application Security group. He wrote it when I announced that I was leaving the Application Security team to join the Information Security team under Corporate Security. I happened to run across it again as I was cleaning out stuff in preparation for my last day at CenturyLink (which was Friday, 6/21/2013). It was originally posted along with a photo of The Beverly Hillbillies' character Jed Clampett. Out of respect for Buddy Ebsen, who played Jed Clampett, I've chosen not to include the photo so as not to diminish Ebsen's legacy by association with me.


Naturally, this was meant to be accompanied by the original theme song from the Beverly Hillbillies. Now maybe if I can just get Gary McGraw and Where’s Aubrey to record it... :)


Enjoy,
-kevin
Appalachian Security
by Mark Hersman (July 2010)


Come and listen to a story about a man named Kev
A poor engineer, barely kept his family fed,
Then one day he was "working" on an app,
His pager started beeping, nearly woke him from his nap.


Awake, that is. Consciousness.


Well the first thing you know ol Kevs got a scare,
His judgement says "Kev get away from there"
Says "The Lavratory is the place you ought to be"
So he slips Hanbin the pager, and he goes to take a ……...


A break, that is. Quiet time.


Well now its time to say goodbye to Kev and all the guys.
And they would like to thank you fer usin ClearTrust APIs.
You're all invited back again to this locality
To have a heapin helpin of app security


Kevin style that is. Set a spell, Take your shoes off.


Y'all come back now, y'hear?

Monday, November 12, 2012

RSA Distributed Credential Protection: Solving the Wrong Problem?

Recently, RSA Distributed Credential Protection (DCP) was announced by RSA Security.  I’ve read the literature, sat through a presentation by an RSA sales representative, and watched the YouTube videos. And most of all, I have formed an opinion.

Being a crypto geek of sorts, I’ll be the first to admit that this seems like a really cool and interesting application of secret splitting.  But, as much as RSA makes it sound like the most innovative thing since sliced bread, I believe that it is fundamentally a solution to the wrong security problem. Let’s have a look at why.
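
For those unfamiliar with the technique, the core idea behind secret splitting is that a secret is divided into shares that are individually useless. Here is a toy two-share XOR sketch to convey the idea; this is illustrative only and is not DCP's actual scheme:

    import java.security.SecureRandom;

    // Toy two-share XOR secret splitting; illustrative only, not RSA DCP's scheme.
    public final class SecretSplit {
        public static byte[][] split(byte[] secret) {
            byte[] share1 = new byte[secret.length];
            new SecureRandom().nextBytes(share1);            // share 1 is pure randomness
            byte[] share2 = new byte[secret.length];
            for (int i = 0; i < secret.length; i++) {
                share2[i] = (byte) (secret[i] ^ share1[i]);  // share 2 = secret XOR share 1
            }
            return new byte[][] { share1, share2 };          // either share alone reveals nothing
        }

        public static byte[] combine(byte[] share1, byte[] share2) {
            byte[] secret = new byte[share1.length];
            for (int i = 0; i < share1.length; i++) {
                secret[i] = (byte) (share1[i] ^ share2[i]);
            }
            return secret;
        }
    }

Store the two shares on separate servers, and an attacker has to compromise both before recovering anything.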

As I’ve written many times, security is fundamentally about ensuring trust and managing risk. When attempting to lower risk, there is always a cost / benefit balance that needs to be studied. Just as one would not spend $10,000 on a home safe only to store $1000 in it, IT is not going to spend six figures on a solution that will not reduce the perceived cost of the risk by at least that much. (Note that while I am not certain of the pricing of RSA DCP, it is not farfetched to think that an enterprise rollout would be near the six-figure range.)

RSA advertises DCP as transparent to the end users, and so it is. However, that is not the only thing that is important here. Another major factor, which only came out during the live presentation by RSA, is that any application wishing to take advantage of DCP needs to be changed to adapt to its API. This means, of course, that if you have N applications all authenticating users against an SQL database (or perhaps an LDAP directory store, if the API works with that), all N of those applications need to be changed. If you fail to change one application, then you still have the users’ credentials stored in a single data store where DCP is not applied, and consequently they remain unprotected. So take the cost of licensing the RSA DCP software and add to it the cost of each of your N applications integrating the DCP API, and you will have something closer to the total cost of deployment. Of course, the operational costs are also likely to increase somewhat as well: whereas before you had but a single data store for said credentials, now you have two. The end result is that the total cost to incorporate RSA DCP into your environment is likely to exceed the six-figure level even if the software licensing costs are nowhere near that amount.

Well, still, that might be OK, right? After all, if the perceived benefits greatly exceed the total costs of mitigation, we still have a security win.

So what benefits does RSA DCP bring to the enterprise? According to the RSA press release as well as this YouTube video, the threat that RSA is trying to prevent is the “smash-and-grab” of credentials by an attacker. Specifically, DCP is designed to make it more difficult for an attacker who has infiltrated your company network and managed to get direct access to your database server to obtain credentials (either plaintext or hashes).  DCP would likely also mitigate a rogue DBA doing a “smash-and-grab” of your company’s credential data, as long as care was taken to provide separation of duties and not give a single administrator a DBA role on both DCP servers.

So we still need to answer this question: Is this a common way for an attacker to gather user credentials?  In my opinion, it is not. By far the biggest attack vector for adversaries stealing credential material is SQL (or possibly LDAP) injection attacks. Will DCP do anything to mitigate SQLi attacks? The answer would appear to be “no” (at least according to the RSA sales rep we talked to). In fact, given that one has to bolt new DCP API code into one’s application to use DCP, there is a chance that new SQLi vulnerabilities may be introduced as developers change the application code.
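
(For reference, the standard mitigation for SQLi is not a product at all, but parameterized queries. A minimal Java sketch, with a table schema invented purely for the example:)

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    // Minimal illustration of a parameterized query; the schema is invented.
    public final class UserDao {
        public static boolean credentialsExist(Connection conn, String user, byte[] pwdHash)
                throws SQLException {
            String sql = "SELECT 1 FROM users WHERE username = ? AND pwd_hash = ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, user);   // bound as data, never spliced into the SQL text
                ps.setBytes(2, pwdHash);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next();
                }
            }
        }
    }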

So is there a place where using RSA DCP would make sense? I believe so, but I think it is a niche market rather than the broad market RSA Security would like it to be.  RSA DCP could be very valuable where you have extremely high-value secrets (credentials or otherwise) that are difficult to replace. The perfect example that comes to my mind is protection of the RSA Security SecurID seeds. Compromise of those SecurID seeds required RSA to replace all the hard-token SecurID devices.  In fact, it is not unreasonable to speculate that this product came directly out of researching ways to protect those high-value credentials from smash-and-grab direct attacks in the wake of that breach. If RSA Security wishes to broaden the market for their new DCP product, I believe the best approach is for them to integrate DCP seamlessly with their other products, starting with RSA Access Manager. If you are going to make believers of us security folk, you first have to be willing to eat your own dog food.

However, in the meantime, for your regular user passwords, hashing with a sufficiently long random salt, enforcing password complexity rules when users select their passwords, and enforcing account lockout are likely to be sufficient protection for your customers’ passwords.  If doing those things is not sufficient, you seriously need to consider whether passwords are a strong enough form of authentication for your users.
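
For the record, here is roughly what that baseline salted-hash advice looks like in Java. This is a sketch using PBKDF2 from the standard library; the iteration count and sizes are illustrative assumptions you would tune for your own environment:

    import java.security.SecureRandom;
    import javax.crypto.SecretKeyFactory;
    import javax.crypto.spec.PBEKeySpec;

    // Sketch of baseline salted password hashing; the parameters are illustrative.
    public final class PasswordHasher {
        private static final int ITERATIONS = 100_000;  // tune to your hardware budget
        private static final int SALT_BYTES = 16;       // a sufficiently long random salt
        private static final int KEY_BITS   = 256;

        public static byte[] newSalt() {
            byte[] salt = new byte[SALT_BYTES];
            new SecureRandom().nextBytes(salt);          // unique random salt per password
            return salt;
        }

        public static byte[] hash(char[] password, byte[] salt) throws Exception {
            PBEKeySpec spec = new PBEKeySpec(password, salt, ITERATIONS, KEY_BITS);
            return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1")
                                   .generateSecret(spec).getEncoded();
        }
    }

Store the salt and the resulting hash; at login, recompute the hash with the stored salt and compare.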

Note that the views expressed herein are wholly my own and do not represent those of my company, of OWASP, nor any other organizations with whom I am associated.

Regards,

-kevin