Sunday, May 22, 2005

Software Magazine - Software Security Code Review: Getting it Right Before You Release

Software Security Code Review:
Getting it Right Before You Release
by Mark Curphey and David Raphael

Code reviews help in two ways: they let development teams determine how hackers might break their code, and they teach teams ways to build more robust applications

In the world of security there tend to be two camps: those who break things and those who build things. It is often said that people who are good at building things are not good at breaking them, and vice versa. Whether or not that is true, it is certainly true that if you understand how people break things, you can usually figure out how to build them better. Software security code reviews serve two functions: they allow development teams to determine how malicious hackers might break their code, and they help those teams learn ways to build more robust software.

The majority of security testing has been, and continues to be, what is referred to as black box testing. The name describes a type of testing in which the test team does not know how the system works on the inside. This black box approach is the same technique a hacker uses; the tester must deduce how the system functions before deciding how to attack it. In reality the process is heavily driven by guesswork. While most people still use this approach today, there is a better way.

The system's source code is its DNA, describing exactly how the software works. The advantages that this DNA provides in diagnosing software issues should be obvious. Even hackers now favor the advantages code analysis affords, typically decompiling security patches released by vendors to determine the original weakness in the system that the patch fixes.

In this article we describe the approach to software security code reviews we have developed at Foundstone while working with hundreds of clients and numerous types of systems. Our approach has two main phases and can be applied to most development processes and most technologies with little customization.

The two key phases are:

* Threat Modeling
* Code Inspection

In the first part of this article we will look at threat modeling and in the second part we will delve into code inspection.

Threat modeling is a technique pioneered by Microsoft; it is essentially a pragmatic twist on a classic security technique called risk analysis. The idea of threat modeling is to systematically explore the application while thinking like an adversary. Thinking like an adversary (attacker) is the key element: the process forces users to explore weaknesses in the architecture and determine whether adequate countermeasures are in place. Building a threat model is best conducted in a workshop format with a team that includes the system architects and system developers, all gathered in one place (physically or virtually). The team should appoint a scribe to document the model and an experienced threat modeler to lead the process. It is worth noting that workshops work well only when cell phones are turned off, PDAs are shut down, and the team can dedicate its full attention to the exercise. The tasks to complete are as follows:

* Describe the System
* List and Tag Assets
* Define System and Trust Boundaries
* List and Rank Threats
* List Countermeasures

It is rare that these steps are conducted in a strict incremental manner. As with software development, iterative processes tend to be more pragmatic and more complete.
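To make the five tasks concrete, here is a minimal sketch, in Python, of the artifacts they produce. This is our own illustration rather than part of any formal method, and every name in it is hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    tags: set = field(default_factory=set)      # e.g. {"security-significant"}

@dataclass
class Threat:
    description: str
    target: str                                 # name of the asset attacked
    countermeasures: list = field(default_factory=list)

@dataclass
class ThreatModel:
    description: str                            # system description
    assets: list = field(default_factory=list)
    boundaries: list = field(default_factory=list)  # system and trust boundaries
    threats: list = field(default_factory=list)

# Each workshop task fills in one part of the model.
model = ThreatModel(description="B2B web service")
model.assets.append(Asset("password file", {"security-significant"}))
model.threats.append(Threat("steal the password file", "password file"))
```

The point of the skeleton is simply that each task's output feeds the next; iterating over the model, as noted above, refines every part of it.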

Describe the System

The first session of the modeling process starts with a brainstorming session in which the team's objective is to describe the complete system, both graphically and textually. Experienced modelers will ensure that the team describes the system as it is going to be built or has been built, not as the team would ideally like it to be built! In general you should focus not on style or layout but on the content itself. Heavy use of whiteboards or flip charts is common and effective. We have found UML models such as use case diagrams and sequence diagrams to be very useful starting points, but they are certainly not essential. Microsoft recommends data flow diagrams, but in our experience the specific format is not important; developing an accurate and realistic description of the system is. Graphical descriptions work well because they are unambiguous and easily refactored. As well as architectural descriptions (usually graphical), the team should describe the functionality and features, such as “Login” or “Obtain Balances”. An acid test of this task is whether a layperson can understand what the system does and how it does it.

List and Tag Assets

Using the drawings, annotations, and lists that describe the system, the team should then create a list of the assets that make up the entire system. Assets can generally be split into tangible and intangible assets. Tangible assets may include things like password files, configuration files, source code, databases, and network connections. Intangible assets are very difficult to describe, and many people waste an unnecessary amount of time attempting to list them. The team should list the assets in a table and, where possible, group them into common asset types. The next step, tagging the assets, is extremely important. There are several ways to do this and several tags worth considering to help with the code inspection. At the highest level you should determine whether the asset is architecturally significant from a security perspective. That is to say, does the asset play a role in enforcing the security model? If it does, it is highly likely that you will want to perform a security code inspection on it. Other tags to consider include security mechanisms such as authentication, authorization, data protection, and data validation.
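As a sketch of what tagging buys you, the following Python fragment (asset and tag names are invented for illustration) shows how the security-significant tag selects the code-inspection candidates:

```python
# Hypothetical asset list for a web application; the tag names are
# illustrative, not a fixed taxonomy.
assets = [
    {"name": "password file",  "tags": {"security-significant", "authentication"}},
    {"name": "config file",    "tags": {"security-significant", "data protection"}},
    {"name": "product images", "tags": set()},
]

# Assets that play a role in enforcing the security model become the
# candidates for the later code inspection.
to_inspect = [a["name"] for a in assets if "security-significant" in a["tags"]]
print(to_inspect)  # → ['password file', 'config file']
```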

Define System and Trust Boundaries

Next, leveraging the system architecture model and the asset list, the team must define system and trust boundaries. System boundaries are places where data can flow into or out of the system and/or its various components. For example, if a B2B web service is the evaluation target, the system has a boundary where the service is exposed to the public network. These system boundaries are later used to explore requirements for things like authentication, authorization, and data validation. As well as socket connectivity, system boundaries include things like access to registry keys or file systems. UML sequence diagrams have proven very valuable in defining system boundaries and should be used if available.

At the same time as the team defines the system boundaries, it should also define what are called trust boundaries. These are virtual domains of components or parts of the system that implicitly trust each other in some shape or form. Typically, groups of components or pieces of code in a system place a degree of trust in one another. By describing these trust boundaries we can explore, in the next task, the path of least resistance an attacker may travel to exploit an asset. In complex models we can tag trust boundaries with a level of trust in such a way that we can model both implied and explicit trust.
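One way to picture tagging boundaries with trust levels is the sketch below: each zone gets a numeric level, and any data flow into a more-trusted zone is flagged as a crossing that needs scrutiny. The zone names and levels are hypothetical, chosen only to illustrate the idea:

```python
# Illustrative trust levels: higher number = more trusted zone.
trust_levels = {
    "public internet":      0,
    "web service endpoint": 1,
    "business logic":       2,
    "database":             3,
}

# Data flows observed in the architecture description (src, dst).
flows = [
    ("public internet", "web service endpoint"),
    ("web service endpoint", "business logic"),
    ("business logic", "database"),
]

def crosses_trust_boundary(src, dst):
    """A data flow into a more-trusted zone crosses a trust boundary."""
    return trust_levels[dst] > trust_levels[src]

# Every flagged crossing is a place to ask about authentication,
# authorization, and data validation requirements.
crossings = [f for f in flows if crosses_trust_boundary(*f)]
```

Here all three flows move inward to more-trusted zones, so all three are flagged.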

Checkpoint: At this point in the process we have gathered a great deal of information. We know exactly what the system does, the components it comprises, and how it all functions from a security perspective. We will have spent approximately 4-6 hours so far.

List and Rank Threats

Now the fun starts and the team turns to the creative portion of the threat modeling exercise. The idea is to develop a list of realistic threats to the system. Confusion at this point is one of the most common mistakes made by inexperienced modelers. To be clear, a threat is not a vulnerability (a weakness in a system). We define threats and vulnerabilities as follows:

* Threat: capabilities, intentions, and attack methods of adversaries to exploit, or any circumstance or event with the potential to cause harm to, information or an information system. [NIS]
* Vulnerability: a feature or bug in a system or program which enables an attacker to bypass security measures or [CERT 1993]: an aspect of a system or network that leaves it open to attack.

Examples of threats include “steal the password file” or “modify the configuration file to obtain unauthenticated access”.

Listing threats is both creative and fun. The team is attacking the system on paper and tends to become engaged very quickly. But a word of warning: this part of the process can be enlightening, competitive, and difficult to constrain. Enlightening because it quickly becomes obvious whether the system could be compromised by certain common threat vectors. Competitive because human nature kicks in as team members try to develop “cool” threats. Last but not least, it can be difficult to constrain. The leader must ensure that the team remembers this is a model, not an attempt to develop a list of every possible threat. The leader must also (often delicately) rein in the imagination of those individuals who decide that alien hackers from Mars melting the electrons in transport is a threat!

Each threat must then be categorized and ranked according to criteria that enable the team to prioritize them. Microsoft has developed several schemes for this ranking. The two best-known schemes in the Microsoft world are STRIDE and DREAD (although I am reliably informed Microsoft is moving away from DREAD). STRIDE is a classification scheme standing for Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privilege. DREAD is a ranking model and an acronym for Damage potential, Reproducibility, Exploitability, Affected users, and Discoverability.

Our experience at Foundstone is that customers can usually develop very effective ranking models that fit their own business by extending or modifying the basic principles of STRIDE and DREAD.
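A DREAD-style ranking can be sketched in a few lines of Python. Scoring each factor on a 1-10 scale and averaging is one common convention, not the only one, and the threats and scores below are invented purely for illustration:

```python
# DREAD: Damage, Reproducibility, Exploitability, Affected users,
# Discoverability -- averaged into a single priority score.
def dread_score(damage, reproducibility, exploitability,
                affected_users, discoverability):
    return (damage + reproducibility + exploitability +
            affected_users + discoverability) / 5

# Invented example threats with invented factor scores.
threats = {
    "steal the password file":                           dread_score(8, 10, 7, 10, 8),
    "modify the config file for unauthenticated access": dread_score(9, 6, 5, 10, 4),
}

# Highest average first: these threats get the team's attention first.
ranked = sorted(threats, key=threats.get, reverse=True)
```

The first invented threat averages 8.6 and the second 6.8, so the password file threat ranks first.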

List Countermeasures

For each threat the team must now decide whether adequate countermeasures are in place to prevent or mitigate a successful attack. Countermeasures may include the fact that the password file is encrypted, or that access control is in place on the database, allowing only certain users to access the password file. Any threat without a countermeasure is by definition a vulnerability, and this is, in many respects, the end game of threat modeling: discovering vulnerabilities through a system modeling process.
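That end game reduces to a very simple check, sketched below in Python with invented entries: any threat whose countermeasure list is empty is, by the definition above, a vulnerability.

```python
# Map each threat to the countermeasures the team identified.
threat_countermeasures = {
    "steal the password file": [
        "password file is encrypted",
        "database ACL limits access to specific users",
    ],
    "modify the configuration file": [],   # no mitigation identified
}

# A threat with no countermeasure is, by definition, a vulnerability.
vulnerabilities = [threat for threat, cms in threat_countermeasures.items()
                   if not cms]
print(vulnerabilities)  # → ['modify the configuration file']
```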


Threat modeling seems a simple concept, and it is; however, there is a definite skill that teams develop through practice and experience. It is also an incredibly powerful, effective, and valuable technique that, in our opinion, can be applied to software both during the development process and after the software has been built. We have saved customers literally millions of dollars by helping them avoid committing thousands of development hours to designs that were fundamentally flawed. You too can leverage the process to analyze your software pragmatically and build better software.

Part Two

In part two of this article we will describe the code inspection part of the code review process. We will show you how to take the security-significant parts of the system and find both flaws and bugs in the code. Flaws are issues in the code due to poor implementation choices, such as storing cryptographic keys in configuration files or choosing weak random numbers to seed key generation. Bugs are issues in code usually due to incorrect semantic constructs, such as buffer overflows. We will release a free tool to accompany part two of this article called .NETMon, part of the upcoming Foundstone S3i .NET Security Toolkit (free and open source). It monitors the .NET Common Language Runtime (CLR), observing how security is enforced by the .NET Framework and by bespoke code. This tool is one of many that have been developed at Foundstone to help code review teams and software security professionals.
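The weak-random-number flaw mentioned above is worth a quick illustration. The article's examples are .NET and J2EE, but the same flaw appears in any language; here is a Python sketch contrasting a deterministic PRNG with a cryptographically strong source:

```python
import random
import secrets

# Flaw: Python's random module is a deterministic PRNG, so a "key"
# derived from it is reproducible by anyone who learns the seed.
def weak_key(seed):
    rng = random.Random(seed)
    return rng.getrandbits(128)        # fully determined by the seed

# The secrets module draws from the operating system's CSPRNG and is
# the appropriate source for key material.
def strong_key():
    return secrets.randbits(128)

# An attacker who recovers the seed regenerates the weak key exactly.
assert weak_key(1234) == weak_key(1234)
```

This is a flaw rather than a bug: the code runs correctly, but the implementation choice undermines the security it was meant to provide.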

Mark Curphey is the founder of the Open Web Application Security Project (OWASP), and is director of software security consulting at Foundstone, now a McAfee company. OWASP, created to help organizations understand and improve the security of their Web applications, publishes a list of Top 10 Web application security vulnerabilities each January. Curphey is the former director of software security for Charles Schwab and has a Masters Degree in cryptography.

David Raphael is a Senior Software Security Consultant at Foundstone. As an expert in both J2EE and .NET, David performs software security reviews and provides training to both J2EE and .NET developers on writing secure software.