Thursday, April 29, 2004

Developing Secure Software

Noopur Davis spoke to DC SPIN about developing secure software. Most of her presentation concerned the connection between software quality and security.

The CERT Coordination Center defines a security vulnerability as something that:
- violates an explicit or implicit security policy
- is usually caused by a software defect
- often causes unexpected behavior

Similar defects are classified as the same vulnerability.

Specifically excluded from this definition of vulnerability:
- Trojan horse programs (evil email attachments)
- viruses and worms (self propagating code)
- intruder tools (scanners, rootkits, etc.)

Vulnerabilities are the defects that permit evil email attachments, viruses, worms, and the like to exist.

Davis’s presentation concentrated on application development, because that is where most of the security vulnerabilities occur. Common security defects include:
- failure to authorize and authenticate users
- failure to encrypt and/or protect sensitive data
- improper error handling
- improper session management

Buffer overflows, which Davis defined as data written beyond the memory allocated to hold it, are by far the most common vulnerability. Software defects that can produce a buffer overflow include:
- declaration error
- logic defects in loop control or conditional expression
- failure to validate input
- interface specification error

Buffer overflows are very serious: a malicious hacker can overwrite a return address or data pointer and seize control of a system.

Sample code was used to illustrate software defects:
void myFunc(int a, char *buf)
{
    char str1[10];
    char str2[50];

    strcpy(str1, buf);  /* no length check on buf */
}

In this string copy example, the lack of a length check means input longer than str1 can hold would overwrite the return address.
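The usual fix is a bounded copy. The sketch below is illustrative, not from the talk; the helper name safe_copy is my own. It copies at most dstsize - 1 bytes and always NUL-terminates, so oversized input is truncated instead of overrunning the stack:

```c
#include <assert.h>
#include <string.h>

/* Bounded copy: never writes more than dstsize bytes into dst,
   and always leaves dst NUL-terminated. Input longer than the
   destination is silently truncated rather than overflowing. */
void safe_copy(char *dst, size_t dstsize, const char *src)
{
    if (dstsize == 0)
        return;
    strncpy(dst, src, dstsize - 1);
    dst[dstsize - 1] = '\0';   /* strncpy may leave dst unterminated */
}
```

Called with a ten-byte buffer and a longer string, safe_copy keeps only the first nine characters plus the terminator, so the return address is never touched.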

void myfunc(char **strNames)
{
    char *tmpBuf[MAXNAMES];

    for (int i = 0; i <= MAXNAMES; i++) {
        tmpBuf[i] = (char *) malloc(MAXNAMELEN);
        tmpBuf[i] = strNames[i];
    }
}

In this loop example, the loop count is off by one: the condition should be i < MAXNAMES, so the last iteration writes one element past the end of tmpBuf. (The second assignment also overwrites the pointer just returned by malloc, leaking the allocation.)
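For contrast, here is a corrected sketch of that loop (the MAXNAMES and MAXNAMELEN values are arbitrary choices of mine): the index is bounded with <, and each name is copied into the newly allocated block instead of replacing the pointer:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define MAXNAMES   10
#define MAXNAMELEN 32

/* Corrected loop: i < MAXNAMES writes exactly MAXNAMES elements,
   never stepping past the end of tmpBuf, and each name is copied
   into the malloc'd block rather than overwriting the pointer. */
void copy_names(char *tmpBuf[MAXNAMES], char **strNames)
{
    for (int i = 0; i < MAXNAMES; i++) {
        tmpBuf[i] = malloc(MAXNAMELEN);
        if (tmpBuf[i] != NULL) {
            strncpy(tmpBuf[i], strNames[i], MAXNAMELEN - 1);
            tmpBuf[i][MAXNAMELEN - 1] = '\0';
        }
    }
}
```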

Code analysis tools can help, but they will not find most defects, and they often give error messages that are unhelpful. (Technoflak thinks you could devote an entire blog to unhelpful error messages.) Davis mentioned some of the better known code analysis tools.

Davis drew attention to the Fluid project, which has shown promising results.

She referred to the design principles that Saltzer and Schroeder laid out in 1974, and stressed the importance of “separation of privilege: Where feasible, a protection mechanism that requires two keys to unlock it is more robust and flexible than one that allows access to the presenter of only a single key.” She offered user name/password as an example of separation of privilege.
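The two-key idea can be sketched as a check that demands two independent credentials before granting access. Everything below (the function names, the literal credential values) is hypothetical, made up purely to illustrate the principle:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Hypothetical two-key check in the Saltzer-Schroeder sense:
   access is granted only when both independent credentials verify.
   The hard-coded values stand in for real verification logic. */
static bool password_ok(const char *password) { return strcmp(password, "s3cret") == 0; }
static bool token_ok(const char *token)       { return strcmp(token, "991-442") == 0; }

bool grant_access(const char *password, const char *token)
{
    return password_ok(password) && token_ok(token);  /* both keys required */
}
```

A caller presenting only one valid key is refused; compromising a single mechanism is not enough to break in.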

Thus far software vendors have shipped product, and waited for attackers to exploit vulnerabilities before developing fixes. Customers were expected to apply those patches to prevent further damage. Clearly this approach is not working. The BITS organization estimates software vulnerabilities cost its members $400 million annually and the financial sector more than $1 billion.

Testing is not enough; you cannot test quality into software. Inspections and reviews are not enough. Use of software testing tools is not enough. Design principles are not enough. Risk management is not enough. What is needed is a secure software development strategy.

First, there is an urgent need for education, education about the most common security vulnerabilities and the sort of software quality defects that create them.

Second, there is a need for a process that combines the best practices in software engineering, security, and management. The process must be constantly measured to see if it is working well.

Davis began to explain the Team Software Process and how it dramatically increases software quality and reduces security vulnerabilities. Software produced by the Team Software Process averages 0.06 defects per 1,000 lines of code, versus the industry average of 1 to 7.5 defects per 1,000 lines of code.

Developers using the Team Software Process manage and remove defects throughout the development lifecycle. They use measurement and quality management to monitor and control process. This same process could be used to address security issues.

Implementing best practices is often impeded by scheduling and people issues. The Team Software Process helps to build self-directed teams that make their own plans and commitments and track and manage their work.

Clearly the role of management is crucial. Management must establish organizational policies for secure software development, set measurable goals, provide resources, funding and training. Davis emphasized the need for someone at the project level to focus on security, not simply a chief security officer.

Davis spoke about the measurement framework: schedule, time, size, defects, process quality measures and product quality measures. She said “We don’t know if these are the right measures for security, so we are gathering data.”

The Team Software Process project is researching practices that will reduce the software design and implementation defects that produce software vulnerabilities. It is also working to provide the capability to predict the likelihood of latent vulnerabilities in delivered software. The goal is to achieve 0.06 security vulnerabilities per 1,000 lines of code.

A pilot project is currently underway, with promising results: 96% of the security vulnerabilities were discovered and eliminated.

Davis encouraged attendees to visit the TSP web site and read The Team Software Process in Practice: A Summary of Recent Results.

During the question period one attendee pointed out that it is perfectly possible to write high quality software that is insecure. Davis conceded that software quality does not always equal security, but the converse holds: poor quality software is invariably rife with security vulnerabilities.
