It depends on who you ask, but Codenomicon of Finland, a provider of software testing tools, reckons that 80% of security problems are caused by programmers writing insecure code, much of it the result of development schedules being too tight. Given that software applications contain thousands or even millions of lines of code, it is more than likely that some of the programming errors made will leave the application vulnerable to attack.
The fact that software applications can contain flaws is nothing new. But a recent survey by Quocirca of 250 organisations shows that while businesses are increasingly relying on bespoke or modified software applications that they see as critical to their business, not enough of them are employing automated tools for testing those applications for security vulnerabilities.
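To make the idea of automated testing concrete, here is a minimal sketch of the fuzzing technique that tools such as Codenomicon's are built around: throw large volumes of malformed input at a piece of code and flag any failure outside its documented behaviour. Everything in it, including the deliberately buggy `parse_record` function, is a hypothetical stand-in; commercial fuzzers are far more sophisticated, but the principle is the same.

```python
import random
import string

def parse_record(data: str) -> dict:
    """Hypothetical stand-in for bespoke application code.

    Expects input of the form "key=value". The careless indexing below
    is the kind of error a tight development schedule lets through.
    """
    if "=" not in data:
        raise ValueError("missing separator")
    key, _, value = data.partition("=")
    return {key: value[0] + value[1:]}  # bug: unhandled case when value is empty ("key=")

def fuzz(iterations: int = 10_000) -> None:
    """Throw randomly malformed input at the parser.

    ValueError is the parser's documented way of rejecting bad input;
    any other exception is an unhandled path an attacker could probe.
    """
    for _ in range(iterations):
        payload = "".join(random.choices(string.printable,
                                         k=random.randint(0, 64)))
        try:
            parse_record(payload)
        except ValueError:
            pass  # expected, documented rejection
        except Exception as exc:
            print(f"unexpected crash on {payload!r}: {exc!r}")
            return
    print("no unexpected crashes found")

if __name__ == "__main__":
    fuzz()
```

A harness like this can run unattended against every build, which is precisely why automated testing scales in a way that occasional manual review does not.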
Another increasingly common practice is outsourcing code development to third parties. This can be a less costly option than developing the code in-house, especially where a business does not have sufficient resources of its own. But when such an essential process is placed in the hands of a contractor, extra care must be taken to ensure that secure coding practices are used and that applications are thoroughly tested.
Failure to thoroughly police the software development process has far-reaching consequences. A hacker who finds a flaw in the software can exploit it to steal sensitive personal or organisational information. In today's increasingly regulated world, this can cost organisations dear, not just in terms of the price tag for cleaning up after the attack, but in lost business owing to the negative publicity that is likely to ensue.
But if a flaw is found in software that was developed by a third party, who should bear the responsibility for fixing those errors? If an organisation buys an appliance such as a printer that proves faulty and causes a fire on its premises, it is in a position to sue the manufacturer and claim compensation. In the same way, liability for faulty software should be pushed to the contractor to which development was outsourced, and should be written into the contract.
When outsourcing any business process to a third party, it is essential that a good contract is written and that a watertight service-level agreement is put in place. This is something that some regulations actually mandate when outsourcing application development. For example, the FFIEC (Federal Financial Institutions Examination Council) implementation guide for GLBA (Gramm-Leach-Bliley Act) states that organisations must establish a vendor management programme that includes "establishing security requirements, acceptance criteria, test plans, and reviewing and testing source code for security vulnerabilities." This may be a US regulation, but its impact is being felt by some European organisations as well, and there are a host of other regulations demanding higher levels of security.
Technology vendor Ounce Labs has been advising organisations since 2002 on how to work with outsourcers to ensure that code is developed with security in mind and that the appropriate testing tools are used. It has worked with lawyers to develop suitable contracts for organisations to use, which it makes available on its website. According to Ounce Labs, the following are some best practices that organisations should follow when outsourcing software code development:
* Define upfront what is meant by security, including the security environment in which the application is to be used and what other resources could be exposed by a security vulnerability, and include that definition in the contract
* Validate the security mechanisms to be used upfront and set requirements for their use
* Ensure that the third party is using software coding best practices and that they are documented and validated
* Demand proof of the level of training, skills and security awareness among the third party's development staff
* Ensure that expectations are laid out in the service-level agreement, including milestones and deliverables
* Define acceptance criteria for the security of applications delivered
* Provide a list of the most critical flaws that are deemed unacceptable
* Mandate measures for certifying that code is secure, including the use of automated testing tools (a minimal sketch of such a gate follows this list)
* Define steps required in the audit process and ensure that all code is audited and certified before payment is made
* Ensure that the right to audit code and perform security checks is written into the contract
* Define processes for remediation by the third party and ensure that responsibility for bearing the costs of remediation, and any legal liability, even after the application has been delivered, is written into the contract
* Specify in the contract that security checks and monitoring will be continued throughout the lifecycle of that application and lay out the third party's responsibility for fixing flaws found at a later date.
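To illustrate the acceptance-criteria and certification points above, the sketch below shows how a delivery might be gated mechanically: run the mandated analysis tool over the delivered code and reject it if any critical findings remain. The `security-scan` command and its JSON output shape are hypothetical placeholders for whichever analyser the contract actually specifies.

```python
import json
import subprocess
import sys

# Number of critical findings tolerated before a delivery is rejected.
# An acceptance contract would normally pin this at zero for the flaw
# classes it lists as unacceptable.
MAX_CRITICAL = 0

def run_scanner(path: str) -> list:
    """Run a hypothetical static analysis tool that emits JSON findings.

    `security-scan` stands in for whichever analyser the contract
    mandates; swap in the real command and output format when using one.
    """
    result = subprocess.run(
        ["security-scan", "--format", "json", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

def gate(path: str) -> int:
    """Return 0 if the delivery meets the agreed security bar, 1 otherwise."""
    findings = run_scanner(path)
    critical = [f for f in findings if f.get("severity") == "critical"]
    for f in critical:
        print(f"{f.get('file')}:{f.get('line')}: {f.get('message')}")
    if len(critical) > MAX_CRITICAL:
        print(f"REJECTED: {len(critical)} critical finding(s); "
              f"{MAX_CRITICAL} allowed under the acceptance criteria")
        return 1
    print("ACCEPTED: delivery meets the agreed security bar")
    return 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1] if len(sys.argv) > 1 else "."))
```

The value of such a gate is that acceptance stops being a judgment call: the contract names the tool, the severity threshold and the exit condition, and payment follows only when the check passes.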
Such practices will ensure that the most secure code possible is delivered, leaving organisations less vulnerable to security incidents. But, given the size of most software applications and the fact that it is almost impossible to write perfect code, however small the program, organisations that follow the practices outlined above will also have covered their backs by ensuring that the responsibility for fixing vulnerabilities lies firmly in the hands of the outsourcer. This is essential, since flaws discovered once an application is in use are the most expensive to fix.