
Zscaler Blog

Security Research

SANS/CWE Top 25 Programming Errors

THREATLABZ
January 19, 2009 - 4 min read

Recently Mitre and SANS teamed up to produce a list of the top 25 security errors that programmers should focus on. The errors were selected from the Common Weakness Enumeration (CWE) collection, which aims to enumerate all the different ways a piece of software can create a security vulnerability. For those familiar with CVE (Common Vulnerabilities and Exposures), the difference between CVE and CWE is that the former tracks point instances of vulnerabilities in specific applications, while the latter tracks the different ways a vulnerability can manifest in any arbitrary application.

The list does cover the gamut of heavy-hitting vulnerability types that traditionally plague applications (SQL injection, XSS, command injection, buffer overflows, file name manipulation, sending data in the clear or with weak crypto, etc.), but nothing on it is particularly earth-shattering. I always chuckle to myself when I see entries like CWE-20 (Improper input validation) and CWE-116 (Improper encoding or escaping of output), as they are a bit of a catch-all and, from some viewpoints, subsume all of the more specific input/output validation errors such as SQL injection, XSS, etc. A programming shop that validated all of its incoming data would likely be able to cross multiple items off the list...but it's actually a fairly tall task. I've dealt with many application audits and security code reviews in my past, and I've never encountered a programming shop that felt it was realistic to go back and audit/adjust all inputs for validation on a moderate-sized app (or larger). These large applications, many of them web-based, source data from too many places to feasibly and accurately account for them all. Which goes along with something I and many other security professionals have always said: it is easier to design security in than to retrofit it after the fact. Having and mandating the use of a core set of available global validation functions will (hopefully) keep programmers tagging inputs with a base level of validation as the application grows and sprawls.

Once a large application is written, trying to find and evaluate all inputs is like trying to find needles in a haystack. In my experience, most organizations change tactics at that point and look to just ensure they are not vulnerable to specific errors. In other words, rather than ensure all inputs are validated, they will instead just review the areas around their SQL calls to ensure there are no SQL injection issues. This essentially employs validation at 'time of use' rather than 'time of reception.'
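To make the "core set of global validation functions" idea concrete, here is a minimal sketch in Python. The helper name, the field kinds, and the whitelist patterns are all hypothetical illustrations, not anything prescribed by the CWE/SANS list; the point is simply that every input passes through one shared, whitelist-based gate at time of reception.

```python
import re

# Hypothetical central registry of whitelist validators. Mandating one shared
# gate like this is what lets validation keep pace as the app sprawls.
VALIDATORS = {
    "username": re.compile(r"[A-Za-z0-9_]{1,32}"),
    "zip_code": re.compile(r"\d{5}(-\d{4})?"),
    "quantity": re.compile(r"\d{1,6}"),
}

def validate_input(kind: str, value: str) -> str:
    """Whitelist-validate `value` for the given field `kind`.

    Called immediately upon receiving data (time of reception), so dirty
    data is rejected before it can circulate through the application.
    Raises ValueError on anything outside the whitelist.
    """
    pattern = VALIDATORS.get(kind)
    if pattern is None:
        raise KeyError(f"no validator registered for {kind!r}")
    if not pattern.fullmatch(value):
        raise ValueError(f"invalid {kind}: {value!r}")
    return value
```

Because the check is a whitelist (what the field *may* contain) rather than a blacklist of known-bad characters, SQL metacharacters, script tags, and the like are rejected as a side effect without being enumerated.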
On the surface this seems like a good strategy; after all, there are likely few times of use for any given input that could result in a security vulnerability. But the long-term problem with this approach is that it creates a patchwork effect where validation is not done consistently. If input validation is done at point A (immediately upon receiving), then all subsequent uses of that data (i.e. points B, C, and D) are in the clear. But if you instead employ validation at point D (i.e. time of use), then you're OK for now...until a programmer decides to branch from point C to a new point E. At that point, E will inherit dirty data, but the programmer might not suspect it, because they were mentally complacent with the idea that the data was previously clean at point D. Any direct or derived uses of the data at points A, B, and C will still be susceptible to vulnerability.
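The patchwork effect can be sketched in a few lines of Python. The function names and the "points" are hypothetical stand-ins for the A/B/C/D/E flow above: point D validates at time of use (here via a parameterized SQL query), which protects that one sink, but a later code path at point E inherits the raw data untouched.

```python
import sqlite3

def point_d_lookup(conn, username_raw):
    # Point D: time-of-use protection via a parameterized query. This sink
    # is safe, but the raw value is still circulating unvalidated.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username_raw,)
    ).fetchall()

def point_e_greeting(username_raw):
    # Point E, added later by a programmer who assumes the data is "already
    # clean" because point D handled it: the raw value is echoed into HTML
    # with no validation or escaping, an XSS-style hole.
    return f"<p>Welcome back, {username_raw}!</p>"
```

Had validation happened at point A (time of reception), point E would have been handed clean data automatically; with time-of-use validation, every new sink must remember to re-protect itself.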

But input validation aside, let's look at the list as a whole. It's meant for application developers, and addresses programming issues. These are definitely worth fixing, but at the same time, they can only go so far; in particular, this list does not address operational security issues. Here are some examples of security incidents that the list would not address:

  • High-profile Twitter accounts were compromised due to an admin Twitter account having an easily-guessed password
     
  • In-session phishing has malicious sites showing fake session expiration popups to users, encouraging them to supply login credentials to re-login
     
  • 30 million AOL account records were sold to spammers by an internal AOL employee
     
  • Spammers are still regularly abusing any open SMTP relay found on the Internet; open HTTP proxies are no better off
     
  • Laptops (and backup tapes) containing confidential records are being stolen at what now seems like regular intervals; the largest to date was the US Department of Veterans Affairs, with 26.5 million records compromised
     
  • Part of the CardSystems fiasco involved having unencrypted copies of consumer data laying around for 'test/research purposes'
     
  • Consumer gadgets are now shipping with viruses/malware (Samsung digital photo frames, ASUS Eee Box PC, TomTom navigator, etc.)

Overall the CWE/SANS list effort is good, because everyone could definitely benefit from a higher level of security in application software. But a look at the recent high-profile security incidents shows that humans are still a very real and very weak link in the security chain. So it's important to keep the list in perspective; even if you tackle all 25 items on the list, you may still be exposed by weak operational practices, user oversights, or what would have been item 26 on the list. It is worth noting that SANS also has a broader Top 20 Security Risks list, which does encompass a lot of operational security issues.
Update: also check out the How to Suck at Information Security list, likewise posted at SANS.

- Jeff
