I'm not a security person. I'm a programmer who has to maintain secure code. This is what I call a "brittle" practice. Entry points are scattered all over a typical project. Finding and sanitizing all of them is a lot of work to address only a single problem, a lot of careful maintenance and hassle to ensure it remains effective as the code changes, and full of assumptions which render it ineffective.
Instead use practices which are easier to maintain, layered, contextual, and solve a broad swath of problems. Then you don't need expensive, overly-broad filtering.
You can't secure input if you don't know how it will be used.
Let's say you've "secured" your system by stripping out all single quotes from all input across the board. Great, you're safe against one type of SQL injection attack. What if that input is used in a...
- MySQL query which allows double quotes
- Filesystem operation
- Shell command
- Network query
- Method name
- Class name
- eval
Each of these has different special characters, escape sequences, quoting rules, and security practices. You can't possibly predict how your input will be used when it comes in. Trying to strip out all special characters is madness and only "solves" one class of attack.
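To make that concrete, here's a minimal Python sketch (the value, table, and paths are invented for illustration) of the same piece of input reaching three different sinks, each with its own rules:

```python
import shlex
import sqlite3
from pathlib import Path

# Hypothetical user-supplied value; stripping single quotes from it tells
# you nothing about whether it's safe in any of these contexts.
user_value = "o'brien"

# SQL: let the driver's parameter binding handle quoting at the query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", (user_value,))

# Shell: completely different quoting rules; shlex.quote escapes for a
# POSIX shell, which has nothing to do with SQL escaping.
shell_arg = shlex.quote(user_value)

# Filesystem: neither of the above helps; the concern here is path
# traversal, so reject anything that resolves outside the base directory.
base = Path("/srv/uploads").resolve()
candidate = (base / user_value).resolve()
if not candidate.is_relative_to(base):
    raise ValueError("path escapes the upload directory")
```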
Or what if the user is allowed to enter a page limit? That limit is dutifully used in a parameterized query; no SQL injection, yay! Then the user enters 9999999999 and now you're open to a DoS attack.
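The defense here has nothing to do with quoting: validate the number against what the query can actually tolerate, right where it's used. A minimal sketch, with the cap chosen arbitrarily:

```python
MAX_PAGE_SIZE = 100  # arbitrary cap for this example

def parse_page_limit(raw: str) -> int:
    """Interpret a user-supplied page limit at the point of use."""
    try:
        limit = int(raw)
    except ValueError:
        raise ValueError("page limit must be an integer")
    if not 1 <= limit <= MAX_PAGE_SIZE:
        raise ValueError(f"page limit must be between 1 and {MAX_PAGE_SIZE}")
    return limit
```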
You must apply the appropriate security measures at the point where the potentially insecure operation is performed. This takes into account many factors unique to the operation; sanitizing input characters is just one.
And as long as you're doing that, you might as well also parameterize your queries. Then there's no longer a need to do all the work and damage of blanket stripping quotes.
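For example, a parameterized query (sqlite3 shown here; every mainstream driver has an equivalent, and the users table is made up) keeps any quotes in the data as data, at exactly the place the SQL runs:

```python
import sqlite3

def find_user(conn: sqlite3.Connection, name: str, limit: int):
    # The driver sends `name` and `limit` as bound parameters, never as
    # SQL text, so quotes in `name` are stored verbatim and cannot alter
    # the query.
    cur = conn.execute(
        "SELECT id, name FROM users WHERE name = ? LIMIT ?",
        (name, limit),
    )
    return cur.fetchall()
```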
Filtering all input is hard.
There are many, many, many ways to get and pass around input in a given project:
- form inputs
- urls
- file names
- file contents
- database queries
- network reads
- environment variables
These are typically pretty free form and can use many different libraries. I'm not aware of any static analysis tools which verify that all potentially vulnerable input has gone through filtering. Some languages have taint systems, but they're difficult to use effectively. Even if you filter all inputs, without a static analysis tool unfiltered inputs will leak back in as development goes on. It's a lot of effort for an incomplete, expensive-to-maintain result which hampers functionality.
In contrast, there's typically only one way to execute SQL in a project. Static and runtime tools exist to automatically detect potential SQL injection. You can even disallow strings altogether and require that all queries be SQL query objects. These good practices are easy to maintain and increasingly baked into tools and SQL libraries.
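SQLAlchemy is one library that supports this style; a rough sketch, with the table definition invented for illustration:

```python
from sqlalchemy import (
    Column, Integer, MetaData, String, Table, create_engine, select,
)

metadata = MetaData()
users = Table(
    "users", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String),
)

engine = create_engine("sqlite:///:memory:")
metadata.create_all(engine)

def find_user(name: str):
    # No SQL strings anywhere in application code: the query is an
    # expression object, and `name` is always bound as a parameter.
    stmt = select(users).where(users.c.name == name).limit(10)
    with engine.connect() as conn:
        return conn.execute(stmt).fetchall()
```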
"Firewalls" lead to lax security.
Similar to how some office networks have very insecure practices because "we have a firewall", there is a risk of the team becoming lazy about securing their code because "the input is safe". The input is most definitely not safe.
Some might say "why not both?" You only have so many hours to work on a project. A low-efficiency, high-maintenance practice is a time suck. Implementing and maintaining it will take your limited time away from more efficient, easier to maintain practices. In the worst case you'll spend so much time playing whack-a-mole with inputs, and with the problems caused by the overly aggressive filtering, that you'll never get time for proper security measures.
In short, input filtering is expensive, leaky, difficult to maintain, cannot solve the problem, and might make it worse.