Apparently, achieving 100% prevention of SQL injection and XSS attacks is easier said than done, but why?
Can't static code analysis tools ensure that all user-supplied input vectors (including user-tainted variables) are sanitized? Or couldn't this be enforced with a restrictive programming language or framework?
Wouldn't the below rules make XSS and SQL Injection impossible?
- SQL Injection immunity - enforce that every database query is made with prepared statements and bind variables.
- For the rare circumstances where prepared statements harm performance unacceptably, enforce strong validation and/or character-escaping rules appropriate to the context of the underlying database technology.
- HTML Injection immunity - enforce that all generated HTML pages are built from templates (much like prepared statements with bind variables), where every user-supplied or user-tainted variable is placed into slots with context-specific rules for escaping or stripping special characters.
- Encoding Mixup immunity - enforce encoding consistency between all inputs and outputs.
  - E.g. requiring `<meta charset="utf-8">` in the head of all HTML templates and ensuring all data is processed with that same encoding.
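To make rules 1 and 2 concrete, here is a minimal sketch using only Python's standard-library `sqlite3` and `html` modules; the table, function names, and template are made up for illustration, not taken from any real framework. It also hints at why rule 2 is harder than it looks: `html.escape` is only correct for HTML-body and quoted-attribute contexts, while JavaScript, URL, and CSS contexts each need different escaping rules.

```python
import html
import sqlite3

def find_user(conn, username):
    # Rule 1: parameterized query. The driver binds the value separately
    # from the SQL text, so user input can never change the query structure.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchall()

def render_greeting(username):
    # Rule 2: a template "slot" with context-specific escaping.
    # html.escape neutralizes <, >, &, and quotes for an HTML-body context.
    template = '<!doctype html><meta charset="utf-8"><p>Hello, {name}!</p>'
    return template.format(name=html.escape(username, quote=True))

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))

    # The injection payload is treated as a literal (nonexistent) name.
    print(find_user(conn, "alice'; DROP TABLE users; --"))

    # The script tag comes out as inert, escaped text.
    print(render_greeting("<script>alert(1)</script>"))
```

Enforcing this pattern would mean rejecting any code path where a string reaches `conn.execute` or the template through concatenation instead of a bind parameter or an escaped slot.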
So what's the problem with this idea?
- Is this just not feasible for all use cases? If so, what are some example use cases where these rules are not feasible?
- I suppose this might not be feasible for existing web applications that were developed without following these rules, and redesigning them might not be cost-effective from a business standpoint. But what about new web-app development projects? Could these rules be programmatically enforced before the code is allowed into production?
- Is my assumption just wrong about these rules preventing 100% of these attacks?