Oh yeah, I literally just finished triaging a report full of false positives (I raised one legitimate ticket out of a 20-page DAST report). And by that I mean that, like @BrianWilliams' example, the tool is correct to report them because they may be real in some scenarios, but it takes a skilled analyst to decide whether or not they apply in this scenario. Some examples that I saw in the last hour:
- Default umask too wide. My analysis: irrelevant because this is an IoT device where the app runs as root anyway.
- X-Frame-Options clickjacking protection not set. My analysis: irrelevant because it's on static, non-interactive web pages.
- Missing CSRF Protection. My analysis: incorrect, because once you're logged in, the (non-cookie) auth token provides the protection; the tool is just blindly looking for a header called "CSRF" (see the sketch after this list).
- TLS 1.0 is enabled. My analysis: Yup, we need it for compatibility!
- Partition mounted with weak options. My analysis: irrelevant because if you can get inside the device to infect the partition, then you're already inside the device...
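To make that CSRF point concrete, here's a minimal sketch (assuming a Flask app and a made-up `verify_token` helper, neither of which is from the actual product): because the token travels in an Authorization header rather than a cookie, the browser never attaches it automatically, so a forged cross-site request arrives unauthenticated no matter what the scanner thinks about missing "CSRF" headers.

```python
# Minimal, illustrative sketch only -- the app, route, and verify_token helper
# are hypothetical stand-ins, not the real product code.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

def verify_token(token: str) -> bool:
    """Hypothetical token check; stands in for whatever auth backend is in use."""
    return token == "example-valid-token"

@app.route("/api/update-settings", methods=["POST"])
def update_settings():
    auth = request.headers.get("Authorization", "")
    # A cross-site form post can't set this header, and there's no session
    # cookie for it to ride on, so classic CSRF has nothing to work with.
    if not auth.startswith("Bearer ") or not verify_token(auth.removeprefix("Bearer ")):
        abort(401)
    return jsonify(status="updated")
```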
Yeah, we could (and probably should) fix them just to be less sloppy, but they are not exploitable in this application, so they're no longer security issues and get lumped in with other technical debt / code cleanup tasks.
To your question:
Does DAST need the same time and effort from developers and security analysts to filter out false positives?
You want developers to be part of the process so that they are constantly improving their security skills and will write more secure code! The amount of hand-holding depends on the skills of the developer.
With some teams I need to do 100% of the report triaging and open tickets for specific things that need to be changed. For other teams I triage with the developers, and for one team of veteran developers, they do almost 100% of the triaging and I sign off on their analysis prior to each release.
To me, a security analyst who blindly demands that everything in a SAST/DAST report be fixed is either lazy or has no understanding of the technologies they are supposed to be securing.
It sounds like you need to put a bit more effort into your triaging process and, as the security analyst, dig a bit deeper into what each reported vulnerability is actually about so you can decide whether it really needs to be fixed (which will have the side effect of expanding your own skills).
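For what it's worth, here's the rough shape of what I mean by a triage process, as a hedged Python sketch. The report format, rule IDs, and field names are invented for illustration (every real DAST tool has its own export schema); the point is just that each suppressed finding carries a written justification that outlives the analyst who made the call.

```python
# Illustrative triage sketch: the JSON schema and rule IDs below are made up,
# not taken from any particular DAST tool's export format.
import json

# Each suppression records *why* the finding is accepted in this application,
# so the reasoning is reviewable at the next release.
SUPPRESSIONS = {
    "missing-x-frame-options": "Static, non-interactive pages; nothing to clickjack.",
    "tls-1.0-enabled": "Required for legacy client compatibility; accepted risk.",
}

def triage(report_path: str) -> None:
    with open(report_path) as f:
        findings = json.load(f)["findings"]

    for finding in findings:
        rule = finding["rule_id"]
        if rule in SUPPRESSIONS:
            print(f"SUPPRESSED   {rule}: {SUPPRESSIONS[rule]}")
        else:
            print(f"NEEDS REVIEW {rule}: {finding.get('title', '')}")

if __name__ == "__main__":
    triage("dast-report.json")
```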
To paraphrase the great Tanya Janca:
"The easiest way to create hostility between Dev and Security teams is to blindly throw automated reports at the developers and say FIX IT!" - Tanya Janca