Most appsec missions are graded on fixing app vulns, not finding them.
If Fortify SCA can be put into a pipeline, it can also be hooked up to fix issues automatically (although care must be taken to avoid situations like the Debian OpenSSL PRNG vulnerability, where the code was not vulnerable until a fix made to satisfy a security-focused code analyzer introduced the vulnerability).
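As a rough illustration, the pipeline hook can be as small as a script that drives the Fortify SCA command line during the build stage. This is a minimal sketch, assuming `sourceanalyzer` is on the build agent's PATH; the build ID, build command, and output path are placeholders to adapt to your own pipeline and Fortify version.

```python
"""Minimal CI sketch: run a Fortify SCA translate + scan as a pipeline stage.

Assumes `sourceanalyzer` is on the PATH; the build ID, build command, and
artifact paths below are illustrative placeholders, not a prescribed layout.
"""
import subprocess
import sys

BUILD_ID = "myapp-ci"                      # arbitrary Fortify build/session id
BUILD_CMD = ["mvn", "clean", "package"]    # whatever actually builds your app
FPR_PATH = "target/results.fpr"            # where the scan results will land


def run(cmd: list[str]) -> None:
    """Run a command and fail the pipeline stage if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


def main() -> None:
    # 1. Translate: wrap the normal build so Fortify can model the code.
    run(["sourceanalyzer", "-b", BUILD_ID, "-clean"])
    run(["sourceanalyzer", "-b", BUILD_ID] + BUILD_CMD)

    # 2. Scan: analyze the translated model and write an FPR for later
    #    stages (auto-fix, triage, upload, etc.).
    run(["sourceanalyzer", "-b", BUILD_ID, "-scan", "-f", FPR_PATH])


if __name__ == "__main__":
    try:
        main()
    except subprocess.CalledProcessError as exc:
        sys.exit(exc.returncode)
```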
Once you are fixing issues automatically (not every issue will qualify, so focus on the always-true positives with standardized remediations whose fixes can be code generated with high fidelity), you can turn your attention towards trivial true positives. Trivial true positives are the ones that never need to be fixed: they are real issues, but you just don't care because you don't have the time or energy to fix them.
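Here is a sketch of what that narrow auto-fixing might look like, assuming scan results have been exported to a simple JSON list of findings. The export format, the category name, and the fixer function are hypothetical; the point is that only categories on an explicit allow list, each with a standardized remediation, ever get touched by automation.

```python
"""Sketch: apply standardized, code-generated fixes for a short allow list of
"always-true-positive" categories. The findings.json format and the fixer
functions are illustrative, not a real Fortify export format."""
import json
from pathlib import Path


def fix_insecure_random(line: str) -> str:
    # Standardized remediation: swap java.util.Random for SecureRandom.
    return line.replace("new java.util.Random(", "new java.security.SecureRandom(")


# Only categories with a known, always-correct remediation are auto-fixed.
AUTO_FIXERS = {
    "Insecure Randomness": fix_insecure_random,
}


def apply_fix(finding: dict) -> bool:
    fixer = AUTO_FIXERS.get(finding["category"])
    if fixer is None:
        return False  # anything else goes to triage, not auto-fix
    path = Path(finding["file"])
    lines = path.read_text().splitlines(keepends=True)
    idx = finding["line"] - 1
    fixed = fixer(lines[idx])
    if fixed == lines[idx]:
        return False
    lines[idx] = fixed
    path.write_text("".join(lines))
    return True


def main() -> None:
    findings = json.loads(Path("findings.json").read_text())
    fixed = [f for f in findings if apply_fix(f)]
    print(f"auto-fixed {len(fixed)} of {len(findings)} findings")


if __name__ == "__main__":
    main()
```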
After you deal with trivial true positives, focus on false negatives. These are more elusive than false positives and will help you understand your false positive problem better. To avoid false negatives, tune the rules so that the named sources, passthroughs, and sinks fit your app portfolio, and vice-versa. This could require renaming functions, variables, methods, classes, and the like -- or it could mean structuring JSON and/or XML rules files that link the right sources to the right sinks, and vice-versa. It could also mean eliminating code indirection, such as unnecessary Dependency Injection or similar patterns, that keeps an app portfolio from being analyzed as a single, in-place architecture. Did you discover new findings that can be automated by eliminating false negatives? Good, then automate their fixes as well. Even better, get app developers to write new unit tests (or component or system tests, depending on the layer) that assert the behavior of each defect's fix -- and this can happen well before the code is scanned by Fortify SCA.
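For those developer-written tests, something as small as the sketch below is enough to lock a fix in place before Fortify SCA ever re-scans the code. The `read_report` function and its path traversal fix are hypothetical stand-ins for whatever defect was actually remediated.

```python
"""Sketch: a unit test that asserts the behavior of a defect's fix.
`read_report` is a hypothetical function whose path traversal finding was
remediated by rejecting any path that escapes the report directory."""
import unittest
from pathlib import Path

REPORT_DIR = Path("/srv/app/reports")


def read_report(name: str) -> str:
    # The fix under test: resolve the path and refuse anything that lands
    # outside REPORT_DIR before opening it.
    candidate = (REPORT_DIR / name).resolve()
    if REPORT_DIR.resolve() not in candidate.parents:
        raise ValueError("path escapes report directory")
    return candidate.read_text()


class PathTraversalFixTest(unittest.TestCase):
    def test_rejects_parent_directory_escape(self):
        with self.assertRaises(ValueError):
            read_report("../../etc/passwd")

    def test_rejects_absolute_path(self):
        with self.assertRaises(ValueError):
            read_report("/etc/passwd")


if __name__ == "__main__":
    unittest.main()
```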
Finally, you are left with false positives, which can also be tuned out (and most were probably tuned out already as a side effect of tuning false negatives into true positives).
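One way to make that tuning stick is to fold audited not-an-issue decisions back into the scan as a filter list. The triage.csv export below is hypothetical; Fortify SCA can consume a plain-text filter file (one category, rule ID, or instance ID per line) via its -filter option, so the sketch simply regenerates that file from whatever your triage records look like.

```python
"""Sketch: turn audited "Not an Issue" triage decisions into a scan-time
filter file. triage.csv is a hypothetical export with columns
instance_id,category,analysis; the output is one suppressed ID per line."""
import csv
from pathlib import Path

TRIAGE_CSV = Path("triage.csv")
FILTER_FILE = Path("fortify-filter.txt")


def main() -> None:
    suppressed = []
    with TRIAGE_CSV.open(newline="") as handle:
        for row in csv.DictReader(handle):
            # Only findings a human marked as a false positive get filtered.
            if row["analysis"].strip().lower() == "not an issue":
                suppressed.append(row["instance_id"])
    FILTER_FILE.write_text("\n".join(suppressed) + "\n")
    print(f"wrote {len(suppressed)} suppressions to {FILTER_FILE}")


if __name__ == "__main__":
    main()
```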
One of the best ways to determine whether a finding category or subtype needs to be manually escalated to an app developer (or an appsec analyst, vendor, etc.) is to leverage a supervised machine learning algorithm such as BlazingText. You'll need to train-test split your data; sometimes there are existing data sets available for this purpose, and other times it's something you have to build yourself. You may even want to use unsupervised machine learning for that early process, such as clustering with a variety of text mining techniques (i.e., mining the words in the findings, their structure, and their relation to OWASP and your own Application Security standards, penetration tests, and other data) -- TF-IDF, LDA, and HMM come to mind, although there are many variations and plays on these. BlazingText is more of a word2vec-style algorithm, so ultimately you are looking for it to determine a path towards automatic fixing, automatic escalation, et al. GPUs, or moving the algorithms to RNNs, could provide improvements, but perhaps at a significant cost (not in time, but in GPU usage and power). Supervised BlazingText is sort of the sweet spot at the moment, but understanding and evaluating your model(s) is part of any machine learning process.
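To make the train-test split and the escalate-versus-auto-fix decision concrete, here is a small local stand-in for that workflow. BlazingText itself runs as a SageMaker training job over fastText-style `__label__` lines, so this sketch uses scikit-learn's TF-IDF plus a linear classifier purely to illustrate the split, training, and evaluation steps; the findings.csv columns and the two labels are hypothetical.

```python
"""Sketch: supervised triage of finding text into "escalate" vs "auto_fix".
A local scikit-learn stand-in for the BlazingText workflow described above;
the findings.csv columns (text, label) are hypothetical."""
import csv
from pathlib import Path

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline


def load_findings(path: Path) -> tuple[list[str], list[str]]:
    texts, labels = [], []
    with path.open(newline="") as handle:
        for row in csv.DictReader(handle):
            texts.append(row["text"])    # e.g. category + sink + code snippet
            labels.append(row["label"])  # e.g. "escalate" or "auto_fix"
    return texts, labels


def main() -> None:
    texts, labels = load_findings(Path("findings.csv"))

    # Train-test split so the model is evaluated on findings it never saw.
    x_train, x_test, y_train, y_test = train_test_split(
        texts, labels, test_size=0.2, random_state=42, stratify=labels
    )

    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        LogisticRegression(max_iter=1000),
    )
    model.fit(x_train, y_train)

    # Understanding and evaluating the model is part of the process.
    print(classification_report(y_test, model.predict(x_test)))

    # The same labeled rows can later be re-exported as fastText-style lines
    # ("__label__escalate <text>") if you move the training job to BlazingText.


if __name__ == "__main__":
    main()
```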