Tony Cox and Jeff Lowder have provided some excellent commentary on CVSS (indeed, SIRA includes lots of good discussion). Their goal is a bit broader than yours, but the referenced article provides a useful index to commentary about CVSS. @Metahuman's post points out that CVSS can be supplemented.
The Department of State's iPost system found that CVSS scoring overestimated the significance of unimportant risks; if I recall correctly, they simply cubed the values to emphasize the serious stuff. State's iPost is the model (albeit a flawed one, according to the Department of State Inspector General) for the DHS CAESARS reference architecture - but both of those are more architectural than your goal.
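To see why cubing works as an emphasis function, here's a minimal sketch. I don't have State's actual formula, so the function name and the sample values are my own assumptions; this only illustrates the effect:

```python
# Minimal sketch of an iPost-style cubing adjustment (illustrative only;
# not State's actual formula). Cubing stretches the top of the 0-10 scale,
# so severe findings dominate any aggregate while low scores fade away.

def emphasized(cvss_base: float) -> float:
    """Cube a CVSS base score (0-10) to emphasize high-severity findings."""
    return cvss_base ** 3

for score in (2.0, 5.0, 9.8):
    print(score, "->", emphasized(score))
# 2.0 -> 8.0, 5.0 -> 125.0, 9.8 -> ~941.19
```

The point is that a 9.8 now contributes over a hundred times more than a 2.0 to any sum, instead of roughly five times more.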
CVSS is a flawed standard - I know that there are active efforts to revise/reform it. There is considerable subjectivity in the ratings - I can't find the reference at this moment, but someone ran a test in which several experts were given the same information about a vulnerability, applied the CVSS process, and came up with very different answers. But it is a standard, and it is an excellent place to start for a project such as yours, where you want a reference standard without the effort of creating your own methodology. You can use CVSS (and CWE) as starting points, and then do what @Colin Cassidy calls "magic maths".
CVSS is system-centric; it ignores architectural security features, and it probably undervalues vulnerabilities where the web/cloud is the delivery vector. I'd look at OWASP and VERIS for statistical information about real-world exploits rather than theoretical models.
I probably wouldn't ignore the environmental metrics; indeed, in the medium term, I'd use the components of CVSS to begin rolling your own vulnerability score, one more closely fitted to your needs - a rough sketch of what that might look like follows.
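As a concrete illustration of that (and of the "magic maths" idea above), here's a sketch. The blend weights and the environmental multiplier are placeholder assumptions of mine, not values from the CVSS specification; tune them to your own environment:

```python
# Rough sketch of a custom score built from CVSS components. The 0.7/0.3
# weights and the environmental multiplier are illustrative assumptions,
# not part of any standard.

def custom_score(base: float, temporal: float, env_multiplier: float) -> float:
    """Blend CVSS base and temporal scores, then apply a local
    environmental weighting for asset criticality (e.g. 0.5 for a
    sandboxed dev box, 1.5 for an internet-facing production system).
    Result is capped at the familiar 10.0 ceiling.
    """
    blended = 0.7 * base + 0.3 * temporal  # assumed weights
    return min(10.0, blended * env_multiplier)

# Same vulnerability, two environments: the multiplier does the work.
print(custom_score(base=7.5, temporal=6.8, env_multiplier=1.3))  # ~9.48
print(custom_score(base=7.5, temporal=6.8, env_multiplier=0.5))  # ~3.65
```

The design choice worth keeping from CVSS is the separation of concerns - base severity, temporal factors, and your environment - even if you replace the arithmetic with your own.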