Unfortunately, CVSS ratings are always more subjective than one would like. See my question "CVSS Score Remote or Local Scenario" for an example of such a discussion.
I am responsible for the CVSS ratings on VulDB.com, and we face similar problems to NVD. In some cases the exact details are not known, which would lead to partial vectors, and partial vectors can't be used to calculate any scores. In such cases we try to complete the vectors as well as possible. For Internet Explorer we tend to use C:P/I:P/A:P, because nowadays the default browser is usually run by standard users and not administrators.
But in some other cases we complete to the plausible worst case. For example, a vendor may claim there is a buffer overflow but that it can only lead to a denial of service, which would be at least C:N/I:N/A:P. If we doubt that assessment, we complete the vector to C:P/I:P/A:P.
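To make the effect of such a completion concrete, here is a minimal sketch of the CVSS v2 base score formula. The exploitability metrics (AV:N/AC:L/Au:N) are assumed purely for illustration and are not part of the examples above; the point is that a partial vector cannot be scored at all, while the vendor's DoS-only claim and the completed worst case end up several points apart.

```python
# Minimal sketch of the CVSS v2 base score calculation, following the
# public CVSS v2 specification. Shows why a vector missing its impact
# metrics cannot be scored, and what "completing" it changes.
# The exploitability metrics (AV:N/AC:L/Au:N) are assumed for illustration.

AV  = {"L": 0.395, "A": 0.646, "N": 1.0}     # Access Vector
AC  = {"H": 0.35,  "M": 0.61,  "L": 0.71}    # Access Complexity
AU  = {"M": 0.45,  "S": 0.56,  "N": 0.704}   # Authentication
CIA = {"N": 0.0,   "P": 0.275, "C": 0.660}   # Conf./Integ./Avail. impact

def base_score(vector: str) -> float:
    """Compute the CVSS v2 base score from a full vector such as
    'AV:N/AC:L/Au:N/C:P/I:P/A:P'. A partial vector is rejected,
    which is exactly why partial vectors cannot be scored."""
    m = dict(part.split(":") for part in vector.split("/"))
    missing = {"AV", "AC", "Au", "C", "I", "A"} - m.keys()
    if missing:
        raise ValueError(f"partial vector, missing metrics: {sorted(missing)}")

    impact = 10.41 * (1 - (1 - CIA[m["C"]]) * (1 - CIA[m["I"]]) * (1 - CIA[m["A"]]))
    exploitability = 20 * AV[m["AV"]] * AC[m["AC"]] * AU[m["Au"]]
    f_impact = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)

# Vendor's "DoS only" claim vs. the completed worst case,
# both under the assumed exploitability metrics AV:N/AC:L/Au:N:
print(base_score("AV:N/AC:L/Au:N/C:N/I:N/A:P"))  # 5.0
print(base_score("AV:N/AC:L/Au:N/C:P/I:P/A:P"))  # 7.5
```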
We always flag the vectors and scores with a confidence level that indicates the accuracy of the rating. Some vendors try to tune their scores: if you look at the scores published by certain vendors, you will see that they avoid crossing certain upper thresholds to keep their statistics looking good.
Many of our visitors are not happy with the different scores from different sources. This is why we collect all available scores (e.g. VulDB, vendor, researcher, NVD) and show them side by side where possible; see ID 95550 for an example. If there are big differences between ratings, we explain them. This makes it possible for users to pick the score that best covers the individual use case they have to deal with.