This is the age-old topic of how to disclose vulnerabilities.
(A note on terminology: I am using 'company' for the one that made the software, 'researcher' for the one who finds the vulnerability, and 'client' for the person or business that installed the software and requested the pentest.)
The company that made the software may prefer not to patch anything (less work), and may prefer that you tell no one their software has defects (in this case, a vulnerability).
As a researcher, you consider the vulnerability important and believe it deserves to be fixed promptly.
It is sometimes argued that other customers using this software remain secure as long as you don't disclose the details. However, withholding the details doesn't prevent someone else (perhaps with more nefarious purposes) from finding the same vulnerability; there are plenty of examples of concurrent discoveries, where several people find the same vulnerability with no prior knowledge of each other's work. Keeping those other clients unaware that the software is vulnerable also puts them at risk, by denying them the ability to take protective measures they might have used had they known about it.
Each security researcher/team has its own policy, but the usual compromise between those two positions is that the company is notified of the security vulnerability, with a notice that it will be made public after a fixed time (usually 90 days, even if it's not fixed by then).
This should be enough time for the company to assess the problem and fix it. Sometimes the company requests a longer embargo period, to which the researcher may or may not agree (at this point they will probably judge whether the request is reasonable based on their interactions during this period).
(If the company releases a fix earlier, the researcher publishes their discovery at that point.)
These are obviously generalizations: there are companies striving to fix security vulnerabilities in their products in much shorter timeframes, and researchers who advocate immediately publishing all vulnerability details, giving companies no lead time at all.
In this case, as no timeline was proposed up front, that would need to be stated: "It has already been a year with no fix on the horizon, we are concerned that your users are at risk, and we plan to publish the vulnerability on X date (e.g. January 15th 2023)". It is then up to the company to decide whether to give the fix higher priority.
So, what happens if the embargo period passes, there is no fix, and the researcher decides to publish?
Advisories can be shorter or longer, but there are a number of things that should be present:
- The product and version(s) affected
- A brief summary of the issue (e.g. "there is a SQL injection", "a local user can escalate to admin")
- The version in which it is fixed (or, if there is no fixed version, a statement that there is none)
- Available mitigations and workarounds (e.g. the device should only be placed on an isolated network until this is fixed)
- Timeline (the dates on which you contacted the company and on which they replied)
Plus any other relevant information you might want to include. Some people include the PoC (or publish it after a further delay). Others publish a video. Some vulnerabilities even become the basis of papers later presented at conferences.
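For illustration, here is a minimal sketch of what such an advisory could look like; the product name, versions and dates are all made up:

```
Advisory: SQL injection in ExampleCMS login form
          (product, versions and dates are hypothetical)

Product:    ExampleCMS
Affected:   versions 2.0 through 2.4.1
Summary:    The login form passes the username unsanitized into a SQL
            query, allowing an unauthenticated attacker to read
            arbitrary database contents.
Fixed in:   none (no fixed version at the time of publication)
Mitigation: restrict access to the login endpoint to trusted networks
            until a fix is available.

Timeline:
  2022-01-10  Vulnerability reported to the vendor
  2022-02-01  Vendor acknowledges the report
  2022-07-10  Follow-up; vendor states no fix is planned
  2023-01-15  Public disclosure
```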
Then, the readers will reach their own conclusions.
A company that hasn't fixed a serious issue for a year won't look good, whereas readers may be more forgiving of a minor issue, or of one not affecting those interested in the advisory.
At the same time, if your report is flawed, you won't have any credibility. For instance, you can expect it to be received with a smile if it describes as a vulnerability the fact that the system becomes unbootable when a user logs in with the Administrator account and deletes C:\Windows.
Note that a good report doesn't need to be "big". It just needs to be truthful. There are big vulnerabilities and small vulnerabilities, and the same vulnerability will have different impacts on different clients.
Also, in some cases the "fix" might even end up being just a documentation update explaining that the system MUST NOT be set up in a certain way because that would be insecure.
Regarding their explanation, I would prefer not to weigh in without knowing the specifics (which you obviously won't be able to share). In some cases it does make sense to treat the system as a whole (for instance, the contract between the frontend and the backend of certain software might state that validation is done by the frontend classes), and in others it's completely unsustainable. I have also heard the "It's not an issue if everything else is set up correctly" argument in cases where I didn't agree with it.
Still, you must admit that if the issue is not exploitable thanks to other measures they have in place (and they expect everyone to configure the system "correctly"…), it makes sense that they consider this security problem minor and haven't prioritized it.
Finally, there's another point to take into account in this specific case. So far, we have considered the company making the product and the researcher finding the defect. However, here there is a third party: the client that tasked you with performing the pentest, and that (likely) owns the results and has a say in how they can be used. So far, they seem to be using the findings to negotiate the license renewal.
The argument "Well, that's their own problem" is a risky one. On the one hand, your client has borne the monetary cost of pentesting the application; why should the company or other customers (which might even be their competitors!) benefit from their funds? On the other hand, cooperation is a better strategy for obtaining secure systems. How do they know that another client didn't find a different vulnerability (one you missed) and is sitting on it as well?
> would you talk to their other customers
No. You don't publish the results without your client's permission. If your client gets a hefty discount in exchange for never telling anyone about the vulnerability, you would have to keep your mouth shut (review the provisions of the contract with your client for the actual details).
If you want to do this in the future, you would include provisions for it in future contracts with your clients: that you are allowed to communicate any vulnerabilities you find to the vendor, that after X months (or earlier if authorized by your client) you can publish the details, that your client must credit you as the one who found the vulnerability… whatever you deem fit (and your clients accept).
Even then, assuming your contract said that, contacting other companies out of the blue like that would be a bad idea. It would be far better to publish the results on your blog, then send the advisory itself to the usual lists, referencing the blog post. And you should really get a CVE assigned to it.
Once your vulnerability is listed with a CVE, it should appear on the radar of every security team using that software (at least those with a proper vulnerability-handling process). If you were inviting enough in your blog post (offering to provide additional advice to affected customers, perhaps even including a bit of publicity at the bottom reminding readers that you are available for hire if they need to pentest a setup of that software), you may receive some queries.
Had you worked with other companies using that software, a quick note to your contact there pointing to your new post could be adequate, but I wouldn't cold-email those companies.