40

I work as a consultant for a large corporation that uses some software, in which I have found a security vulnerability. I notified both my client and the software vendor about a year ago. They referred the case to their account manager (!), who (in a polite way) said: "Your consultant is full of shit." Luckily, the client got my back, and after a little back and forth they reluctantly agreed to release a patch. Just for my client, not for anyone else. It turned out to be a patch only for the client code, not fixing the underlying vulnerability on the server side. When I pointed that out, they said: "Yeah, sorry, but that's too much work. We consider that a customization request. If you pay for it, we can fix it."

I believe my client is currently negotiating license renewal with them and I'm assuming they are using this as a bargaining chip (the details are above my paygrade). However, I was asked for input, and it seems like, after pressure from my client, and just one year after being made aware of the issue, they have agreed to add it to their backlog as a "feature request", finally admitting it's actually a security vulnerability. My client is currently pushing for an implementation date, but I'm not getting my hopes up.

Their argument for delaying the implementation? "It's not an issue if everything else is set up correctly." Which is true, but that's a bullshit argument. That's like saying "You don't have to validate parameters on the backend because the frontend does the validation" or "You don't have to encrypt passwords because unprivileged users shouldn't be able to read the password database anyway." And I have already demonstrated how it can be exploited ("Well, then you need to fix your settings.").

I am really annoyed that instead of thanking me for letting them know there's a problem, they first deny it, then acknowledge it, but downplay the risk, and then don't even fix it. I'm sure they haven't notified any of their other customers of the issue. My client says "Well, that's their own problem", but I really feel the other customers should know, so they can make an informed decision whether to keep doing business with this company. Also, they would probably be able to pressure them into fixing the issue.

So what can I do? I guess I should first wait for the vendor to get back to my client with a timeline for fixing the issue (as if one year of doing nothing wasn't bad enough). And then? I don't want to disclose it publicly, because I really care about their customers. I also don't want to give them a deadline for fixing it, because I'm worried it could be interpreted as a threat.

But, if they don't fix this, would you talk to their other customers (large corporations)? I think that would probably put pressure on them to fix the problem. But it would also be immediately clear that the info was passed on by me. And I don't want to get into trouble with my client.

psmears
TravelingFox
  • 29
    It is not hard to come up with a list of applications that are not secure if you don't follow recommended settings for the configuration. It doesn't follow that the application is flawed when you don't follow their requirements. Is there a good reason not to use their settings? – doneal24 Sep 06 '22 at 17:41
  • This question is off topic on Security SE. I suggest moving it to [Law SE](https://law.stackexchange.com). – mentallurg Sep 06 '22 at 17:54
  • 3
    @doneal24 I'll try to come up with a different example: Imagine there was a bug in a browser that made it possible for a malicious website to get write access to a system file (say, the computer's startup script) - but it would still require the current user to have write access to that file. Do you think it would be OK for the browser vendor to say: "Well, you should never run a browser with administrator permissions" and refuse to fix the issue? – TravelingFox Sep 06 '22 at 17:54
  • 15
    You can always do what Moxie did, when Microsoft refused to do anything after he reported a vulnerability to them with the way early versions of IE validated SSL certificate chains. After Microsoft's unresponsiveness, he publicized the vulnerability, and released a tool to exploit it, basically forcing them to fix it. See https://security.stackexchange.com/questions/249347/what-differentiates-a-ca-cert-from-a-server-cert for more info. – mti2935 Sep 06 '22 at 18:09
  • 5
    @TravelingFox If the user has to go out of their way to run the browser with administrator permissions, it may be acceptable, yes. Especially if there's a warning on startup saying "You shouldn't run this in admin mode". That doesn't mean it shouldn't get fixed, but it certainly makes it less urgent. On the other hand, if the _default_ behavior was to install so it ran with admin permissions, that'd be a much bigger deal. – Bobson Sep 07 '22 at 05:51
  • 1
    One thing you can do is build a CVSS value for it: https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator. From what you said, it sounds like the attack complexity would be "High", since the attacker doesn't have any way to force the misconfiguration. If it comes out with a high score, that's an argument you can use. If the score is relatively low, then maybe their deprioritization makes sense. – Bobson Sep 07 '22 at 05:57
  • @TravelingFox It depends really. Would you say that MS SQL is insecure and has unfixed exploits, because you can go out of your way to configure the database in such a way that anonymous users have write permissions? – Voo Sep 07 '22 at 08:16
  • 1
    @Voo I gave a better analogy below. Think of it as software that stores passwords in clear text. "That's not a problem because the password columns are protected if you set the proper permissions at the database level." Which is kind of true, but at the same time a crazy approach to security. – TravelingFox Sep 07 '22 at 08:22
  • 3
    @TravelingFox While that's certainly not a great implementation, if the defaults are secure and it is obvious that the change in configuration is risky (changing ACLs? dangerous. changing the name of the folder? surprising) it's not exactly a vulnerability in my mind. It's very hard (or impossible) to ensure that customizable software cannot be misconfigured to become unsafe. – Voo Sep 07 '22 at 10:45
  • 2
    I think you overargue this, as if you're kind of justifying yourself for going into full-engagement mode. Unsafe functions are found all the time, but exploiting them is a whole different topic. Just because you could, with kernel privileges, write into the hosts file of Windows, should they change the system architecture? It is YOUR task to make sure that no malicious code gets executed at kernel level, not theirs. Also, you could easily just patch the code for your customer without anybody noticing, OR you could exploit your customer's system and help them file a lawsuit against the vendor. – clockw0rk Sep 07 '22 at 12:20
  • 9
    This is a perfect illustration of why hanging a business off of third-party closed-source software is problematic. – T.E.D. Sep 07 '22 at 14:41
  • Anonymously post enough details about the company, software, and the nature/severity of the exploit on Reddit (without actually saying what the exploit is) such that other customers of the software have a chance to find it and become concerned. Making your report follow all the parts of a real CVE will ensure it is technical and informative enough to not be easily shot down. – ErikE Sep 07 '22 at 18:39
  • @T.E.D. On the other hand, we have an open source library using an unsafe version of another library only during build time. The comment from the current developers is that the attack surface of a library only used during build time is so small that they won't be patching it, and a suggestion that we could patch it ourselves causes our management to suddenly become very busy elsewhere. Given limited resources you can't have everything, and security isn't very useful if what you're securing is a brick because there wasn't time to implement any features. – user3067860 Sep 08 '22 at 16:49
  • @user3067860 - If you don't think it's a big enough problem to commit any of your own resources to fixing, that's simply your business decision. If you have a problem with code for which you have the sources, you always have the option of fixing the problem yourself (or paying someone else to do so). However, if you have a problem with software from a vendor who is hiding the sources from you, you're SOL. – T.E.D. Sep 08 '22 at 18:07
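
As an aside on the CVSS suggestion in the comments above: a scored vector gives you a concrete number to argue priority with, instead of "fix it on principle". Below is a minimal sketch of scripting that, assuming the third-party `cvss` package from PyPI and an invented vector (the real metrics for this vulnerability are not known here, so both are illustrative only):

```python
# Rough sketch: scoring a CVSS v3.1 vector with the third-party "cvss"
# package (pip install cvss). The vector below is invented for illustration
# only -- it is NOT the real vector of the vulnerability in the question.
from cvss import CVSS3

# Network-reachable, high attack complexity (the "only if misconfigured"
# argument), no privileges or user interaction required, high C/I impact.
vector = "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:H/A:N"

c = CVSS3(vector)
print(c.clean_vector())  # normalized vector string
print(c.scores())        # (base, temporal, environmental) scores
print(c.severities())    # severity labels, e.g. ('High', 'High', 'High')
```

Anything scoring 7.0 or above is rated "High" under CVSS v3.1, which is harder for a vendor to wave away than an appeal to principle.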

6 Answers

31

It sounds like you're insisting on an issue being treated as high priority, but there is little evidence for this. In your own words from comments,

Think of it like a really bad vulnerability (like SQL injection) that can only be exploited if some software is run under an admin account (made up example). SQL injection is something that just shouldn't be possible in 2022, but they refuse to fix it because "Well, don't use an admin account". SQL injection is a serious flaw, it should be fixed out of principle, IMHO

Few companies will agree to make changes to working code "out of principle". If a bug can only be exploited in case of misconfiguration, they are more likely to add a warning to the manual and leave it at that. And it's not like they are objectively wrong. Based on what you've said, this vulnerability doesn't affect your client, you don't know whether there are any affected customers at all, and it requires unsafe configuration to be exploitable. Setting your aesthetic preferences aside, this doesn't sound high-priority.

Answering your question "what to do" - nothing. You've informed your client (and made sure they aren't affected), they've informed the vendor, it's their responsibility now. Surely your desire to rid the world of this security threat is commendable, but I'm afraid this isn't the greatest vulnerability in the systems any of the other customers are using.

IMil
  • 1
    Maybe SQL injection was a bad example. It's difficult to describe as I don't want to give too much information. Think of it as something like the software storing passwords in clear text and the vendor saying: "That's not an issue, the fields can only be read with admin access to the database, which regular users don't have." – TravelingFox Sep 07 '22 at 08:09
  • Also I'm pretty sure the vendor did NOT inform any of their other customers. And based on what I've seen in the wild, I bet several of them are vulnerable. That's what I'd really like to do something about. They're just sweeping this under the rug. – TravelingFox Sep 07 '22 at 08:13
  • 1
    @TravelingFox Keep in mind that in the process of resolving this issue, they could introduce another problem, possibly worse than the one you have found. If the issue can be mitigated (I'm a little unclear on that from your description, but I think it can) then it's reasonable to address it in a normal development cycle. – JimmyJames Sep 07 '22 at 14:26
  • 9
    @TravelingFox Unfortunately, IMil is right here. You may feel, with 100% passionate certainty, that SQL injection bugs and others of that ilk simply must be fixed at once, *on principle*, whether they are exploitable or not. I might even agree with you. Hundreds of other like-minded programmers might agree with you. But, sadly, we're not talking about idealism and passion here, we're talking about BUSINESS. If the bug isn't exploitable, or if it's possible to mitigate it via a workaround that's significantly cheaper than a "proper" rewrite, most businessmen will consider that adequate. – Steve Summit Sep 07 '22 at 15:11
  • @SteveSummit It's not only about passion. It's about the fact that THEY ARE NOT TAKING ANY STEPS TO MITIGATE THE RISK. They're saying "Oh, this is not really a problem, so we won't make a patch available anytime soon, but we're also not telling anyone, that would only startle our customers". That's what's pissing me off. It's not a small bug, it's a major design flaw. It's like selling a safe with a lock anyone can open with a paperclip and saying: "Oh, that's not a problem, because our customers should always keep the safe in a guarded place anyway, and the metal isn't affected". – TravelingFox Sep 08 '22 at 18:49
  • Look at it this way: your client is protected, your client isn't acting unethically, and they haven't canceled your contract. You've done what you can to put their best interests forward, which is your primary job. If you pull this thread much further, you could violate an NDA, or cause them trouble with the vendor such that they release you; either could reflect negatively on you when you try to find future work. And try not to take a job working with that unethical vendor's products ever again. They seem so sleazy that they might sue you. – John Deters Sep 11 '22 at 15:38
27

Things you can do:

  • Go for full disclosure, but as you've pointed out, this will probably do little more than strain the relationship further
  • Do you have a working proof of concept that shows how the vulnerability could be exploited by an unauthorized person? It is one thing to demonstrate a potential vulnerability, but exploiting it is another matter. Until you come up with a PoC, the vulnerability is perceived as theoretical and easy to dismiss. If, on the other hand, you have such a PoC, then this could instill a sense of urgency that has been lacking so far.
  • Also, if the user base for this particular software is rather small, then public pressure may not be sufficient for them to amend their ways, especially when the customers are more or less "captive" on this particular software
  • I understand the vulnerability is server-side, but if this is a system hosted on premises, you may have a number of mitigation options available. For example, if you are hosting a vulnerable web application that cannot be easily patched (for example, one with a SQL injection vulnerability), then a WAF loaded with custom rules can make exploitation more difficult, if not impossible. Or even a reverse proxy to filter the requests (a minimal sketch follows this list).
  • I am not a lawyer, but if the contract is up for renewal I would want to add a clause about vulnerabilities, known and unknown, and address liability in case of a breach.
  • Or you can try to be creative: ask for a quote for fixing that "bug" (maybe it's not a lot of money), pay up and tell them you're going for full disclosure after giving all clients the time to patch their systems, so that disclosure should not cause harm. Result: your client becomes an IT hero for sponsoring the fix, and the vendor is embarrassed. I am half-joking here, but if you have identified a software flaw that could cause you a lot of damage, paying for a fix now may be a smart move. But if you are going to pay, then you have the right to demand technical details so they'll have to justify the fee. Perhaps it will turn out that it's not that much work after all.
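
To make the WAF / reverse-proxy idea above a bit more concrete, here is a deliberately simplified, hypothetical sketch of a filtering layer in front of an application you cannot patch. It assumes a WSGI web app and made-up block patterns (nothing here is specific to the actual product or vulnerability); a real deployment would rather use nginx/Apache with ModSecurity or a commercial WAF, with rules tailored to the real exploit traffic:

```python
# Hypothetical sketch of a request-filtering layer ("poor man's WAF") placed
# in front of an application that cannot be patched yet. The app, the block
# patterns and the port are all made up for illustration.
import re
from urllib.parse import unquote_plus
from wsgiref.simple_server import make_server

# Illustrative signatures only; real rules must match the actual exploit.
BLOCKED_PATTERNS = [
    re.compile(r"union\s+select", re.IGNORECASE),  # classic SQLi marker
    re.compile(r"\.\./"),                          # path traversal attempt
]

def vulnerable_app(environ, start_response):
    """Stand-in for the unpatched vendor application."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello from the (hypothetically) vulnerable app\n"]

def filtering_middleware(app):
    """Reject requests whose decoded query string matches a bad pattern."""
    def wrapper(environ, start_response):
        query = unquote_plus(environ.get("QUERY_STRING", ""))
        if any(p.search(query) for p in BLOCKED_PATTERNS):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"request blocked by local filter rule\n"]
        return app(environ, start_response)
    return wrapper

if __name__ == "__main__":
    # In practice this filter would sit behind nginx/Apache + ModSecurity or
    # a commercial WAF; wsgiref is used only to keep the sketch runnable.
    make_server("127.0.0.1", 8080, filtering_middleware(vulnerable_app)).serve_forever()
```

The value isn't in these particular patterns but in having a choke point you control while the vendor stalls.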
Kate
  • 8
    I don't want to go for full disclosure because I'm worried for other customers. I do have a working exploit, but it requires things to be "not perfect". Think of it like a really bad vulnerability (like SQL injection) that can only be exploited if some software is run under an admin account (made up example). SQL injection is something that just shouldn't be possible in 2022, but they refuse to fix it because "Well, don't use an admin account". SQL injection is a serious flaw, it should be fixed out of principle, IMHO. And they have really large corporations among their customers. – TravelingFox Sep 06 '22 at 20:58
  • 1
    Such companies usually offer customizations at a huge price for features that they don't care about, so that no customer would agree to pay. So just try not to use proprietary software. If it were open source, you would be able to fix it for your client. – akostadinov Sep 07 '22 at 08:12
  • 2
    Having worked for a software vendor, @akostadinov, and being asked to add features that we knew were bad ideas, we did exactly that. Made _outrageous_ estimates and still, sometimes, they were accepted. – FreeMan Sep 07 '22 at 14:47
  • @FreeMan, I guess it depends on how hungry that vendor is. An established vendor I believe would hardly want anti-features or maintenance bloat for a single customer. – akostadinov Sep 07 '22 at 14:58
  • 2
    We were well established in our industry, but the owner was also very much a "the customer is always right" salesman. We did keep customer specific lines of code, and these pain points were usually kept customer specific, some for _many_ years and versions. Not saying that's the best model, but it was the one we lived in. – FreeMan Sep 07 '22 at 15:01
  • 2
    You can't just go full disclosure. This discovery was made as part of a contract with the client and you are an employee at the consulting firm. The _knowledge_ of this vulnerability is property of the client and your employer. Going full disclosure would be a breach of NDAs. Also, you risk damaging the reputation of your client more than that of their software supplier. Even submitting this to a bug bounty platform might be a breach of your contract, as you would be using knowledge gained as an employee for personal profit (the bounty). – BlueCacti Sep 08 '22 at 10:22
13

This is the age-old question of how to disclose vulnerabilities.

(note on terminology: I am using 'company' for the one that made the software, 'researcher' for the one who finds the vulnerability and 'client' for the person or business that installed the software and requested the pentest)

The company that made the software may prefer not to patch anything (less work), and would rather you didn't tell anyone that their software has defects (in this case, a vulnerability).

As a researcher, you consider this important and think it deserves to be fixed promptly.

Other customers using this software are sometimes argued to be secure as long as you don't disclose the details. However, that doesn't prevent someone else (perhaps with more nefarious purposes) from finding the same vulnerability (there are enough examples of concurrent discoveries: several people finding the same vulnerability with no prior knowledge of each other's work). Not letting those other clients know that there is a vulnerability in that software also puts them at risk, by denying them the ability to take protective measures that they might have used had they known about it.

Each security researcher/team has its own policy, but the usual compromise between those two positions is that the company is notified of the security vulnerability, with a notice that it will be made public after a fixed time (usually 90 days), even if it's not fixed by then.

This should be enough time for the company to assess the problem and fix it. Sometimes the company requests a longer embargo period, to which the researcher may or may not agree (at this point they will probably judge whether the request is reasonable based on their interactions during this period).

(If the company releases a fix earlier, the researcher publishes their discovery at that point)

These are obviously generalizations: there are companies striving to fix security vulnerabilities in their products in much shorter timeframes, and researchers that advocate for immediately publishing all vulnerability details, with no grace period for the companies.

In this case, as there was no prior timeline proposed, that would need to be stated: "It has already been a year with no fix on the horizon, we are concerned that your users are at risk, and we plan to publish the vulnerability at X date (e.g. January 15th 2023)". It's then up to the company to decide if the development of the fix should be given more priority or not.


So, what happens if the embargo period passes, there is no fix and the researcher decides to publish it?

Advisories can be shorter or longer, but there are a number of things that should be present:

  • The product and version(s) affected
  • A brief summary of the issue (e.g. "there is a SQL injection", "a local user could escalate to admin")
  • The version in which it is fixed (or a note that there is no fixed version yet)
  • Available mitigations and workarounds (e.g. the device should only be placed on an isolated network until this is fixed)
  • Timeline (the dates on which you contacted the company and when they got back to you)

Plus any other relevant information you might want to include. Some people include the PoC (or publish it but after a further delay). Others publish a video. Some vulnerabilities are even the basis of papers later presented in conferences.

Then, the readers will reach their own conclusions.

A company not fixing a serious issue for a year won't look good, whereas readers may be more forgiving of a minor issue, or one that doesn't affect those reading the advisory.

At the same time, if your report is flawed, you won't have any credibility. For instance, you can expect it to be received with a smile if it describes as a vulnerability the fact that the system becomes unbootable when the user logs in with the Administrator account and deletes C:\Windows.

Note that a good report doesn't need to be "big". It should just be truthful. There are big vulnerabilities and small vulnerabilities. And the same vulnerability will have different impacts on several clients.

Also, in some cases the "fix" might even end up just as a documentation update explaining that the system MUST NOT be set up in a certain way because that would be insecure.

Regarding their explanation, I would prefer not to weigh in without knowing the specifics (which you obviously won't be able to share). In some cases it does make sense to treat the system as a whole (for instance, the contract between the frontend and the backend of certain software might state that the validation is done by the frontend classes), and in others it's completely unsustainable. I have also heard the "It's not an issue if everything else is set up correctly" argument in cases where I didn't agree with it.

Still, you have to admit that if the issue is not exploitable thanks to other measures they have in place (and they expect everyone to configure the system "correctly"…), it is understandable that they consider this security problem a minor issue and haven't prioritized it.

  ‎

Finally, there's another point to take into account in this specific case. So far, we have considered the company making the product and the researcher finding the defect. However, in this case there is a third party: the client that tasked you with performing the pentest and (likely) owns the results and has a say in how they can be used. So far, they seem to be using this for negotiating the license renewal.

The argument «Well, that's their own problem» is a risky one. On the one hand, your client has borne the cost of pentesting the application, so why should the company or other customers (which might even be their competitors!) benefit from their funds? On the other hand, cooperation is a better strategy for obtaining secure systems. How do they know that some other client didn't find another vulnerability (one you missed) and is sitting on it as well?

  ‎

would you talk to their other customers

No. You don't publish the results without your client's permission. If your client gets a hefty discount in exchange for never telling anyone about the vulnerability, you would have to keep your mouth shut (I guess; review the provisions of the contract with your client for the actual details).

If you want to do this in the future, you would include some provisions for it in future contracts with your clients, stating that you are allowed to communicate any vulnerabilities you find to the vendor, that after X months (or earlier, if authorized by your client) you can publish the details, that your client must credit you as the one who found the vulnerability… whatever you deem fit (and your clients accept).

Even then, assuming your contract said that, contacting other companies out of the blue like that would be a bad idea. It would be far better to publish the results on your blog, then send the advisory itself to the usual lists, referencing the blog post. And you should really get a CVE assigned to it.

Once your vulnerability is listed with a CVE, it should appear on the radar of all security teams using that software (those with a proper process for vulnerability handling, at least). If your blog post is inviting enough (offering to provide additional advice to affected customers, perhaps even including a bit of publicity at the bottom reminding readers that you are available for hire if they need to pentest a setup of that software), you may receive some queries.

If you have worked with other companies using that software before, a quick note to your contact there pointing them to your new post could be adequate, but I wouldn't cold-email those companies.

Ángel
  • 2
    Thank you. I feel that you're the only one that really understood my question. Everyone else was debating whether this was only about "misconfiguration" of the system, when I didn't even give any details on the actual issue I found. Nevertheless, I have spent some more time thinking about how this could be exploited and concluded that the practical risk is indeed low if customers have upgraded to the latest version (doubtful) and have implemented good security otherwise (also doubtful). I think the risk at least to my client is rather low at the moment. – TravelingFox Sep 08 '22 at 18:46
7

You could publish a security advisory yourself, without disclosing the details necessary for exploitation. A common place for the publication would be the CVE database, which has an online submission form with instructions.

Include at least the following:

  • Affected product name and versions that you know of.
  • Worst case effect of the vulnerability. Can someone alter or remove data? Download something secret? Execute unauthorized code on the server?
  • Required access for the vulnerability. Does it require a user account? Access to the local machine or network?
  • You mentioned that some settings affect the vulnerability. Include the settings that can be used to avoid the problem.

It's probably good courtesy to send a draft of your submission to your client and the vendor for review first, but you don't really need the vendor's permission to publish it.

jpa
  • Instead of going public with this information, I would prefer to inform other customers directly, and hope that they would also put pressure on the vendor to fix this. But I don't know if that's really a good idea. – TravelingFox Sep 07 '22 at 11:45
  • 5
    I think the point, @TravelingFox, is that the CVE is the commonly accepted method of sharing vulnerabilities. Especially those for which the vendor has shown limited interest in fixing. Additionally, do you know all the other customers? Do you have contact info for them? If they're up on security (and one would hope their IT teams are), they should be monitoring CVE and will find your post themselves. – FreeMan Sep 07 '22 at 14:50
  • 1
    "No one knows there is a vulnerability" can be used as an excuse to not fix it. If the public knows, they can't use that as an excuse anymore. – rtaft Sep 08 '22 at 12:26
2

Consider the impact on your personal brand of any action you decide to take.

If you want to earn hero points as a security researcher, the disclosure options might be beneficial.

If your industry is niche and your clients appreciate discretion, you might want to avoid any course of action that makes it look like you are airing dirty laundry in public. Disclosure may be perceived as an admission that the client was using vulnerable software. That could have PR consequences.

As a consultant, you need to think about the impression your next client will have of you. Maybe they will appreciate a hero security researcher who can come in and secure their systems. Maybe the last thing they want is a disruptive influence who makes a lot of noise over a minor issue.

jl6
-1

Did you get permission from the vendor to perform penetration testing?

Read the ToS carefully; you may have breached an agreement in which your employer agreed to basically not pen-test the software in any way.

The main thing that separates a penetration tester from an attacker is permission. The penetration tester will have permission from the owner of the computing resources that are being tested.

Source: https://www.techrepublic.com/article/dont-let-a-penetration-test-land-you-in-legal-hot-water/

MonkeyZeus
  • The OP doesn't have that problem. He was performing the pentesting for a company which had $software installed. Thus presumably with the usual paperwork. – Ángel Sep 07 '22 at 17:03
  • @Ángel Please point out the sentence which backs your claim. The post does not indicate an on-premise installation. Regardless, the ToS could still have wording forbidding pen-testing of their software. – MonkeyZeus Sep 07 '22 at 17:56
  • Wouldn't that be equivalent to the vendor saying that they know their software would be shown to be insecure if pen-testing was performed? And I doubt that bad actors would have agreed to the ToS, so they're going to be "pen-testing" and acting on the vulnerabilities. – Andrew Morton Sep 07 '22 at 18:17
  • @AndrewMorton Excellent observation. Read the ToS. Maybe, just maybe, don't use that vendor. – MonkeyZeus Sep 07 '22 at 18:25
  • @Ángel Source?? – MonkeyZeus Sep 07 '22 at 18:28
  • @MonkeyZeus I would expect the OP's client to be the one in breach of such a ToS, if any. I considered mentioning it, but I find it really odd for a ToS to forbid you from finding security defects in the software; do you have examples of those? Not really because the company _wouldn't_ prefer that, but because trying to prevent it in a contract would look bad on them. Such clauses are common when providing _services_, not so much when it is a local install. – Ángel Sep 07 '22 at 22:31
  • As for knowing that it is on-premise, I admit it is not explicit, but I think the general description supports that this is the case: it is described as software, the client is pentesting it, there was a patch just for this client... – Ángel Sep 07 '22 at 22:33
  • @Ángel I disagree; the general description makes me think it's not on-premise software. If the client is more clueless than the consultant, then the consultant could be pushing the client into danger. Unless you can get the OP to disambiguate, my answer stands as valid. I don't appreciate your baseless comment, nor the downvote it attracted, just because you don't think such a ToS can exist. – MonkeyZeus Sep 07 '22 at 23:35
  • Sure. I have no problem acknowledging that you could be reading the situation better than me. I would usually consider it irresponsible for the client to launch, on their own, an uncoordinated pentest against a service hosted by the software vendor, but there are certainly clueless clients that would do that. @TravelingFox could you please clarify for us whether the software was hosted on the vendor's servers or by your client? – Ángel Sep 07 '22 at 23:41
  • @Ángel Go ahead and Google "You also expressly agree that you will not use any robot, spider, or other automatic or manual device or process to interfere or attempt to interfere with the proper working of our website, nor act as a conduit for others to affect the same result." if you think such ToS don't exist. – MonkeyZeus Sep 07 '22 at 23:54
  • @MonkeyZeus, that would fall under the "Such clauses are common when providing _services_" part. The website is hosted somewhere else. The example would need to be something like "You can install wordpress on your server but you are not allowed to crawl it". I can think of licenses forbidding decompilation and reverse engineering, but I don't think they are a perfect fit, since their main goal is slightly different from what we are talking about. Moreover, you could pentest an application and find security vulnerabilities without opening a debugger at any point. – Ángel Sep 08 '22 at 00:04
  • @Ángel Right, and given the amount of SAAS being peddled to businesses, I wouldn't be surprised if OP is using such a system. – MonkeyZeus Sep 08 '22 at 00:07