Failing PCI compliance scan: Remote Access Service Detected on port 22


This is an item that I am failing in my PCI compliance scan report:

[screenshot of the failing scan report item: "Remote Access Service Detected" on port 22]

I use port 22 for SFTP connections to transfer files from my local computer to the server and vice versa. I tried to dispute this failure by sending this email:

Our username is [username]. Our latest scan results ran today for IP [IP address] show that we failed the following item:

Remote Access Service Detected

It doesn't appear this one is actually a failure. If it is, please provide any relevant information as to why it is.

We are using port 22 to connect remotely using SFTP and that is normal.

The reply from the PCI Compliance support team was:

Thank you for contacting PCI compliance support.

PCI compliance requires that all remote access services be turned off when not in use; however, if you require them to be on at all times for the business to run, you can file a dispute.

Does this mean that I need to enable port 22 only when I have an SFTP connection request and immediately disable it when that connection is done transferring files? I thought it was a binary setting: port 22 is either enabled on the server or disabled, not dynamically enabled/disabled in response to SFTP connection requests. Is that how it works?
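For illustration only, here is a minimal sketch of what "dynamically" opening and closing the port could mean in practice, using a host firewall such as ufw on Linux (the tool, the default-deny inbound policy, and the exact commands are assumptions for illustration, not anything from the scan report):

sudo ufw allow 22/tcp          # open SSH/SFTP just before a planned transfer
# ... perform the SFTP transfer ...
sudo ufw delete allow 22/tcp   # remove the rule; with a default-deny inbound policy the port is closed again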

UPDATE 1: I found the following information in "Appendix D: ASV Scan Report Summary Example" of the document at https://www.pcisecuritystandards.org/documents/ASV_Program_Guide_v3.0.pdf:

VPN Detected

Note to scan customer:

Due to increased risk to the cardholder data environment when remote access software is present, please 1) justify the business need for this software to the ASV and 2) confirm it is either implemented securely per Appendix C or disabled/removed. Please consult your ASV if you have questions about this Special Note.

Does port 22 need to be enabled/disabled dynamically only when SFTP remote connections are requested for example, or is it a binary configuration on the server where it is either enabled or disabled?

UPDATE 2: I was reading the following at https://medium.com/viithiisys/10-steps-to-secure-linux-server-for-production-environment-a135109a57c5:

Change the Port: We can change the default SSH port to add a layer of opacity to keep your server safe.

1. Open the /etc/ssh/sshd_config file
2. Replace the default "Port 22" with a different port number, say 1110
3. Save and exit the file
4. Restart the service: service sshd restart

Now, to log in, specify the port number:

ssh username@IP -p 1110
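(A hedged sketch of the same server-side change; the service name and port are illustrative, and the sshd_config details may differ by distribution:)

# In /etc/ssh/sshd_config, change the line "Port 22" (uncommenting it if needed) to:
#   Port 1110
sudo systemctl restart sshd    # the unit may be "ssh" on Debian/Ubuntu; or: service sshd restart
sftp -P 1110 username@IP       # note: sftp takes the port with a capital -P, ssh with -p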

That seems to be "security through obscurity". Is it an acceptable solution or a workaround? Would the PCI Compliance scan pass just by changing the port from 22 to a different random number?

Jaime Montoya

Posted 2018-07-23T16:54:36.510

Reputation: 111

The way I'm reading it, they don't think you should have RAS software at all; the scan will always fail if you do. The "turned off when not in use" part sounds like a compromise they didn't want to make, so the bar for justification may be high. I'd guess it involves physically accessing the server to enable the service, using it, and then disabling it again, rather than any kind of automated response, as those are vulnerable to attack as well (so it's just security through obscurity). – Frank Thomas – 2018-07-23T20:02:07.990

@FrankThomas The option of running it on an alternate port is security by obscurity. The requirement to actually physically access the server to get files onto it is not; it's simply a requirement to use physical access control instead of digital access control (and it's a lot easier to secure physical access to a system than digital access). – Austin Hemmelgarn – 2018-07-23T20:35:40.233

@AustinHemmelgarn, I was referring to technologies like port-knocking or other firewall magic that might start a service or open a port on some kind of demand. In that case you are obscuring the method of requesting that the port become active, which is an improvement, but not secure when under scrutiny. – Frank Thomas – 2018-07-23T20:40:30.133
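(For readers unfamiliar with port knocking, a hedged client-side sketch with a made-up host and knock sequence: the port stays closed until a secret sequence of connection attempts opens it briefly, which is exactly the kind of obscurity being discussed.)

knock server.example.com 7000 8000 9000   # secret sequence asks the firewall to open port 22 for this IP
sftp username@server.example.com          # connect during the short window before it closes again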

@FrankThomas How would I transfer files using SFTP? Does this mean, do not use SFTP? Is there an alternative way to transfer files? I do it with FileZilla or with SSH and then SFTP commands from the terminal. – Jaime Montoya – 2018-07-23T21:20:47.730

@AustinHemmelgarn Without using SFTP (no digital access, only physical access), how could I transfer files from the server to my local computer or upload files from my local computer to the server? – Jaime Montoya – 2018-07-23T21:22:52.960

Answers


First off, IANAL, and I am not an expert on PCI DSS compliance.

That said, this looks like a simple case of misunderstanding on your part. SFTP is still SSH access, it's just not shell access. It's almost as bad for an attacker to be able to read and write arbitrary files on your system as it is for them to have regular shell access (they can inspect your code, upload malicious payloads, etc). As such, just like with SSH, you should be running it on a different port if you need to be PCI DSS compliant.

Dynamically enabling/disabling SFTP on port 22 still violates PCI DSS compliance. Realistically, you're non-compliant as long as that port is open, regardless of whether or not the compliance scan catches it (just like theft is illegal regardless of whether or not you get caught).

Also, as far as PCI DSS compliance goes, this is an easy thing to fix. Any sane SFTP client will let you access it on a non-standard port, and the same goes for almost every SFTP server in existence.
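For example, a hedged sketch with an assumed port of 2222 (nothing here is specific to your server):

sftp -P 2222 user@example.com   # OpenSSH's command-line sftp takes the port with -P
# In FileZilla, put the same port number in the Port field of the Site Manager entry.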

Now, all that aside, you might still be non-compliant if you have this service enabled on a separate port. As mentioned above, I am not an expert on PCI DSS, but given the response you received from an expert, it sounds like you do need this shut off outside of maintenance windows (which translates to not having the SFTP server running at all).

Austin Hemmelgarn

Posted 2018-07-23T16:54:36.510

Reputation: 4 345

In other words, disable port 22 and run SFTP on a non-standard port while making sure that this way to do it complies with the PCI requirements? – Jaime Montoya – 2018-07-23T20:01:32.797

@JaimeMontoya, based on their statement ("all remote access software"), no I would not expect alternate ports to satisfy their intent, though it may pass the scan if it is poorly crafted. – Frank Thomas – 2018-07-23T20:04:20.757

@FrankThomas How is someone supposed to upload files from a local computer to a server and download them from the server to the local computer? Isn't it by using SFTP on port 22? SFTP means Secure File Transfer Protocol. Does it mean that SFTP is actually insecure and that we should not use SFTP, or should find another protocol or way of transferring files? I know FileZilla is software for remote connections. What is an alternative, then, in order to be PCI compliant? Not transferring files that way, or doing it differently? What is an alternative protocol? I use SFTP either from FileZilla or the command line. – Jaime Montoya – 2018-07-23T20:13:25.313

All software is theoretically insecure, and externally accessible software is particularly so, so the prevailing wisdom is to not have it. Any given implementation of SSH/SFTP (or any other protocol) is only one zero-day away from being wide open. I don't have any answers for you, I'm just reading what they are saying and applying my experiences with security audits. – Frank Thomas – 2018-07-23T20:36:24.363

@JaimeMontoya Not transferring files to or from the server while it's in production is the way of doing it. You should be scheduling proper maintenance windows for things like updating such core components of your site as payment processing systems, and there should be no way to remotely modify them outside of those maintenance windows. This kind of thing is SOP for any reasonable site (not just PCI DSS compliant ones): you DO NOT modify production systems outside of maintenance windows, period. – Austin Hemmelgarn – 2018-07-23T20:36:49.887

@AustinHemmelgarn Wow. I am a programmer and I write code on my local computer. When my work is completed locally, I push the code to a remote repository server using git push .... Then the code is on the server and I can pull it to have the changes on the production site. I push code from my local computer to the server regularly so that, in case something happens to my computer, the code is in the public repository on the server and we can move it to the production site when needed. Does it mean that my approach is insecure? Should I write all code and push only during maintenance windows? – Jaime Montoya – 2018-07-23T22:22:42.583

@FrankThomas I get the point, but it seems to me as if you were telling me that if I do not connect my computer to the internet, I will dramatically improve security and that if I want to have even more security, I should disable USB ports as well. Well, that is true. But it is also true that a computer that is not connected to the internet can become like a car that does not have gasoline. I guess we need to be reasonable and I am surprised PCI Compliance is strict and radical this way. – Jaime Montoya – 2018-07-23T22:31:32.540

@JaimeMontoya If the system that's non-compliant is not a production system, then that's a rather important piece of information that's missing from your question. It is worth pointing out that leaking source code is really bad. Not quite as bad as an actual applied ACE or customer data leak, but damn close. Such a system should still be kept properly secured, ideally by using physically isolated networks (one for pushing code from workstations, one for pushing code to production servers that is taken off-line when not in use). I have no idea how PCI DSS might handle such a system though. – Austin Hemmelgarn – 2018-07-24T00:07:12.173

@AustinHemmelgarn The system that is non-compliant that I am talking about is a production system or production server. What I am saying is that I work on my local computer and then push code to the server but to a staging area on the server. Once the code is on the server, I can double check that everything is fine in a testing environment on the server and if everything is good, I push it to the production site. – Jaime Montoya – 2018-07-24T00:12:01.300

@JaimeMontoya OK, in that case, the fix is really easy. Don't push code to the production site; pull it there. Instead of using git push or the SFTP put command on the repository server, use git pull or the SFTP get command on the production server (or, if you're using Windows on the production server, use FileZilla there to copy the data from the repository server). – Austin Hemmelgarn – 2018-07-24T00:16:31.780
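(An illustrative sketch of that pull model, run on the production server; the path, remote name, and branch below are made up:)

cd /var/www/staging    # hypothetical staging checkout on the production server
git pull origin main   # the production server fetches from the repository server; nothing connects *to* it
# The same direction applies if you run an SFTP client on the production box and "get" files from the repository server.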

@AustinHemmelgarn Just to avoid misunderstandings: "production site" and "production server" are not synonyms. In my architecture, I have a server where I have the production site, but on that same server I also have a staging area or testing environment that is not the production website. Once I test in the testing area of the production server, I move the code to the production site that is on the same production server. So from my local computer I push code not to the production site, but to the staging area or testing environment on the production server. – Jaime Montoya – 2018-07-24T00:31:15.407

@AustinHemmelgarn I use git push to move code from my local computer to the staging area that is on the production server, but it is not the production site. Then, when I test the code and know that it works correctly on the server, I use git pull to move code from the staging area to the production website. Both the staging area and the production website are on the same production server. – Jaime Montoya – 2018-07-24T00:33:32.093

@JaimeMontoya - PCI cares about the entire server that the production code is running on, not just the production website. If you can push to the server, then that's likely the problem. – Bobson – 2018-07-24T11:33:50.533

@Bobson Let's say that I block ports and connections so that I cannot push code to the production server. In that case, let's say that I push code to a testing server and, when everything is good, from this testing server to the production server. But then the testing server will have to push to the production server, and it will be a remote connection again. Does it mean that I would copy code to the physical server by using USB drives or something like that? – Jaime Montoya – 2018-07-24T14:09:53.373

@JaimeMontoya Or you could physically log into the production server, and pull the code over from the testing server. – Austin Hemmelgarn – 2018-07-24T18:48:50.397

@AustinHemmelgarn What do you mean by "physically log into the production server"? I am using a web hosting company, so my production server is somewhere in the world, not right next to me. Are you saying that this security measure, the way you explain it, could only be implemented when you have the physical servers in-house, in your own company offices for example? – Jaime Montoya – 2018-07-24T18:53:56.743

@JaimeMontoya If they're physical servers, then log in on the server's physical console. If they're VPS nodes, use whatever the VPS provider's method is for getting to the system console (the good ones give you a way to get to it from their management dashboard). If this is just a wholly hosted website (like what GoDaddy provides for web hosting), then you need to talk to your hosting provider about how to make it PCI DSS compliant. – Austin Hemmelgarn – 2018-07-24T19:40:37.560

@AustinHemmelgarn Let's say that I log in on the server's physical console and use git pull to pull code from a remote server or from my local computer. Isn't it still a remote connection? I would be connecting the production server to a remote computer. The only difference would be that instead of pushing code to the production server, I would be using the production server to pull code that is located on a remote server or local computer. Doesn't it still require a connection or access to remote computers? – Jaime Montoya – 2018-07-24T20:08:44.907

@JaimeMontoya Except it's a remote connection from the production server, not to it. IOW, the production server is connecting to the testing server, not the other way around. This sounds like a pointless and stupid distinction, but it can actually matter a lot (if, for example, your testing server is not accessible from the regular internet but your production server is, then it's safer to do things this way). PCI DSS (and most other security compliance standards) is kind of notorious for making distinctions like this. – Austin Hemmelgarn – 2018-07-24T20:31:29.483

@AustinHemmelgarn Wow, interesting. One more thing. When you say that I "log in on the server's physical console", how does that happen? If my production server is in London hosted by a web hosting company, and I am in Canada for example, then I guess my web hosting company would give me some kind of portal to allow me to "log in on the server's physical console". Isn't that a remote connection itself? Maybe not SSH but some kind of protocol that is also a remote connection, or not? – Jaime Montoya – 2018-07-24T22:26:53.647

@JaimeMontoya It's also not something any arbitrary person who can access the server over the network can do. At minimum, you need to be logged into your account with them to do it (which can usually be secured much more reliably than an SSH connection). As far as the protocol, usually it's a browser-based VNC client connected over a websocket to the server hosting the VM. – Austin Hemmelgarn – 2018-07-25T00:01:40.597

Let us continue this discussion in chat. – Jaime Montoya – 2018-07-25T01:27:52.457