
I'm currently trying to set up vsftpd on an Ubuntu 16.04 server and I want to use FTPS (ideally I would use SFTP, but unfortunately I'm constrained by a legacy system).

I've managed to set it up using the default config and no TLS, and I can connect fine via FileZilla. However, for the past two days I've been trying to enable TLS, and no amount of questions on SE or elsewhere seems to lead to a positive result.

The certificate and TLS settings in my vsftpd.conf file are as follows:

rsa_cert_file=/path/to/fullchain.pem
rsa_private_key_file=/path/to/privkey.pem
allow_anon_ssl=NO
ssl_enable=YES
force_local_data_ssl=YES
force_local_logins_ssl=YES

However, I can no longer connect; the FileZilla console shows the following:

Status:         Verifying certificate...
Status:         TLS connection established.
Status:         Server does not support non-ASCII characters.
Status:         Logged in
Status:         Retrieving directory listing...
Status:         Server sent passive reply with unroutable address. Using server address instead.
Command:    LIST
Error:      Connection timed out after 20 seconds of inactivity
Error:      Failed to retrieve directory listing
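That "unroutable address" warning comes from the host embedded in the PASV reply: the server advertises its data-connection endpoint as six comma-separated numbers, and FileZilla ignores the advertised host when it is a private address. A minimal sketch of how a client decodes the reply (the reply string below is a made-up example of what a NATed server might send when pasv_address isn't set):

```python
import ipaddress
import re

def parse_pasv_reply(reply):
    """Decode a '227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)' reply
    into (host, port); the data port is p1 * 256 + p2."""
    nums = re.search(r"\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)", reply)
    h1, h2, h3, h4, p1, p2 = (int(n) for n in nums.groups())
    return f"{h1}.{h2}.{h3}.{h4}", p1 * 256 + p2

# Made-up example of what a NATed server advertises when pasv_address
# is not set: its internal interface address, not the public one.
host, port = parse_pasv_reply("227 Entering Passive Mode (192,168,1,100,156,64)")
print(host, port)  # 192.168.1.100 40000
print(ipaddress.ip_address(host).is_private)  # True, so FileZilla falls back
```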

This is my first time configuring vsftpd, so I've been following some online tutorials. They also involved UFW, and I've opened up the ports as shown below.

I've also tried adding the following lines to my vsftpd.conf file:

  ssl_tlsv1=YES
  ssl_sslv2=NO
  ssl_sslv3=NO

  require_ssl_reuse=NO
  ssl_ciphers=HIGH

I've seen other posts mentioning the pasv_address option, so I've tried adding it to my config with the external IP of my server. Please note the server is hosted on Google Compute Engine, and I've also updated my firewall rules in Compute to allow the same ports that were specified in the tutorial. This doesn't work either.

I can only assume it's something to do with ports/firewalls or other TLS options, but I'm completely stumped. I guess it doesn't help that I have the Google Cloud network firewall and then UFW (though disabling UFW has no effect).

My UFW rules are as follows:

20/tcp                     ALLOW       Anywhere                  
21/tcp                     ALLOW       Anywhere                                   
990/tcp                    ALLOW       Anywhere                  
40000:50000/tcp            ALLOW       Anywhere 

If anyone wants to know more, the tutorial I followed was here: configuring-ftp-access

There don't appear to be any entries in vsftpd.log that would indicate an issue, but turning on verbose logging in FileZilla reveals the following:

Binding data connection source IP to control connection source IP 192.168.1.100

which I presume might be an issue, as that looks like a local IP. I'm stumped as to how to fix this, especially as I also have the following in my vsftpd.conf file:

pasv_address=(EXTERNAL GOOGLE COMPUTE IP)
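As an alternative to hard-coding the external IP, vsftpd can resolve a hostname at startup via the pasv_addr_resolve option, which may fit here since the certificate is bound to the domain anyway. A sketch (ftp.example.com is a placeholder for the actual domain):

```
# Resolve a DNS name at startup instead of hard-coding the external IP
pasv_addr_resolve=YES
pasv_address=ftp.example.com
```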

My Google Cloud firewall rules are:

Rule 1 (network: default, priority 1000): IP ranges 0.0.0.0/0, tcp:20-21, Allow

Rule 2 (pass-ports, sftp): IP ranges 0.0.0.0/0, tcp:40000-50000
(These will be locked down by IP eventually, but even testing with everything open I can't get this working.)

Also, in my vsftpd.conf file I believe I set those ports as the ones to use via:

port_enable=YES
pasv_enable=YES
pasv_min_port=40000
pasv_max_port=41000
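For the passive data connections to work, the range configured here has to be a subset of what every firewall in the path allows. As a sketch of that relationship, using the ranges from this setup:

```
# vsftpd.conf                firewall rules that must cover the range
pasv_min_port=40000          # UFW: 40000:50000/tcp ALLOW
pasv_max_port=41000          # GCP: tcp:40000-50000
```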

Update

I can now connect to this from the box itself using lftp with the following setting:

set ftp:ssl-force true

I connect via the domain name rather than the IP, as the cert is issued for the domain, so it won't work with the IP.

I can then create new directories etc. via the command line. However, if I try ls, I get "ls at 0 [Making data connection...]" and it just hangs there. I also get an error via an external FTP client such as FileZilla, which just times out at the LIST command:

Command:    LIST
Error:          The data connection could not be established: ETIMEDOUT - Connection attempt timed out
Response:   425 Failed to establish connection.
Error:          Failed to retrieve directory listing
Error:          GnuTLS error -15: An unexpected TLS packet was received.
Status:         Disconnected from server: ECONNABORTED - Connection aborted

The only other info that might be relevant is that the domain is proxied by NGINX to a Node app. But I assume this only applies to ports 80 and 443, so it shouldn't be affecting port 21.

Does anyone have any ideas?

  • **FTPS** (FTP over SSL) can be very difficult to patch through a tightly secured firewall since FTPS uses multiple port numbers. The initial port number (default of 21) is used for authentication and passing any commands. However, every time a file transfer request (get, put) or directory listing request is made, another port number needs to be opened. That said, can you share the firewall rules by running the `gcloud compute firewall-rules list` command? You can [use SCP](https://cloud.google.com/compute/docs/instances/transfer-files#scp) to transfer files if that is possible in your case. – Taher Feb 27 '18 at 21:48
  • @Taher I've added in that extra info. Unfortunately I can't use SCP - I'm constrained by another system that I have no access to – TommyBs Feb 28 '18 at 06:59
  • Have you tried to add `allow_writeable_chroot=YES` in your _vsftpd.conf_ file? Seems like this has worked for some users as [stated here](https://askubuntu.com/questions/637810/vsftpd-gnutls-error-15-an-unexpected-tls-packet-was-received/852705#637826) – Taher Mar 01 '18 at 16:12
  • Adding that line actually seems to stop vsftpd from starting due to a bad config. In fact I can't seem to see it listed as an option here http://vsftpd.beasts.org/vsftpd_conf.html – TommyBs Mar 01 '18 at 18:24
  • So I've got closer with this: it seems to be the Google Compute firewall rules. If I allow everything on that firewall, it works. Though I obviously don't want to do that, so I guess my options are to see if the ultimate FTP user will have a static IP and limit it to that, or try to work out what the rules need to be. The weird thing is that UFW still has the more restrictive rules, but that isn't causing a problem – TommyBs Mar 01 '18 at 20:30
  • Is it possible that some control traffic ports (_990, 991, 989_) are required to be opened in the GCP firewall as [indicated here](https://serverfault.com/questions/10807/what-firewall-ports-do-i-need-to-open-when-using-ftps)? Taking a look at the traffic with a protocol analyzer like [tcpdump](https://en.wikipedia.org/wiki/Tcpdump) or [wireshark](https://en.wikipedia.org/wiki/Wireshark) can also give a better understanding of which ports to open in the GCP FW. – Taher Mar 02 '18 at 15:50
