
I have installed ProFTPd on an Amazon EC2 running Amazon Linux.

I have enabled TLS (FTPS) for ProFTPd and set passive ports in proftpd.conf:

Port                         21

<IfModule mod_tls.c>
  TLSEngine                  on
  TLSLog                     /var/log/proftpd/tls.log
  TLSProtocol                TLSv1.2
  TLSCipherSuite             AES128+EECDH:AES128+EDH
  TLSOptions                 NoCertRequest AllowClientRenegotiations
  TLSRSACertificateFile      /etc/proftpd/ssl/proftpd.cert.pem
  TLSRSACertificateKeyFile   /etc/proftpd/ssl/proftpd.key.pem
  TLSVerifyClient            off
  TLSRequired                on
  RequireValidShell          no
</IfModule>

MasqueradeAddress            ip-of-my-server
PassivePorts                 60000 65535
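For context on why `MasqueradeAddress` and `PassivePorts` go together: in passive mode the server answers the client's `PASV` command with a `227` reply that encodes the advertised IP and a data port as six comma-separated numbers, where the port is `p1*256 + p2`. A minimal sketch of that encoding (the IP and port here are made-up illustration values, not from the config above):

```python
def encode_pasv(ip: str, port: int) -> str:
    """Build the payload of a 227 passive-mode reply: the IP's four
    octets plus the data port split into its high and low bytes."""
    p1, p2 = divmod(port, 256)
    return "({},{},{})".format(ip.replace(".", ","), p1, p2)

def decode_pasv(payload: str) -> tuple:
    """Recover (ip, port) from a 227 reply payload."""
    parts = payload.strip("()").split(",")
    ip = ".".join(parts[:4])
    port = int(parts[4]) * 256 + int(parts[5])
    return ip, port

# A server with MasqueradeAddress 203.0.113.10 advertising data port 60000:
print(encode_pasv("203.0.113.10", 60000))  # → (203,0,113,10,234,96)
```

This is why `MasqueradeAddress` matters behind NAT: without it, the server would advertise its private EC2 address in the `227` reply, and the client's data connection to a port in the `PassivePorts` range would fail.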

Users can connect via port 21 if I open up TCP Ports 60000 through 65535 on the security group associated with this instance, but this doesn't feel secure to me (I'm not familiar with Passive FTP or opening up a range of ports like this). We whitelist the IPs of all our customers who will connect to this server on port 21.

Questions

  1. Is there a way to open these passive ports without opening them in the security group on AWS, perhaps using IP configuration on the EC2 server itself, as in this article (http://www.proftpd.org/docs/howto/NAT.html)? I'm not familiar with these types of configurations and am not sure what to do here.

  2. Since a user can't connect to the FTPS server without going through port 21 which is whitelisted to their IP, is it ok to have the TCP ports 60000-65535 open to all IPs within the AWS Security Group? What security concerns should I have here?

  3. Is there some other "best practices" way to configure this?

T. Brian Jones
  • "Is there some other "best practices" way to configure this?" Yes, in fact there is. Forget about FTP and use SFTP. – EEAA May 23 '16 at 04:28
  • I can't use SFTP because this system is running on an S3 backend, and there are security-configuration conflicts with SFTP and the S3 file system we're using. – T. Brian Jones May 23 '16 at 04:29
  • Well, that being the case, you unfortunately have much larger problems than getting an old, broken protocol working. – EEAA May 23 '16 at 04:30
  • Do you recommend strongly against using FTPS? Why? – T. Brian Jones May 23 '16 at 04:33
  • Well, you've stumbled right into one of the primary reasons. It's *horrible* from a networking point of view. Active/passive, control port/data port, etc. SFTP uses a *single* port, and you'll never run into networking issues with it. Honestly though, if you're whitelisting your customers' IPs on port 21 already, why do you think doing so for the other ports is problematic? Either you trust your customers or you don't. If you don't then they shouldn't be allowed to contact your server. – EEAA May 23 '16 at 04:35
  • 1
    This page outlines many of the other issues with FTP: http://mywiki.wooledge.org/FtpMustDie – EEAA May 23 '16 at 04:36
  • I also HATE FTP. Do you have an alternate solution you'd suggest here? We have hundreds of customers that seem to still think it's the default way to move files around. We have customers delivering files of 50-250GB regularly. I'm actually building this with an S3 backend, but many of these customers would probably be unable to deliver to S3 directly. – T. Brian Jones May 23 '16 at 04:44
  • Sounds like you're in a great position to advocate for your customers using reasonable technology. Honestly. I also have people asking me regularly for FTP this-or-that. I gently refuse, and then introduce them to scp/sftp, which are now widely-supported, and even easier to use than FTP if you use key authentication (which you should do). – EEAA May 23 '16 at 04:46
  • 1
    You should also know that while the s3-backed filesystems are tempting, they're overly brittle and can fail in some very non-obvious ways. If your customers aren't able or willing to deliver files directly into S3, then just have them SCP them to an EBS-backed volume, and then periodically run a script that moves files into S3 as they become available. In fact, by using incron, you can completely automate this process to kick off the instant the file is written. – EEAA May 23 '16 at 04:50
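The approach suggested in this comment (land files on an EBS-backed volume, then sweep them into S3) can be sketched as a simple poller. The directory names are made up, and the S3 upload itself is stubbed out; in a real sweep you would invoke the AWS CLI or an S3 SDK there, and use something like incron's `IN_CLOSE_WRITE` event to know a file is complete:

```python
import shutil
from pathlib import Path

def upload_to_s3(path: Path) -> None:
    # Placeholder: a real implementation would run `aws s3 cp`
    # or an SDK upload here. This stub only records the intent.
    print(f"would upload {path.name} to S3")

def sweep(incoming: Path, archived: Path) -> list:
    """Upload every file in the FTP landing directory, then move it
    out so it isn't uploaded twice. Returns the filenames processed."""
    archived.mkdir(parents=True, exist_ok=True)
    done = []
    for f in sorted(incoming.iterdir()):
        if f.is_file():
            upload_to_s3(f)
            shutil.move(str(f), str(archived / f.name))
            done.append(f.name)
    return done
```

Run from cron (or triggered by incron), this keeps large customer uploads off local disk for only as long as the transfer takes; the part this sketch glosses over is detecting that an FTP upload has actually finished before sweeping it.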
  • This is a very helpful discussion to have on this question. Perhaps you could help out over here too, and help banish FTP from the world: http://serverfault.com/questions/778406/root-user-permission-issues-with-sftp-and-s3fs-on-amazon-ec2 – T. Brian Jones May 23 '16 at 04:50
  • hmm ... perhaps automating uploads to S3 as soon as they are done is the best move at the moment. – T. Brian Jones May 23 '16 at 04:51
  • 1
    Yep, I think that would be the best, most reliable solution for you. – EEAA May 23 '16 at 04:52
  • What about download? We deliver files back to all these users, and I was hoping to deliver straight out of S3. It's frustrating and expensive to store these files on disk on an FTP server, and then we are often paying for disk space that's not actively being used. – T. Brian Jones May 23 '16 at 04:56
  • Have your application generate time-limited S3 signed URLs on-demand for files that the customer needs to download. Or, just give each customer IAM credentials with limited access to their bucket (or portion of the bucket), and then let them use a S3 client of their choosing (which is honestly how they should be getting the files to you in the first place). – EEAA May 23 '16 at 04:58
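S3 presigned URLs are normally generated with an SDK (boto3's `generate_presigned_url`, for example). The underlying idea this comment relies on, a URL that embeds an expiry time plus an HMAC over it so it can be neither forged nor extended, can be illustrated with the standard library alone. This is a conceptual sketch, not S3's actual SigV4 scheme, and the key and path are made up:

```python
import hashlib
import hmac
import time

SECRET = b"server-side-signing-key"  # made-up key; never sent to the client

def sign_url(path: str, ttl_seconds: int, now: int = None) -> str:
    """Append an expiry timestamp and an HMAC over path + expiry."""
    expires = (int(time.time()) if now is None else now) + ttl_seconds
    base = f"{path}?expires={expires}"
    sig = hmac.new(SECRET, base.encode(), hashlib.sha256).hexdigest()
    return f"{base}&sig={sig}"

def verify_url(url: str, now: int = None) -> bool:
    """Reject the URL if it has expired or the signature doesn't match."""
    base, _, sig = url.rpartition("&sig=")
    expires = int(base.rsplit("expires=", 1)[1])
    expected = hmac.new(SECRET, base.encode(), hashlib.sha256).hexdigest()
    current = int(time.time()) if now is None else now
    return hmac.compare_digest(sig, expected) and current < expires
```

Because the signature covers the expiry timestamp, a customer cannot edit the URL to extend their access window, which is exactly the property that makes time-limited download links safe to hand out without per-request authentication.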
  • So, "strongly" suggest they all get an S3 client. That would be nice. Pretty sure it won't fly, but I'll see how it goes with some of them. – T. Brian Jones May 23 '16 at 05:02
  • 1
    @T.BrianJones I'm not sure why you think that opening ports in your Security Groups is "insecure"; if there's nothing listening on those ports, there's nothing "insecure" about it. As for the rest, the [ProFTPD AWS howto](http://www.proftpd.org/docs/howto/AWS.html) might be of interest. – Castaglia May 23 '16 at 05:05

0 Answers