
I am in a situation where I must access data via FTP on a regular basis (for argument's sake, updated every 10 minutes). No other protocol is available for this operation. The data needs to be put onto a network share, but I do not want to connect directly from an important network share to another server via FTP. The data itself and the username/password are not sensitive from my perspective, as they are provided by the server organisation. I am more concerned about man-in-the-middle attacks, code or file injection, and network reconnaissance.

I have two proposed solutions. Any feedback or pointers to literature about these solutions would be appreciated, as well as a straight answer:

  • Local: Have a server partitioned from the network to handle the data transfer. Perform checks (automated, such as virus scans; manual, such as user file inspection; or both) before moving the data on to the network. Prevent the dedicated server from being able to access the rest of the network. (A sketch of the check step follows this list.)
  • Commercial: Use an MFT product. Many of these products claim to provide additional security on top of FTP; however, I am struggling to find third-party information/analysis of them.
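
For illustration, here is a minimal sketch of the check step I have in mind for the Local option, assuming clamscan is installed on the dedicated server; the directory paths are placeholders, not part of any real setup:

    import shutil
    import subprocess
    from pathlib import Path

    INCOMING = Path("/data/incoming")  # hypothetical: where FTP downloads land
    CLEAN = Path("/data/clean")        # hypothetical: staging area for the share

    def scan_and_promote() -> None:
        """Scan each downloaded file; move clean files toward the share."""
        CLEAN.mkdir(parents=True, exist_ok=True)
        for f in INCOMING.iterdir():
            if not f.is_file():
                continue
            # clamscan exits 0 for clean files, 1 when a virus is found
            result = subprocess.run(["clamscan", "--no-summary", str(f)])
            if result.returncode == 0:
                shutil.move(str(f), str(CLEAN / f.name))
            else:
                print(f"Rejected {f.name} (clamscan returned {result.returncode})")

    if __name__ == "__main__":
        scan_and_promote()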

Thanks

Stringers
  • Could you specify what you mean by "perform check"? File checksums? (Because checking file integrity would be the way to go after downloading over a connection that is assumed to be insecure.) – Arminius May 01 '17 at 06:32
  • Thanks @Arminius. Unfortunately the server does not provide checksums with files. My initial thoughts were virus scanning or user file inspection. I will update the description to better reflect my question. – Stringers May 01 '17 at 06:50
  • This might be a silly thought, but if you can't encrypt your communication, try encrypting your data. How about using private/public keying, transferring your data and decrypting it on the server? – FMaz May 01 '17 at 08:51
  • @FynnMazurkiewicz unfortunately I can't control the server at all. If I could I wouldn't be using FTP. – Stringers May 01 '17 at 14:35

1 Answer


Set up your own server (A) in an isolated secure enclave. Only allow this system to access the IP address of the remote FTP server (FTP) and nothing else. If the vendor running the FTP server can also restrict access to only your IP, that would be ideal.

Then have your internal system (B) connect to this server via a one-way firewall rule using something like Secure Shell.

(FTP) <-------(FW)<---(A)<---(FW)<---(B)

      (A) Polls the (FTP) server every 10 minutes via script
      (B) Polls the (A) server every 10 minutes via script with a 5 min offset.
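
For illustration, a minimal sketch of what the poll script on (A) might look like, assuming plain FTP and Python's ftplib; the host, credentials, and paths are placeholders:

    import ftplib
    from pathlib import Path

    FTP_HOST = "ftp.example.com"         # hypothetical vendor host
    FTP_USER = "user"                    # credentials supplied by the vendor
    FTP_PASS = "password"
    LOCAL_DIR = Path("/data/unscanned")  # files wait here until scanned

    def poll_once() -> None:
        """Fetch any files we have not already downloaded."""
        with ftplib.FTP(FTP_HOST) as ftp:
            ftp.login(FTP_USER, FTP_PASS)
            for name in ftp.nlst():
                target = LOCAL_DIR / name
                if target.exists():
                    continue  # already fetched on an earlier poll
                # NB: a robust version would download to a temp name and
                # rename, so a partial download is never mistaken for done
                with open(target, "wb") as fh:
                    ftp.retrbinary(f"RETR {name}", fh.write)

    if __name__ == "__main__":
        poll_once()  # run every 10 minutes from cron or a systemd timer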

Note: The (FW) may be the same firewall, just using a third network interface:

(FTP) <-----(FW)--(B)
             |
            (A)

The important thing is to ensure that (A) cannot access (B) or anything else on your internal networks; ideally it can reach nothing else in the world other than the FTP server. You may also want (B) to run tests on the files from (A) before importing them. There are also tricks using two directories on (A) for scanned and unscanned files, where files are scanned as soon as the FTP GET completes. Lots of options with this architecture.

Final note: some vendors use FTP because they don't know how to use SFTP from the system they are using (most notably this occurs with AS/400s). It is possible to have the AS/400 FTP the files to a loopback address on itself (127.0.0.1) and then let you access them via SFTP. This is MUCH safer than using FTP directly. Likewise, you can have the vendor do the reverse of what I wrote above and send these files to an SFTP server on their end instead of forcing you to use FTP at all.
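
For illustration, a rough sketch of the SFTP pull side (whether (B) pulling from (A), or you pulling the loopback-staged files in the AS/400 case), using the paramiko library; the host, key, and directory names are placeholders:

    import paramiko                       # third-party SSH/SFTP library
    from pathlib import Path

    HOST = "transfer-host.internal"       # hypothetical: server (A)
    USER = "puller"
    KEY_FILE = "/home/puller/.ssh/id_ed25519"
    REMOTE_DIR = "/data/scanned"          # only pull files that passed checks
    LOCAL_DIR = Path("/share/incoming")

    def pull_once() -> None:
        """Copy any new files from (A) down to the internal side."""
        client = paramiko.SSHClient()
        client.load_system_host_keys()    # unknown host keys are rejected
        client.connect(HOST, username=USER, key_filename=KEY_FILE)
        try:
            sftp = client.open_sftp()
            for name in sftp.listdir(REMOTE_DIR):
                local = LOCAL_DIR / name
                if not local.exists():
                    sftp.get(f"{REMOTE_DIR}/{name}", str(local))
        finally:
            client.close()

    if __name__ == "__main__":
        pull_once()  # run every 10 minutes, offset 5 minutes from (A)'s poll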

Alternatively, you can run the FTP traffic through a VPN which only allows your organization to initiate connections to the remote vendor, without allowing them to access your network.

Trey Blalock
  • But how does this solve the problem of a MITM between A and the FTP server? – Arminius May 01 '17 at 06:57
  • @Arminius with the nature of the information and the server being sent I don't think I can avoid MITM completely. Hence the question is aimed at mitigating these effects should it happen. – Stringers May 01 '17 at 07:00
  • @Trey Blalock Thanks for the very thorough response. With this solution, I understand that the point of failure (excluding user error) becomes the firewalls and other scanning performed before moving files from A to B? – Stringers May 01 '17 at 07:05
  • Yes. There are also a LOT of types of scans you can perform depending on the type of data you are receiving. Think Data Loss Prevention, Malware detection, A/V, other types of data leakage, etc. – Trey Blalock May 01 '17 at 07:11
  • Marked as correct as it is the only solution, but it is also a very thorough and good answer. Thanks. – Stringers May 04 '17 at 04:29