I have not used MQ File Transfer Edition, so I cannot comment on it. I have done a lot of file transfers, including EDI, FTP, AS2, FTPS, SFTP, rsync, SCP, Aspera, SVN, etc. Ultimately my answer would depend on your exact requirements. From what you describe, the most important thing you are after is reliability of file transfers.
Firstly, I would recommend some sort of standardization of platforms, maintenance, and management, which sounds like what you are looking at doing. Make every server, regardless of OS/config, use the same process to get files to and from nodes. Multiplying the troubleshooting across different configurations can make simple tasks very frustrating. When I think of reliability I do not think of Windows, but a lot of the time there simply isn't a way to avoid it.
While I do not know your exact requirements, I will offer some possible solutions. If you can clarify exactly what you need (WAN, LAN, file size, number of transfers daily, importance of transfers, etc.), I can give you a more accurate answer. The transfers I have set up in the past range from small <1 KB files to hundreds of GBs of data, from transfers where people don't get paid if they don't happen to data that may never even be used, and from open internet transfers to encrypted data sent across encrypted VPNs.
What you are really after is a fairly new term in the industry called Managed File Transfer: http://en.wikipedia.org/wiki/Managed_file_transfer
At the end of the day, get the Gartner Magic Quadrant report for this space, review it, and choose a vendor that meets your needs. You'll notice Aspera in the list, but also consider CFI. Since you are specifically looking for a commercial product, this is your best bet. Private message me or comment if you want any more input on my research in this sector.
Here is my personalized input.
Centralized FTP:
This is good because FTP is universal; it's used in so many places and has broad support across systems. A lot of popular FTP servers support many authentication methods as well as protocols. If you can centralize the server for all the nodes, troubleshooting becomes much easier: when something goes wrong you check the server log (or, ideally, have logs reported to you automatically via email), and if nothing shows up there it's pretty clear it's a client or network issue.

The problem is that FTP isn't perfect: it can easily fail, and it is particularly slow when dealing with large numbers of small files. Across operating systems you may also run into file-naming issues and more. If you are going to consider this solution, use clients and a server that support Simple File Verification: http://en.wikipedia.org/wiki/Simple_file_verification. The mechanism used to check the files is, as the name says, simple, and can be run across multiple platforms. There are a number of servers that can check files as they are uploaded and report automatically if a file fails the check, check full file sets rather than individual files, and report a percentage complete for the whole structure. glftpd is a popular one; keep in mind it is a bear to configure, but once you have it set up you may never need to touch it again: http://www.glftpd.com/. Gene6 is pretty popular as well.
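To give you an idea of what that verification looks like, here is a minimal sketch of generating and checking SFV-style CRC32 sums on either side of an FTP transfer. The directory paths and the .sfv file name are just placeholders, and in practice your FTP server or client may handle this for you.

    # Minimal sketch: generate and verify SFV-style CRC32 checksums around a transfer.
    # Paths and the .sfv file name are placeholders for illustration.
    import os
    import zlib

    def crc32_of(path, chunk_size=1024 * 1024):
        """Compute the CRC32 of a file, reading it in chunks."""
        crc = 0
        with open(path, "rb") as f:
            while True:
                chunk = f.read(chunk_size)
                if not chunk:
                    break
                crc = zlib.crc32(chunk, crc)
        return crc & 0xFFFFFFFF

    def write_sfv(directory, sfv_path):
        """Write an SFV file (one 'name CRC32' line per file) for a directory."""
        with open(sfv_path, "w") as sfv:
            for name in sorted(os.listdir(directory)):
                full = os.path.join(directory, name)
                if os.path.isfile(full):
                    sfv.write("%s %08X\n" % (name, crc32_of(full)))

    def verify_sfv(directory, sfv_path):
        """Return the files whose current CRC32 does not match the SFV entry."""
        failed = []
        with open(sfv_path) as sfv:
            for line in sfv:
                line = line.strip()
                if not line or line.startswith(";"):
                    continue
                name, expected = line.rsplit(" ", 1)
                full = os.path.join(directory, name)
                if not os.path.isfile(full) or crc32_of(full) != int(expected, 16):
                    failed.append(name)
        return failed

    # Sender: write_sfv("/data/outgoing", "/data/outgoing/fileset.sfv"), then upload.
    # Receiver: failed = verify_sfv("/data/incoming", "/data/incoming/fileset.sfv")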
Rsync the files
I've used rsync with scripts a fair amount and found it to be very reliable and pretty robust when you account for error checking; you'll find rsync popular among backup scripts for this reason. I do not know of many off-the-shelf programs built around rsync, so you are looking at coding up a solution yourself, and once again you will be without centralized logging and may run into a lot of the same issues. Honestly, though, I found rsync reliable enough, and with delta transfers on large file sets and built-in integrity checking it is a pretty quick and dirty way to get things done.
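As an example of the kind of script I mean, here is a rough sketch of an rsync wrapper with retries and basic logging. The hosts, paths, log file, and retry policy are illustrative only and would need tuning for your environment.

    # Rough sketch: rsync wrapper with retries and simple logging.
    # Source/destination, log file, and retry counts are placeholders.
    import logging
    import subprocess
    import time

    logging.basicConfig(filename="transfer.log", level=logging.INFO,
                        format="%(asctime)s %(levelname)s %(message)s")

    def rsync_with_retries(source, destination, retries=3, wait_seconds=60):
        """Run rsync with checksums and partial-transfer support, retrying on failure."""
        cmd = [
            "rsync",
            "-a",            # archive mode: recurse, preserve permissions/times
            "--partial",     # keep partially transferred files so retries can resume
            "--checksum",    # compare by checksum rather than size/mtime
            "--itemize-changes",
            source,
            destination,
        ]
        for attempt in range(1, retries + 1):
            result = subprocess.run(cmd, capture_output=True, text=True)
            if result.returncode == 0:
                logging.info("transfer ok (attempt %d): %s -> %s",
                             attempt, source, destination)
                return True
            logging.warning("transfer failed (attempt %d, rc=%d): %s",
                            attempt, result.returncode, result.stderr.strip())
            time.sleep(wait_seconds)
        logging.error("giving up on %s -> %s after %d attempts",
                      source, destination, retries)
        return False

    # Example: rsync_with_retries("/data/outgoing/", "backup@remote-node:/data/incoming/")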
Aspera
Aspera is great technology at its core for high-latency, high-bandwidth transfers. If you are not transferring across a WAN and not transferring large data sets, I would not recommend it. I run a large Aspera deployment and it is littered with transfer problems and software bugs. If you are looking for very basic functionality it is a pretty good solution, but when it comes to more advanced processing, be prepared to write your own scripts to move the data. The software seems to be focused on a small niche of businesses, and they seem to struggle with enterprise deployments. The centralized logging in one of their products would meet your centralized-logging needs, and their pre- and post-processing would work for you as well, but keep in mind you may end up spending a fair amount of money for a half-working solution. I mentioned CFI above; their product is much more enterprise-oriented, but they struggle to deliver a unified experience. Depending on your needs, don't take my word for it: get trials of their products for yourself.
Version Control System
I'll say up front that this doesn't seem to fit your requirements, but it is another option. If the files you are transferring are not transactional, consider storing them in a version control system. In this scenario, when a file needs to be transferred it is checked in to the version repository, and when it is needed it is synced at the remote end. If you need version history, files that may interact with each other, and a centralized server, this may be a good option; a rough sketch of the workflow is below.
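Here is a very rough sketch of that check-in/sync-out workflow using Subversion driven from a script. The working-copy paths and commit message are placeholders, and the same idea works with any VCS.

    # Very rough sketch: publish a file by committing it to a working copy,
    # and pull it on the receiving node with an update. Paths are placeholders.
    import shutil
    import subprocess

    def publish_file(src_path, working_copy, message="automated file drop"):
        """Copy a file into a checked-out working copy, add it if new, and commit."""
        dest = shutil.copy(src_path, working_copy)
        # 'svn add' complains if the file is already versioned; tolerate that case.
        subprocess.run(["svn", "add", "--force", dest], cwd=working_copy)
        subprocess.run(["svn", "commit", "-m", message, dest],
                       cwd=working_copy, check=True)

    def sync_remote(working_copy):
        """On the receiving node, pull the latest versions of all files."""
        subprocess.run(["svn", "update"], cwd=working_copy, check=True)

    # Sending side: publish_file("/data/outgoing/report.csv", "/srv/filedrop-wc")
    # Receiving side (cron or scheduled task): sync_remote("/srv/filedrop-wc")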
As a final side note, check out what Twitter uses to pass config files across their many, many nodes: http://engineering.twitter.com/2010/07/murder-fast-datacenter-code-deploys.html
Once again, I cannot stress enough that the correct answer depends on your exact requirements.
Hope this helps you.