I have a client uploading multiple TB of data to Glacier. A Snowball covered 65 TB, and they're going to upload the remaining ~25 TB over the network. Currently they're uploading directly to Glacier with FastGlacier, but that tool runs on their sole Windows machine (they're otherwise a full Mac shop) and constantly crashes from queuing so much data. The program also leaves a lot to be desired for searching/browsing the store: to view files in Glacier you have to download the vault inventory, with its 4-6 hour lead time.
For consistency, we'd like to upload to the same S3 bucket we used for the Snowball, with the same 0-day lifecycle transition to Glacier, but we don't want to incur massive S3 costs in the process. I know S3 storage is billed on average usage over the month, but I'm not sure how to estimate what this would cost.
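For what it's worth, here is a rough back-of-the-envelope estimator I put together for this kind of setup. The rates below are illustrative placeholders (check the current AWS pricing pages for your region); the object count and days-in-S3 values are assumptions you'd plug your own numbers into. The key idea is that with a 0-day transition the data only accrues S3 Standard charges for roughly a day before the lifecycle rule moves it, so the Standard component is total GB × (days / 30) GB-months.

```python
# Rough cost estimator for uploading to S3 with a 0-day lifecycle
# transition to Glacier. All per-unit prices are ILLUSTRATIVE
# placeholders, not current AWS rates -- check the pricing pages.

S3_STANDARD_PER_GB_MONTH = 0.023   # assumed S3 Standard storage rate
GLACIER_PER_GB_MONTH     = 0.004   # assumed Glacier storage rate
PUT_PER_1000             = 0.005   # assumed S3 PUT request rate
TRANSITION_PER_1000      = 0.05    # assumed lifecycle-transition rate

def estimate_monthly_cost(total_gb, object_count, days_in_s3=1.0):
    """Estimate first-month cost in dollars.

    S3 bills on average GB-months, so data that sits ~1 day before
    the lifecycle rule runs costs about total_gb * (days / 30)
    GB-months of Standard storage; the rest is Glacier storage plus
    per-object PUT and lifecycle-transition request charges.
    """
    s3_storage = total_gb * (days_in_s3 / 30.0) * S3_STANDARD_PER_GB_MONTH
    glacier_storage = total_gb * GLACIER_PER_GB_MONTH  # steady-state
    put_requests = object_count / 1000.0 * PUT_PER_1000
    transitions = object_count / 1000.0 * TRANSITION_PER_1000
    return s3_storage + glacier_storage + put_requests + transitions

# ~25 TB uploaded as a hypothetical 500k objects:
print(round(estimate_monthly_cost(25_000, 500_000), 2))
```

With these placeholder rates, the transient S3 Standard charge is small next to the ongoing Glacier storage, but note that the per-object transition requests can dominate if you upload millions of small files, which is an argument for archiving small files into tarballs before upload.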