I don't believe the S3 API lets you submit multiple files in a single API call, but you could look into concurrency options for the client you are using.
A good starting point would be the official AWS Command Line Interface (CLI), which exposes a set of S3 configuration values that let you adjust concurrency for the aws s3 transfer commands, including cp, sync, mv, and rm:
max_concurrent_requests - The maximum number of concurrent requests (default: 10)
max_queue_size - The maximum number of tasks in the task queue (default: 1000)
multipart_threshold - The size threshold the CLI uses for multipart transfers of individual files (default: 8MB)
multipart_chunksize - The chunk size the CLI uses for multipart transfers of individual files (default: 8MB)
max_bandwidth - The maximum bandwidth that will be consumed for uploading and downloading data to and from Amazon S3 (default: None)
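These values live under the s3 section of your CLI profile and can be set with aws configure set. As a rough sketch (the numbers below are only illustrative starting points, not tuned recommendations for your workload):

    # Illustrative values for the default profile - tune to your own
    # bandwidth and CPU; these are written to ~/.aws/config
    aws configure set default.s3.max_concurrent_requests 20
    aws configure set default.s3.max_queue_size 10000
    aws configure set default.s3.multipart_threshold 64MB
    aws configure set default.s3.multipart_chunksize 16MB

Raising max_concurrent_requests is usually the first knob to try when you have many small files, since each file below the multipart threshold is transferred as a single request.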
The AWS CLI S3 configuration guide also includes recommendations on adjusting these values for different scenarios.
For faster transfers you should also create your S3 bucket in the region with the lowest latency to your DigitalOcean instance, or consider enabling S3 Transfer Acceleration. Transfer Acceleration involves additional CLI configuration and additional cost.
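If you do enable Transfer Acceleration, one common pattern (mybucket is a placeholder here) is to enable it on the bucket once and then point the CLI at the accelerate endpoint:

    # One-time: enable Transfer Acceleration on the bucket
    aws s3api put-bucket-accelerate-configuration \
        --bucket mybucket \
        --accelerate-configuration Status=Enabled

    # Tell the default CLI profile to use the accelerate endpoint
    aws configure set default.s3.use_accelerate_endpoint true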
Once your configuration options are set, you can use a command like aws s3 sync /path/to/files s3://mybucket to recursively sync the image directory from your DigitalOcean server to an S3 bucket. The sync process only copies new or updated files, so you can run the same command again if a sync is interrupted or the source directory has been updated.
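As a quick sketch (the path, bucket name, and extensions are placeholders), you can preview what sync would do and limit it to image files:

    # Show what would be transferred without actually copying anything
    aws s3 sync /path/to/files s3://mybucket --dryrun

    # Only sync image files, skipping everything else in the directory
    aws s3 sync /path/to/files s3://mybucket \
        --exclude "*" --include "*.jpg" --include "*.png"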