380

I'd like to graph the size (in bytes, and # of items) of an Amazon S3 bucket and am looking for an efficient way to get the data.

The s3cmd tools provide a way to get the total file size using s3cmd du s3://bucket_name, but I'm worried about its ability to scale since it looks like it fetches data about every file and calculates its own sum. Since Amazon charges users in GB-Months it seems odd that they don't expose this value directly.

Although Amazon's REST API returns the number of items in a bucket, s3cmd doesn't seem to expose it. I could do s3cmd ls -r s3://bucket_name | wc -l but that seems like a hack.

The Ruby AWS::S3 library looked promising, but only provides the # of bucket items, not the total bucket size.

Is anyone aware of any other command line tools or libraries (prefer Perl, PHP, Python, or Ruby) which provide ways of getting this data?

Garret Heaton
  • I am astonished that Amazon charge for the space, but don't provide the total size taken up by an S3 bucket simply through the S3 panel. – Luke Dec 10 '15 at 10:42
  • I wrote a tool for analysing bucket size: https://github.com/EverythingMe/ncdu-s3 – omribahumi Sep 20 '15 at 07:10
  • For me, most of the answers below took quite a long time to retrieve the bucket size; however, this Python script was way faster than most of them - http://www.slsmk.com/getting-the-size-of-an-s3-bucket-using-boto3-for-aws/ – Vaulstein Dec 06 '17 at 07:22

27 Answers

468

This can now be done trivially with just the official AWS command line client:

aws s3 ls --summarize --human-readable --recursive s3://bucket-name/

Official Documentation: AWS CLI Command Reference (version 2)

This also accepts path prefixes if you don't want to count the entire bucket:

aws s3 ls --summarize --human-readable --recursive s3://bucket-name/directory
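
If you only need the totals, the comments below suggest piping through tail so that only the summary lines are printed (a minimal sketch; the bucket name is a placeholder):

aws s3 ls --summarize --human-readable --recursive s3://bucket-name/ | tail -2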
philwills
  • This is the best and up-to-date answer – Tim Apr 05 '16 at 22:26
  • Agree, this is the best answer. – Luis Artola Jul 05 '16 at 17:28
  • This is very slow for buckets with many files as it basically lists all the objects in the bucket before showing the summary, and in that it is not significantly faster than @Christopher Hackett's answer - except this one is much more noisy. – Guss Jul 24 '16 at 23:19
  • Run on an EC2 instance in the same region as the bucket to improve the latency – juanmirocks Mar 24 '18 at 14:49
  • If you are only interested in the summary size, this is the fastest and up-to-date solution and you can simply pipe through tail to find that value. – CharlieH Jun 26 '18 at 15:14
  • This can easily take half a day on a large bucket. – user239558 Jul 04 '18 at 13:26
  • This will show the size of ALL the individual files in the directory tree. What if I just want the total size for the directory? – Chris F Jul 16 '18 at 19:05
  • Unknown options: --summarize, --human-readable, --recursive – Elia Weiss Dec 03 '18 at 11:18
  • Works great. You get a little summary at the bottom. Not sure about massive buckets, but mine finished the report in about 1 second. Total Objects: 3925 Total Size: 3.4 GiB – danielricecodes Dec 04 '18 at 17:25
  • Please mind that it is a long operation on huge buckets and consumes an amazing amount of CPU. – ori0n Jan 29 '20 at 06:38
  • For large buckets, check out Bahreini's answer. In the S3 console, go to your bucket > Management > Metrics. You can view total usage and filter by prefix, tag, etc. https://serverfault.com/a/981112/226067 – colllin Jan 30 '20 at 17:42
  • this doesn't show the exact file size in bytes – Nathan B Dec 19 '20 at 13:33
  • @NadavB The `--human-readable` option converts sizes from bytes to more "human-brain friendly" values, rounding in the process (for example 36627240 becomes 34.9 MiB). Simply omit that parameter to get the actual byte counts. – Synexis Jan 25 '21 at 00:19
  • This doesn't work, it lists every item. For some reason --summarize isn't working. And even if it did the point is to AVOID going through every object in the bucket to count its size. The console already allows for that. Like colllin said already, the correct answer is in CloudWatch, either through the S3-Management/Metrics screen or in CloudWatch directly. Counting each object is NOT efficient. – eco Apr 26 '21 at 23:06
  • I would add | tail -2 to avoid flooding. – aerin Aug 07 '21 at 15:40
  • This is the best answer and is more up to date with the CLI. The owner of this question should update it. – Carl Wainwright Oct 28 '21 at 16:44
210

The AWS CLI now supports the --query parameter, which takes a JMESPath expression.

This means you can sum the size values given by list-objects using sum(Contents[].Size) and count the objects with length(Contents[]).

This can be run using the official AWS CLI as below, and was introduced in Feb 2014:

 aws s3api list-objects --bucket BUCKETNAME --output json --query "[sum(Contents[].Size), length(Contents[])]"
Christopher Hackett
  • For large buckets (large #files), this is excruciatingly slow. The Python utility s4cmd "du" is lightning fast: `s4cmd du s3://bucket-name` – Brent Faust Mar 31 '15 at 22:08
  • That's strange. What is the overall profile of your bucket (shallow and fat / deep and thin)? It looks like ``s3cmd`` should have the same overheads as ``AWS CLI``. In the [code it shows](https://github.com/s3tools/s3cmd/blob/e7fad103ca2e485598f67a787c2789eb460dd9cb/s3cmd#L101-L115), ``s3cmd`` makes a request for each directory in a bucket. – Christopher Hackett Apr 01 '15 at 15:14
  • to get it in human readable format: `aws s3api --profile PROFILE_NAME list-objects --bucket BUCKET_NAME --output json --query "[sum(Contents[].Size), length(Contents[])]" | awk 'NR!=2 {print $0;next} NR==2 {print $0/1024/1024/1024" GB"}'` – Sandeep Aug 08 '15 at 23:22
  • @Rubistro for me the aws cli with `--query` is a lot faster than `s3cmd`, but still this solution is not yet ideal... – Sebastien Lorber Sep 17 '15 at 15:56
  • Now that AWS Cloudwatch offers a "BucketSizeBytes" per-bucket metric this is no longer the right solution. See Toukakoukan's answer below. – cce Sep 24 '15 at 20:42
  • This just gives me the error: "Illegal token value '(Contents[].Size), length(Contents[])]'" – Cerin Jun 07 '16 at 19:16
  • @cce depending on usecase the value of BucketSizeBytes may be too stale to be useful. This method produces a more up to date value :) – Christopher Hackett Jun 27 '16 at 08:56
  • @Cerin are your quote marks matching or are you opening with double and closing with single? – Christopher Hackett Jun 27 '16 at 08:58
  • Beware that this only sums the first 1000 objects if a bucket has more. I highly suggest just using `s3cmd du`, or if you need greater throughput `s4cmd du`. You can even run them without an argument and it will iterate through all buckets. It's sort of insane to me that Amazon doesn't provide a way to get at this information more efficiently, but being that's the case why reinvent the wheel? I think s4cmd is probably as good as you are going to get until Amazon blesses us with a proper solution. – dasil003 Nov 30 '16 at 19:48
  • `s4cmd du` is wonderful, thank you @Brent Faust! small note (for those concerned) that you need to add `-r` to get the sizes of sub-directories as well. – Greg Sadetsky Jun 30 '18 at 21:06
  • Illegal token value '(Contents[].Size), length(Contents[])]' – Elia Weiss Dec 03 '18 at 11:18
  • (on @cce's comment: Toukakoukan user name is now Sam Martin) – bryant1410 Nov 24 '21 at 23:34
177

AWS Console:

As of 28th of July 2015 you can get this information via CloudWatch. If you want a GUI, go to the CloudWatch console: (Choose Region > ) Metrics > S3

AWS CLI Command:

This is much quicker than some of the other commands posted here, as it does not query the size of each file individually to calculate the sum.

 aws cloudwatch get-metric-statistics --namespace AWS/S3 --start-time 2015-07-15T10:00:00 --end-time 2015-07-31T01:00:00 --period 86400 --statistics Average --region eu-west-1 --metric-name BucketSizeBytes --dimensions Name=BucketName,Value=toukakoukan.com Name=StorageType,Value=StandardStorage

Important: You must specify both StorageType and BucketName in the dimensions argument, otherwise you will get no results. All you need to change is the --start-time, --end-time, and Value=toukakoukan.com.


Here's a bash script you can use to avoid having to specify --start-time and --end-time manually.

#!/bin/bash
# Usage: $0 <bucket-name> <region>
bucket=$1
region=$2
# CloudWatch expects ISO 8601 timestamps (GNU date shown; on macOS use `date -u -v-1d ...`)
start=$(date -u -d '-1 day' '+%Y-%m-%dT%H:%M:%S')
end=$(date -u '+%Y-%m-%dT%H:%M:%S')
aws cloudwatch get-metric-statistics --namespace AWS/S3 --start-time "$start" --end-time "$end" --period 86400 --statistics Average --region "$region" --metric-name BucketSizeBytes --dimensions Name=BucketName,Value="$bucket" Name=StorageType,Value=StandardStorage
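
The question also asks for the number of items; CloudWatch exposes that as the NumberOfObjects metric (note that the StorageType dimension becomes AllStorageTypes). A sketch of the equivalent call, reusing the example bucket and region from above:

aws cloudwatch get-metric-statistics --namespace AWS/S3 --start-time "$(date -u -d '-1 day' '+%Y-%m-%dT%H:%M:%S')" --end-time "$(date -u '+%Y-%m-%dT%H:%M:%S')" --period 86400 --statistics Average --region eu-west-1 --metric-name NumberOfObjects --dimensions Name=BucketName,Value=toukakoukan.com Name=StorageType,Value=AllStorageTypes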
Sam Martin
  • Or in [the CloudWatch console](https://console.aws.amazon.com/cloudwatch/): (Choose Region > ) Metrics > S3 – Halil Özgür Jan 07 '16 at 17:57
  • This is by far the easiest and fastest solution. Unfortunately the answer is still only in fourth place. – luk2302 Oct 13 '16 at 10:12
  • This worked for my bucket with 10 million+ objects. But the bash script didn't return anything; I had to go to the GUI. – Petah Mar 06 '17 at 19:36
  • It should also be noted that you'll have to change the region as well – chizou Feb 05 '18 at 21:21
  • may 2018: the script errors with `Invalid value ('1525354418') for param timestamp:StartTime of type timestamp` – anneb May 03 '18 at 13:36
  • How can I add the filters created in the Metrics tab to the above AWS CLI command? https://docs.aws.amazon.com/AmazonS3/latest/user-guide/configure-metrics-filter.html – Ramratan Gupta Aug 21 '18 at 13:56
  • Note also the [restrictions on period](https://docs.aws.amazon.com/cli/latest/reference/cloudwatch/get-metric-statistics.html): - Start time between 3 hours and 15 days ago - Use a multiple of 60 seconds (1 minute). - Start time between 15 and 63 days ago - Use a multiple of 300 seconds (5 minutes). - Start time greater than 63 days ago - Use a multiple of 3600 seconds (1 hour). – Efren Oct 08 '18 at 02:48
  • @RamratanGupta, use the --query option (see [filters](https://docs.aws.amazon.com/cli/latest/userguide/controlling-output.html#controlling-output-filter)), e.g.: `--query "Datapoints[*].[Average]"` – Efren Oct 08 '18 at 02:58
  • Also be wary of `Name=StorageType,Value=StandardStorage`: if your bucket has some custom lifecycle (utilizing the Infrequent Access storage class, for example), replace `StandardStorage` with `StandardIAStorage` or some other value (see [docs](https://docs.aws.amazon.com/AmazonS3/latest/dev/cloudwatch-monitoring.html#s3-cloudwatch-metrics) for details). – Klas Š. Jul 17 '19 at 14:56
  • You need to grant the IAM user the `CloudWatchFullAccess` permission to see this. (Can anyone recommend a more restrictive permission group that permits `cloudwatch:GetMetricStatistics`?) – Chris Apr 07 '21 at 20:09
  • This answer needs an update. You can access the same cloudwatch information directly from the Metrics Tab in the S3 console now, graph and everything. – eco May 05 '21 at 17:52
  • It should be noted that the number displayed in the Metrics tab is either an estimate, or buggy because you won't get the exact count. I verified multiple times by downloading everything from the S3 bucket and comparing the number of local objects to remote, and it was always off by 1 or 2. Even if you wait for some time, and the bucket is not used at all, it doesn't seem to get updated properly. – laurent Nov 19 '21 at 20:08
  • I'm wondering how quickly this updates. My use-case is to count the number of objects uploaded to an s3 bucket in a pipeline. I suspect CW won't be fast enough. – Cognitiaclaeves Dec 31 '21 at 13:07
110

s3cmd can do this:

s3cmd du s3://bucket-name

Stefan Ticu
  • Thanks. Here's some timing. On a bucket that holds an s3ql deduplicated filesystem with about a million files using about 33 GB of unduplicated data, and about 93000 S3 objects, s3cmd du took about 4 minutes to compute the answer. I'm curious to know how that compares with other approaches like the PHP one described elsewhere here. – nealmcb Jul 10 '12 at 23:46
  • It is slow because the [S3 ListObjects API call](http://docs.aws.amazon.com/AmazonS3/latest/dev/ListingKeysUsingAPIs.html) returns objects in pages of 1000 objects. As I/O is by far the limiting factor I think any solution will be relatively slow over 93000 objects. – David Snabel-Caunt Apr 20 '13 at 13:54
  • [s4cmd](https://github.com/bloomreach/s4cmd) can also do the same thing, with the added benefit of multi-threading the requests to S3's API to compute the result faster. The tool hasn't been updated recently, but the Internet passer-by may find it useful. – Nick Chammas Jul 07 '14 at 17:34
  • s4cmd just returns 0 for me, and returns `BotoClientError: Bucket names cannot contain upper-case characters when using either the sub-domain or virtual hosting calling format.` for buckets with uppercase characters. – Lakitu Oct 05 '15 at 20:52
26

If you download a usage report, you can graph the daily values for the TimedStorage-ByteHrs field.

If you want that number in GiB, just divide by 1024 * 1024 * 1024 * 24 (that's GiB-hours for a 24-hour cycle). If you want the number in bytes, just divide by 24 and graph away.
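
As a quick sanity check of that arithmetic (the byte-hours value below is made up for illustration):

echo '257698037760 / (1024^3 * 24)' | bc    # 257698037760 byte-hours over one day averages out to 10 GiB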

Christopher Schultz
26

If you want to get the size from AWS Console:

  1. Go to S3 and select the bucket
  2. Click on "Metrics" tab

[Screenshot: the bucket's Metrics tab in the S3 console]

By default you should see the Total bucket size metric at the top.

Hooman Bahreini
22

Using the official AWS s3 command line tools:

aws s3 ls s3://bucket/folder --recursive | awk 'BEGIN {total=0}{total+=$3}END{print total/1024/1024" MB"}'

A better command is to just add the following 3 parameters after aws s3 ls: --summarize --human-readable --recursive. --summarize is not required, but it gives a nice touch with the total size.

aws s3 ls s3://bucket/folder --summarize --human-readable --recursive
dyltini
15

s4cmd is the fastest way I've found (a command-line utility written in Python):

pip install s4cmd

Now to calculate the entire bucket size using multiple threads:

s4cmd du -r s3://bucket-name
Brent Faust
    No, `s4cmd du s3://123123drink` will not simply return the size of the bucket. To get the size of the bucket you do add the recursive `-r`, like this: s4cmd du -r s3://123123drink – My Name Nov 09 '15 at 16:12
  • Yes, good point @BukLau (added `-r` to example above to avoid confusion when people are using simulated folders on S3). – Brent Faust Apr 09 '18 at 22:02
  • What if we want versions also to be considered in the calculation for versioned buckets? – DJ_Stuffy_K Dec 19 '20 at 15:42
9

You can use the s3cmd utility, e.g.:

s3cmd du -H s3://Mybucket
97G      s3://Mybucket/
user319660
6

I used the S3 REST/Curl API listed earlier in this thread and did this:

<?php
if (!class_exists('S3')) require_once 'S3.php';

// Instantiate the class
$s3 = new S3('accessKeyId', 'secretAccessKey');
S3::$useSSL = false;

// List your buckets:
echo "S3::listBuckets(): ";
echo '<pre>' . print_r($s3->listBuckets(), 1). '</pre>';

$totalSize = 0;
$objects = $s3->getBucket('name-of-your-bucket');
foreach ($objects as $name => $val) {
    // If you want to get the size of a particular directory, you can do
    // only that.
    // if (strpos($name, 'directory/sub-directory') !== false)
    $totalSize += $val['size'];
}

echo ($totalSize / 1024 / 1024 / 1024) . ' GB';
?>
Vic
6

So, trolling around through the API and playing with some sample queries: S3 will produce the entire contents of a bucket in one request, and it doesn't need to descend into directories. The results then just require summing over the various XML elements, rather than making repeated calls. I don't have a sample bucket that has thousands of items so I don't know how well it will scale, but it seems reasonably simple.

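As the comments below note, a single listing response is capped at 1,000 keys, so a client still has to follow the pagination markers. A minimal sketch of the same flat-listing-and-sum idea with today's CLI (which follows the continuation tokens for you), essentially what the list-objects answer above does:

aws s3api list-objects-v2 --bucket bucket-name --output json --query "[sum(Contents[].Size), length(Contents[])]"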
Jim Zajkowski
  • This does seem to be the best option. Will update this post in the future if it scales poorly and I need to do something else. The library that ended up providing easy access to the raw API results was this PHP one: http://undesigned.org.za/2007/10/22/amazon-s3-php-class – Garret Heaton Nov 16 '09 at 15:20
  • Isn't that only limited to the first 1000 items? – Charlie Schliesser Apr 13 '15 at 18:30
4

... A bit late, but the best way I found is by using the reports in the AWS portal. I made a PHP class for downloading and parsing the reports. With it you can get the total number of objects for each bucket, the total size in GB or byte-hours, and more.

Check it out and let me know if it was helpful

AmazonTools

  • This is an interesting solution, although a little hackish. Worried about it breaking if/when Amazon changes their site, but I may have to try this out once I have enough objects that the other way becomes too slow. Another benefit of this approach is that you don't get charged for any API calls. – Garret Heaton Dec 21 '09 at 16:16
  • . . . it's an assumption, but if Amazon do change the look of their site, I doubt they would change the back end much, meaning the current GET and POST queries should work. I will maintain the class in the event it does break anyway, as I use it often. –  Dec 22 '09 at 00:26
  • @Corey it's not working http://undesigned.org.za/2007/10/22/amazon-s3-php-class – Maveňツ Jan 21 '22 at 14:18
3

I recommend using the S3 Usage Report for large buckets (see my How To on how to get it). Basically, you need to download the Usage Report for the S3 service for the last day, with Timed Storage - Byte Hrs, and parse it to get disk usage.

cat report.csv | awk -F, '{printf "%.2f GB %s %s \n", $7/(1024**3 )/24, $4, $2}' | sort -n
3

The AWS documentation tells you how to do it:

aws s3 ls s3://bucketname --recursive --human-readable --summarize

This is the output you get:

2016-05-17 00:28:14    0 Bytes folder/
2016-05-17 00:30:57    4.7 KiB folder/file.jpg
2016-05-17 00:31:00  108.9 KiB folder/file.png
2016-05-17 00:31:03   43.2 KiB folder/file.jpg
2016-05-17 00:31:08  158.6 KiB folder/file.jpg
2016-05-17 00:31:12   70.6 KiB folder/file.png
2016-05-17 00:43:50   64.1 KiB folder/folder/folder/folder/file.jpg

Total Objects: 7

   Total Size: 450.1 KiB
2

For a really low-tech approach: use an S3 client that can calculate the size for you. I'm using Panic's Transmit: click on a bucket, do "Get Info", and click the "Calculate" button. I'm not sure how fast or accurate it is relative to other methods, but it seems to give back the size I expected.

zmippie
2

Since there are so many answers, I figured I'd pitch in with my own. I wrote my implementation in C# using LINQPad. Copy, paste, and enter in the access key, secret key, region endpoint, and bucket name you want to query. Also, make sure to add the AWSSDK nuget package.

Testing against one of my buckets, it gave me a count of 128075 and a size of 70.6GB. I know that it is 99.9999% accurate, so I'm good with the result.

void Main() {
    var s3Client = new AmazonS3Client("accessKey", "secretKey", RegionEndpoint.???);
    var stop = false;
    var objectsCount = 0;
    var objectsSize = 0L;
    var nextMarker = string.Empty;

    // Page through the bucket 1,000 keys at a time, accumulating count and size.
    while (!stop) {
        var response = s3Client.ListObjects(new ListObjectsRequest {
            BucketName = "",
            Marker = nextMarker
        });

        objectsCount += response.S3Objects.Count;
        objectsSize += response.S3Objects.Sum(o => o.Size);
        nextMarker = response.NextMarker;
        stop = response.S3Objects.Count < 1000; // a short page means we've reached the end
    }

    new {
        Count = objectsCount,
        Size = objectsSize.BytesToString()
    }.Dump();
}

static class Int64Extensions {
    public static string BytesToString(
        this long byteCount) {
        if (byteCount == 0) {
            return "0B";
        }

        var suffix = new string[] { "B", "KB", "MB", "GB", "TB", "PB", "EB" };
        var longBytes = Math.Abs(byteCount);
        var place = Convert.ToInt32(Math.Floor(Math.Log(longBytes, 1024)));
        var number = Math.Round(longBytes / Math.Pow(1024, place), 1);

        return string.Format("{0}{1}", Math.Sign(byteCount) * number, suffix[place]);
    }
}
Gup3rSuR4c
1

I know this is an older question but here is a PowerShell example:

Get-S3Object -BucketName <bucketname> | select key, size | foreach {$A += $_.size}

$A contains the size of the bucket, and there is a keyname parameter if you just want the size of a specific folder in a bucket.

DCJeff
  • First run the Get-object..line and then run $A (for those not familiar with PowerShell) – Faiz Sep 30 '16 at 10:34
1

To check the size of all your buckets, try this bash script:

s3list=`aws s3 ls | awk  '{print $3}'`
for s3dir in $s3list
do
    echo $s3dir
    aws s3 ls "s3://$s3dir"  --recursive --human-readable --summarize | grep "Total Size"
done
1

You can use s3cmd:

s3cmd du s3://Mybucket -H

or

s3cmd du s3://Mybucket --human-readable

It gives the total objects and the size of the bucket in a very readable form.

bpathak
  • Does `du` traverse and list all the objects, or does it retrieve the metadata? Would really like an API version of the reports, or of what is displayed in the AWS console... – user67327 Jul 02 '19 at 22:52
0

I wrote a Bash script, s3-du.sh, that lists the files in a bucket with s3ls and prints the file count and sizes, like:

s3-du.sh testbucket.jonzobrist.com
149 files in bucket testbucket.jonzobrist.com
11760850920 B
11485205 KB
11216 MB
10 GB

Full script:

#!/bin/bash

if [ "${1}" ]
then
    NUM=0
    COUNT=0
    for N in `s3ls ${1} | awk '{print $11}' | grep "[0-9]"`
    do
        NUM=`expr $NUM + $N`
        ((COUNT++))
    done
    KB=`expr ${NUM} / 1024`
    MB=`expr ${NUM} / 1048576`
    GB=`expr ${NUM} / 1073741824`
    echo "${COUNT} files in bucket ${1}"
    echo "${NUM} B"
    echo "${KB} KB"
    echo "${MB} MB"
    echo "${GB} GB"
else
    echo "Usage: ${0} s3-bucket"
    exit 1
fi

It does include subdirectory sizes, as Amazon returns the directory name and the size of all of its contents.

Deer Hunter
0

Also Hanzo S3 Tools does this. Once installed, you can do:

s3ls -s -H bucketname

But I believe this is also summed on the client side and not retrieved through the AWS API.

Ville
0

There is a metadata search tool for AWS S3 at https://s3search.p3-labs.com/. This tool gives statistics about objects in a bucket, with search on metadata.

pyth
0

With the CloudBerry program it is also possible to list the size of the bucket, the number of folders, and the total files, by clicking "Properties" right on top of the bucket.

KiKo
0

If you don't want to use the command-line, on Windows and OSX, there's a general purpose remote file management app called Cyberduck. Log into S3 with your access/secret key pair, right-click on the directory, click Calculate.

jpillora
0

CloudWatch now has a default S3 service dashboard which lists this in a graph called "Bucket Size Bytes Average" (available to anyone already logged into the AWS Console).

flickerfly
-1

The following uses the AWS SDK for PHP to get the total size of the bucket.

// make sure that you are using the correct region (where the bucket is) to get a new Amazon S3 client
$client = \Aws\S3\S3Client::factory(array('region' => $region));

// check if the bucket exists
if (!$client->doesBucketExist($bucket, $accept403 = true)) {
    return false;
}

// get the bucket objects (the SDK operation is listObjects; note it returns at most 1000 keys per call)
$objects = $client->listObjects(array('Bucket' => $bucket));

$total_size_bytes = 0;
$contents = $objects['Contents'];

// iterate through all contents to get the total size
foreach ($contents as $key => $value) {
    $total_size_bytes += $value['Size'];
}
$total_size_gb = $total_size_bytes / 1024 / 1024 / 1024;
-1

This works for me:

aws s3 ls s3://bucket/folder/ --recursive | awk '{sz+=$3} END {print sz/1024/1024 "MB"}'
GrantO