
I'm running a Linux instance on EC2 (I have MongoDB and node.js installed) and I'm getting this error:

Cannot write: No space left on device

I think I've tracked it down to this filesystem; here is the df output:

Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/xvda1             1032088   1032088         0 100% /

The problem is, I don't know what this file is and I also don't know if this file is even the problem.

So my question is: How do I fix the "No space left on device" error?

Chris Biscardi

9 Answers


That "file", /, is your root directory. If it's the only filesystem you see in df, then it's everything. You have a 1GB filesystem and it's 100% full. You can start to figure out how the space is used like this:

sudo du -x / | sort -n | tail -40

You can then replace / with the paths that are taking up the most space. (They'll be at the end, thanks to the sort. The command may take a while.)
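As a scaled-down sketch of how that pipeline behaves, you can try it on a scratch tree first (the directory and file names below are invented for the demo; on a real box you would start at /):

```shell
# Build a small scratch tree so the command can be tried safely:
demo=$(mktemp -d)
mkdir -p "$demo/var/log" "$demo/home"
head -c 300K /dev/zero > "$demo/var/log/big.log"   # the space hog
head -c 10K  /dev/zero > "$demo/home/small.txt"

# Largest directories sort to the bottom, just as with
# `sudo du -x / | sort -n | tail -40` on the real root:
du -x "$demo" | sort -n | tail -3
```

The directory holding big.log lands near the bottom of the listing, which is how you spot where to drill down next.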

David Schwartz
    To get the output in a human-readable format, you can use `sudo du -x -h / | sort -h | tail -40` (from [this answer](http://serverfault.com/a/156648/115568)). – mkobit Jun 21 '16 at 14:09
    For those on micro AWS AMI instances, this can take a minute or so to run. Be patient! – Dr Rob Lang Nov 15 '18 at 11:31
  • What to do with this: `sort: write failed: /tmp/sortGmL8oF: No space left on device`? – dOM Dec 03 '18 at 09:28
    @dOM Ouch. Try to clean off some space on `/tmp`. Or, if you must, narrow it down step by step with commands like `du -xhs /*`. – David Schwartz Dec 03 '18 at 09:45
  • `du -x -h / | sort -h | tail -40 | sort -h -r` can be used to sort in descending order when using human-readable output. – Vigs May 11 '19 at 17:53

I know I am replying in this thread after nearly 5 years, but it might help someone. I had the same problem on an m4.xlarge instance; df -h told me that /dev/xvda1 was full, at 100%:

Filesystem      Size  Used Avail Use% Mounted on
udev            7.9G     0  7.9G   0% /dev
tmpfs           1.6G  177M  1.4G  12% /run
/dev/xvda1      7.7G  7.7G     0 100% /
tmpfs           7.9G     0  7.9G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           7.9G     0  7.9G   0% /sys/fs/cgroup
tmpfs           1.6G     0  1.6G   0% /run/user/1000

Here are the steps I took to solve it. This command:

sudo find / -type f -printf '%12s %p\n' 2>/dev/null|awk '{if($1>999999999)print $0;}'

helped me discover that Docker containers were taking all my space, so I pushed all my containers to my Docker registry and then ran sudo rm -rf /var/lib/docker/, which cleared up my space. :) Hope it helps someone! :)
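The find/awk one-liner above can be rehearsed safely by pointing it at a scratch directory and lowering the cutoff (here 1,000,000 bytes instead of ~1 GB; the file names are invented for the demo):

```shell
# Scratch tree with one "large" and one small file:
scratch=$(mktemp -d)
head -c 2M  /dev/zero > "$scratch/huge.bin"
head -c 10K /dev/zero > "$scratch/tiny.bin"

# Same pipeline as above, with the size threshold lowered for the demo;
# it prints "<size-in-bytes> <path>" for every file over the cutoff:
find "$scratch" -type f -printf '%12s %p\n' 2>/dev/null | awk '{if($1>1000000)print $0;}'
```

Only huge.bin is printed; tiny.bin falls under the cutoff. (Note that `-printf` is GNU find, which is what you'll have on a Linux EC2 instance.)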

Swat
  • Thanks. How did you push your container to your Docker registry? It is not good just to delete the xxx-json.log file, right? – Freelensia Jun 23 '21 at 13:45

If you are running an EBS boot instance (recommended) then you can increase the size of the root (/) volume using the procedure I describe in this article:

Resizing the Root Disk on a Running EBS Boot EC2 Instance
http://alestic.com/2010/02/ec2-resize-running-ebs-root

If you are running an instance-store instance (not recommended) then you cannot change the size of the root disk. You either have to delete files or move files to ephemeral storage (e.g., /mnt) or attach EBS volumes and move files there.

Here's an article I wrote that describes how to move a MySQL database from the root disk to an EBS volume:

Running MySQL on Amazon EC2 with EBS
http://aws.amazon.com/articles/1663

...and consider moving to EBS boot instances. There are many reasons why you'll thank yourself later.

Eric Hammond
  • I'm running on EBS; it's fairly cheap to expand the root disk, right? Fortunately I don't have to deal with MySQL; my projects are currently Mongo/Redis. Some great material here. +1 –  Nov 13 '11 at 06:31

Paulo was on the right track for me, but when I tried to run

sudo apt autoremove

it responded:

Reading package lists... Error!
E: Write error - write (28: No space left on device)
E: IO Error saving source cache
E: The package lists or status file could not be parsed or opened.

First, I had to run

sudo apt-get clean

That cleared just enough space for me to run 'sudo apt autoremove', and that took me from 100% full on /dev/xvda1 to 28%.

Mason

I have just solved that problem by running this command:

sudo apt autoremove

and a lot of old packages were removed, freeing up 5 gigabytes; for instance, there were many packages like "linux-aws-headers-4.4.0-1028".

Paulo

I've recently run into this issue on Amazon Linux. My crontab outbound email queue /var/spool/clientmqueue was 4.5GB.

I solved it by:

  1. Locating large files: sudo find / -type f -size +10M -exec ls -lh {} \;
  2. Deleting large files: /bin/rm -f <path-to-large-file>
  3. Restarting the server instance

Problem solved!
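Steps 1 and 2 above can be rehearsed on a scratch directory first, with the threshold scaled down from +10M to +10k (the path and file name here are invented for the demo):

```shell
# Scratch directory standing in for / so nothing real is deleted:
sandbox=$(mktemp -d)
head -c 50K /dev/zero > "$sandbox/clientmqueue.bak"

# 1. Locate large files (anything over 10 KiB in this demo):
find "$sandbox" -type f -size +10k -exec ls -lh {} \;

# 2. Delete them -- be sure you know what each file is before removing it:
find "$sandbox" -type f -size +10k -exec rm -f {} \;
```

Having find do the removal directly is a convenience; on a real system, review the list from step 1 before deleting anything.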


Hope this helps those who are using the CodeDeploy agent and having a similar issue.

I was using an Amazon Linux EC2 instance and my disk was 100% full. First, to free enough space to run the command, I deleted all the files in /var/log/journal/.

Then I ran sudo du -xhc / and found that, out of 8GB, the codedeploy-agent/deployment-root folder was using 5.1GB of space.

By default the codedeploy-agent stores the last 5 archived revisions, so I changed :max_revision from 5 to 2 in /etc/codedeploy-agent/conf/codedeployagent.yml
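A sketch of making that edit with sed, rehearsed on a local copy of the config (the key name :max_revision is as given above; check your agent's codedeployagent.yml for the exact spelling before editing the real file):

```shell
# Work on a throwaway copy; the real file lives at
# /etc/codedeploy-agent/conf/codedeployagent.yml and needs sudo to edit.
conf=$(mktemp)
printf ':max_revision: 5\n' > "$conf"

# Drop the number of kept revisions from 5 to 2:
sed -i 's/^:max_revision:.*/:max_revision: 2/' "$conf"
cat "$conf"
```

Restart the agent afterwards so the new limit takes effect.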

Vishal Patel

It could be coming from Jenkins or Docker. To solve that, you should clean the Jenkins logs and cap their size.

T.Todua

Use du -hs * | sort -rh | head -5 to check the top 5 space users, then rm -rf <name> to remove junk such as a large log file or archives within a logs folder.

Andrew Schulman