
I have a free-tier Linux instance on AWS running a Bitnami Ubuntu Parse dashboard that is the backend for an iOS app and website. When I log on to my instance and run the df command, it says I am using 100% of the 9.76 GB available:

bitnami@ip-172-31-22-220:~/apps/parse/htdocs/logs$ df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      9.7G  9.7G     0 100% /

I have found other posts about searching for which files are filling up a server, but I was hoping someone could offer insights into the specific setup of AWS/Ubuntu/Bitnami parse-platform. This is my first question on this board, so before voting it down please let me know if there are ways to improve it. For example, when running du -ah I noticed a series of files that looked like AWS-specific log files, such as /usr/src/linux-aws-headers-4.4.0-1109. Does AWS write out files that I need to clear out or back up to S3 to keep my server from filling up?

My website is down because of this, and I am not sure what commands will help me find what is taking up all the space. I did some research on the du command but wasn't quite sure how to use it to track down where the space has accumulated. I stumbled on one log directory that I try to clean out, but it only accounted for about 1.6 GB. Is there a command I can run on Unix that will list out which directories are larger than 1 GB or so? I suspect there is another place, like the folder I found with du -h /home/bitnami/apps/parse/htdocs/logs, where log or error files need to be cleaned out, and I just don't know where the Ubuntu Bitnami Parse instance stores those things. I am new to AWS, so I am pretty sure I need to upgrade my account or something so I have more than 9.76 GB of file space before I release my app for general use. If you happen to have any advice on how an AWS instance that is the backend for an iOS app should be set up, I would appreciate insights on that topic as well.

  • This has been asked and answered many times. Your use case is no different from anyone else's. – user9517 Jun 24 '20 at 07:09
  • https://serverfault.com/questions/422528/find-files-folders-that-are-filling-up-disk-space – user9517 Jun 24 '20 at 07:09
  • https://serverfault.com/questions/62119/how-do-i-find-out-what-is-using-up-all-the-space-on-my-partition – user9517 Jun 24 '20 at 07:10
  • My question was only partially about finding files (I already found 2 spots), and I mentioned that it was unique because I am not sure about the interplay of the AWS instance and the Parse logs I found. If all I wanted to know was how to find big files on Linux, it would be fair to close, but I was hoping someone could tell me what those AWS folders of 100 MB apiece are, whether they are safe to remove, and maybe, if anyone has Parse experience, how to limit the size of the log files at the other directory I posted. – Daniel Patriarca Jun 24 '20 at 17:12
  • Your question is sufficiently unclear that both of the answers it garnered are already covered in the duplicates. – user9517 Jun 24 '20 at 21:09

2 Answers


Log onto the machine as root and issue:

cd /
du -k | sort -n | tail -222

The directory / is the root from which all other directories descend. The biggest directories will be at the bottom of the listing. Some of those big directories may be system-owned and are typically not a concern; hopefully you simply have a logging directory that has become full. Try to bake clean-up processes into your setup that delete or trim files on an ongoing basis, so your machine can reach a sustainable steady state.

Here is a breakdown of what the above set of commands is doing:

du -k      #  print the size in KB and name of each directory
sort -n    #  do a numeric sort of input
tail -222  #  only keep bottom 222 lines of input

Those commands are chained together using the pipe character | (the key just above the right-hand Enter key on US keyboards): the output of du becomes the input of sort, and likewise the output of sort becomes the input of tail.
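Since the question asks for directories above a given size, a variant of the same pipeline can filter on a threshold. This is just a sketch; the 1048576 KB (1 GB) cutoff and the tail count are example values:

```shell
# List directories on / larger than ~1 GB (1048576 KB), largest last.
# -x keeps du on this filesystem, so it skips /proc, /sys and other mounts;
# awk keeps only rows whose first column (size in KB) exceeds the cutoff.
du -kx / 2>/dev/null | awk '$1 > 1048576' | sort -n | tail -20
```

Because the filesystem total for / is itself a directory line, the root entry will normally appear last in the output.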

As an aside, it's always good to script everything, including the creation, installation, and destruction of your server processing. That way you can simply destroy the AWS instance and run the script to spin up a fresh one, which makes troubleshooting easier, to say the least.
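As a sketch of that idea (every value here — the AMI ID, key name, and instance type — is a placeholder, not something taken from the question), a minimal create/destroy wrapper around the AWS CLI might look like:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of scripted create/destroy for an EC2 instance.
# All values below are placeholders -- substitute your own.
set -euo pipefail

AMI_ID="ami-00000000"        # placeholder AMI ID
INSTANCE_TYPE="t2.micro"
KEY_NAME="my-key"            # placeholder key pair name

create() {
    # Launch a fresh instance from the AMI.
    aws ec2 run-instances \
        --image-id "$AMI_ID" \
        --instance-type "$INSTANCE_TYPE" \
        --key-name "$KEY_NAME"
}

destroy() {
    # Terminate the instance whose ID is passed as the first argument.
    aws ec2 terminate-instances --instance-ids "$1"
}

"$@"    # e.g.  ./server.sh create   or   ./server.sh destroy i-0abc123
```

The `"$@"` dispatch at the end simply runs whichever function name you pass as the first argument, so the same file handles both spin-up and tear-down.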

Scott Stensland

ncdu is a very useful text-mode (ncurses) utility that helps you find where your disk space is being used.

sudo apt-get install ncdu
sudo ncdu /

You can also make the disk bigger.
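Growing the disk is a two-step process on AWS: first enlarge the EBS volume (in the console or via `aws ec2 modify-volume`), then grow the partition and filesystem from inside the instance. A sketch for the layout shown in the question — this assumes /dev/xvda1 is partition 1 of /dev/xvda and the filesystem is ext4; check with lsblk and df -T first:

```shell
# After enlarging the EBS volume, grow partition 1 of /dev/xvda
# to fill the new volume size ...
sudo growpart /dev/xvda 1
# ... then grow the ext4 filesystem to fill the enlarged partition.
sudo resize2fs /dev/xvda1
# Verify that / now shows the new size.
df -h /
```

No reboot is needed for this; the resize happens online.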

Tim