Bash scripting: test for empty directory

106

24

I want to test if a directory doesn't contain any files. If so, I will skip some processing.

I tried the following:

if [ ./* == "./*" ]; then
    echo "No new file"
    exit 1
fi

That gives the following error:

line 1: [: too many arguments

Is there a solution/alternative?

Anthony Kong

Posted 2011-10-31T03:47:31.703

Reputation: 3 117

related: https://stackoverflow.com/q/91368/52074

– Trevor Boyd Smith – 2019-02-12T19:36:13.550

Answers

135

if [ -z "$(ls -A /path/to/dir)" ]; then
   echo "Empty"
else
   echo "Not Empty"
fi

Also, it would be cool to check whether the directory exists first.
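
For example, a minimal sketch combining both checks (the path is only a placeholder):

dir="/path/to/dir"
if [ -d "$dir" ] && [ -z "$(ls -A "$dir")" ]; then
   echo "Empty"
else
   echo "Not empty, or not a directory"
fi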

Andrey Atapin

Posted 2011-10-31T03:47:31.703

Reputation: 1 682

3I had trouble getting this method to work when the /path/to/dir contained spaces and needed to be quoted. I used [ $(ls -A "$path" | wc -l) -ne 0 ], inspired by @ztank1013's answer. – pix – 2015-05-19T03:23:07.497

1For those who are looking for a one-liner: [ "$(ls -A ./path/to/dir)" ] && echo 'NOT EMPTY' || echo 'EMPTY' – tdhulster – 2016-06-03T16:29:38.893

8Oh no! There is a very important problem with this code. It should be if [ -n "$(ls -A /path/to/dir)" ]; then ... Please update the answer before someone pastes this code into their server somewhere and a hacker figures out how to exploit it. If /path/to/dir isn't empty, then the filenames there get passed as arguments to /bin/test which is clearly not intended. Just add the -n argument, problem solved. Thanks! – Edward Ned Harvey – 2016-12-05T21:20:55.397

1Just in case someone is looking for a "correct/stable" one-liner: [ -n "$(ls -A /path/to/dir)" ] && { echo "Not Empty" ; YourCommandA ; true ; } || { echo "Empty" ; YourCommandB ; }. – Victor Yarema – 2017-10-12T23:24:22.030

2This checks whether the directory exists, and deals with spaces in the path (notice the nested quotes in the subshell): if [ -d "/path/to/dir" ] && [ -n "$(ls -A "/path/to/dir")" ]; then echo "Non-empty folder"; else echo "Empty or not a folder"; fi – Jonathan H – 2019-01-10T00:40:40.907

Honestly, at this point, if it's my own system I just write a quick 10 line Python script, symlink it to /usr/local/bin or something, and then call it like if isNotEmpty "$directory"; then ... fi – bd1251252 – 2019-08-27T19:02:02.797

12Don't use && and || simultaneously! If echo "Not Empty" fails, echo "Empty" will run! Try echo "test" && false || echo "fail"!

Yes, I know echo will not fail, but if you change it to any other command, you will be surprised! – uzsolt – 2011-10-31T09:27:48.513

4Please provide at least one example where the code above won't work. This code is absolutely correct as written. I hope the asker is able to adapt it for their own purposes. – Andrey Atapin – 2011-10-31T09:56:37.867

3[ "$(ls -A /)" ] && touch /non-empty || echo "Empty" - if you want to "mark" the non-empty dirs by creating a file named non-empty and touch fails, this prints "Empty" even though the directory is not empty. – uzsolt – 2011-10-31T10:00:46.177

4where's touch /empty in my line? – Andrey Atapin – 2011-10-31T10:01:52.277

3Yes, you didn't write touch. But if you want to do something other than echo "Not Empty", your script may be wrong! I wrote this in my first comment, please read it again (maybe you read it before my edit). – uzsolt – 2011-10-31T10:04:24.457

24

No need to count anything or use shell globs. You can also use read in combination with find. If find's output is empty, read fails and the test returns false:

if find /some/dir -mindepth 1 | read; then
   echo "dir not empty"
else
   echo "dir empty"
fi

This should be portable.
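
If you use this in more than one place, it can be wrapped in a small helper; a sketch (the function name is just an example):

dir_has_content() {
   # succeeds only if find produces at least one line of output
   find "$1" -mindepth 1 | read -r
}

if dir_has_content /some/dir; then
   echo "dir not empty"
else
   echo "dir empty"
fi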

slhck

Posted 2011-10-31T03:47:31.703

Reputation: 182 472

Nice solution, but I think your echo calls reflect the wrong result: in my test (under Cygwin) find . -mindepth 1 | read had a 141 error code in a non-empty dir, and 0 in an empty dir – Lucas Cimon – 2017-12-20T09:17:11.947

@LucasCimon Not here (macOS and GNU/Linux). For a non-empty directory, read returns 0, and for an empty one, 1. – slhck – 2017-12-20T11:28:23.447

3PSA: this does not work with set -o pipefail – Colonel Thirty Two – 2019-09-10T02:07:19.783

20

if [ -n "$(find "$DIR_TO_CHECK" -maxdepth 0 -type d -empty 2>/dev/null)" ]; then
    echo "Empty directory"
else
    echo "Not empty or NOT a directory"
fi

uzsolt

Posted 2011-10-31T03:47:31.703

Reputation: 1 017

4It needs quotes (2x) and the test -n to be correct and safe (test with directory with spaces in the name, test it with non-empty directory with name '0 = 1'). ... [ -n "$(find "$DIR_TO_CHECK" -maxdepth 0 -type d -empty 2>/dev/null)" ]; ... – Zrin – 2017-03-07T23:51:22.563

1

@ivan_pozdeev That's not true, at least for GNU find. You may be thinking of grep. https://serverfault.com/questions/225798/can-i-make-find-return-non-0-when-no-matching-files-are-found

– Vladimir Panteleev – 2018-06-19T10:55:27.133

It might be simpler to write find "$DIR_TO_CHECK" -maxdepth 0 -type d -empty | grep ., and rely on the exit status from grep. Whichever way you do it, this is very much the right answer to this question. – Tom Anderson – 2018-07-06T13:56:21.993

Correct and fast. Nice! – l0b0 – 2012-02-24T11:00:06.380

14

#!/bin/bash
if [ -d /path/to/dir ]; then
    # the directory exists
    [ "$(ls -A /path/to/dir)" ] && echo "Not Empty" || echo "Empty"
else
    # You could check here if /path/to/dir is a file with [ -f /path/to/dir ]
    :  # no-op, because an else branch containing only a comment is a syntax error
fi

Renaud

Posted 2011-10-31T03:47:31.703

Reputation: 348

4That must be it, no need to parse ls output, just see whether it is empty or not. Using find just feels like overkill to me. – akostadinov – 2013-09-18T12:34:35.313

4

With find(1) (under Linux and FreeBSD) you can look at a directory entry non-recursively via -maxdepth 0 and test whether it is empty with -empty. Applied to the question, this gives:

if test -n "$(find ./ -maxdepth 0 -empty)" ; then
    echo "No new file"
    exit 1
fi
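
The same test works for a directory other than the current one if the path is passed in a quoted variable; a small sketch (target_dir is just an illustrative name; -type d keeps zero-byte regular files from matching):

target_dir="/path/to/dir"
if test -n "$(find "$target_dir" -maxdepth 0 -type d -empty)" ; then
    echo "No new file"
    exit 1
fi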

TimJ

Posted 2011-10-31T03:47:31.703

Reputation: 141

1It may not be 100% portable, but it's elegant. – Craig Ringer – 2018-11-28T02:59:44.703

This also finishes early in large directories, and works with pipefail: `set -o pipefail; { find "$DIR" -mindepth 1 || true ; } | head -n1 | read && echo NOTEMPTY || echo EMPTY` – macieksk – 2019-11-24T12:29:38.060

4

This will do the job in the current working directory (.):

[ `ls -1A . | wc -l` -eq 0 ] && echo "Current dir is empty." || echo "Current dir has files (or hidden files) in it."

or the same command split on three lines just to be more readable:

[ `ls -1A . | wc -l` -eq 0 ] && \
echo "Current dir is empty." || \
echo "Current dir has files (or hidden files) in it."

Just replace ls -1A . | wc -l with ls -1A <target-directory> | wc -l if you need to run it on a different target folder.
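
If the target directory can contain spaces, quoting it inside the command substitution keeps the count intact; a sketch with a made-up path:

[ "$(ls -1A "/path/with spaces/dir" | wc -l)" -eq 0 ] && echo "Target dir is empty." || echo "Target dir has files (or hidden files) in it."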

Edit: I replaced -1a with -1A (see @Daniel comment)

ztank1013

Posted 2011-10-31T03:47:31.703

Reputation: 451

1-1 is definitely redundant. Even if ls did not print one item per line when piped, it wouldn't affect the idea of checking whether it produced zero or more lines. – Victor Yarema – 2017-10-13T09:31:19.500

2use ls -A instead. Some file systems don't have . and .. symbolic links according to the docs. – Daniel Beck – 2011-10-31T10:12:17.457

1Thanks @Daniel, I edited my answer after your suggestion. I know the "1" might be removed too. – ztank1013 – 2011-10-31T10:21:41.160

3It doesn't hurt, but it's implied if output is not to a terminal. Since you pipe it to another program, it's redundant. – Daniel Beck – 2011-10-31T10:24:41.810

4

A hacky, but bash-only, PID-free way:

is_empty() {
    test -e "$1/"* 2>/dev/null
    case $? in
        1)   return 0 ;;
        *)   return 1 ;;
    esac
}

This takes advantage of the fact that the test builtin exits with 2 if given more than one argument after -e. First, the "$1/"* glob is expanded by bash, which results in one argument per file. So:

  • If there are no files, the asterisk in test -e "$1/"* does not expand, so the shell falls back to the literal file named *, and the test returns 1.

  • ...except if there actually is one file named exactly *; then the asterisk expands to, well, an asterisk, which ends up as the same call as above, i.e. test -e "dir/*", only this time it returns 0. (Thanks @TrueY for pointing this out.)

  • If there is exactly one file, test -e "dir/file" is run, which returns 0.

  • But if there is more than one file, test -e "dir/file1" "dir/file2" is run, which bash reports as a usage error, i.e. exit status 2.

The case statement wraps the whole logic so that only the first case, exit status 1, is reported as success.
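
A quick usage sketch (the path is only an example):

if is_empty "/path/to/dir"; then
    echo "No new file"
    exit 1
fi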

Possible problems I haven't checked:

  • There are more files than the number of allowed arguments--I guess this could behave similarly to the case with 2+ files.

  • Or there is actually a file with an empty name--I'm not sure that's possible on any sane OS/FS.

Alois Mahdal

Posted 2011-10-31T03:47:31.703

Reputation: 2 014

1Minor correction: if there is no file in dir/, then test -e dir/* is called. If the only file is '*' in dir then test will return 0. If there are more files, then it returns 2. So it works as described. – TrueY – 2018-09-07T11:43:20.563

You're right, @TrueY, I've incorporated it in the answer. Thanks! – Alois Mahdal – 2018-09-11T14:45:19.737

3

Use the following:

count="$( find /path -mindepth 1 -maxdepth 1 | wc -l )"
if [ $count -eq 0 ] ; then
   echo "No new file"
   exit 1
fi

This way, you're independent of the output format of ls. -mindepth skips the directory itself, -maxdepth prevents recursively descending into subdirectories to speed things up.
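
If the check is needed in several places, the count can go into a small function; a sketch (entry_count is just an illustrative name):

entry_count() {
   # number of direct entries of a directory, including hidden files
   find "$1" -mindepth 1 -maxdepth 1 | wc -l
}

if [ "$(entry_count /path)" -eq 0 ] ; then
   echo "No new file"
   exit 1
fi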

Daniel Beck

Posted 2011-10-31T03:47:31.703

Reputation: 98 421

Of course, you're now dependent on wc -l and find output format (which is reasonably plain though). – Daniel Beck – 2011-10-31T10:25:23.790

3

Using an array:

files=( * .* )
if (( ${#files[@]} == 2 )); then
    # contents of files array is (. ..)
    echo dir is empty
fi
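
Note that this relies on nullglob (see the comments): with bash's default globbing options an empty directory yields a literal * plus . and .., so the count is 3 instead of 2. A sketch with the option set explicitly:

shopt -s nullglob    # unmatched globs expand to nothing
files=( * .* )       # .* still matches . and ..
if (( ${#files[@]} == 2 )); then
    # contents of files array is (. ..)
    echo dir is empty
fi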

glenn jackman

Posted 2011-10-31T03:47:31.703

Reputation: 18 546

3Very nice solution, but note that it requires shopt -s nullglob – xebeche – 2017-01-16T11:24:17.200

3The ${#files[@]} == 2 assumption doesn't stand for the root dir (you will probably not test if it's empty but some code that doesn't know about that limitation might). – ivan_pozdeev – 2018-01-21T04:29:46.440

1@ivan_pozdeev: What do you mean? When I do cd / && files=(* .*), I get an enumeration of all the files and directories in the root directory, which includes . and ... So the ${#files[@]} == 2 test is valid. – Scott – 2020-02-03T04:08:18.970

2

What about testing whether the directory exists and is not empty in one if statement:

if [[ -d path/to/dir && -n "$(ls -A path/to/dir)" ]]; then 
  echo "directory exists"
else
  echo "directory doesn't exist"
fi

stanieviv

Posted 2011-10-31T03:47:31.703

Reputation: 21

1

if find "${DIR}" -prune ! -empty -exit 1; then
    echo Empty
else
    echo Not Empty
fi

EDIT: I think this solution works fine with GNU find, after a quick look at the implementation. But it may not work with, for example, NetBSD's find. Indeed, that one uses stat(2)'s st_size field. The manual describes it as:

st_size            The size of the file in bytes.  The meaning of the size
                   reported for a directory is file system dependent.
                   Some file systems (e.g. FFS) return the total size used
                   for the directory metadata, possibly including free
                   slots; others (notably ZFS) return the number of
                   entries in the directory.  Some may also return other
                   things or always report zero.

A better solution, also simpler, is:

if find "${DIR}" -mindepth 1 -exit 1; then
    echo Empty
else
    echo Not Empty
fi

Also, the -prune in the 1st solution is useless.

EDIT: there is no -exit for GNU find, so the solution above is good for NetBSD's find. For GNU find, this should work:

if [ -z "`find \"${DIR}\" -mindepth 1 -exec echo notempty \; -quit`" ]; then
    echo Empty
else
    echo Not Empty
fi

yarl

Posted 2011-10-31T03:47:31.703

Reputation: 68

find from GNU findutils 4.6.0 (the latest version) doesn't have an -exit predicate. – Dennis – 2018-09-11T15:13:15.953

1

The Question was:

if [ ./* == "./*" ]; then
    echo "No new file"
    exit 1
fi

Answer is:

if ls -1qA . | grep -q .; then
    ! exit 1
else
    : # Dir is empty
fi

HarriL

Posted 2011-10-31T03:47:31.703

Reputation: 11

1

I think the best solution is:

files=$(shopt -s nullglob; shopt -s dotglob; echo /MYPATH/*)
[[ "$files" ]] || echo "dir empty" 

thanks to https://stackoverflow.com/a/91558/520567

This is an anonymous edit of my answer that might or might not be helpful to somebody: A slight alteration gives the number of files:

files=$(shopt -s nullglob dotglob; s=(MYPATH/*); echo ${#s[*]})
echo "MYPATH contains $files files"

This will work correctly even if filenames contain spaces.

akostadinov

Posted 2011-10-31T03:47:31.703

Reputation: 1 140

0

This works for me to check and process files in the ../IN directory, assuming the script is in the ../Script directory:

FileTotalCount=0

for file in ../IN/*; do
    FileTotalCount=`expr $FileTotalCount + 1`
done

if test "$file" = "../IN/*"
then
    echo "EXITING: NO files available for processing in ../IN directory. "
    exit
else
    echo "Starting Process: Found $FileTotalCount files in ../IN directory for processing."

    # Rest of the Code
fi

Arijit

Posted 2011-10-31T03:47:31.703

Reputation: 1

0

I came up with this approach:

CHECKEMPTYFOLDER=$(test -z "$(ls -A /path/to/dir)"; echo $?)
if [ $CHECKEMPTYFOLDER -eq 0 ]
then
  echo "Empty"
elif [ $CHECKEMPTYFOLDER -eq 1 ]
then
  echo "Not Empty"
else
  echo "Error"
fi

Alex Sano

Posted 2011-10-31T03:47:31.703

Reputation: 9

0

This is all great stuff - I just made it into a script so I can check for empty directories below the current one. The code below should be put into a file called 'findempty', placed somewhere in the path so bash can find it, and then made executable with chmod 755. It can easily be amended to your specific needs, I guess.

#!/bin/bash
if [ "$#" == "0" ]; then
    find . -maxdepth 1 -type d -exec findempty "{}" \;
    exit
fi

COUNT=`ls -1A "$*" | wc -l`
if [ "$COUNT" == "0" ]; then
    echo "$* : $COUNT"
fi

Warren Sherliker

Posted 2011-10-31T03:47:31.703

Reputation: 9

-1

For any directory other than the current one, you can check if it's empty by trying to rmdir it, because rmdir is guaranteed to fail for non-empty directories. If rmdir succeeds, and you actually wanted the empty directory to survive the test, just mkdir it again.

Don't use this hack if there are other processes that might become discombobulated by a directory they know about briefly ceasing to exist.
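
A minimal sketch of that hack, assuming the directory is expendable and using a placeholder path:

dir="/path/to/dir"
if rmdir "$dir" 2>/dev/null; then
    mkdir "$dir"    # it was empty; put it back
    echo "Empty"
else
    echo "Not empty (or rmdir failed for some other reason)"
fi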

If rmdir won't work for you, and you might be testing directories that could potentially contain large numbers of files, any solution relying on shell globbing could get slow and/or run into command line length limits. It's probably better to use find in that case. The fastest find solution I can think of goes like this:

is_empty() {
    test -z "$(find "$1" -mindepth 1 -printf X -quit)"
}

This works for the GNU and BSD versions of find but not for the Solaris one, which is missing every single one of those find operators. Love your work, Oracle.

flabdablet

Posted 2011-10-31T03:47:31.703

Reputation: 174

Not a good idea. The OP simply wanted to test if the directory was empty or not. – roaima – 2018-05-02T14:05:33.920

-3

You can try to remove the directory and check for an error; rmdir will not delete a directory that is not empty.

_path="some/path"
if rmdir "$_path" >/dev/null 2>&1; then
   mkdir "$_path"      # create it again
   echo "Empty"
else
   echo "Not empty or doesn't exist"
fi

impxd

Posted 2011-10-31T03:47:31.703

Reputation: 7

3-1 This is the kind of code that backfires. rmdir will fail if I have no permission to remove the directory; or if it's a Btrfs subvolume; or if it belongs to a read-only filesystem. And if rmdir doesn't fail and mkdir runs: what if the already removed directory belonged to another user? what about its (possibly non-standard) permissions? ACL? extended attributes? All lost. – Kamil Maciorowski – 2018-04-08T21:37:20.910

1Well, I'm just learning bash and I thought it could be faster than iterating through the whole directory, but CPUs are powerful and you are right, it's not safe. – impxd – 2018-04-09T05:21:30.993