In Linux, what happens if 1000 files in a directory are moved to another location while another 300 files are added to the source directory?

In Linux, what happens if 1000 files in a directory are moved to another location, and another 300 files are added to the source directory while the original 1000 files are being moved? Will the destination end up with 1300 files, or will there be 300 files remaining in the source folder?

Shayan Ahmad

Posted 2019-02-26T12:02:57.597

Reputation: 509

This is not a direct answer (that seems to be well provided by @Eugene-Rieck), but you might find it interesting/useful to read about race conditions (https://en.wikipedia.org/wiki/Race_condition). They seem to be relevant to your question. In effect, if the specific commands you use to do the moving and adding of files create a race condition, then unusual things will happen.

– user02814 – 2019-02-27T05:13:01.353

@user02814: The problem with race conditions is that unusual things might happen. When you're looking for them or writing tests, they usually don't happen. When you're putting code in production, they will surely happen. :) – Eric Duminil – 2019-02-27T12:58:35.813

As an anecdotal case, I was moving a directory (mv dir/ other/) during which I added files to it. At the end of the move the directory was deleted and the uncopied files disappeared with it. – The Vee – 2019-03-01T06:33:11.910

To my above comment: across filesystems, that is. – The Vee – 2019-03-01T07:00:22.677

Answers

This depends on which tools you use. Let's check a few cases:

If you run something along the lines of mv /path/to/source/* /path/to/dest/ in a shell, you will end up with the original 1000 files being moved and the new 300 untouched. This comes from the fact that the shell expands the * before starting the move operation, so by the time the move is in progress, the list is already fixed.
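A quick way to see this for yourself (a minimal sketch; src/ and dest/ are throwaway directories of my own choosing):

mkdir -p src dest
touch src/file{1..5}
before=(src/*)                # snapshot: expands to the 5 existing names
touch src/new{1..3}           # these appear after the expansion
mv -- "${before[@]}" dest/    # moves only the snapshotted names
ls src                        # -> new1 new2 new3 remain behind

The array assignment plays the role of the shell expanding mv's * argument: once the list exists, later changes to the directory can't get into it.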

If you use Nautilus (and other GUI friends), you will end up with the same result: it runs the move operation based on which files were selected, and this doesn't change when new files show up.

If you write your own program along the lines of "loop over the glob, mv one file at a time, repeat until the glob stays empty", you will end up with all 1300 files in the new directory. This is because every new glob picks up the new files that have shown up in the meantime.
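A minimal sketch of such a loop in shell (src/ and dest/ are hypothetical paths; nullglob makes an empty directory expand to an empty list instead of a literal src/*):

shopt -s nullglob
while files=(src/*); (( ${#files[@]} )); do
    mv -- "${files[@]}" dest/   # each pass re-expands the glob, so files
done                            # that arrived mid-move get picked up too

Note that if new files keep arriving forever, this loop never terminates.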

Eugen Rieck

Posted 2019-02-26T12:02:57.597

Reputation: 15 128

What happens if you opendir() the source, then loop over readdir() or getdents()? – user1686 – 2019-02-26T12:15:26.890

If you loop only once, then it won't change. – Eugen Rieck – 2019-02-26T13:58:48.160

Is that true for all filesystems, and regardless of the number of files? I assumed the kernel generally returns live results through readdir(), and doesn't pre-cache them or anything. – user1686 – 2019-02-26T14:14:29.437

The result-set of an opendir() is stable according to POSIX. A quick test with PHP's opendir() confirms that (but I tested only ext4). – Eugen Rieck – 2019-02-26T14:53:21.290

It turns out the result set of opendir() must be cached, as there is a limit on the number of returned values. That couldn't be true of a fully dynamic directory handle. – Eugen Rieck – 2019-02-26T14:58:31.773

@grawity: Not well defined. – Joshua – 2019-02-26T16:21:32.300

@grawity: POSIX says: If a file is removed from or added to the directory after the most recent call to opendir() or rewinddir(), whether a subsequent call to readdir() returns an entry for that file is unspecified. Also, NFS may put some restrictions on what is implementable, IIRC it complicates implementation of telldir()/seekdir() – ninjalj – 2019-02-26T17:51:15.143

@grawity: tangentially related: https://lwn.net/Articles/544520/

– ninjalj – 2019-02-26T18:04:18.013

Minor nitpicking thought: couldn't the expansion take so long that the 300 files - or some of them - are already copied over even though copying was triggered slightly after the moving, e.g. in a script? – Frank Hopkins – 2019-02-26T20:29:23.050

What happens if the dir contains something like a billion files, so that expanding the wildcard takes a significant amount of time, and the 300 files are moved there while this expansion takes place? – d-b – 2019-02-26T22:00:54.060

@d-b With more than say 100 000 files, the expanded command line will exceed the maximum command length limit (ARG_MAX, usually a few MiB) and the mv will fail to execute. – TooTea – 2019-02-27T08:18:30.300

@d-b If files are added or removed while expanding takes place, then the readdir discussion applies (because that's how the expansion is done too). – user1686 – 2019-02-27T08:24:04.243

A bit more digging shows that the result set is both stable and not stable: on Linux, readdir() populates a 32K buffer at a time and keeps that stable; if the result set exceeds 32K, the next 32K block is loaded (an unstable operation) and then kept stable until exhausted. So for many thousands of files, things look different. You can use getdents() with a buffer size bigger than 32K instead of "pure" readdir() to take care of the "many files" case.

What's funny is that while this approach seems very sensible, it is in fact not. That's because the likelihood of the directory contents changing mid-operation goes up as the number of files to process or copy (and thus the time needed) increases. So what looks like a sensible implementation is actually quite stupid, because it wastes resources on cases where there isn't a problem to start with (copying 1-2 files, or maybe 20 of them), and exacerbates the problem where it is likely to occur (copying 20,000 files). Still, good catch on your side :-) – Damon – 2019-02-27T10:51:55.280

@Damon I beg to differ: Reading a directory from disk is a very expensive operation - a 32K buffer takes care of between 500 and 1000 files with a single disk read, which seems perfect for the vast majority of readdir() operations used. Directories with more than 1000 files are rather exotic, and using getdents() with a bigger buffer is a reasonable burden for those. – Eugen Rieck – 2019-02-27T11:22:08.513

@EugenRieck Since your comment and Ninjalj's are in contradiction, could you provide your reference that POSIX says it's stable? – UKMonkey – 2019-02-28T11:27:52.660

I had a bookmark of the same article that Ninjalj quoted, and it contains a link to https://lwn.net/Articles/544846/ (The spec allows the application to read one filename a week and still be guaranteed to see all files that existed when it started the read with no duplicates.). This was of course made obsolete by my digging deeper, which showed stability only in buffer-sized chunks.

– Eugen Rieck – 2019-02-28T12:22:41.160

"you will end up with the original 1000 files being moved, the new 300 being untouched" I just tested this by moving 100 000 from FS A to FS B while moving another 10 000 files from a different location in FS to where those 100 000 files are. The 10 000 files just vanished. – UTF-8 – 2019-03-01T19:35:57.727

@UTF-8 This has nothing to do with this question: you either hit a bug or did something wrong - files should NEVER just vanish. – Eugen Rieck – 2019-03-02T18:45:05.367

@EugenRieck I suppose I forgot about the /*. But without it, there really seems to be a bug. I just reproduced my previous result and documented exactly what I did in this video: https://youtu.be/TvJYf_H6-O8 Edit: It will take a few minutes for this video to appear in any reasonable resolution where you can actually see what I'm doing. YT seems to be slow. It's a 4K video, not a 360p one. Don't worry. ;) Do you think this is a bug I should file, or did I do something wrong?

– UTF-8 – 2019-03-02T20:39:40.160

@UTF-8 That's a completely different story! You moved the directory, not the files it contains! That's a single move operation, not thousands of them. – Eugen Rieck – 2019-03-02T20:52:02.130

@EugenRieck Yes, I know. That's why I stated that I was wrong. I did not exactly do what was asked about in the question. However, do you think this is a bug in mv? – UTF-8 – 2019-03-02T20:59:53.877

When you tell the system to move all the files from a directory, it lists all the files and then starts moving them. If new files appear in the directory, they aren't added to the list of files to move, so they'll remain in the original location.

You can, of course, program a way of moving files different from mv, one that periodically checks for new files in the source directory.
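For example, a hedged sketch of such a rescan loop (src and dest are hypothetical paths; mv -t assumes GNU coreutils):

until [ -z "$(find src -mindepth 1 -maxdepth 1 -print -quit)" ]; do
    find src -mindepth 1 -maxdepth 1 -exec mv -t dest -- {} +
done

Each iteration re-lists the source directory, so files that appeared during a previous pass get moved on the next one.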

choroba

Posted 2019-02-26T12:02:57.597

Reputation: 14 741

like say xargs mv? – Joshua – 2019-02-28T02:53:20.810

The kernel itself can't be "in the middle" of a "move 1000 files" operation. You need to be much more specific about what operation you're proposing.

One thread can only move one file at a time with the rename(const char *oldpath, const char *newpath) or renameat system calls (and only within the same filesystem; see footnote 1). Or Linux renameat2, which has flags like RENAME_EXCHANGE to atomically exchange two pathnames, or RENAME_NOREPLACE to not replace the destination if it exists. (e.g. allowing a mv -i implementation that avoids the race condition of stat and then rename, which would still overwrite a file created after the stat. link + unlink could also solve that, because link fails if the new name exists.)
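A shell-level illustration of that link + unlink idea (a hedged sketch; the paths are hypothetical, and it only works within one filesystem, since ln here creates a hard link):

if ln src/file dest/file 2>/dev/null; then
    rm src/file    # the new name exists, so the old one can go: a "move"
else
    echo "dest/file already exists, not overwriting" >&2
fi

Because ln refuses to create a name that already exists, there is no window in which a freshly created dest/file could be silently clobbered.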

But each of these system calls only renames a single directory entry per system call. Using POSIX renameat with olddirfd and newdirfd (opened with open(O_DIRECTORY)) would allow you to keep looping over files in a directory even if the source or destination directory itself had been renamed. (Using relative paths could also allow that with regular rename().)

Anyway, as the other answers say, most programs that use the rename system call will figure out a list of filenames before doing the first rename. (Usually using the readdir(3) POSIX library function as a wrapper for platform-specific system calls like Linux getdents).

But if you're talking about find -exec ... {} \; to run one command per file, or the more efficient -exec {} + with so many files that they don't fit on one command line, then you can certainly have renames happening while still scanning. e.g.

find . -name '*.txt' -exec mv -t ../txtfiles {} \;   # Intentionally inefficient

If you created some new .txt files while this was running, you might see some of them in ../txtfiles. But internally find(1) will have used open(O_DIRECTORY) and getdents on the . directory.

If one system call was enough to return all the directory entries in . (which find will loop over one at a time, only making further system calls if needed for -type or to recurse, or fork+exec on a match), then the list is a snapshot of the directory entries at one point in time. Further changes to the directory can't affect what find does, because it already has a copy of the directory listing that it will loop over. (Probably it internally uses readdir(3), which returns one entry at a time, but inside glibc we know from running strace find . that it makes a getdents64 system call with a buffer size of count=32768 bytes.)

But if the directory is huge and/or the kernel doesn't fill find's buffer, it will have to make a 2nd getdents system call after looping over what it got the first time. So it could maybe see new entries after doing some renames.
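One way to watch this yourself (hedged: the exact numbers and syscall names vary with the kernel, libc, and find version) is to trace the directory-reading syscalls:

strace -e trace=getdents64 find /some/big/dir -maxdepth 1 >/dev/null

On a small directory you should see one getdents64 call returning all the entries and a final one returning 0; on a directory with many thousands of entries you should see several calls, each refilling the buffer.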

But see the discussion in comments under other answers: the kernel might have snapshotted for us, because (I think) getdents isn't allowed to return the same filename twice. Different filesystems use different sorting / indexing mechanisms to make access to an entry in a huge directory more efficient than a linear search, so adding or removing a directory entry might possibly have other effects on the order of the remaining entries. Hmm, probably it's more likely that filesystems keep a stable order and just update an actual index (like the EXT4 dir_index feature), so a directory FD's position can just be a directory entry to resume from? I really don't know how the telldir(3) library interface maps onto lseek, or whether that's purely a user-space thing for looping over the buffer obtained by user-space. But multiple getdents calls can be needed to get all the entries from a huge directory, so even if seeking isn't supported, the kernel needs to be able to record a current position.


Footnote 1:

To "move" between filesystems, it's up to user-space to copy and unlink. (e.g. with open and either read+write, mmap+write or sendfile(2) or copy_file_range(2), the latter two totally avoiding bouncing the file data through user-space.)

Peter Cordes

Posted 2019-02-26T12:02:57.597

Reputation: 3 141