
Due to an unavoidable external situation*, I need to have more than 32k directories in a single directory (but, as far as I can tell, fewer than 64k). I'm hitting ext3's limit. I presume the original server was running ReiserFS. The backup is stored in S3.

My solution is to upgrade to ext4, which, according to Wikipedia:

In ext3 a directory can have at most 32,000 subdirectories. In ext4 this limit increased to 64,000.

My question is: will mounting the filesystem as ext4 automatically increase this limit? Will I have to run some command to enable new features? Do I have to re-create the directory?

* restoring a backup to convert the information to a new and better system we wrote

Pablo
  • proceed directly there: http://serverfault.com/questions/482998/how-can-i-fix-it-ext4-fs-warning-device-sda3-ext4-dx-add-entry-directory-in#comment537484_482998 :) – poige Feb 27 '13 at 14:55
  • Are you interested in alternative solutions (other that "migrate from ext3 to ext4")? – Hauke Laging Feb 27 '13 at 15:08
  • @HaukeLaging maybe, what do you have in mind? – Pablo Feb 27 '13 at 15:37
  • @poige Apparently it doesn't happen automagically, but does it happen when creating a new dir? When enabling new features? That information is not in the question and answer you linked to. – Pablo Feb 27 '13 at 15:40

4 Answers


The short answer: Yes. Converting from ext3 to ext4 does solve the problem.

The long answer:

Here's how I worked around this:

I have a 5 TB RAID array that hit this limit with about 4 TB of data on the partition.

First, I ran the following to convert it from ext3 to ext4:

tune2fs -O extents,uninit_bg,dir_index /dev/DEV

where /dev/DEV for me was something like /dev/sdb1

Then I ran:

e2fsck -fDC0 /dev/DEV
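# -f forces a check even if the filesystem looks clean,
# -D optimizes the directories (re-indexes them where dir_index is available),
# -C0 prints a progress bar while it runs.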

This took approximately 8 hours to run on 4TB of data.

Then I modified /etc/fstab to tell it to mount the partition as ext4.
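In practice that just means changing the filesystem type field of the relevant line from ext3 to ext4; mine ended up looking roughly like this (device, options and mount point are specific to my setup, so adjust accordingly):

/dev/sdb1   /big   ext4   defaults   0   2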

Then I ran

mount /big

where /big is my mount point. And it worked perfectly.
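
If you want to double-check that the kernel really mounted it as ext4 rather than ext3, something like this shows the filesystem type (substitute your own mount point):

df -T /big          # the Type column should now say ext4
mount | grep /big   # the mount entry should also list ext4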

So to answer your question: yes, converting to ext4 does actually solve the problem.

Read these before you do this conversion: http://www.debian-administration.org/article/643/Migrating_a_live_system_from_ext3_to_ext4_filesystem

https://ext4.wiki.kernel.org/index.php/Ext4_Howto#Converting_an_ext3_filesystem_to_ext4

Mark172
  • The `-D` option to `e2fsck` is the important part here. This converts all the directories to the new B-tree format, so you don't suffer from O(n) every time you try to traverse a directory. – Michael Hampton Mar 22 '14 at 19:17

No, changing the fs type and mounting it as ext4 won't make the number of inodes grow. That is fixed at filesystem creation time and cannot be changed on the fly in the ext* filesystems.

Moreover, going to ext4 just by unmounting and remounting is neither a clean nor a recommended approach. ext3 is block based and ext4 is extent based, and even if you mount ext3 as ext4, the existing files remain block based. So you won't get the major benefits of ext4.

If you have a test system, you can try to do the conversion and watch the dumpe2fs output.
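For example, something along these lines (the device name is just a placeholder) shows which features are actually enabled before and after a conversion attempt:

dumpe2fs -h /dev/DEV | grep -i features
# look at the "Filesystem features:" line for the extent and dir_index flags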

I did a quick check of the source. The limit is hardcoded:

/*
 * Maximal count of links to a file
 */
#define EXT3_LINK_MAX           32000

From include/linux/ext3_fs.h
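
For comparison, and as far as I can tell from current kernel sources, the ext4 driver carries its own hardcoded constant in fs/ext4/ext4.h:

#define EXT4_LINK_MAX           65000

(With the dir_nlink feature, if I read the code correctly, ext4 additionally stops counting a directory's subdirectory links once it passes that value, so the practical limit is even higher.)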

Soham Chakraborty
  • The subdirectory limit is a matter of the overall number of inodes? Doesn't make sense to me. IIRC newly created files use extents after mounting as ext4. – Hauke Laging Feb 27 '13 at 14:55
  • No, the subdir limit is definitely not about the number of inodes. I am going to update the answer. – Soham Chakraborty Feb 27 '13 at 15:00
  • The subdir limit has nothing to do with inodes. As you say, it's hardcoded in the driver, and ext4 has a different hardcoded limit. I'm not sure whether switching to extents is required to get the bigger limit, hence this question. I understand you can switch to extents and/or that new directories/files use extents. This is something I'm trying to test. – Pablo Feb 27 '13 at 15:39

An alternative solution to the original problem may be: stop the restore process every few minutes and check whether there are, say, 10000 subdirectories. If so, create a new directory (maybe not even inside this one), move the 10000 directories there, and create symlinks to them. This way you get the expected structure without hitting the FS limit.
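
A rough sketch of what I mean, with made-up paths and a made-up batch size; note that symlinks do not increase the parent directory's hard-link count, so they do not count against the limit:

# /restore/target is the directory about to hit the limit,
# /restore/overflow1 is a helper directory on the same filesystem
mkdir -p /restore/overflow1
for d in $(find /restore/target -mindepth 1 -maxdepth 1 -type d -printf '%f\n' | head -n 10000); do
    mv "/restore/target/$d" "/restore/overflow1/$d"
    ln -s "../overflow1/$d" "/restore/target/$d"
done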

Hauke Laging
  • I am considering this option. The problem is that it's a huge amount of files and before we process them we'll have to re-sync. I'm not sure how the re-syncing process will work with symlinks and how the processing software will work with it (we have some control over that). – Pablo Feb 27 '13 at 15:55
  • Can you check that in advance with some test data (structure)? If that shows problems, another idea would be to combine the contents of these helper directories via unionfs / aufs in one mount point. I don't know how they react to such huge amounts of subdirectories, though. – Hauke Laging Feb 27 '13 at 16:19

I don't have the whole answer, but this is as close as I can get with the experiments I ran. Once you run this command on a filesystem:

tune2fs -O extents,uninit_bg,dir_index /dev/DEV

you can create more than 32k directories in a directory.
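
For anyone who wants to reproduce the experiment, a loop along these lines (the path is just an example; run it on a scratch filesystem) is enough to see where directory creation starts failing:

mkdir -p /mnt/scratch/parent
for i in $(seq 1 40000); do
    mkdir "/mnt/scratch/parent/dir$i" || { echo "mkdir failed at $i"; break; }
done
# on plain ext3 this should stop just short of 32000; after the tune2fs command above it keeps going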

The part I do not know (and I cannot find right now) is whether mounting it as ext4 but with fewer features enabled lets you create more than 32k directories.

Pablo