
I'm using gitosis on a server that has a small amount of memory, around 512 MB. When I try to push a large folder (it happens to be a backup from an Android phone), I get:

me@corellia:~/Configs/$ git push origin master

Counting objects: 18, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (14/14), done.
fatal: Out of memory, malloc failed MiB | 685 KiB/s   
error: pack-objects died of signal 13
error: failed to push some refs to 'git@dagobah:Configs'

I've been searching the web, and notably found http://www.mail-archive.com/git-users@googlegroups.com/msg01747.html as well as http://git.661346.n2.nabble.com/Out-of-memory-error-during-git-push-td5443705.html, but these don't seem to help me, because I am not actually out of memory when I push. When I run `top` during the push, I get:

24262 git       18   0 16204 6084 1096 S    2  1.2   0:00.12 git-unpack-obje   

Also, if I run `cat /proc/meminfo` during the push, I get:

MemTotal:       524288 kB
MemFree:        289408 kB
Buffers:             0 kB
Cached:              0 kB
SwapCached:          0 kB
Active:              0 kB
Inactive:            0 kB
HighTotal:           0 kB
HighFree:            0 kB
LowTotal:       524288 kB

So it seems that I have enough memory free, yet the push is still failing, and I'm not enough of a git guru to figure out what is happening. I would appreciate it if someone could give me a hand here and tell me what could be causing this problem, and what I can do to solve it.
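
For what it's worth, a way to check whether the server-side process is exhausting virtual address space rather than physical memory is to inspect its /proc status while the push is in flight (a sketch; 24262 is the PID of git-unpack-objects from the top output above):

grep -E 'VmPeak|VmSize' /proc/24262/status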

Thanks!

EDIT:

The output of running the ulimit -a command:

scottj@dagobah:~$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 204800
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 204800
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

EDIT:

The git objects and sizes are:

313796  .git/objects/88/7ce48679f885af4c466d4ddccef9a9954a9de6
224276  .git/objects/18/261f6a52926a756c7ecb453e025d1f25937026
6248    .git/objects/63/a0b4e622c893d3dcc162052b43301030d0c86d
5608    .git/objects/a2/0c65987656cba591171549752eb97f0207fec8
2608    .git/objects/pack/pack-3be8300f69b67fa8fa687df84bbd9b8c96e86c8e.pack
28  .git/objects/pack/pack-3be8300f69b67fa8fa687df84bbd9b8c96e86c8e.idx
24  .git/objects/c9/8909563ec60369d69ac2d317af25a44c9fc198
24  .git/objects/5d/1f74bd9bc4c575a7eeec08d59916d9829068d1
24  .git/objects/53/edad79cb051f5e7864d9d3339fa59990ccfe2d
8   .git/objects/80/dd50c7a314950e5a1f56c0210b0a91f48ee792
jwir3
  • Is this a 32-bit or 64-bit build of `git`? This is usually caused by some very restrictive limit on virtual memory imposed by a bone-headed administrator who doesn't realize that virtual memory is *not* a scarce resource and should not be aggressively limited. Can you paste your `ulimit -a` output? – David Schwartz Mar 23 '12 at 22:37
  • I posted the output of `ulimit -a` in the original question. On the client machine, I'm using a 64-bit git client. On the server machine, it's 32-bit (or at least I think it's 32-bit, because the output of the package says it's Architecture: all). Both systems are Ubuntu Linux. – jwir3 Mar 23 '12 at 23:10
  • Git likes to mmap its objects and packfiles. This doesn't take up much physical memory, but (especially on 32-bit) you can easily run out of virtual address space. It's possible that the client is pushing a packfile that the server is unable to load. Does your repository contain large files? – ephemient Mar 24 '12 at 04:29
  • Yes, this started when I tried to push a large file - specifically one that is 463M and one that is 365M. This could be causing the problem... What's the typical resolution of this in the 32-bit case? Is there one, or is the solution to upgrade to 64 bit? – jwir3 Mar 26 '12 at 19:41
  • Checking top output is irrelevant since the malloc is failing! IMHO you simply need more memory than you have, try adding a swap if possible. – Giovanni Toraldo Mar 31 '12 at 07:44
  • @Giovanni: Well, that's not something I have control over at the moment. So, there's no way to run a git server that accepts large files during push that has 512MB of memory? (I know 512M isn't a lot, but it seems reasonably sufficient for most tasks). – jwir3 Apr 01 '12 at 01:38
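
(A quick way to run the checks suggested in these comments, i.e. the bitness of the git binary and the per-process virtual memory limit; a sketch assuming standard Debian/Ubuntu tools:)

file "$(command -v git)"   # reports whether the git binary is a 32-bit or 64-bit ELF executable
ulimit -v                  # per-process virtual memory limit in KiB ("unlimited" if none is set)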

2 Answers


It is a bit of a stretch, but give this a try:

git -c core.packedGitWindowSize=32m -c core.packedGitLimit=256m push origin master

This overrides a couple of parameters that limit the number of bytes mapped from files. These values are the defaults for a 32-bit system; the 64-bit defaults are much larger. I'm speculating that you are on a 64-bit system, which causes git to use those very large defaults, while some resource constraint (perhaps from running in a VM) triggers the error.

These configuration parameters and values came from http://schacon.github.com/git/git-config.html
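
If the per-invocation override helps, the same limits can also be made persistent on the low-memory server itself. A minimal sketch, assuming the bare repository lives at gitosis' usual location under the git user's home directory (adjust the path to your layout):

# On the server, inside the bare repository:
cd ~git/repositories/Configs.git
git config core.packedGitWindowSize 16m
git config core.packedGitLimit 128m
git config pack.windowMemory 64m
git config pack.threads 1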

Brian Swift
  • Hm, no that doesn't seem to have done the trick. :( It did get further than it has in the past, though. – jwir3 Apr 04 '12 at 17:28
  • If the reduced memory helped some, maybe try reducing the values further by a factor of four to 8m and 64m. – Brian Swift Apr 04 '12 at 18:18
  • Hmm... still no dice. :( – jwir3 Apr 04 '12 at 18:37
  • I'm still stretching here, but maybe try adding `-c pack.threads=1 -c pack.deltaCacheSize=64m -c pack.windowMemory=64m` – Brian Swift Apr 04 '12 at 19:08
  • Nope. Still fails. It gets to "Writing Objects: 66%" and sits there for a minute, with the speed gradually increasing from 100 kB/s to about 450 kB/s, then it dies. :| – jwir3 Apr 04 '12 at 22:18
  • If you'd like to try another one, add `-c core.bigFileThreshold=128m` – Brian Swift Apr 05 '12 at 00:00
  • Dang. Still nothing. I was really hoping that would work. :) – jwir3 Apr 05 '12 at 00:18
  • On the odd chance the sizes reported for the objects are in 512-byte blocks, try `-c core.bigFileThreshold=64m`. Also, I'd look to see if anything is being logged in `/var/log/messages` on the server when the crash occurs. Does `df` on the server show plenty of free disk space? And while running `top` on the server, observe the Mem free and Swap free values just before the failure, or alternatively run `watch grep -i -e MemFree -e SwapFree /proc/meminfo`. – Brian Swift Apr 05 '12 at 01:24
  • df does show about 13G left on the partition where / is mounted. – jwir3 Apr 05 '12 at 02:26
  • The core.bigFileThreshold=64M didn't work either. When I do the `watch grep -i -e MemFree -e SwapFree /proc/meminfo`, I see that there is no swap space available. This could be causing my problem. It's possible I don't have a swap partition at all! (Is it possible to install one from free space left on the device?) – jwir3 Apr 05 '12 at 02:32
  • Unfortunately, I think SwapFree should only be a problem if MemFree drops to 0. Yes, it is possible to add swap as a plain file in the partition, but I don't have the recipe for that at hand (see the sketch after these comments). – Brian Swift Apr 05 '12 at 04:25
  • Brian Swift: Thanks for pointing to `-c pack.threads=1`! My biggest object was only 5 MB and I was perplexed to see that strange error on a 24-CPU server with 3 GB of RAM free. "...Delta compression using up to 24 threads..." pointed me to trying your thread suggestion. Probably it would work with more than one thread, but 1 definitely works :D – Arno Teigseth May 08 '14 at 22:58
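
For anyone wanting to follow the swap-file suggestion above, the standard Linux recipe is roughly this (run as root; the 1 GiB size and the /swapfile path are only examples):

dd if=/dev/zero of=/swapfile bs=1M count=1024
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo '/swapfile none swap sw 0 0' >> /etc/fstab   # make it survive reboots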

What platform/distribution are you on? Ubuntu, Red Hat, CentOS, etc., both for the client and the server? What's the memory usage on the client you're pushing from? I've had this happen before with pushes that encompass a large number of revisions. One workaround is to push your changes to the server incrementally, if at all possible (a sketch appears at the end of this answer). The other solution is to raise the kernel's memory limits: some kernel configurations have settings that prevent the kernel from allocating the maximum memory to a single process:

Set Kernel Parameters
Modify the "/etc/sysctl.conf" file to include the lines appropriate to your operating system.
# Red Hat Enterprise Linux 3.0 and CentOS 3.x 
 kernel.shmmax = 2147483648
 kernel.shmmni = 4096
 kernel.shmall = 2097152
 kernel.shmmin = 1
 kernel.shmseg = 10

# semaphores: semmsl, semmns, semopm, semmni
 kernel.sem = 250 32000 100 128
 fs.file-max = 65536

# Red Hat Enterprise Linux 4.0 and CentOS 4.x 
 kernel.shmmax = 536870912
 kernel.shmmni = 4096
 kernel.shmall = 2097152

If your git process exceeds these limits, the kernel will kill it, even though the reported maximum memory appears to be available on your system.

Note: be careful with these settings. You probably don't want to use the exact values in this example, as I pulled them from a server in our environment.

A few extra notes to mention:

To update and test kernel settings with sysctl, use the following commands:

List current settings: sysctl -A | grep shm

sysctl -w kernel.shmmax=<value> sets the value in the running kernel (add the line to /etc/sysctl.conf to make it persistent)
sysctl -p /etc/sysctl.conf reads/reloads the values from /etc/sysctl.conf

Disable SELinux by editing the "/etc/selinux/config" file, making sure the SELINUX flag is set as follows.

SELINUX=disabled
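
As for the incremental-push workaround mentioned at the top of this answer, the idea is to push the history in smaller slices so that no single pack becomes too large. A sketch, where <earlier-commit> stands for any commit partway through the unpushed history:

git push origin <earlier-commit>:refs/heads/master
git push origin master
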
Jason Huntley
  • The server is using Debian Linux "Lenny", and the client is Ubuntu Linux 11.04. Let me check and see what I can find out about the memory usage of the client - I am actually away from my machine right now on travel, so it'll be later next week. Thanks for the reply. – jwir3 Mar 30 '12 at 20:51
  • BTW - this is a single commit push, it just has a very large file in it. – jwir3 Apr 04 '12 at 17:29
  • What does top show for mem usage when you go to push the large file? – Jason Huntley Apr 04 '12 at 17:34
  • On the client, it shows: ` 7784 sjohnson 20 0 1362m 1.3g 1124 S 100 16.8 0:10.27 git` – jwir3 Apr 04 '12 at 17:44
  • On the server, it shows: ` 1625 git 18 0 16204 6688 1112 S 2 1.3 0:00.15 git-unpack-obje ` – jwir3 Apr 04 '12 at 17:48
  • OK, so it looks like you're not using that much memory on the server, if that is in bytes. I would execute `cat /proc/sys/kernel/shmmax` on your client and see what the setting is. – Jason Huntley Apr 04 '12 at 17:52
  • Looks like the output of that is: 33554432 – jwir3 Apr 04 '12 at 22:14
  • I'd try increasing the size. 32 MB might be too small to pack a 512 MB file. However, I'm not really certain how git packs in memory, so it wouldn't hurt to try it out. Just set your shmmax to 2 GB and try again: `echo "2147483648" > /proc/sys/kernel/shmmax`. You shouldn't have to reboot when setting it that way. If it doesn't work, just set it back to the original value. – Jason Huntley Apr 04 '12 at 23:46
  • Yeah, that didn't work, either. I am beginning to think there might just not be a solution. :( – jwir3 Apr 04 '12 at 23:59
  • wait, how much total memory is on the client end? Physical memory? – Jason Huntley Apr 05 '12 at 00:03
  • 8 GB of physical memory – jwir3 Apr 05 '12 at 00:19
  • I would start considering it either a bug or a limitation of git: http://code.google.com/p/msysgit/issues/detail?id=292. It also seems you're not the only one who has encountered problems packing before a push. You could always try the last comment in that issue, `git config --global pack.windowMemory 256m`. Another option: try `git repack -adf --window=5`. – Jason Huntley Apr 05 '12 at 00:29
  • Yeah, unfortunately, neither of those worked. It could definitely be a limitation in git... I knew I wasn't the only one with this problem, but I was thinking there actually was a solution and I just didn't understand it. :( I will consider maybe upgrading the VM. – jwir3 Apr 05 '12 at 02:28