11

I have a problem with my GitLab installation running on a small Ubuntu 16.04 LTS server. I should point out that I don't have much experience with Linux or GitLab.

My GitLab installation, which hosts only a few personal projects (4), was running OK, but pushing is extremely slow and sometimes fails, and accessing the web interface is extremely slow as well. I checked the server and noticed that up to 96% of total memory was in use. The culprits seem to be several bundle processes.

top - 00:15:30 up 59 days, 16:17,  1 user,  load average: 0.00, 0.01, 0.09
Tasks: 160 total,   1 running, 159 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.5 us,  0.2 sy,  0.0 ni, 99.3 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 72.4/2048272  [|||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||                           ]
KiB Swap:  0.0/0        [                                                                                                    ]

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 8760 git       20   0  648908 412768  14700 S   0.7 20.2   0:30.58 bundle
 8799 git       20   0  513748 302632  14300 S   0.0 14.8   0:20.02 bundle
 8833 git       20   0  513748 293028   4696 S   0.0 14.3   0:00.03 bundle
 8839 git       20   0  513748 292904   4572 S   0.0 14.3   0:00.02 bundle
 8836 git       20   0  513748 292840   4508 S   0.3 14.3   0:00.04 bundle
11792 mysql     20   0 1567168 158296      0 S   0.0  7.7   5:01.31 mysqld
32688 root      20   0 11.279g  99476   1164 S   0.0  4.9   1:21.06 dotnet
 8092 gitlab-+  20   0  576816  39616  39020 S   0.0  1.9   0:00.10 postgres
 8854 gitlab-+  20   0  595572  15004  10524 S   0.0  0.7   0:00.09 postgres
 8075 git       20   0  128348  14896   7680 S   0.0  0.7   0:00.07 gitlab-workhors
 8830 gitlab-+  20   0  592816  12196   9780 S   0.0  0.6   0:00.04 postgres
 9534 gitlab-+  20   0  592824  12060   9668 S   0.0  0.6   0:00.01 postgres
 8781 gitlab-+  20   0  592816  11932   9616 S   0.0  0.6   0:00.02 postgres
32684 root      20   0   61856  11420      0 S   0.0  0.6  23:35.39 supervisord
 8100 gitlab-+  20   0   37552  11112   2868 S   0.3  0.5   0:03.74 redis-server
 8094 gitlab-+  20   0  577068   7944   7324 S   0.0  0.4   0:00.01 postgres
 8087 gitlab-+  20   0   46756   7932   2900 S   0.0  0.4   0:00.01 nginx
 8095 gitlab-+  20   0  577068   7052   6444 S   0.0  0.3   0:00.06 postgres
 8088 gitlab-+  20   0   46412   6752   1992 S   0.0  0.3   0:00.10 nginx
  975 root      20   0   38236   6368   1908 S   0.0  0.3   8:47.56 systemd-journal
 8097 gitlab-+  20   0  578076   5600   4240 S   0.0  0.3   0:00.05 postgres
 8086 root      20   0   42240   5524   4696 S   0.0  0.3   0:00.00 nginx
  974 root      20   0   12204   4720     60 S   0.0  0.2   2:33.12 haveged
    1 root      20   0  185260   4308   2408 S   0.0  0.2   3:23.22 systemd
 7757 root      20   0   25224   4256   2484 S   0.0  0.2   0:00.28 bash
 9857 root      20   0   42468   3708   3076 R   0.0  0.2   0:00.09 top
 8098 gitlab-+  20   0   26956   3296   2608 S   0.0  0.2   0:00.08 postgres
 8089 gitlab-+  20   0   42424   3260   2224 S   0.0  0.2   0:00.01 nginx
 8784 git       20   0   18100   2980   2664 S   0.0  0.1   0:00.38 gitlab-unicorn-
 8096 gitlab-+  20   0  577068   2932   2332 S   0.0  0.1   0:00.03 postgres

I ran pstree, and these bundle processes seem to belong to a Ruby application (which must be GitLab).

systemd─┬─agetty
        ├─atd
        ├─bundle─┬─3*[bundle───{ruby-timer-thr}]
        │        └─{ruby-timer-thr}
... 

Has anyone had similar experiences, or an idea what might be causing this?

mode777

6 Answers

4

Those will be the Unicorn workers and Sidekiq. They appear to be using the expected amount of memory. 2 GB is about the bare minimum of RAM to run GitLab; if your system sees much activity at all, you'll want 4 GB or more.
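If upgrading the RAM isn't an option right away, the omnibus package does let you trim the worker counts in /etc/gitlab/gitlab.rb. A minimal sketch, assuming an omnibus install of roughly that era (check the exact key names against your GitLab version), applied with a gitlab-ctl reconfigure:

# /etc/gitlab/gitlab.rb -- reduce the memory footprint on a small box
unicorn['worker_processes'] = 2   # default scales with the CPU count
sidekiq['concurrency'] = 10       # default is 25 worker threads

Even then, 2 GB leaves very little headroom.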

I have a personal GitLab instance on 2 GB of RAM as well, and it shows similar usage:

top - 23:30:42 up 5 days,  7:53,  1 user,  load average: 0.04, 0.03, 0.05
Tasks: 172 total,   2 running, 170 sleeping,   0 stopped,   0 zombie
%Cpu(s):  1.2 us,  0.2 sy,  0.0 ni, 98.7 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem :  2048816 total,    72636 free,  1762504 used,   213676 buff/cache
KiB Swap:  1048572 total,   801180 free,   247392 used.    73972 avail Mem 

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND     
  664 git       20   0  715620 458296   2964 S   3.0 22.4 139:48.55 bundle      
 1623 git       20   0  543608 327472   3044 S   0.0 16.0   3:46.02 bundle      
 1626 git       20   0  543608 324384   3224 S   0.0 15.8   3:51.97 bundle      
 1620 git       20   0  543608 324244   3088 S   0.0 15.8   3:51.68 bundle      
 1556 git       20   0  510840 149736   2616 S   0.0  7.3   0:18.45 bundle    

Note that top doesn't show you what the processes are really doing, but you can easily find out with ps. For instance:

# ps 664
  PID TTY      STAT   TIME COMMAND
  664 ?        Ssl  139:49 sidekiq 4.2.1 gitlab-rails [0 of 25 busy]
# ps 1556
  PID TTY      STAT   TIME COMMAND
 1556 ?        Sl     0:18 unicorn master -D -E production -c /var/opt/gitlab/gitlab-rails/etc/unicorn.rb /opt/gitlab/embedded/service/gitlab-rails/config.ru
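To map every bundle process to its real command line in one pass, rather than checking PIDs one by one, a standard procps ps should support something like:

# PID, owner, resident memory (kB) and full command of each bundle process
ps -C bundle -o pid,user,rss,args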
Michael Hampton
  • Thank you for your answer. I think I will have to look for a more lightweight solution. [Gogs](https://gogs.io/) looks promising – mode777 Dec 03 '16 at 07:40
  • I also have 2 GB of RAM, and GitLab runs fine at the beginning. It seems that there is a memory leak in Sidekiq ( https://gitlab.com/gitlab-org/gitlab-ce/issues/30564 ). There are some things you can do, like https://docs.gitlab.com/ce/administration/operations/sidekiq_memory_killer.html (but I haven't done that myself), or restart that Sidekiq process every now and then (maybe with a cron job?). – Josejulio Jul 06 '17 at 16:13
  • The Unicorn worker killer might also be useful: https://about.gitlab.com/2015/06/05/how-gitlab-uses-unicorn-and-unicorn-worker-killer/ – Josejulio Jul 06 '17 at 16:21
  • I am evaluating GitLab for a project and have encountered a similar issue, here in March of 2018. A shiny new Debian install on a 2 GB node: GitLab runs fine, but over a few days the `bundle` processes consume memory and cause excessive swapping. This was fixed, at least temporarily, with `gitlab-ctl restart`. "GitLab has memory leaks," the documentation says. Yeah, it has leaks from the moment you install it, while it is running idle. – Roger Halliburton Mar 14 '18 at 20:57
  • You can press `c` in top to show the actual command lines. – Thomas Oct 30 '18 at 16:02
  • @Thomas This works, but also requires expanding your terminal width. This isn't always possible. – Michael Hampton Oct 30 '18 at 16:09
3

GitLab CE wants at least 4 GB of RAM. If you have only 2 GB, GitLab spills the remaining memory it needs into swap, and the constant swapping makes GitLab very slow, even if you're the only user.

The solution: your machine must have at least 4 GB of RAM. Don't waste your time tweaking GitLab's configuration file; just make sure you have a full 4 GB of RAM.

Read the 'Memory' section of GitLab's requirements document: https://gitlab.com/gitlab-org/gitlab-ce/blob/master/doc/install/requirements.md
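Your top output also shows no swap configured at all (KiB Swap: 0.0/0), and the requirements document recommends having swap available even with enough RAM. As a stopgap until you can add RAM, a swap file can be set up roughly like this (size and path are illustrative; run as root, and add an /etc/fstab entry to make it permanent):

free -m                     # check current memory and swap
fallocate -l 2G /swapfile   # create a 2 GB swap file
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile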

Good luck!

AndaluZ
  • This worked for me: I saw it but thought 1 user was much less than 500: "4GB RAM is the required minimum memory size and supports up to 500 users" -- apparently 1 =~ 500 here. IOW the 4 GB is critical here. – learning2learn Mar 30 '21 at 17:19
1

I know this thread is a little stale, but does anybody else still encounter this? I'm on a physical box with 24 GB of RAM and 12 cores/24 threads, and I'm seeing bundle forked like mad until it sucks up all the memory. I looked in the GitLab config and found that Sidekiq concurrency is set to 25 by default; apparently that means up to 25 copies of bundle can run? It creates as many as it can until the machine runs out of memory. Crazy.
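Worth noting: Sidekiq's concurrency setting is the number of worker threads inside a single Sidekiq process, not a count of separate processes, so it helps to check whether you are really seeing many processes or one process with many threads. On a standard procps ps, the nlwp column shows the thread count:

# One line per bundle process: PID, thread count, resident memory (kB), command
ps -C bundle -o pid,nlwp,rss,args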

BoeroBoy
  • Update: I found this thread, which helps: https://stackoverflow.com/questions/36122421/high-memory-usage-for-gitlab-ce – BoeroBoy Jul 30 '18 at 19:41
0

Add the following to the gitlab.rb file and reconfigure GitLab for it to take effect (see the commands after the snippet):

puma['worker_processes'] = 0
sidekiq['max_concurrency'] = 10
postgresql['shared_buffers'] = "256MB"
prometheus_monitoring['enable'] = false
 
gitlab_rails['env'] = { 'MALLOC_CONF' => 'dirty_decay_ms:1000,muzzy_decay_ms:1000' }
 
gitaly['env'] = {
  'LD_PRELOAD' => '/opt/gitlab/embedded/lib/libjemalloc.so',
  'MALLOC_CONF' => 'dirty_decay_ms:1000,muzzy_decay_ms:1000',
  'GITALY_COMMAND_SPAWN_MAX_PARALLEL' => '2'
}
 
gitaly['concurrency'] = [
  {
    'rpc' => "/gitaly.SmartHTTPService/PostReceivePack",
    'max_per_repo' => 10
  }, {
    'rpc' => "/gitaly.SSHService/SSHUploadPack",
    'max_per_repo' => 4
  }
]
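Changes to gitlab.rb only take effect once the services are reconfigured, so after editing the file something along these lines applies the settings and sanity-checks the result:

sudo gitlab-ctl reconfigure   # regenerate the service configuration from gitlab.rb
sudo gitlab-ctl status        # confirm all services came back up
free -m                       # see how much memory is in use afterwards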

In case you're wondering where this came from, look here.

Shery
0

Have you tried turning it off and then back on again?

gitlab-ctl restart

Whatever is happening with bundle, it seems pretty clear that the *-killer tools are not catching these issues. It looks like these processes are started by Sidekiq.
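If a full restart keeps clearing it up, a crude scheduled workaround (not a fix) is to restart just the Sidekiq service, as suggested in the comments above. A sketch, assuming an omnibus install with gitlab-ctl at /opt/gitlab/bin/gitlab-ctl:

# /etc/cron.d/gitlab-sidekiq-restart -- restart Sidekiq nightly at 04:00
0 4 * * * root /opt/gitlab/bin/gitlab-ctl restart sidekiq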

0

There is an issue about this on gitlab.com: #40816.

It seems that setting GITLAB_UNICORN_MEMORY_MIN and GITLAB_UNICORN_MEMORY_MAX higher can help: https://docs.gitlab.com/ee/user/gitlab_com/index.html#unicorn

I use:

gitlab_rails['env'] = {
  'GITLAB_UNICORN_MEMORY_MIN' => '786432000',
  'GITLAB_UNICORN_MEMORY_MAX' => '1572864000'
}
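Those values are in bytes: 786432000 is 750 MiB and 1572864000 is 1500 MiB, so the built-in unicorn-worker-killer lets each worker grow larger before recycling it. After reconfiguring, you can watch whether workers are still being restarted for exceeding the limit:

sudo gitlab-ctl tail unicorn   # worker-killer restarts show up in this log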
gotjosh