
EDIT: It turns out this is a GitLab problem; however, I still do not have a solution.

I have a weird situation with two of my AWS EC2 instances. They are identical in terms of OS, region, and instance type (both t3.micro) and were set up the same way (the first one a few months earlier than the second).

Both are in the eu-central-1c availability zone, both work with the same Git repository, and both are fully up to date (CentOS 7.6.1810).

Older server:

$ time git pull
Already up-to-date.

real    0m0.306s
user    0m0.034s
sys     0m0.016s

Newer server:

$ time git pull
Already up-to-date.

real    2m7.547s
user    0m0.026s
sys     0m0.024s

On the newer server it consistently takes about 2m7s.

Raw download speed is fine on both instances:

Older server:

--2019-04-09 10:52:03--  https://speed.hetzner.de/1GB.bin
Resolving speed.hetzner.de (speed.hetzner.de)... 88.198.248.254, 2a01:4f8:0:59ed::2
Connecting to speed.hetzner.de (speed.hetzner.de)|88.198.248.254|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1048576000 (1000M) [application/octet-stream]
Saving to: ‘1GB.bin’

100%[===============================================================>] 1,048,576,000  121MB/s   in 6.5s   

2019-04-09 10:52:10 (154 MB/s) - ‘1GB.bin’ saved [1048576000/1048576000]

Newer server:

--2019-04-09 10:54:04--  https://speed.hetzner.de/1GB.bin
Resolving speed.hetzner.de (speed.hetzner.de)... 88.198.248.254, 2a01:4f8:0:59ed::2
Connecting to speed.hetzner.de (speed.hetzner.de)|88.198.248.254|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1048576000 (1000M) [application/octet-stream]
Saving to: ‘1GB.bin’

100%[===============================================================>] 1,048,576,000  130MB/s   in 5.9s   

2019-04-09 10:54:10 (170 MB/s) - ‘1GB.bin’ saved [1048576000/1048576000]

EDIT: I tried using a GitHub repository instead of our GitLab, and the problem seems to be specific to GitLab. What could possibly be causing GitLab to respond quickly to the older server but not to the newer one?

EDIT 2: I attempted to clone over HTTPS. It takes about 2 minutes before it even asks for my username.

Also, verbose output over SSH:

$ GIT_CURL_VERBOSE=1 GIT_TRACE=1 git pull
trace: exec: 'git-pull'
trace: run_command: 'git-pull'
trace: built-in: git 'rev-parse' '--git-dir'
trace: built-in: git 'rev-parse' '--is-bare-repository'
trace: built-in: git 'rev-parse' '--show-toplevel'
trace: built-in: git 'ls-files' '-u'
trace: built-in: git 'symbolic-ref' '-q' 'HEAD'
trace: built-in: git 'config' '--bool' 'branch.#hidden#.rebase'
trace: built-in: git 'config' '--bool' 'pull.rebase'
trace: built-in: git 'rev-parse' '-q' '--verify' 'HEAD'
trace: built-in: git 'fetch' '--update-head-ok'
trace: run_command: 'ssh' '-p' '#hidden#' 'git@#hidden.tld#' 'git-upload-pack '\''/#hidden#/#hidden#.git'\'''

1 Answer

Problem found using verbose output.

The newer server was trying to contact the Git endpoint over IPv6 and waiting for that connection attempt to time out before falling back to IPv4 (which works).
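
The roughly 2m7s (127 second) delay is consistent with the kernel's default TCP SYN retry backoff (net.ipv4.tcp_syn_retries = 6, so with exponential backoff the connection attempt gives up after about 1+2+4+8+16+32+64 = 127 seconds), which would explain why the hang is always the same length. Assuming gitlab.example.com and <port> below stand in for the hidden GitLab endpoint, one way to confirm this is to time an SSH connection with each address family forced:

$ sysctl net.ipv4.tcp_syn_retries
$ time ssh -4 -p <port> -T git@gitlab.example.com   # force IPv4 - should connect quickly
$ time ssh -6 -p <port> -T git@gitlab.example.com   # force IPv6 - should hang for ~127 s

The HTTPS clone trace below shows the same pattern: curl tries the IPv6 address first, waits for the timeout, and only then tries IPv4.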

$ GIT_CURL_VERBOSE=1 GIT_TRACE=1 git clone https://#hidden#/#hidden#/#hidden#.git
trace: built-in: git 'clone' 'https://#hidden#/#hidden#/#hidden#.git'
Cloning into '#hidden#'...
trace: run_command: 'git-remote-https' 'origin' 'https://#hidden#/#hidden#/#hidden#.git'
* Couldn't find host #hidden# in the .netrc file; using defaults
* About to connect() to #hidden# port 443 (#0)
*   Trying x:x:x:x:x:x:x:x...
* Connection timed out
*   Trying x.x.x.x...
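
As a workaround (a sketch only; the host name and port below are placeholders for the hidden GitLab endpoint), IPv4 can be forced for the affected connections. For SSH, set the address family per host in ~/.ssh/config; for the HTTPS path, making the resolver prefer IPv4 results in /etc/gai.conf changes the order in which applications that use getaddrinfo() (including curl) try addresses:

# ~/.ssh/config - force IPv4 for the GitLab host (placeholder name/port)
Host gitlab.example.com
    AddressFamily inet
    Port 2222

# /etc/gai.conf - prefer IPv4 addresses when resolving
precedence ::ffff:0:0/96 100

The proper fix is of course to sort out IPv6 connectivity between the newer instance and the GitLab host, but forcing IPv4 removes the 2-minute hang in the meantime.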