How to Debug Gitlab Problems
Created On: 2016-04-20 Updated On: 2019-09-02
Gitlab is a free git hosting platform that you can run on your own server. It helps you create and manage git repositories for your projects, and offers web-based file listing, commits, diffs, etc. It also provides project management tools such as issue tracking, a wiki, and Gitlab CI. Back when Github didn't have the $7 unlimited-private-repo plan and Bitbucket was not as good as it is now, Gitlab was a good choice. By the way, I don't like today's Github UI: small icons, important information like the clone URL hidden by default, and an SVN interface for every git repo that promotes using git as SVN; I like none of that. Today, Gitlab remains a good choice if you are serious about hosting git yourself.
Gitlab is very easy to install and get running using the omnibus package and its chef-based configuration command-line tool. The documentation also has enough information for some non-default settings. However, since Gitlab is a complex system, when things don't work as expected you usually don't know where the problem is. In this post I document how I debug a Gitlab installation when something is wrong. If you work for or contribute to Gitlab, please feel free to integrate this information into the official documentation.
[Update on 2019-09-02: New Gitlab versions are much easier to debug thanks to
the gitlab-ctl tail command. You can just keep this command running while
reproducing a problem; you don't need to know the locations of the log
files. This post is kept for history.]
This document applies to Gitlab 8.6.6, which is the version I am running. It does not apply to the old Gitlab 6.x releases, which I know because I have used them before. Of all the components Gitlab uses, workhorse is the one unfamiliar to system admins. Gitlab introduced workhorse in Gitlab 8. It serves static files for the web UI, serves large blobs for git, and acts as a reverse proxy for the ruby webapp. For why they stopped using nginx for those things, read A Brief History of GitLab Workhorse.
Steps to Debug Gitlab 8.6.6 Problems
- Check runsvdir-start and runsv processes.
status gitlab-runsvdir
ps -efw | grep runsv
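The check above can be scripted. A minimal sketch, assuming pgrep is available (the helper name is my own, not a gitlab command):

```shell
#!/bin/sh
# Verify that the runit supervisor tree (runsvdir and its per-service
# runsv children) is alive. Process names are from the omnibus install.
supervisor_running() {
    # succeed if at least one process matching the pattern is running
    pgrep -f "$1" > /dev/null
}

for p in runsvdir runsv; do
    if supervisor_running "$p"; then
        echo "$p: running"
    else
        echo "$p: NOT running"
    fi
done
```

If runsvdir itself is down, none of the per-service runsv processes will be supervised, so check it first.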
- Check runsv service states.
Service configuration and log files:
- nginx
  - conf: /var/opt/gitlab/nginx/conf/nginx.conf
  - conf: /var/opt/gitlab/nginx/conf/gitlab-http.conf (listens on 127.0.0.1:2050)
  - log: /var/log/gitlab/nginx/gitlab_access.log
  - log: /var/log/gitlab/nginx/gitlab_error.log
- workhorse (provides static file hosting and reverse proxy)
  - conf: /opt/gitlab/sv/gitlab-workhorse/run
  - listens on unix:/var/opt/gitlab/gitlab-workhorse/socket
  - log: /var/log/gitlab/gitlab-workhorse/current
- unicorn
  - conf: /var/opt/gitlab/gitlab-rails/etc/unicorn.rb
  - listens on 127.0.0.1:8080 and /var/opt/gitlab/gitlab-rails/sockets/gitlab.socket
  - log: /var/log/gitlab/unicorn/unicorn_stderr.log (INFO is also logged here)
  - log: /var/log/gitlab/unicorn/unicorn_stdout.log (this is empty)
- postgres
  - conf: /var/opt/gitlab/postgresql/data/postgresql.conf
  - listens on a unix domain socket; use -h /var/opt/gitlab/postgresql to connect
  - datadir: /var/opt/gitlab/postgresql/data/ (no log enabled here?)
  - connect to the gitlab db: sudo -u gitlab-psql /opt/gitlab/embedded/bin/psql -h /var/opt/gitlab/postgresql gitlabhq_production
  - log: /var/log/gitlab/postgresql/current
- redis
  - conf: /var/opt/gitlab/redis/redis.conf
  - listens on /var/opt/gitlab/redis/redis.socket
  - dump file: /var/opt/gitlab/redis/dump.rdb
- sidekiq (like celery: queue-based workers)
  - conf: /opt/gitlab/sv/sidekiq/run (started using bundle)
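To save retyping the paths above while tailing logs, I keep a tiny helper that maps a service name to its main log file. This is a sketch using the omnibus default paths listed above; the function name is my own:

```shell
#!/bin/sh
# Map a gitlab-omnibus service name to its main log file.
# Paths are the defaults listed above; adjust for your install.
log_for() {
    case "$1" in
        nginx)     echo /var/log/gitlab/nginx/gitlab_error.log ;;
        workhorse) echo /var/log/gitlab/gitlab-workhorse/current ;;
        unicorn)   echo /var/log/gitlab/unicorn/unicorn_stderr.log ;;
        postgres)  echo /var/log/gitlab/postgresql/current ;;
        *)         echo "unknown service: $1" >&2; return 1 ;;
    esac
}
```

Usage: tail -f "$(log_for unicorn)" while reproducing a problem.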
- Check for hanging runsv processes.
If a service is down, check whether there are hanging runsv processes. If there are, stop the gitlab-runsvdir service, then kill the runsv processes and their ./run children.
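The hunt for stragglers can be scripted. A sketch assuming a procps-ng pgrep (the -a flag prints the full command line); it only lists PIDs, so review the output before killing anything:

```shell
#!/bin/sh
# List leftover runsv processes and their ./run children so you can
# inspect them before killing. Nothing here kills automatically.
list_stragglers() {
    pgrep -ax runsv       # exact process name match; silent if none
    pgrep -af '\./run'    # full-cmdline match for ./run scripts
}

list_stragglers || echo "no hanging runsv processes"
# After inspecting: stop gitlab-runsvdir, then kill the listed PIDs, e.g.
#   sudo kill <pid>
```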
- Run gitlab check command.
gitlab-rake gitlab:check SANITIZE=true
- Run gitlab reconfigure command.
gitlab-ctl reconfigure