
I’ve been using GitLab for a while now and I really like it. I can’t objectively say whether it’s better than GitHub or not (I have a few projects on GitHub but I rarely make changes to them, and even more rarely use the web UI), but one of the things I appreciate about GitLab is that I can run my own copy of it and store my own stuff in it. I also use it every day for work and at home, so I’m much more familiar with it than GitHub.

Recently I’ve been playing with the CI aspect of GitLab. I’ve used Jenkins to handle “CI duties” in the past, and GitLab and Jenkins work quite well together, but I wanted to play around with GitLab’s built-in CI because of how tightly integrated it is (and since I run a small GitLab here at home, I can use the same system for my runners and don’t have to worry about setting up Jenkins).

I found it quite easy to set up, although there are a few things to be aware of and I wanted to note them here, partly so that if I need to do it again in the future it’ll be easy to refer back to.

Create the user to run the service:

# groupadd -g 2001 otter
# useradd -u 2001 -g 2001 -d /srv/www/otter -s /bin/bash otter
# chmod 0711 /srv/www/otter

The above creates the “otter” user and group that will run the service and makes /srv/www/otter traversable, since we will check out the git repository (as the otter user) and it will live in /srv/www/otter/otter/:

# su - otter
$ mkdir otter
$ cd otter
$ git clone https://[gitlab-url]/otter.git

This project is public, so there is no need for authentication. If you had a private project, you could still authenticate over HTTPS by creating a ~/.netrc file that looks like this:

machine [gitlab-host]
login [gitlab-user]
password [password]
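Since the .netrc stores the password in plain text, it’s a good idea to make sure only the otter user can read it:

$ chmod 600 ~/.netrc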

After this I added the otter user to /etc/sudoers so it could restart otter.service, but that didn’t work out so well. I had used the following in .gitlab-ci.yml:

    script:
        - ssh -t otter@production.host "cd otter && git pull && sudo /bin/systemctl restart otter.service"

Because the CI job does not allocate a pseudo-terminal, which sudo ultimately requires here (even though we set up the SSH private key without a passphrase, so the connection itself is non-interactive), the deployment failed. This means that while we can deploy the new code by calling git pull, we cannot restart the gunicorn daemon that is serving the content. A workaround is to set up a cron job that runs every minute and looks for a specific file, since our ssh call can easily do something like touch /srv/www/otter/otter.restart after the git pull. So editing /etc/crontab and adding:

* * * * * root test -f /srv/www/otter/otter.restart && /bin/systemctl restart otter.service && rm -f /srv/www/otter/otter.restart

does the trick. This is actually a bit nicer than trying to use sudo: it is still only root that can restart the service, and since /srv/www/otter is writable only by the otter user and the flag file lives outside the git repository, nothing that isn’t a root or otter process can create it. This removes the need to change anything in /etc/sudoers or to give the user any special permissions. The downside is that it adds an entry to /var/log/cron every minute, but you can change the interval to whatever you want (e.g. use “*/5” to check every 5 minutes instead).

To use this, the .gitlab-ci.yml file was updated to:

    script:
        - ssh -t otter@production.host "cd otter && git pull && touch /srv/www/otter/otter.restart"

Obviously there are other considerations here. For instance, if you have database changes, the above isn’t sufficient for an automatic deployment, so a script that applies database changes as part of the deployment would probably be good. This could be an external script, something your systemd unit handles, or the web application itself. One thought that comes to mind is to keep a configuration table in the database that stores a schema version; if the stored version is lower than what the application expects, the migration is performed automatically at start. (Note to self, I should implement this…) A rough sketch of the external-script idea is below.
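For example, a small deploy script along these lines could live outside the application and be called during deployment. To be clear, this is just a sketch: the schema_info table and the numbered files in migrations/ are assumptions for illustration, not something the otter project actually has.

#!/bin/bash
# hypothetical deploy-time migration check for the otter database
# assumes a schema_info table with a single "version" column and
# numbered migration files like migrations/2.sql, migrations/3.sql, ...
set -euo pipefail

MIGRATIONS_DIR=/srv/www/otter/otter/migrations
current=$(mysql -N -B otter -e "SELECT version FROM schema_info LIMIT 1")

for file in $(ls "$MIGRATIONS_DIR"/*.sql | sort -V); do
    target=$(basename "$file" .sql)
    if [ "$target" -gt "$current" ]; then
        echo "applying migration $target"
        mysql otter < "$file"
        mysql otter -e "UPDATE schema_info SET version = $target"
    fi
done

Whether something like this runs from cron, from an ExecStartPre= in the systemd unit, or inside the application at startup is mostly a matter of taste.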

For reference, the full .gitlab-ci.yml file looks like:

image: centos:7

stages:
    - test
    - deploy

before_script:
    - yum install which -y
    # install ssh-agent if not already installed, it is required by docker
    - 'which ssh-agent || ( yum install openssh-clients -y )'
    # run ssh-agent (inside the build environment)
    - eval $(ssh-agent -s)
    # add the ssh key stored in SSH_PRIVATE_KEY variable to the agent store
    - ssh-add <(echo "$SSH_PRIVATE_KEY")
    # for docker builds disable host key checking although this can lead to
    # mitm attacks; only use this in docker or it will overwrite the host
    # ssh config!
    - mkdir -p ~/.ssh
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'

test:
    stage: test
    script:
        - yum update -y
        - yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm -y
        - yum install mariadb-devel mariadb-server python-virtualenv python-pip gcc gcc-c++ freetype-devel libpng-devel python-requests MySQL-python mailx python-simplejson vim httpd mod_wsgi -y
        - sh setup.sh

production:
    stage: deploy
    script:
        - ssh -t otter@production.host "cd otter && git pull && touch /srv/www/otter/otter.restart"
    only:
        - master
    environment: production

The key is handled using a project variable in GitLab, which can be set by going to the project in question, clicking the gear icon, and selecting “Variables”. Add a variable named SSH_PRIVATE_KEY with the contents of a private key you generate; the corresponding public key gets added to the ~/.ssh/authorized_keys file for, in this case, the otter user on “production.host”. You can read more about variables in the GitLab documentation.
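If you don’t already have a dedicated key pair for this, generating one and installing the public key can be done along these lines (the file name and comment are just examples); the contents of the private key file are what you paste into SSH_PRIVATE_KEY:

$ ssh-keygen -t rsa -b 4096 -N "" -C "gitlab-ci deploy" -f ~/.ssh/otter_deploy
$ ssh-copy-id -i ~/.ssh/otter_deploy.pub otter@production.host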

Finally, and I mention this because I found writing systemd unit files a little bit painful at first (especially with mod_uwsgi, which is why I opted to use gunicorn and mod_proxy instead, see my earlier blog post about this for more details), I leave you with the unit files themselves. There are two files in question: otter.socket and otter.service.

otter.socket sets up the listeners:

[Unit]
Description=otter socket

[Socket]
ListenStream=/run/otter/socket
ListenStream=0.0.0.0:5000

[Install]
WantedBy=sockets.target

and otter.service runs the gunicorn service:

[Unit]
Description=otter daemon
Requires=otter.socket
After=network.target

[Service]
PIDFile=/run/otter/pid
User=otter
Group=otter
WorkingDirectory=/srv/www/otter/otter
ExecStart=/srv/www/otter/otter/flask/bin/gunicorn --pid /run/otter/pid --access-logfile /srv/www/otter/otter.log app:app
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s TERM $MAINPID
PrivateTmp=true

[Install]
WantedBy=multi-user.target

These files need to live in /etc/systemd/system/ and are enabled using:

# systemctl enable otter.socket
# systemctl enable otter.service

I also have a /etc/tmpfiles.d/otter.conf to create the /run/otter directory that holds the socket and PID files:

d /run/otter 0755 otter otter -

This can be applied immediately (without rebooting) using:

# systemd-tmpfiles --create

You’d want to do that before starting the services for the first time.

Hopefully this is helpful for someone interested in running a Flask application as a service under systemd with some CI integration using GitLab. It covers a bit more than just using GitLab CI to deploy remotely, but all the pieces are tied together and it seemed odd to focus on one part of the picture without giving some details on the rest. One thing I did consider was making the systemd units user services, so that the otter user could run systemctl restart otter.service itself, but I didn’t get around to it and it didn’t really matter much to me (perhaps something to fiddle with in the future); a rough sketch of what that might look like is below.
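For what it’s worth, my understanding is that the user-service route would look roughly like this (untested, so treat it as a sketch): the unit files move under the otter user’s own systemd instance (the [Install] targets would likely need adjusting, e.g. default.target instead of multi-user.target), and then, as the otter user:

$ mkdir -p ~/.config/systemd/user
$ cp /etc/systemd/system/otter.{socket,service} ~/.config/systemd/user/
$ systemctl --user daemon-reload
$ systemctl --user enable otter.socket otter.service

As root, lingering would need to be enabled so the otter user’s systemd instance keeps running without an active login session:

# loginctl enable-linger otter

At that point the CI job could run systemctl --user restart otter.service over ssh instead of touching the flag file.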

I’d love any feedback or ideas for improvement. This is all pretty new to me, so undoubtedly there are ways it could be implemented better, but this works and I felt like sharing. =)
