Ansible + automatic git pull in a cloud virtual machine cluster



Good day


We run several cloud clusters, each with a large number of virtual machines, all hosted at Hetzner. In each cluster there is one master machine; a snapshot is taken from it and automatically rolled out to all virtual machines within the cluster.

This scheme does not let us use gitlab-runners in the usual way, since many problems arise when a lot of identically registered runners appear. That prompted us to find a workaround and to write this article/manual.

This is probably not best practice, but this solution seemed as convenient and simple as possible.

The tutorial itself follows below.

Required packages on the control machine:


The general principle of implementing an automatic git pull on all virtual machines is that you need one machine with Ansible installed. From this machine, Ansible sends the git pull commands and restarts the service that has been updated. We created a separate virtual machine outside the clusters for this purpose and installed ansible and gitlab-runner on it.


On the organizational side, you need to register the gitlab-runner, run ssh-keygen, put this machine's public SSH key into .ssh/authorized_keys on the master machine, and open port 22 on the master machine for Ansible.
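Sketched as commands, the key setup might look like the following; the key path and the master's address are hypothetical placeholders, adjust them for your environment:

```shell
# Generate a key pair for the Ansible machine (hypothetical path, no passphrase).
ssh-keygen -t ed25519 -N '' -f ./ansible_key

# Copy the public key into .ssh/authorized_keys on the master machine so Ansible
# can log in; master.example.com is a placeholder for your master's address:
# ssh-copy-id -i ./ansible_key.pub root@master.example.com
```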

Now configure ansible


Since our goal is to automate everything possible, in /etc/ansible/ansible.cfg we uncomment the line host_key_checking = False so that Ansible does not ask for confirmation when connecting to new machines.
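For reference, the relevant fragment of /etc/ansible/ansible.cfg after the change looks like this:

```ini
[defaults]
# Skip the interactive host key prompt for machines Ansible has not seen before.
host_key_checking = False
```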

Next, you need to automatically generate an inventory file for Ansible, from which it will pick up the IPs of the machines on which to run git pull.

We generate this file using the Hetzner API; you can take the list of hosts from AWS, Azure, or your own database (you do have an API somewhere that lists your running machines, right?).

The structure of the inventory file matters to Ansible; it should look like this:

    [group]
    ip-address
    ip-address

    [group2]
    ip-address
    ip-address


To generate such a file, let's write a simple script (call it vm_list):

    #!/bin/bash
    echo "[group]" > /etc/ansible/cloud_ip
    <CLI command that prints the IPs of the group cluster> >> /etc/ansible/cloud_ip
    echo "" >> /etc/ansible/cloud_ip
    echo "[group2]" >> /etc/ansible/cloud_ip
    <CLI command that prints the IPs of the group2 cluster> >> /etc/ansible/cloud_ip
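As a sanity check, here is the same idea with hard-coded placeholder IPs instead of a cloud CLI call (all addresses and the file path are hypothetical); the resulting file has exactly the structure Ansible expects:

```shell
# Write a sample inventory; real IPs would come from your provider's API or CLI.
{
  echo "[group]"
  echo "10.0.0.11"
  echo "10.0.0.12"
  echo ""
  echo "[group2]"
  echo "10.0.1.11"
} > ./cloud_ip_example

cat ./cloud_ip_example
```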

It's time to check that Ansible works and picks up the generated list of IP addresses:

    /etc/ansible/vm_list && ansible -i /etc/ansible/cloud_ip -m shell -a 'hostname' group

The output should contain the hostnames of the machines on which the command was executed.
A few words about the syntax: -i points Ansible at the inventory file, -m shell selects the shell module, -a passes the command to execute, and the trailing group is the host group from the inventory to run it on.


Moving on, let's try to do a git pull on our virtual machines:

    /etc/ansible/vm_list && ansible -i /etc/ansible/cloud_ip -m shell -a 'cd /path/to/project && git pull' group

If the output shows already up to date, or objects being fetched from the repository, then everything works.

Now, what this was all for


We'll teach our script to run automatically on every commit to the master branch in GitLab.

First, let's tidy the script up and put it into an executable file (call it exec_pull):

    #!/bin/bash
    # $1 - the shell command to run on the hosts, $2 - the inventory host group
    /etc/ansible/vm_list && ansible -i /etc/ansible/cloud_ip -m shell -a "$1" "$2"
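To see how the two arguments map onto the ansible invocation without a live cluster, here is a dry-run variant (a sketch introduced for illustration) that only prints the command it would execute:

```shell
# Dry-run version of exec_pull: prints the ansible command instead of running it.
# $1 - shell command to execute on the hosts, $2 - inventory host group.
exec_pull_dry_run() {
  echo "ansible -i /etc/ansible/cloud_ip -m shell -a '$1' $2"
}

exec_pull_dry_run 'cd /path/to/project && git pull' group
# prints: ansible -i /etc/ansible/cloud_ip -m shell -a 'cd /path/to/project && git pull' group
```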

We go to our GitLab and create the file .gitlab-ci.yml in the project.
Inside we put the following:

    variables:
      GIT_STRATEGY: none
      VM_GROUP: group

    stages:
      - pull
      - restart

    run_exec_pull:
      stage: pull
      script:
        - /etc/ansible/exec_pull 'cd /path/to/project/'$CI_PROJECT_NAME' && git pull' $VM_GROUP
      only:
        - master

    run_service_restart:
      stage: restart
      script:
        - /etc/ansible/exec_pull 'your_app_stop && your_app_start' $VM_GROUP
      only:
        - master

Everything is ready. Now every commit to the master branch automatically pulls the fresh code and restarts the service on all machines in the cluster.


When carrying the .yml over to other projects, you only need to change the service name for the restart step and the name of the host group on which the Ansible commands will run.

Source: https://habr.com/ru/post/472064/

