How to deploy a project using Docker with GitLab CI

I'm fairly new to docker and gitlab-ci with the docker runner.
The Docker runner works and I'm fine with it, except for one thing: it seems the Docker runner cannot see locally available images. That means I may have to create a custom registry, unless there's a way to make the docker command check the host's Docker daemon.
What I'm trying to achieve is this:
1. Build a Dockerfile and fetch a few other git repositories.
2. Create a new Docker image based on that Dockerfile.
3. Start a new Docker container on the host Docker daemon which will remain alive even after the job is done.
In other words, I'm trying to generate a Docker image and start/replace an existing service in the host's dockerd service.
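One way to let jobs reach the host's dockerd at all is to mount the host's Docker socket into the job containers via the runner's config.toml. A sketch, assuming you administer the runner and accept the security implications of exposing the host daemon to jobs:

# /etc/gitlab-runner/config.toml (excerpt)
[[runners]]
  executor = "docker"
  [runners.docker]
    image = "docker:stable"
    volumes = ["/var/run/docker.sock:/var/run/docker.sock"]

With this, docker build/run inside a job talks to the host daemon, so images built there are visible to it and containers outlive the job.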
Right now this is what I came up with, but it doesn't work: data isn't passed from one job to the other. And even if the build job worked, I doubt the Docker service I created would be accessible from the outside world.
stages:
  - test
  - prepare
  - build

# Build the Dockerfile
prepare_script:
  stage: prepare
  image: debian:stretch
  script:
    - apt-get update
    - apt-get install -y git python3
    - python3 prepare_project.py

# Build and deploy the docker image
build:
  variables:
    DOCKER_HOST: tcp://docker:2375/
    DOCKER_DRIVER: overlay2
  image: docker:stable
  services:
    - docker:dind
  stage: build
  script:
    - docker build -t my-project .
    # the image name must match the tag built above ("my-project", not "myproject")
    - docker run --add-host db:172.17.42.1 -d --name my-project-inst --restart always -p 8069:8069 my-project
How can I use gitlab-ci to automatically deploy Docker images into the host's Docker service?
The problem I'm trying to solve is generating the Dockerfile dynamically, so that fetching git repositories and submodules can be done without hand-modifying Dockerfiles.
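For passing the prepared files from one job to the next, GitLab job artifacts are the usual mechanism. A minimal sketch, assuming prepare_project.py generates the Dockerfile and a src/ tree (adjust the paths to what the script actually produces):

prepare_script:
  stage: prepare
  image: debian:stretch
  script:
    - apt-get update
    - apt-get install -y git python3
    - python3 prepare_project.py
  artifacts:
    # assumption: these are the files prepare_project.py generates
    paths:
      - Dockerfile
      - src/
    expire_in: 1 hour

Jobs in later stages download artifacts from earlier stages automatically, so the build job would see the generated files.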

Related

Is it possible to run a Docker Compose command before a job executes in GitLab CI

I am new to GitLab CI; it seems with GitLab CI it's Docker everywhere.
I was trying to run a MariaDB container before running the tests. In GitHub Actions it is very easy: just a docker-compose up -d command before my mvn.
When I came to GitLab CI, I tried to use the following job to achieve the same purpose.
test:
  stage: test
  image: maven:3.6.3-openjdk-16
  services:
    - name: docker
  cache:
    key: "${CI_JOB_NAME}"
    paths:
      - .sonar/cache
      - .m2/repository
  script: |
    docker-compose up -d
    sleep 10
    mvn clean verify sonar:sonar
But this does not work: docker-compose is not found.
You can make use of docker:dind (Docker-in-Docker) and run the docker commands inside another Docker container.
But docker-compose is not available there by default. It is recommended to build a custom image on top of DIND that includes docker-compose and push it to the GitLab image registry, so it can be reused across your jobs.
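A minimal sketch of such a custom image (the registry path below is a placeholder for your own project):

# Dockerfile: DIND plus docker-compose
FROM docker:dind
# assumption: the Alpine package repo ships docker-compose; otherwise install it via pip
RUN apk add --no-cache docker-compose

Build and push it once (e.g. as registry.gitlab.com/your-group/your-project/dind-compose:latest), then reference it from the image: keyword of your jobs.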

GitLab Docker Runner to reuse installed software layers

A very typical scenario with GitLab CI is to install a few packages you need for your jobs (linters, code coverage tools, deployment-specific helpers and so on) and to then run the actual stages/steps of building, testing and deploying your software.
The Docker runner is a very neat and clean solution, but it seems very wasteful to always re-run the steps that install the base software. Normally, Docker can cache such layers, but with the way the GitLab Docker runner works, that doesn't happen.
We realize that setting up another project to produce pre-configured Docker images would be one solution, but are there any better ones? Basically, what we want to say is: "If the before section hasn't changed, you can reuse the image from last time, no need to reinstall wget or whatever."
Any solution like that out there?
You can use the registry of your GitLab project, e.g.:
images:
  stage: build
  image: docker
  services:
    - docker:dind
  script:
    # login
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    # pull the current image; in case the image does not exist yet, do not stop the script:
    - docker pull $CI_REGISTRY_IMAGE:latest || true
    # build with the pulled image as cache:
    - docker build --pull --cache-from $CI_REGISTRY_IMAGE:latest -t "$CI_REGISTRY_IMAGE:latest" .
    # push the final image:
    - docker push "$CI_REGISTRY_IMAGE:latest"
This way docker build will profit from the work done by the last run of the job. See the docs. You may also want to avoid unnecessary runs with some rules.
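For example, on newer GitLab versions a rules:changes clause can restrict the job to pipelines where the Dockerfile actually changed (a sketch; adapt the paths to your project):

images:
  # ... same job as above ...
  rules:
    - changes:
        - Dockerfile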

How to push multiple images needed for docker-compose to GitLab registry in GitLab CI?

I recently got into CI/CD, and a good starting point for me was GitLab, since they provide an easy interface for it. That's how I learned what pipelines and stages are, but I have run into some kind of contradictory thought about GitLab CI running on Docker.
My app runs on Docker Compose. It contains (blah blah) that makes it easy to build & run containers. Each service in the Docker Compose file creates a single Docker container, except the php-fpm one, which can do the thing called "horizontal scaling", so I can scale it later.
I will use that Docker Compose for production, I am currently using it in development and I want to use it too in CI/CD pipelines.
However, the .gitlab-ci.yml provides support for only one image, so I have to build it and push it to either the GitLab Registry or Docker Hub in order to pull it later in the CI/CD process.
How can I build my Docker Compose services as single images in order to push them to the Registry/Docker Hub so I can pull them in the CI/CD?
My project contains a docker folder and a docker-compose.yml. In the docker folder, each service has its own separate directory (php-fpm, nginx, mysql, etc.) and each one (prepare yourself) contains a Dockerfile with build details, especially the php-fpm one (deps and libs are strong with this one)
Each service in the docker-compose.yml has a build context in each of their own folder.
If I was unclear, I can provide additional info.
However the .gitlab-ci.yml provides support for only one image
This is not true. From the official documentation:
Your image will be named after the following scheme:
<registry URL>/<namespace>/<project>/<image>
GitLab supports up to three levels of image repository names.
The following examples of image tags are valid:
registry.example.com/group/project:some-tag
registry.example.com/group/project/image:latest
registry.example.com/group/project/my/image:rc1
So the solution to your problem is simple: just build the individual images and push them to the GitLab container registry under different image names.
If you would like an example, my pipelines are set up like this:
.template: &build_template
  image: docker:stable
  services:
    - docker:dind
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker pull $CI_REGISTRY_IMAGE/$IMAGE_NAME:latest || true
    # only add the $CI_COMMIT_TAG tag when the pipeline runs for a git tag
    - if [ -n "${CI_COMMIT_TAG+x}" ];
      then docker build
        --cache-from $CI_REGISTRY_IMAGE/$IMAGE_NAME:latest
        --file $DOCKERFILE_NAME
        --tag $CI_REGISTRY_IMAGE/$IMAGE_NAME:$CI_COMMIT_SHA
        --tag $CI_REGISTRY_IMAGE/$IMAGE_NAME:$CI_COMMIT_TAG
        --tag $CI_REGISTRY_IMAGE/$IMAGE_NAME:latest . ;
      else docker build
        --cache-from $CI_REGISTRY_IMAGE/$IMAGE_NAME:latest
        --file $DOCKERFILE_NAME
        --tag $CI_REGISTRY_IMAGE/$IMAGE_NAME:$CI_COMMIT_SHA
        --tag $CI_REGISTRY_IMAGE/$IMAGE_NAME:latest . ;
      fi
    - docker push $CI_REGISTRY_IMAGE/$IMAGE_NAME:$CI_COMMIT_SHA
    - if [ -n "${CI_COMMIT_TAG+x}" ]; then
        docker push $CI_REGISTRY_IMAGE/$IMAGE_NAME:$CI_COMMIT_TAG;
      fi
    - docker push $CI_REGISTRY_IMAGE/$IMAGE_NAME:latest
build:image1:
  <<: *build_template
  variables:
    IMAGE_NAME: image1
    DOCKERFILE_NAME: Dockerfile.1

build:image2:
  <<: *build_template
  variables:
    IMAGE_NAME: image2
    DOCKERFILE_NAME: Dockerfile.2
And you should be able to pull the same image using $CI_REGISTRY_IMAGE/$IMAGE_NAME:$CI_COMMIT_SHA in later pipeline jobs or your compose file (provided that the variables are passed to where you run your compose file).
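For example, a compose file can reference those images through environment variable substitution (a sketch; the service names are made up):

# docker-compose.yml
version: "3"
services:
  app:
    image: ${CI_REGISTRY_IMAGE}/image1:${CI_COMMIT_SHA}
  worker:
    image: ${CI_REGISTRY_IMAGE}/image2:${CI_COMMIT_SHA}

docker-compose substitutes ${...} from the environment, so CI_REGISTRY_IMAGE and CI_COMMIT_SHA must be set where you run it.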
You don't need dind to run a docker-compose stack. You can run multiple docker-compose up commands concurrently by giving each run a unique project name with -p:
acceptance_testing:
  stage: test
  before_script:
    - docker-compose -p $CI_JOB_ID up -d
  script:
    # "app" is a placeholder; docker-compose exec needs the service name to run the command in
    - docker-compose -p $CI_JOB_ID exec -T app /run/your/test/suite.sh
  after_script:
    - docker-compose -p $CI_JOB_ID down -v --remove-orphans || true
I think you're searching for something like this:
# .gitlab-ci.yml
image: docker
services:
  - docker:dind

build:
  script:
    - apk add --no-cache py-pip
    - pip install docker-compose
    - docker-compose up -d
Also good to know:
In Docker, what's the difference between a container and an image?
Building Docker images with GitLab CI/CD
I have a Drupal project which contains two images: one for the Drupal source code and another for the MySQL database.
I tagged them:
docker build -t registry.mysite.net/drupal/blog/blog_db:v1.3 mysql/db
docker build -t registry.mysite.net/drupal/blog/blog_drupal:v1.3 src/drupal
Where:
registry.mysite.net is the URL of the GitLab site, and can be found under the Container Registry settings.
drupal is the group name,
blog is the project name,
blog_db is the image for the database, and mysql/db is the location of its Dockerfile; likewise for the other image.
And then to push it to gitlab use:
docker push registry.mysite.net/drupal/blog/blog_db:v1.3
docker push registry.mysite.net/drupal/blog/blog_drupal:v1.3
Hope this might help someone.
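If you would rather have the pipeline build and push these instead of doing it by hand, a sketch reusing the same commands (in practice the v1.3 tag would come from a variable rather than be hard-coded):

build_images:
  image: docker:stable
  services:
    - docker:dind
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.mysite.net
    - docker build -t registry.mysite.net/drupal/blog/blog_db:v1.3 mysql/db
    - docker build -t registry.mysite.net/drupal/blog/blog_drupal:v1.3 src/drupal
    - docker push registry.mysite.net/drupal/blog/blog_db:v1.3
    - docker push registry.mysite.net/drupal/blog/blog_drupal:v1.3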

The container name is already in use by container (GitLab CI)

I am getting the following error during the test_image step when running tests against Docker images in my GitLab CI pipeline. I cannot reproduce it locally; it only occurs on the GitLab runner box. Any ideas?
The container name "/common_run_1" is already in use by container
image: docker:latest

stages:
  - build
  - test
  - release

before_script:
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN myregistry.gitlab

build_image:
  stage: build
  script:
    - docker build --pull -t $CONTAINER_TEST_IMAGE .
    - docker-compose up -d --build
    - docker push $CONTAINER_TEST_IMAGE

pylint:
  stage: test
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker-compose run common pylint common

test_image:
  stage: test
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker-compose run common nosetests common

push_master_image:
  stage: release
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker tag $CONTAINER_TEST_IMAGE $CONTAINER_MASTER_IMAGE
    - docker push $CONTAINER_MASTER_IMAGE
  only:
    - master

push_prod_image:
  stage: release
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker tag $CONTAINER_TEST_IMAGE $CONTAINER_PROD_IMAGE
    - docker push $CONTAINER_PROD_IMAGE
  only:
    - prod
Update:
There are multiple suggestions to simply use docker-compose down or docker stop. I have done this on my gitlab-runner box (completely cleaned out Docker processes, images, volumes, and networks) and re-submitted the pipeline request. I still get the same error in the GitLab pipeline. That makes me think there is a concurrency issue in the test stage. Furthermore, if I add a test2 stage and place the pylint script inside it, the pipeline succeeds, further reinforcing the idea of a concurrency problem.
Your test stage runs two docker-compose run commands concurrently, and both use the same default container name. You can change this by adding --name test1 to the docker-compose run of the first test and --name test2 to that of the second test.
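Applied to the pipeline above, that would look roughly like this:

pylint:
  stage: test
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker-compose run --name test1 common pylint common

test_image:
  stage: test
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker-compose run --name test2 common nosetests common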
Original Answer
Run docker ps -a and it will list which container names are already in use. This mostly happens because you have already run the containers using docker-compose up and they are still up.
Your options are:
1. Run docker-compose down. This should bring down the already-running containers and will most probably solve your error.
2. If option 1 fails, see which containers are running and stop them with docker stop <container_name>.
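For the specific name in the error message, the manual cleanup on the runner box would be something like:

# stop and remove the leftover container, then retry the pipeline
docker stop common_run_1
docker rm common_run_1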

How do I deploy from GitLab CI to a Google Container Engine instance using Docker?

I am trying to set up automated deployment using a GitLab CI runner to deploy our 4-container app via docker-compose. I can pull the container images down using docker pull commands, but I'm stuck on how to connect to the Google Compute Engine instance in order to run the full docker-compose script.
Typically, from my local machine, I run something like:
eval $(docker-machine env <machine-instance>)
docker-compose up -d
But my .gitlab-ci.yml script doesn't have docker-machine available.
1. Do I have to install docker-machine via the script section in my .gitlab-ci.yml file?
2. How do I provision the instance without creating a new one every time? Normally, from my local host, I would run docker-machine create ... once and then just use the eval command above to reconnect to the instance. But how would this work with CI?
Here's a sample of my .gitlab-ci.yml:
deploy staging:
  image: docker:latest
  services:
    - docker:dind
  environment: staging
  stage: deploy
  before_script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN my-registry.githost.io
  script:
    - docker pull my-registry.githost.io/group/project1:develop
    - docker pull my-registry.githost.io/group/project2:develop
    - docker pull my-registry.githost.io/group/project3:develop
    - docker pull my-registry.githost.io/group/project4:develop
    - docker-machine ls
Not sure what you need docker-machine for in this case; you might want to get rid of it.
But to go back to your question: the Docker image you're using comes with neither docker-machine nor docker-compose:
https://github.com/docker-library/docker/blob/36e2107fb879d5d5c3dbb5d8d93aeef0a2d45ac8/1.12/Dockerfile
So you will need to create a new image (or find an existing one) that has those two installed.
So in the .gitlab-ci.yml, instead of image: docker:latest, it's going to be something like image: mydocker.
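A rough sketch of such an image; the docker-machine version and install method are assumptions, so check the docker-machine install docs for the current release:

# Dockerfile for a hypothetical "mydocker" image
FROM docker:latest
RUN apk add --no-cache py-pip curl \
 && pip install docker-compose \
 && curl -L https://github.com/docker/machine/releases/download/v0.16.2/docker-machine-Linux-x86_64 \
      -o /usr/local/bin/docker-machine \
 && chmod +x /usr/local/bin/docker-machine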
You may have to install docker-machine in the GitLab CI runner to use it with GCE:
https://docs.docker.com/machine/install-machine/
https://docs.docker.com/machine/drivers/gce/
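To avoid provisioning a new instance on every run, one option is docker-machine's generic driver, which attaches to an existing host over SSH instead of creating one. A sketch, assuming the GCE instance already exists and $DEPLOY_HOST / $SSH_PRIVATE_KEY_FILE are CI variables you define yourself:

deploy staging:
  image: mydocker   # hypothetical image with docker-machine and docker-compose, as above
  stage: deploy
  script:
    # register the existing host with docker-machine; the generic driver does not create a VM,
    # but note it (re)provisions Docker on the host over SSH
    - docker-machine create --driver generic
        --generic-ip-address=$DEPLOY_HOST
        --generic-ssh-key=$SSH_PRIVATE_KEY_FILE
        gce-staging
    - eval $(docker-machine env gce-staging)
    - docker-compose up -d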