I'm trying to build a CI pipeline with GitLab CI/CD. My project is a really simple API based on Symfony. To create a consistent environment I'm using docker-compose with four very simple containers (nginx, PHP, MySQL & Composer). My .gitlab-ci.yaml looks like this:
stages:
  - setup

setup:
  stage: setup
  before_script:
    - docker-compose up -d
  script:
    - sleep 15
    - docker-compose exec -T php php bin/console doctrine:schema:create
  after_script:
    - [...]
    - docker-compose down
The problem I'm encountering is that the CI script does not wait for the containers started by docker-compose up -d to actually be ready. To bypass this I've added this stupid sleep.
Is there a better way to do this?
To save some time for the people searching for this, I implemented the solution suggested in the comments by @gbrener.
The idea: wait until the log line that shows the container is up appears, then continue the pipeline.
1 - Pick the log line to use as the checkpoint. I used the last log line of my container. Ex: Generated backend app.
2 - Get the container name. Ex: ai-server-dev.
3 - Create a shell script like the one below and name it, e.g., wait_server.sh:
#!/bin/bash
# Poll the container's most recent log line until the readiness message appears.
while ! docker logs ai-server-dev --tail=1 | grep -q "Generated backend app";
do
    sleep 10
    echo "Waiting for backend to load ..."
done
4 - Replace the 'sleep 15' from the question with 'sh wait_server.sh' to run the script in your pipeline.
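Wired into the job from the question, the setup stage would then look roughly like this (assuming wait_server.sh sits in the repository root and checks the container you care about):

setup:
  stage: setup
  before_script:
    - docker-compose up -d
  script:
    - sh wait_server.sh   # replaces the sleep 15
    - docker-compose exec -T php php bin/console doctrine:schema:create
  # after_script unchanged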
I'm a total newbie when it comes to CI/CD, so I ask your pardon in advance for not using the right terms or doing stupid things.
I have a docker-compose file which I can use to start my application with sudo docker-compose up -d. It works fine locally, but I also have a remote virtual machine which I want to use to test my application.
I want to run some tests (I will implement them later on) and deploy the app, if everything is OK, with every push to my repository. I looked into the docs, installed gitlab-runner, and tried this for a .gitlab-ci.yml file:
build-job:
  stage: build
  script:
    - echo "Hello, $GITLAB_USER_LOGIN!"

test-job1:
  stage: test
  script:
    - echo "This job tests something"

deploy-prod:
  stage: deploy
  script:
    - echo "Restarting containers.."
    - cd /path/too/app/repo
    - git pull
    - sudo docker-compose down && sudo docker-compose up -d
However, when I tried the Docker runner, it gave me an error saying it could not find the path to the directory. I understand this is because it runs every job in a separate container. How can I restart my application containers (preferably with Compose) on the VM? Is there a better approach to achieve what I want to do?
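For reference, one approach that fits this setup is to register a second runner directly on the VM with the shell executor and pin the deploy job to it via a tag, so the job runs on the VM itself instead of in a throwaway container. A minimal sketch (the deploy-vm tag and the sudo rights for the gitlab-runner user are assumptions):

deploy-prod:
  stage: deploy
  tags:
    - deploy-vm   # hypothetical tag of a shell-executor runner registered on the VM
  script:
    - cd /path/too/app/repo
    - git pull
    - sudo docker-compose down && sudo docker-compose up -d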
I am a bit lost with automated testing using GitLab CI. I hope I can explain my problem so somebody can help me. I'll try to explain the situation first, after which I'll try to ask a question (which is harder than it sounds).
Situation
Architecture
React frontend with Jest unit tests and Cypress e2e tests
Django API server 1 including a Postgres database and tests
Django API server 2 with a MongoDB database (which communicates with the other API)
GitLab
For the two APIs, there is a Dockerfile and a docker-compose file. These work fine and are set up correctly.
We are using GitLab for CI/CD; there we have the following stages in this order:
Build: where the Docker images for 1, 2 & 3 are built separately and pushed to the private registry
Test: where the unit tests and e2e tests (should) run
Release: where the Docker images are released
Deploy: where the Docker images are deployed
Goal
I want to set up the GitLab CI such that it runs the Cypress tests. But for this, all the built Docker images are needed. Currently, I am not able to get all the containers running together when performing the end-to-end tests.
Problem
I don't really see how I would achieve this.
Can I use the images that are built in the build stage for my e2e tests, and can somebody give me an example of how this would be achieved? (By running the built images as services?)
Do I need one docker-compose file including all containers and databases?
Do I even need dind (Docker-in-Docker)?
I hope somebody can give me some advice on how to achieve this. An example would be even better but I don't know if somebody would want to do that.
Thanks for taking the time to read!
(If needed) An example of the CI config for API server 1:
build-api:
  image: docker:19
  stage: build
  services:
    - docker:19-dind
  script:
    - cd api
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker pull $IMAGE_TAG_API:latest || true
    - docker build -f ./Dockerfile --cache-from $IMAGE_TAG_API:latest --tag $IMAGE_TAG_API:$CI_COMMIT_SHA .
    - docker push $IMAGE_TAG_API:$CI_COMMIT_SHA

test-api:
  image: docker:19
  stage: test
  services:
    - postgres:12.2-alpine
    - docker:19-dind
  variables:
    DB_NAME: project_ci_test
    POSTGRES_HOST_AUTH_METHOD: trust
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker pull $IMAGE_TAG_API:$CI_COMMIT_SHA
    - docker run $IMAGE_TAG_API:$CI_COMMIT_SHA sh -c "python manage.py test"
  after_script:
    - echo "Pytest tests complete"
  coverage: "/TOTAL.+ ([0-9]{1,3}%)/"

release-api-staging:
  image: docker:19
  stage: release
  services:
    - docker:19-dind
  only:
    refs: [ master ]
    changes: [ ".gitlab-ci.yml", "api/**/*" ]
  environment:
    name: staging
  script:
    - docker pull $IMAGE_TAG_API:$CI_COMMIT_SHA
    - docker tag $IMAGE_TAG_API:$CI_COMMIT_SHA $IMAGE_TAG_API:latest
    - docker push $IMAGE_TAG_API:latest
The answer is a bit late, but I'll still try to explain the approach briefly for other developers with the same issues. I also created an example project in GitLab containing 3 microservices, where Server A runs end-to-end tests and depends on Server B and Server C.
When e2e testing full-stack applications, you have to either:
mock all the responses of the microservices;
test against a deployed environment;
or spin up the environment temporarily in the pipeline.
As you noted, you want to spin up the environment temporarily in the pipeline. The following steps should be taken:
Deploy all backends as Docker images in GitLab's private registry;
Mimic your docker-compose.yml services in one job in the pipeline;
Connect the dots together.
Deploy backends as docker images in GitLab private registry
First you have to publish your Docker images in GitLab's private registry. You do this because you can then reuse those images in another job. For this approach you need docker:dind. A simple example job that publishes to a private registry on GitLab looks like:
before_script:
  - echo -n $CI_JOB_TOKEN | docker login -u gitlab-ci-token --password-stdin $CI_REGISTRY

publish:image:docker:
  stage: publish
  image: docker
  services:
    - name: docker:dind
      alias: docker
  variables:
    CI_DOCKER_NAME: ${CI_REGISTRY_IMAGE}/my-docker-image
  script:
    - docker pull $CI_REGISTRY_IMAGE || true
    - docker build --pull --cache-from $CI_REGISTRY_IMAGE --tag $CI_DOCKER_NAME --file Dockerfile .
    - docker push $CI_DOCKER_NAME
  only:
    - master
To see a real-world example, I have an example project that is publicly available.
Mimic your docker-compose.yml services in 1 job in the pipeline
Once you have dockerized all backends and published the images to the private registry, you can start to mimic your docker-compose.yml with a GitLab job. A basic example:
test:e2e:
  image: ubuntu:20.04
  stage: test
  services:
    - name: postgres:12-alpine
      alias: postgress
    - name: mongo
      alias: mongo
    # my backend image
    - name: registry.gitlab.com/[MY_GROUP]/my-docker-image
      alias: server
  script:
    - curl http://server:3000    # expecting the server to expose port 3000, this should work
    - curl http://mongo:27017    # should work
    - curl http://postgress:5432 # should work!
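One dot that still needs connecting: the backend image has to know where to find the databases. Job-level CI/CD variables are also passed to the service containers, so one way is to add a variables block to the test:e2e job. The variable names below are assumptions; use whatever your backend actually reads:

test:e2e:
  # ...image, stage and services as above...
  variables:
    DATABASE_HOST: postgress          # alias of the Postgres service
    DATABASE_PORT: "5432"
    MONGO_URL: mongodb://mongo:27017  # alias of the mongo service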
Run the tests
Now that everything is running in a single job in GitLab, you can simply start your front-end in detached mode and run Cypress to test it. Example:
script:
  - npm run start &                # start in detached mode
  - wait-on http://localhost:8080  # see: https://www.npmjs.com/package/wait-on
  - cypress run                    # make sure cypress is available as well
Conclusion
Your docker-compose.yml is not meant to run in a pipeline. Mimic it instead using GitLab services. Dockerize all backends and store them in GitLab's private registry. Spin up all services in your pipeline and run your tests.
This article might shed some light:
https://jessie.codes/article/running-cypress-gitlab-ci/
Essentially, you make two docker-compose files: one for your Cypress tests and one for the items to be tested. This gets around the issues with images being able to access Node and Docker.
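As a rough sketch of that idea (the compose file names and the cypress service name below are assumptions, not taken from the article):

script:
  # Merge the application stack with a second compose file that defines a cypress service;
  # the job passes or fails with the exit code of the cypress container.
  - docker-compose -f docker-compose.yml -f docker-compose.cypress.yml up --abort-on-container-exit --exit-code-from cypress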
I was looking for a method to implement a CI/CD pipeline within my projects. I decided to use GitLab with its gitlab-runner technology. I tried to use it through Docker containers but, after more than 100 attempts, I decided to install it on the machine.
I followed the official GitLab guide step by step. Everything is working perfectly; I ran the registration, filled in all the fields correctly, and went on to write the .gitlab-ci.yml:
image: docker:latest

services:
  - docker:18.09.9-dind

stages:
  - deploy

step-deploy-prod:
  stage: deploy
  only:
    - master
  script:
    - docker-compose up -d --build
  when: always
  environment: master
As you can imagine from the yml file, when something is pushed to master, the pipeline starts and executes docker-compose up --build -d (the project in question is a PHP application with a SQL database deployed through a compose file).
First run:
Absolutely perfect; the pipeline starts, the build is executed correctly and the application is correctly put online.
Second and following 140 runs:
That's the nightmare. Over 140 builds failed for the same reason: when cloning the repository, the runner doesn't seem to have write permissions on its home directory (/home/gitlab-runner/builds/...).
If I manually delete the nested folder inside builds/, the runner works, but only for one run; then it's the same situation again.
I tried to:
run chown gitlab-runner:gitlab-runner on its home directory (also as pre_clone_script in the TOML file);
add gitlab-runner to the sudoers group;
add gitlab-runner to the docker group;
a series of file permission operations, including chmod 777, chgrp with the runner group, and more.
You should never forget to stop your containers with an after_script section.
But in your case, you can use GIT_STRATEGY: none so the runner skips all Git operations for your job:
variables:
  GIT_STRATEGY: none
Your yml file with this fix:

image: docker:latest

services:
  - docker:18.09.9-dind

stages:
  - deploy

step-deploy-prod:
  stage: deploy
  only:
    - master
  script:
    - docker-compose up -d --build
  when: always
  environment: master
  variables:
    GIT_STRATEGY: none
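To illustrate the first point about cleanup: if (and only if) the job is not supposed to leave the stack running, an after_script can tear it down even when the script section fails. A minimal sketch:

step-deploy-prod:
  # ...same job as above...
  after_script:
    - docker-compose down   # runs even if the script section failed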
I'm trying to implement a CI/CD server using GitLab.
My gitlab-ci config is something like this:
stages:
  - deploy

step-deploy-staging:
  stage: deploy
  script:
    - docker-compose up -d web
If I run my container with -d (detached), I can't see the result of it because it runs in the background. If I run docker-compose without detached mode, our GitLab pipeline never finishes.
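For reference, one way to keep -d and still surface the result in the job log is to follow it with status and log checks and fail the job if the container died. A rough sketch (the web service name comes from the config above; the sleep is a crude placeholder for a proper readiness check):

script:
  - docker-compose up -d web
  - sleep 10                           # give the container a moment to start
  - docker-compose ps                  # show container status in the job log
  - docker-compose logs --tail=50 web  # show recent output of the web service
  # Fail the job if the web container is not running anymore
  - docker inspect -f '{{.State.Running}}' $(docker-compose ps -q web) | grep -q true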
So I wrote a simple one-page server with Node and Express. I wrote a Dockerfile for it and ran it locally. Then I made a Postman collection and tested the endpoints.
I want to do this in GitLab CI using Newman, so I came up with the following .gitlab-ci.yml:
image: docker:latest

services:
  - docker:dind

before_script:
  - docker build -t test_img .
  - docker run -d -p 3039:3039 test_img

stages:
  - test

# test
api-test:
  image:
    name: postman/newman:alpine
    entrypoint: [""]
  stage: test
  script:
    - newman run pdfapitest.postman_collection.json
It fails saying:
docker build -t test_img .
/bin/sh: eval: line 86: docker: not found
ERROR: Job failed: exit code 127
full output: https://pastebin.com/raw/C3mmUXKa
What am I doing wrong here? This seems to me like a very common use case, but I haven't found anything useful about it.
The issue is that your api-test job uses the image postman/newman:alpine to run the script.
This means that when GitLab tries to run the before_script section, it has no docker command available.
What you should do is provide the docker command in the image you're using to run the job. You can do that either by installing Docker as the first step of your script, or by starting from a custom image which contains the software you're using inside the job plus the Docker client itself.
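For illustration, here is a sketch of the first option: run the whole job in the Newman image, install the Docker client as the first step, and point it at the dind service. The DOCKER_HOST/DOCKER_TLS_CERTDIR settings depend on your runner setup, and the baseUrl variable is an assumption about how your Postman collection resolves its host:

api-test:
  image:
    name: postman/newman:alpine
    entrypoint: [""]
  services:
    - docker:dind
  variables:
    DOCKER_HOST: tcp://docker:2375   # talk to the dind service
    DOCKER_TLS_CERTDIR: ""           # disable TLS for dind (assumes the runner permits this)
  stage: test
  before_script:
    - apk add --no-cache docker-cli  # give this image a docker client
    - docker build -t test_img .
    - docker run -d -p 3039:3039 test_img
  script:
    # the API now runs on the dind daemon, reachable via the service hostname "docker"
    - newman run pdfapitest.postman_collection.json --env-var "baseUrl=http://docker:3039"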