I'm a total newbie when it comes to CI/CD, so I apologize in advance if I use the wrong terms or do something silly.
I have a docker-compose file which I can use to start my application with sudo docker-compose up -d. This works fine locally, but I also have a remote virtual machine that I want to use to test my application.
On every push to my repository, I want to run some tests (I will implement them later) and deploy the app if everything is OK. I looked into the docs, installed gitlab-runner, and tried this as a .gitlab-ci.yml file:
build-job:
  stage: build
  script:
    - echo "Hello, $GITLAB_USER_LOGIN!"

test-job1:
  stage: test
  script:
    - echo "This job tests something"

deploy-prod:
  stage: deploy
  script:
    - echo "Restarting containers.."
    - cd /path/to/app/repo
    - git pull
    - sudo docker-compose down && sudo docker-compose up -d
However, when I tried the Docker runner, it gave me an error saying it could not find the path to the directory. I understand this is because it runs every job in a separate, fresh container. How can I restart my application containers (preferably with compose) on the VM? Is there a better approach to achieve what I want?
Related
I'm using GitLab CI for my simple project.
Everything is OK: my runner is working on my local machine (Ubuntu 18.04), and I tested it with a simple .gitlab-ci.yml.
Now I'm trying to use the following yml:
image: ubuntu:18.04

build-job:
  stage: build
  script:
    - echo "Hello, $GITLAB_USER_LOGIN!"
    - sudo apt-get update
but I get the following error:
/bin/bash: line 110: sudo: command not found
How can I use sudo?
You shouldn't have to worry about updating the Ubuntu image used in a Gitlab CI pipeline job because the docker container is destroyed when the job is finished. Furthermore, the docker images are frequently updated. If you look at ubuntu:18.04's docker hub page, it was just updated 2 days ago: https://hub.docker.com/_/ubuntu?tab=tags&page=1&ordering=last_updated
Since you're doing an update here, I'm going to assume that next you might want to install some packages. It's possible to do so, but not advised, since every pipeline you run will have to install those packages, which can really slow them down. Instead, you can create a custom docker image based on a parent image and customize it that way. Then you can either upload that docker image to Docker Hub, to GitLab's registry (if using self-hosted GitLab, it has to be enabled by an admin), or build it on all of your gitlab-runners.
Here's a dumb example:
# .../custom_ubuntu:18.04/Dockerfile
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y git
Next you can build the image (docker build /path/to/directory/that/has/dockerfile), tag it so you can reference it in your pipeline config file (docker tag aaaaafffff59 my_org/custom_ubuntu:18.04), and then, if needed, upload the tagged image (docker push my_org/custom_ubuntu:18.04).
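Put together, the sequence looks like this (aaaaafffff59 stands for whatever image ID the build prints, and my_org is just an example; docker build -t would let you build and tag in one step):

docker build /path/to/directory/that/has/dockerfile
docker tag aaaaafffff59 my_org/custom_ubuntu:18.04
docker push my_org/custom_ubuntu:18.04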
In your .gitlab-ci.yml file, reference this custom Ubuntu image:
image: my_org/custom_ubuntu:18.04

build-job:
  stage: build
  script:
    - echo "Hello, $GITLAB_USER_LOGIN!"
    - git --version # ensures the package you need is available
You can read more about using custom images in Gitlab CI here: https://docs.gitlab.com/charts/advanced/custom-images/
I was looking for a method to implement a CI/CD pipeline within my projects. I decided to use Gitlab with its gitlab-runner technology. I tried to use it through docker containers but, after more than 100 attempts, I decided to install it on the machine.
I followed the official GitLab guide step by step. Everything is working perfectly; I run the registration, fill in all the fields correctly, and go on to write the .gitlab-ci.yml:
image: docker:latest

services:
  - docker:18.09.9-dind

stages:
  - deploy

step-deploy-prod:
  stage: deploy
  only:
    - master
  script:
    - docker-compose up -d --build
  when: always
  environment: master
As you can imagine from the yml file, whenever something is pushed to master, the pipeline starts and executes docker-compose up -d --build (the project in question is a PHP application with a SQL database, deployed through a compose file).
First run:
Absolutely perfect: the pipeline starts, the build executes correctly, and the app is correctly brought online.
Second run and the roughly 140 runs that followed:
That's the nightmare. Over 140 builds have failed for the same reason: when cloning the repository, the runner doesn't seem to have write permissions on its home directory (/home/gitlab-runner/builds/...).
If I manually delete the nested folder inside builds/, the runner works, but only for one run; then it's the same situation again.
I tried to:
- run chown gitlab-runner:gitlab-runner on its home directory (also as a pre_clone_script in the TOML file);
- add gitlab-runner to the sudoers group;
- add gitlab-runner to the docker group;
- run a series of file-permission operations, including chmod 777 and chgrp with the runner's group, and more.
You should never forget to stop your containers, e.g. in an after_script section.
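For jobs that only need the containers while the job itself is running (tests, for example), a minimal sketch of that idea looks like this (the job name, service name, and test command are placeholders); after_script runs even if the main script fails:

test-job:
  stage: test
  script:
    - docker-compose up -d --build
    - docker-compose exec -T app ./run-tests.sh   # placeholder test command
  after_script:
    - docker-compose down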
But in your case, you can use GIT_STRATEGY: none so the runner skips cloning/fetching the repository for the job entirely:
variables:
  GIT_STRATEGY: none
Your yml file with this fix:

image: docker:latest

services:
  - docker:18.09.9-dind

stages:
  - deploy

step-deploy-prod:
  stage: deploy
  only:
    - master
  script:
    - docker-compose up -d --build
  when: always
  environment: master
  variables:
    GIT_STRATEGY: none
I'm trying to build a CI pipeline with GitLab CI/CD. My project is a really simple API based on Symfony. To create a consistent environment I'm using docker-compose with four very simple containers (nginx, PHP, MySQL & Composer). My .gitlab-ci.yaml looks like this:
stages:
  - setup

setup:
  stage: setup
  before_script:
    - docker-compose up -d
  script:
    - sleep 15
    - docker-compose exec -T php php bin/console doctrine:schema:create
  after_script:
    - [...]
    - docker-compose down
The problem I'm encountering is that the CI script does not wait until the containers started by docker-compose up -d are actually ready. To bypass this I've added this stupid sleep.
Is there a better way to do this?
To save some time for people searching for this, I implemented the solution suggested in the comments by @gbrener.
The idea: wait until the log line that shows the container is up appears, then continue the pipeline.
1 - Pick the log line to use as the checkpoint. I used the last line my container prints on startup. Ex: Generated backend app.
2 - Get the container name. Ex: ai-server-dev.
3 - Create an sh script like the one below and name it, for example, wait_server.sh:
#!/bin/bash
# Poll the container's last log line until the "ready" message appears
while ! docker logs ai-server-dev --tail=1 | grep -q "Generated backend app";
do
    sleep 10
    echo "Waiting for backend to load ..."
done
4 - Replace the 'sleep 15' from the question with 'sh wait_server.sh' to run the script in your pipeline, as shown below.
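Applied to the job from the question, it would look roughly like this (a sketch that assumes the script is committed to the repo as wait_server.sh and that its docker logs line is adjusted to watch the php container and its actual "ready" message):

setup:
  stage: setup
  before_script:
    - docker-compose up -d
  script:
    - sh wait_server.sh   # poll the container logs instead of a fixed sleep
    - docker-compose exec -T php php bin/console doctrine:schema:create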
I am pretty new to Docker and docker-compose.
I want to use docker-compose to test my project and publish it if the tests pass. If the tests fail, it should not publish the app at all.
Here is my docker-compose.yml
version: '3'
services:
  mongodb:
    image: mongo
  test:
    build:
      context: .
      dockerfile: Dockerfile.tests
    links:
      - mongodb
  publish:
    build:
      context: .
      dockerfile: Dockerfile.publish
    ?? # I want to say here that the publish step is dependent on test.
After that, in my testAndPublish.sh file, I would like to say:
docker-compose up
if [ $? = 0 ]; then # If all the services succeed
    ....
else
    ....
fi
So if the test or publish steps fail, I am not going to push it.
How can I build step-like processes in docker-compose?
Thanks.
I think you're trying to do everything with docker-compose, which is the wrong way around.
When it comes to CI (e.g. Travis or CircleCI), I always structure my workflow as follows:
Let's say you have a web node and a database node.
In travis.yml or circle.yml, at the install step I always put things like docker-compose run web npm install and similar commands.
At the test step I would put docker-compose run web npm test or something similar like docker-compose run web my-test-script.sh. That way you know the tests run in the declared Docker environment; if they fail, this step fails and the whole test step in the CI fails, which is what you want.
At the deploy step I would run a deploy.sh script which builds the image from the Dockerfile (the one that web uses) and pushes it, for example, to Docker Hub.
This way your CI test routine still depends on a specific Docker environment, but the deploy push (which doesn't need Docker) is kept separate from the application, which makes it more convenient IMHO.
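As a rough illustration only (this sketch assumes Travis, a web service defined in docker-compose, and a deploy.sh script committed to the repo; names are placeholders):

# .travis.yml (sketch)
services:
  - docker
install:
  - docker-compose run web npm install
script:
  - docker-compose run web npm test
after_success:
  - ./deploy.sh   # builds the web image from its Dockerfile and pushes it, e.g. to Docker Hub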
I've noticed a problem when configuring my gitlab-ci and gitlab-runner.
I want to have a few separate application environments on one server, running on different external ports, but using the same docker image.
What I want to achieve:
deploy-dev running Apache on port 80 in the container, but on external port 81
deploy-rc running Apache on port 80 in the container, but on external port 82
I've seen that docker run has a --publish argument that allows port binding, like 80:81, but unfortunately I can't find any option in gitlab-ci.yml or gitlab-runner's config.toml to set that argument.
Is there any way to achieve port binding in containers run by gitlab-runner?
My gitlab-ci.yml:
before_script:
  # Install dependencies
  - bash ci/docker_install.sh > /dev/null

deploy:
  image: webdevops/php-apache:centos-7-php56
  stage: deploy
  only:
    - dockertest
  script:
    - composer self-update
    - export SYMFONY_ENV=dev
    - composer install
    - app/console doc:sch:up --force
    - app/console doc:fix:load -e=dev -n
    - app/console ass:install
    - app/console ass:dump -e=dev
  tags:
    - php
You're confusing two concepts: continuous integration tasks and Docker deployment.
What you have configured is a continuous integration task. The idea is that these perform build steps and complete. Gitlab-ci will record the results of each build step and present it back to you. These can be docker jobs themselves, though they don't have to be.
What you want to do is deploy to docker. That is to say you want to start a docker job that contains your program. Going through this is probably beyond the scope of a stack overflow answer, but I'll try my best to outline what you need to do.
First, take the script you already have and turn it into a Dockerfile. Your Dockerfile will need to add all the code in your repo and then perform the composer / console commands you list. Use docker build to turn this Dockerfile into a Docker image.
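As a very rough sketch only (it reuses your webdevops/php-apache base image and the build-time commands from your current script, and assumes composer is available in that image, as your script implies; commands that need a running database, such as doc:sch:up and doc:fix:load, are better run when the container starts rather than at build time):

# Dockerfile (sketch; paths and commands must be adapted to your project)
FROM webdevops/php-apache:centos-7-php56
ENV SYMFONY_ENV=dev
COPY . /app
WORKDIR /app
RUN composer install \
    && app/console ass:install \
    && app/console ass:dump -e=dev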
Next (optionally) you can upload the Docker image to a registry.
The final step is to perform a docker run command that loads up your image and runs it.
This sounds complicated, but it's really not. I have a CI pipeline that does this: one step runs docker build ... followed by docker push ..., and the next step runs docker run ... to spawn the new container.
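A rough sketch of what such a pipeline could look like (the registry, image name, container name, and ports are placeholders, not your real values; the -p 81:80 part is what gives you the external-port binding you were originally looking for):

# .gitlab-ci.yml (sketch only)
stages:
  - build
  - deploy

build:
  stage: build
  script:
    - docker build -t registry.example.com/my-app:latest .
    - docker push registry.example.com/my-app:latest

deploy-dev:
  stage: deploy
  script:
    - docker pull registry.example.com/my-app:latest
    - docker rm -f my-app-dev || true   # replace the previous container, if any
    - docker run -d --name my-app-dev -p 81:80 registry.example.com/my-app:latest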