So we use GitLab CI. The issue was the pain of having to commit every time we wanted to test whether our build pipeline was configured correctly. Unfortunately there's no easy way to test GitLab CI locally when our containers/pipeline aren't working right.
Our solution: use a docker-compose.yml as a CI pipeline runner for local testing of containerized build steps, why not, ya know . . . ? Basically GitLab CI, like most CI systems, has each step spawn a container to run a command and won't continue until the preceding steps complete, i.e. the first step must fully finish before the next step starts.
Here is a simple .gitlab-ci.yml file we use:
stages:
  - install
  - test
cache:
  untracked: true
  key: "$CI_COMMIT_REF_SLUG"
  paths:
    - node_modules/
install:
  image: node:10.15.3
  stage: install
  script: npm install
test:
  image: node:10.15.3
  stage: test
  script:
    - npm run test
  dependencies:
    - install
Here is the docker-compose.yml file we converted it to:
version: "3.7"
services:
install:
image: node:10.15.3
working_dir: /home/node
user: node
entrypoint: npm
command:
- install
volumes:
- .:/home/node:Z
test:
image: node:10.15.3
working_dir: /home/node
user: node
entrypoint: npm
command:
- run
- test
volumes:
- .:/home/node:Z
depends_on:
- install
OK, now for the real issue here. The depends_on entry in the compose file doesn't wait for the install container to finish; it only waits for it to start. So as soon as the install container's npm process is up and running, the test container starts and complains that there are no node_modules yet. "npm is running" is not the same as "npm has finished".
Does anyone know any tricks to better control what Docker considers "done"? All the solutions I looked into were using some kind of wrapper script that watched a port on the internal Docker network to wait for a service, like a database, to be fully up and ready.
When using k8s I can set up a readiness probe, which is super dope, but that doesn't seem to be a feature of Docker Compose. Am I wrong here? It would be nice to just write a command which Docker uses to determine what "done" means.
For now we must run each step manually, then run the next once the preceding step is complete, like so:
docker-compose up install
wait ....
docker-compose up test
We really just want to say:
docker-compose up
and have all the steps complete in the correct order, each one waiting for the preceding steps to finish.
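A rough workaround sketch is a tiny wrapper that chains docker-compose run calls, since docker-compose run blocks until its container exits and returns that container's exit code (service names are the ones from the compose file above):
#!/bin/sh
# run the steps sequentially; each docker-compose run blocks until its
# container exits, and && stops the chain on the first failure
docker-compose run --rm install && \
docker-compose run --rm test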
I went through the same issue; this is a permission-related thing when you are mapping a volume from your local machine into Docker.
volumes:
  - .:/home/node:Z
Create a file inside the container and check the ownership of that same file on your local machine. If root (or anything other than your current user) is the owner, you first have to run
export DOCKER_USER="$(id -u):$(id -g)"
and change
user: node
to
user: $DOCKER_USER
PS: I'm assuming you can run Docker without sudo; just mentioning this because that's the scenario I have.
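With that change, the install service from the compose file above ends up looking like this (the test service gets the same user: line):
install:
  image: node:10.15.3
  working_dir: /home/node
  user: $DOCKER_USER   # docker-compose substitutes the exported variable here
  entrypoint: npm
  command:
    - install
  volumes:
    - .:/home/node:Z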
This question was asked many years ago. I now use this project: https://github.com/firecow/gitlab-ci-local
It runs your GitLab pipeline locally using Docker, just as you would expect it to run.
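Usage is roughly the following, run from the repository root next to .gitlab-ci.yml (check the project's README for the exact flags):
gitlab-ci-local --list    # list the jobs it found in .gitlab-ci.yml
gitlab-ci-local test      # run a single job, e.g. the test job from above
gitlab-ci-local           # run the whole pipeline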
I was looking for a method to implement a CI/CD pipeline within my projects. I decided to use Gitlab with its gitlab-runner technology. I tried to use it through docker containers but, after more than 100 attempts, I decided to install it on the machine.
I followed the official GitLab guide step by step. Everything is working perfectly; I run the register command, fill in all the fields correctly, and go on to write the .gitlab-ci.yml:
image: docker:latest
services:
  - docker:18.09.9-dind
stages:
  - deploy
step-deploy-prod:
  stage: deploy
  only:
    - master
  script:
    - docker-compose up -d --build
  when: always
  environment: master
As you can imagine when looking at the yml file, when some operation is performed on master, the pipeline starts and executes docker-compose up --build -d (the project in question is a PHP application with a SQL database deployed through a compose file).
First run:
Absolutely perfect; the pipeline starts, the build is executed correctly, and the app is correctly put online.
Second and following 140 runs:
That's the nightmare. Over 140 builds failed for the same reason: when cloning the repository, the runner doesn't seem to have write permissions on its home directory (/home/gitlab-runner/builds/...).
If I manually delete the nested folder inside builds/ the runner works, but only for one run, then same situation.
I tried to:
run chown gitlab-runner:gitlab-runner on its home directory (also as a pre_clone_script in the runner's TOML file, roughly as in the snippet below);
add gitlab-runner to the sudoers group;
add gitlab-runner to the docker group;
a series of file permission operations, then chmod 777, chgrp with the runner group, and more.
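For reference, a pre_clone_script in the runner's config.toml would look roughly like this (the chown target is just the builds path mentioned above):
[[runners]]
  # ... existing runner settings ...
  pre_clone_script = "chown -R gitlab-runner:gitlab-runner /home/gitlab-runner/builds"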
You should always remember to stop your containers in an after_script section.
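For example, something along these lines in the deploy job (the compose flags are up to you):
step-deploy-prod:
  # ...
  after_script:
    - docker-compose down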
But in your case, you can use GIT_STRATEGY to clear the repository before your job:
variables:
  GIT_STRATEGY: none
Your yml file with this fix:
image: docker:latest
services:
  - docker:18.09.9-dind
stages:
  - deploy
step-deploy-prod:
  stage: deploy
  only:
    - master
  script:
    - docker-compose up -d --build
  when: always
  environment: master
  variables:
    GIT_STRATEGY: none
I'm using drone/drone:0.8 along with the Docker plugin, and I'm kinda stuck with a Dockerfile I use to build the app.
This Dockerfile runs the app's test suite as part of its build process - relevant fragment shown:
# ENV & ARG settings:
ENV RAILS_ENV=test RACK_ENV=test
ARG DATABASE_URL=postgres://postgres:3x4mpl3@postgres:5432/app_test
# Run the tests:
RUN rails db:setup && rspec
The test suite requires a connection to the database, for which I'm including the postgres service in the .drone.yml file:
pipeline:
  app:
    image: plugins/docker
    repo: vovimayhem/example-app
    tags:
      - ${DRONE_COMMIT_SHA}
      - ${DRONE_COMMIT_BRANCH/master/latest}
    compress: true
    secrets: [ docker_username, docker_password ]
    use_cache: true
    build_args:
      - DATABASE_URL=postgres://postgres:3x4mpl3@postgres:5432/app_test
services:
  postgres:
    image: postgres:9-alpine
    environment:
      - POSTGRES_PASSWORD=3x4mpl3
But it looks like the services defined in the drone file are not accessible from within the build process:
Step 18/36 : RUN rails db:setup && rspec
---> Running in 141734ca8f12
could not translate host name "postgres" to address: Name does not resolve
Couldn't create database for {"encoding"=>"unicode", "schema_search_path"=>"partitioning,public", "pool"=>5, "min_messages"=>"log", "adapter"=>"postgresql", "username"=>"postgres", "password"=>"3x4mpl3", "port"=>5432, "database"=>"sibyl_test", "host"=>"postgres"}
rails aborted!
PG::ConnectionBad: could not translate host name "postgres" to address: Name does not resolve
Is there any configuration I'm missing? Or is this a feature not currently present in the plugin?
I know this could be related somehow to the --network and/or --add-host options of the docker build command... I could help in case you think we should include this behavior.
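For reference, on a plain docker build those flags look like this (the network name and IP are just placeholders, not values from this setup):
docker build --network some_ci_network .
docker build --add-host postgres:172.17.0.2 .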
So a couple of things jump out at me (although I don't have the full context, so take whatever makes sense for you):
I would probably separate the build/testing piece of the code into a different step, and then use the docker plugin to publish the artifacts once they've passed.
I think the docker plugin is really just for publishing the image (I don't believe its container is going to be able to reach the service containers due to dind).
If you do separate it out, you'll probably need - sleep 15 in the commands section of the build to give the db time to start up.
http://docs.drone.io/postgres-example/ has examples of how to use postgres but again, it would require separating the build pieces from creating and publishing the docker image :)
Here's a sample of what I'm talking about ;)
pipeline:
  tests-builds:  # should probably be separate :)
    image: python:3.6-stretch
    commands:
      - sleep 15  # wait for postgres to start
      - pip install --upgrade -r requirements.txt
      - pip install --upgrade -r requirements-dev.txt
      - pytest --cov=sfs tests/unit
      - pytest --cov=sfs tests/integration  # this tests the db interactions
  publish:
    image: plugins/docker
    registry: quay.io
    repo: somerepot
    auto_tag: true
    secrets: [ docker_username, docker_password ]
    when:
      event: [ tag, push ]
services:
  database:
    image: postgres
I was previously using the shell executor for my GitLab runner to build my project. So far I have set up a pipeline that runs whatever commands I have set in the gitlab-ci.yml file seen below:
gitlab-ci.yml using shell runner
before_script:
  - npm install
  - npm install --save @angular/material @angular/cdk
cache:
  paths:
    - node_modules/
stages:
  - dev
  - staging
  - production
build_dev:
  stage: dev
  script:
    - rm ./package-lock.json
    - npm run build
    - ./node_modules/@angular/cli/bin/ng test --browsers PhantomJS --watch false
Now I want to switch to a docker image. I have reconfigured the runner to use a docker image, and I specified the image in my new gitlab-ci.yml file seen below. I followed the gitlab-ci docker tutorial, and this is where it left off, so I'm not entirely sure where to go from here:
gitlab-ci.yml using docker runner
image: node:8.10.0
before_script:
  - npm install
  - npm install --save @angular/material @angular/cdk
cache:
  paths:
    - node_modules/
stages:
  - dev
  - staging
  - production
build_dev:
  stage: dev
  script:
    - rm ./package-lock.json
    - npm run build
    - ./node_modules/@angular/cli/bin/ng test --browsers PhantomJS --watch false
Questions:
With my current gitlab-ci.yml file, how does this build a docker image, and does it even build one? If it does, what does that mean? Currently the pipeline passed, but I have no idea whether it ran in a docker image or not (am I supposed to be able to tell?).
Also, let's say the docker image was created, ran the tests, and the pipeline passed; it should push the code to a new repository (not included in yml file yet). From what I gathered, the image isn't being pushed, it's just the code, right? So what do I do with this created docker image?
How does the Dockerfile get used? I see no link between the gitlab-ci.yml file and Dockerfile.
Do I need to surround all commands in the gitlab-ci.yml file in docker run <commands> or docker exec <commands>? Without including one of these 2 commands, it seems like it would just run on the server and not in a docker image.
I've seen people specify an image in both the gitlab-ci.yml file and Dockerfile. I have an angular project, and I specified an image of image: node:8.10.0. In the Dockerfile, should I specify the same image? I've seen some projects where they are completely different and I'm wondering what the use of both images are/if picking one image over another will severely impact my builds.
You have to take a different approach to building your app if you want to fully dockerize it. Move the Angular-specific steps into a Dockerfile and put Docker operations inside your .gitlab-ci instead of the Angular stuff, like here:
stages:
  - build
  # - release
  # - deploy
.build_template: &build_definition
  stage: build
  image: docker:17.06
  services:
    - docker:17.06-dind
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker pull $CONTAINER_RELEASE_IMAGE || true
    - docker build --cache-from $CONTAINER_RELEASE_IMAGE -t $CONTAINER_IMAGE -f $DOCKERFILE ./
    - docker push $CONTAINER_IMAGE
build_app_job:
  <<: *build_definition
  variables:
    CONTAINER_IMAGE: $CI_REGISTRY_IMAGE/app:$CI_COMMIT_REF_SLUG
    CONTAINER_RELEASE_IMAGE: $CI_REGISTRY_IMAGE/app:latest
    DOCKERFILE: ./Dockerfile.app
build_nginx_job:
  <<: *build_definition
  variables:
    CONTAINER_IMAGE: $CI_REGISTRY_IMAGE/nginx:$CI_COMMIT_REF_SLUG
    CONTAINER_RELEASE_IMAGE: $CI_REGISTRY_IMAGE/nginx:latest
    DOCKERFILE: ./Dockerfile
You can set up a few build jobs - for production, development, staging etc.
Right next to your .gitlab-ci.yaml you can put Dockerfile and Dockerfile.app - Dockerfile.app stands for building your Angular app:
FROM node:10.5.0-stretch
RUN mkdir -p /usr/src/app
RUN mkdir -p /usr/src/remote
WORKDIR /usr/src/app
COPY . .
# do your commands here
Now with your app built, it can be served via a web server - the choice is yours, and each choice comes with its own configuration; I can't even scratch the surface here. That'd be implemented in the second Dockerfile - we usually use Nginx in our company.
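A minimal sketch of such an Nginx Dockerfile, assuming the compiled Angular app ends up in a dist/ folder inside the build context (adjust to your build output):
FROM nginx:1.15-alpine
# assumption: the Angular build step produced dist/ before this image is built
COPY dist/ /usr/share/nginx/html/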
From here on it's about releasing your images and deploying them. I've only specified how to build them in docker as it seems this is what the question is about.
If you want to deploy your image and run it somewhere - choose a provider: AWS, Heroku, your own infrastructure - have it your way. But this is far too much to cover in a single answer, so I'll leave it for another question once you specify where you'd like to deploy your newly built images and how you'd like to serve them. In our company we orchestrate things with Rancher, but there are multiple awesome and competing options on the market.
Edit for a custom registry
The above .gitlab-ci configuration works with GitLab's "internal" registry only. In case you want to use your own registry, change the values accordingly:
# previous configs
script:
  - docker login -u mysecretlogin -p mysecretpasswd registry.local.com
# further configs
from -u gitlab-ci-token to your login in the registry,
from $CI_JOB_TOKEN to your password,
from $CI_REGISTRY to your registry address.
Those values should be stored in Gitlab's CI secret variables and referenced via env variables so that they are not saved in the repository.
Finally, your script might look like the one below in case you decide to protect these values. Refer to GitLab's official docs on how to add secret CI variables - it's a super easy task.
# previous configs
script:
  - docker login -u $registrylogin -p $registrypasswd $registryaddress
# further configs
I am pretty new to Docker and Docker Compose.
I want to use Docker Compose to test my project and publish it if the tests pass. If the tests fail, it should not publish the app at all.
Here is my docker-compose.yml:
version: '3'
services:
  mongodb:
    image: mongo
  test:
    build:
      context: .
      dockerfile: Dockerfile.tests
    links:
      - mongodb
  publish:
    build:
      context: .
      dockerfile: Dockerfile.publish
    ?? # I want to say here that the publish step depends on test.
After that, in my testAndPublish.sh file, I would like to say:
docker-compose up
if [ $? = 0 ]; then # If all the services succeed
  ....
else
  ....
fi
So if the test or publish steps fail, I am not going to push it.
How can I build step like processes in docker-compose?
Thanks.
I think you're trying to do everything with docker-compose, which is the wrong way around.
When it comes to CI (e.g. Travis or CircleCI) I always make my workflow as follows:
let's say you have a web node and a database node
in travis.yml or circle.yml, at the install step, I always put things like docker-compose run web npm install and so on
at the test step I would put docker-compose run web npm test or something similar like docker-compose run web my-test-script.sh; that way you know the tests will run in the declared Docker environment, and if they fail, this step fails and the whole test step in the CI fails, which is desired
at the deploy step I would run some deploy.sh script which builds the image from the Dockerfile (the one that web uses) and pushes it, for example, to Docker Hub
This way your CI test routine still depends on a specific Docker environment, but the deploy push (which doesn't need Docker) is kept separate from the application, which makes it more convenient imho. Put together, the whole thing might look like the sketch below.
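A rough travis.yml following that flow (the web service name comes from the example above; deploy.sh and the Node language line are illustrative):
language: node_js
services:
  - docker
install:
  - docker-compose run web npm install
script:
  - docker-compose run web npm test
deploy:
  provider: script
  script: ./deploy.sh
  on:
    branch: master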
Before I post any configuration, I'll try to explain what I would like to achieve, and I'd like to mention that I'm new to Docker.
To make talking about paths easier, let's assume the project is called "Docker me up!" and it's located in X:\docker-projects\docker-me-up\.
Goal:
I would like to run multiple nginx projects with different content; each project represents a dedicated build. During development [docker-compose up -d] a container should get updated instantly, which works fine.
The tricky part is that I want to outsource npm/grunt [http://gruntjs.com] from my host directly into the container/image, so I'm able to debug and develop wherever I am by just installing Docker. Therefore, npm must be installed in a "service" and a watcher needs to be initialized.
Each project is encapsulated in its own folder on the host / its own build in Docker, and should not have any knowledge of anything but itself.
My solution:
I have tried many different variants, with volumes_from etc., but I decided to show you this one because it's minimal but still complete.
Docker-compose.yml
version: '2'
services:
  web:
    image: nginx
    volumes:
      - ./assets:/website/assets:ro
      - ./config:/website/config:ro
      - ./www:/website/www:ro
    links:
      - php
  php:
    image: php:fpm
    ports:
      - "9000:9000"
    volumes:
      - ./assets:/website/assets:ro
      - ./config:/website/config:ro
      - ./www:/website/www:ro
  app:
    build: .
    volumes:
      - ./assets:/website/assets
      - ./config:/website/config:ro
      - ./www:/website/www
Dockerfile
FROM debian:jessie-slim
RUN apt-get update && apt-get install -y \
npm
RUN gem update --system
RUN npm install -g grunt-cli grunt-contrib-watch grunt-babel babel-preset-es2015
RUN mkdir -p /website/{assets,assets/es6,config,www,www/js,www/css}
VOLUME /website
WORKDIR /website
Problem:
As you can see, the app service contains npm and should be able to execute npm commands. If I run docker-compose up -d, everything else works: I can edit the page content, work with it, etc. But the app container is not running, and because of that it cannot perform any npm command. Unless I have a huge logic error; which is quite possible ;-)
Environment:
Windows 10 Pro [up2date]
Shared drive for docker is used
Docker version 1.12.3, build 6b644ec
docker-machine version 0.8.2, build e18a919
docker-compose version 1.8.1, build 004ddae
After you call docker-compose up, you can get an interactive shell for your app container with:
docker-compose run app
You can also run one-off commands with:
docker-compose run app [command]
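For example, with the services from the compose file above:
docker-compose run app npm install   # one-off npm command inside the app image
docker-compose run app bash          # or an interactive shell for debugging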
The reason your app container is not running after docker-compose up completes is that your Dockerfile does not define a service. For app to run as a service, you would need to keep a process running in the foreground of the container by adding something like:
CMD ./run-my-service
to the end of your Dockerfile.
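Since the question mentions a grunt watcher, a minimal sketch of such a CMD (assuming the global grunt-cli install works and a Gruntfile with a watch task lives in /website) would be:
# keep the container in the foreground by running the watcher as the main process
CMD ["grunt", "watch"]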