Mounting a volume with gitlab docker:dind services

I have an issue with a GitLab runner using the docker:dind service.
I'm trying to run a docker-compose file with a simple volume in a job. Here is the job:
test_e2e:
  image: tmaier/docker-compose
  stage: test
  services:
    - docker:dind
  variables:
    GIT_STRATEGY: none
    GIT_CHECKOUT: "false"
    DOCKER_DRIVER: overlay2
  before_script:
    - ls
  script:
    - cp .env.dist .env
    - docker-compose -f docker-compose.yml -f docker-compose-ci.yml up -d
The job starts normally, but a container in docker-compose-ci.yml doesn't seem to mount the volume specified in it. Here is docker-compose-ci.yml:
version: '3.3'
services:
  wait_app:
    image: dadarek/wait-for-dependencies
    networks:
      - internal
    depends_on:
      - traefik
      - webapp
    command: webapp:3000
  cypress:
    # the Docker image to use from https://github.com/cypress-io/cypress-docker-images
    image: "cypress/included:6.5.0"
    networks:
      - internal
    depends_on:
      - traefik
      - webapp
      - api
      - mysql
      - redis
    environment:
      # pass base url to test pointing at the web application
      - CYPRESS_baseUrl=http://app.localhost:3000
    working_dir: /cypress
    volumes:
      - ./cypress/:/cypress
If I run docker exec app_cypress_1 sh -c "ls -al" against the /cypress folder inside the cypress container, it comes back empty, even though the files do exist on the host.
But when I tried a different runner version, 13.7.0 instead of 13.5.0, it worked as expected.
Where could the issue be? Is it the GitLab runner, or is there another parameter I can change to make it work?
Thank you
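For context on what may be happening: with docker:dind, the bind mount ./cypress/ is resolved against the filesystem of the dind service container (where the Docker daemon runs), not against the job container that holds your checkout, so the mount can come up empty; whether it happens to work can vary with the runner version and how the build directory is shared. A hedged workaround sketch, copying the files over the Docker API into a named volume instead of relying on a host-path bind (the volume and container names here are made up):

docker volume create cypress-data
docker create --name seed -v cypress-data:/cypress alpine true   # throwaway container just to reach the volume
docker cp ./cypress/. seed:/cypress                              # copies from the job's checkout into the dind-side volume
docker rm seed

Then declare cypress-data as an external volume in docker-compose-ci.yml and mount it in place of ./cypress/.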

Related

GitLab CI/CD runner in Docker container can't use Docker commands like docker-compose

I want to create a CI/CD pipeline that builds a Docker image and then runs docker-compose up with it. My runner is in a container as a shell executor, but Compose must run on the host machine (Windows 10).
My current gitlab.yaml:
build-job:
  image: docker:latest
  services:
    - docker
  before_script:
    - docker info
  stage: build
  script:
    - cd ...
    - cd ....
    - docker-compose build
test-job1:
  stage: test
  script:
    - echo "build worked!"
deploy-prod:
  stage: deploy
  script:
    - cd ...
    - cd ....
    - docker-compose up
  environment: production
And my docker-compose for my runner with auto-registration:
version: '3'
name: Worker
services:
  register:
    container_name: registration
    image: gitlab/gitlab-runner
    command:
      - register
      - --non-interactive
      - --locked=false
      - --name="...."
      - --executor=shell
      - --docker-volumes=/var/run/docker.sock:/var/run/docker.sock
      - --docker-privileged=true
      - --docker-volumes=/certs/client
    volumes:
      - gitlab-runner-config:/etc/gitlab-runner
      - /var/run/docker.sock:/var/run/docker.sock
    tty: true
    stdin_open: true
    restart: "no"
    environment:
      - CI_SERVER_URL=....
      - REGISTRATION_TOKEN=....
    labels:
      - "traefik.enable=false"
  worker:
    container_name: ....
    image: gitlab/gitlab-runner
    volumes:
      - /usr/bin/docker:/usr/bin/docker
      - gitlab-runner-data:/etc/gitlab-runner
      - gitlab-runner-data:/home/gitlab-runner
      - gitlab-runner-config:/etc/gitlab-runner
    restart: always
volumes:
  gitlab-runner-config:
    external: true
  gitlab-runner-data:
    external: true
In the end, the best result I get is: Cannot connect to the Docker daemon at unix:///var/run/docker.sock (at the docker info step).
I've tried //usr, passing the full $PATH from the Windows 10 environment,
changing network_mode to "host",
and a lot of changes to the volumes in the registration and runner Compose services.
How can I get this working, or at least enable the use of Docker commands?
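One detail worth flagging: the --docker-* flags in the register command only take effect with --executor=docker; with --executor=shell they are ignored, and jobs run inside the runner container itself, where nothing listens on /var/run/docker.sock unless the worker service also mounts it. A minimal sketch of a docker-executor registration instead, assuming mounting the host socket is acceptable (CI_SERVER_URL and REGISTRATION_TOKEN can stay in the environment as before):

command:
  - register
  - --non-interactive
  - --locked=false
  - --executor=docker
  - --docker-image=docker:latest
  - --docker-volumes=/var/run/docker.sock:/var/run/docker.sock

With this, each job gets a docker:latest container whose CLI reaches the host daemon through the mounted socket, so docker commands no longer need a daemon inside the runner container.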

docker-compose build env not populated in docker image

The use case is to build an image and deploy it to Rancher 2.5.5 with gitlab-ci.yml. Since envs can't be passed directly in my situation, I'm trying to bake envs into the Docker image with docker-compose build (dev/stage concerns come next; let's leave them for now). docker-compose run --env-file works, but docker-compose build ignores the envs.
Any advice would be appreciated.
P.S. If you know a way to pass envs to the Rancher 2 container somehow from gitlab-ci, that also solves the problem.
I've tried the following:
Set it in docker-compose:
version: '3'
services:
  testproject:
    build:
      context: .
    env_file: .env-dev
    image: example.registry/testimage:latest
Set it in gitlab-ci:
variables:
  IMAGE: "$CI_REGISTRY_IMAGE:latest"
build-image:
  stage: build
  allow_failure: false
  tags:
    - docker
  services:
    - docker:dind
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    - docker-compose --env-file .env-dev build
    - docker-compose push
deploy:
  stage: deploy
  image: kpolszewski/rancher2-gitlab-deploy
  tags:
    - docker
  script:
    - upgrade --cluster $CLUSTER --environment $ENV --stack $NAMESPACE --service $SERVICE --new-image $IMAGE
Sourced it in the Dockerfile entrypoint.
Set it in the .env file.
Nothing works.
I can see the new image in the registry and locally (when I test it locally), but there is no env inside when I run the container.
If you want to set env values at the build stage, you can use build args as follows:
FROM ...
ARG SOME_ARG
ENV SOME_ENV=$SOME_ARG
Then, in your docker-compose.yml:
version: '3'
services:
  testproject:
    build:
      context: .
      args:
        SOME_ARG: "SOME_VALUE"
    env_file: .env-dev
    image: example.registry/testimage:latest
But think twice: are you sure you want your ENV variables to be set dynamically at the build stage?
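If the value should come from CI rather than being hard-coded, Compose substitutes ${...} references in args from the environment where docker-compose runs, so a CI/CD variable can flow straight into the build arg (SOME_VALUE is a hypothetical variable name here):

services:
  testproject:
    build:
      context: .
      args:
        SOME_ARG: "${SOME_VALUE}"   # taken from the shell/CI environment

The same can be done on the command line with docker-compose build --build-arg SOME_ARG="$SOME_VALUE".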

docker not found when building in Jenkins

I am trying to check whether Docker works in my Jenkins. I am running Jenkins in Docker, and it runs, but when I check in a Jenkins Pipeline, it says docker: not found.
Here is my docker-compose.yml:
version: '3.7'
services:
  jenkins:
    image: jenkinsci/blueocean:latest
    user: root
    privileged: true
    restart: always
    ports:
      - 8080:8080
    volumes:
      - ./jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
      - /usr/bin/docker:/usr/bin/docker
  registry:
    image: registry
    container_name: registry
    restart: always
    ports:
      - 5000:5000
Then I run sudo docker-compose up -d and Jenkins starts.
Can I know why docker is not found? Is my docker-compose wrong?
You do not need to bind /usr/bin/docker:/usr/bin/docker, as /var/run/docker.sock:/var/run/docker.sock is enough to interact with the host Docker daemon; you should not bind host executables into a container.
Remove this from the compose file and it should work:
- /usr/bin/docker:/usr/bin/docker
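For a quick check after recreating the container (a hedged example; the container name depends on your Compose project, and jenkinsci/blueocean already ships the Docker CLI):

docker exec -it <project>_jenkins_1 docker version

If both the client and server sections print, the socket mount is working.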

How to use GitLab CI to test and deploy my PHP application?

I have the docker-compose.yml below:
version: "2"
services:
api:
build:
context: .
dockerfile: ./build/dev/Dockerfile
container_name: "project-api"
volumes:
# 1. mount your workdir path
- .:/app
depends_on:
- mongodb
links:
- mongodb
- mysql
nginx:
image: nginx:1.10.3
container_name: "project-nginx"
ports:
- 80:80
restart: always
volumes:
- ./build/dev/nginx.conf:/etc/nginx/conf.d/default.conf
- .:/app
links:
- api
depends_on:
- api
mongodb:
container_name: "project-mongodb"
image: mongo:latest
environment:
- MONGO_DATA_DIR=/data/db
- MONGO_LOG_DIR=/dev/null
ports:
- "27018:27017"
command: mongod --smallfiles --logpath=/dev/null # --quiet
mysql:
container_name: "gamestore-mysql"
image: mysql:5.7.23
ports:
- "3306:3306"
environment:
MYSQL_DATABASE: project_test
MYSQL_USER: user
MYSQL_PASSWORD: user
MYSQL_ROOT_PASSWORD: root
And the .gitlab-ci.yml below:
test:
  stage: test
  image: docker:latest
  services:
    - docker:dind
  variables:
    DOCKER_DRIVER: overlay2
  before_script:
    - apk add --no-cache py-pip
    - pip install docker-compose
  script:
    - docker-compose up -d
    - docker-compose exec -T api ls -la
    - docker-compose exec -T api composer install
    - docker-compose exec -T api php core/init --env=Development --overwrite=y
    - docker-compose exec -T api vendor/bin/codecept -c core/common run
    - docker-compose exec -T api vendor/bin/codecept -c core/rest run
When I run my GitLab pipeline it fails, I think because GitLab can't work with services started by docker-compose.
The error says that MySQL refuses the connection.
I need this connection because my tests, written with Codeception, exercise my models and API actions.
I want to test my branches every time anyone pushes to them, and if the tests pass, deploy develop to the test server and master to production.
What is the best way to run my tests in GitLab CI/CD and then deploy from there to my servers?
You should use GitLab CI services instead of docker-compose.
You have to pick one image as your main image, in which your commands will run, and the other containers as services.
Sadly, CI services in GitLab cannot have files mounted into them; you have to be able to configure them with env variables, or you need to build your own image with the files baked in (you can do that in a CI stage).
I would suggest not using nginx and using the built-in PHP server for tests. If that's not possible (you have a specific nginx config), you will need to build your own nginx image with the files copied into it.
Also, for PHP (the api service in docker-compose.yml, I assume), you need to either build the image ahead of time or copy the commands from your Dockerfile into the script.
So the result should be something like:
test:
  stage: test
  image: custom-php-image # build from ./build/dev/Dockerfile
  services:
    - name: mysql:5.7.23
      alias: gamestore-mysql
    - name: mongo:latest
      alias: project-mongodb
      command: ["mongod", "--smallfiles", "--logpath=/dev/null"]
  variables:
    MYSQL_DATABASE: project_test
    MYSQL_USER: user
    MYSQL_PASSWORD: user
    MYSQL_ROOT_PASSWORD: root
    MONGO_DATA_DIR: /data/db
    MONGO_LOG_DIR: /dev/null
  script:
    - ls -la
    - composer install
    - php core/init --env=Development --overwrite=y
    - php -S localhost:8000 & # start the built-in PHP server in the background; configure it for your app here
    - vendor/bin/codecept -c core/common run
    - vendor/bin/codecept -c core/rest run
I don't know your app, so you will probably have to make some tweaks.
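One consequence worth spelling out: with services, the application must reach the databases through the aliases defined above (gamestore-mysql, project-mongodb) on their default ports, not through Compose links or published ports. If your app reads connection settings from the environment, that could look like this (DB_HOST and MONGO_HOST are hypothetical variable names):

  variables:
    DB_HOST: gamestore-mysql
    MONGO_HOST: project-mongodb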
More on that:
https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#define-image-and-services-from-gitlab-ciyml
https://docs.gitlab.com/ee/ci/services/
http://php.net/manual/en/features.commandline.webserver.php

Running shell script against Localstack in docker container

I've been using Localstack for local development of a service. I've just been running their Docker image via docker run --rm -p 4567-4583:4567-4583 -p 8080:8080 localstack/localstack,
and then I manually run a small script to set up my S3 buckets, SQS queues, etc.
Now I'd like to make this easier for others, so I thought I'd just add a Dockerfile and a docker-compose.yml. Unfortunately, when I try to get this up and running with docker-compose up, I get an error that the command from my setup script can't connect to the Localstack services:
make_bucket failed: s3://localbucket Could not connect to the endpoint URL: "http://localhost:4572/localbucket"
Dockerfile:
FROM localstack/localstack
# since this is just local dev set up, localstack doesn't require
# anything specific here.
ENV AWS_DEFAULT_REGION='[useast1]'
ENV AWS_ACCESS_KEY_ID='[lloyd]'
ENV AWS_SECRET_ACCESS_KEY='[christmas]'
COPY bin/localSetup.sh /localSetup.sh
COPY fixtures/notifications.json /notifications.json
RUN ["chmod", "+x", "/localSetup.sh"]
RUN pip install awscli
# expose service & web dashboard ports
EXPOSE 4567-4582 8080
ENTRYPOINT ["/localSetup.sh"]
docker-compose.yml
version: '3'
services:
  localstack:
    build: .
    ports:
      - "8080:8080"
      - "4567-4582:4567-4582"
localSetup.sh
#!/bin/bash
aws --endpoint-url=http://localhost:4572 s3 mb s3://localbucket
# additional similar calls, left off for brevity
I've tried switching localhost to 127.0.0.1 in my script commands, but I wind up with the same error. I'm probably missing something silly here.
There is another way to create your custom AWS resources when Localstack freshly starts up. Since you already have a bash script for your resources, you can simply volume mount your script into the init hook directory: /docker-entrypoint-initaws.d/ on older Localstack releases, or /etc/localstack/init/ready.d/ on recent ones.
So my docker-compose file would be:
localstack:
  image: localstack/localstack:latest
  container_name: localstack_aws
  ports:
    - '4566:4566'
  volumes:
    - './localSetup.sh:/etc/localstack/init/ready.d/init-aws.sh'
Also, I would prefer awslocal over aws --endpoint in the bash script, as it handles the credentials and the endpoint for you.
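With awslocal, the setup script above would become something like this (a sketch; awslocal ships in the localstack image):

#!/bin/bash
# awslocal wraps the aws CLI with the Localstack endpoint and dummy credentials.
awslocal s3 mb s3://localbucket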
Try adding a hostname to the docker-compose file and editing your entrypoint script to use that hostname.
docker-compose.yml
version: '3'
services:
  localstack:
    build: .
    hostname: localstack
    ports:
      - "8080:8080"
      - "4567-4582:4567-4582"
localSetup.sh
#!/bin/bash
aws --endpoint-url=http://localstack:4572 s3 mb s3://localbucket
This was the docker-compose-dev.yaml I used for testing an app that used Localstack. I brought it up with docker-compose -f docker-compose-dev.yaml up, and I used the same localSetup.sh you did.
version: '3'
services:
  localstack:
    image: localstack/localstack
    hostname: localstack
    ports:
      - "4567-4584:4567-4584"
      - "${PORT_WEB_UI-8082}:${PORT_WEB_UI-8082}"
    environment:
      - SERVICES=s3
      - DEBUG=1
      - DATA_DIR=${DATA_DIR- }
      - PORT_WEB_UI=${PORT_WEB_UI- }
      - DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      - "${TMPDIR:-/tmp/localstack}:/tmp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
    networks:
      - backend
  sample-app:
    image: "sample-app/sample-app:latest"
    networks:
      - backend
    links:
      - localstack
    depends_on:
      - "localstack"
networks:
  backend:
    driver: 'bridge'
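The key detail in this setup is the shared backend network plus the explicit hostname: sample-app can reach Localstack at the hostname localstack rather than localhost, for example by pointing its AWS endpoint there (the variable name is hypothetical):

  sample-app:
    environment:
      - AWS_ENDPOINT=http://localstack:4572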
