Forward a host entry for dind in gitlab-ci - docker

Problem: I need to add a row (e.g. 124.343.23.34 gitlab.example.com) to /etc/hosts for dind (Docker-in-Docker).
Everything is in gitlab-ci.yml.
Current script:
cache:build:
  stage: cache
  image: docker:dind
  services:
    - redis:latest
    - docker:dind
  tags:
    - docker
  cache:
    <<: *cache_build
    policy: pull-push
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    - docker pull $DOCKER_DEV_IMAGE
    - docker pull node:current-alpine
    - docker run --rm -v $(pwd):/var/www -w /var/www $DOCKER_DEV_IMAGE composer install -n
    - docker run --rm -v $(pwd):/var/www -w /var/www $DOCKER_DEV_IMAGE bin/console fos:js-routing:dump --format=json --target=public/js/fos_js_routes.json
    - docker run --rm -v $(pwd):/var/www -w /var/www node:current-alpine yarn install
    - docker run --rm -v $(pwd):/var/www -w /var/www node:current-alpine yarn prod

Have you tried using the --add-host option? It's detailed in https://www.thegeekdiary.com/how-to-add-new-host-entry-in-etc-hosts-when-a-docker-container-is-run/
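For example, a minimal sketch of that approach applied to one of the script lines above (the IP and hostname are the question's placeholders; --add-host takes host:ip and writes the entry into the container's /etc/hosts):

  script:
    # each container launched through the dind daemon gets the extra /etc/hosts entry
    - docker run --rm --add-host gitlab.example.com:124.343.23.34 -v $(pwd):/var/www -w /var/www $DOCKER_DEV_IMAGE composer install -n

If the entry also has to be visible to the dind daemon itself (e.g. so docker pull can resolve the registry), the runner's config.toml accepts an extra_hosts list under [runners.docker], e.g. extra_hosts = ["gitlab.example.com:124.343.23.34"].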

Related

GitLab CI/CD not taking latest code changes

So I have used GitLab CI/CD to deploy changes to a private Docker Hub repo, and I'm using a DigitalOcean droplet to run the server with Docker, but the changes are not being reflected in the container running on DigitalOcean. Here's the config file:
variables:
  IMAGE_NAME: codelyzer/test-repo
  IMAGE_TAG: test-app-1.0

stages:
  - test
  - build
  - deploy

run_tests:
  stage: test
  image: node:16
  before_script:
    - npm install jest
  script:
    - npm run test

build_image:
  stage: build
  image: docker:20.10.16
  services:
    - docker:20.10.16-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
  before_script:
    - docker login -u $REGISTRY_USER -p $REGISTRY_PASS
  script:
    - docker build -t $IMAGE_NAME:$IMAGE_TAG .
    - docker push $IMAGE_NAME:$IMAGE_TAG

deploy:
  stage: deploy
  before_script:
    - chmod 400 $SSH_KEY
  script:
    - ssh -o StrictHostKeyChecking=no -i $SSH_KEY root@159.89.175.212 "
      docker login -u $REGISTRY_USER -p $REGISTRY_PASS &&
      docker image prune -f &&
      docker ps -aq | xargs docker stop | xargs docker rm &&
      docker run -d -p 5001:5001 $IMAGE_NAME:$IMAGE_TAG"
The DigitalOcean server wasn't fetching the latest image from the registry, so I added docker prune as an additional step:
deploy:
  stage: deploy
  before_script:
    - chmod 400 $SSH_KEY
  script:
    - ssh -o StrictHostKeyChecking=no -i $SSH_KEY root@159.89.175.212 "
      docker login -u $REGISTRY_USER -p $REGISTRY_PASS &&
      docker ps -aq | (xargs docker stop || true) | (xargs docker rm || true) &&
      docker system prune -a -f &&
      docker run -d -p 5001:5001 $IMAGE_NAME:$IMAGE_TAG"
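One thing worth checking, as an assumption beyond the original post: docker run does not re-pull a tag that already exists on the host, so an explicit docker pull before docker run guarantees the droplet runs the image that was just pushed. A minimal sketch of the deploy script with that step added:

deploy:
  stage: deploy
  before_script:
    - chmod 400 $SSH_KEY
  script:
    - ssh -o StrictHostKeyChecking=no -i $SSH_KEY root@159.89.175.212 "
      docker login -u $REGISTRY_USER -p $REGISTRY_PASS &&
      docker ps -aq | (xargs docker stop || true) | (xargs docker rm || true) &&
      docker pull $IMAGE_NAME:$IMAGE_TAG &&
      docker run -d -p 5001:5001 $IMAGE_NAME:$IMAGE_TAG"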

Can't communicate between docker containers in Gitlab-CI

In the second stage of my CI pipeline (after building a fresh Docker image), I'm using other docker containers to test the new image, but I'm not able to send any requests between these containers. My CI config is as follows:
stages:
  - build
  - test

services:
  - docker:dind

variables:
  # Enable network-per-build to allow gitlab services to see one another
  FF_NETWORK_PER_BUILD: "true"
  DOCKER_DRIVER: overlay2
  DOCKER_HOST: tcp://docker:2376
  DOCKER_TLS_CERTDIR: '/certs'
  IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  # IMAGE SETTINGS
  NODE_ENV: "development"
  API_URL: "http://danielgtaylor-apisprout:8000"
  PORT: 8080

build:
  stage: build
  image: docker
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t ${CI_REGISTRY_IMAGE}:test .
    - docker push ${CI_REGISTRY_IMAGE}:test

test:
  stage: test
  image: docker
  services:
    - docker:dind
    - name: ${CI_REGISTRY_IMAGE}:test
      alias: server
  script:
    - docker run --rm --name apisprout -d -v $CI_PROJECT_DIR/v2-spec.yml:/api.yaml danielgtaylor/apisprout /api.yaml
    - docker run --rm --name newman -v $CI_PROJECT_DIR:/etc/newman postman/newman run 'Micros V2.postman_collection.json'
I receive the following error: ENOTFOUND server server:8080.
I have also tried with a new bridge network:
test:
  stage: test
  image: docker
  services:
    - docker:dind
  script:
    - docker network create -d bridge mynet
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker run -d --network=mynet --hostname=server ${CI_REGISTRY_IMAGE}:test
    - docker run --rm --network=mynet --hostname=apisprout --name apisprout -d -v $CI_PROJECT_DIR/v2-spec.yml:/api.yaml danielgtaylor/apisprout /api.yaml
    - docker run --rm --network=mynet --name newman -v $CI_PROJECT_DIR:/etc/newman postman/newman run 'Micros V2.postman_collection.json'
But I receive the same error, ENOTFOUND server server:8080.
I am unable to run the docker run containers as services, as I don't believe attaching volumes is supported yet.
I'm also running this on GitLab.com, not on a private runner.
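One possible explanation, offered as an assumption rather than a confirmed diagnosis: with dind, containers started via docker run are attached to networks inside the dind daemon, while job services (such as the server alias) live on the per-build network that FF_NETWORK_PER_BUILD creates, so the two groups cannot resolve each other by name. A sketch that keeps every container on one dind-side user-defined network, where Docker's embedded DNS resolves container names:

test:
  stage: test
  image: docker
  services:
    - docker:dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker network create testnet
    # container names are resolvable on a user-defined network, so newman can reach http://server:8080;
    # API_URL is overridden to point at the apisprout container name instead of the service-style hostname
    - docker run -d --network testnet --name server -e NODE_ENV -e PORT -e API_URL=http://apisprout:8000 ${CI_REGISTRY_IMAGE}:test
    - docker run -d --network testnet --name apisprout -v $CI_PROJECT_DIR/v2-spec.yml:/api.yaml danielgtaylor/apisprout /api.yaml
    - docker run --rm --network testnet -v $CI_PROJECT_DIR:/etc/newman postman/newman run 'Micros V2.postman_collection.json'

This assumes the Postman collection addresses the app as http://server:8080, the same name the service alias used.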

Docker doesn't work when using getsentry/sentry-cli

I want to upload my frontend to Sentry, but I need to get the folder using docker commands. However, when I use image: getsentry/sentry-cli, docker doesn't work, and e.g. in before_script I get an error that docker doesn't exist.
sentry_job:
  stage: sentry_job
  image: getsentry/sentry-cli
  services:
    - docker:18-dind
  before_script:
    - docker login -u gitlab-ci-token -p "$CI_JOB_TOKEN" registry.gitlab.cz
  script:
    # script...
    # Get the dist folder from the image
    - mkdir frontend_dist
    - docker run --rm -v $PWD/frontend_dist:/mounted --entrypoint="" $IMAGE /bin/sh -c "cp /frontend/dist /mounted"
    - ls frontend_dist
  tags:
    - dind
How do I fix that?
To achieve what you want, you need to use a single job (to have the same build context) and specify docker:stable as the job image (along with docker:stable-dind as a service).
This setup is called docker-in-docker and this is the standard way to allow a GitLab CI script to run docker commands (see doc).
Thus, you could slightly adapt your .gitlab-ci.yml code like this:
sentry_job:
  stage: sentry_job
  image: docker:stable
  services:
    - docker:stable-dind
  variables:
    IMAGE: "${CI_REGISTRY_IMAGE}:latest"
  before_script:
    - docker login -u gitlab-ci-token -p "${CI_JOB_TOKEN}" registry.gitlab.cz
  script:
    - docker pull "$IMAGE"
    - mkdir -v frontend_dist
    - docker run --rm -v "$PWD/frontend_dist:/mounted" --entrypoint="" "$IMAGE" /bin/sh -c "cp -rv /frontend/dist /mounted"
    - ls frontend_dist
    - docker pull getsentry/sentry-cli
    - docker run --rm -v "$PWD/frontend_dist:/work" getsentry/sentry-cli
  tags:
    - dind
Note: the docker pull commands are optional (they ensure Docker will use the latest version of the images).
Also, you may need to change the definition of the IMAGE variable.
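The snippet above copies the dist folder out but stops before the actual Sentry upload. As a sketch of that last step (the release name, paths, and the SENTRY_AUTH_TOKEN / SENTRY_ORG / SENTRY_PROJECT variables are assumptions, not from the original answer):

    # pass the Sentry credentials through from CI variables and upload the built assets
    - docker run --rm -v "$PWD/frontend_dist:/work" -e SENTRY_AUTH_TOKEN -e SENTRY_ORG -e SENTRY_PROJECT getsentry/sentry-cli releases files "$CI_COMMIT_SHA" upload-sourcemaps /work

sentry-cli releases files <version> upload-sourcemaps is the standard sourcemap upload command.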

Gitlab CI/CD using FTP for .NET Core

I'm trying to auto-build, test, and deploy my .NET Core app, and so far it builds and tests, but it won't deploy. The GitLab pipeline shows that the job succeeded, but it didn't actually work. This is my Dockerfile:
FROM microsoft/dotnet:2.2-sdk AS build-env
WORKDIR /source

RUN curl -sL https://deb.nodesource.com/setup_11.x | bash - \
    && apt-get install -y nodejs

COPY ./src/*.csproj .
RUN dotnet restore
COPY ./src/ ./
RUN dotnet publish "./Spa.csproj" --output "./dist" --configuration Release --no-restore

FROM microsoft/dotnet:2.2-aspnetcore-runtime
WORKDIR /app
COPY --from=build-env /source/dist .

RUN apt-get update \
    && apt-get install -y php-cli

COPY deployment.ini /app
EXPOSE 80
ENTRYPOINT ["dotnet", "Spa.dll"]
and this is what my .gitlab-ci.yml file looks like:
# ### Define variables
variables:
  OBJECTS_DIRECTORY: 'obj'
  NUGET_PACKAGES_DIRECTORY: '.nuget'
  SOURCE_CODE_PATH: 'src/'

# ### Define stage list
stages:
  - build
  - test
  - deploy

cache:
  # Per-stage and per-branch caching.
  key: "$CI_JOB_STAGE-$CI_COMMIT_REF_SLUG"
  paths:
    - '$SOURCE_CODE_PATH$OBJECTS_DIRECTORY/project.assets.json'
    - '$SOURCE_CODE_PATH$OBJECTS_DIRECTORY/*.csproj.nuget.*'
    - '$NUGET_PACKAGES_DIRECTORY'
    - '**/node_modules/'

build:
  image: docker:stable
  stage: build
  services:
    - docker:dind
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker build -t registry.gitlab.com/chinloyal/spa .
    - docker push registry.gitlab.com/chinloyal/spa

tests:
  image: microsoft/dotnet:2.2-sdk
  stage: test
  before_script:
    - curl -sL https://deb.nodesource.com/setup_11.x | bash -
    - apt-get install -y nodejs
  script:
    - dotnet test --no-restore Tests/

deploy:
  image: docker:stable
  services:
    - docker:dind
  stage: deploy
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker pull registry.gitlab.com/chinloyal/spa
    - docker run --name spa -p 80:80 -d registry.gitlab.com/chinloyal/spa
  script:
    - docker exec -d spa curl -S "https://gitlab.com/chinloyal/ftp-deploy-tool/raw/master/deployment.phar" --output deployment.phar
    - docker exec -d spa bash -c "echo remote = $FTP_HOST >> deployment.ini"
    - docker exec -d spa bash -c "echo user = $FTP_USER >> deployment.ini"
    - docker exec -d spa bash -c "echo password = $FTP_PASSWORD >> deployment.ini"
    - docker exec -d spa php deployment.phar deployment.ini
  environment:
    name: production
  only:
    - master
The line docker exec -d spa php deployment.phar deployment.ini is the one that is supposed to upload the files from inside the Docker container, but I believe that because GitLab ends the job immediately after that line, the process inside the container just ends too.
I've tried using the registry image (registry.gitlab.com/chinloyal/spa) as the image for the deploy job, but every time I do, it just starts running the project on GitLab until it times out or until I cancel it.
I only have FTP access to the server, by the way, because it's a shared server. The FTP tool I'm using to deploy is here. I've tried it before, so I know it works.
I figured it out: all the docker exec commands used the -d flag, so I removed it so that GitLab would show the output of files being downloaded and uploaded. So I changed docker exec -d spa curl -S "https://gitlab.com/chinloyal/ftp-deploy-tool/raw/master/deployment.phar" --output deployment.phar to docker exec spa curl -S "https://gitlab.com/chinloyal/ftp-deploy-tool/raw/master/deployment.phar" --output deployment.phar.
I also changed docker exec -d spa php deployment.phar deployment.ini to docker exec spa php deployment.phar deployment.ini.
Before, the commands were running in detached mode, so GitLab thought they had finished and moved on to the next command; removing the -d flag makes GitLab wait for each one.
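Putting the fix together, the deploy script section reads (this is just the answer's change applied to every exec line, so each command runs to completion and its output shows in the job log):

  script:
    # run each step attached so GitLab waits for it and captures its output
    - docker exec spa curl -S "https://gitlab.com/chinloyal/ftp-deploy-tool/raw/master/deployment.phar" --output deployment.phar
    - docker exec spa bash -c "echo remote = $FTP_HOST >> deployment.ini"
    - docker exec spa bash -c "echo user = $FTP_USER >> deployment.ini"
    - docker exec spa bash -c "echo password = $FTP_PASSWORD >> deployment.ini"
    - docker exec spa php deployment.phar deployment.ini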

Can't run docker:latest on gitlab-ci runner

I'm testing gitlab-ci and trying to generate an image in the registry from the Dockerfile.
I have this code just to test:
#gitlab-ci
image: docker:latest

stages:
  - build
  - deploy

build_application:
  stage: build
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA . -f Dockerfile
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA-test
output:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Docker is running and the image is being pulled, but I cannot execute docker commands.
In my local environment, if I run:
docker run -it docker:latest
I stay inside the container, and when I run docker info I get the same problem. I had to fix it by running the container this way:
docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock docker:latest
but I do not know how to fix it in gitlab-ci. I configured my runner like this:
docker run -d --name gitlab-runner --restart always \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner:latest
Maybe someone can point me in the right direction.
Thanks.
By default it is not possible to run Docker-in-Docker (DinD), as a security measure.
This section in the GitLab docs is your solution: you must use Docker-in-Docker.
After configuring your runner to use DinD, your .gitlab-ci.yml will look like this:
#gitlab-ci
image: docker:latest

variables:
  DOCKER_DRIVER: overlay2

services:
  - docker:dind

before_script:
  - docker info

stages:
  - build
  - deploy

build_application:
  stage: build
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA . -f Dockerfile
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
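For completeness, "configuring your runner to use DinD" means enabling privileged mode in the runner's config.toml. A sketch of the relevant section, per the GitLab docs (the image and other settings are assumptions to adapt to your setup):

[[runners]]
  executor = "docker"
  [runners.docker]
    # dind requires privileged mode
    image = "docker:latest"
    privileged = true

An alternative that matches the socket-mount experiment from the question is socket binding: add volumes = ["/var/run/docker.sock:/var/run/docker.sock"] under [runners.docker] instead of using the dind service.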
