Use docker-compose in gitlab-ci

Please tell me, is it possible to somehow run a script from docker-compose via gitlab-ci.yml?
My docker-compose.yml
version: '3.8'
services:
  app:
    image: USER/test-web_app:latest
    ports:
      - "9876:80"
My gitlab-ci.yml
stages:
  - build_project
  - make_image
  - deploy_image

build_project:
  stage: build_project
  image: node:16.15.0-alpine
  services:
    - docker:20.10.14-dind
  script:
    - npm cache clean --force
    - npm install --legacy-peer-deps
    - npm run build
  artifacts:
    expire_in: 15 mins
    paths:
      - build
      - node_modules
  only:
    - main

make_image:
  stage: make_image
  image: docker:20.10.14-dind
  services:
    - docker:20.10.14-dind
  before_script:
    - docker login -u $REGISTER_USER -p $REGISTER_PASSWORD $REGISTER
  script:
    - docker build -t $REGISTER/$REGISTER_USER/$PROJECT_NAME:latest $DOCKER_FILE_LOCATION
    - docker push $REGISTER/$REGISTER_USER/$PROJECT_NAME:latest
  after_script:
    - docker logout
  only:
    - main

deploy_image:
  stage: deploy_image
  image: alpine:latest
  services:
    - docker:20.10.14-dind
  before_script:
    - chmod og= $ID_RSA
    - apk update && apk add openssh-client
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no root@$SERVER_IP "docker login -u $REGISTER_USER -p $REGISTER_PASSWORD $REGISTER"
  script:
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no root@$SERVER_IP "docker-compose down"
    **?????????**
  after_script:
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no root@$SERVER_IP docker logout
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no root@$SERVER_IP exit
  only:
    - main
How can I use a docker-compose script inside gitlab-ci to run it on a remote server?
Is it possible to use several different docker-compose files for different build versions?
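A common pattern, used in several of the related threads below, is to copy the compose file to the server with scp and then drive docker-compose over SSH. A minimal sketch of the missing script section, assuming docker-compose.yml sits in the repository root and docker-compose is already installed on the server:
script:
  - scp -i $ID_RSA -o StrictHostKeyChecking=no docker-compose.yml root@$SERVER_IP:~/
  - ssh -i $ID_RSA -o StrictHostKeyChecking=no root@$SERVER_IP "docker pull USER/test-web_app:latest && docker-compose down && docker-compose up -d"
For different build versions, keep one compose file per variant (for example docker-compose.staging.yml) and select it on the server with docker-compose -f docker-compose.staging.yml up -d.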

Related

Docker Image with Conda Env Activated and Ready for Shell Commands

I have tried many of the approaches I found while searching for a solution, but I think my problem is different.
I want a Docker image that has the environment installed, activated, and ready for shell commands like: flake8, pylint, black, isort, coverage
Dockerfile
FROM continuumio/miniconda3
# Create the environment:
COPY conda_env_unit_tests.yml .
RUN conda env create -f conda_env_unit_tests.yml
RUN echo "conda activate up-and-down-pytorch" >> ~/.bashrc
conda_env_unit_tests.yml
name: up-and-down-pytorch
channels:
  - defaults
  - conda-forge
dependencies:
  - python=3.9
  - pytest
  - pytest-cov
  - black
  - flake8
  - isort
  - pylint
.gitlab-ci.yml (slimmed down)
stages:
  - docker
  - linting
  - test

build_unit_test_docker:
  stage: docker
  tags:
    - docker
  image: docker:stable
  services:
    - docker:dind
  variables:
    IMAGE_NAME: "miniconda3-up-and-down-unit-tests"
  script:
    - cp /builds/upanddown1/mldl/up_and_down_pytorch/conda_env_unit_tests.yml /builds/upanddown1/mldl/up_and_down_pytorch/docker/unit_tests/
    - docker -D login $CI_REGISTRY -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD
    - docker -D build -t $CI_REGISTRY/upanddown1/mldl/up_and_down_pytorch/$IMAGE_NAME docker/unit_tests/
    - docker -D push $CI_REGISTRY/upanddown1/mldl/up_and_down_pytorch/$IMAGE_NAME
  rules:
    - changes:
        - docker/unit_tests/Dockerfile
        - conda_env_unit_tests.yml

unit-test:
  stage: test
  # image: continuumio/miniconda3:latest
  image: $CI_REGISTRY/upanddown1/mldl/up_and_down_pytorch/miniconda3-up-and-down-unit-tests
  script:
    # - conda env create --file conda_env.yml
    # - source activate up-and-down-pytorch
    - coverage run --source=. -m pytest --verbose
    - coverage report
    - coverage xml
  coverage: '/(?i)total.*? (100(?:\.0+)?\%|[1-9]?\d(?:\.\d+)?\%)$/'
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage.xml
The Docker image gets uploaded to the GitLab registry and the unit-test stage uses that image; however:
/bin/bash: line 127: coverage: command not found
(The ultimate goal was to not have to create the conda environment every time I wanted to lint or run unit tests.)
Figured it out today, and the change also dropped the duration of the unit tests.
The fix was to source the environment in the unit-test job instead of in the Dockerfile. GitLab CI runs job scripts in a non-interactive shell, which never reads ~/.bashrc, so the conda activate line baked into the image had no effect; running source activate in the job script puts the environment on PATH.
Dockerfile
FROM continuumio/miniconda3
# Create the environment:
COPY conda_env_unit_tests.yml .
RUN conda env create -f conda_env_unit_tests.yml
conda_env_unit_tests.yml
name: up-and-down-pytorch
channels:
  - defaults
  - conda-forge
dependencies:
  - python=3.9
  - pandas
  - pytest
  - pytest-cov
  - black
  - flake8
  - isort
  - pylint
.gitlab-ci.yml (slimmed down)
stages:
  - docker
  - linting
  - test

build_unit_test_docker:
  stage: docker
  tags:
    - docker
  image: docker:stable
  services:
    - docker:dind
  variables:
    IMAGE_NAME: "miniconda3-up-and-down-unit-tests"
  script:
    - cp /builds/upanddown1/mldl/up_and_down_pytorch/conda_env_unit_tests.yml /builds/upanddown1/mldl/up_and_down_pytorch/docker/unit_tests/
    - docker -D login $CI_REGISTRY -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD
    - docker -D build -t $CI_REGISTRY/upanddown1/mldl/up_and_down_pytorch/$IMAGE_NAME docker/unit_tests/
    - docker -D push $CI_REGISTRY/upanddown1/mldl/up_and_down_pytorch/$IMAGE_NAME
  rules:
    - changes:
        - docker/unit_tests/Dockerfile
        - conda_env_unit_tests.yml

unit-test:
  stage: test
  image: $CI_REGISTRY/upanddown1/mldl/up_and_down_pytorch/miniconda3-up-and-down-unit-tests
  script:
    - source activate up-and-down-pytorch
    - coverage run --source=. -m pytest --verbose
    - coverage report
    - coverage xml
  coverage: '/(?i)total.*? (100(?:\.0+)?\%|[1-9]?\d(?:\.\d+)?\%)$/'
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage.xml

Is it possible to copy a file from a Docker container to the GitLab repository via gitlab-ci.yml

I created a Docker image with automated tests that generates an XML report file after the test run. I want to copy this file out of the container because the pipeline needs it to show the test results:
My gitlab script:
stages:
  - test

test:
  image: docker:latest
  services:
    - name: docker:dind
      entrypoint: ["env", "-u", "DOCKER_HOST"]
      command: ["dockerd-entrypoint.sh"]
  variables:
    DOCKER_HOST: tcp://docker:2375/
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
  stage: test
  before_script:
    - docker login -u "xxxx" -p "yyyy" docker.io
  script:
    - docker run --name authContainer "xxxx/dockerImage:0.0.1"
  after_script:
    - docker cp authContainer:/artifacts/test-result.xml .
  artifacts:
    when: always
    paths:
      - test-result.xml
    reports:
      junit:
        - test-result.xml
Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:2.1
COPY /publish /AutomaticTests
WORKDIR /Spinelle.AutomaticTests
RUN apt-get update -y
RUN apt-get install -y unzip
RUN wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
RUN dpkg -i google-chrome-stable_current_amd64.deb; apt-get -fy install
RUN curl https://chromedriver.storage.googleapis.com/84.0.4147.30/chromedriver_linux64.zip -o /usr/local/bin/chromedriver
RUN unzip -o /usr/local/bin/chromedriver -d /Spinelle.AutomaticTests
RUN chmod 777 /Spinelle.AutomaticTests
CMD dotnet vstest /Parallel AutomaticTests.dll --TestAdapterPath:. --logger:"nunit;LogFilePath=/artifacts/test-result.xml;MethodFormat=Class;FailureBodyFormat=Verbose"
Your .gitlab-ci file looks fine. You can expose the XML report as an artifact and GitLab will populate the test results from it. Note that after_script runs even when the script section fails, so the docker cp still collects the report for failing test runs. Below is the script I've used, and I could see the results.
script:
  - pytest -o junit_family=xunit2 --junitxml=report.xml --cov=. --cov-report html
  - coverage report
coverage: '/^TOTAL.+?(\d+\%)$/'
artifacts:
  paths:
    - coverage
  reports:
    junit: report.xml
  when: always

GitLab CI Docker WORKDIR not being created

I am trying to deploy my NodeJS repo to a DO droplet via GitLab CI. I have been following this guide to do so. What is odd is that the deployment pipeline seems to succeed, but if I SSH into the box, I can see that the app is not running, as it has failed to find a package.json in /usr/src/app, which is the WORKDIR my Dockerfile is pointing to.
gitlab-ci.yml
cache:
  key: "${CI_COMMIT_REF_NAME} node:latest"
  paths:
    - node_modules/
    - .yarn

stages:
  - build
  - release
  - deploy

build:
  stage: build
  image: node:latest
  script:
    - yarn
  artifacts:
    paths:
      - node_modules/

release:
  stage: release
  image: docker:latest
  only:
    - master
  services:
    - docker:dind
  variables:
    DOCKER_DRIVER: "overlay"
  before_script:
    - docker version
    - docker info
    - docker login -u ${CI_REGISTRY_USER} -p ${CI_BUILD_TOKEN} ${CI_REGISTRY}
  script:
    - docker build -t ${CI_REGISTRY}/${CI_PROJECT_PATH}:latest --pull .
    - docker push ${CI_REGISTRY}/${CI_PROJECT_PATH}:latest
  after_script:
    - docker logout ${CI_REGISTRY}

deploy:
  stage: deploy
  image: gitlab/dind:latest
  only:
    - master
  environment: production
  when: manual
  before_script:
    - mkdir -p ~/.ssh
    - echo "${DEPLOY_SERVER_PRIVATE_KEY}" | tr -d '\r' > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - eval "$(ssh-agent -s)"
    - ssh-add ~/.ssh/id_rsa
    - ssh-keyscan -H ${DEPLOYMENT_SERVER_IP} >> ~/.ssh/known_hosts
  script:
    - printf "DB_URL=${DB_URL}\nDB_NAME=${DB_NAME}\nPORT=3000" > .env
    - scp -r ./.env ./docker-compose.yml root@${DEPLOYMENT_SERVER_IP}:~/
    - ssh root@${DEPLOYMENT_SERVER_IP} "docker login -u ${CI_REGISTRY_USER} -p ${CI_REGISTRY_PASSWORD} ${CI_REGISTRY}; docker-compose rm -sf scraper; docker pull ${CI_REGISTRY}/${CI_PROJECT_PATH}:latest; docker-compose up -d"
Dockerfile
FROM node:10
WORKDIR /usr/src/app
COPY package.json ./
RUN yarn
COPY . .
EXPOSE 3000
CMD [ "yarn", "start" ]
docker-compose.yml
version: "3"
services:
scraper:
build: .
image: registry.gitlab.com/arby-better/scraper:latest
volumes:
- .:/usr/src/app
- /usr/src/app/node_modules
ports:
- 3000:3000
environment:
- NODE_ENV=production
env_file:
- .env
I'm using GitLab Shared Runners for my pipeline. The pipeline appears to execute completely fine apart from a symlink failure at the very end (screenshot omitted), which I don't think is anything to worry about. But if I SSH into my box, go to where the docker-compose file was copied, and inspect, Docker has not created /usr/src/app.
Versions:
Docker: 19.03.1
Docker-compose: 1.22.0
My DO box is Docker 1-click btw. Any help appreciated!
EDIT
I have altered my Dockerfile to attempt to force the directory creation, adding RUN mkdir -p /usr/src/app before the line declaring it as the working dir. This still does not create the directory...
When I look at the container statuses (docker-compose ps), I can see that the containers are in an exit state and have exited with code either 1 or 254... any idea as to why?
Your compose file is designed for a development environment, where the code directory is replaced by a volume mount to the code on the developer's machine. You don't have this persistent directory in production, nor should you be depending on code outside of the image in production, since that defeats the purpose of copying it into your image.
version: "3"
services:
scraper:
build: .
image: registry.gitlab.com/arby-better/scraper:latest
# Comment out or delete these lines, they do not belong in production
#volumes:
# - .:/usr/src/app
# - /usr/src/app/node_modules
ports:
- 3000:3000
environment:
- NODE_ENV=production
env_file:
- .env
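With the volumes removed and the stack redeployed, the exit codes from the edit can be chased down directly on the droplet; a quick sketch (service name taken from the compose file above):
docker-compose ps                    # container states and exit codes
docker-compose logs scraper          # the app's stdout/stderr usually names the failure
docker inspect --format '{{.State.ExitCode}}' <container-id>
An exit code of 1 from a Node app with an empty WORKDIR is consistent with yarn start failing to find package.json, which the volume mount explains: mounting the (empty) host directory over /usr/src/app hid the files that were copied into the image.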

Rails dockerized: Continuous delivery using Gitlab and Digitalocean

I'm trying to set up continuous delivery for a dockerized Rails project hosted on Gitlab.com. I followed this article, which is not directly related to a Rails environment, and tried to adapt it... obviously without any success :(
For context, I created three different services: db, webpacker and app.
Following the above article, here are my .gitlab-ci.yml and docker-compose.staging2.yml (autodeploy):
image: docker
services:
  - docker:dind

cache:
  paths:
    - node_modules

variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2
  CONTAINER_CURRENT_IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  CONTAINER_LATEST_IMAGE: $CI_REGISTRY_IMAGE:latest
  CONTAINER_STABLE_IMAGE: $CI_REGISTRY_IMAGE:stable

stages:
  - test
  - build
  - release
  - deploy

before_script:
  - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  - apk add --no-cache py-pip python-dev libffi-dev openssl-dev gcc libc-dev make
  - pip install docker-compose
  - docker-compose --version

test:
  stage: test
  script:
    - docker-compose build --pull
    # Here we will run tests when available...
  after_script:
    - docker-compose down
    - docker volume rm `docker volume ls -qf dangling=true`

build:
  stage: build
  script:
    - docker build -t $CONTAINER_CURRENT_IMAGE . --pull
    - docker push $CONTAINER_CURRENT_IMAGE

release-latest-image:
  stage: release
  only:
    - feat-dockerisation
  script:
    - docker pull $CONTAINER_CURRENT_IMAGE
    - docker tag $CONTAINER_CURRENT_IMAGE $CONTAINER_LATEST_IMAGE
    - docker push $CONTAINER_LATEST_IMAGE

release-stable-image:
  stage: release
  only:
    - feat-dockerisation
  script:
    - docker pull $CONTAINER_CURRENT_IMAGE
    - docker tag $CONTAINER_CURRENT_IMAGE $CONTAINER_STABLE_IMAGE
    - docker push $CONTAINER_STABLE_IMAGE

deploy_staging:
  stage: deploy
  only:
    - feat-dockerisation
  environment: production
  before_script:
    - mkdir -p ~/.ssh
    - echo "$DEPLOY_SERVER_PRIVATE_KEY" | tr -d '\r' > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - which ssh-agent || (apk add openssh-client)
    - eval $(ssh-agent -s)
    - ssh-add ~/.ssh/id_rsa
    - ssh-keyscan -H $DEPLOYMENT_SERVER_IP >> ~/.ssh/known_hosts
  script:
    - scp -rp ./docker-compose.staging2.yml root@${DEPLOYMENT_SERVER_IP}:~/
    - ssh root@$DEPLOYMENT_SERVER_IP "docker login -u ${CI_REGISTRY_USER} -p ${CI_REGISTRY_PASSWORD} ${CI_REGISTRY};
      docker-compose -f docker-compose.staging2.yml down;
      docker pull $CONTAINER_LATEST_IMAGE;
      docker-compose -f docker-compose.staging2.yml up -d"
version: '3'
services:
  db:
    image: postgres:11-alpine
    ports:
      - 5433:5432
    environment:
      POSTGRES_PASSWORD: postgres
  webpacker:
    image: registry.gitlab.com/soykje/beweeg-ror:latest
    command: [sh, -c, "yarn && bin/webpack-dev-server"]
    ports:
      - 3035:3035
  app:
    image: registry.gitlab.com/soykje/beweeg-ror:latest
    links:
      - db
      - webpacker
    ports:
      - 3000:3000
I'm getting started with Docker and CI/CD, so I can't find what I am doing wrong :/
After all the jobs complete successfully on GitLab CI/CD, when I try to access my app on the Docker droplet I get nothing... When I SSH into my droplet everything seems OK, but I still cannot browse anything... Would anyone have an idea of what I am missing?
I feel I'm pretty close to achieving this (maybe I'm wrong too...), so any help would be very welcome!
Thx in advance!
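Since everything deploys but nothing responds, it is worth checking on the droplet whether the containers are actually up and listening; a short sketch (file name as above):
docker-compose -f docker-compose.staging2.yml ps
docker-compose -f docker-compose.staging2.yml logs app
curl -v localhost:3000
One common culprit, assuming the image's default command starts Rails: the server must bind to 0.0.0.0 (for example rails server -b 0.0.0.0), because a Rails server listening only on the container's localhost is unreachable through the published port.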

Circleci 2.0 Build on Dockerfile

I have a Node.JS application that I'd like to build and test using CircleCI and Amazon ECR. The documentation is not clear on how to build an image from a Dockerfile in a repository. I've looked here: https://circleci.com/docs/2.0/building-docker-images/ and here https://circleci.com/blog/multi-stage-docker-builds/ but it's not clear what I put under the executor. Here's what I've got so far:
version: 2
jobs:
  build:
    docker:
    steps:
      - checkout
      - setup_remote_docker:
          version: 17.05.0-ce
      # build the image
      - run: docker build -t $ECR_REPO:0.1 .
CircleCI fails with the following error:
* The job has no executor type specified. The job should have one of the following keys specified: "machine", "docker", "macos"
The base image is taken from the Dockerfile. I'm using CircleCI's built-in AWS integration, so I don't think I need to add aws_auth. What do I need to put under the executor to get this running?
Build this with a Docker-in-Docker config:
version: 2
jobs:
  build:
    working_directory: /app
    docker:
      - image: docker:17.05.0-ce-git
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install dependencies
          command: |
            apk add --no-cache \
              py-pip=9.0.0-r1 gcc \
              libffi-dev python-dev \
              linux-headers \
              musl-dev \
              libressl-dev \
              make
            pip install \
              docker-compose==1.12.0 \
              awscli==1.11.76 \
              ansible==2.4.2.0
      - run:
          name: Save Vault Password to File
          command: echo $ANSIBLE_VAULT_PASS > .vault-pass.txt
      - run:
          name: Decrypt .env
          command: |
            ansible-vault decrypt .circleci/envs --vault-password-file .vault-pass.txt
      - run:
          name: Move .env
          command: rm -f .env && mv .circleci/envs .env
      - restore_cache:
          keys:
            - v1-{{ .Branch }}
          paths:
            - /caches/app.tar
      - run:
          name: Load Docker image layer cache
          command: |
            set +o pipefail
            docker load -i /caches/app.tar | true
      - run:
          name: Build application Docker image
          command: |
            docker build --cache-from=app -t app .
      - run:
          name: Save Docker image layer cache
          command: |
            mkdir -p /caches
            docker save -o /caches/app.tar app
      - save_cache:
          key: v1-{{ .Branch }}-{{ epoch }}
          paths:
            - /caches/app.tar
      - deploy:
          name: Push application Docker image
          command: |
            if [ "${CIRCLE_BRANCH}" == "master" ]; then
              login="$(aws ecr get-login --region $ECR_REGION)"
              ${login}
              docker tag app "${ECR_ENDPOINT}:${CIRCLE_SHA1}"
              docker push "${ECR_ENDPOINT}:${CIRCLE_SHA1}"
            fi
First of all, you need to specify a Docker image for the build itself to run in. This should work:
version: 2
jobs:
  build:
    docker:
      - image: docker:stable-git
    steps:
      - checkout
      - setup_remote_docker:
          version: 17.05.0-ce
      # build the image
      - run: docker build -t $ECR_REPO:0.1 .
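With the executor image in place, the job can be extended to push the result to ECR, along the lines of the deploy step in the first answer. A minimal sketch, assuming awscli v1 is installed in the job (the first answer installs it with pip) and $ECR_REPO holds the full repository URI; the region is a placeholder:
      - run:
          name: Push image to ECR
          command: |
            # get-login (awscli v1) prints a docker login command; eval it to authenticate
            eval "$(aws ecr get-login --no-include-email --region us-east-1)"
            docker push $ECR_REPO:0.1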
