How to pass environment variables to docker run in GitLab CI/CD

I am trying to pass environment variables to my Node.js Docker image while running it, as shown below:
stages:
  - publish
  - deploy

variables:
  TAG_LATEST: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:latest
  TAG_COMMIT: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:$CI_COMMIT_SHORT_SHA

publish:
  image: docker:latest
  stage: publish
  services:
    - docker:dind
  script:
    - touch env.txt
    - docker build -t $TAG_COMMIT -t $TAG_LATEST .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    - docker push $TAG_COMMIT
    - docker push $TAG_LATEST

deploy:
  image: alpine:latest
  stage: deploy
  tags:
    - deployment
  script:
    - chmod og= $ID_RSA
    - apk update && apk add openssh-client
    - echo "AWS_ACCESS_KEY_ID"=$AWS_ACCESS_KEY_ID >> "env.txt"
    - echo "AWS_S3_BUCKET"=$AWS_S3_BUCKET >> "env.txt"
    - echo "AWS_S3_REGION"=$AWS_S3_REGION >> "env.txt"
    - echo "AWS_SECRET_ACCESS_KEY"=$AWS_SECRET_ACCESS_KEY >> "env.txt"
    - echo "DB_URL"=$DB_URL >> "env.txt"
    - echo "JWT_EXPIRES_IN"=$JWT_EXPIRES_IN >> "env.txt"
    - echo "OTP_EXPIRE_TIME_SECONDS"=$OTP_EXPIRE_TIME_SECONDS >> "env.txt"
    - echo "TWILIO_ACCOUNT_SID"=$TWILIO_ACCOUNT_SID >> "env.txt"
    - echo "TWILIO_AUTH_TOKEN"=$TWILIO_AUTH_TOKEN >> "env.txt"
    - echo "TWILLIO_SENDER"=$TWILLIO_SENDER >> "env.txt"
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY"
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker pull $TAG_COMMIT"
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker container rm -f my-app || true"
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker run --env-file env.txt -d -p 8080:8080 --name my-app $TAG_COMMIT"
  environment:
    name: development
    url: 90900
  only:
    - master
I am running the command docker run --env-file env.txt, but it gives me the error docker: open env.txt: no such file or directory.
How can I solve this issue and pass multiple variables to my docker run command?

Which job is failing? In your deploy job, you are creating env.txt locally and using SSH to run Docker remotely, but you never scp your local env.txt to $SERVER_USER@$SERVER_IP for the remote process to pick it up.
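A minimal sketch of the missing step, assuming the same $ID_RSA, $SERVER_USER and $SERVER_IP variables from the job above (copy the file over before the final docker run line):

    # copy the generated env file to the remote host first
    - scp -i $ID_RSA -o StrictHostKeyChecking=no env.txt $SERVER_USER@$SERVER_IP:~/env.txt
    # then point --env-file at the path the file has on the remote host
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker run --env-file ~/env.txt -d -p 8080:8080 --name my-app $TAG_COMMIT"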

I had the same issue using GitLab CI/CD, i.e. trying to inject env vars that were referenced in the project's .env file, via the runner (Docker executor), into the resulting Docker container.
We don't want to commit any sensitive info to git, so one option is to save the values in a file on the server and include it via the --env-file flag. But GitLab Runner creates a new container for every run, so this is not possible here: the host running the YAML script is ephemeral and not the actual server that GitLab Runner was installed onto.
The suggestion by @dmoonfire to scp the file over sounded like a good solution, but I couldn't get it to work for copying a file from outside into the GitLab runner. I'd need to copy the public key from the executor to the GitLab runner server, but the Docker executor is ephemeral.
I found the simplest solution was to use the GitLab CI/CD variable settings. It's possible to mask variables and restrict them to protected branches, protected tags, etc. These get injected into the job container so that your .env file can access them, roughly as in the sketch below.
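For illustration, a rough sketch of that approach, assuming AWS_S3_BUCKET and DB_URL are defined (and masked) under Settings > CI/CD > Variables rather than written to a file in the repo:

deploy:
  stage: deploy
  script:
    # the runner injects the masked CI/CD variables into the job environment,
    # so they can be forwarded straight into the container with -e
    - docker run -d -p 8080:8080 -e AWS_S3_BUCKET=$AWS_S3_BUCKET -e DB_URL=$DB_URL --name my-app $TAG_COMMIT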

Related

GitLab CI/CD not taking latest code changes

So I have used GitLab CI/CD to deploy changes to a private Docker Hub repo, and I'm using a Digital Ocean droplet to run the server with Docker, but the changes are not being reflected in the Docker container running on Digital Ocean. Here's the config file.
variables:
  IMAGE_NAME: codelyzer/test-repo
  IMAGE_TAG: test-app-1.0

stages:
  - test
  - build
  - deploy

run_tests:
  stage: test
  image: node:16
  before_script:
    - npm install jest
  script:
    - npm run test

build_image:
  stage: build
  image: docker:20.10.16
  services:
    - docker:20.10.16-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
  before_script:
    - docker login -u $REGISTRY_USER -p $REGISTRY_PASS
  script:
    - docker build -t $IMAGE_NAME:$IMAGE_TAG .
    - docker push $IMAGE_NAME:$IMAGE_TAG

deploy:
  stage: deploy
  before_script:
    - chmod 400 $SSH_KEY
  script:
    - ssh -o StrictHostKeyChecking=no -i $SSH_KEY root@159.89.175.212 "
      docker login -u $REGISTRY_USER -p $REGISTRY_PASS &&
      docker image prune -f &&
      docker ps -aq | xargs docker stop | xargs docker rm &&
      docker run -d -p 5001:5001 $IMAGE_NAME:$IMAGE_TAG"
The Digital Ocean server wasn't fetching the latest image from the repository, so I added a docker prune as an additional step:
deploy:
  stage: deploy
  before_script:
    - chmod 400 $SSH_KEY
  script:
    - ssh -o StrictHostKeyChecking=no -i $SSH_KEY root@159.89.175.212 "
      docker login -u $REGISTRY_USER -p $REGISTRY_PASS &&
      docker ps -aq | (xargs docker stop || true) | (xargs docker rm || true) &&
      docker system prune -a -f &&
      docker run -d -p 5001:5001 $IMAGE_NAME:$IMAGE_TAG"

CI/CD script for build & deploy docker image in aws EC2

Can I build, push (to the GitLab registry), and deploy the image (to AWS EC2) using this CI/CD configuration?
stages:
  - build
  - deploy

build:
  # Use the official docker image.
  image: docker:latest
  stage: build
  services:
    - docker:dind
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  # Default branch leaves tag empty (= latest tag)
  # All other branches are tagged with the escaped branch name (commit ref slug)
  script:
    - |
      if [[ "$CI_COMMIT_BRANCH" == "$CI_DEFAULT_BRANCH" ]]; then
        tag=""
        echo "Running on default branch '$CI_DEFAULT_BRANCH': tag = 'latest'"
      else
        tag=":$CI_COMMIT_REF_SLUG"
        echo "Running on branch '$CI_COMMIT_BRANCH': tag = $tag"
      fi
    - docker build --pull -t "$CI_REGISTRY_IMAGE${tag}" .
    - docker push "$CI_REGISTRY_IMAGE${tag}"
  # Run this job in a branch where a Dockerfile exists

deploy:
  stage: deploy
  before_script:
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
  script:
    - ssh -o StrictHostKeyChecking=no ubuntu@18.0.0.82 "sudo docker login -u $CI_REGISTRY_USER --password-stdin $CI_REGISTRY; sudo docker pull $CI_REGISTRY_IMAGE${tag}; cd /home/crud_app; sudo docker-compose up -d"
  after_script:
    - sudo docker logout
  rules:
    - if: $CI_COMMIT_BRANCH
      exists:
        - Dockerfile
After running the pipeline, the build stage succeeds but the deploy stage fails. The configuration should build and deploy the image.
There are a couple of errors, but the overall pipeline seems good.
You cannot use ssh-add without having the agent running.
Why do you create the .ssh folder manually if afterwards you explicitly ignore the key that would be stored under known_hosts?
Using StrictHostKeyChecking=no is dangerous and not recommended.
On the before_script add the following:
before_script:
  - eval `ssh-agent`
  - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
  - mkdir -p ~/.ssh
  - ssh-keyscan -H 18.0.0.82 >> ~/.ssh/known_hosts
Also, don't use sudo with your ubuntu user; better to add it to the docker group, or connect through SSH as a user that is already in the docker group.
In case you don't already have a docker group on your EC2 instance, now is a good moment to configure it.
Log in to your EC2 instance and create the docker group:
$ sudo groupadd docker
Add the ubuntu user to the docker group:
$ sudo usermod -aG docker ubuntu
Now change your script to:
script:
  - echo $CI_REGISTRY_PASSWORD > docker_password
  - scp docker_password ubuntu@18.0.0.82:~/tmp/docker_password
  - ssh ubuntu@18.0.0.82 "cat ~/tmp/docker_password | docker login -u $CI_REGISTRY_USER --password-stdin $CI_REGISTRY; docker pull $CI_REGISTRY_IMAGE${tag}; cd /home/crud_app; docker-compose up -d; docker logout; rm -f ~/tmp/docker_password"
Also, remember that in the after_script you aren't on the EC2 instance but inside the runner image, so you don't need to log out of the registry there; it would be good to kill the SSH agent, though.
Final Job
deploy:
  stage: deploy
  before_script:
    - eval `ssh-agent`
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - ssh-keyscan -H 18.0.0.82 >> ~/.ssh/known_hosts
  script:
    - echo $CI_REGISTRY_PASSWORD > docker_password
    - scp docker_password ubuntu@18.0.0.82:~/tmp/docker_password
    - ssh ubuntu@18.0.0.82 "cat ~/tmp/docker_password | docker login -u $CI_REGISTRY_USER --password-stdin $CI_REGISTRY; docker pull $CI_REGISTRY_IMAGE${tag}; cd /home/crud_app; docker-compose up -d; docker logout; rm -f ~/tmp/docker_password"
  after_script:
    - kill $SSH_AGENT_PID
    - rm docker_password
  rules:
    - if: $CI_COMMIT_BRANCH
      exists:
        - Dockerfile

Pytest doesn't run with "docker-compose exec -T web_service pytest" in Gitlab CI Docker executor with Docker-Compose

The main reason I'm trying to use GitLab CI is to automate unit testing before deployment. I want to:
1. build my Docker images and push them to my image repository, then
2. ensure all my pytest unit tests pass, and finally
3. deploy to my production server.
However, my pytest command doesn't run at all if I include the -T flag as follows. It just instantly returns 0 and "succeeds", which is not correct because I have a failing test in there:
docker-compose exec -T web_service pytest /app/tests/ --junitxml=report.xml
On my local computer, I run the tests without the -T flag as follows, and it runs correctly (and the test fails as expected):
docker-compose exec web_service pytest /app/tests/ --junitxml=report.xml
But if I do that in GitLab CI, I get the error "the input device is not a TTY" if I omit the -T flag.
Here's some of my ".gitlab-ci.yml" file:
image:
  name: docker/compose:1.29.2
  # Override the entrypoint (important)
  entrypoint: [""]

# Must have this service
# Note: --privileged is required for Docker-in-Docker to function properly,
# but it should be used with care as it provides full access to the host environment
services:
  - docker:dind

stages:
  - build
  - test
  - deploy

variables:
  # DOCKER_HOST is essential
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2

before_script:
  # First test that gitlab-runner has access to Docker
  - docker --version
  # Set variable names
  - export IMAGE=$CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME
  - export MY_IMAGE=$IMAGE:web_service
  # Install bash
  - apk add --no-cache bash
  # Add environment variables stored in GitLab, to .env file
  - chmod +x ./setup_env.sh
  - bash ./setup_env.sh
  # Login to the Gitlab registry and pull existing images to use as cache
  - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY

build:
  stage: build
  script:
    # Pull the image for the build cache, and continue even if this image download fails (it'll fail the very first time)
    - docker pull $MY_IMAGE || true
    # Build and push Docker images to the Gitlab registry
    - docker-compose -f docker-compose.ci.build.yml build
    - docker push $MY_IMAGE
  only:
    - master

test:
  stage: test
  script:
    # Pull the image
    - docker pull $MY_IMAGE
    # Start the containers and run the tests before deployment
    - docker-compose -f docker-compose.ci.test.yml up -d
    # TODO: The following always succeeds instantly with "-T" flag,
    # but won't run at all if I exclude the "-T" flag...
    - docker-compose -f docker-compose.ci.test.yml exec -T web_service pytest --junitxml=report.xml
    - docker-compose -f docker-compose.ci.test.yml down
  artifacts:
    when: always
    paths:
      - report.xml
    reports:
      junit: report.xml
  only:
    - master

deploy:
  stage: deploy
  script:
    - bash deploy.sh
  only:
    - master
I've found a solution here. It's a quirk with docker-compose exec. Instead, I find the container ID with $(docker-compose -f docker-compose.ci.test.yml ps -q web_service) and use docker exec --tty <container_id> pytest ...
In the test stage, I've made the following substitution:
test:
  stage: test
  script:
    # docker-compose -f docker-compose.ci.test.yml exec -T myijack pytest /app/tests/ --junitxml=report.xml
    - docker exec --tty $(docker-compose -f docker-compose.ci.test.yml ps -q web_service) pytest /app/tests --junitxml=report.xml

run job on server using ssh key

I want to deploy to this server: serverBNP-prod1.
I tried to write the code below. Using this code, where should I add my local SSH key?
Thank you.
job_deploy_prod:
  stage: deploy
  only:
    - master
    - tags
  when: manual
  environment:
    name: prod
  variables:
    SERVER: serverBNP-prod1
    SSH_OPTS: -p 22 -l udoc -o BatchMode=true -o StrictHostKeyChecking=no
  script:
    - export VERSION=$(fgrep -m 1 -w version pom.xml | sed -re 's/^.*>(.*)<.*$/\1/')
    - ssh $SSH_OPTS -i $HOME/.ssh/id_rsa $SERVER "docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com"
    - ssh $SSH_OPTS -i $HOME/.ssh/id_rsa $SERVER "docker rm -f proj"
    - ssh $SSH_OPTS -i $HOME/.ssh/id_rsa $SERVER "docker pull registry.gitlab.com/bnp/proj:$VERSION"
    - ssh $SSH_OPTS -i $HOME/.ssh/id_rsa $SERVER "docker run -d -p 8080:8080 -e 'SPRING_PROFILES_ACTIVE=prod' -v /etc/localtime:/etc/localtime -v /etc/timezone:/etc/timezone --name proj registry.gitlab.com/bnp/proj:$VERSION"
  tags:
    - prod
You can either:
use a Settings / CI-CD / Variable of type File, in which you will put your private key data (see the sketch after this list), or
if you only have a username and password for your server, use the sshpass command and provide an SSHPASS environment variable, still in the CI-CD variables section.
More details on how to use the GitLab CI/CD environment variables (File type, security, etc.) can be found here:
https://docs.gitlab.com/ce/ci/variables/
You can mask variables, but be aware that, contrary to Jenkins, there is no way to remove the "Reveal value" button when a user has sufficient rights in GitLab to view/edit the settings, so you will have to select your project rights carefully, e.g. by giving other people the Developer role but not the Maintainer one (which allows editing the settings).
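A minimal sketch of the File-variable option, assuming a File-type CI/CD variable with the hypothetical name SSH_PRIVATE_KEY_FILE that holds the private key; GitLab writes its contents to a temporary file and puts that file's path in the variable, so it can be passed straight to ssh -i:

  script:
    # $SSH_PRIVATE_KEY_FILE expands to the path of a temp file containing the key
    - chmod 600 "$SSH_PRIVATE_KEY_FILE"
    - ssh $SSH_OPTS -i "$SSH_PRIVATE_KEY_FILE" $SERVER "docker pull registry.gitlab.com/bnp/proj:$VERSION"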

Gitlab CI/CD using FTP for .NET Core

I'm trying to automatically build, test, and deploy my .NET Core app; so far it builds and tests, but it won't deploy. The GitLab pipeline shows that the job succeeded, but it didn't actually work. This is my Dockerfile:
FROM microsoft/dotnet:2.2-sdk AS build-env
WORKDIR /source
RUN curl -sL https://deb.nodesource.com/setup_11.x | bash - \
&& apt-get install -y nodejs
COPY ./src/*.csproj .
RUN dotnet restore
COPY ./src/ ./
RUN dotnet publish "./Spa.csproj" --output "./dist" --configuration Release --no-restore
FROM microsoft/dotnet:2.2-aspnetcore-runtime
WORKDIR /app
COPY --from=build-env /source/dist .
RUN apt-get update \
&& apt-get install -y php-cli
COPY deployment.ini /app
EXPOSE 80
ENTRYPOINT ["dotnet", "Spa.dll"]
and this is what my .gitlab-ci.yml file looks like:
# ### Define variables
#
variables:
  OBJECTS_DIRECTORY: 'obj'
  NUGET_PACKAGES_DIRECTORY: '.nuget'
  SOURCE_CODE_PATH: 'src/'

# ### Define stage list
stages:
  - build
  - test
  - deploy

cache:
  # Per-stage and per-branch caching.
  key: "$CI_JOB_STAGE-$CI_COMMIT_REF_SLUG"
  paths:
    - '$SOURCE_CODE_PATH$OBJECTS_DIRECTORY/project.assets.json'
    - '$SOURCE_CODE_PATH$OBJECTS_DIRECTORY/*.csproj.nuget.*'
    - '$NUGET_PACKAGES_DIRECTORY'
    - '**/node_modules/'

build:
  image: docker:stable
  stage: build
  services:
    - docker:dind
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker build -t registry.gitlab.com/chinloyal/spa .
    - docker push registry.gitlab.com/chinloyal/spa

tests:
  image: microsoft/dotnet:2.2-sdk
  stage: test
  before_script:
    - curl -sL https://deb.nodesource.com/setup_11.x | bash -
    - apt-get install -y nodejs
  script:
    - dotnet test --no-restore Tests/

deploy:
  image: docker:stable
  services:
    - docker:dind
  stage: deploy
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker pull registry.gitlab.com/chinloyal/spa
    - docker run --name spa -p 80:80 -d registry.gitlab.com/chinloyal/spa
  script:
    - docker exec -d spa curl -S "https://gitlab.com/chinloyal/ftp-deploy-tool/raw/master/deployment.phar" --output deployment.phar
    - docker exec -d spa bash -c "echo remote = $FTP_HOST >> deployment.ini"
    - docker exec -d spa bash -c "echo user = $FTP_USER >> deployment.ini"
    - docker exec -d spa bash -c "echo password = $FTP_PASSWORD >> deployment.ini"
    - docker exec -d spa php deployment.phar deployment.ini
  environment:
    name: production
  only:
    - master
The line docker exec -d spa php deployment.phar deployment.ini is the one that is supposed to upload the files from inside the Docker container. But I believe that because GitLab ends the job immediately after that line, the process inside the container just ends.
I've tried using the registry image (registry.gitlab.com/chinloyal/spa) as the image for deploy, but every time I try to use it, it just starts running the project on GitLab until it times out or until I cancel it.
I only have FTP access to the server, by the way, because it's a shared server. The FTP tool I'm using to deploy is here. I've tried it before, so I know it works.
I figured it out: all the docker exec commands used the -d flag, so I removed it so that GitLab would show the output of the files being downloaded and uploaded. I changed docker exec -d spa curl -S "https://gitlab.com/chinloyal/ftp-deploy-tool/raw/master/deployment.phar" --output deployment.phar to docker exec spa curl -S "https://gitlab.com/chinloyal/ftp-deploy-tool/raw/master/deployment.phar" --output deployment.phar.
I also changed docker exec -d spa php deployment.phar deployment.ini to docker exec spa php deployment.phar deployment.ini.
Before, they were running in detached mode, so GitLab thought they had finished and just moved on to the next command; removing the -d flag lets GitLab wait.
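Put together, the deploy script from the question would look roughly like this with the -d flags dropped (same commands, just run in the foreground so the job waits for each one to finish and shows its output):

  script:
    # run each command in the foreground so the job waits for it to complete
    - docker exec spa curl -S "https://gitlab.com/chinloyal/ftp-deploy-tool/raw/master/deployment.phar" --output deployment.phar
    - docker exec spa bash -c "echo remote = $FTP_HOST >> deployment.ini"
    - docker exec spa bash -c "echo user = $FTP_USER >> deployment.ini"
    - docker exec spa bash -c "echo password = $FTP_PASSWORD >> deployment.ini"
    - docker exec spa php deployment.phar deployment.ini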
