Gitlab CI/CD using FTP for .NET Core - docker

I'm trying to automatically build, test and deploy my .NET Core app, and so far it builds and tests but it won't deploy. The GitLab pipeline shows that the job succeeded, but it didn't actually work. This is my Dockerfile:
FROM microsoft/dotnet:2.2-sdk AS build-env
WORKDIR /source
RUN curl -sL https://deb.nodesource.com/setup_11.x | bash - \
    && apt-get install -y nodejs
COPY ./src/*.csproj .
RUN dotnet restore
COPY ./src/ ./
RUN dotnet publish "./Spa.csproj" --output "./dist" --configuration Release --no-restore

FROM microsoft/dotnet:2.2-aspnetcore-runtime
WORKDIR /app
COPY --from=build-env /source/dist .
RUN apt-get update \
    && apt-get install -y php-cli
COPY deployment.ini /app
EXPOSE 80
ENTRYPOINT ["dotnet", "Spa.dll"]
and this is what my .gitlab-ci.yml file looks like:
# ### Define variables
#
variables:
  OBJECTS_DIRECTORY: 'obj'
  NUGET_PACKAGES_DIRECTORY: '.nuget'
  SOURCE_CODE_PATH: 'src/'

# ### Define stage list
stages:
  - build
  - test
  - deploy

cache:
  # Per-stage and per-branch caching.
  key: "$CI_JOB_STAGE-$CI_COMMIT_REF_SLUG"
  paths:
    - '$SOURCE_CODE_PATH$OBJECTS_DIRECTORY/project.assets.json'
    - '$SOURCE_CODE_PATH$OBJECTS_DIRECTORY/*.csproj.nuget.*'
    - '$NUGET_PACKAGES_DIRECTORY'
    - '**/node_modules/'
build:
  image: docker:stable
  stage: build
  services:
    - docker:dind
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker build -t registry.gitlab.com/chinloyal/spa .
    - docker push registry.gitlab.com/chinloyal/spa
tests:
  image: microsoft/dotnet:2.2-sdk
  stage: test
  before_script:
    - curl -sL https://deb.nodesource.com/setup_11.x | bash -
    - apt-get install -y nodejs
  script:
    - dotnet test --no-restore Tests/
deploy:
  image: docker:stable
  services:
    - docker:dind
  stage: deploy
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker pull registry.gitlab.com/chinloyal/spa
    - docker run --name spa -p 80:80 -d registry.gitlab.com/chinloyal/spa
  script:
    - docker exec -d spa curl -S "https://gitlab.com/chinloyal/ftp-deploy-tool/raw/master/deployment.phar" --output deployment.phar
    - docker exec -d spa bash -c "echo remote = $FTP_HOST >> deployment.ini"
    - docker exec -d spa bash -c "echo user = $FTP_USER >> deployment.ini"
    - docker exec -d spa bash -c "echo password = $FTP_PASSWORD >> deployment.ini"
    - docker exec -d spa php deployment.phar deployment.ini
  environment:
    name: production
  only:
    - master
The line docker exec -d spa php deployment.phar deployment.ini is the one that is supposed to upload the files from inside the Docker container, but I believe that because GitLab ends the job immediately after that line, the process inside the container is simply cut short.
I've tried using the registry image (registry.gitlab.com/chinloyal/spa) as the image for the deploy job, but every time I try, it just starts running the project on GitLab until it times out or until I cancel it.
I only have FTP access to the server, by the way, because it's a shared server. The FTP tool I'm using to deploy is here. I've tried it before, so I know it works.

I figured it out: all the docker exec commands used the -d flag, so I removed it so that GitLab would show the output of the files being downloaded and uploaded. I changed docker exec -d spa curl -S "https://gitlab.com/chinloyal/ftp-deploy-tool/raw/master/deployment.phar" --output deployment.phar to docker exec spa curl -S "https://gitlab.com/chinloyal/ftp-deploy-tool/raw/master/deployment.phar" --output deployment.phar.
I also changed docker exec -d spa php deployment.phar deployment.ini to docker exec spa php deployment.phar deployment.ini.
Before, the commands were running in detached mode, so GitLab thought they had finished and just moved on to the next command; removing the -d flag makes GitLab wait for each one to complete.
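For reference, this is roughly what the deploy job's script section ends up looking like with the -d flags removed (same container name and variables as above); it is the minimal change rather than a rework of the job:

  script:
    - docker exec spa curl -S "https://gitlab.com/chinloyal/ftp-deploy-tool/raw/master/deployment.phar" --output deployment.phar
    - docker exec spa bash -c "echo remote = $FTP_HOST >> deployment.ini"
    - docker exec spa bash -c "echo user = $FTP_USER >> deployment.ini"
    - docker exec spa bash -c "echo password = $FTP_PASSWORD >> deployment.ini"
    - docker exec spa php deployment.phar deployment.ini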

Related

GitLab CI/CD not taking latest code changes

I have used GitLab CI/CD to deploy changes to a private Docker Hub repo, and a DigitalOcean droplet runs the server using Docker, but the changes are not being reflected in the container running on DigitalOcean. Here's the config file.
variables:
  IMAGE_NAME: codelyzer/test-repo
  IMAGE_TAG: test-app-1.0

stages:
  - test
  - build
  - deploy

run_tests:
  stage: test
  image: node:16
  before_script:
    - npm install jest
  script:
    - npm run test

build_image:
  stage: build
  image: docker:20.10.16
  services:
    - docker:20.10.16-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
  before_script:
    - docker login -u $REGISTRY_USER -p $REGISTRY_PASS
  script:
    - docker build -t $IMAGE_NAME:$IMAGE_TAG .
    - docker push $IMAGE_NAME:$IMAGE_TAG

deploy:
  stage: deploy
  before_script:
    - chmod 400 $SSH_KEY
  script:
    - ssh -o StrictHostKeyChecking=no -i $SSH_KEY root@159.89.175.212 "
      docker login -u $REGISTRY_USER -p $REGISTRY_PASS &&
      docker image prune -f &&
      docker ps -aq | xargs docker stop | xargs docker rm &&
      docker run -d -p 5001:5001 $IMAGE_NAME:$IMAGE_TAG"
The DigitalOcean server wasn't fetching the latest image from the repository, so I added docker system prune as an additional step.
deploy:
  stage: deploy
  before_script:
    - chmod 400 $SSH_KEY
  script:
    - ssh -o StrictHostKeyChecking=no -i $SSH_KEY root@159.89.175.212 "
      docker login -u $REGISTRY_USER -p $REGISTRY_PASS &&
      docker ps -aq | (xargs docker stop || true) | (xargs docker rm || true) &&
      docker system prune -a -f &&
      docker run -d -p 5001:5001 $IMAGE_NAME:$IMAGE_TAG"
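Worth noting why the image went stale in the first place: because the tag test-app-1.0 never changes, the droplet's Docker daemon happily reuses its locally cached copy unless something forces a fresh fetch. Pruning works, but an explicit docker pull of the tag before docker run is a lighter-weight alternative; a rough sketch using the same variables and host as above:

deploy:
  stage: deploy
  before_script:
    - chmod 400 $SSH_KEY
  script:
    - ssh -o StrictHostKeyChecking=no -i $SSH_KEY root@159.89.175.212 "
      docker login -u $REGISTRY_USER -p $REGISTRY_PASS &&
      docker pull $IMAGE_NAME:$IMAGE_TAG &&
      docker ps -aq | (xargs docker stop || true) | (xargs docker rm || true) &&
      docker run -d -p 5001:5001 $IMAGE_NAME:$IMAGE_TAG"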

How to pass environment variable to docker run in gitlab ci cd

I am trying to pass environment variables to my Node.js Docker image while running it, as shown below.
stages:
  - publish
  - deploy

variables:
  TAG_LATEST: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:latest
  TAG_COMMIT: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:$CI_COMMIT_SHORT_SHA

publish:
  image: docker:latest
  stage: publish
  services:
    - docker:dind
  script:
    - touch env.txt
    - docker build -t $TAG_COMMIT -t $TAG_LATEST .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    - docker push $TAG_COMMIT
    - docker push $TAG_LATEST

deploy:
  image: alpine:latest
  stage: deploy
  tags:
    - deployment
  script:
    - chmod og= $ID_RSA
    - apk update && apk add openssh-client
    - echo "AWS_ACCESS_KEY_ID"=$AWS_ACCESS_KEY_ID >> "env.txt"
    - echo "AWS_S3_BUCKET"=$AWS_S3_BUCKET >> "env.txt"
    - echo "AWS_S3_REGION"=$AWS_S3_REGION >> "env.txt"
    - echo "AWS_SECRET_ACCESS_KEY"=$AWS_SECRET_ACCESS_KEY >> "env.txt"
    - echo "DB_URL"=$DB_URL >> "env.txt"
    - echo "JWT_EXPIRES_IN"=$JWT_EXPIRES_IN >> "env.txt"
    - echo "OTP_EXPIRE_TIME_SECONDS"=$OTP_EXPIRE_TIME_SECONDS >> "env.txt"
    - echo "TWILIO_ACCOUNT_SID"=$TWILIO_ACCOUNT_SID >> "env.txt"
    - echo "TWILIO_AUTH_TOKEN"=$TWILIO_AUTH_TOKEN >> "env.txt"
    - echo "TWILLIO_SENDER"=$TWILLIO_SENDER >> "env.txt"
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY"
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker pull $TAG_COMMIT"
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker container rm -f my-app || true"
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker run --env-file env.txt -d -p 8080:8080 --name my-app $TAG_COMMIT"
  environment:
    name: development
    url: 90900
  only:
    - master
I am running the command docker run --env-file env.txt, but it gives me the error docker: open env.txt: no such file or directory.
How can I solve this issue and pass multiple variables to my docker run command?
Which job is failing? In your deploy job, you are creating env.txt locally and using SSH to run the docker commands remotely, but you never scp your local env.txt to $SERVER_USER@$SERVER_IP for the remote process to pick it up.
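A minimal sketch of that fix, assuming the same variables as in the question and that env.txt should land in the remote user's home directory (which is where the subsequent docker run would look for it):

    - scp -i $ID_RSA -o StrictHostKeyChecking=no env.txt $SERVER_USER@$SERVER_IP:~/env.txt
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker run --env-file ~/env.txt -d -p 8080:8080 --name my-app $TAG_COMMIT"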
I had the same issue using GitLab CI/CD, i.e. trying to inject env vars that were referenced in the project's .env file, via the runner (Docker executor), into the output Docker container.
We don't want to commit any sensitive info into Git, so one option is to save the values in a file on the server and include them via the --env-file flag. But the GitLab runner creates a new container for every run, so this isn't really possible: the host running the YAML script is ephemeral and is not the actual server that the GitLab runner was installed onto.
The suggestion by @dmoonfire to scp the file over sounded like a good solution, but I couldn't get it to work for copying a file from outside into the GitLab runner. I'd need to copy the public key from the executor to the GitLab runner server, but the Docker executor is ephemeral.
I found the simplest solution was to use the GitLab CI/CD variable settings. It's possible to mask variables and restrict them to protected branches or protected tags, etc. These get injected into the job's container, so your .env file can access them.
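As a rough sketch of that approach combined with the scp suggestion above (DB_URL and JWT_EXPIRES_IN stand in for whatever you define and mask under Settings > CI/CD > Variables):

  script:
    # The masked variables are injected into the job's environment by GitLab, so the
    # job can generate env.txt from them without committing anything sensitive.
    - printf 'DB_URL=%s\nJWT_EXPIRES_IN=%s\n' "$DB_URL" "$JWT_EXPIRES_IN" > env.txt
    - scp -i $ID_RSA -o StrictHostKeyChecking=no env.txt $SERVER_USER@$SERVER_IP:~/env.txt
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker run --env-file ~/env.txt -d -p 8080:8080 --name my-app $TAG_COMMIT"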

Getting error 'jq: command not found' in Gitlab pipeline for docker

I am building Docker images and deploying them to an AWS ECS service using a GitLab pipeline, but I'm getting the error 'jq: command not found' despite having apparently installed the jq package successfully (refer to the images).
Error Image
jq package installation step status
.gitlab-ci.yml file for reference.
image: docker:latest

variables:
  DOCKER_DRIVER: overlay2

services:
  - docker:dind

stages:
  - build_dev
  - deploy_dev

before_script:
  - docker run --rm docker:git apk update
  - docker run --rm docker:git apk upgrade
  - docker run --rm docker:git apk add --no-cache curl jq
  - docker run --rm docker:git apk add python3 py3-pip
  - pip3 install awscli
  - aws configure set aws_access_key_id $AWS_ACCESS_KEY
  - aws configure set aws_secret_access_key $AWS_SECRET_KEY
  - aws configure set region $AWS_DEFAULT_REGION
  - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_LOGIN_URI

build_dev:
  stage: build_dev
  only:
    - dev
  script:
    - docker build -t research-report-phpfpm .
    - docker tag research-report-phpfpm:latest $REPOSITORY_URI_LARAVEL:latest
    - docker push $REPOSITORY_URI_LARAVEL:latest
    - docker build -t research-report-nginx -f Dockerfile_Nginx .
    - docker tag research-report-nginx:latest $REPOSITORY_URI_NGINX:latest
    - docker push $REPOSITORY_URI_NGINX:latest

deploy_dev:
  stage: deploy_dev
  script:
    - echo $REPOSITORY_URI_LARAVEL:$IMAGE_TAG
    - TASK_DEFINITION=$(aws ecs describe-task-definition --task-definition "$TASK_DEFINITION_NAME" --region "${AWS_DEFAULT_REGION}")
    - NEW_CONTAINER_DEFINTIION=$(echo $TASK_DEFINITION | jq --arg IMAGE "$REPOSITORY_URI_LARAVEL:$IMAGE_TAG" '.taskDefinition.containerDefinitions[0].image = $IMAGE | .taskDefinition.containerDefinitions[0]')
    - echo "Registering new container definition..."
    - aws ecs register-task-definition --region "${AWS_DEFAULT_REGION}" --family "${TASK_DEFINITION_NAME}" --container-definitions "${NEW_CONTAINER_DEFINTIION}"
    - echo "Updating the service..."
    - aws ecs update-service --region "${AWS_DEFAULT_REGION}" --cluster "${CLUSTER_NAME}" --service "${SERVICE_NAME}" --task-definition "${TASK_DEFINITION_NAME}"
NEW_CONTAINER_DEFINTIION=$(echo $TASK_DEFINITION | jq --arg IMAGE "$REPOSITORY_URI_LARAVEL:$IMAGE_TAG" '.taskDefinition.containerDefinitions[0].image = $IMAGE | .taskDefinition.containerDefinitions[0]')
This command is not running inside the Docker container where you are installing jq.
Your GitLab CI/CD configuration runs within a container based on the docker:latest image.
You've made the docker:dind image available at runtime as a service container (presumably to avoid having to start dockerd manually).
You're then running commands in containers created from the docker:git image, which are separate from the context of this build.
You are also running these commands with --rm, which guarantees that whatever apk add installs is thrown away as soon as each command finishes.
Not having used GitLab pipelines myself, I can't be 100% sure, but I'd be 90% sure that one of these will resolve your issue:
1. Install the packages locally in the already running container:
   apk update && apk add curl jq python3 py3-pip
2. Don't use --rm.
3. Change the installation of jq from the docker:git container to docker:latest:
   docker run docker:latest apk update
   docker run docker:latest apk add curl jq python3 py3-pip
Given that the command:
- NEW_CONTAINER_DEFINTIION=$(echo $TASK_DEFINITION | jq --arg IMAGE "$REPOSITORY_URI_LARAVEL:$IMAGE_TAG" '.taskDefinition.containerDefinitions[0].image = $IMAGE | .taskDefinition.containerDefinitions[0]')
is actually running in the context of the pipeline (presumably within docker:latest), my bet is on option 1: you are running jq on the 'host', but installing jq inside a container within that host.
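A rough sketch of what option 1 could look like in the before_script (same variables as the original config; whether pip3 install awscli works inside the Alpine-based docker:latest image is an assumption worth verifying):

before_script:
  # Install the tools into the job's own container (docker:latest), not a throwaway docker:git container.
  - apk update && apk add --no-cache curl jq python3 py3-pip
  - pip3 install awscli
  - aws configure set aws_access_key_id $AWS_ACCESS_KEY
  - aws configure set aws_secret_access_key $AWS_SECRET_KEY
  - aws configure set region $AWS_DEFAULT_REGION
  - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_LOGIN_URI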

Docker doesn't work when using getsentry/sentry-cli

I want to upload my frontend to Sentry, but I need to get the folder using Docker commands. However, when I use image: getsentry/sentry-cli, docker doesn't work, and e.g. in before_script I get an error saying that docker doesn't exist.
sentry_job:
  stage: sentry_job
  image: getsentry/sentry-cli
  services:
    - docker:18-dind
  before_script:
    - docker login -u gitlab-ci-token -p "$CI_JOB_TOKEN" registry.gitlab.cz
  script:
    # script...
    # Get the dist folder from the image
    - mkdir frontend_dist
    - docker run --rm -v $PWD/frontend_dist:/mounted --entrypoint="" $IMAGE /bin/sh -c "cp /frontend/dist /mounted"
    - ls frontend_dist
  tags:
    - dind
How do I fix that?
To achieve what you want, you need to use a single job (to have the same build context) and specify docker:stable as the job image (along with docker:stable-dind as a service).
This setup is called docker-in-docker and this is the standard way to allow a GitLab CI script to run docker commands (see doc).
Thus, you could slightly adapt your .gitlab-ci.yml code like this:
sentry_job:
  stage: sentry_job
  image: docker:stable
  services:
    - docker:stable-dind
  variables:
    IMAGE: "${CI_REGISTRY_IMAGE}:latest"
  before_script:
    - docker login -u gitlab-ci-token -p "${CI_JOB_TOKEN}" registry.gitlab.cz
  script:
    - docker pull "$IMAGE"
    - mkdir -v frontend_dist
    - docker run --rm -v "$PWD/frontend_dist:/mounted" --entrypoint="" "$IMAGE" /bin/sh -c "cp -v /frontend/dist /mounted"
    - ls frontend_dist
    - docker pull getsentry/sentry-cli
    - docker run --rm -v "$PWD/frontend_dist:/work" getsentry/sentry-cli
  tags:
    - dind
Note: the docker pull commands are optional (they ensure Docker will use the latest version of the images).
Also, you may need to change the definition of the IMAGE variable.
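One more hedged sketch: the final docker run above doesn't pass a command to sentry-cli yet, and what it should run depends on your Sentry setup. Assuming the image's entrypoint is sentry-cli, and using placeholder org/project names and a release derived from the commit SHA, a source-map upload could look roughly like this:

    - docker run --rm -v "$PWD/frontend_dist:/work" -e SENTRY_AUTH_TOKEN="$SENTRY_AUTH_TOKEN" -e SENTRY_ORG="my-org" -e SENTRY_PROJECT="my-frontend" getsentry/sentry-cli releases files "$CI_COMMIT_SHORT_SHA" upload-sourcemaps /work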

Create react app + Gitlab CI + Digital Ocean droplet - Pipeline succeeds but Docker container is deleted right after

I'm taking my first steps with Docker/CI/CD.
For that, I'm trying to deploy a plain create-react-app to my DigitalOcean droplet (Docker One-Click Application) using GitLab CI. These are my files:
Dockerfile
# STAGE 1 - Building assets
FROM node:alpine as building_assets_stage
WORKDIR /workspace
## Preparing the image (installing dependencies and building static files)
COPY ./package.json .
RUN yarn install
COPY . .
RUN yarn build
# STAGE 2 - Serving static content
FROM nginx as serving_static_content_stage
ENV NGINX_STATIC_FILE_SERVING_PATH=/usr/share/nginx/html
EXPOSE 80
COPY --from=building_assets_stage /workspace/build ${NGINX_STATIC_FILE_SERVING_PATH}
.gitlab-ci.yml
## Use a Docker image with "docker-compose" installed on top of it.
image: tmaier/docker-compose:latest

services:
  - docker:dind

variables:
  DOCKER_CONTAINER_NAME: ${CI_PROJECT_NAME}
  DOCKER_IMAGE_TAG: ${SECRETS_DOCKER_LOGIN_USERNAME}/${CI_PROJECT_NAME}:latest

before_script:
  ## Install the ssh agent (so we can access the Digital Ocean droplet) and run it.
  - apk update && apk add openssh-client
  - eval $(ssh-agent -s)
  ## Write the environment variable value to the agent store, create the ssh directory and give it the right permissions.
  - echo "$SECRETS_DIGITAL_OCEAN_DROPLET_SSH_KEY" | ssh-add -
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  ## Make sure that ssh will trust the new host, instead of asking
  - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
  ## Test that everything is set up correctly
  - ssh -T ${SECRETS_DIGITAL_OCEAN_DROPLET_USER}@${SECRETS_DIGITAL_OCEAN_DROPLET_IP}

stages:
  - deploy

deploy:
  stage: deploy
  script:
    ## Log this machine into the Docker registry, create a production build and push it to the registry.
    - docker login -u ${SECRETS_DOCKER_LOGIN_USERNAME} -p ${SECRETS_DOCKER_LOGIN_PASSWORD}
    - docker build -t ${DOCKER_IMAGE_TAG} .
    - docker push ${DOCKER_IMAGE_TAG}
    ## Connect to the Digital Ocean droplet, stop/remove all running containers, pull the latest image and execute it.
    - ssh -T ${SECRETS_DIGITAL_OCEAN_DROPLET_USER}@${SECRETS_DIGITAL_OCEAN_DROPLET_IP}
    - docker ps -q --filter "name=${DOCKER_CONTAINER_NAME}" | grep -q . && docker stop ${DOCKER_CONTAINER_NAME} && docker rm -fv ${DOCKER_CONTAINER_NAME} && docker rmi -f ${DOCKER_IMAGE_TAG}
    - docker run -d -p 80:80 --name ${DOCKER_CONTAINER_NAME} ${DOCKER_IMAGE_TAG}
    # Everything works, exit.
    - exit 0
  only:
    - master
In a nutshell, on GitLab CI, I do the following:
(before_script) Install the ssh agent and copy my private SSH key to the machine, so we can connect to the DigitalOcean droplet;
(deploy) Build my image and push it to my public Docker Hub repository;
(deploy) Connect to my DigitalOcean droplet via SSH, pull the image I've just built and run it.
The problem is that if I do everything from my computer's terminal, the container is created and the application is deployed successfully.
If I execute it from the GitLab CI job, the container is generated but nothing is deployed, because the container dies right after (click here to see the CI job output).
I can guarantee that the container is being erased, because if I manually SSH into the server and run docker ps -a, it doesn't list anything.
I'm mostly confused by the fact that this image's CMD is CMD ["nginx", "-g", "daemon off;"], which shouldn't let my container get deleted, because it has a process running.
What am I doing wrong? I'm lost.
Thank you in advance.
My question was answered by d g - thank you very much!
The problem lies in the fact that I was connecting to my DigitalOcean droplet via SSH and then issuing the docker commands as separate script lines, as if they were running inside that SSH session, when I should have been passing the entire command to be executed as an argument to the ssh instruction.
I changed my .gitlab-ci.yml file from:
    ## Connect to the Digital Ocean droplet, stop/remove all running containers, pull the latest image and execute it.
    - ssh -T ${SECRETS_DIGITAL_OCEAN_DROPLET_USER}@${SECRETS_DIGITAL_OCEAN_DROPLET_IP}
    - docker ps -q --filter "name=${DOCKER_CONTAINER_NAME}" | grep -q . && docker stop ${DOCKER_CONTAINER_NAME} && docker rm -fv ${DOCKER_CONTAINER_NAME} && docker rmi -f ${DOCKER_IMAGE_TAG}
    - docker run -d -p 80:80 --name ${DOCKER_CONTAINER_NAME} ${DOCKER_IMAGE_TAG}
To:
    # Execute as follows:
    # ssh -t digital-ocean-server "docker cmd1; docker cmd2"
    - ssh -T ${SECRETS_DIGITAL_OCEAN_DROPLET_USER}@${SECRETS_DIGITAL_OCEAN_DROPLET_IP} "docker ps -q --filter \"name=${DOCKER_CONTAINER_NAME}\" | grep -q . && docker stop ${DOCKER_CONTAINER_NAME} && docker rm -fv ${DOCKER_CONTAINER_NAME} && docker rmi -f ${DOCKER_IMAGE_TAG}; docker run -d -p 80:80 --name ${DOCKER_CONTAINER_NAME} ${DOCKER_IMAGE_TAG}"
