GitLab CI Docker WORKDIR not being created

I am trying to deploy my NodeJS repo to a DO droplet via GitLab CI. I have been following this guide to do so. What is odd is that the deployment pipeline seems to succeed, but if I SSH into the box I can see that the app is not running, as it has failed to find a package.json in /usr/src/app, which is the WORKDIR my Dockerfile points to.
gitlab-ci.yml
cache:
  key: "${CI_COMMIT_REF_NAME} node:latest"
  paths:
    - node_modules/
    - .yarn

stages:
  - build
  - release
  - deploy

build:
  stage: build
  image: node:latest
  script:
    - yarn
  artifacts:
    paths:
      - node_modules/

release:
  stage: release
  image: docker:latest
  only:
    - master
  services:
    - docker:dind
  variables:
    DOCKER_DRIVER: "overlay"
  before_script:
    - docker version
    - docker info
    - docker login -u ${CI_REGISTRY_USER} -p ${CI_BUILD_TOKEN} ${CI_REGISTRY}
  script:
    - docker build -t ${CI_REGISTRY}/${CI_PROJECT_PATH}:latest --pull .
    - docker push ${CI_REGISTRY}/${CI_PROJECT_PATH}:latest
  after_script:
    - docker logout ${CI_REGISTRY}

deploy:
  stage: deploy
  image: gitlab/dind:latest
  only:
    - master
  environment: production
  when: manual
  before_script:
    - mkdir -p ~/.ssh
    - echo "${DEPLOY_SERVER_PRIVATE_KEY}" | tr -d '\r' > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - eval "$(ssh-agent -s)"
    - ssh-add ~/.ssh/id_rsa
    - ssh-keyscan -H ${DEPLOYMENT_SERVER_IP} >> ~/.ssh/known_hosts
  script:
    - printf "DB_URL=${DB_URL}\nDB_NAME=${DB_NAME}\nPORT=3000" > .env
    - scp -r ./.env ./docker-compose.yml root@${DEPLOYMENT_SERVER_IP}:~/
    - ssh root@${DEPLOYMENT_SERVER_IP} "docker login -u ${CI_REGISTRY_USER} -p ${CI_REGISTRY_PASSWORD} ${CI_REGISTRY}; docker-compose rm -sf scraper; docker pull ${CI_REGISTRY}/${CI_PROJECT_PATH}:latest; docker-compose up -d"
Dockerfile
FROM node:10
WORKDIR /usr/src/app
COPY package.json ./
RUN yarn
COPY . .
EXPOSE 3000
CMD [ "yarn", "start" ]
docker-compose.yml
version: "3"
services:
scraper:
build: .
image: registry.gitlab.com/arby-better/scraper:latest
volumes:
- .:/usr/src/app
- /usr/src/app/node_modules
ports:
- 3000:3000
environment:
- NODE_ENV=production
env_file:
- .env
I'm using GitLab Shared Runners for my pipeline. The pipeline appears to execute completely fine, apart from a symlink failure at the very end, which I don't think is anything to worry about. But if I SSH into my box, go to where the docker-compose file was copied, and inspect the container, I can see that Docker has not created /usr/src/app.
Versions:
Docker: 19.03.1
Docker-compose: 1.22.0
My DO box is the Docker 1-click image, by the way. Any help appreciated!
EDIT
I have altered my Dockerfile to try to force the directory creation, adding RUN mkdir -p /usr/src/app before the line declaring it as the working dir. This still does not create the directory...
When I look at the container statuses (docker-compose ps), I can see that the containers are in an exited state, with exit code 1 or 254. Any idea why?

Your compose file is designed for a development environment, where the code directory is replaced by a volume mount to the code on the developer's machine. You don't have this directory on the production host, nor should you depend on code outside of the image in production; that would defeat the purpose of copying the code into the image in the first place.
version: "3"
services:
scraper:
build: .
image: registry.gitlab.com/arby-better/scraper:latest
# Comment out or delete these lines, they do not belong in production
#volumes:
# - .:/usr/src/app
# - /usr/src/app/node_modules
ports:
- 3000:3000
environment:
- NODE_ENV=production
env_file:
- .env
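If you still want the live-reload bind mounts for local development, one common pattern (a sketch, not part of the fix above) is to keep the production compose file clean and put the development-only mounts in a docker-compose.override.yml, which Compose merges automatically when you run docker-compose up without any -f flags:
docker-compose.override.yml (hypothetical, local development only)
version: "3"
services:
  scraper:
    # these mounts shadow the image's /usr/src/app; they exist only on
    # the developer's machine and are never copied to the droplet
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
Because the deploy job only copies ./docker-compose.yml to the droplet, the override file never reaches production, and the container uses the code baked into the image.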

Related

gitlab variable is not accessible in the docker-compose.yml file

I am trying to create a CI/CD pipeline using GitLab and am now facing an issue with a GitLab variable: it is not accessible inside the docker-compose file.
This is my gitlab-ci.yml file:
step-production:
  stage: production
  before_script:
    - export APP_ENVIRONMENT="$PRODUCTION_APP_ENVIRONMENT"
  only:
    - /^release.*$/
  tags:
    - release-tag
  script:
    - echo production env value is "$PRODUCTION_APP_ENVIRONMENT"
    - sudo curl -L "https://github.com/docker/compose/releases/download/1.26.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
    - sudo chmod +x /usr/local/bin/docker-compose
    - sudo docker-compose -f docker-compose.prod.yml build --no-cache
    - sudo docker-compose -f docker-compose.prod.yml up -d
  when: manual
And this is my docker-compose file:
version: "3"
services:
redis:
image: redis:latest
app:
build:
context: .
environment:
- APP_ENVIRONMENT=${PRODUCTION_APP_ENVIRONMENT}
command: python manage.py runserver 0.0.0.0:8000
volumes:
- ./app:/app
ports:
- "8000:8000"
restart: on-failure:5
# network_mode: "host"
Can someone help me access the GitLab variable inside the docker-compose file? I have spent more than a day on this issue.
The issue was resolved by the following method:
Edit the following line in the gitlab-ci.yml file:
sudo docker-compose -f docker-compose.prod.yml build --build-arg DB_NAME=$DEVELOPMENT_DB_NAME --build-arg DB_HOST=$DEVELOPMENT_DB_HOST --no-cache
Define the values of $DEVELOPMENT_DB_NAME and $DEVELOPMENT_DB_HOST in the GitLab CI/CD variables section.
In the Dockerfile, add ARG and ENV sections as follows:
ARG DB_NAME
ARG DB_HOST
ENV DB_NAME=${DB_NAME}
ENV DB_HOST=${DB_HOST}
Make sure that no environment variables with the same names are defined in the docker-compose.yml file.
That's it!
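As an aside (a hedged alternative, not part of the fix above), docker-compose can pass the same values itself via the args key under build, which makes the --build-arg flags unnecessary; a minimal sketch, assuming the same DB_NAME/DB_HOST build args:
services:
  app:
    build:
      context: .
      args:
        # Compose substitutes these from the environment of the shell
        # that invokes `docker-compose build`
        DB_NAME: ${DEVELOPMENT_DB_NAME}
        DB_HOST: ${DEVELOPMENT_DB_HOST}
The Dockerfile keeps the same ARG/ENV lines; only the way the values arrive changes.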

Set secret variable when using Docker in TravisCI

I am building a backend with NodeJS and would like to use TravisCI and Docker to run tests.
In my code, I have a secret environment variable: process.env.SOME_API_KEY
This is my Dockerfile.dev
FROM node:alpine
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "run", "dev"]
My docker compose:
version: "3"
services:
api:
build:
context: .
dockerfile: Dockerfile.dev
volumes:
- /app/node_modules
- .:/app
ports:
- "3000:3000"
depends_on:
- mongo
mongo:
image: mongo:4.0.6
ports:
- "27017:27017"
And this is my TravisCI config:
sudo: required
services:
  - docker
before_script:
  - docker-compose up -d --build
script:
  - docker-compose exec api npm run test
I also set SOME_API_KEY='xxx' in my Travis settings variables. However, it seems that the container doesn't receive SOME_API_KEY.
How can I pass SOME_API_KEY from TravisCI to Docker? Thanks!
Containers in general do not inherit the environment from which they are run. Consider something like this:
export SOMEVARIABLE=somevalue
docker run --rm alpine sh -c 'echo $SOMEVARIABLE'
That will never print out the value of $SOMEVARIABLE, because there is no magic process to import environment variables from your local shell into the container. If you want a Travis environment variable exposed inside your Docker containers, you will need to do that explicitly by creating an appropriate environment block in your docker-compose.yml. For example, I use the following docker-compose.yml:
version: "3"
services:
example:
image: alpine
command: sh -c 'echo $SOMEVARIABLE'
environment:
SOMEVARIABLE: "${SOMEVARIABLE}"
I can then run the following:
export SOMEVARIABLE=somevalue
docker-compose up
And see the following output:
Recreating docker_example_1 ... done
Attaching to docker_example_1
example_1 | somevalue
docker_example_1 exited with code 0
So you will need to write something like:
version: "3"
services:
api:
build:
context: .
dockerfile: Dockerfile.dev
volumes:
- /app/node_modules
- .:/app
ports:
- "3000:3000"
depends_on:
- mongo
environment:
SOME_API_KEY: "${SOME_API_KEY}"
mongo:
image: mongo:4.0.6
ports:
- "27017:27017"
I had a similar issue and solved it by passing the environment variable to the container in the docker-compose exec command. If the variable is in the Travis environment, you can do:
sudo: required
services:
  - docker
before_script:
  - docker-compose up -d --build
script:
  - docker-compose exec -e SOME_API_KEY=$SOME_API_KEY api npm run test

Rails dockerized: Continuous delivery using Gitlab and Digitalocean

I'm currently trying to set up continuous delivery for a dockerized Rails project hosted on GitLab.com. I followed this article, which is not directly related to a Rails environment, and tried to adapt it, obviously without any success :(
For context, I created three different services: db, webpacker and app.
Following the above article, here are my .gitlab-ci.yml and docker-compose.staging2.yml (autodeploy):
.gitlab-ci.yml
image: docker
services:
  - docker:dind

cache:
  paths:
    - node_modules

variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2
  CONTAINER_CURRENT_IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  CONTAINER_LATEST_IMAGE: $CI_REGISTRY_IMAGE:latest
  CONTAINER_STABLE_IMAGE: $CI_REGISTRY_IMAGE:stable

stages:
  - test
  - build
  - release
  - deploy

before_script:
  - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  - apk add --no-cache py-pip python-dev libffi-dev openssl-dev gcc libc-dev make
  - pip install docker-compose
  - docker-compose --version

test:
  stage: test
  script:
    - docker-compose build --pull
    # Here we will run tests when available...
  after_script:
    - docker-compose down
    - docker volume rm `docker volume ls -qf dangling=true`

build:
  stage: build
  script:
    - docker build -t $CONTAINER_CURRENT_IMAGE . --pull
    - docker push $CONTAINER_CURRENT_IMAGE

release-latest-image:
  stage: release
  only:
    - feat-dockerisation
  script:
    - docker pull $CONTAINER_CURRENT_IMAGE
    - docker tag $CONTAINER_CURRENT_IMAGE $CONTAINER_LATEST_IMAGE
    - docker push $CONTAINER_LATEST_IMAGE

release-stable-image:
  stage: release
  only:
    - feat-dockerisation
  script:
    - docker pull $CONTAINER_CURRENT_IMAGE
    - docker tag $CONTAINER_CURRENT_IMAGE $CONTAINER_STABLE_IMAGE
    - docker push $CONTAINER_STABLE_IMAGE

deploy_staging:
  stage: deploy
  only:
    - feat-dockerisation
  environment: production
  before_script:
    - mkdir -p ~/.ssh
    - echo "$DEPLOY_SERVER_PRIVATE_KEY" | tr -d '\r' > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - which ssh-agent || (apk add openssh-client)
    - eval $(ssh-agent -s)
    - ssh-add ~/.ssh/id_rsa
    - ssh-keyscan -H $DEPLOYMENT_SERVER_IP >> ~/.ssh/known_hosts
  script:
    - scp -rp ./docker-compose.staging2.yml root@${DEPLOYMENT_SERVER_IP}:~/
    - ssh root@$DEPLOYMENT_SERVER_IP "docker login -u ${CI_REGISTRY_USER} -p ${CI_REGISTRY_PASSWORD} ${CI_REGISTRY};
      docker-compose -f docker-compose.staging2.yml down;
      docker pull $CONTAINER_LATEST_IMAGE;
      docker-compose -f docker-compose.staging2.yml up -d"
docker-compose.staging2.yml
version: '3'
services:
  db:
    image: postgres:11-alpine
    ports:
      - 5433:5432
    environment:
      POSTGRES_PASSWORD: postgres
  webpacker:
    image: registry.gitlab.com/soykje/beweeg-ror:latest
    command: [sh, -c, "yarn && bin/webpack-dev-server"]
    ports:
      - 3035:3035
  app:
    image: registry.gitlab.com/soykje/beweeg-ror:latest
    links:
      - db
      - webpacker
    ports:
      - 3000:3000
I'm just getting started with Docker and CI/CD, so I can't find what I am doing wrong :/
After all the jobs complete successfully on GitLab CI/CD, when I try to access my app on the Docker droplet I get nothing. When I SSH into my droplet everything seems OK, but I still cannot browse anything. Would anyone have an idea of what I am missing?
I feel I'm pretty close to achieving this (maybe I'm wrong too...), so any help would be very welcome!
Thx in advance!

Test my go app in a container launched by docker compose with Gitlab CI

I have a Golang app that depends on an FTP server.
So, in docker-compose, I build an FTP service and refer to it in my tests.
So, in my docker-compose.yml I have:
version: '3'
services:
  mygoapp:
    build:
      dockerfile: ./Dockerfile.local
      context: ./
    volumes:
      - ./volume:/go
      - ./test_files:/var/test_files
    networks:
      mygoapp_network:
    env_file:
      - test.env
    tty: true
  ftpd-server:
    container_name: ftpd-server
    image: stilliard/pure-ftpd:hardened
    environment:
      PUBLICHOST: "0.0.0.0"
      FTP_USER_NAME: "julien"
      FTP_USER_PASS: "test"
      FTP_USER_HOME: "/home/www/julien"
    restart: on-failure
    networks:
      mygoapp_network:
networks:
  mygoapp_network:
    external: true
In my gitlab-ci.yml I have
variables:
  PACKAGE_PATH: /go/src/gitlab.com/xxx
  VOLUME_PATH: /var/test_files

stages:
  - test

# A hack to make Golang-in-Gitlab happy
.anchors:
  - &inject-gopath
    mkdir -p $(dirname ${PACKAGE_PATH})
    && ln -s ${CI_PROJECT_DIR} ${PACKAGE_PATH}
    && cd ${PACKAGE_PATH}

test:
  image: docker:18
  services:
    - docker:dind
  stage: test
  # only:
  #   - production
  before_script:
    - touch test.env
    - apk update
    - apk upgrade
    - apk add --no-cache py-pip
    - pip install docker-compose
    - docker network create mygoapp_network
    - mkdir -p volume/log
  script:
    - docker-compose -f docker-local.yaml up --build -d
    - docker exec project-0_mygoapp_1 ls /var/test_files
    - docker exec project-0_mygoapp_1 echo $VOLUME_PATH
    - docker exec project-0_mygoapp_1 go test ./... -v
All my services are up
But when I run
- docker exec project-0_myapp_1 echo $VOLUME_PATH
I can see $VOLUME_PATH is equal to /var/test_files
but inside code, when I do:
os.Getenv("VOLUME_PATH")
the variable is empty.
Also, locally, with a docker exec, the variable is OK.
I also tried to put the variables into the test definition, but it still doesn't work.
EDIT: The only way I could make it work was by setting the environment variables in docker-compose, but that is not great.
Any idea how to fix it?
The behaviour of your script is predictable: all environment variables are expanded when they are encountered (unless they are in single quotes). So, your line
docker exec project-0_myapp_1 echo $VOLUME_PATH
is expanded before being executed, and $VOLUME_PATH is taken from the GitLab runner's environment, not from the container.
The only way I see to get this script to print the environment variable from inside the container is to put the command in a shell file and call that file:
doit.sh
#!/bin/sh
echo $VOLUME_PATH
gitlab-ci.yml
- docker exec project-0_myapp_1 sh doit.sh
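That said, given the caveat above about single quotes, another variant worth trying as a diagnostic (an untested sketch) is to quote the command so that the runner's shell leaves the variable alone and the container's shell expands it instead:
# single quotes stop the runner's shell from expanding $VOLUME_PATH;
# `sh -c` inside the container expands it there instead
- docker exec project-0_myapp_1 sh -c 'echo $VOLUME_PATH'
If this prints an empty line, it confirms that VOLUME_PATH was never exported into the container in the first place, which is why os.Getenv("VOLUME_PATH") returns an empty string.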

docker-compose ADD a tarball outside (or inside) of build context always fails no matter what

I've got the following dir tree:
project_root
|- build_script.sh
|- dir_1/Dockerfile and docker-compose.yml
|- tarball_dir/
I've got the build_script.sh which calls docker-compose like so:
docker-compose -f ./dir_1/docker-compose.yml build --build-arg BUILD_ID=$BUILD_ID dev
This is the Dockerfile:
FROM elixir:1.5.2
ARG BUILD_ID
ENV APP_HOME /app
RUN mkdir $APP_HOME
# Copy release tarball and unpack
ADD ../dir_1/${BUILD_ID}.tar.gz $APP_HOME/
CMD $APP_HOME/bin/my_app foreground
And this is the docker-compose.yml:
version: '2'
services:
  common:
    build:
      context: .
      dockerfile: ./Dockerfile
    networks:
      - default
  dev:
    extends:
      service: common
    env_file:
      - ./dev.env
    environment:
      POSTGRES_HOST: "postgres.dev"
      MIX_ENV: dev
    ports:
      - "4000:4000"
    depends_on:
      - postgres
I want to be able to ADD the tarball (so that it is copied and unpacked) into the image. However, after trying endless combinations of Docker context directories and ways of including the tarball in the image, I always get one of two errors:
ERROR: Service 'dev' failed to build: ADD failed: Forbidden path outside the build context: ../tarball_dir/tarball.tar.gz ()
or
ERROR: Service 'dev' failed to build: ADD failed: stat /var/lib/docker/tmp/docker-builder887135091/tarball_dir/tarball.tar.gz: no such file or directory
I would like to keep this directory structure if possible. I've managed to build and run the container with the following commands:
docker build \
  --build-arg BUILD_ID=$BUILD_ID \
  -t my_app:$BUILD_ID \
  -f dir_1/Dockerfile .
docker run \
  --network my_app_network \
  --env-file ./dir_1/$ENV.env \
  my_app:$BUILD_ID
But I can't reproduce this same behaviour using docker-compose no matter what.
I'm on Mac OS Sierra and I'm using Docker Edge Version 17.10.0-ce-mac36 (19824) Channel: edge a7c7e01149
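For reference, the working docker build invocation above uses the project root as the build context (the trailing .), which is why the tarball is reachable there. A compose file mirroring that would look like the sketch below (untested; it assumes the compose file stays in dir_1/, since Compose resolves a relative context against the compose file's own directory, and the ADD path would then need to point inside that context, e.g. ADD tarball_dir/${BUILD_ID}.tar.gz $APP_HOME/):
version: '2'
services:
  common:
    build:
      # climb from dir_1/ up to project_root, matching the working
      # `docker build -f dir_1/Dockerfile .` invocation above
      context: ..
      dockerfile: dir_1/Dockerfile
      args:
        BUILD_ID: ${BUILD_ID}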
