Set secret variable when using Docker in Travis CI

I'm building a backend with NodeJS and would like to use Travis CI and Docker to run tests.
In my code, I have a secret environment variable: process.env.SOME_API_KEY
This is my Dockerfile.dev
FROM node:alpine
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "run", "dev"]
My docker compose:
version: "3"
services:
api:
build:
context: .
dockerfile: Dockerfile.dev
volumes:
- /app/node_modules
- .:/app
ports:
- "3000:3000"
depends_on:
- mongo
mongo:
image: mongo:4.0.6
ports:
- "27017:27017"
And this is my Travis CI config:
sudo: required
services:
  - docker
before_script:
  - docker-compose up -d --build
script:
  - docker-compose exec api npm run test
I also set SOME_API_KEY='xxx' in my Travis settings variables. However, it seems that the container doesn't receive SOME_API_KEY.
How can I pass SOME_API_KEY from Travis CI to Docker? Thanks

Containers in general do not inherit the environment from which they are run. Consider something like this:
export SOMEVARIABLE=somevalue
docker run --rm alpine sh -c 'echo $SOMEVARIABLE'
That will never print out the value of $SOMEVARIABLE because there is no magic process to import environment variables from your local shell into the container. If you want a Travis environment variable exposed inside your Docker containers, you will need to do that explicitly by creating an appropriate environment block in your docker-compose.yml. For example, I use the following docker-compose.yml:
version: "3"
services:
example:
image: alpine
command: sh -c 'echo $SOMEVARIABLE'
environment:
SOMEVARIABLE: "${SOMEVARIABLE}"
I can then run the following:
export SOMEVARIABLE=somevalue
docker-compose up
And see the following output:
Recreating docker_example_1 ... done
Attaching to docker_example_1
example_1 | somevalue
docker_example_1 exited with code 0
So you will need to write something like:
version: "3"
services:
api:
build:
context: .
dockerfile: Dockerfile.dev
volumes:
- /app/node_modules
- .:/app
ports:
- "3000:3000"
depends_on:
- mongo
environment:
SOME_API_KEY: "${SOME_API_KEY}"
mongo:
image: mongo:4.0.6
ports:
- "27017:27017"

I had a similar issue and solved it by passing the environment variable to the container in the docker-compose exec command. If the variable is in the Travis environment, you can do:
sudo: required
services:
  - docker
before_script:
  - docker-compose up -d --build
script:
  - docker-compose exec -e SOME_API_KEY=$SOME_API_KEY api npm run test

Related

Can I use RUN command in docker-compose.yml?

Is it possible to RUN a command within the docker-compose.yml file? So instead of having a Dockerfile where I have something like RUN mkdir foo, I'd have the same command within the docker-compose.yml file:
services:
  server:
    container_name: nginx
    image: nginx:stable-alpine
    volumes:
      - ./public:/var/www/html/public
    ports:
      - "${PORT:-80}:80"
    ???: 'mkdir foo' # <--- pseudo-code
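For reference, docker-compose.yml has no build-time RUN of its own; two hedged workarounds (assuming the nginx service above, with an illustrative /var/www/html/public/foo path) are to chain the extra step onto the container's startup command, or to move it into a tiny Dockerfile that Compose builds:
services:
  server:
    container_name: nginx
    image: nginx:stable-alpine
    # run the extra step, then start nginx as the image normally would
    command: sh -c 'mkdir -p /var/www/html/public/foo && exec nginx -g "daemon off;"'
or, keeping the step at build time:
# Dockerfile
FROM nginx:stable-alpine
RUN mkdir -p /var/www/html/public/foo
with build: . in place of image: in the compose file.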

What is the point of running supervisor on top of a docker container?

I'm inheriting an open-source project where I have this script to deploy two containers (docker and nginx) on a server:
mkdir -p /app
rm -rf /app/* && tar -xf /tmp/project.tar -C /app
sudo docker-compose -f /app/docker-compose.yml build
sudo supervisorctl restart react-wagtail-project
sudo ufw allow port
The docker-compose.yml file is like this:
version: '3.7'
services:
  nginx_sarahmaso:
    build:
      context: .
      dockerfile: ./compose/production/nginx/Dockerfile
    restart: always
    volumes:
      - staticfiles_sarahmaso:/app/static
      - mediafiles_sarahmaso:/app/media
    ports:
      - 4000:80
    depends_on:
      - web_sarahmaso
    networks:
      spa_network_sarahmaso:
  web_sarahmaso:
    build:
      context: .
      dockerfile: ./compose/production/django/Dockerfile
    restart: always
    command: /start
    volumes:
      - staticfiles_sarahmaso:/app/static
      - mediafiles_sarahmaso:/app/media
      - sqlite_sarahmaso:/app/db
    env_file:
      - ./env/prod-sample
    networks:
      spa_network_sarahmaso:
networks:
  spa_network_sarahmaso:
volumes:
  sqlite_sarahmaso:
  staticfiles_sarahmaso:
  mediafiles_sarahmaso:
I'm wondering what the point is of using sudo supervisorctl restart react-wagtail-project.
If I put restart: always on the two containers I'm running, is it useful to run a supervisor command on top of that to check that they are always up and running?
Or maybe is it there for the possibility of creating logs?
Thank you

Can't connect to other services when running a docker image that is built from a Dockerfile

I have a Node project which uses Redis for queue purposes.
I wired Redis into the compose file and it's working fine. But when I try to build the Docker image from the Dockerfile and run that built image with docker run, it can't find/connect to Redis.
My question is: if Docker doesn't include the images from the compose file when building the image from the Dockerfile, how can the built image run?
The compose file and Dockerfile are given below.
version: '3'
services:
  oaq-web:
    image: node:16.10-alpine3.13
    container_name: oaq-web
    volumes:
      - ./:/usr/src/oaq
    networks:
      - oaq-network
    working_dir: /usr/src/oaq
    ports:
      - "5000:5000"
    command: npm run dev
  redis:
    image: redis:6.2
    ports:
      - "6379:6379"
    networks:
      - oaq-network
networks:
  oaq-network:
    driver: bridge
FROM node:16.10-alpine3.13
RUN mkdir -p app
COPY . /app
WORKDIR /app
RUN npm install
RUN npm run build
CMD ["npm", "start"]

Convert a docker run command to docker-compose - setting directory dependency

I have two docker run commands; the second container needs to be run in a folder created by the first. As below:
docker run -v $(pwd):/projects \
-w /projects \
gcr.io/base-project/mainmyoh:v1 init myprojectname
cd myprojectname
The above myprojectname folder was created by the first container. I need to run the second container in this folder as below.
docker run -v $(pwd):/project \
-w /project \
-p 3000:3000 \
gcr.io/base-project/myoh:v1
Here is the docker-compose file I have so far:
version: '3.3'
services:
  firstim:
    volumes:
      - '$(pwd):/projects'
    restart: always
    working_dir: /project
    image: gcr.io/base-project/mainmyoh:v1
    command: 'init myprojectname'
  secondim:
    image: gcr.io/base-project/myoh:v1
    working_dir: /project
    volumes:
      - '$(pwd):/projects'
    ports:
      - 3000:3000
What needs to change to achieve this?
You can make the two services use a shared named volume:
version: '3.3'
services:
  firstim:
    volumes:
      - '.:/projects'
      - 'my-project-volume:/projects/myprojectname'
    restart: always
    working_dir: /project
    image: gcr.io/base-project/mainmyoh:v1
    command: 'init myprojectname'
  secondim:
    image: gcr.io/base-project/myoh:v1
    working_dir: /project
    volumes:
      - 'my-project-volume:/projects'
    ports:
      - 3000:3000
volumes:
  my-project-volume:
Also, just an observation: in your example the working_dir: references /project while the volumes point to /projects. I assume this is a typo and this might be something you want to fix.
You can build a custom image that does this required setup for you. When secondim runs, you want the current working directory to be /project, you want the current directory's code to be embedded there, and you want the init command to have run. That's easy to express in Dockerfile syntax:
FROM gcr.io/base-project/mainmyoh:v1
WORKDIR /project
COPY . .
RUN init myprojectname
CMD whatever should be run to start the real project
Then you can tell Compose to build it for you:
version: '3.5'
services:
  # no build-only first image
  secondim:
    build: .
    image: gcr.io/base-project/mainmyoh:v1
    ports:
      - '3000:3000'
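With that file in place, the day-to-day loop is presumably just:
docker-compose up --build
which rebuilds secondim (re-running the COPY and the init step when the code changes) and starts it on port 3000.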
In another question you ask about running a similar setup in Kubernetes. This Dockerfile-based setup can translate directly into a Kubernetes Deployment/Service, without worrying about questions like "what kind of volume do I need to use" or "how do I copy the code into the cluster separately from the image".

Test my Go app in a container launched by docker compose with GitLab CI

I have a Golang app that depends on an FTP server.
So, in docker compose, I build an FTP service and I refer to it in my tests.
So, in my docker-compose.yml I have:
version: '3'
services:
  mygoapp:
    build:
      dockerfile: ./Dockerfile.local
      context: ./
    volumes:
      - ./volume:/go
      - ./test_files:/var/test_files
    networks:
      mygoapp_network:
    env_file:
      - test.env
    tty: true
  ftpd-server:
    container_name: ftpd-server
    image: stilliard/pure-ftpd:hardened
    environment:
      PUBLICHOST: "0.0.0.0"
      FTP_USER_NAME: "julien"
      FTP_USER_PASS: "test"
      FTP_USER_HOME: "/home/www/julien"
    restart: on-failure
    networks:
      mygoapp_network:
networks:
  mygoapp_network:
    external: true
In my gitlab-ci.yml I have
variables:
  PACKAGE_PATH: /go/src/gitlab.com/xxx
  VOLUME_PATH: /var/test_files

stages:
  - test

# A hack to make Golang-in-Gitlab happy
.anchors:
  - &inject-gopath
    mkdir -p $(dirname ${PACKAGE_PATH})
    && ln -s ${CI_PROJECT_DIR} ${PACKAGE_PATH}
    && cd ${PACKAGE_PATH}

test:
  image: docker:18
  services:
    - docker:dind
  stage: test
  # only:
  #   - production
  before_script:
    - touch test.env
    - apk update
    - apk upgrade
    - apk add --no-cache py-pip
    - pip install docker-compose
    - docker network create mygoapp_network
    - mkdir -p volume/log
  script:
    - docker-compose -f docker-local.yaml up --build -d
    - docker exec project-0_mygoapp_1 ls /var/test_files
    - docker exec project-0_mygoapp_1 echo $VOLUME_PATH
    - docker exec project-0_mygoapp_1 go test ./... -v
All my services are up.
But when I run:
- docker exec project-0_mygoapp_1 echo $VOLUME_PATH
I can see $VOLUME_PATH is equal to /var/test_files,
but inside the code, when I do:
os.Getenv("VOLUME_PATH")
the variable is empty.
Also, locally with a docker exec, the variable is OK.
I also tried to put the variables into the test job definition, but it still doesn't work.
EDIT: The only way I could make it work is setting the environment vars in docker compose, but that is not so great.
Any idea how to fix it?
The behaviour of your script is predictable: all environment variables are expanded when they are encountered (unless they are in single quotes). So your line
docker exec project-0_mygoapp_1 echo $VOLUME_PATH
is expanded before being executed, and $VOLUME_PATH is taken from the GitLab runner, not from the container.
The only way I see to get this script to print the environment variable from inside the container is to put the command in an sh file and call that file:
doit.sh
#!/bin/sh
echo $VOLUME_PATH
gitlab-ci.yml
docker exec project-0_mygoapp_1 doit.sh
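Two hedged alternatives, assuming the same job and container name: single quotes defer expansion to the container's shell (so you see what the container really has, likely empty here), while docker exec -e injects the runner's value so os.Getenv("VOLUME_PATH") in the tests can read it:
# $VOLUME_PATH is expanded inside the container, not on the runner
- docker exec project-0_mygoapp_1 sh -c 'echo $VOLUME_PATH'
# -e passes the runner's value into the exec'd process environment
- docker exec -e VOLUME_PATH=$VOLUME_PATH project-0_mygoapp_1 go test ./... -v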
