Environment variable not set in container image after PUSH and PULL - docker

I would like to see if we can find a way to persist environment variables in images AFTER they are pulled from a private docker registry. I have done the following steps:
Content of the .env file:
APP_DB_CACHE_UPDATE = 3600
Content of the docker-compose file:
services:
app:
build: .
ports:
- "9000:9000"
environment:
- APP_DB_CACHE_UPDATE=${APP_DB_CACHE_UPDATE}
Command to build and check the container and its env vars:
$ APP_DB_CACHE_UPDATE=3602 docker-compose up
$ docker exec -it <container_id> printenv
APP_DB_CACHE_UPDATE=3602
Then I:
push the image to the private docker registry
remove the image from my local machine
pull the image from the docker registry
$ docker tag app-name_app-name:latest localhost:5000/app-name_app-name:latest
$ docker push localhost:5000/app-name_app-name:latest
$ docker rmi localhost:5000/app-name_app-name:latest
$ docker pull localhost:5000/app-name_app-name:latest
Now when I check the environment after running this image, I am unable to see the APP_DB_CACHE_UPDATE env variable:
docker exec -it <new_container_id> printenv

Docker Compose does not build images with env vars set through "environment" or "env_file". It builds the image first, then provides the environment variables to the container at runtime. See the environment specification:
If your service specifies a build option, variables defined in
environment are not automatically visible during the build.
Because of this, your image does not know about the environment variable specified in the docker-compose file.
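As a quick sanity check (not part of the original answer), you can inspect which environment variables are actually baked into the pulled image; the image name is the one used in the question:

$ docker inspect --format '{{.Config.Env}}' localhost:5000/app-name_app-name:latest
# only variables set at build time (such as PATH) appear here;
# APP_DB_CACHE_UPDATE is missing because it was only set at container runtime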
To set environment variables during/before the build, so that they are persisted in the image, you need to specify them in your build environment. You can do this in your Dockerfile using ENV, for example:
ENV APP_DB_CACHE_UPDATE="3600"
See Dockerfile ENV specification.
The environment variables set using ENV will persist when a container is run from the resulting image. You can view the values using docker inspect, and change them using docker run --env <key>=<value>.
Then tell compose to use your Dockerfile with:
build:
context: .
dockerfile: Dockerfile
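If you still want the value to be overridable at build time, a common pattern (shown here as a sketch, not taken from the question's actual Dockerfile; the base image is an assumption) is to combine a build ARG with ENV and pass the argument through compose's build args:

# Dockerfile (sketch): ARG allows a build-time override, ENV bakes the value into the image
FROM php:8.1-fpm          # assumed base image -- substitute whatever your real Dockerfile uses
ARG APP_DB_CACHE_UPDATE=3600
ENV APP_DB_CACHE_UPDATE=${APP_DB_CACHE_UPDATE}

# docker-compose.yml (sketch)
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        - APP_DB_CACHE_UPDATE=${APP_DB_CACHE_UPDATE}
    ports:
      - "9000:9000"

Built with APP_DB_CACHE_UPDATE=3602 docker-compose build, the value is written into the image metadata and therefore survives the push/pull round trip.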

Related

Docker compose ecs fails to deploy (fails when using docker compose up)

I am trying to determine why the CloudFormation build of the application fails when trying to create resources for BackgroundjobsService (Create failed in CloudFormation). The only main differences from other services I have built are that it has no exposed ports and I am using the ubuntu image instead of the php-apache image.
Dockerfile (I made it super simple, it basically does nothing):
# Pulling Ubuntu image
FROM ubuntu:20.04
docker-compose.yml
services:
background_jobs:
image: 000.dkr.ecr.us-east-1.amazonaws.com/company/job-scheduler
restart: always
env_file: ../.env.${ENV}
build:
context: "."
How I deploy (I verified the env files exist in the parent directory of job-scheduler):
cd job-scheduler
ENV=dev docker --context default compose build
docker push 000.dkr.ecr.us-east-1.amazonaws.com/company/job-scheduler:latest
ENV=dev docker --context tcetra-dev compose up
I don't know how to find any sort of error logs, but the task definition gets created and all my env vars are in there.
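For reference (this is not from the original post), one way to surface the underlying error is to list the failed CloudFormation stack events with the AWS CLI; the stack name is an assumption and normally matches the compose project name:

$ aws cloudformation describe-stack-events --stack-name job-scheduler \
    --query 'StackEvents[?ResourceStatus==`CREATE_FAILED`].[LogicalResourceId,ResourceStatusReason]' \
    --output table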

Docker stack deploy ability to pass env variables into the deploy yaml?

I run a docker swarm and I use GitLab CI to do the build and deployment of the images. The biggest headache I face is incrementing the image version numbers in the deployment yaml.
So for example, if I do a deploy on GitLab and build the relevant image like this:
docker build --no-cache --platform linux/amd64 -t myregistry/myimage:$CI_COMMIT_TAG -f docker/php-backend/Dockerfile .
I am creating the image version number by using the git tag, which works fine. I then transfer the latest deploy.yaml file to the server and make it run:
sudo docker stack deploy --with-registry-auth -c live-deploy.yaml my-stack-name
The issue here is that inside my live-deploy.yaml I have to manually update the image name with the new version that was built.
Is there a way (and so far I can't find it) to pass a variable into the yaml from the command line when deploying so it knows what version number to use? A bit like passing in environment variables with docker compose.
You can play with environment variables to achieve this automation. Example follows:
Sample docker-compose/stack file:
version: '3.3'
services:
registry:
restart: always
image: ${MyImageName}
ports:
- 5000:5000
And when you want to deploy something, pass the env value along with the command; your command becomes:
MyImageName=myregistry/myimage:$CI_COMMIT_TAG docker stack deploy --with-registry-auth -c live-deploy.yaml my-stack-name
You can even keep the image name constant and have a variable just for the tag, if that is the degree of automation required.
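For example, the tag-only variant could look like this (IMAGE_TAG is an assumed variable name; it uses the same substitution mechanism as the answer above):

version: '3.3'
services:
  registry:
    restart: always
    image: myregistry/myimage:${IMAGE_TAG}
    ports:
      - 5000:5000

IMAGE_TAG=$CI_COMMIT_TAG docker stack deploy --with-registry-auth -c live-deploy.yaml my-stack-name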

docker-compose Equivalent to Docker Build --secret Argument

We have used the technique detailed here to expose host environment variables to docker build in a secure fashion.
# syntax=docker/dockerfile:1.2
FROM golang:1.18 AS builder
# move secrets out of the build process (and docker history)
RUN --mount=type=secret,id=github_token,dst=/app/secret_github_token,required=true,uid=10001 \
export GITHUB_TOKEN=$(cat /app/secret_github_token) && \
<nice command that uses $GITHUB_TOKEN>
And this command to build the image:
export DOCKER_BUILDKIT=1
docker build --secret id=github_token,env=GITHUB_TOKEN -t cool-image-bro .
The above works perfectly.
Now we also have a docker-compose file running in CI that needs to be modified. However, even though I confirmed that the env vars are present in that job, I do not know how to assign the environment variable to the secret ID named github_token.
In other words, what is the equivalent docker-compose command (up --build, or build) that can accept a mapping of an environment variable to a secret ID?
Turns out I was a bit ahead of the times. Docker Compose v2.5.0 brings support for secrets.
After having modified the Dockerfile as explained above, we must then update the docker-compose file to define the secrets.
docker-compose.yml
services:
my-cool-app:
build:
context: .
secrets:
- github_user
- github_token
...
secrets:
github_user:
file: secrets_github_user
github_token:
file: secrets_github_token
But where are those files secrets_github_user and secrets_github_token coming from? In your CI you also need to export the environment variables and save them to the expected secrets file locations. In our project we are using Tasks, so we added these two lines.
Note that we are running this task from our CI, so you could do it differently, without Tasks, for example.
- printenv GITHUB_USER > /root/project/secrets_github_user
- printenv GITHUB_TOKEN > /root/project/secrets_github_token
We then update the CircleCI config and add two environment variables to our job:
.config.yml
name-of-our-job:
environment:
DOCKER_BUILDKIT: 1
COMPOSE_DOCKER_CLI_BUILD: 1
You might also need a more recent Docker version; I think they introduced it in a late 19.x or early 20.x release. I have used this and it works:
steps:
- setup_remote_docker:
version: 20.10.11
Now when running your docker-compose based commands, the secrets should be successfully mounted through docker-compose and available to correctly build or run your Dockerfile instructions!
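Putting it together, the CI step might look something like this (a sketch; the file paths and service name are assumptions based on the snippets above):

# enable BuildKit for docker-compose builds
export DOCKER_BUILDKIT=1
export COMPOSE_DOCKER_CLI_BUILD=1
# write the secret files that the compose file points to
printenv GITHUB_USER > secrets_github_user
printenv GITHUB_TOKEN > secrets_github_token
# build the service; compose mounts the declared secrets into the Dockerfile's RUN --mount steps
docker-compose build my-cool-app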

Running docker container in Jenkins container, How can I set the volume from host?

I'm running a container with jenkins using "docker outside of docker". My docker compose is:
---
version: '2'
services:
jenkins-master:
build:
context: .
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /dev/urandom:/dev/random
- /home/jj/jenkins/jenkins_home/:/var/jenkins_home
ports:
- "8080:8080"
So all containers launched from the "jenkins container" are running on the host machine.
But when I try to run docker-compose in the "jenkins container" in a job that needs a volume, it takes the path from the host instead of from Jenkins. I mean, when I run docker-compose with
volumes:
- .:/app
It is mounted in /var/jenkins_home/workspace/JOB_NAME on the host, but I want it to be mounted in /home/jj/jenkins/jenkins_home/workspace/JOB_NAME.
Any idea how to do this in a "clean" way?
P.S.: I did a workaround using environment variables.
Docker on the host will map the path as is from the request, and docker-compose will make the request with the path it sees inside the container. This leaves you with a few options:
Don't use host volumes in your builds. If you need volumes, you can use named volumes and move data in and out of those volumes with a throwaway container. That would look like:
tar -cC data . | docker run -i --rm -v app-data:/target busybox /bin/sh -c "tar -xC /target". You'd reverse the docker/tar commands to pull data back out.
Make the path on the host match that of the container. On your host, if you have access to make a symlink in /var, you can ln -s /home/jj/jenkins/jenkins_home /var/jenkins_home and then update your compose file to have the same path (you may need to specify /var/jenkins_home/. to follow the symlink).
Make the path of the container match that of the host. This may be the easiest option, but I'm not positive it would work (depends on where compose thinks it's running). Your Dockerfile for the jenkins master can include the following:
RUN mkdir -p /home/jj/jenkins \
&& ln -s /var/jenkins_home /home/jj/jenkins/jenkins_home
ENV JENKINS_HOME /home/jj/jenkins/jenkins_home
If the easy option doesn't work, you can rebuild the image from jenkins and change the JENKINS_HOME variable to match your environment.
Make your compose paths absolute. You can add some code to set a variable:
export CUR_DIR=$(pwd | sed 's#/var/jenkins_home#/home/jj/jenkins/jenkins_home#'). Then you can set your volume with that variable:
volumes:
- ${CUR_DIR:-.}:/app
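As a concrete example of that last option (a sketch; the paths are the ones from the question), a step inside the Jenkins job could rewrite the path before calling compose:

# assumed Jenkins job step: translate the container's workspace path to the host path
export CUR_DIR=$(pwd | sed 's#/var/jenkins_home#/home/jj/jenkins/jenkins_home#')
# compose then resolves ${CUR_DIR:-.}:/app against the host-visible path
docker-compose up -d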

Jenkins inside docker loses configuration when container is restarted

I have followed this guide https://hub.docker.com/r/iliyan/jenkins-ci-php/ to download the docker image with Jenkins.
When I start my container using the docker start CONTAINERNAME command, I can access Jenkins at localhost:8080.
The problem comes up when I change the Jenkins configuration and restart Jenkins using docker stop CONTAINERNAME and docker start CONTAINERNAME: my Jenkins doesn't contain any of my previous configuration changes.
How can I persist the Jenkins configuration?
You need to mount the Jenkins configuration as a volume; the -v flag will do just that for you. (You can ignore the --privileged flag in my example unless you plan on building docker images inside your Jenkins docker image.)
docker run --privileged --name='jenkins' -d -p 6999:8080 -p 50000:50000 -v /home/jan/jenkins:/var/jenkins_home jenkins:latest
The -v flag will mount your /var/jenkins_home outside your container in /home/jan/jenkins maintaining it between rebuilds.
--name so that you have a fixed name for the container to start / stop it from.
Then next time you want to run it, simply call
docker start jenkins
My understanding is that the init script
/sbin/tini -- /usr/local/bin/jenkins.sh
is resetting the Jenkins configuration on startup within the folder provided through the JENKINS_HOME env var, whether it is mounted outside the docker VM or not.
It is, however, possible to store the configuration on GitHub using the
configure/"Configure System"/"SCM Sync configuration"/Git
section.
See possible detailed configuration here
You can use this docker-compose file:
version: '3.1'
services:
jenkins:
image: jenkins:latest
container_name: jenkins
restart: always
environment:
TZ: GMT
volumes:
- ./jenkins_host:/var/jenkins_home
ports:
- 8080:8080
tty: true
You only need to share the Jenkins volume ./jenkins_host:/var/jenkins_home with the host folder.
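With that file in place, a typical way to start it (not spelled out in the original answer) is:

docker-compose up -d
# configuration changes are written to ./jenkins_host on the host and survive container restarts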
Besides the obvious (disable any run parameters that wipe the image's data), you can do a few things:
use docker commit and reuse the committed image
mount the part where you write to the local file system with docker volumes
my favorite: use the command:
docker container restart containername
Depending on your needs you can pick one.
I use the latter, for example, when testing Jenkins plugins, and it retains the data inside.
Source of the latter that is also useful for updates:
https://jimkang.medium.com/how-to-start-a-new-jenkins-container-and-update-jenkins-with-docker-cf628aa495e9
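For illustration, the docker commit option from the list above could look like this (the container and image names are assumptions):

# snapshot the configured container into a reusable image
docker commit jenkins my-jenkins-backup:configured
# later, start a fresh container from that snapshot
docker run -d --name jenkins-restored -p 8080:8080 my-jenkins-backup:configured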
