Docker ENV vars updated in container but not in application

I've updated an environment variable in my Dockerfile and restarted with docker compose up -d.
A shell script that runs on container start and contains the line echo $MY_VAR echoes the appropriate value; however, when I open the browser console within my application and type env, it only prints out my previous env.
I've tried clearing my cache, force rebuilding the image via the -d flag on docker compose up, deleting the old images, literally anything and everything, yet somehow the old env remains.
My Dockerfile:
FROM node:17.4.0-alpine3.14
WORKDIR /code
CMD ["bin/run"]
ENV \
MY_VAR='abcdef' \
VERSION='development'
COPY package*.json ./
RUN npm install
COPY src src
COPY cogs.js ./
COPY bin bin
RUN bin/build
My Docker Compose
version: "3.9"
services:
balancer:
image: nginx:1.19.7-alpine
ports:
- 80:80
volumes:
- ./src/nginx.conf:/etc/nginx/nginx.conf
networks:
default:
aliases:
- www.dev.mydomain.com
app: &app
build:
context: "../app"
volumes:
- ../app/bin:/code/bin
- ../app/package-lock.json:/code/package-lock.json
- ../app/package.json:/code/package.json
- ../app/src:/code/src
- app-dist:/code/dist
environment:
MY_VAR: abcdef
VERSION: 'development'
app-watch:
<<: *app
command: ["bin/watch"]
volumes:
app-dist:
Where I use it in my app, config.js:
const { env } = globalThis;
export default {
  myVar: env.MY_VAR,
  version: env.VERSION
};
(Screenshot of the updated Docker vars omitted; STRIPE_PUBLIC_KEY in the screenshot corresponds to MY_VAR in the examples above.)
I'm honestly completely confused as to how the variable can be up to date when I echo $MY_VAR in my bin/run script, while logging the env in the browser returns an outdated version of it.

I think you should not put the variable in both the Dockerfile and docker-compose.yml (unless you explicitly need it that way to build the app), but in either docker-compose.yml or a .env file.
Start with docker compose build if the images depend on the env vars during the build stage.
Docker detects the changes when running docker compose up, but if you want to force recreation of the containers, use the --force-recreate flag. (-d only detaches the containers from the session; it does not rebuild anything.)
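For example, a rebuild-and-recreate sequence might look like this:
docker compose build                    # rebuild the images that bake in the ENV values
docker compose up -d --force-recreate   # recreate the containers from the rebuilt images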
docker compose restart is not suitable at that point, because:
If you make changes to your docker-compose.yml configuration these
changes are not reflected after running docker compose restart command.
Also make sure to do a hard refresh of the page where you are checking the results (Ctrl+Shift+R or Ctrl+F5 in most browsers).

Related

Accessing shell environment variables from docker-compose?

How do you access environment variables exported in Bash from inside docker-compose?
I'm essentially trying to do what's described in this answer but I don't want to define a .env file.
I just want to make a call like:
export TEST_NAME=test_widget_abc
docker-compose -f docker-compose.yml -p myproject up --build --exit-code-from myproject_1
and have it pass TEST_NAME to the command inside my Dockerfile, which runs a unittest suite like:
ENV TEST_NAME ${TEST_NAME}
CMD python manage.py test $TEST_NAME
My goal is to allow running my docker container to execute a specific unittest without having to rebuild the entire image, by simply pulling in the test name from the shell at container runtime. Otherwise, if no test name is given, the command will run all tests.
As I understand, you can define environment variables in a .env file and then reference them in your docker-compose.yml like:
version: "3.6"
services:
app_test:
build:
args:
- TEST_NAME=$TEST_NAME
context: ..
dockerfile: Dockerfile
but that doesn't pull from the shell.
How would you do this with docker-compose?
For the setup you describe, I'd docker-compose run a temporary container:
export COMPOSE_PROJECT_NAME=myproject
docker-compose run app_test python manage.py test test_widget_abc
This uses all of the setup from the docker-compose.yml file except the ports:, and it uses the command you provide instead of the Compose command: or Dockerfile CMD. It will honor depends_on: constraints to start related containers (you may need an entrypoint wrapper script to actually wait for them to be running).
If the test code is built into your "normal" image you may not even need special Compose setup to do this; just point docker-compose run at your existing application service definition without defining a dedicated service for the integration tests.
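For example, if the application service in the Compose file were named app (a hypothetical name), that would be:
docker-compose run --rm app python manage.py test test_widget_abc
The --rm flag removes the temporary container once the test run finishes.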
Since Compose does (simple) environment variable substitution you could also provide the per-execution command: in your Compose file
version: "3.6"
services:
app_test:
build: ..
command: python manage.py $TEST_NAME # uses the host variable
Or, with the Dockerfile you have, pass through the host's environment variable; the CMD will run a shell to interpret the string when it starts up
version: "3.6"
services:
app_test:
build: ..
environment:
- TEST_NAME # without a specific value here passes through from the host
These would both work with the Dockerfile and Compose setup you show in the question.
Environment variables in your docker-compose.yaml will be substituted with values from the environment. For example, if I write:
version: "3"
services:
app_test:
image: docker.io/alpine:latest
environment:
TEST_NAME: ${TEST_NAME}
command:
- env
Then if I export TEST_NAME in my local environment:
$ export TEST_NAME=foo
And bring up the stack:
$ docker-compose up
Creating network "docker_default" with the default driver
Creating docker_app_test_1 ... done
Attaching to docker_app_test_1
app_test_1 | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
app_test_1 | HOSTNAME=be3c12e33290
app_test_1 | TEST_NAME=foo
app_test_1 | HOME=/root
docker_app_test_1 exited with code 0
I see that TEST_NAME inside the container has received the value from my local environment.
It looks like you're trying to pass the environment variable into your image build process, rather than passing it in at runtime. Even if that works once, it's not going to be useful, because docker-compose won't rebuild your image every time you run it, so whatever value was in TEST_NAME at the time the image was built is what you would see inside the container.
It's better to pass the environment into the container at run time.
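As a minimal sketch of the runtime approach (the base image, requirements file, and project layout here are placeholders, not taken from the question), the Dockerfile can leave TEST_NAME unset and let a shell-form CMD expand it when the container starts, paired with the environment: passthrough shown above:
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
# shell form, so $TEST_NAME is expanded at container start;
# if TEST_NAME is empty, manage.py test runs the whole suite
CMD python manage.py test $TEST_NAME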

Is it even possible to convert my docker-compose.yml to heroku.yml?

So I'm trying to deploy my app to Heroku.
Here is my docker-compose.yml
version: '3'
# Define services
services:
  # Back-end Spring Boot Application
  entaurais:
    # The docker file in scrum-app builds the jar and provides the docker image with the following name.
    build: ./entauraIS
    container_name: backend
    # Environment variables for Spring Boot Application.
    ports:
      - 8080:8080 # Forward the exposed port 8080 on the container to port 8080 on the host machine
    depends_on:
      - postgresql
  postgresql:
    image: postgres:13
    environment:
      - POSTGRES_PASSWORD=root
      - POSTGRES_USER=postgres
      - POSTGRES_DB=entauracars
    ports:
      - "5433:5433"
    expose:
      - "5433"
  entaura-front:
    build: ./entaura-front
    container_name: frontend
    ports:
      - "4200:4200"
    volumes:
      - /usr/src/app/node_modules
My frontend Dockerfile:
FROM node:14.15.0
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
COPY package*.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
EXPOSE 4200
CMD [ "npm", "start" ]
My backend Dockerfile:
FROM maven:3.6.0-jdk-11-slim AS build
COPY src /usr/src/app/src
COPY pom.xml /usr/src/app
RUN mvn -f /usr/src/app/pom.xml clean package
FROM openjdk:11-jre-slim
COPY --from=build /usr/src/app/target/entauraIS.jar /usr/app/entauraIS.jar
ENTRYPOINT ["java","-jar","/usr/app/entauraIS.jar"]
As far as I'm aware, Heroku needs its own heroku.yml file, but from the examples I've seen I have no idea how to convert mine to my situation. Any help is appreciated; I am completely lost with Heroku.
One of the examples of heroku.yml that I looked at:
build:
  docker:
    web: Dockerfile
run:
  web: npm run start
release:
  image: web
  command:
    - npm run migrate up
docker-compose.yml to heroku.yml
docker-compose.yml has some fields similar to heroku.yml's, so you could create one manually.
It would be great if someone wrote an npm module to convert a docker-compose.yml into a heroku.yml: you just need to read the docker-compose.yml and pick out the values needed to build a heroku.yml. Check this to learn how to read and write yml files.
Docker is not required on Heroku
If you are looking for a platform to deploy your apps and avoid infrastructure nightmares, Heroku is an option for you.
Even better, if your applications are standard (Java and Node.js), don't need unusual build configuration, and are self-contained (no private libraries), you don't need Docker at all :D
If your Node.js package.json has the standard start and build scripts, it will run on Heroku: just git push to Heroku without a Dockerfile. Heroku will detect Node.js and its version, and your app will start.
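For illustration only (the actual commands here are hypothetical), such a package.json could contain:
{
  "scripts": {
    "build": "webpack --mode production",
    "start": "node server.js"
  }
}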
If your Java app uses the standard Spring Boot configuration, it's the same: just push your code to Heroku. In this case, before pushing, add the Postgres add-on manually and use environment variables in your application.properties JDBC URL.
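A sketch of the application.properties side, assuming the JDBC_DATABASE_* variables that Heroku's Java buildpack derives from the add-on's DATABASE_URL:
spring.datasource.url=${JDBC_DATABASE_URL}
spring.datasource.username=${JDBC_DATABASE_USERNAME}
spring.datasource.password=${JDBC_DATABASE_PASSWORD}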
One process per app on Heroku
If you have an API plus a frontend, you will need two apps on Heroku. Your API will also need the Postgres add-on.
Heroku does not work like docker-compose, where one host runs all of your apps (frontend + API + DB).
Docker
If you want to use Docker, just add the Dockerfile and git push. Heroku will detect that Docker is required and will perform the standard commands (docker build ..., docker run ...), so no extra configuration is required.
heroku.yml
If Docker is mandatory for your apps and the standard docker build ... / docker run ... is not enough for them, you will need a heroku.yml.
You will need one heroku.yml for each app on Heroku.
One advantage of this is that manually adding the Postgres add-on is no longer required, because it can be declared in heroku.yml.
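For the backend above, a heroku.yml could look roughly like this (a sketch only; it assumes the Dockerfile sits at the root of that app's repository, and heroku-postgresql is Heroku's standard Postgres add-on plan):
setup:
  addons:
    - plan: heroku-postgresql
build:
  docker:
    web: Dockerfile
run:
  web: java -jar /usr/app/entauraIS.jar  # the web process must listen on the $PORT Heroku assigns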

Docker: How to update your container when your code changes

I am trying to use Docker for local development. The problem is that when I make a change to my code, I have to run the following commands to see the updates locally:
docker-compose down
docker images # Copy the name of the image
docker rmi <IMAGE_NAME>
docker-compose up -d
That's quite a mouthful, and takes a while. (Possibly I could make it into a bash script, but do you think that is a good idea?)
My real question is: Is there a command that I can use (even manually each time) that will update the image & container? Or do I have to go through the entire workflow above every time I make a change in my code?
Just for reference, here is my Dockerfile and docker-compose.yml.
Dockerfile
FROM node:12.18.3
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 4000
CMD ["npm", "start"]
docker-compose.yml
version: "2"
services:
web:
build:
context: .
dockerfile: Dockerfile
container_name: web
restart: always
ports:
- "3000:3000"
depends_on:
- mongo
mongo:
container_name: mongo
image: mongo
volumes:
- ./data:/data/db
ports:
- "27017:27017"
Even though there are multiple good answers to this question, I think they missed the point, as the OP is asking about the local dev environment. The command I usually use in this situation is:
docker-compose up -d --build
If there aren't any errors in the Dockerfile, it will rebuild all the images before bringing up the stack. It can be wrapped in a shell script if needed:
#!/bin/bash
sudo docker-compose up -d --build
If you need to tear down the whole stack, you can have another script:
#!/bin/bash
sudo docker-compose down -v
The -v flag removes all the volumes so you can have a fresh start.
NOTE: In some cases, sudo might not be needed to run the command.
When a Docker image is built, the artifacts are already copied into it, and no new change is reflected until you rebuild the image.
But
If it is only for local development, then you can leverage volume sharing to update code inside the container at runtime. The idea is to share your app/repo directory on the host machine with /usr/src/app (as per your Dockerfile); with this approach your code (and new changes) will appear on both the host and in the running container.
Also, you will need to restart the server on every change; for this you can run your app with nodemon, which watches for code changes and restarts the server (a sketch follows the compose snippet below).
Changes required in docker-compose.yml:
services:
  web:
    ...
    container_name: web
    ...
    volumes:
      - /path/in/host/machine:/usr/src/app
    ...
    ...
    ports:
      - "3000:3000"
    depends_on:
      - mongo
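As for the nodemon part, a sketch might look like this (it assumes nodemon is installed as a dev dependency and that server.js is the entry file, both hypothetical for this project):
services:
  web:
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules   # keep the image's installed node_modules instead of the host's
    command: npx nodemon server.js  # restart automatically when the mounted code changes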
You may use Docker Swarm as an orchestration tool to apply rolling updates. Check "Apply rolling updates to a service".
Basically, you issue docker compose up once (perhaps from a shell script), and once your containers are running you can create a Jenkinsfile or configure a CI/CD pipeline that builds the updated image and applies it to the running service with docker service update --image <NEW_IMAGE> <SERVICE>.

Docker not mapping changes from local project to the container in windows

I am trying to use a Docker volume/bind mount so that I don't need to rebuild my project again and again after every small change. I don't get any errors, but changes in the local files are not visible in the container, so I still have to rebuild the image to get a new filesystem snapshot.
The following solution seemed to work for some people, so I have tried it: restarting Docker and Reset Credentials under Docker Desktop --> Settings --> Shared Drives.
Here is my docker-compose.yml file
version: '3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "3000:3000"
    volumes:
      - /app/node_modules
      - .:/app
I have tried it through the Docker CLI too, but the problem persists:
docker build -f Dockerfile.dev .
docker run -p 3000:3000 -v /app/node_modules -v ${pwd}:/app image-id
Windows does copy the files in the current directory to the container, but they are not in sync.
I am using Windows 10 PowerShell and Docker version 18.09.2.
UPDATE:
I have checked the container contents using:
docker exec -t -i container-id sh
and printed file contents using:
cat filename
From this it is clear that the files the container references have been updated, but I still don't understand why I have to restart the container to see the changes in the browser.
Shouldn't they be apparent after just refreshing the tab?

docker-compose: use file from volume in Dockerfile

I defined a volume in my docker-compose.yml. I want to use one of the files from that volume in my Dockerfile, but I get the error "No such file or directory".
If I build the container without accessing the files in the Dockerfile, I can see all the files from the volume inside the container, at the location specified in docker-compose.yml.
Is this how it is supposed to work, or am I doing something wrong? I think I am missing something.
repository: https://github.com/Lightshadow244/OwnMusicWeb
docker-compose.yml:
version: '3'
services:
  ownmusicweb:
    build: .
    container_name: ownmusicweb
    hostname: ownmusicweb
    volumes:
      - ~/OwnMusicWeb/ownmusicweb:/ownmusicweb
    ports:
      - 83:8000
    tty: true
Dockerfile:
FROM ubuntu:latest
WORKDIR /ownmusicweb
RUN ["apt-get", "update"]
RUN ["apt-get", "install", "-y", "python-pip"]
RUN ["pip", "install", "--upgrade", "pip"]
RUN ["pip", "install", "Django", "eyeD3", "djangorestframework", "markdown", "django-filter"]
RUN ["python", "/ownmusicweb/manage.py", "migrate"]
RUN ["python", "/ownmusicweb/manage.py", "runserver", "0.0.0.0:8000"]
Summarising the discussion in the comments:
The RUN directive has no access to the volume because it is not mounted yet at build time. Docker only creates the build context, which is what the ADD (or COPY) directive uses; files added that way are baked into the built image, so you would need a rebuild to update them.
After the build (triggered by build: . in docker-compose.yml) has finished, Docker launches the container and mounts the volume, which is too late for your RUN instructions.
The suggested mechanism is to use ENTRYPOINT with a script that launches your app. It is executed after the build, at container start, so it has access to the volume.
Another approach, which seems a bit cleaner to me, is to use the command directive of docker-compose; you can put the same script there. Which one to pick depends on how you deploy and how you use Docker in your development environment.
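As a sketch of the ENTRYPOINT approach (the script name and its location in the build context are hypothetical), the two manage.py RUN lines would move into a script that only runs once the volume is mounted:
entrypoint.sh:
#!/bin/sh
# executed at container start, when ~/OwnMusicWeb/ownmusicweb is already mounted at /ownmusicweb
python /ownmusicweb/manage.py migrate
exec python /ownmusicweb/manage.py runserver 0.0.0.0:8000
and in the Dockerfile, the last two RUN lines would be replaced with:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]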
