I have a Python app that uses environment variables, and I want to make a dev/prod setup with one Dockerfile and one docker-compose.yml file (changing only the env file that holds the environment variables).
Here are the files I use to start the application:
Dockerfile:
FROM python:3.7-slim-buster
RUN apt-get update
WORKDIR /usr/src/app
RUN mkdir /usr/src/app/excel_users_dump
COPY ./requirements.txt .
RUN pip install -r requirements.txt
COPY . .
RUN python /usr/src/app/myblumbot/main.py
docker-compose.yml:
version: '3'
services:
  bot:
    build:
      context: .
    environment:
      ENV: ${ENV}
      PYTHONPATH: ${PYTHONPATH}
      PYTHONBUFFERED: ${PYTHONBUFFERED}
    volumes:
      - states:/var/myblumbot_states
volumes:
  states:
.env (in the same directory as docker-compose.yml)
PYTHONBUFFERED=1
PYTHONPATH=/usr/src/app
ENV=DEV
When I run the docker-compose up command, it builds, and then tells me that some environment variables are missing, so the application can't start:
env = os.environ['ENV']
KeyError: 'ENV'
But if I add the ENV variable's value in the Dockerfile, everything works fine.
How can I pass variables from docker-compose and the .env file?
When you have a setup with both a Dockerfile and a docker-compose.yml file like you show, things run in two phases. During the first phase the image is built, and during the second the container actually gets run. Most of the settings in docker-compose.yml don't have an effect during the build stage; that includes network settings, environment variables, and published ports.
In your Dockerfile you're running your application in a RUN step. That happens as part of the build, not the execution phase; the image that finally gets generated is the filesystem that results after your application exits. Since it's during the build phase, environment variable settings don't take effect.
If you change RUN to CMD, then this will get recorded in the image, and after the build completes, it will run as the main container process with environment variables and other settings.
(In comments you suggest ENTRYPOINT. This will work too, for the same reasons, but it makes a couple of tasks like getting a debug shell harder, and there's a standard Docker first-time setup pattern that needs ENTRYPOINT for its own purposes. I'd prefer CMD here.)
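Here's a minimal sketch of the corrected Dockerfile, changing only the final line:
FROM python:3.7-slim-buster
RUN apt-get update
WORKDIR /usr/src/app
RUN mkdir /usr/src/app/excel_users_dump
COPY ./requirements.txt .
RUN pip install -r requirements.txt
COPY . .
# CMD records the main container command in the image; it runs at
# container start, when compose's environment: settings are in effect
CMD ["python", "/usr/src/app/myblumbot/main.py"]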
Try to follow the docs:
Compose supports declaring default environment variables in an
environment file named .env placed in the folder where the
docker-compose command is executed (current working directory)
Try to use ENTRYPOINT python /usr/src/app/myblumbot/main.py instead of RUN...
My directory structure looks like this.
|
|--- Dockerfile
|--- .env
The content of the .env file looks like this.
VERSION=1.2.0
DATE=2022-05-10
I want to access VERSION and DATE as environment variables both at build time and at run time, so ENV is the one I should use. I know that.
How exactly can I do that?
I tried using a RUN command in the Dockerfile like
RUN export $(cat .env)
But it can only be accessed during runtime, not build time.
So, how can this be achieved with ENV?
I can do it manually, like:
ENV VERSION 1.2.0
ENV DATE 2022-05-10
But it is inefficient when I have many environment variables.
P.S. I cannot use docker-compose, because the image is going to be used by Kubernetes pods.
You could first export these variables as environment variables in your shell:
source .env
Then use the --build-arg flag to pass them to your docker build:
docker image build --build-arg VERSION=$VERSION --build-arg DATE=$DATE .
Next, in your Dockerfile:
ARG VERSION
ARG DATE
ENV version=$VERSION
ENV date=$DATE
As a result, you can access the variable during the build phase as VERSION and in your container as version.
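If you have many variables, one option is to generate the --build-arg flags from the .env file rather than typing them all out. A sketch, assuming every line of .env is a KEY=value pair with no blank lines, spaces, or quotes:
# prefix every KEY=value line in .env with --build-arg
docker image build $(sed 's/^/--build-arg /' .env) .
You still need a matching ARG line in the Dockerfile for each variable you want to use.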
What I've read from others says that there is no docker build --env-file ... option. As such, this situation makes a good argument for shifting more of the content of the Dockerfile into a shell script that the Dockerfile simply copies and runs, since you can source the .env file that way.
greetings.sh
#!/bin/sh
source variables.env
echo Foo $copyFileTest
variables.env
export copyFileTest="Bar"
Dockerfile
FROM alpine:latest
COPY variables.env .
RUN source variables.env && echo Foo $copyFileTest #outputs: Foo Bar
COPY greetings.sh .
RUN chmod +x /greetings.sh
RUN /greetings.sh #outputs: Foo Bar
RUN echo $copyFileTest #does not work, outputs nothing
You can specify the env_file in the docker-compose.dev.yml file as follows:
# docker-compose.dev.yml
services:
  app:
    ...
    env_file:
      - .env.development
and you have to have a .env.development file containing all the environment variables you need to pass to the container.
e.g.:
# .env.development
REACT_APP_API_URL="https://some-api-url.com/api/graphql"
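For comparison, if you start a container with plain docker run rather than Compose, the equivalent flag is --env-file (my-image below is a stand-in for your image name):
docker run --env-file .env.development my-image
Like env_file: in Compose, this only affects the running container, not the image build.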
I'm unable to find an easy solution, but probably I'm just searching for the wrong things:
I have a docker-compose.yml which contains a tomcat that is built by the contents of the /tomcat folder. In /tomcat there is a Dockerfile, a .war and a server.xml.
The Dockerfile is based on tomcat:9, and copys the server.xml and .war files into the right directories.
If I do docker-compose up, everything runs fine. But I would love to find a way to update the connectors within the server.xml without pruning the image, adjusting the server.xml, and starting it again.
It would be perfect to put a $CONNECTOR_CONFIG in the server.xml, and provide a variables.env to docker-compose where the $CONNECTOR_CONFIG variable is set to something like ""
I know I could adjust the server.xml within the Dockerfile with sed, but that way the image must be pruned every time I want to change something, right?
Is there a way that I can later just edit the variables.env and docker-compose down/up?
Regards,
EdFred
A useful pattern here is to use the image's ENTRYPOINT as a wrapper script that does first-time setup. If that script ends with exec "$@" then it will execute the image's CMD as normal. You can use this to do things like rewrite configuration files based on environment variables.
#!/bin/sh
# docker-entrypoint.sh
# Replace any environment variable references in server.xml.tmpl.
# (Assumes the image has the full GNU tool set.)
envsubst <"$CATALINA_BASE/conf/server.xml.tmpl" >"$CATALINA_BASE/conf/server.xml"
# Run the standard container command.
exec "$#"
Normally in a tomcat image you wouldn't include a CMD since the base image knows how to start Tomcat. The Docker Hub tomcat image page has a mention of it, or you can click through to find the original Dockerfile. You need to know this since specifying an ENTRYPOINT in a derived Dockerfile will reset the CMD.
Your Dockerfile then needs to COPY this script in and set up the ENTRYPOINT and CMD.
# Dockerfile
FROM tomcat:9
COPY myapp.war /usr/local/tomcat/webapps/
COPY server.xml.tmpl /usr/local/tomcat/conf/
COPY docker-entrypoint.sh /usr/local/tomcat/bin/
# ENTRYPOINT _MUST_ be JSON-array form
ENTRYPOINT ["docker-entrypoint.sh"]
# Duplicate from base image
CMD ["catalina.sh", "run"]
You can verify this by hand using a docker run command. Any command you specify after the image name gets run instead of the CMD; but the main container command is still constructed by passing that command as arguments to the alternate ENTRYPOINT and so your wrapper script will run.
docker run --rm \
-e CONNECTOR_CONFIG=test-connector-config \
my-image \
cat /usr/local/tomcat/conf/server.xml
In your final Compose setup, you can include the configuration as an environment: variable.
version: '3.8'
services:
  myapp:
    build: .
    ports: ['8080:8080']
    environment:
      CONNECTOR_CONFIG: ...
envsubst is a GNU tool that replaces $ENVIRONMENT_VARIABLE references in text files. It's very useful for this specific case, but you can do the same work with sed or another text-processing tool, especially if you don't have the GNU tools available (in particular if you have an Alpine-based image).
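For example, here's a sketch of the same entrypoint doing the substitution with sed instead of envsubst, assuming the template marks the insertion point with a literal @CONNECTOR_CONFIG@ placeholder and the value contains no | or & characters:
#!/bin/sh
# replace the placeholder with the variable's value; using | as the
# sed delimiter avoids clashing with any slashes in the value
sed "s|@CONNECTOR_CONFIG@|$CONNECTOR_CONFIG|g" \
  "$CATALINA_BASE/conf/server.xml.tmpl" \
  > "$CATALINA_BASE/conf/server.xml"
# run the standard container command
exec "$@"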
I want to build a multi container docker app with docker compose. My project structure looks like this:
docker-compose.yml
...
webapp/
    ...
    Dockerfile
api/
    ...
    Dockerfile
Currently, I am just trying to build and run the webapp via docker compose up with the correct build context. When building the webapp container directly via docker build, everything runs smoothly.
However, with my current specifications in the docker-compose.yml the line COPY . /webapp/ in webapp/Dockerfile (see below) copies the whole parent project to the container, i.e. the directory which contains the docker-compose.yml, and not just the webapp/ sub directory.
For some reason the line COPY requirements.txt /webapp/ works as expected.
What is the correct way of specifying the build context in docker compose? Why is the . in the Dockerfile interpreted as relative to the docker-compose.yml, while the requirements.txt is relative to the Dockerfile as expected? What am I missing?
Here are the contents of the docker-compose.yml:
version: "3.8"
services:
frontend:
container_name: "pc-frontend"
volumes:
- .:/webapp
env_file:
- ./webapp/.env
build:
context: ./webapp
ports:
- 5000:5000
and webapp/Dockerfile:
FROM python:3.9-slim
# set environment variables
ENV PYTHONWRITEBYTECODE 1
ENV PYTHONBUFFERED 1
# set working directory
WORKDIR /webapp
# copy dependencies
COPY requirements.txt /webapp/
# install dependencies
RUN pip install -r requirements.txt
# copy project
COPY . /webapp/ # does not work as intended
# add entrypoint to app
# ENTRYPOINT ["start-gunicorn.sh"]
CMD [ "ls", "-la" ] # for debugging
# expose port
EXPOSE 5000
The COPY directive is (probably) working the way you expect, but you have volumes: that are overwriting the image content with something else. Delete the volumes: block.
The image build sequence is working exactly the way you expect. build: { context: ./webapp } uses the webapp subdirectory as the build context and sends it to the Docker daemon. When the Dockerfile says, for example, COPY requirements.txt ., the file comes out of this directory. If you run, for example, docker-compose run frontend pip freeze, you should see the installed Python packages.
After the image is built, Compose starts a container, and at that point volumes: take effect. When you say volumes: ['.:/webapp'], here the . before the colon refers to the directory containing the docker-compose.yml file (and not the webapp subdirectory), and then it hides everything in the /webapp directory in the container. So you're replacing the image's /webapp (which had been built from the webapp subdirectory) with the current directory on the host (one directory higher).
You should usually be able to successfully combine an ordinary host-based development environment and a Docker deployment setup. Use a non-Docker Python virtual environment to build the application and run its unit tests, then use docker-compose up --build to run integration tests and the complete application. With a setup like this, you don't need to deal with the inconveniences of the Python runtime being "somewhere else" as you're developing, and you can safely remove the volumes: block.
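For reference, the service definition with the volumes: block removed is just:
version: "3.8"
services:
  frontend:
    container_name: "pc-frontend"
    env_file:
      - ./webapp/.env
    build:
      context: ./webapp
    ports:
      - 5000:5000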
I have a local project directory structure like:
config/
    test/
        docker-compose.yaml
        DockerFile
pip-requirements.txt
src/
    app/
        app.py
I'm trying to use Docker to spin up a container to run app.py. Simple in concept, but this has proven extraordinarily difficult. I'm keeping my Docker files in a separate sub-folder because I plan on having a large number of different environments, and I don't want to clutter my top-level folder with dozens of files like Dockerfile.1, Dockerfile.2, etc.
My docker-compose.yaml looks like:
version: '3'
services:
  worker:
    image: myname:mytag
    build:
      context: .
      dockerfile: ./Dockerfile
    volumes:
      - ./src/app:/usr/local/myproject/src/app
My Dockerfile looks like:
FROM python:2.7
# Set the working directory.
WORKDIR /usr/local/myproject/src/app
# Copy the current directory contents into the container.
COPY src/app /usr/local/myproject/src/app
COPY pip-requirements.txt pip-requirements.txt
# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r pip-requirements.txt
# Define environment variable
ENV PYTHONUNBUFFERED 1
CMD ["./app.py"]
If I run from the top-level directory of my project:
docker-compose -f config/test/docker-compose.yaml up
it succeeds in building the image, but fails when attempting to run the image with the error:
ERROR: for worker Cannot start service worker: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"./app.py\": stat ./app.py: no such file or directory": unknown
If I inspect the image's filesystem with:
docker run --rm -it --entrypoint=/bin/bash myname:mytag
it correctly dumps me into /usr/local/myproject/src/app. However, this directory is empty, explaining the runtime error. Why is this empty? Shouldn't the COPY statement and volumes have populated the image with my application code?
For one, you're clobbering the data set by including the content during the build stage and then using docker-compose to overlay a directory on top of it. Let's first discuss the differences between the Dockerfile (image) and docker-compose (runtime).
Normally, you would use the COPY directive in the Dockerfile to copy a component of your local directory into the image so that it is immutable. In most application deployments, this means we bundle our entire application into the directory and prepare it to run. This means that it is not dynamic (changes you make to the code afterwards are not visible in the container), but it is a gain in terms of security.
docker-compose is a runtime specification, meaning: "Once I have an image, I want to programmatically define how it runs." By defining a volume here, you're saying "I want the local directory (from the perspective of the compose file) ./src/app to be overlaid onto /usr/local/myproject/src/app."
Thus anything you built into the image doesn't really matter. You're adding another layer on top of the image which will take precedence over what was built into it.
It may also have something to do with you specifying the WORKDIR already and then using a ./ reference in the CMD. It would be worth trying it as just CMD ["app.py"].
What happens if you:
1. Build the image: docker build -t "test" .
2. Run the image manually: docker run --rm -it test
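One more thing worth checking: Compose resolves both the build context and host volume paths relative to the directory containing the docker-compose.yaml, not the directory you run the command from. A sketch that points both back at the project root (assuming the layout shown in the question):
version: '3'
services:
  worker:
    image: myname:mytag
    build:
      # both paths resolve relative to config/test/, so step back up
      # to the project root, where src/ and pip-requirements.txt live
      context: ../..
      dockerfile: config/test/Dockerfile
    volumes:
      - ../../src/app:/usr/local/myproject/src/app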
I have the following Dockerfile:
FROM <image-of-nodejs>
COPY docker/node/entry.sh /var/entries/entry.sh
RUN apt-get update
RUN apt-get install ant -y
CMD ["/var/entries/entry.sh"]
the image is used by a docker-compose file:
version: "3.3"
services:
my_node:
build:
context: ./
dockerfile: docker/node/Dockerfile-build-dev
volumes:
- type: bind
source: ./
target: /var/proj
and the entry.sh file is the following:
#!/bin/bash
export QNAMAKER_SUB_KEY=b13615t
If I then start the container and enter it, I won't find my env variable set:
docker-compose up --force-recreate -d
docker-compose run my_node bash
root@9c081bedde65:/# echo ${QNAMAKER_SUB_KEY}
<empty>
I would prefer to set my variables through my script rather than with the ENV Dockerfile command. What's wrong?
There are a couple of things going on here.
First, docker-compose run doesn't run a command inside the container you started with docker-compose up. It starts a new container to run a one-off command. You probably want docker-compose exec.
The reason you don't see the variable when using docker-compose run is that you are overriding your CMD by providing a new command (bash) on the docker-compose run command line.
You could consider:
- Using ENV statements in your Dockerfile.
- Using the environment: key in your docker-compose.yml.
The former will embed the information into your image, while the latter would mean that the variable would be unset if you didn't explicitly set it in your docker-compose.yaml file (or using -e on the docker run command line).
You may be able to accomplish your goal using an ENTRYPOINT script and setting the value there, but that won't impact the environment visible to you when using docker exec (or docker-compose exec).
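If you do want the script approach, here's a sketch of that ENTRYPOINT wrapper pattern (the script path matches the question's layout; the CMD shown is a hypothetical main command):
#!/bin/bash
# /var/entries/entry.sh, used as the ENTRYPOINT rather than the CMD
export QNAMAKER_SUB_KEY=b13615t
# replace this shell with the container's main command (the CMD),
# which inherits the exported variable
exec "$@"
with the Dockerfile ending in:
ENTRYPOINT ["/var/entries/entry.sh"]
# hypothetical main command; substitute whatever your app actually runs
CMD ["node", "/var/proj/index.js"]
The variable is then visible to the main process and its children, though, as noted above, still not to a shell you open with docker exec.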