Passing environment variables to an application in a Docker container through docker-compose

I want to pass environment variables to the applications containerized in Docker through docker-compose. This is a Visual Studio 2017 15.3 solution using Tools for Docker.
In my docker-compose.yml file I have:
app.web:
  image: app.web
  env_file:
    - ./path.to.project/config.env
  build:
    context: ./path.to.project
    dockerfile: Dockerfile
In config.env I have:
TEST=Compose
But when I try to read the variables using Environment.GetEnvironmentVariable("TEST"); I always get null.
If I set a non-existent file in env_file, it complains when I run it, so I take it for granted that it is locating the file.
If I set the variable this way:
app.web:
  image: app.web
  environment:
    - TEST=ComposeLiteral
  build:
    context: ./path.to.project
    dockerfile: Dockerfile
I get "ComposeLiteral" when evaluating "TEST".
What is the correct way of passing a file with environment variables to the application?

The problem is that the config.env file I was using started with the UTF-8 BOM, which made text editors show me the same content but caused docker-compose to read something else.
When you add a text file through Visual Studio, the BOM is added.
I created an example project with the problem.
Then in the log it is possible to see how the wrong parsing happens:
config.env:
environment:
  NUGET_FALLBACK_PACKAGES: /root/.nuget/fallbackpackages
  "\uFEFFTEST": Compose

config2.env:
environment:
  NUGET_FALLBACK_PACKAGES: /root/.nuget/fallbackpackages
  TEST: Compose
https://github.com/docker/compose/issues/5220
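For reference, one way to check for and strip the BOM yourself (this assumes a Unix-like shell; in Visual Studio you can also re-save the file via "Save with Encoding" and pick a UTF-8 option without signature):

# a UTF-8 BOM shows up as the bytes "ef bb bf" at the start of the file
head -c 3 config.env | od -An -tx1

# strip a leading BOM in place (GNU sed)
sed -i '1s/^\xEF\xBB\xBF//' config.env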

I am not able to reproduce your problem. If I start with your docker-compose.yml:
version: "2"
services:
app.web:
image: app.web
env_file:
- ./path.to.project/config.env
build:
context: ./path.to.project
dockerfile: Dockerfile
And in path.to.project I use the following Dockerfile:
FROM alpine
RUN apk add --update python3
COPY dumpenv.py /dumpenv.py
CMD python3 /dumpenv.py
This runs a simple web server (dumpenv.py) that does nothing but dump environment variables:
import http.server
import os

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-type', 'text/plain')
        self.end_headers()
        content = []
        for k, v in os.environ.items():
            content.append('{:20} {}'.format(k, v))
        self.wfile.write(bytes('\n'.join(content), 'utf-8'))
        return

server = http.server.HTTPServer(('0.0.0.0', 8080), Handler)
server.serve_forever()
If I docker-compose up this environment and then connect to the service, I see:
PWD /
TEST ComposeLiteral
HOSTNAME 5388e6e2717a
SHLVL 1
HOME /root
PATH ...
You can find this complete example online here. If you see different behavior using this same example, could you update your question to include a complete reproducer?
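For reference, one way to hit the test server from the host is shown below; note that the compose snippet above does not publish a port, so this assumes you first add a ports: mapping such as "8080:8080" to the app.web service.

docker-compose up -d
curl http://localhost:8080/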

Related

Accessing shell environment variables from docker-compose?

How do you access environment variables exported in Bash from inside docker-compose?
I'm essentially trying to do what's described in this answer but I don't want to define a .env file.
I just want to make a call like:
export TEST_NAME=test_widget_abc
docker-compose -f docker-compose.yml -p myproject up --build --exit-code-from myproject_1
and have it pass TEST_NAME to the command inside my Dockerfile, which runs a unittest suite like:
ENV TEST_NAME ${TEST_NAME}
CMD python manage.py test $TEST_NAME
My goal is to allow running my docker container to execute a specific unittest without having to rebuild the entire image, by simply pulling in the test name from the shell at container runtime. Otherwise, if no test name is given, the command will run all tests.
As I understand, you can define environment variables in a .env file and then reference them in your docker-compose.yml like:
version: "3.6"
services:
app_test:
build:
args:
- TEST_NAME=$TEST_NAME
context: ..
dockerfile: Dockerfile
but that doesn't pull from the shell.
How would you do this with docker-compose?
For the setup you describe, I'd docker-compose run a temporary container:
export COMPOSE_PROJECT_NAME=myproject
docker-compose run app_test python manage.py test_widget_abc
This uses all of the setup from the docker-compose.yml file except the ports:, and it uses the command you provide instead of the Compose command: or Dockerfile CMD. It will honor depends_on: constraints to start related containers (you may need an entrypoint wrapper script to actually wait for them to be running).
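A minimal sketch of such an entrypoint wrapper (the service name db, the port 5432, and the availability of nc in the image are all assumptions):

#!/bin/sh
# entrypoint.sh -- wait for a dependency to accept connections, then run the real command
until nc -z db 5432; do
  echo "waiting for db..."
  sleep 1
done
exec "$@"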
If the test code is built into your "normal" image you may not even need special Compose setup to do this; just point docker-compose run at your existing application service definition without defining a dedicated service for the integration tests.
Since Compose does (simple) environment variable substitution, you could also provide the per-execution command: in your Compose file:
version: "3.6"
services:
app_test:
build: ..
command: python manage.py $TEST_NAME # uses the host variable
Or, with the Dockerfile you have, pass through the host's environment variable; the CMD will run a shell to interpret the string when it starts up:
version: "3.6"
services:
app_test:
build: ..
environment:
- TEST_NAME # without a specific value here passes through from the host
These would both work with the Dockerfile and Compose setup you show in the question.
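In either variant the value comes from the host shell at run time, for example (the test name is illustrative):

export TEST_NAME=test_widget_abc
docker-compose up --build app_test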
Environment variables in your docker-compose.yaml will be substituted with values from the environment. For example, if I write:
version: "3"
services:
app_test:
image: docker.io/alpine:latest
environment:
TEST_NAME: ${TEST_NAME}
command:
- env
Then if I export TEST_NAME in my local environment:
$ export TEST_NAME=foo
And bring up the stack:
$ docker-compose up
Creating network "docker_default" with the default driver
Creating docker_app_test_1 ... done
Attaching to docker_app_test_1
app_test_1 | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
app_test_1 | HOSTNAME=be3c12e33290
app_test_1 | TEST_NAME=foo
app_test_1 | HOME=/root
docker_app_test_1 exited with code 0
I see that TEST_NAME inside the container has received the value from my local environment.
It looks like you're trying to pass the environment variable into your image build process, rather than passing it in at runtime. Even if that works once, it's not going to be useful, because docker-compose won't rebuild your image every time you run it, so whatever value was in TEST_NAME at the time the image was built is what you would see inside the container.
It's better to pass the environment into the container at run time.
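To make the contrast concrete, here is a sketch of the build-time route (hypothetical, mirroring the question's Dockerfile): whatever value TEST_NAME had when docker-compose build ran is baked into the image and will not change on later runs.

# Dockerfile: the value of TEST_NAME is frozen into the image at build time
ARG TEST_NAME
ENV TEST_NAME=${TEST_NAME}
CMD python manage.py test $TEST_NAME

Changing it afterwards requires another build with a different build argument, which is exactly why the run-time environment: approach is preferable here.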

"key cannot contain a space" error while running docker compose

I am trying to deploy my Django app to App Engine using a Dockerfile, and after following a few blogs such as these I created a docker-compose.yml file. But when I run docker compose up (or docker-compose -f docker-compose-deploy.yml run --rm gcloud sh -c "gcloud app deploy") I get the error key cannot contain a space. See below:
For example:
$ docker compose up
key cannot contain a space
$ cat docker-compose.yml
version: '3.7'
services:
  app:
    build:
      context: .
    ports: ['8000:8000']
    volumes: ['./app:/app']
Can someone please help me fix this error? I have tried yamllint to validate the YAML file for any space/indentation errors and it doesn't report any.
EDIT:
Here is the content of the file used in the longer command:
version: '3.7'
services:
  gcloud:
    image: google/cloud-sdk:338.0.0
    volumes:
      - gcp-creds:/creds
      - .:/app
    working_dir: /app
    environment:
      - CLOUDSDK_CONFIG=/creds
volumes:
  gcp-creds:
OK, this is finally resolved! After beating my head against it, I was able to resolve the issue by doing the following:
Unchecked the option to use "Docker Compose V2" in the Docker Desktop settings.
Closed the Docker Desktop app and restarted it.
Please try these steps in case you face the issue. Thanks!
Just adding another alt answer here that I confirmed worked for me when following the steps above did not. My case is slightly different, but as Google brought me here first I thought I'd leave a note.
Check your env var values for spaces!
This may only be applicable if you are using env_var files (appreciate that OP is not in the minimal example, hence saying this is different).
Unescaped spaces in variables will cause this cryptic error message.
So, given a compose file like this:
version: '3.7'
services:
  gcloud:
    image: google/cloud-sdk:338.0.0
    volumes:
      - gcp-creds:/creds
      - .:/app
    working_dir: /app
    env_file:
      - some_env_file.env
If some_env_file.env looks like this:
MY_VAR=some string with spaces
then we get the cryptic key cannot contain a space.
If instead we change some_env_file.env to be like this:
MY_VAR="some string with spaces"
then all is well.
The issue has been reported to docker-compose.
Google brought me here first, and when your suggestion sadly didn't work for me, it then took me to this reddit thread, where I found out the above.
Docker Compose (at least since v2) automatically parses .env files before processing the docker-compose.yml file, regardless of any env_file setting within the yaml file. If any of the variables inside your .env file contains spaces, then you will get the error key cannot contain a space.
Two workarounds exist at this time:
Rename your .env file to something else, or
Create an alternate/empty .env file, e.g. .env.docker and then explicitly set the --env-file parameter, i.e. docker compose --env-file .env.docker config.
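As a concrete sketch of the second workaround:

# give Compose an empty interpolation file so it never parses the real .env
touch .env.docker
docker compose --env-file .env.docker up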
Track the related issues here:
https://github.com/docker/compose/issues/6741
https://github.com/docker/compose/issues/8736
https://github.com/docker/compose/issues/6951
https://github.com/docker/compose/issues/4642
https://github.com/docker/compose/commit/ed18cefc040f66bb7f5f5c9f7b141cbd3afbbc89
https://docs.docker.com/compose/env-file/
One more thing to be careful about: since Compose V2, this error may also be raised if you have inline comments in the env file used by Compose. For example, with
---
version: "3.7"
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    env_file: .app.env
and an .app.env file like this:
RABBIT_USER=user # RabbitMQ user
the same error may occur. To fix it, just move the comment to its own line:
# RabbitMQ user
RABBIT_USER=user
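Either way, you can check how Compose parsed the env file without starting anything by dumping the resolved configuration; the env_file values appear under environment: in the output:

# prints the fully resolved configuration, including variables read from env_file
docker compose config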

How to access a website running in a container when you're using network_mode: host

I have a very tricky topic because I need to access a private DB in AWS. In order to connect to this DB, first I need to create a bridge like this:
ssh -L 127.0.0.1:LOCAL_PORT:DB_URL:PORT -N -J ACCOUNT#EMAIL.DOMAIN -i ~/KEY_LOCATION/KEY_NAME.pem PC_USER#PC_ADDRESS
Via 127.0.0.1:LOCAL_PORT:DB_URL I can connect to the DB in my Java app. Let's say the port is 9991 in this case.
My Docker files more or less look like this:
docker-compose.yml
version: '3.4'
services:
  api:
    image: fanmixco/example:v0.01
    build:
      context: .
    network_mode: host
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://host.docker.internal:9991/MY_DB
Dockerfile
FROM openjdk:11
RUN mkdir /home/app/
WORKDIR /home/app/
RUN mkdir logs
COPY ./target/MY_JAVA_APP.jar .
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "MY_JAVA_APP.jar"]
The image runs properly. However, if I try:
using localhost:8080/MY_APP fails
using 127.0.0.1/MY_APP fails
getting the container's IP and use it later fails
using host.docker.internal/MY_APP fails
I'm wondering how I can test my app. I know it's running because I get a successful message in the console and the new data was added to the DB, but I don't know how I can test or access it. Any idea of the proper way to do it? Thanks.
P.S.:
I'm running my images in Docker Desktop for Windows.
I have another case using Tomcat 9 and running CMD ["catalina.sh", "run"], and I know it's working because I get this message in the console:
INFO [main] org.apache.catalina.startup.Catalina.start Server startup in [9905] milliseconds
But I cannot access it again.
I'm not really sure what the issue is based on the above information since I cannot replicate the system on my own machine.
However, these are some places to look:
you might be running into an issue similar to this: https://github.com/docker/for-mac/issues/1031 because of the networking magic you are doing with ssh and AWS DB
you should try specifying either a build/Dockerfile or an image, and avoid specifying both
version: '3.4'
services:
  api:
    image: fanmixco/example:v0.01   # choose using an image
    build:                          # or building from a Dockerfile
      context: .                    # but not both
    network_mode: host
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://host.docker.internal:9991/MY_DB
Hope that helps 🤞🏻 and good luck 🍀
I guess you need to publish the port of your container.
Try adding the ports property to your docker-compose file:
version: '3.4'
services:
  api:
    image: fanmixco/example:v0.01
    build:
      context: .
    ports:
      - 8080:8080
    network_mode: host
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://host.docker.internal:9991/MY_DB
Have a look at https://docs.docker.com/compose/compose-file/compose-file-v3/#endpoint_mode
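One caveat: published ports: are ignored when network_mode: host is set, and host networking does not behave the same way on Docker Desktop for Windows, so you will likely need to drop network_mode: host for the mapping to take effect. With the mapping in place, a quick check from the host would be:

curl http://localhost:8080/MY_APP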

Docker compose with name other than dockerfile

I have used docker to create CLI interfaces where I test my code. These are named reasonably as:
proj_root/.../docks/foo.dockerfile
proj_root/.../docks/bar.dockerfile
Because there is more than one dock involved, the top level "Dockerfile" at the project root is unreasonable. Although I can't copy ancestor directories when building in docker, I can clone my entire repo.
So my project architecture works for me.
Next, I look up docker-compose because I need to match my docker cards up against a postgres db and expose some ports.
However, from the command-line interface's perspective, docker-compose seems to be anchored to the hard-coded concept of a "Dockerfile" in the current working directory.
But! I see the error message implies the tool is capable of looking for an arbitrarily named dockerfile:
ERROR: Cannot locate specified Dockerfile: Dockerfile
The question is: how do I set docker-compose off looking for foo.dockerfile rather than ./Dockerfile?
In your docker-compose, under the service:
services:
  serviceA:
    build:
      context: <folder of your project>
      dockerfile: <path and name of your Dockerfile>
As mentioned in the docker-compose.yml documentation, you can override the Dockerfile filename within the build properties of your docker-compose services.
For example:
version: "3"
services:
  foo:
    image: user/foo
    build:
      context: .../docks
      dockerfile: foo.Dockerfile
  bar:
    image: user/bar
    build:
      context: .../docks
      dockerfile: bar.Dockerfile
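With that in place, each service builds from its own file, for example:

# build only the foo service, using docks/foo.Dockerfile as its Dockerfile
docker-compose build foo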

Dockerfiles not found when running docker-compose on windows (boot2docker)

I'm hoping that I've just missed something terribly obvious, but here's the situation I'm faced with.
Problem
Running docker-compose on windows after following docker-compose install steps from the website
docker-compose.yml file works fine on unix systems (have tested on Mac)
Currently it fails immediately on Windows when it cannot locate the Dockerfiles for the services defined in the yml file; the error was shown in a screenshot that is not reproduced here.
NOTE: The screenshot might be a bit confusing. The environment variable below is called GOPATH, but the folder on my colleague's computer is also called GOPATH. This gives the impression that the env var isn't set correctly, but it is indeed set.
version: '3'
services:
  renopost:
    depends_on:
      - reno-cassandra
      - reno-kafka
      - reno-consul
    build:
      context: ${GOPATH}/src/renopost
      dockerfile: ${GOPATH}/src/renopost/docker/dev/Dockerfile
    container_name: renopost
    image: renopost
    ports:
      - "4000:4000"
    volumes:
      - ${GOPATH}/src/renopost:/go/src/renopost
Above is a snippet of the docker-compose.yml file that is being run. The GOPATH env variable is indeed set and when following the directory path listed, I can confirm the file exists in that location.
Is there some interaction here with the Oracle VirtualBox VM that boot2docker uses, where it isn't actually finding that file?

Resources