docker-compose set environment variable doesn't work - docker

I use Docker, and docker-compose to tie the containers together.
In my Python Flask code, I refer to an environment variable like this:
import os
from app import db, create_app
app = create_app(os.getenv('FLASK_CONFIGURATION') or 'development')
if __name__ == '__main__':
    print(os.getenv('FLASK_CONFIGURATION'))
    app.run(host='0.0.0.0', debug=True)
And here is my docker-compose.yml:
version: '3.7'
services:
  nginx:
    build:
      context: .
      dockerfile: docker/nginx/dockerfile
    container_name: nginx
    hostname: nginx-prod
    ports:
      - '80:80'
    networks:
      - backend
    links:
      - web_project
    depends_on:
      - web_project
    environment:
      - FLASK_CONFIGURATION=production
  mongodb:
    build:
      context: .
      dockerfile: docker/mongodb/dockerfile
    container_name: mongodb
    hostname: mongodb-prod
    ports:
      - '27017:27017'
    networks:
      - backend
  web_project:
    build:
      context: .
      dockerfile: docker/web-prod/dockerfile
    container_name: web_project
    hostname: web_project_prod
    ports:
      - '5000:5000'
    networks:
      - backend
    tty: true
    depends_on:
      - mongodb
    links:
      - mongodb
    environment:
      - FLASK_CONFIGURATION=production
networks:
  backend:
    driver: 'bridge'
I set FLASK_CONFIGURATION=production via the environment key.
But when I run the stack, FLASK_CONFIGURATION=production doesn't seem to take effect.
I also tried adding ENV FLASK_CONFIGURATION production to each Dockerfile (that doesn't work either).
The strange thing is, when I enter the container via bash (docker exec -it <container> bash) and check the environment variables with export, the variable is set perfectly.
Is there anything wrong in my Docker settings?
Thanks.

[SOLVED]
It was caused by supervisor.
When using supervisor, its shell is isolated from the original one, so it does not inherit the variables Docker sets.
So we have to define our environment variables in supervisor.conf.
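A minimal sketch of what that can look like in a supervisor program section (the program name and command are placeholders, not taken from the question):
; supervisor.conf (hypothetical program section)
[program:flask_app]
command=python /app/run.py
; supervisor does not inherit the container's environment, so pass the variable explicitly
environment=FLASK_CONFIGURATION="production"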

Your Flask code looks OK, and as you said, in bash this ENV variable exists.
My advice is to find a way to put this variable into a .env file in your project.
I will explain why I'm saying this, based on a similar issue I had with cron:
cron runs in its "own world" because the system runs and executes it, so it doesn't share the ENV variables that the bash of the main container process is holding.
So I assume (please give feedback if not) that Flask, too, runs in a similar way in its "own world" and doesn't have access to the ENV variables that Docker sets.
Therefore, I created a bash script that reads all the ENV variables and writes them to the project's .env file; this script runs after the container is created.
This way, no matter where and how you run the code/script, those ENV variables will always exist.
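A minimal sketch of such a script (the variable names and the .env path are illustrative, not from the original answer):
#!/bin/bash
# copy selected container ENV variables into the project's .env file so that
# processes started outside the main shell (cron, supervisor, ...) can read them
ENV_FILE=/app/.env
: > "$ENV_FILE"
for var in FLASK_CONFIGURATION DATABASE_URL; do
    echo "$var=${!var}" >> "$ENV_FILE"
done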

Related

Issue in docker compose with volume undefined

I get the below error when I run docker-compose up; any pointers as to why I am getting it?
service "mysqldb-docker" refers to undefined volume mysqldb: invalid compose project
Also, is there a way to pass the $ENV value from the CLI to docker-compose up? Currently I have an ENV variable that specifies dev, uat or prod, which I use to pick the db name. Are there better alternatives than creating a .env file explicitly for this?
version: '3.8'
services:
  mysqldb-docker:
    image: '8.0.27'
    restart: 'unless-stopped'
    ports:
      - "3309:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_PASSWORD=root
      - MYSQL_DATABASE=reco-tracker-$ENV
    volumes:
      - mysqldb:/var/lib/mysql
  reco-tracker-docker:
    image: 'reco-tracker-docker:v1'
    ports:
      - "8083:8083"
    environment:
      - SPRING_DATASOURCE_USERNAME=root
      - SPRING_DATASOURCE_PASSWORD=root
      - SPRING_DATASOURCE_URL="jdbc:mysql://mysqldb-docker:3309/reco-tracker-$ENV"
    depends_on: [mysqldb-docker]
You must define volumes at the top level like this:
version: '3.8'
services:
  mysqldb-docker:
    # ...
    volumes:
      - mysqldb:/var/lib/mysql
volumes:
  mysqldb:
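With the top-level volumes block in place, a quick way to confirm the file now parses cleanly (a sketch, run from the project directory):
docker-compose config --quiet && echo "compose file is valid"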
You can pass environment variables from your shell straight through to a service's containers with the environment key by not giving them a value:
https://docs.docker.com/compose/environment-variables/#pass-environment-variables-to-containers
web:
  environment:
    - ENV
But from my tests you can't write $ENV in the compose file and expect it to read your shell environment.
For that you need to call docker-compose this way:
docker-compose run -e ENV web python console.py
See this: https://docs.docker.com/compose/environment-variables/#set-environment-variables-with-docker-compose-run
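As a usage sketch of that second approach (the service name comes from the question; env | grep is only there to show the variable inside the container):
# set the variable explicitly for a one-off run of the service
docker-compose run -e ENV=dev reco-tracker-docker env | grep ENV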

Running shell script against Localstack in docker container

I've been developing a service locally against localstack. I've just been running their docker image via docker run --rm -p 4567-4583:4567-4583 -p 8080:8080 localstack/localstack
And then I manually run a small script to set up my S3 buckets, SQS queues, etc.
Now, I'd like to make this easier for others, so I thought I'd just add a Dockerfile and a docker-compose.yml file. Unfortunately, when I try to get this up and running using docker-compose up, I get an error saying the command from my setup script can't connect to the localstack services.
make_bucket failed: s3://localbucket Could not connect to the endpoint URL: "http://localhost:4572/localbucket"
Dockerfile:
FROM localstack/localstack
# since this is just local dev setup, localstack doesn't require anything specific here
ENV AWS_DEFAULT_REGION='[useast1]'
ENV AWS_ACCESS_KEY_ID='[lloyd]'
ENV AWS_SECRET_ACCESS_KEY='[christmas]'
COPY bin/localSetup.sh /localSetup.sh
COPY fixtures/notifications.json /notifications.json
RUN ["chmod", "+x", "/localSetup.sh"]
RUN pip install awscli
# expose service & web dashboard ports
EXPOSE 4567-4582 8080
ENTRYPOINT ["/localSetup.sh"]
docker-compose.yml
version: '3'
services:
  localstack:
    build: .
    ports:
      - "8080:8080"
      - "4567-4582:4567-4582"
localSetup.sh
#!/bin/bash
aws --endpoint-url=http://localhost:4572 s3 mb s3://localbucket
#additional similar calls but left off for brevity
I've tried switching localhost to 127.0.0.1 in my script commands, but I wind up with the same error. I'm probably missing something silly here.
There is another way to create your custom AWS resources when localstack freshly starts up. Since you already have a bash script for your resources, you can simply volume-mount your script into the init directory (/docker-entrypoint-initaws.d/ on older localstack images, /etc/localstack/init/ready.d/ on newer ones).
So my docker-compose file would be:
localstack:
  image: localstack/localstack:latest
  container_name: localstack_aws
  ports:
    - '4566:4566'
  volumes:
    - './localSetup.sh:/etc/localstack/init/ready.d/init-aws.sh'
Also, I would prefer awslocal over aws --endpoint in the bash script, as it handles the credentials and the endpoint for you.
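A sketch of the same setup script using awslocal (bucket name taken from the question):
#!/bin/bash
# awslocal wraps the aws CLI and targets the local localstack endpoint automatically
awslocal s3 mb s3://localbucket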
Try adding a hostname to the docker-compose file and editing your entrypoint file to reference that hostname.
docker-compose.yml
version: '3'
services:
  localstack:
    build: .
    hostname: localstack
    ports:
      - "8080:8080"
      - "4567-4582:4567-4582"
localSetup.sh
#!/bin/bash
aws --endpoint-url=http://localstack:4572 s3 mb s3://localbucket
This was the docker-compose-dev.yaml I used for testing an app that was using localstack. I ran it with docker-compose -f docker-compose-dev.yaml up, and I used the same localSetup.sh you did.
version: '3'
services:
  localstack:
    image: localstack/localstack
    hostname: localstack
    ports:
      - "4567-4584:4567-4584"
      - "${PORT_WEB_UI-8082}:${PORT_WEB_UI-8082}"
    environment:
      - SERVICES=s3
      - DEBUG=1
      - DATA_DIR=${DATA_DIR- }
      - PORT_WEB_UI=${PORT_WEB_UI- }
      - DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      - "${TMPDIR:-/tmp/localstack}:/tmp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
    networks:
      - backend
  sample-app:
    image: "sample-app/sample-app:latest"
    networks:
      - backend
    links:
      - localstack
    depends_on:
      - "localstack"
networks:
  backend:
    driver: 'bridge'

Docker: Why does my project have a .env file?

I'm working on a group project involving Docker that has a .env file, which looks like this:
DATABASE_URL=xxx
DJANGO_SETTINGS_MODULE=xxx
SECRET_KEY=xxx
Couldn't this just be declared inside the Dockerfile? If so, what is the advantage of making a .env file?
Not sure if I'm going in the right direction with this, but this Docker Docs page says (emphasis my own):
Your configuration options can contain environment variables. Compose
uses the variable values from the shell environment in which
docker-compose is run. For example, suppose the shell contains
POSTGRES_VERSION=9.3 and you supply this configuration:
db:
  image: "postgres:${POSTGRES_VERSION}"
When you run docker-compose up with this configuration, Compose looks for the POSTGRES_VERSION environment variable in the shell and substitutes its value in. For this example, Compose resolves the image to postgres:9.3 before running the configuration.
If an environment variable is not set, Compose substitutes with an empty string. In the example above, if POSTGRES_VERSION is not set, the value for the image option is postgres:.
You can set default values for environment variables using a .env file, which Compose automatically looks for. Values set in the shell environment override those set in the .env file.
If we're using a .env file, then wouldn't I see some ${...} syntax in our docker-compose.yml file? I don't see anything like that, though.
Here's our docker-compose.yml file:
version: '3'
services:
  server:
    build:
      context: ./server
      dockerfile: Dockerfile
    env_file: .env.dev
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - ./server:/app
    ports:
      - "8500:8000"
    depends_on:
      - db
    stdin_open: true
    tty: true
  db:
    image: postgres
  client:
    build:
      context: ./client
      dockerfile: Dockerfile
    command: bash -c "npm install; npm run start"
    volumes:
      - ./client:/app
      - /app/node_modules
    ports:
      - "3000:3000"
    depends_on:
      - server
The idea there is probably to have a place to keep secrets separated from docker-compose.yml, which you can then keep in version control and/or share.
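For contrast, a minimal sketch of the two different roles an env file can play (the values are made up): a file loaded with env_file lands inside the container's environment, with no ${...} syntax needed in docker-compose.yml, while a .env file next to docker-compose.yml is read automatically and used for ${...} substitution in the compose file itself:
# .env.dev, loaded via env_file: the variables appear inside the container only
SECRET_KEY=dev-only-secret

# .env next to docker-compose.yml, used for ${...} substitution at parse time,
# e.g. image: "postgres:${POSTGRES_VERSION}"
POSTGRES_VERSION=13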

golang program running fine outside of docker, but exiting with 0 when dockerized

I have the following docker-compose.yml file:
version: "3.3"
services:
api:
build: ./api
expose:
- '8080'
container_name: 'api'
ports:
- "8080:8080"
depends_on:
- db
stdin_open: true
tty: true
networks:
- api-net
db:
build: ./db
expose:
- '27017'
container_name: 'mongo'
ports:
- "27017:27017"
networks:
- api-net
networks:
api-net:
driver: bridge
and the Dockerfile for the api container is as follows:
FROM iron/go:dev
RUN mkdir /app
COPY src/main/main.go /app/
ENV SRC_DIR=/app
ADD . $SRC_DIR
RUN go get goji.io
RUN go get gopkg.in/mgo.v2
# RUN cd $SRC_DIR; go build -o main
CMD ["go", "run", "/app/main.go"]
If I run the code for main.go outside of a container it runs as expected, but if I try to run the container as part of docker-compose I get an exit 0. I have seen other threads on Stack Overflow that suggested using stdin_open and tty, but these have not helped. I have also tried creating a .env file in the directory I issue docker-compose up from, with COMPOSE_HTTP_TIMEOUT=8000 in it, and that has not worked either. I am looking for help and suggestions as to what I need to do to keep my api container up.
I know that --verbose can be passed to docker-compose, but I'm not sure what I should be looking for in the output it produces.
I finally managed to get to the bottom of this. In the code that worked outside of a container I had:
http.ListenAndServe("localhost:8080", mux)
The fix was simply to remove localhost, so the server listens on all interfaces inside the container instead of only on the loopback address:
http.ListenAndServe(":8080", mux)

Set extra host in environment variables

I'm using docker-compose to run my application, and to do that I need to set the hosts inside the container (they depend on the environment I'm running).
My approach was:
Create an environment file and set the variable:
#application.env
SERVER_IP=10.10.9.134
My docker compose file looks like:
version: '2'
services:
  api:
    container_name: myApplication
    env_file:
      - application.env
    build: ./myApplication/
    entrypoint: ./docker/api-startup.sh
    ports:
      - "8080:8080"
    depends_on:
      - redis
    extra_hosts: &extra_hosts
      myip: $SERVER_IP
But my problem is that the variable SERVER_IP is never replaced.
When I run docker-compose config I see:
services:
  api:
    build:
      context: /...../myApplication
    container_name: myApplication
    depends_on:
      - redis
    entrypoint: ./docker/api-startup.sh
    environment:
      SERVER_IP: 10.10.9.134
    extra_hosts:
      myip: ''
    ports:
      - 8080:8080
I've tried to replace the variable reference using $SERVER_IP or ${SERVER_IP} but it didn't work.
I created a .env file, added the single line HOST=test.example.com, then did this in docker-compose:
extra_hosts:
  - myip:${HOST}
docker-compose config then shows
extra_hosts:
  myip: test.example.com
To do this I followed the Docker Compose environment variables documentation, the section about the .env file.
UPDATE
According to the Docker documentation,
Note: If your service specifies a build option, variables defined in
environment files will not be automatically visible during the build.
Use the args sub-option of build to define build-time environment
variables.
It basically means that if you place your variables in the .env file, you can use them for substitution in docker-compose.yml, but if you use the env_file option for a particular container, those variables are only visible inside the Docker container, not during the build. This is also logical: env_file replaces docker run --env-file=FILE ... and nothing else.
So you can only place your values into .env. Alternatively, as William described, you can use the host's environment variables.
EDIT
Try the following:
version: '2'
services:
  api:
    container_name: myApplication
    env_file:
      - application.env
    build: ./myApplication/
    entrypoint: ./docker/api-startup.sh
    ports:
      - "8080:8080"
    depends_on:
      - redis
    extra_hosts:
      - "myip:${SERVER_IP}"
Ensure the curly braces are there and that the environment variable exists on the host OS.
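As a usage sketch (the IP is the one from the question; the config | grep step is only there to verify the substitution before starting):
# export on the host shell so ${SERVER_IP} can be substituted
export SERVER_IP=10.10.9.134
docker-compose config | grep -A1 extra_hosts
docker-compose up -d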
