Environment variable not set in container when using `environment` key - docker

Following these instructions, I tried setting an environment variable with -e TC=3 and in the compose file like so:
services:
  balancer:
    environment:
      - TC=3
But the variable is not set when the container is run.
Does anyone see what I'm doing wrong?
I'm using:
docker-compose 1.23.1, build b02f1306
Docker 18.06.1-ce, build e68fc7a

The way you are setting the environment is correct. With this compose file
version: '2'
services:
  test:
    environment:
      - HELLO=WORLD
    image: alpine
    command: env
I got this output
$ docker-compose -f test-compose.yml up
Creating network "sandbox_default" with the default driver
Creating sandbox_test_1 ... done
Attaching to sandbox_test_1
test_1 | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
test_1 | HOSTNAME=e2eb1a0da23e
test_1 | HELLO=WORLD
test_1 | HOME=/root
sandbox_test_1 exited with code 0
If you want to be able to override a variable written in compose file, then you need to use ${var_name} syntax, e.g.
environment:
  - HELLO=${hello_value}
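Compose also supports a default inside the substitution, so the file keeps working when the host variable is unset. A minimal sketch, reusing the names above:

```yaml
environment:
  # uses the host's hello_value when set, otherwise falls back to WORLD
  - HELLO=${hello_value:-WORLD}
```

Running `hello_value=EARTH docker-compose up` then overrides the default.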

Related

Docker stack conditional environment variable

I have a docker stack file that is deployed across my swarm, which has many nodes.
For a specific reason one of the nodes (let's call it node A) has a connection to the outside (internet) and the others don't, so when deploying a container on the other nodes I need to set the HTTP_PROXY environment variable.
Question is: how do I set this ONLY on the nodes with a specific label (and not on node A)?
docker-compose.yml
version: '3.6'
services:
  app:
    image: my_image
    ports:
      - "8033:8000"
    environment:
      - HTTP_PROXY=proxy.server.com:3128
      - HTTPS_PROXY=proxy.server.com:3128
    deploy:
      replicas: 10
      placement:
        constraints: [node.labels.app_server == app_server]
    volumes:
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
The only alternative so far would be to have the stack deployed with different variables, and place a constraint on deployment. But I am trying to avoid it.
How about setting those environment variables in the necessary hosts and passing them through to your container like so:
version: '3'
services:
  app:
    image: ubuntu
    environment:
      - HTTP_PROXY
      - HTTPS_PROXY
They will only be set in the container if they are also set in the host environment. See documentation here.
Example/More Info:
# When HTTP_PROXY is set in the host environment, value is passed through to the container.
$ HTTP_PROXY=test docker-compose run app env | grep -i proxy
Creating some-nodes-only_app_run ... done
HTTP_PROXY=test
# When HTTP_PROXY is not set in the host environment, nothing is set in container.
$ docker-compose run app env | grep -i proxy
Creating some-nodes-only_app_run ... done
You could also write an entrypoint script to set the proxy when needed. I would recommend checking the connectivity of the container and then falling back to a proxy if necessary, but if you want to do it based on the hostname you could use something like this:
entrypoint.sh
#!/bin/bash
PROXY_HOSTNAME=some-host
if [ -f /etc/host_hostname ]; then
  HOST_HOSTNAME=$(cat /etc/host_hostname)
  if [ "$HOST_HOSTNAME" = "$PROXY_HOSTNAME" ]; then
    echo "Setting fallback http proxy"
    export HTTP_PROXY=${FALLBACK_HTTP_PROXY}
  fi
fi
exec "$@"
Dockerfile
# test:latest
FROM ubuntu
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
docker-compose.yml
version: '3'
services:
  app:
    image: test:latest
    environment:
      - FALLBACK_HTTP_PROXY=test
    volumes:
      - /etc/hostname:/etc/host_hostname:ro
Example run:
$ docker-compose run app env | grep -i http_proxy
Creating some-nodes-only_app_run ... done
Setting fallback http proxy
FALLBACK_HTTP_PROXY=test
HTTP_PROXY=test

Docker-Compose Passing command args by .env

I often need to start Odoo server with different arguments.
So in docker-compose.yml, which is versioned, I specified the following:
version: '3'
services:
  web:
    image: odoo:12.0
    command: odoo ${ODOO_ARGS}
and created a .env file with:
ODOO_ARGS="--update=all"
This works well with a single argument, but it doesn't handle multiple arguments. For example if I try the following:
ODOO_ARGS="--database=myDb --update=all --stop-after-init"
the command will be evaluated as: odoo --database="myDb --update=all --stop-after-init"
I'm pretty sure it's a syntax issue, so I'd like to know how to pass multiple arguments to the command option through the .env file.
It actually evaluates to odoo "--database=myDb --update=all --stop-after-init" just because you put quotes in the env file. Here is an example with several arguments in one string:
docker-compose.yml
version: "3.7"
services:
  test:
    image: debian:buster
    command: find ${ARGS}
.env
ARGS=/ -name bash
Running this you'll get:
test_1 | /bin/bash
test_1 | /usr/share/lintian/overrides/bash
test_1 | /usr/share/menu/bash
test_1 | /usr/share/doc/bash
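The mechanics at play can be reproduced in a plain shell, with no Compose involved: an unquoted variable expansion is word-split into separate arguments, while a quoted one stays a single argument, which is what the stray quotes in the .env file effectively caused.

```shell
#!/bin/sh
ARGS="/ -name bash"

set -- $ARGS        # unquoted expansion: word-split into 3 arguments
echo "$#"           # prints 3

set -- "$ARGS"      # quoted expansion: kept as a single argument
echo "$#"           # prints 1
```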

Setting JVM_ARGS for Payara server in docker-compose

How can I set a Payara server's JVM option in docker-compose?
For the application I use the payara/server-full:5.194 container.
I know that I have to use the JVM_ARGS env variable, but I don't know exactly how.
This is a piece of my docker-compose.yml file:
version: "3.5"
services:
  myApplication:
    build:
      context: .
      dockerfile: Dockerfile
    image: myApplication:latest
    environment:
      POSTBOOT_COMMANDS: /tmp/init_asadmin_commands.sh
      JVM_ARGS: -DmyApplication.home.dir=/msc/srv/myApplication
    ports:
      - "8080:8080"
      - "4848:4848"
Does anybody have an example for me? (I haven't found a good usage example.)
Thanks!
Edit (1):
This is the error I get on docker-compose up:
sed: -e expression #1, char 54: unknown option to `s'
And this is what I found about how the container uses it:
COMMAND=`echo "$OUTPUT"\
| sed -n -e '2,/^$/p'\
| sed 's/glassfish.jar/glassfish.jar '"$JVM_ARGS"'/' `
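That error is consistent with the `/` characters in the JVM_ARGS value colliding with sed's `/` delimiter: the first slash in `-DmyApplication.home.dir=/msc/...` prematurely terminates the `s///` expression. One workaround, sketched here under the assumption that the image keeps using this sed line, is to pass the value with the slashes pre-escaped so the replacement text stays valid:

```shell
#!/bin/sh
# Reproduce the image's substitution with slashes escaped in JVM_ARGS
# (\/ is a literal / inside a sed replacement).
JVM_ARGS='-DmyApplication.home.dir=\/msc\/srv\/myApplication'
echo 'exec java -jar glassfish.jar' \
  | sed 's/glassfish.jar/glassfish.jar '"$JVM_ARGS"'/'
# prints: exec java -jar glassfish.jar -DmyApplication.home.dir=/msc/srv/myApplication
```

In the compose file that would mean writing `JVM_ARGS: -DmyApplication.home.dir=\/msc\/srv\/myApplication` (backslashes are literal in an unquoted YAML scalar).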

docker-compose ps does not show running services

I have a docker-compose stack launched on a remote machine, through gitlab CI/CD (a runner connects to the docker engine on the remote machine and performs the deploy with docker-compose up -d).
When I connect to that machine from my laptop, using eval docker-machine env REMOTE_ADDRESS, I can see the docker processes running (with docker ps), while the services stack appears to be empty (docker-compose ps).
I am not able to use docker-compose down to stop the stack, and trying docker-compose up -d gives me the error
ERROR: for feamp_postgres Cannot create container for service postgres: Conflict. The container name "/feamp_postgres" is already in use by container "40586885...". You have to remove (or rename) that container to be able to reuse that name.
The reverse is also true, I can start the stack from my local laptop (using docker-machine), but then the CI/CD pipeline fails when trying to execute docker-compose up -d with the same error.
This happens using the latest versions of docker and docker-compose, both on the laptop (OSX) and on the runner (ubuntu 18.04).
In other circumstances (~10 other projects) this has worked smoothly.
This is the docker-compose.yml file I am using.
version: "3.7"
services:
  web:
    container_name: feamp_web
    restart: always
    image: guglielmo/fpa/feamp:latest
    expose:
      - "8000"
    links:
      - postgres:postgres
      - redis:redis
    environment:
      - ...
    volumes:
      - public:/app/public
      - data:/app/data
      - uwsgi_spooler:/var/lib/uwsgi
      - weblogs:/var/log
    command: /usr/local/bin/uwsgi --socket=:8000 ...
  nginx:
    container_name: feamp_nginx
    restart: always
    ...
  postgres:
    container_name: feamp_postgres
    restart: always
    image: postgres:11-alpine
    ...
  redis:
    container_name: feamp_redis
    restart: always
    image: redis:latest
    ...
volumes:
  ...
networks:
  default:
    external:
      name: webproxy
Normally I can up the stack from my local laptop and manage it from the CI/CD pipeline on gitlab, or vice-versa.
This diagram should help visualise the situation.
+-----------------+
| |
| Remote server |
| |
+----|--------|---+
| |
| |
docker-compose ps| |docker-compose up -d
| |
| |
+-------------------+ | | +--------------------+
| | | | | |
| Docker client 1 ---------+ +--------- Docker client 2 |
| | | |
+-------------------+ +--------------------+
Connections to the remote server's docker engine are made through docker-machine.
It appears that specifying the project name when invoking docker-compose commands solves the issue.
This can be done with the -p parameter on the command line or the COMPOSE_PROJECT_NAME environment variable, on both clients.
For some reason this was not needed in previous projects.
It may be a change in docker (I upgraded from 18 to 19), or something else; I still do not know the details.
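Compose derives the default project name from the directory containing the compose file, so two clients that check the project out under different paths silently get different prefixes. Pinning it explicitly avoids that; a sketch, where the project name feamp is illustrative:

```shell
# .env next to docker-compose.yml, or exported in both the CI job and the laptop shell
COMPOSE_PROJECT_NAME=feamp

# equivalent one-off form:
#   docker-compose -p feamp ps
```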
Instead of using docker-compose ps you can try docker ps -a and work from there.
Assuming you are ok simply discarding the containers, you can brute-force removal by calling:
docker rm -f 40586885
docker network rm webproxy
In my case, the problem was the project name in the .env file (compose file version 3.7, latest docker as of 2022-07-17):
COMPOSE_PROJECT_NAME=demo.sitedomain-testing.com
Compose project names do not support dots (.), so I replaced them with hyphens and it worked:
COMPOSE_PROJECT_NAME=demo-sitedomain-testing-com
In addition, container names do not support dots either.
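The documented rule is that a project name may contain only lowercase letters, digits, dashes and underscores, and must start with a letter or digit. A quick shell check, with that rule written out as a regex (the pattern is my rendering of the documented rule, not something Compose exposes):

```shell
#!/bin/sh
pattern='^[a-z0-9][a-z0-9_-]*$'

printf '%s\n' 'demo.sitedomain-testing.com' | grep -Eq "$pattern" \
  && echo valid || echo invalid    # prints: invalid (dots rejected)

printf '%s\n' 'demo-sitedomain-testing-com' | grep -Eq "$pattern" \
  && echo valid || echo invalid    # prints: valid
```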

Linked container IP not in hosts

I'm trying to configure a simple LAMP app.
Here is my Dockerfile
FROM ubuntu
# ...
RUN apt-get update
RUN apt-get -yq install apache2
# ...
WORKDIR /data
And my docker-compose.yml
db:
  image: mysql
web:
  build: .
  ports:
    - 80:80
  volumes:
    - .:/data
  links:
    - db
  command: /data/run.sh
After docker-compose build and docker-compose up I was expecting to find db added to /etc/hosts inside the web container, but it's not there.
How can this be explained? What am I doing wrong?
Note 1: at up time I see only Attaching to myapp_web_1; shouldn't I also see myapp_db_1?
Note 2: I'm using boot2docker
Following @Alexandru_Rosianu's comment, I checked:
$ docker-compose logs db
error: database is uninitialized and MYSQL_ROOT_PASSWORD not set
Did you forget to add -e MYSQL_ROOT_PASSWORD=... ?
Since I now set the variable MYSQL_ROOT_PASSWORD
$ docker-compose up
Attaching to myapp_db_1, myapp_web_1
db_1 | Running mysql_install_db
db_1 | ...
I can see the whole db log and the db host effectively set in web's /etc/hosts
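The same fix can live in the compose file instead of being passed on the command line; a sketch, where the password value is only a placeholder:

```yaml
db:
  image: mysql
  environment:
    - MYSQL_ROOT_PASSWORD=secret   # placeholder; use a real secret
```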