Naming Dask Workers with Docker Swarm Templating

I'm currently using Docker Swarm to deploy and manage multiple Dask workers across a cluster. For easier debugging, I'd like to name each worker after the Swarm node it is running on.
The dask-worker command has a --name parameter; however, Docker's templating doesn't seem to work in the entrypoint or command options, e.g.
...
worker:
  image: myapp:latest
  restart: always
  entrypoint: ["dask-worker", "tcp://scheduler:8786", "--name", "{{.Node.Hostname}}"]
  deploy:
    mode: global
...
Unfortunately, the {{.Node.Hostname}} templating only appears to work in the environment section of a docker-compose.yml file. So my next option was to try to set it via an environment variable like this:
...
worker:
  image: myapp:latest
  restart: always
  entrypoint: ["dask-worker", "tcp://scheduler:8786"]
  environment:
    DASK_DISTRIBUTED__WORKER__NAME: "{{.Node.Hostname}}"
  deploy:
    mode: global
...
I've also been unsuccessful with this, as I assume the worker's name cannot be set via an environment variable - though I've not been able to find exhaustive documentation of all the environment-variable config names Dask supports, so it could be a typo or an incorrect guess.
Finally, I tried to take the environment variable and bring it back into the entrypoint command via shell substitution. This also did not work, as it appeared that the environment variable was not set at the time the command was evaluated:
...
worker:
  image: myapp:latest
  restart: always
  entrypoint: ["sh", "-c", "dask-worker tcp://scheduler:8786 --name $FOO"]
  environment:
    FOO: "{{.Node.Hostname}}"
  deploy:
    mode: global
...
I'm out of ideas at this point, and wondering if anyone could figure out a way.
Thanks.

You can use service template variables in service environment values and the hostname, and, if memory serves me right, inside volume declarations.
That said, wouldn't services.worker.hostname: "{{.Node.Hostname}}" do the trick? Also, your last attempt might work if you escape $FOO in your command. If I remember correctly, you need to escape the $ sign with another $ sign in front, so instead of $FOO, try $$FOO - otherwise Compose will apply variable substitution on the host when parsing the file, and NOT when executing the command in the container.
If that does not work, you can still write a small entrypoint script that wraps your command and uses the environment variable you declared.
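A minimal sketch of that wrapper approach, reusing the FOO variable from the question and a hypothetical entrypoint.sh baked into the image:

#!/bin/sh
# entrypoint.sh - FOO is filled in by Swarm's service template
# ("{{.Node.Hostname}}") before the container starts; fall back
# to the container hostname if it is empty.
exec dask-worker tcp://scheduler:8786 --name "${FOO:-$(hostname)}"

The service would then use entrypoint: ["sh", "/entrypoint.sh"] and keep the FOO: "{{.Node.Hostname}}" entry under environment, as in the last attempt above.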

Related

Passing env variables from docker-compose.yml to the client-side Next.js

This is quite silly, but I can't successfully pass my environment vars into my Next.js service (run with docker-compose up). Can anyone see the bug?
docker-compose.yml
services:
  ...
  nextjs-client:
    image: nextjs-client
    ports:
      - "3000:3000"
    environment:
      - NEXT_PUBLIC_API_HOST=192.168.0.9:8080
In my nextjs-client source code I try to access it with process.env.NEXT_PUBLIC_API_HOST, but it's undefined.
Your syntax looks fine.
Try to exec into the container and run printenv to see if the variable exists.
Needless to say, your code should be running in that same container.
If the variable exists, it may be a spelling issue: check the process.env reference against the docker environment declaration.
Also try docker-compose down to remove the container; it might be a caching issue.
It might be easier to maintain the environment variables with a .env file and docker compose --env-file .env, but that is not the problem here, just a tip.
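A quick way to run that check, using the service name from the question:

# list the environment inside the running container
docker compose exec nextjs-client printenv | grep NEXT_PUBLIC_API_HOST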

Expose docker port based on environment variable in compose

I am using Docker Compose to set up application environments. There are two distinct environments: test and production.
In the test environment, I need to expose additional ports (for debugging). These ports should remain closed in the production environment.
I would also like to use the same image and docker-compose.yml file. Using the same image is no problem, but I am struggling with the compose file. In it, I would like to open or close a port based on an environment variable.
The current setup is pretty much the standard, like this:
# ...
ports:
  - "8080:8080" # HTTP Server port
  - "9301:9301" # debug port
# ...
In this example, both ports are always exposed. Is it possible to expose the port 9301 only if a certain environment variable, say EXPOSE_DEBUG, is set?
You can use profiles or a second compose file.
services:
  app-prod: &app
    image: busybox
    profiles:
      - production
    ports:
      - 8080:8080
  app-dev:
    <<: *app
    profiles:
      - development
    ports:
      - 8080:8080
      - 9090:9090
Then you can select the profile with the command below, or with the COMPOSE_PROFILES environment variable.
docker compose --profile <profile-name> up
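The environment-variable form of the same command:

COMPOSE_PROFILES=development docker compose up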
Alternatively, you can use a second compose file and override the ports.
# compose.yaml
services:
  app:
    image: busybox
    ports:
      - 8080:8080

# compose.dev.yaml
services:
  app:
    ports:
      - 8080:8080
      - 9090:9090
Then you can use the file after the main file to patch it:
docker compose -f compose.yaml -f compose.dev.yaml up
The file(s) to use can also be controlled with an environment variable, COMPOSE_FILE.
If you name the file compose.override.yaml, Docker will pick it up automatically, so you don't have to point to it with the -f flag. If you choose this route, be careful not to add this file to your production system.
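The COMPOSE_FILE form, which joins the files with the path separator (: on Linux/macOS):

COMPOSE_FILE=compose.yaml:compose.dev.yaml docker compose up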
You could also bind the debug port to the loopback interface so that you can only access it locally.
ports:
  - 8080:8080
  - 127.0.0.1:9090:9090
The solution I usually use in my projects is a bash script that writes the docker-compose.yml based on the value of the environment variable. You could write it in any other programming language as well.
Conditional statements (if/else) are not supported in Docker Compose, so your options are:
1. Use additional software like jinja-compose, which adds Jinja2 logic to docker-compose
2. Use two different files (dc-dev.yml and dc-prod.yml) and pass them as an argument (docker compose -f)
3. Generate docker-compose.yml programmatically yourself
4. Use profiles (I was too slow, see the answer by the fool)
Just to maintain dev/prod environments, in my opinion solution 2 is the most efficient in terms of effort.
To follow your approach: you can set the port mappings via environment variables, e.g. in a .env file or by exporting them in your shell:

PORT1="8080:8080"
PORT2="9301:9301"

docker-compose.yml:

services:
  container1:
    ports:
      - ${PORT1}
      - ${PORT2}

But AFAIK there is no way to omit one of them.

Passing env variables from one service to another inside docker-compose.yml

Is there a way to pass environment variables from one service to the other inside docker-compose.yml ?
services:
  testService:
    environment:
      TEST_KEY: 1234
  testServiceTests:
    environment:
      TEST_KEY: I want to pull in the value 1234 here from testService
No.
However, there's an alternative. You can provide environment variables to all the services within the Compose file by exposing them either from your shell when you run Compose, or by using a special .env file; see the documentation.
Using this approach, you'd have a global (for the Compose file) environment variable, say GLOBAL_TEST_KEY (it needn't have a different name), and you'd be able to share it across multiple services:
services:
  testService:
    environment:
      TEST_KEY: ${GLOBAL_TEST_KEY}
  testServiceTests:
    environment:
      TEST_KEY: ${GLOBAL_TEST_KEY}
And then: docker-compose run -e GLOBAL_TEST_KEY="Some value" ....
Or, create a file called .env alongside docker-compose.yaml and, in .env:
GLOBAL_TEST_KEY="Some value"
And then: docker-compose run ...
NOTE: No need to reference .env, as it's included by default.

Docker-compose redis: start with fixture?

I am using docker-compose to create a redis container. However, I need it to start with some default key values. Is this possible?
You need to modify your Docker Compose file. You could also load the keys from a file containing key/value pairs, but here is the simplest example, which sets and gets a key directly in the Compose file.
version: '2'
services:
  redis:
    image: 'bitnami/redis:latest'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    ports:
      - '6379:6379'
    command:
      - /bin/sh
      - -c
      - |
        nohup redis-server &
        sleep 5
        echo "adding some default key value"
        redis-cli SET docker awesome
        echo "Get docker key value"
        redis-cli GET docker
        # this will keep the container running
        tail -f /dev/null
There are several approaches, but be aware that, by default, Docker Compose starts services in an arbitrary order, and even if you use depends_on, that only checks that containers are running (e.g. redis), not that they've completed some initialization process.
1. Easiest: Pre-create
See the option to run the redis image with persistent storage:
https://hub.docker.com/_/redis/
Using this approach, you'd either mount a local directory into the container's /data directory or create a (data) volume and use that. Then, you'd pre-populate the redis server by running the redis-cli against it.
One hack is to use your planned docker-compose.yml file but start only the Redis service: docker-compose --file=/path/to/docker-compose.yaml up redis, where redis is the name of the Redis service. You'll need to ensure the Redis service is accessible from the host (ports: 6379:6379, perhaps) so that the external redis-cli can reach it.
This approach works well for local-only use but does not facilitate deploying the solution elsewhere.
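A sketch of that pre-population flow, assuming the service is named redis, is published on the default port, and the key and value are illustrative:

# start only the redis service, detached
docker-compose --file=docker-compose.yml up -d redis
# seed it from the host; the data lands in the mounted volume
redis-cli -h 127.0.0.1 -p 6379 SET somekey somevalue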
2. Resilient: Test for keys
Docker Compose -- to my knowledge -- doesn't offer an elegant equivalent to Kubernetes' init containers which are run before the dependent container.
With Docker Compose, you could include an initialization (run-once) service that uses redis-cli to populate the server, but you must then augment any clients to check that this has completed, or check for the existence of this data, before starting (successfully).
The simplest solution for this is for the redis clients to fail and restart: always if the redis keys aren't present.
A more advanced solution would be to define a healthcheck for the existence of the Redis keys and then use depends_on with condition: service_healthy.
See also the Docker Compose documentation on startup order.
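A minimal sketch of such a healthcheck, assuming the seeded key is named docker as in the example above and a hypothetical client image:

services:
  redis:
    image: 'bitnami/redis:latest'
    healthcheck:
      # healthy only once the seeded key exists
      test: ["CMD-SHELL", "redis-cli EXISTS docker | grep -q 1"]
      interval: 5s
      retries: 10
  client:
    image: myclient:latest
    depends_on:
      redis:
        condition: service_healthy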

How to pass environment variables to docker-compose's applications

I want to pass environment variables that are readable by the applications spun up by docker-compose up.
What is the proper way of using docker-compose up with varying configuration settings?
I don't want to use .env and environment: config, as the environment variables change frequently and it is insecure to save tokens in a file.
docker-compose run -e works to a degree, but it loses a lot:
It does not map the ports that are defined for the services in docker-compose.yml.
Also, multiple services are defined in docker-compose.yml, and I don't want to have to rely on depends_on just because docker-compose up doesn't work.
Let's say I define a service in docker-compose.yml:

service-a:
  build:
    context: .
    dockerfile: DockerfileA
  command: node serviceA.js
In my serviceA.js, I simply use the environment variable:
console.log("This is ", process.env.KEY, "running in service A");
When I run docker-compose run -e KEY=DockerComposeRun service-a
I do get the environment variable KEY read by serviceA.js
This is DockerComposeRun running in service A
However, I could only get a single service running this way.
I could have used environment: in docker-compose.yml:

environment:
  - KEY=DockerComposeUp

But in my use case, each run would need different environment variable values, meaning I would have to edit the file each time before running docker-compose.
Also, more than one service uses the same environment variable (.env would even do a better job of that, but it is not desired).
There doesn't seem to be a way to do the same with docker-compose up.
I have tried KEY=DockerComposeUp docker-compose up, but what I get is undefined.
Exporting doesn't work for me either; everything I've found seems to be about using environment variables inside docker-compose.yml rather than passing them to the applications in the containers.
To safely pass sensitive configuration data to your containers, you can use Docker secrets. Everything passed through secrets is encrypted.
You can create and manage secrets using the commands below:
docker secret create
docker secret inspect
docker secret ls
docker secret rm
And use them in your docker-compose file, either referring to existing secrets (external) or use a file:
secrets:
  my_first_secret:
    file: ./secret_data
  my_second_secret:
    external: true
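For completeness, a sketch of how a service would then reference those secrets (the service name and image are illustrative); each secret shows up in the container as a file under /run/secrets/:

services:
  service-a:
    image: myapp:latest
    secrets:
      - my_first_secret
      - my_second_secret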
You can use environment like this:
service-a:
  build:
    context: .
    dockerfile: DockerfileA
  command: node serviceA.js
  environment:
    - KEY=DockerComposeRun
Refer to: https://docs.docker.com/compose/environment-variables/
