I have a docker stack file that is deployed across my swarm, which has many nodes.
For a specific reason one of the nodes (let's call it node A) has a connection to the outside world (the internet) and the others don't, so when a container is deployed on the other nodes I need to set the HTTP_PROXY environment variable.
The question is: how do I set this ONLY on the nodes with a specific label (and not on node A)?
docker-compose.yml
version: '3.6'
services:
  app:
    image: my_image
    ports:
      - "8033:8000"
    environment:
      - HTTP_PROXY=proxy.server.com:3128
      - HTTPS_PROXY=proxy.server.com:3128
    deploy:
      replicas: 10
      placement:
        constraints: [node.labels.app_server == app_server]
    volumes:
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
The only alternative I have found so far would be to deploy the stack with different variables and a placement constraint per variant, as sketched below, but I am trying to avoid that.
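For reference, the variant I am trying to avoid would be two near-identical services pinned to different node labels, roughly like this (the needs_proxy label name is illustrative):

services:
  app_proxied:
    image: my_image
    environment:
      - HTTP_PROXY=proxy.server.com:3128
      - HTTPS_PROXY=proxy.server.com:3128
    deploy:
      placement:
        constraints: [node.labels.needs_proxy == true]
  app_direct:
    image: my_image
    deploy:
      placement:
        constraints: [node.labels.needs_proxy != true]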
How about setting those environment variables on the necessary hosts and passing them through to your container, like so:
version: '3'
services:
  app:
    image: ubuntu
    environment:
      - HTTP_PROXY
      - HTTPS_PROXY
They will only be set in the container if they are also set in the host environment. See the Compose documentation on environment variables.
Example/More Info:
# When HTTP_PROXY is set in the host environment, value is passed through to the container.
$ HTTP_PROXY=test docker-compose run app env | grep -i proxy
Creating some-nodes-only_app_run ... done
HTTP_PROXY=test
# When HTTP_PROXY is not set in the host environment, nothing is set in container.
$ docker-compose run app env | grep -i proxy
Creating some-nodes-only_app_run ... done
You could also write an entrypoint script that sets the proxy when needed. I would recommend checking the container's connectivity and falling back to a proxy only if necessary, but if you want to decide based on the hostname you could use something like this:
entrypoint.sh
#!/bin/bash
PROXY_HOSTNAME=some-host
if [ -f /etc/host_hostname ]; then
    HOST_HOSTNAME=$(cat /etc/host_hostname)
    if [ "$HOST_HOSTNAME" = "$PROXY_HOSTNAME" ]; then
        echo "Setting fallback http proxy"
        export HTTP_PROXY=${FALLBACK_HTTP_PROXY}
    fi
fi
exec "$@"
Dockerfile
# test:latest
FROM ubuntu
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
docker-compose.yml
version: '3'
services:
  app:
    image: test:latest
    environment:
      - FALLBACK_HTTP_PROXY=test
    volumes:
      - /etc/hostname:/etc/host_hostname:ro
Example run:
$ docker-compose run app env | grep -i http_proxy
Creating some-nodes-only_app_run ... done
Setting fallback http proxy
FALLBACK_HTTP_PROXY=test
HTTP_PROXY=test
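For the connectivity-based check mentioned earlier, here is a rough sketch of an alternative entrypoint.sh; it assumes curl is installed in the image and that probing one well-known external URL is an acceptable test:

#!/bin/bash
# Probe direct connectivity; fall back to the proxy only when the probe fails.
if ! curl -fsS --max-time 5 https://www.google.com >/dev/null 2>&1; then
    echo "Setting fallback http proxy"
    export HTTP_PROXY=${FALLBACK_HTTP_PROXY}
    export HTTPS_PROXY=${FALLBACK_HTTP_PROXY}
fi
exec "$@"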
Related
I'm trying to generate multiple instances of the same git repository with a generic docker-compose.yml file and multiple .env files.
For this, somewhere in the code, I generate a temporary folder which contains:
.env:
APP_PORT="3000"
APP_NAME="app-name"
REPO_NAME="repo-name"
docker-compose.yml
version: '3.6'
services:
  web-app:
    image: golang:alpine
    environment:
      - APP_PORT
      - APP_NAME
      - REDIS_HOST=db-app
    ports:
      - ${APP_PORT}:${APP_PORT}
    volumes:
      - /opt/docker/repositories/${REPO_NAME}:/app
    command: sh -c "cd /app && go run ./"
  db-app:
    image: redis:alpine
Then running docker-compose config in this directory gives me the following output:
services:
  db-app:
    image: redis:alpine
  web-app:
    command: sh -c "cd /app && go run ./"
    environment:
      APP_NAME: app-name
      APP_PORT: '3000'
      REDIS_HOST: db-app
    image: golang:alpine
    ports:
      - published: 3000
        target: 3000
    volumes:
      - /opt/docker/repositories/repo-name:/app:rw
version: '3.6'
This not only interpolated the env variables, it also rewrote some fields: ports was expanded into the long published/target syntax, and a :rw was appended to my volume.
This is all done in Go, and when I try to unmarshal the output into a Go struct with yaml field tags, it is not recognized as a valid docker-compose file because of the ports field (which my struct expects to be an array of strings).
How can I make docker-compose config only replace ${APP_PORT} with its value, without adding these extra unwanted fields?
Reading the source code, I found this in the config types:
def legacy_repr(self):
    return normalize_port_dict(self.repr())
Which is the representation you need. So I searched for legacy_repr in the source code and found this:
if 'ports' in service_dict:
    service_dict['ports'] = [
        p.legacy_repr() if p.external_ip or version < VERSION else p
        for p in service_dict['ports']
    ]
So apparently, to trigger the use of the legacy representation, you either need an external IP address on the port or something to do with the version. I tried downgrading the version in the docker-compose.yaml file, but it didn't change anything (maybe it refers to the docker-compose CLI's version instead).
Reading the spec of the docker-compose config file, in the ports section, you can specify the IP address in the short syntax:
[HOST:]CONTAINER[/PROTOCOL] where:
HOST is [IP:](port | range)
CONTAINER is port | range
PROTOCOL restricts the port to the specified protocol; tcp and udp values are defined by the specification, and Compose implementations MAY offer support for platform-specific protocol names.
So a solution is to replace ${APP_PORT}:${APP_PORT} by:
0.0.0.0:${APP_PORT}:${APP_PORT}
By setting the external IP address to 0.0.0.0 you are not restricting anything and you force the use of the legacy representation.
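With that change, the relevant part of the docker-compose.yml above becomes:

services:
  web-app:
    ports:
      - 0.0.0.0:${APP_PORT}:${APP_PORT}

and docker-compose config then emits the port as a plain HOST:CONTAINER string instead of the published/target mapping.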
I am trying to write a Dockerfile and docker-compose.yml for a webapp that uses Elasticsearch. I have connected Elasticsearch to the webapp and exposed it to the host. However, before the webapp runs I need to create the Elasticsearch indices and fill them. I have two scripts to do this, data_scripts/createElasticIndex.js and data_scripts/parseGenesToElastic.js. I tried adding these to the Dockerfile with
CMD [ "node", "data_scripts/createElasticIndex.js"]
CMD [ "node", "data_scripts/parseGenesToElastic.js"]
CMD ["npm", "start"]
but after I run docker-compose up, no indices are created. How can I fill Elasticsearch before running the webapp?
Dockerfile:
FROM node:11.9.0
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY package*.json ./
# Install dependencies listed in package.json
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
#
RUN npm build
RUN npm i natives
# Bundle app source
COPY . .
# Make port 80 available to the world outside this container
EXPOSE 80
# Start the app when the container launches
CMD [ "node", "data_scripts/createElasticIndex.js"]
CMD [ "node", "data_scripts/parseGenesToElastic.js"]
CMD [ "node", "servers/PredictionServer.js"]
CMD [ "node", "--max-old-space-size=8192", "servers/PWAServerInMem.js"]
CMD ["npm", "start"]
docker-compose.yml:
version: "3"
services:
web:
# replace username/repo:tag with your name and image details
image: webapp
ports:
- "1337:1337"
- "4000:85"
depends_on:
- redis
- elasticsearch
networks:
- redis
- elasticsearch
volumes:
- "/data:/data"
environment:
- "discovery.zen.ping.unicast.hosts=elasticsearch"
- ELASTICSEARCH_URL=http://elasticsearch:9200"
- ELASTICSEARCH_HOST=elasticsearch
redis:
image: redis
networks:
- redis
ports:
- "6379:6379"
expose:
- "6379"
elasticsearch:
image: elasticsearch:2.4
ports:
- 9200:9200
- 9300:9300
expose:
- "9200"
- "9300"
networks:
- elasticsearch
networks:
redis:
driver: bridge
elasticsearch:
driver: bridge
A Docker container only ever runs one command. When your Dockerfile has multiple CMD lines, only the last one has any effect, and the rest are ignored. (ENTRYPOINT here is just a different way to provide the single command; if you specify both ENTRYPOINT and CMD then the entrypoint becomes the main process and the command is passed as arguments to it.)
Given the example you show, I'd run this in three steps:
Start only the database
docker-compose up -d elasticsearch
Run the "seed" jobs. For simplicity I'd probably run them locally
ELASTICSEARCH_URL=http://localhost:9200 node data_scripts/createElasticIndex.js
(using your physical host's name from the point of view of a script running directly on the physical host, and the published port from the container) but if you prefer you can also run them via the Docker setup
docker-compose run web node data_scripts/createElasticIndex.js
Once the database is set up, start your whole application
docker-compose up -d
This will leave the running Elasticsearch unaffected, and start the other containers.
An alternate pattern, if you're confident you want to run these "seed" or migration jobs on every single container start, is to write an entrypoint script. The basic pattern here is to start your server via CMD as you have it now, but to write a script that does the first-time setup and ends in exec "$@" to run the command, and make that script your container's ENTRYPOINT. This could look like:
#!/bin/sh
# I am entrypoint.sh
# Stop immediately if any of these scripts fail
set -e
# Run the migration/seed jobs
node data_scripts/createElasticIndex.js
node data_scripts/parseGenesToElastic.js
# Run the CMD / `docker run ...` command
exec "$#"
# I am Dockerfile
FROM node:11.9.0
...
COPY entrypoint.sh ./ # if not already copied
RUN chmod +x entrypoint.sh # if not already executable
ENTRYPOINT ["/app/entrypoint.sh"]
CMD ["npm", "start"]
Since the entrypoint script really is just a shell script, you can use arbitrary logic here, for instance only running the seed jobs when the main command is the server (if [ "$1" = npm ]; then ... fi) and skipping them for debugging shells (docker run --rm -it myimage bash).
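A minimal sketch of that conditional variant:

#!/bin/sh
set -e
# Only seed when the container is starting the real server,
# not for one-off debugging shells.
if [ "$1" = "npm" ]; then
    node data_scripts/createElasticIndex.js
    node data_scripts/parseGenesToElastic.js
fi
exec "$@"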
Your Dockerfile also looks like you might be trying to start three different servers (PredictionServer.js, PWAServerInMem.js, and whatever npm start starts); you can run these in three separate containers from the same image and specify the command: in each docker-compose.yml block.
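A sketch of that split, reusing the same image (service names are illustrative; the ports, networks, and the Elasticsearch/Redis services would carry over from your existing file):

version: "3"
services:
  web:
    image: webapp
    command: ["npm", "start"]
  prediction:
    image: webapp
    command: ["node", "servers/PredictionServer.js"]
  pwa:
    image: webapp
    command: ["node", "--max-old-space-size=8192", "servers/PWAServerInMem.js"]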
Your docker-compose.yml will be simpler if you remove the networks: (unless it's vital to you that your Elasticsearch and Redis can't talk to each other; it usually isn't) and the expose: declarations (which do nothing, especially in the presence of ports:).
I faced the same issue, and I started my journey using the same approach posted here.
I was redesigning some queries, which required frequent changes to the index settings and property mappings, plus changes to the dataset I was using as an example.
I searched for a docker image that I could easily add to my docker-compose file, which would let me change anything in either the index settings or the example dataset; then I could simply run docker-compose up and see the changes in my local Kibana.
I found nothing, and I ended up creating one on my own. So I'm sharing here because it could be an answer, plus I really hope to help someone else with the same issue.
You can use it as follows:
elasticsearch-seed:
  container_name: elasticsearch-seed
  image: richardsilveira/elasticsearch-seed
  environment:
    - ELASTICSEARCH_URL=http://elasticsearch:9200
    - INDEX_NAME=my-index
  volumes:
    - ./my-custom-index-settings.json:/seed/index-settings.json
    - ./my-custom-index-bulk-payload.json:/seed/index-bulk-payload.json
You simply mount your index settings file (which should contain both the index settings and the type mappings, as usual) and your bulk payload file, which should contain your example data.
There are more instructions at the elasticsearch-seed GitHub repository.
We could even use it in E2E and integration test scenarios running in our CI pipelines.
I am quite new to docker but am trying to use docker compose to run automation tests against my application.
I have managed to get docker compose to run my application and run my automation tests; however, at the moment my application runs on localhost, while I need it to run against a specific domain, example.com.
From my research into docker it seems you should be able to reach the application at a hostname by setting it within links, but I still don't seem to be able to.
Below is the code for my docker compose files...
docker-compose.yml
abc:
  build: ./
  command: run container-dev
  ports:
    - "443:443"
  expose:
    - "443"
docker-compose.automation.yml
tests:
  build: test/integration/
  dockerfile: DockerfileUIAuto
  command: sh -c "Xvfb :1 -screen 0 1024x768x16 &>xvfb.log && sleep 20 && DISPLAY=:1.0 && ENVIRONMENT=qa BASE_URL=https://example.com npm run automation"
  links:
    - abc:example.com
  volumes:
    - /tmp:/tmp/
and am using the following command to run...
docker-compose -p tests -f docker-compose.yml -f docker-compose.automation.yml up --build
Is there something I'm missing to map example.com to localhost?
If the two containers are on the same Docker internal network, Docker will provide a DNS service where one can talk to the other by just its container name. As you show this with two separate docker-compose.yml files it's a little tricky, because Docker Compose wants to isolate each file into its own separate mini-Docker world.
The first step is to explicitly declare a network in the "first" docker-compose.yml file. By default Docker Compose will automatically create a network for you, but you need to control its name so that you can refer to it from elsewhere. This means you need a top-level networks: block, and also to attach the container to the network.
version: '3.5'   # the top-level network "name:" key requires file format 3.5+
networks:
  abc:
    name: abc
services:
  abc:
    build: ./
    command: run container-dev
    ports:
      - "443:443"
    networks:
      abc:
        aliases:
          - example.com
Then in your test file, you can import that as an external network.
version: '3.5'
networks:
  abc:
    external: true
    name: abc
services:
  tests:
    build:
      context: test/integration/
      dockerfile: DockerfileUIAuto
    command: sh -c "Xvfb :1 -screen 0 1024x768x16 &>xvfb.log && sleep 20 && npm run automation"
    environment:
      DISPLAY: ":1.0"
      ENVIRONMENT: qa
      BASE_URL: "https://example.com"
    networks:
      - abc
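Once the first project is up and the abc network exists, you can sanity-check that the alias resolves from any container attached to that network, for example with a throwaway busybox container:

docker-compose up -d
docker run --rm --network abc busybox nslookup example.com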
Given the complexity of what you're showing for the "test" container, I would strongly consider running it not in Docker, or else writing a shell script that launches the X server, checks that it actually started, and then runs the test. The docker-compose.yml file isn't the only tool you have here.
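If you do keep it in Docker, here is a rough sketch of such a wrapper script (it assumes xdpyinfo is available in the image to probe the display):

#!/bin/sh
set -e
# Start the X server in the background and remember its PID.
Xvfb :1 -screen 0 1024x768x16 >xvfb.log 2>&1 &
XVFB_PID=$!
# Wait until the display actually accepts connections instead of sleeping blindly.
for i in $(seq 1 30); do
    if DISPLAY=:1.0 xdpyinfo >/dev/null 2>&1; then
        break
    fi
    sleep 1
done
# Run the tests, then clean up the X server.
DISPLAY=:1.0 npm run automation
kill $XVFB_PID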
I was wondering if there is a way to use environment variables taken from the host where the container is deployed, instead of the ones taken from where the docker stack deploy command is executed. For example imagine the following docker-compose.yml launched on three node Docker Swarm cluster:
version: '3.2'
services:
  kafka:
    image: wurstmeister/kafka
    ports:
      - target: 9094
        published: 9094
        protocol: tcp
        mode: host
    deploy:
      mode: global
    environment:
      KAFKA_JMX_OPTS: "-Djava.rmi.server.hostname=${JMX_HOSTNAME} -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.rmi.port=1099"
The JMX_HOSTNAME should be taken from the host where the container is actually deployed and should not be the same value for every container.
Is there a correct way to do this?
Yes, this works when you combine two concepts:
Swarm node attributes, of which the hostname is a built-in one.
Swarm service Go templates, which also work in stack files.
This sets the environment variable DUDE in each container to the hostname of the node it is running on:
version: '3.4'
services:
  nginx:
    image: nginx
    environment:
      DUDE: "{{.Node.Hostname}}"
    deploy:
      replicas: 3
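You can check the result after deploying (the stack name mystack is illustrative):

docker stack deploy -c docker-compose.yml mystack
# On any node, inspect one of the running containers:
docker exec "$(docker ps -q --filter name=mystack_nginx | head -n 1)" env | grep DUDE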
It works if you run the docker command through env.
env JMX_HOSTNAME="${JMX_HOSTNAME}" docker stack deploy -c docker-compose.yml mystack
Credit to the GitHub issue that pointed me in the right direction.
I found another way for when you have many environment variables. The same method also works with docker-compose up
sudo -E docker stack deploy -c docker-compose.yml mystack
instead of
env foo="${foo}" bar="${bar}" docker stack deploy -c docker-compose.yml mystack
The description of the -E flag from the sudo man page:
-E, --preserve-env
    Indicates to the security policy that the user wishes to preserve their existing environment variables. The security policy may return an error if the user does not have permission to preserve the environment.
On my image I want to set an environment variable, e.g. MY_VAR, with a static value, e.g. MY_VAR=12, but I do NOT want it to be settable via docker's -e param or via the environment section of docker-compose.yml.
Furthermore, I do not want it to be settable as a build argument when I run either docker build or docker-compose build.
How can I do that?
You can do that from an entrypoint script.
In your Dockerfile:
ENTRYPOINT ["/entrypoint.sh"]
Example entrypoint.sh:
#!/bin/sh
export VAR=foobar
exec /usr/bin/python "$@"
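With the unconditional export, a value passed with -e is simply overwritten by the entrypoint, which is exactly the behavior asked for. A quick check (the image name myimage is illustrative):

$ docker run --rm -e VAR=override myimage -c 'import os; print(os.environ["VAR"])'
foobar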
To be more flexible and allow setting it with the -e option:
export VAR=${VAR:-"foobar"}
...
The best solution for your question is to include an env_file in your docker-compose file:
version: '3.2'
services:
  db:
    restart: always
    image: postgres:alpine
    volumes:
      - backup-data:/var/lib/postgresql/data
    env_file:
      - ./env/.dev
volumes:
  backup-data:
Then in your env_file:
POSTGRES_USER=my_user
POSTGRES_PASSWORD=my_password
POSTGRES_DB=my_db