Docker Compose: passing parameters to set as environment variables of a Dockerfile

The following is my Dockerfile:
FROM openjdk:11.0.7-jre-slim
ARG HTTP_PORT \
NODE_NAME \
DEBUG_PORT \
JMX_PORT
ENV APP_ROOT=/root \
HTTP_PORT=$HTTP_PORT \
NODE_NAME=$NODE_NAME \
DEBUG_PORT=$DEBUG_PORT \
JMX_PORT=$JMX_PORT
ADD spring-boot-app.jar $APP_ROOT/spring-boot-app.jar
ADD Config $APP_ROOT/Config
ADD start.sh $APP_ROOT/start.sh
WORKDIR ${APP_ROOT}
CMD ["/root/start.sh"]
The contents of start.sh are as follows:
#!/bin/bash
java -Dnode.name=$NODE_NAME \
     -Dapp.port=$HTTP_PORT \
     -agentlib:jdwp=transport=dt_socket,address=$DEBUG_PORT,server=y,suspend=n \
     -Dcom.sun.management.jmxremote \
     -Dcom.sun.management.jmxremote.port=$JMX_PORT \
     -Dcom.sun.management.jmxremote.authenticate=false \
     -Dcom.sun.management.jmxremote.ssl=false \
     -jar spring-boot-app.jar
I am able to run the same image with different parameters as follows:
docker run -p 9261:9261 -p 65054:65054 -p 8080:8080 -itd --name=app-1 -e HTTP_PORT=8080 -e NODE_NAME=NODE1 -e DEBUG_PORT=9261 -e JMX_PORT=65054 my-image
docker run -p 9221:9221 -p 65354:65354 -p 8180:8180 -itd --name=app-2 -e HTTP_PORT=8180 -e NODE_NAME=NODE2 -e DEBUG_PORT=9221 -e JMX_PORT=65354 my-image
How can I achieve this with docker-compose? I have tried the following, but it is not working.
version: '3.1'
services:
  app-alpha:
    image: my-image
    environment:
      - HTTP_PORT:8080
      - NODE_NAME:NODE1
      - DEBUG_PORT:9261
      - JMX_PORT:65054
    ports:
      - 9261:9261
      - 65054:65054
      - 8080:8080
  app-beta:
    image: my-image
    environment:
      - HTTP_PORT:8180
      - NODE_NAME:NODE2
      - DEBUG_PORT:9221
      - JMX_PORT:65354
    ports:
      - 9221:9221
      - 65354:65354
      - 8180:8180

Use = instead of :, so your variables look like this:
environment:
  - HTTP_PORT=8080
  - NODE_NAME=NODE1
  - DEBUG_PORT=9261
  - JMX_PORT=65054
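For completeness, a minimal corrected service definition might look like the sketch below (only app-alpha is shown; app-beta follows the same pattern with its own ports and node name). The YAML mapping form (HTTP_PORT: 8080) would also work; only the list form requires =.
version: '3.1'
services:
  app-alpha:
    image: my-image
    environment:
      - HTTP_PORT=8080
      - NODE_NAME=NODE1
      - DEBUG_PORT=9261
      - JMX_PORT=65054
    ports:
      - 9261:9261
      - 65054:65054
      - 8080:8080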

Related

How to Pass Job Parameters to aws-glue-libs Docker Container?

I'm running and developing an AWS Glue job in a Docker container (https://gallery.ecr.aws/glue/aws-glue-libs), and I need to pass job parameters so that I can read them with getResolvedOptions, as in production. I also need to pass the --additional-python-modules job parameter to install some libraries.
I know I could use pip inside the container, but I want to keep it as close to production as possible. I also use docker-compose to run the container:
version: '3.9'
services:
  datacop:
    image: public.ecr.aws/glue/aws-glue-libs:glue_libs_4.0.0_image_01
    container_name: aws-glue
    tty: true
    ports:
      - 4040:4040
      - 18080:18080
    environment:
      - AWS_PROFILE=${AWS_PROFILE}
      - DISABLE_SSL=true
    volumes:
      - ~/.aws:/home/glue_user/.aws
      - ./workspace:/home/glue_user/workspace
I don't use docker-compose, but docker run, and I simply add the job parameters last on the spark-submit command line.
glue_main.py is the script I want to execute. --JOB_NAME <some name> is also required if it is used inside the script, which it usually is.
/home/glue_user/spark/bin/spark-submit glue_main.py --foo bar --baz 123 --JOB_NAME foo_job
This is my full command:
docker run --rm -i \
-e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
-e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
-e AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN \
-e AWS_REGION=${AWS_REGION} \
-e AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION \
-p 4040:4040 \
-e DISABLE_SSL="true" \
-v $(pwd)/${LOCAL_DEPLOY_DIR}:${REMOTE_DEPLOY_DIR} \
--workdir="${REMOTE_DEPLOY_DIR}" \
--entrypoint "/bin/bash" \
amazon/aws-glue-libs:glue_libs_4.0.0_image_01 \
/home/glue_user/spark/bin/spark-submit ${SCRIPT_NAME} --py-files site-packages.zip ${JOB_ARGS}
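If you would rather keep docker-compose, the same approach can be sketched with an entrypoint/command override; this is untested, and the script name, working directory and job arguments below are placeholders mirroring the run command above:
version: '3.9'
services:
  datacop:
    image: public.ecr.aws/glue/aws-glue-libs:glue_libs_4.0.0_image_01
    container_name: aws-glue
    ports:
      - 4040:4040
      - 18080:18080
    environment:
      - AWS_PROFILE=${AWS_PROFILE}
      - DISABLE_SSL=true
    volumes:
      - ~/.aws:/home/glue_user/.aws
      - ./workspace:/home/glue_user/workspace
    working_dir: /home/glue_user/workspace
    # Override the image entrypoint so the container runs spark-submit directly,
    # with job parameters passed after the script name.
    entrypoint: ["/home/glue_user/spark/bin/spark-submit"]
    command: ["glue_main.py", "--JOB_NAME", "foo_job", "--foo", "bar"]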

What is the equivalent of -h in docker-compose?

I want to convert a docker run command to docker-compose. What is the equivalent of the -h parameter in docker-compose?
My docker run command:
docker run --rm -p 8080:80/tcp -p 1935:1935 -p 3478:3478 \
  -p 3478:3478/udp bigbluebutton -h webinar.mydomain.com
My docker-compose.yml:
version: "3"
services:
bigbluebutton:
build: .
container_name: "bigbluebutton"
restart: unless-stopped
ports:
- 1935:1935
- 3478:3478
- 3478:3478/udp
- 8080:80
networks:
public:
networks:
public:
external:
name: public
Anything that appears after the image name in docker run becomes the Compose command:.
docker run \
--rm -p 8080:80/tcp -p 1935:1935 \ # Docker options
-p 3478:3478 -p 3478:3478/udp \ # More Docker options
bigbluebutton \ # Image name
-h webinar.mydomain.com # Command
services:
  bigbluebutton:
    build: .
    command: -h webinar.mydomain.com
    ports: ['8080:80', '1935:1935', '3478:3478', '3478:3478/udp']
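As an aside, if the -h had been intended as Docker's own --hostname flag (i.e. placed before the image name), the Compose equivalent would be the hostname: key rather than command::
services:
  bigbluebutton:
    build: .
    hostname: webinar.mydomain.com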

InfluxDB on Docker-Compose can't read SSL cert file

I'm having some trouble trying to configure SSL for InfluxDB v1.8 running on Docker Compose.
I followed the official documentation to enable HTTPS with a self-signed certificate, but the container crashes with the following error:
run: open server: open service: open "/etc/ssl/influxdb-selfsigned.crt": no such file or directory
It works if I run the same configuration with a docker run command:
docker run -p 8086:8086 -v $PWD/ssl:/etc/ssl \
-e INFLUXDB_DB=db0 \
-e INFLUXDB_ADMIN_USER=admin \
-e INFLUXDB_ADMIN_PASSWORD=supersecretpassword \
-e INFLUXDB_HTTP_HTTPS_ENABLED=true \
-e INFLUXDB_HTTP_HTTPS_CERTIFICATE="/etc/ssl/influxdb-selfsigned.crt" \
-e INFLUXDB_HTTP_HTTPS_PRIVATE_KEY="/etc/ssl/influxdb-selfsigned.key" \
-d influxdb
My docker-compose.yml is the following:
version: "3"
services:
influxdb:
image: influxdb
ports:
- "8086:8086"
volumes:
- influxdb:/var/lib/influxdb
- ./ssl:/etc/ssl/
environment:
- INFLUXDB_DB=db0
- INFLUXDB_ADMIN_USER=admin
- INFLUXDB_ADMIN_PASSWORD=supersecretpassword
- INFLUXDB_HTTP_HTTPS_ENABLED=true
- INFLUXDB_HTTP_HTTPS_CERTIFICATE="/etc/ssl/influxdb-selfsigned.crt"
- INFLUXDB_HTTP_HTTPS_PRIVATE_KEY="/etc/ssl/influxdb-selfsigned.key"
- INFLUXDB_HTTP_AUTH_ENABLED=true
volumes:
influxdb:
If I set INFLUXDB_HTTP_HTTPS_ENABLED to false, I can see that the cert and key files are mounted as they should be in /etc/ssl inside the container (docker exec -it airq_influxdb_1 ls -la /etc/ssl).
Do you have any idea why this is happening and how to solve it?
The environment variables passed in docker-compose.yml are already strings, so you don't need the quotes.
InfluxDB ends up looking for the certificate under "/etc/ssl/influxdb-selfsigned.crt", quotes included, literally.
Simply remove the quotes and the database will start:
...
- INFLUXDB_HTTP_HTTPS_ENABLED=true
- INFLUXDB_HTTP_HTTPS_CERTIFICATE=/etc/ssl/influxdb-selfsigned.crt
- INFLUXDB_HTTP_HTTPS_PRIVATE_KEY=/etc/ssl/influxdb-selfsigned.key
...
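This is general Compose/YAML behaviour rather than anything InfluxDB-specific: with the list form (- KEY=value) everything after the = is taken verbatim, quotes included, whereas with the mapping form the quotes belong to YAML and are stripped. A small illustration (CERT_PATH is just an example name):
environment:
  - CERT_PATH="/etc/ssl/influxdb-selfsigned.crt"   # value contains the literal quotes
environment:
  CERT_PATH: "/etc/ssl/influxdb-selfsigned.crt"    # YAML strips the quotes; value is the plain path
You can check what the container actually sees with docker exec <container> env.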

How to convert a docker run -it bash command into a docker-compose?

Given the following command:
docker run -dit -p 9080:9080 -p 9443:9443 -p 2809:2809 -p 9043:9043 --name container_name --net=host myimage:latest bash
How do I convert it into an equivalent docker-compose.yml file?
In docker-compose, the -it flags are reflected by the following:
tty: true
stdin_open: true
The equivalent of docker run --net=host is this:
services:
  web:
    ...
    networks:
      hostnet: {}
networks:
  hostnet:
    external: true
    name: host
So your final docker-compose should look like this:
version: '3'
services:
  my_name:
    image: myimage:latest
    container_name: my_name
    ports:
      - "9080:9080"
      - "9443:9443"
      - "2809:2809"
      - "9043:9043"
    command: bash
    tty: true
    stdin_open: true
    networks:
      hostnet: {}
networks:
  hostnet:
    external: true
    name: host
Compose file version 3 reference
Last but not least, if you want to run it in detached mode, just add the -d flag to the docker-compose command:
docker-compose up -d
You can’t directly. Docker Compose will start up some number of containers that are expected to run more or less autonomously, and there’s no way to start typing commands into one of them. (What would you do if you had multiple containers that you wanted to start that were all just trying to launch interactive bash sessions?)
A better design would be to set up your Docker image so that its default CMD launched the actual command you were trying to run.
FROM some_base_image:x.y
COPY ...
CMD myapp.sh
Then you should be able to run
docker run -d \
-p 9080:9080 \
-p 9443:9443 \
-p 2809:2809 \
-p 9043:9043 \
--name container_name \
myimage:latest
and your application should start up on its own, successfully, with no user intervention. That’s something you can translate directly into Docker Compose syntax and it will work as expected.
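With the default command baked into the image like that, a Compose translation of the docker run above could be as simple as the following sketch (the service name myapp is arbitrary):
version: '3'
services:
  myapp:
    image: myimage:latest
    container_name: container_name
    ports:
      - "9080:9080"
      - "9443:9443"
      - "2809:2809"
      - "9043:9043"
and docker-compose up -d then starts it detached.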

Build from linuxserver/deluge

I'd like to be able to use a Dockerfile with the linuxserver/deluge image, but I'm unsure of the correct way to do this in a docker-compose.yaml file.
docker create \
--name=deluge \
--net=host \
-e PUID=1001 \
-e PGID=1001 \
-e UMASK_SET=<022> \
-e TZ=<timezone> \
-v </path/to/deluge/config>:/config \
-v </path/to/your/downloads>:/downloads \
--restart unless-stopped \
linuxserver/deluge
Can someone please help me convert this so that I can use a Dockerfile?
Thanks :)
The following docker-compose.yml file is equivalent to your command:
version: "3"
services:
deluge:
container_name: deluge
image: linuxserver/deluge
environment:
- PUID=1001
- PGID=1001
- UMASK_SET=<022>
- TZ=<timezone>
volumes:
- </path/to/deluge/config>:/config
- </path/to/your/downloads>:/downloads
restart: unless-stopped
network_mode: host
The documentation is a great place to find the mapping between docker options and docker-compose syntax. Here is a recap of what has been used in this example:
--name => container_name
-e => environment (array of key=value)
-v => volumes (array of volume_or_folder_on_host:/path/inside/container)
--restart <policy> => restart: <policy>
--net=xxxx => network_mode
You can now run docker-compose up to start all your services (only deluge here) instead of your docker create / docker run commands.
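If the goal is really to build from your own Dockerfile on top of that image (as the question title suggests), one possible sketch is to base a Dockerfile on linuxserver/deluge and point Compose at it with build: instead of image:; the Dockerfile contents here are only a placeholder:
# Dockerfile
FROM linuxserver/deluge
# add your own customisations here (extra packages, config files, ...)
version: "3"
services:
  deluge:
    build: .
    container_name: deluge
    # environment, volumes, restart and network_mode as in the file above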
