How to Pass Job Parameters to aws-glue-libs Docker Container?

I'm running and developing an AWS Glue job in a Docker container (https://gallery.ecr.aws/glue/aws-glue-libs), and I need to pass job parameters so that I can catch them using getResolvedOptions, just like in production. Besides that, I also need to pass the --additional-python-modules job parameter to install some libraries.
I know I could use pip inside the container, but I want to keep things as similar to production as possible. I also use docker-compose to run the container:
version: '3.9'
services:
  datacop:
    image: public.ecr.aws/glue/aws-glue-libs:glue_libs_4.0.0_image_01
    container_name: aws-glue
    tty: true
    ports:
      - 4040:4040
      - 18080:18080
    environment:
      - AWS_PROFILE=${AWS_PROFILE}
      - DISABLE_SSL=true
    volumes:
      - ~/.aws:/home/glue_user/.aws
      - ./workspace:/home/glue_user/workspace
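One way to keep using docker-compose is to override the container's command so it invokes spark-submit with the same flags the job would receive in production. A sketch, assuming the script lives at workspace/glue_main.py and that --MY_PARAM is a hypothetical job parameter of yours:

```yaml
version: '3.9'
services:
  datacop:
    image: public.ecr.aws/glue/aws-glue-libs:glue_libs_4.0.0_image_01
    container_name: aws-glue
    tty: true
    environment:
      - AWS_PROFILE=${AWS_PROFILE}
      - DISABLE_SSL=true
    volumes:
      - ~/.aws:/home/glue_user/.aws
      - ./workspace:/home/glue_user/workspace
    # Pass job parameters exactly as Glue would put them on argv:
    command: >
      /home/glue_user/spark/bin/spark-submit
      /home/glue_user/workspace/glue_main.py
      --JOB_NAME local_job
      --additional-python-modules "some-library==1.0.0"
      --MY_PARAM some_value
```

One caveat: --additional-python-modules is interpreted by the Glue service when it launches the job, not by spark-submit itself, so in the local container the flag only makes argv look like production; the modules themselves still need to be installed with pip (or baked into the image).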

I don't use docker-compose, but docker run. I just add the params last in the spark-submit command.
glue_main.py is the script I want to execute. --JOB_NAME <some name> is also required if it is used inside the script, which it usually is.
/home/glue_user/spark/bin/spark-submit glue_main.py --foo bar --baz 123 --JOB_NAME foo_job
This is my full command:
docker run --rm -i \
-e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
-e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
-e AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN \
-e AWS_REGION=${AWS_REGION} \
-e AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION \
-p 4040:4040 \
-e DISABLE_SSL="true" \
-v $(pwd)/${LOCAL_DEPLOY_DIR}:${REMOTE_DEPLOY_DIR} \
--workdir="${REMOTE_DEPLOY_DIR}" \
--entrypoint "/bin/bash" \
amazon/aws-glue-libs:glue_libs_4.0.0_image_01 \
/home/glue_user/spark/bin/spark-submit ${SCRIPT_NAME} --py-files site-packages.zip ${JOB_ARGS}
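Inside the script, getResolvedOptions (from awsglue.utils) is what picks these flags up from sys.argv. The toy parser below is not the real Glue implementation, just a minimal sketch of the --KEY value lookup it performs, so you can see why appending the parameters after the script name in spark-submit is enough:

```python
# Sketch of how a Glue job script receives parameters. In a real job:
#   from awsglue.utils import getResolvedOptions
#   args = getResolvedOptions(sys.argv, ["JOB_NAME", "foo", "baz"])
# Conceptually, getResolvedOptions pulls --KEY value pairs out of argv:

def resolve_options(argv, option_names):
    """Return {name: value} for each --name value pair found in argv."""
    resolved = {}
    for name in option_names:
        flag = "--" + name
        if flag not in argv:
            raise KeyError(f"argument {flag} is required")
        # The value is the token immediately following the flag.
        resolved[name] = argv[argv.index(flag) + 1]
    return resolved

# Simulated argv, as produced by:
#   spark-submit glue_main.py --foo bar --baz 123 --JOB_NAME foo_job
argv = ["glue_main.py", "--foo", "bar", "--baz", "123", "--JOB_NAME", "foo_job"]
args = resolve_options(argv, ["JOB_NAME", "foo", "baz"])
print(args["JOB_NAME"])  # foo_job
```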

Related

How to write the correct syntax for --user in a docker-compose file

With influxdb2 and telegraf Docker containers, I want to read some values from a device via Modbus TCP. For that I use the telegraf Modbus plugin.
When I start telegraf with the docker run command:
docker run -d --name=telegraf \
-v $(pwd)/telegraf.conf:/etc/telegraf/telegraf.conf \
-v /var/run/docker.sock:/var/run/docker.sock \
--net=influxdb-net \
--user telegraf:$(stat -c '%g' /var/run/docker.sock) \
--env INFLUX_TOKEN=EcoDMFzGnFkeCLsHiyoaTA-m3VXHl_RG7QqYt6Wt7D5Bdq6Bk9BQlmdO2S47OXaOA-wIz2dLu1aebiZCf2JmFQ== \
telegraf
Everything is ok, I get my device values in influxdb dashboard.
Now I want to use a docker-compose.yml file.
I have a problem with the following command part:
--user telegraf:$(stat -c '%g' /var/run/docker.sock)
My yml file:
telegraf:
  image: telegraf:latest
  container_name: telegraf2
  volumes:
    - ./telegraf/telegraf.conf:/etc/telegraf/telegraf.conf
    - /var/run/docker.sock:/var/run/docker.sock
  environment:
    INFLUX_TOKEN: Lweb-ZjlKzpA6VFSPqNC5CLy86ntIlvGbqMGUvIS1zrA==
  user: telegraf$("stat -c '%g' /var/run/docker.sock")
When I run the command docker-compose up -d, I get an error:
Error response from daemon: unable to find user telegraf$("stat -c '%g' /var/run/docker.sock"): no matching entries in passwd file
Can you tell me where my mistake is? Why does it work with the first method and not with the second?
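The difference is that $( … ) is shell command substitution: it works in the docker run command because your shell expands it before Docker ever sees it, but docker-compose does not run YAML values through a shell, so the literal string telegraf$("stat …") is sent as the user name. A hedged workaround is to resolve the GID on the host and hand it in through Compose variable substitution, which Compose does support (DOCKER_GID is an assumed variable name here):

```yaml
# docker-compose.yml (sketch)
telegraf:
  image: telegraf:latest
  # Compose substitutes ${DOCKER_GID} from the host environment or a .env file;
  # it does NOT execute $( … ) command substitution.
  user: "telegraf:${DOCKER_GID}"
```

Then start it with the GID computed on the host, e.g. DOCKER_GID=$(stat -c '%g' /var/run/docker.sock) docker-compose up -d, or put DOCKER_GID=... in a .env file next to the compose file.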

Docker compose passing parameters to set as environment variables of Dockerfile

The following is my Dockerfile
FROM openjdk:11.0.7-jre-slim
ARG HTTP_PORT \
NODE_NAME \
DEBUG_PORT \
JMX_PORT
ENV APP_ROOT=/root \
HTTP_PORT=$HTTP_PORT \
NODE_NAME=$NODE_NAME \
DEBUG_PORT=$DEBUG_PORT \
JMX_PORT=$JMX_PORT
ADD spring-boot-app.jar $APP_ROOT/spring-boot-app.jar
ADD Config $APP_ROOT/Config
ADD start.sh $APP_ROOT/start.sh
WORKDIR ${APP_ROOT}
CMD ["/root/start.sh"]
Contents of start.sh as follows:
#!/bin/bash
java -Dnode.name=$NODE_NAME -Dapp.port=$HTTP_PORT -agentlib:jdwp=transport=dt_socket,address=$DEBUG_PORT,server=y,suspend=n -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=$JMX_PORT -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -jar spring-boot-app.jar
I am able to run the same image with different params as follows:
docker run -p 9261:9261 -p 65054:65054 -p 8080:8080 -itd --name=app-1 -e HTTP_PORT=8080 -e NODE_NAME=NODE1 -e DEBUG_PORT=9261 -e JMX_PORT=65054 my-image
docker run -p 9221:9221 -p 65354:65354 -p 8180:8180 -itd --name=app-2 -e HTTP_PORT=8180 -e NODE_NAME=NODE2 -e DEBUG_PORT=9221 -e JMX_PORT=65354 my-image
How to achieve this using docker-compose? I have tried the following but it is not working.
version: '3.1'
services:
  app-alpha:
    image: my-image
    environment:
      - HTTP_PORT:8080
      - NODE_NAME:NODE1
      - DEBUG_PORT:9261
      - JMX_PORT:65054
    ports:
      - 9261:9261
      - 65054:65054
      - 8080:8080
  app-beta:
    image: my-image
    environment:
      - HTTP_PORT:8180
      - NODE_NAME:NODE2
      - DEBUG_PORT:9221
      - JMX_PORT:65354
    ports:
      - 9221:9221
      - 65354:65354
      - 8180:8180
Use = instead of :, so your variables look like:
environment:
  - HTTP_PORT=8080
  - NODE_NAME=NODE1
  - DEBUG_PORT=9261
  - JMX_PORT=65054
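Applied to one of the services above, the corrected block would look like this sketch:

```yaml
app-alpha:
  image: my-image
  environment:
    # = assigns a value; with : the whole string becomes the variable name
    - HTTP_PORT=8080
    - NODE_NAME=NODE1
    - DEBUG_PORT=9261
    - JMX_PORT=65054
  ports:
    - 9261:9261
    - 65054:65054
    - 8080:8080
```

With the list syntax (- KEY=value), a colon makes Compose treat "HTTP_PORT:8080" as a variable named literally that, so start.sh sees the real variables as empty.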

Drone.io does not trigger on git push

I am trying to add a dockerized drone.io alongside my existing gitea (also in a Docker container).
Drone is working and sees each of my repos. I enabled drone on one of them, called my-app, for the test.
As drone needs a file called .drone.yml, I created one and filled it with some basic code to use a pipeline and start some tests:
kind: pipeline
name: default
steps:
- name: test
  image: maven:3-jdk-10
  commands:
  - mvn install
  - mvn test
Finally I pushed it, but nothing seems to happen on drone.
Here is how I started my containers:
docker run \
--volume=/var/run/docker.sock:/var/run/docker.sock \
--volume=data:/data \
--env=DRONE_GITEA_SERVER=https://... \
--env=DRONE_GIT_ALWAYS_AUTH=false \
--env=DRONE_RUNNER_CAPACITY=2 \
--env VIRTUAL_PORT=80 \
--env VIRTUAL_HOST=my.domain \
--env LETSENCRYPT_HOST="my.domain" \
--env LETSENCRYPT_EMAIL="me@email.com" \
--restart=always \
--detach=true \
--name=drone \
drone/drone:1
docker run --name git -v /home/leix/gitea:/data -e VIRTUAL_PORT=3000 -e VIRTUAL_HOST=other.domain -e LETSENCRYPT_HOST="other.domain" -e LETSENCRYPT_EMAIL="me@email.com" -d gitea/gitea
I expect drone to run the tests on git push.
I finally found a solution, though I don't know exactly why it works: I used Docker Compose instead of docker run, and it works pretty well.
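A compose file equivalent to the two docker run commands might look like the sketch below (environment values reused from above, the elided DRONE_GITEA_SERVER left as-is). One plausible reason Compose helped is that both services end up on the same Compose-created network, so webhook traffic between gitea and drone can resolve and reach the other container:

```yaml
version: '3'
services:
  drone:
    image: drone/drone:1
    restart: always
    environment:
      - DRONE_GITEA_SERVER=https://...
      - DRONE_GIT_ALWAYS_AUTH=false
      - DRONE_RUNNER_CAPACITY=2
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - data:/data
  gitea:
    image: gitea/gitea
    volumes:
      - /home/leix/gitea:/data
volumes:
  data:
```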

Build from linuxserver/deluge

I'd like to be able to use a Dockerfile with the linuxserver/deluge image, but I'm unsure of the correct way to do this in a docker-compose.yaml file.
docker create \
--name=deluge \
--net=host \
-e PUID=1001 \
-e PGID=1001 \
-e UMASK_SET=<022> \
-e TZ=<timezone> \
-v </path/to/deluge/config>:/config \
-v </path/to/your/downloads>:/downloads \
--restart unless-stopped \
linuxserver/deluge
Can someone help me convert this so that I can use a Dockerfile?
Thanks :)
The following docker-compose.yml file is similar to your command:
version: "3"
services:
  deluge:
    container_name: deluge
    image: linuxserver/deluge
    environment:
      - PUID=1001
      - PGID=1001
      - UMASK_SET=<022>
      - TZ=<timezone>
    volumes:
      - </path/to/deluge/config>:/config
      - </path/to/your/downloads>:/downloads
    restart: unless-stopped
    network_mode: host
Documentation is a great place to find the mapping between docker options and docker-compose syntax. Here is a recap of what has been used for this example:
--name => container_name
-e => environment (array of key=value)
-v => volumes (array of volume_or_folder_on_host:/path/inside/container)
--restart <policy> => restart: <policy>
--net=xxxx => network_mode
You can now run docker-compose up to start all your services (only deluge here) instead of your docker run command.
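If the goal really is a Dockerfile (for example, to bake extra tools into the image) rather than only a compose file, one option is to base a Dockerfile on the image and point Compose at it with build:. This is a sketch under that assumption; the RUN line is a hypothetical example of a customization:

```dockerfile
# Dockerfile (sketch): extend the linuxserver/deluge image
FROM linuxserver/deluge
# Add whatever customization you need, e.g. (assumed example):
# RUN apk add --no-cache curl
```

Then in the compose file, replace image: linuxserver/deluge with build: . (or keep both: image: names the built image, build: tells Compose how to build it), and docker-compose up --build will rebuild and start the service.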

convert docker run to docker-compose.yml special args

I have the following command that I need to convert to docker-compose:
docker run \
-p 993:993 \
-p 587:587 \
-v /home/vmail:/home/vmail \
-e MAILNAME="somedomain.com" \
-v /etc/postfix \
-v /etc/dovecot \
-v /etc/ssl \
-v /etc/opendkim \
-v /var/log/container:/var/log \
email \
--email youremail@somedomain.com
How do I pass the --email arg to the ENTRYPOINT using docker-compose?
Docker Compose has an entrypoint property which you can use:
...
entrypoint:
  - --email=youremail@somedomain.com
You could use:
command: my_app --email youremail@somedomain.com
I need the same thing, specifically how to map ports in compose the way docker run -p does. I still want to use docker-compose up to start it, though.
I think the ports option on docs page is the answer.
Add this to your compose yaml file:
ports:
  - "127.0.0.1:<host-port>:<container-port>"
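Putting the pieces together, a docker-compose.yml sketch for the original docker run command might look like this (assuming the image is named email and that --email is an argument consumed by its entrypoint; arguments after the image name in docker run map to command: in Compose):

```yaml
version: '3'
services:
  email:
    image: email
    # Everything after the image name in `docker run` becomes `command:`;
    # it is appended to the image's ENTRYPOINT.
    command: ["--email", "youremail@somedomain.com"]
    environment:
      - MAILNAME=somedomain.com
    ports:
      - "993:993"
      - "587:587"
    volumes:
      - /home/vmail:/home/vmail
      - /etc/postfix
      - /etc/dovecot
      - /etc/ssl
      - /etc/opendkim
      - /var/log/container:/var/log
```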
