How to run `args` as `command` in kubernetes - docker

I have a Python script I want to run in a Kubernetes Job. I have used a ConfigMap to mount it into the container, located for example at dir/script.py.
The container normally runs with `args: ["load"]`.
I have tried using a postStart lifecycle hook in the Job manifest, but it does not appear to run.
lifecycle:
  preStop:
    exec:
      command:
        - /bin/sh
        - -c
        - /usr/bin/python /opt/config-init/db/tls_generator.py
Below is a snippet of the manifest:
containers:
  - name: {{ template "gluu.name" . }}-load
    image: gluufederation/config-init:4.0.0_dev
    lifecycle:
      preStop:
        exec:
          command:
            - /bin/sh
            - -c
            - /usr/bin/python /opt/config-init/db/tls_generator.py
    volumeMounts:
      - mountPath: /opt/config-init/db/
        name: {{ template "gluu.name" . }}-config
      - mountPath: /opt/config-init/db/generate.json
        name: {{ template "gluu.fullname" . }}-mount-gen-file
        subPath: generate.json
      - mountPath: /opt/config-init/db/tls_generator.py
        name: {{ template "gluu.fullname" . }}-tls-script
    envFrom:
      - configMapRef:
          name: {{ template "gluu.fullname" . }}-config-cm
    args: [ "load" ]
How can I run the tls_generator.py script after the load argument has run?
The relevant part of the Dockerfile looks like:
ENTRYPOINT ["tini", "-g", "--", "/app/scripts/entrypoint.sh"]
CMD ["--help"]

You are using Container Lifecycle Hooks, specifically PreStop.
This hook is called immediately before a container is terminated due to an API request or a management event such as a liveness probe failure, preemption, resource contention, and others.
If you want to execute a command when a pod is starting, you should consider using PostStart.
This hook executes immediately after a container is created. However, there is no guarantee that the hook will execute before the container ENTRYPOINT. No parameters are passed to the handler.
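For the manifest in the question, a postStart variant would be a sketch like the following; note the caveat above, which means the hook may race with the container's ENTRYPOINT:

```yaml
lifecycle:
  postStart:
    exec:
      command:
        - /bin/sh
        - -c
        - /usr/bin/python /opt/config-init/db/tls_generator.py
```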
Another option would be using Init Containers; here are a few ideas with examples:
Wait for a Service to be created, using a shell one-line command like:
for i in {1..100}; do sleep 1; if dig myservice; then exit 0; fi; done; exit 1
Register this Pod with a remote server from the downward API with a command like:
curl -X POST http://$MANAGEMENT_SERVICE_HOST:$MANAGEMENT_SERVICE_PORT/register -d 'instance=$(<POD_NAME>)&ip=$(<POD_IP>)'
Wait for some time before starting the app container with a command like
sleep 60
Please read the documentation on how to use Init containers for more details.
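Any of the one-liners above goes into the command of an initContainers entry; a sketch (the busybox image and the service name myservice are assumptions, and busybox ships nslookup rather than dig):

```yaml
initContainers:
  - name: wait-for-service
    image: busybox:1.36
    command:
      - sh
      - -c
      - "for i in $(seq 1 100); do sleep 1; if nslookup myservice; then exit 0; fi; done; exit 1"
```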

My end goal was to run tls_generator.py right after the load command completed. This is what I came up with, and it works fine.
command: ["/bin/sh", "-c"]
args: ["tini -g -- /app/scripts/entrypoint.sh load && /usr/bin/python /scripts/tls_generator.py"]
In this case the default argument to "tini -g -- /app/scripts/entrypoint.sh" would be --help, but adding load passes it as the command instead.
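The && chaining relies on shell exit-status semantics: the second command runs only if the first exits 0. A minimal self-contained sketch, with hypothetical stand-in functions for the load step and the script:

```shell
# Stand-ins (hypothetical) for "entrypoint.sh load" and tls_generator.py:
load_step()     { echo "load finished"; }
tls_generator() { echo "running tls_generator"; }

# Same shape as the manifest's args line: the second command
# runs only when the first succeeds.
load_step && tls_generator
```

If the load step exited non-zero, the Python script would be skipped and the container would exit with that status.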

Related

Is it possible to wrap a docker image, and do works before and after it's origin operations?

I'm trying to use sonarsource/sonar-scanner-cli as a kubernetes container, so I do this in a yaml:
- name: "sonarqube-scan-{{ .Values.git.commitID }}"
  image: "{{ .Values.harbor.host }}/{{ .Values.harbor.cache }}/sonarsource/sonar-scanner-cli"
  env:
    # Sonarqube's URL
    - name: SONAR_HOST_URL
      valueFrom:
        secretKeyRef:
          name: sonarqube
          key: sonar-url
    # Auth token of sonarqube's bot
    - name: SONAR_LOGIN
      valueFrom:
        secretKeyRef:
          name: sonar-bot
          key: sonar-token
  volumeMounts:
    - mountPath: /usr/src
      name: initrepo
Now, I want to do some pre-setup before the regular sonarsource/sonar-scanner-cli run, and parse the container's output for some other work. If this were a shell script, I would do something like:
before.sh
sonar-scanner-cli.sh | after.sh
I guess I could build a new Docker image that is FROM sonarsource/sonar-scanner-cli and put the processing from before.sh and after.sh into its run script, but I don't know how to call the original sonarsource/sonar-scanner-cli run commands. What are the actual commands?
Or, alternatively, does kubernetes have a way to do this?
Here's how one can modify container commands without building another image.
Pull the image
docker pull sonarsource/sonar-scanner-cli
Inspect it
docker inspect sonarsource/sonar-scanner-cli
You should get something like this:
"Cmd": [
    "/bin/sh",
    "-c",
    "#(nop) ",
    "CMD [\"sonar-scanner\"]"   <- arguments
],
"WorkingDir": "/usr/src",
"Entrypoint": [                 <- executable
    "/usr/bin/entrypoint.sh"
],
Entrypoint is what will be executed and CMD [...] (not Cmd) are the arguments for the executable. In a human-friendly format that equals to:
# entrypoint args
/usr/bin/entrypoint.sh sonar-scanner
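Docker builds the container's argument vector by concatenating Entrypoint and Cmd; a trivial shell sketch of that rule:

```shell
# Docker's rule: final argv = Entrypoint followed by Cmd.
entrypoint="/usr/bin/entrypoint.sh"
cmd="sonar-scanner"
argv="$entrypoint $cmd"
echo "$argv"
```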
Now in this case we have a script that is being executed so there are two options.
Option 1: Modify the entrypoint script and mount it at launch
Run this to save the script on your machine:
docker run --rm --entrypoint="" sonarsource/sonar-scanner-cli /bin/cat /usr/bin/entrypoint.sh > entrypoint.sh
Modify entrypoint.sh as you like, then put its contents into a configMap.
Mount the file from the configMap instead of /usr/bin/entrypoint.sh (don't forget to set mode to 0755)
Option 2: Change entrypoint and arguments in resource definition
Note that this may not work with some images (ones with no shell inside).
- name: "sonarqube-scan-{{ .Values.git.commitID }}"
  image: "{{ .Values.harbor.host }}/{{ .Values.harbor.cache }}/sonarsource/sonar-scanner-cli"
  command:  # this is the entrypoint in the k8s API
    - /bin/sh
    - -c
  args:     # this goes instead of CMD
    - "before.sh && /usr/bin/entrypoint.sh sonar-scanner && after.sh"
    #             | original command and args            |
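The shape of that wrapper can be checked locally with plain shell. Note the args line above chains with &&, while the question wanted to pipe the scanner output into after.sh; the sketch below uses a pipe for the last step, with hypothetical stand-in functions for before.sh, the entrypoint, and after.sh:

```shell
before()  { echo "pre-setup"; }
scanner() { echo "scan output"; }   # stands in for /usr/bin/entrypoint.sh sonar-scanner
after()   { sed 's/^/parsed: /'; }  # reads the scanner output from stdin

# before runs first; the scanner's stdout is piped into after:
before && scanner | after
```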

docker compose to delay container build and start

I have a couple of containers running in sequence.
I am using depends_on to make sure the next one only starts after the current one is running.
I realized one of the containers has a cron job that needs to finish
so that the next container has the proper data to import.
In this case, I cannot just rely on the depends_on parameter.
How do I delay the next container's start? Say, wait for 5 minutes.
Sample docker-compose:
test1:
  networks:
    - test
  image: test1
  ports:
    - "8115:8115"
  container_name: test1
test2:
  networks:
    - test
  image: test2
  depends_on:
    - test1
  ports:
    - "8160:8160"
You can use an entrypoint script, something like this (netcat needs to be installed):
until nc -w 1 -z test1 8115; do
>&2 echo "Service is unavailable - sleeping"
sleep 1
done
sleep 2
>&2 echo "Service is up - executing command"
And execute it via the command instruction of the service (in the docker-compose file) or via the CMD directive in the Dockerfile.
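The same until-loop shape can be exercised without netcat by polling any check command; here a marker file (hypothetical) stands in for the open port, and the sleep is dropped so the sketch finishes immediately:

```shell
marker="$(mktemp -u)"   # a path that does not exist yet
tries=0
until [ -f "$marker" ]; do
  tries=$((tries + 1))
  if [ "$tries" -ge 3 ]; then
    touch "$marker"     # simulate the service coming up on the 3rd check
  fi
  # in the real script: sleep 1
done
rm -f "$marker"
echo "Service is up after $tries checks"
```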
I added this in the Dockerfile (since it was just for a quick test):
CMD sleep 60 && node server.js
A 60-second sleep did the trick, since the Node.js part was executing before a database dump init script could finish executing fully.
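A more robust alternative to a fixed sleep is gating test2 on a healthcheck of test1. A sketch, assuming a Compose file format that supports the long depends_on form (2.1+) and that nc is available in the test1 image:

```yaml
test1:
  image: test1
  healthcheck:
    test: ["CMD", "nc", "-z", "localhost", "8115"]
    interval: 10s
    retries: 30
test2:
  image: test2
  depends_on:
    test1:
      condition: service_healthy
```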

Invalid ENTRYPOINT when deploying Docker image inside Google Cloud

I'm getting this error when I run 'gcloud builds submit --config cloudbuild.yaml' from the gcloud CLI.
Step #1: Deploying...
Step #1: Setting IAM Policy.....................................done
Step #1: Creating Revision.....................................................failed
Step #1: Deployment failed
Step #1: ERROR: (gcloud.run.deploy) Cloud Run error: Invalid argument error. Invalid ENTRYPOINT. [name: "gcr.io/customerapi-275705/quickstart-image#sha256:0d1965181fa4c2811c3fcbd63d68de5b4c348ee5b62615594946dea48fee9735"
Step #1: error: "Command \"/quickstart.sh\": invalid mode \"-rw-rw-rw-\" for /quickstart.sh"
Step #1: ].
Finished Step #1
The file is supposed to have execute ('+x') permissions set by chmod. The Windows equivalent would be '/grant User:F'.
Step #1: error: "Command \"/quickstart.sh\": invalid mode \"-rw-rw-rw-\" for /quickstart.sh"
-rw-rw-rw- seems about right to me. What am I missing?
This is in my Dockerfile
FROM alpine
COPY quickstart.sh /
CMD ["\/quickstart.sh"]
And this is my cloudbuild.yaml
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: [ 'build', '-t', 'gcr.io/$PROJECT_ID/quickstart-image', '.' ]
  - name: 'gcr.io/cloud-builders/gcloud'
    args:
      - 'run'
      - 'deploy'
      - 'myservice'
      - '--image'
      - 'gcr.io/$PROJECT_ID/quickstart-image'
      - '--region'
      - 'europe-north1'
      - '--platform'
      - 'managed'
      - '--allow-unauthenticated'
images:
  - 'gcr.io/$PROJECT_ID/quickstart-image'
I believe that in the Cloud Build environment sandbox your quickstart.sh does not have execution permissions, which you can check adding this step to your Cloud Build cloudbuild.yaml config file:
- name: 'ubuntu'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      ls -lart
I am not sure if the Cloud Build sandbox will allow you to give execution permissions to a bash script but you might try to do it by adding another step with chmod +x quickstart.sh.
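Another common fix is to set the execute bit at image build time, so the file mode on the build host (e.g. Windows) no longer matters. A sketch of the question's Dockerfile with that change:

```dockerfile
FROM alpine
COPY quickstart.sh /
# Set the execute bit inside the image, regardless of host permissions
RUN chmod +x /quickstart.sh
CMD ["/quickstart.sh"]
```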
I was having the same problem a few hours ago, I fixed it by adding an exec form ENTRYPOINT to the end of the Dockerfile.
I tried the shell form ENTRYPOINT, but it didn't work, presumably because of the following:
The shell form prevents any CMD or run command line arguments from being used, but has the disadvantage that your ENTRYPOINT will be started as a subcommand of /bin/sh -c, which does not pass signals. This means that the executable will not be the container’s PID 1 - and will not receive Unix signals - so your executable will not receive a SIGTERM from docker stop <container>.
GCP probably needs to pass some command line arguments.
source

Docker-in-Docker issues with connecting to internal container network (Anchore Engine)

I am having issues when trying to connect to a docker-compose network from inside a container. These are the files I am working with. The whole thing runs when I execute ./run.sh.
Dockerfile:
FROM docker/compose:latest
WORKDIR .
# EXPOSE 8228
RUN apk update
RUN apk add py-pip
RUN apk add jq
RUN pip install anchorecli
COPY dockertest.sh ./dockertest.sh
COPY docker-compose.yaml docker-compose.yaml
CMD ["./dockertest.sh"]
docker-compose.yaml
services:
  # The primary API endpoint service
  engine-api:
    image: anchore/anchore-engine:v0.6.0
    depends_on:
      - anchore-db
      - engine-catalog
    #volumes:
    #  - ./config-engine.yaml:/config/config.yaml:z
    ports:
      - "8228:8228"
..................
## A NUMBER OF OTHER CONTAINERS THAT ANCHORE-ENGINE USES ##
..................
networks:
  default:
    external:
      name: anchore-net
dockertest.sh
echo "------------- INSTALL ANCHORE CLI ---------------------"
engineid=`docker ps | grep engine-api | cut -f 1 -d ' '`
engine_ip=`docker inspect $engineid | jq -r '.[0].NetworkSettings.Networks."cws-anchore-net".IPAddress'`
export ANCHORE_CLI_URL=http://$engine_ip:8228/v1
export ANCHORE_CLI_USER='user'
export ANCHORE_CLI_PASS='pass'
echo "System status"
anchore-cli --debug system status #This line throws error (see below)
run.sh:
#!/bin/bash
docker build . -t anchore-runner
docker network create anchore-net
docker-compose up -d
docker run --network="anchore-net" -v //var/run/docker.sock:/var/run/docker.sock anchore-runner
#docker network rm anchore-net
Error Message:
System status
INFO:anchorecli.clients.apiexternal:As Account = None
DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): 172.19.0.6:8228
Error: could not access anchore service (user=user url=http://172.19.0.6:8228/v1): HTTPConnectionPool(host='172.19.0.6', port=8228): Max retries exceeded with url: /v1
(Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',))
Steps:
run.sh builds the container image and creates the network anchore-net
the container has an entrypoint script, which does multiple things:
firstly, it brings up the docker-compose network as detached FROM inside the container
secondly, it installs anchore-cli so I can run commands against the container network
lastly, it attempts to get the system status of anchore-engine (the docker-compose network), but that's where I am running into HTTP connection issues.
I am dynamically getting the IP of the API endpoint container of anchore-engine and setting the request URL with it. I have also tried passing those variables on the command line, such as:
anchore-cli --u user --p pass --url http://$engine_ip/8228/v1 system status
but that throws the same error.
For those of you who took the time to read through this, I highly appreciate any input you can give me as to where the issue may be lying. Thank you very much.

Ansible - playbook dynamic verbosity

I want to build a docker image from a Dockerfile. I can do this by using the bash like this:
[root@srv01 ~]# docker build -t appname/tomcat:someTag /root/Documents/myDockerfiles/tomcat
The good thing about having the image build using the bash is, that it prints to stdout what it executes step-by-step:
Step 1 : FROM tomcat:8.0.32-jre8
8.0.32-jre8: Pulling from library/tomcat
fdd5d7827f33: Already exists
...
When using Ansible in the following fashion from bash:
[root@localhost ansiblescripts]# ansible-playbook -vvvvv build-docker-image.yml
Where the file build-docker-image.yml contains this content:
- name: "my build-docker-image.yml playbook"
  hosts: myHost
  tasks:
    - name: "simple ping"
      ping:
    - name: "build the docker image"
      become: yes
      become_user: root
      become_method: su
      command: /bin/docker build -t something/tomcat:ver1 /home/docker/tomcat
      #async: 1
      #poll: 0
It waits for the whole build command to finish and then prints all of the stdout as verbose output together, in one piece.
Commenting in async: 1 and poll: 0 doesn't solve my problem, since then it doesn't print the stdout at all.
