Invalid ENTRYPOINT when deploying Docker image inside Google Cloud - docker

I'm getting this error when I run 'gcloud builds submit --config cloudbuild.yaml' from the gcloud CLI.
Step #1: Deploying...
Step #1: Setting IAM Policy.....................................done
Step #1: Creating Revision.....................................................failed
Step #1: Deployment failed
Step #1: ERROR: (gcloud.run.deploy) Cloud Run error: Invalid argument error. Invalid ENTRYPOINT. [name: "gcr.io/customerapi-275705/quickstart-image@sha256:0d1965181fa4c2811c3fcbd63d68de5b4c348ee5b62615594946dea48fee9735"
Step #1: error: "Command \"/quickstart.sh\": invalid mode \"-rw-rw-rw-\" for /quickstart.sh"
Step #1: ].
Finished Step #1
The file is supposed to have execute ('+x') permissions set by chmod. The Windows equivalent would be '/grant User:F'.
Step #1: error: "Command \"/quickstart.sh\": invalid mode \"-rw-rw-rw-\" for /quickstart.sh"
-rw-rw-rw- seems about right to me. What am I missing?
This is in my Dockerfile
FROM alpine
COPY quickstart.sh /
CMD ["\/quickstart.sh"]
And this is my cloudbuild.yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: [ 'build', '-t', 'gcr.io/$PROJECT_ID/quickstart-image', '.' ]
- name: 'gcr.io/cloud-builders/gcloud'
  args:
  - 'run'
  - 'deploy'
  - 'myservice'
  - '--image'
  - 'gcr.io/$PROJECT_ID/quickstart-image'
  - '--region'
  - 'europe-north1'
  - '--platform'
  - 'managed'
  - '--allow-unauthenticated'
images:
- 'gcr.io/$PROJECT_ID/quickstart-image'

I believe that in the Cloud Build environment sandbox your quickstart.sh does not have execute permissions, which you can check by adding this step to your Cloud Build cloudbuild.yaml config file:
- name: 'ubuntu'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    ls -lart
I am not sure whether the Cloud Build sandbox will allow you to give execute permissions to a bash script, but you might try to do it by adding another step that runs chmod +x quickstart.sh, along the lines of the sketch below.
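A minimal sketch of such a step, placed before the docker build step (this assumes quickstart.sh sits at the repository root; Cloud Build persists /workspace between steps, so the changed mode should carry over into the build):

- name: 'ubuntu'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    # assumption: quickstart.sh is in the repository root (/workspace)
    chmod +x quickstart.sh
    ls -l quickstart.sh   # verify the execute bit is now set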

I was having the same problem a few hours ago; I fixed it by adding an exec-form ENTRYPOINT at the end of the Dockerfile.
I tried the shell form ENTRYPOINT, but it didn't work, presumably because of the following:
The shell form prevents any CMD or run command line arguments from being used, but has the disadvantage that your ENTRYPOINT will be started as a subcommand of /bin/sh -c, which does not pass signals. This means that the executable will not be the container’s PID 1 - and will not receive Unix signals - so your executable will not receive a SIGTERM from docker stop <container>.
GCP probably needs to pass some command line arguments.
source
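For the Dockerfile above, the exec-form fix would look something like this (a sketch; the RUN chmod line is an extra safeguard against the invalid-mode error and was not part of the original answer):

FROM alpine
COPY quickstart.sh /
# assumption: also set the execute bit inside the image
RUN chmod +x /quickstart.sh
ENTRYPOINT ["/quickstart.sh"]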

Related

cloud build does not recognize build directory argument

I am trying to build a Cloud Run job with a trigger from Cloud Build and secrets from Secret Manager. I managed to get the trigger that I use to build my Dockerfile to run, but the build itself fails with the following error:
BUILD
Starting Step #0 - "build image"
Step #0 - "build image": Already have image (with digest): gcr.io/cloud-builders/docker
Step #0 - "build image": "docker build" requires exactly 1 argument.
Step #0 - "build image": See 'docker build --help'.
Step #0 - "build image":
Step #0 - "build image": Usage: docker build [OPTIONS] PATH | URL | -
Step #0 - "build image":
Step #0 - "build image": Build an image from a Dockerfile
Finished Step #0 - "build image"
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1
What I have already tried:
Verified that there is a build directory in the command;
Rearranged the order of build arguments just in case;
I also tried breakout syntax (with '|' as one of the arguments), but it did not work out - the image was not built at all.
UPDATED: I tried running the build without --build-args and it started actually building! Looks like a bug.
Here is my cloudbuild.yaml:
steps:
- id: "build image"
  name: "gcr.io/cloud-builders/docker"
  entrypoint: 'bash'
  args:
    ['-c', 'docker build --build-arg CONTAINER_PRIVATE_KEY=$$PRIVATE_KEY --build-arg CONTAINER_PUBLIC_KEY=$$PUBLIC_KEY -t gcr.io/${PROJECT_ID}/${_JOB_NAME} .']
  secretEnv: [ 'PRIVATE_KEY', 'PUBLIC_KEY' ]
- id: "push image"
  name: "gcr.io/cloud-builders/docker"
  args: [ "push", "gcr.io/${PROJECT_ID}/${_JOB_NAME}" ]
- id: "deploy to cloud run"
  name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: gcloud
  args:
    [
      'beta', 'run', '${_JOB_NAME}',
      '--image', 'gcr.io/${PROJECT_ID}/${_JOB_NAME}',
      '--region', '${_REGION}',
      '--set-env-vars', "BUCKET=${_BUCKET}",
      '--set-env-vars', "MNT_DIR=${_MNT_DIR}"
    ]
images:
- "gcr.io/${PROJECT_ID}/${_JOB_NAME}"
availableSecrets:
  secretManager:
  - versionName: "projects/${_PROJECT_ID_NUMBER}/secrets/${_CONTAINER_PRIVATE_KEY_SECRET_NAME}/versions/latest"
    env: "PRIVATE_KEY"
  - versionName: "projects/${_PROJECT_ID_NUMBER}/secrets/${_CONTAINER_PUBLIC_KEY_SECRET_NAME}/versions/latest"
    env: "PUBLIC_KEY"
So, after extensive testing and trying out various options, I managed to figure out what was causing the issue; below is the correct argument string (it goes in args):
["-c", "docker build --build-arg 'CONTAINER_PRIVATE_KEY=$$PRIVATE_KEY' --build-arg 'CONTAINER_PUBLIC_KEY=$$PUBLIC_KEY' -t gcr.io/${PROJECT_ID}/${_JOB_NAME} ."]
The problem was the lack of single quotes around the build-arg values. Basically, in this context each build-arg value has to be passed as a single quoted string, not as a bare key-value pair.
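Put back into the build step from the config above, the fix would look roughly like this:

- id: "build image"
  name: "gcr.io/cloud-builders/docker"
  entrypoint: 'bash'
  args:
    # same step as before, with the build-arg values single-quoted
    ["-c", "docker build --build-arg 'CONTAINER_PRIVATE_KEY=$$PRIVATE_KEY' --build-arg 'CONTAINER_PUBLIC_KEY=$$PUBLIC_KEY' -t gcr.io/${PROJECT_ID}/${_JOB_NAME} ."]
  secretEnv: [ 'PRIVATE_KEY', 'PUBLIC_KEY' ]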

Cloud Build failure, unable to find logs to see what is going on

I am kicking off a Dataflow flex template using Cloud Build. In my Cloud Build file I am attempting to do 3 things:
build an image
publish it
run a flex template job using that image
This is my YAML file:
substitutions:
  _IMAGE: my_logic:latest4
  _JOB_NAME: 'pipelinerunner'
  _TEMP_LOCATION: ''
  _REGION: us-central1
  _FMPKEY: ''
  _PYTHON_VERSION: '3.8'
# checkout this link https://github.com/davidcavazos/python-docs-samples/blob/master/dataflow/gpu-workers/cloudbuild.yaml
steps:
- name: gcr.io/cloud-builders/docker
  args:
    [ 'build'
    , '--build-arg=python_version=$_PYTHON_VERSION'
    , '--tag=gcr.io/$PROJECT_ID/$_IMAGE'
    , '.'
    ]
# Push the image to Container Registry.
- name: gcr.io/cloud-builders/docker2
  args: [ 'push', 'gcr.io/$PROJECT_ID/$_IMAGE' ]
- name: gcr.io/$PROJECT_ID/$_IMAGE
  entrypoint: python
  args:
  - /dataflow/template/main.py
  - --runner=DataflowRunner
  - --project=$PROJECT_ID
  - --region=$_REGION
  - --job_name=$_JOB_NAME
  - --temp_location=$_TEMP_LOCATION
  - --sdk_container_image=gcr.io/$PROJECT_ID/$_IMAGE
  - --disk_size_gb=50
  - --year=2018
  - --quarter=QTR1
  - --fmpkey=$_FMPKEY
  - --setup_file=/dataflow/template/setup.py
options:
  logging: CLOUD_LOGGING_ONLY
# Use the Compute Engine default service account to launch the job.
serviceAccount: projects/$PROJECT_ID/serviceAccounts/$PROJECT_NUMBER-compute@developer.gserviceaccount.com
And this is the command I am launching:
gcloud beta builds submit \
--config run.yaml \
--substitutions _REGION=$REGION \
--substitutions _FMPKEY=$FMPKEY \
--no-source
The error message I am getting is this:
Logs are available at [https://console.cloud.google.com/cloud-build/builds/0f5953cc-7802-4e53-b7c4-7e79c6f0d0c7?project=111111111].
ERROR: (gcloud.beta.builds.submit) build 0f5953cc-7802-4e53-b7c4-7e79c6f0d0c7 completed with status "FAILURE
but I cannot access the logs from the URL mentioned above.
I cannot see the logs, so I am unable to tell what is wrong, but I strongly suspect something in my run.yaml is not quite right.
Note: before this, I was building the image myself by launching this command:
gcloud builds submit --project=$PROJECT_ID --tag $TEMPLATE_IMAGE .
and my run.yaml contained just one step, the last one, and everything worked fine.
But I am trying to see if I can do everything in the YAML file.
Could anyone advise on what might be incorrect? I don't have much experience with YAML files for Cloud Build.
Thanks and regards,
Marco
I guess the pipeline does not work because, in the second step, the container gcr.io/cloud-builders/docker2 does not exist (check https://gcr.io/cloud-builders/ - there is a docker container, but no docker2 one).
This second step pushes the final container to the registry, and the third step depends on it, so the third step will fail too.
You can build the container and push it to the container registry in just one step:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/$IMAGE_NAME', '<path_to_docker-file>']
images: ['gcr.io/$PROJECT_ID/$IMAGE_NAME']
OK, sorted: the problem was the way I was launching the build command.
This is the original:
gcloud beta builds submit \
--config run.yaml \
--substitutions _REGION=$REGION \
--substitutions _FMPKEY=$FMPKEY \
--no-source
Apparently, when I removed the --no-source flag, all worked fine; the working command is shown below. (That makes sense: the first step runs docker build with '.' as its build context, and with --no-source no source is uploaded for it to build from.)
I think I copied and pasted the command without really understanding it.
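For reference, this is the same command with just the --no-source flag removed:

# same invocation as above, minus --no-source
gcloud beta builds submit \
  --config run.yaml \
  --substitutions _REGION=$REGION \
  --substitutions _FMPKEY=$FMPKEY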
regards

How to run `args` as `command` in kubernetes

I have a Python script I want to run in a Kubernetes Job. I have used a ConfigMap to mount it into the container, located for example at dir/script.py.
The container is normally run with args ["load"].
I have tried using a postStart lifecycle hook in the Job manifest, but it appears not to run:
lifecycle:
  preStop:
    exec:
      command:
      - /bin/sh
      - -c
      - /usr/bin/python /opt/config-init/db/tls_generator.py
Below is a snippet of the manifest:
containers:
- name: {{ template "gluu.name" . }}-load
  image: gluufederation/config-init:4.0.0_dev
  lifecycle:
    preStop:
      exec:
        command:
        - /bin/sh
        - -c
        - /usr/bin/python /opt/config-init/db/tls_generator.py
  volumeMounts:
  - mountPath: /opt/config-init/db/
    name: {{ template "gluu.name" . }}-config
  - mountPath: /opt/config-init/db/generate.json
    name: {{ template "gluu.fullname" . }}-mount-gen-file
    subPath: generate.json
  - mountPath: /opt/config-init/db/tls_generator.py
    name: {{ template "gluu.fullname" . }}-tls-script
  envFrom:
  - configMapRef:
      name: {{ template "gluu.fullname" . }}-config-cm
  args: [ "load" ]
How can I run the tls_generator.py script after args ["load"]?
The Dockerfile part looks like:
ENTRYPOINT ["tini", "-g", "--", "/app/scripts/entrypoint.sh"]
CMD ["--help"]
You are using Container Lifecycle Hooks, to be more specific, PreStop.
This hook is called immediately before a container is terminated due to an API request or management event such as liveness probe failure, preemption, resource contention and others.
If you want to execute a command when pod is starting you should consider using PostStart.
This hook executes immediately after a container is created. However, there is no guarantee that the hook will execute before the container ENTRYPOINT. No parameters are passed to the handler.
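A minimal sketch of the same command attached as a postStart hook instead (command and path taken from the manifest above; the caveat from the quote applies, so the hook may race with the container ENTRYPOINT):

lifecycle:
  postStart:
    exec:
      command:
      - /bin/sh
      - -c
      - /usr/bin/python /opt/config-init/db/tls_generator.py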
Another option would be using Init Containers; here are a few ideas with examples:
Wait for a Service to be created, using a shell one-line command like:
for i in {1..100}; do sleep 1; if dig myservice; then exit 0; fi; done; exit 1
Register this Pod with a remote server from the downward API with a command like:
curl -X POST http://$MANAGEMENT_SERVICE_HOST:$MANAGEMENT_SERVICE_PORT/register -d 'instance=$(<POD_NAME>)&ip=$(<POD_IP>)'
Wait for some time before starting the app container with a command like
sleep 60
Please read the documentation on how to use Init containers for more details.
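As a sketch, the first idea could be wired into the Job's pod spec like this (the init container name and the myservice name are illustrative; nslookup is used instead of dig because the busybox image ships nslookup):

initContainers:
- name: wait-for-myservice   # illustrative name
  image: busybox:1.28
  # block until the Service's DNS name resolves
  command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2; done']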
My end goal was to run tls_generator.py right after the load command has completed. This is what I came up with, and it is working fine:
command: ["/bin/sh", "-c"]
args: ["tini -g -- /app/scripts/entrypoint.sh load && /usr/bin/python
/scripts/tls_generator.py"]
In this case the default argument when running "tini -g -- /app/scripts/entrypoint.sh" would be --help (the CMD from the Dockerfile), but appending load passes it as the command instead.

Gitlab ci exits lftp command when ending connection with error

I'm trying to deploy my web app using the FTP protocol and GitLab's continuous integration. The files all get uploaded and the site works fine, but I keep getting the following error when the GitLab runner is almost done.
My gitlab-ci.yml file:
stages:
- build
- test
- deploy

build:
  stage: build
  tags:
  - shell
  script:
  - echo "Building"

test:
  stage: test
  tags:
  - shell
  script: echo "Running tests"

frontend-deploy:
  stage: deploy
  tags:
  - debian
  allow_failure: true
  environment:
    name: devallei
    url: https://devallei.azurewebsites.net/
  only:
  - master
  script:
  - echo "Deploy to staging server"
  - apt-get update -qq
  - apt-get install -y -qq lftp
  - lftp -c "set ftp:ssl-allow yes; set ssl:verify-certificate false; debug; open -u devallei\FTPAccesHoussem,Devallei2019 ftps://waws-prod-dm1-131.ftp.azurewebsites.windows.net/site/wwwroot; mirror -Rev ./frontend/dist /site/wwwroot"

backend-deploy:
  stage: deploy
  tags:
  - shell
  allow_failure: true
  only:
  - master
  script:
  - echo "Deploy spring boot application"
I expect the runner to go through and pass the job, but it gives me the following error:
---- Connecting data socket to (23.99.220.117) port 10033
---- Data connection established
---> ALLO 4329977
<--- 200 ALLO command successful.
---> STOR vendor.3b66c6ecdd8766cbd8b1.js.map
<--- 125 Data connection already open; Transfer starting.
---- Closing data socket
<--- 226 Transfer complete.
---> QUIT
gnutls_record_recv: The TLS connection was non-properly terminated. Assuming
EOF.
<--- 221 Goodbye.
---- Closing control socket
ERROR: Job failed: exit code 1
I don't know the reason for the "gnutls_record_recv: The TLS connection was non-properly terminated. Assuming EOF." error, but it makes your lftp command return a non-zero exit code, which makes GitLab think your job failed. The best thing would be to fix it.
If you think everything works fine and want to prevent the lftp command from failing the job, add || true to the end of the lftp command, as sketched below. But be aware that your job then won't fail even when a real error happens.
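Applied to the deploy script above, that would look something like this (a sketch; note it masks every lftp failure, not just this TLS shutdown quirk):

script:
# same lftp invocation as before, with || true so a non-zero exit code no longer fails the job
- lftp -c "set ftp:ssl-allow yes; set ssl:verify-certificate false; debug; open -u devallei\FTPAccesHoussem,Devallei2019 ftps://waws-prod-dm1-131.ftp.azurewebsites.windows.net/site/wwwroot; mirror -Rev ./frontend/dist /site/wwwroot" || true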

Ansible - playbook dynamic verbosity

I want to build a Docker image from a Dockerfile. I can do this using bash like this:
[root@srv01 ~]# docker build -t appname/tomcat:someTag /root/Documents/myDockerfiles/tomcat
The good thing about building the image via bash is that it prints what it executes to stdout, step by step:
Step 1 : FROM tomcat:8.0.32-jre8
8.0.32-jre8: Pulling from library/tomcat
fdd5d7827f33: Already exists
...
When using Ansible in the following fashion from bash:
[root@localhost ansiblescripts]# ansible-playbook -vvvvv build-docker-image.yml
Where the file build-docker-image.yml contains this content:
- name: "my build-docker-image.yml playbook"
hosts: myHost
tasks:
- name: "simple ping"
ping:
- name: "build the docker image"
become: yes
become_method: root
become_method: su
command: /bin/docker build -t something/tomcat:ver1 /home/docker/tomcat
#async: 1
#poll: 0
It waits for the whole build command to finish and then prints all the stdout as verbose output in one piece.
Commenting in async: 1 and poll: 0 doesn't solve my problem, since then it doesn't print the stdout at all.
