I have pods created by a CronJob running in parallel. They complete the task and run again after a fixed interval of 20 minutes, as per the cron expression. I noticed that some pods are restarting 2-3 times before completing the task.
I checked the details with the kubectl describe pod command and found that the pod exits with code 2 when it restarts due to some error:
Last State: Terminated
Reason: Error
Exit Code: 2
I searched for exit code 2 and found that it indicates misuse of a shell builtin command. How can I find which shell builtin is misused? How do I debug the cause of exit code 2?
Thanks in advance.
An exit code of 2 indicates either that the application chose to return that error code, or (by convention) that there was a misuse of a shell built-in. Check your pod's command specification to ensure that the command is correct. If you think it is correct, try running the image locally with a shell and running the command directly.
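For example, a quick local check could look like this (a sketch; my-image stands in for your CronJob's container image):
docker run -it --entrypoint /bin/sh my-image
# inside the container, run the pod's command by hand, then inspect its exit status:
echo $?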
Refer to this link for more information.
You can get logs with
kubectl logs my-pod
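Since the container has restarted, kubectl logs shows the current run by default; the --previous flag retrieves the logs of the crashed instance, which is usually where the actual error message is:
kubectl logs my-pod --previous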
Post output here if you can't fix it.
I received this error message, which means something is failing inside a bash script executed by the Dockerfile.
As an example, if something inside test.sh errors:
RUN test.sh
# 16 ERROR: executor failed running [/bin/sh -c test.sh]: exit code: 127
Question
What is the recommended way to gain visibility over the exact error message (i.e. to find out what's gone wrong) and to diagnose which line(s) of a bash script executed from a Dockerfile are problematic? Can docker be made to provide the output of the bash script so the exact error message is shown, rather than just the somewhat cryptic:
executor failed running exit code: 127
as seen here.
What I know so far
One way to diagnose which line is misbehaving is to survey the script, assess which lines might be causing problems, and comment out the suspect line and everything after it. If the error goes away, you've found the (first) problem line and it can be addressed. Rinse and repeat until the script is error-free. But this is more manual than one would hope.
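A slightly less manual approach is to ask docker for unabridged build output and make the script trace itself (this assumes BuildKit, which produced the numbered step output above; note also that exit code 127 conventionally means "command not found", so the script's shebang and the PATH inside the image are good first suspects):
# show full, uncollapsed output for every build step (BuildKit)
docker build --progress=plain .
and, near the top of test.sh:
# stop at the first failing command and echo each command as it runs
set -eux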
I've followed the Vapor tutorial to create a hello app. In Xcode, when I run the Run scheme on my Mac, the app starts and runs, as I can see by opening http://localhost:8080/. After making some changes in the code, I stop the Run scheme and expect the Vapor server to shut down. However, it continues to serve requests.
Message from debugger: The LLDB RPC server has exited unexpectedly. Please file a bug if you have reproducible steps.
Program ended with exit code: -1
Naturally, when I make some changes and run the Run scheme again, I get the following runtime error:
Swift/ErrorType.swift:200: Fatal error: Error raised at top level: bind(descriptor:ptr:bytes:) failed: Address already in use (errno: 48)
Program ended with exit code: 9
How do I stop or restart the server?
This is a long-standing issue with Xcode/LLDB. You have a few options:
attach to the process and stop it via Xcode
run killall Run
run lsof -i :8080 to find the process connected to port 8080 and then kill <process_id> (this is useful if you're running multiple apps side by side and only want to terminate the orphaned one)
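The last two commands can be combined into a one-liner, assuming the orphaned server is the only process listening on port 8080 (lsof's -t flag prints bare PIDs):
kill $(lsof -ti :8080)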
This is quite frustrating and to be honest I don't know why this happens, but I do the following to terminate the process:
In Xcode,
Go to Debug -> Attach to Process
At the very top of the sub-menu is a Likely Targets section with an entry Run (nnnn). It will have the icon of the Terminal application
Click to attach
Then stop the Xcode Run in the usual way.
Out of interest, the next time you run your Vapor app, if you open the Debug Navigator, at the top you will see the Terminal icon with Run PID nnnn, where nnnn is the PID. If you go to Debug -> Attach to Process again, you can see this at the top of the sub-menu as before. But you won't be able to attach to it because it is already being debugged.
Hope this helps you or someone in the future.
I want to run a Kubernetes CronJob for a PHP script. The job executes properly, but the status of the pod remains Running and after a few minutes it becomes Error. It should be Completed. I tried different options but couldn't resolve the issue.
Here is my CronJob YAML file
Here is the output of kubectl get pods
Here is the log output inside the container.
Ignore the PHP exception; the issue is there regardless of the exception.
The pod's state is set to Completed when the running process (the application in the container) returns exit code 0.
If it returns a non-zero exit code, the state is usually set to Error.
If you want the pod to reach Completed status, just make sure the application returns exit code 0 when it finishes.
OPINION: In usual cases this is something that should be (and is) handled by the application itself.
I'm attaching the docs for Kubernetes Jobs.
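To confirm what the container actually returned, you can read the terminated container's exit code straight from the pod status (my-pod is a placeholder for the failing pod's name):
kubectl get pod my-pod -o jsonpath='{.status.containerStatuses[0].state.terminated.exitCode}'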
I'm trying to tie scripts from an existing pipeline on docker into my snakemake pipeline. I have the docker pipeline set up using singularity and it works. For instance,
singularity exec docker://mypipeline some_command.sh file.bam out_file.bam
works perfectly when I run it interactively on the command line. Similarly, when I incorporate the exact same command into my Snakefile it also works:
rule myrule:
    input:
        "file.bam"
    output:
        "out_file.bam"
    shell:
        "singularity exec docker://mypipeline some_command.sh {input} {output}"
However, when I try to follow this tutorial https://reproducibility.sschmeier.com/container/index.html#using-a-container-in-our-workflow to incorporate the container into my workflow as follows
singularity: "docker://mypipeline"
rule myrule:
input:
"file.bam"
output:
"out_file.bam"
shell:
"some_command.sh {input} {output}"
When I then run snakemake -p --use-singularity --cores 1, I get the following output:
Building DAG of jobs...
Using shell: /bin/bash
Provided cores: 1 (use --cores to define parallelism)
Rules claiming more threads will be scaled down.
Job counts:
count jobs
1 myrule
1
[Sun May 17 15:28:11 2020]
rule myrule:
input: file.bam
output: out_file.bam
jobid: 0
some_command.sh file.bam out_file.bam
Activating singularity image myImage.simg
Then I get a very long report that I'm not sure what to make of, followed by this error message:
Waiting at most 5 seconds for missing files.
MissingOutputException in line 3 of Snakefile:
Job completed successfully, but some output files are missing. Missing files after 5 seconds:
out_file.bam
This might be due to filesystem latency. If that is the case, consider to increase the wait time with --latency-wait.
Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
Complete log: .snakemake/log/2020-05-17T152810.484310.snakemake.log
My questions:
Why does one work and not the other, and how can I get the last example to work?
Is it good practice to declare singularity: "docker://..." upfront, or does it not matter?
The error message suggests the singularity command executed successfully, but snakemake doesn't see the output file. Is the output file out_file.bam shown in your code the same as the one you actually use, or did you remove some filepath? I would suggest adding the --verbose flag to snakemake and reviewing the actual singularity command that snakemake executes.
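For reference, that check would look like the first command below. And if out_file.bam is written somewhere outside snakemake's default bind mounts, passing an extra bind through --singularity-args may help (/data is a hypothetical path; substitute the real parent directory of your output):
snakemake -p --verbose --use-singularity --cores 1
# bind the output's parent directory into the container if it lives outside the workdir
snakemake -p --use-singularity --singularity-args "-B /data" --cores 1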
Environment: Windows 10 Home, gcloud SDK v240.0, kubectl added as a gcloud SDK component, Jenkins 2.169
I am running a Jenkins pipeline in which I call a windows batch file as a post-build action.
In that batch file, I am running:
kubectl set image deployment/py-gmicro py-gmicro=%IMAGE_NAME%
I get this
error: the server doesn't have a resource type deployment
However, if I run the batch file directly from the command prompt, it works fine. Looks like it has an issue only if I run it from Jenkins.
I looked at a similar thread on Stack Overflow; however, that user was using Bitbucket (instead of Jenkins).
Also, there was no accepted answer on that thread, and I cannot continue on it since I am not allowed to comment (50 reputation required).
This was just answered on this thread.
I fixed this error by explicitly setting the namespace as an argument, e.g.:
kubectl set image -n foonamespace deployment/ms-userservice.....
Reference:
https://www.mankier.com/1/kubectl-set-image#--namespace
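Alternatively, the namespace can be pinned in the kubeconfig that the Jenkins user relies on, so every later kubectl call inherits it (foonamespace is the example namespace from the answer above):
kubectl config set-context --current --namespace=foonamespace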