Jenkins "exit status not zero" error (status code 125)

At some point while operating Jenkins, a job started spitting out status 125 with no error message, and I couldn't find any documentation for error #125.
The ssh forwarding test between the deployment server and the main server passed.
SSH: Connecting from host [ip-122.222.222]
SSH: Connecting with configuration [test2] ...
SSH: EXEC: completed after 7,807 ms
SSH: Disconnecting configuration [test2] ...
ERROR: Exception when publishing, exception message [Exec exit status not zero. Status [125]]
Finished: UNSTABLE

Finally, I found the answer.
The cause was a Docker-related error on the distribution server. Because the failure happened on the distribution server, only the exit code was reported back to the Jenkins server.
If you get error #125, check Docker on the distribution server.
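One way to do that check, assuming you have SSH access to the distribution server; the host name deploy-server is a placeholder for your own machine:
$ ssh deploy-server 'systemctl is-active docker'              # is the Docker daemon running?
$ ssh deploy-server 'docker info > /dev/null; echo "exit: $?"'  # can the client reach the daemon?
$ ssh deploy-server 'journalctl -u docker -n 50 --no-pager'   # recent daemon errors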

According to the ssh man page:
ssh exits with the exit status of the remote command or with 255 if an error occurred.
So 125 is most likely the exit status of the command you were executing through ssh, rather than an error from ssh itself (assuming your shell is Bash, codes 126 and above are reserved by the shell). See the Bash exit code conventions.
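A quick way to see the difference, assuming Bash; deploy-server and unreachable-host are placeholders, and the broken docker flag is just one example of a remote command that itself returns 125:
$ ssh deploy-server 'docker run --bogus-flag alpine true'
$ echo $?    # the remote command's own status, e.g. 125 when docker rejects the flag
$ ssh -p 1 unreachable-host true
$ echo $?    # 255, the code ssh reserves for its own connection failures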

Related

Cannot store heroku credentials in container login

I am trying to login to Heroku container with the command heroku container:login and I am encountering the following error:
Error saving credentials: error storing credentials - err: exit status 1, out: `Post "http://ipc/registry/credstore-updated": dial unix backend.sock: connect: no such file or directory`
▸ Login failed with: 1
I was able to log in successfully weeks ago, but I upgraded my Mac and I am not sure if that changed the behavior.
I am running Docker Desktop 4.13 on macOS Ventura. Has anybody encountered a similar issue?
I had to reset Docker to its default factory configuration, then I updated Docker to version 4.14.1 and ran heroku container:login.
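If a reset and update alone don't help, a minimal sketch for inspecting the credential helper that the error message points at (macOS Docker Desktop defaults assumed):
$ cat ~/.docker/config.json          # the "credsStore" entry is usually "desktop" on macOS
$ docker-credential-desktop list     # should print stored registry credentials, not an error
$ heroku container:login             # retry the login afterwards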

Hyperledger Fabric: Error trying to deploy javascript chaincode on Ubuntu

After bringing up the network and creating the channel, I try to deploy the JavaScript chaincode but it returns an error.
The specific error is:
Error: Chaincode install failed with status 500 - error in simulation failed to execute transaction d66197cd7608c1b939d3b78dd3b46e72a8afdf45cd80f86fb025bbbbfc4abd52: error sending: timeout expired while executing transaction
Does anyone know a solution?

OpenShift 4 error: Error reading manifest

During an OpenShift installation from a local mirror registry, after I started the bootstrap machine I see the following error in the journal log:
release-image-download.sh[1270]:
Error: error pulling image "quay.io/openshift-release-dev/ocp-release@sha256:999a6a4bd731075e389ae601b373194c6cb2c7b4dadd1ad06ef607e86476b129":
unable to pull quay.io/openshift-release-dev/ocp-release@sha256:999a6a4bd731075e389ae601b373194c6cb2c7b4dadd1ad06ef607e86476b129: unable to pull image:
Error initializing source docker://quay.io/openshift-release-dev/ocp-release@sha256:999a6a4bd731075e389ae601b373194c6cb2c7b4dadd1ad06ef607e86476b129:
(Mirrors also failed: [my registry:5000/ocp4/openshift4@sha256:999a6a4bd731075e389ae601b373194c6cb2c7b4dadd1ad06ef607e86476b129: Error reading manifest
sha256:999a6a4bd731075e389ae601b373194c6cb2c7b4dadd1ad06ef607e86476b129 in my registry:5000/ocp4/openshift4: manifest unknown: manifest unknown]):
quay.io/openshift-release-dev/ocp-release@sha256:999a6a4bd731075e389ae601b373194c6cb2c7b4dadd1ad06ef607e86476b129: error pinging docker registry quay.io:
Get "https://quay.io/v2/": dial tcp 50.16.140.223:443: i/o timeout
Does anyone have any idea what it can be?
The answer is here in the error:
... dial tcp 50.16.140.223:443: i/o timeout
Try this on the command line:
$ podman pull quay.io/openshift-release-dev/ocp-release@sha256:999a6a4bd731075e389ae601b373194c6cb2c7b4dadd1ad06ef607e86476b129
You'll need to be authenticated to actually download the content (this is what the pull secret does). However, if you can't even get as far as the "unauthenticated" error, that points more solidly to a network configuration problem.
That IP resolves to a quay host (you can verify that with "curl -k https://50.16.140.223"). Perhaps you have an internet filter or firewall in place that's blocking egress?
Resolutions:
fix your network issue, if you have one
look at doing a disconnected/airgap install -- https://docs.openshift.com/container-platform/4.7/installing/installing-mirroring-installation-images.html has more details on that
(If you're already doing an airgap install and it's your local mirror that's failing, troubleshoot the mirror itself; see the sketch below.)
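If it is the local mirror that fails, you can query it directly; my-registry:5000 stands in for the redacted registry host in the error above, and the digest is the one from the error message:
$ curl -k https://my-registry:5000/v2/ocp4/openshift4/manifests/sha256:999a6a4bd731075e389ae601b373194c6cb2c7b4dadd1ad06ef607e86476b129
$ podman pull my-registry:5000/ocp4/openshift4@sha256:999a6a4bd731075e389ae601b373194c6cb2c7b4dadd1ad06ef607e86476b129
# "manifest unknown" from either command means the release image was never mirrored into that repository.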

Jenkins slave pod on kubernetes randomly failing

I have set up a Jenkins master (on a VM), and it provisions JNLP slaves as Kubernetes pods.
On very rare occasions, the pipeline fails with this message:
java.io.IOException: Pipe closed
at java.io.PipedInputStream.checkStateForReceive(PipedInputStream.java:260)
at java.io.PipedInputStream.receive(PipedInputStream.java:226)
at java.io.PipedOutputStream.write(PipedOutputStream.java:149)
at java.io.OutputStream.write(OutputStream.java:75)
at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.setupEnvironmentVariable(ContainerExecDecorator.java:510)
at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.doLaunch(ContainerExecDecorator.java:474)
at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.launch(ContainerExecDecorator.java:333)
at hudson.Launcher$ProcStarter.start(Launcher.java:455)
Viewing the Kubernetes logs in Stackdriver, one can see that the pod does manage to connect to the master, e.g.
Handshaking
Agent discovery successful
Trying protocol: JNLP4-Connect
Remote Identity confirmed: <some_hash_here>
Connecting to <jenkins-master-url>:49187
started container
loading plugin ...
but after a while it fails and here are the relevant logs:
org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave$SlaveDisconnector call
INFO: Disabled slave engine reconnects.
hudson.remoting.jnlp.Main$CuiListener status
Terminated
hudson.remoting.Request$2 run
Failed to send back a reply to the request hudson.remoting.Request$2#336ec321: hudson.remoting.ChannelClosedException: Channel "hudson.remoting.Channel#29d0e8b2:JNLP4-connect connection to <jenkins-master-url>/<public-ip-of-jenkins-master>:49187": channel is already closed
"Processing signal 'terminated'"
.
.
.
How can I further troubleshoot this random error?
Can you take a look at the Kubernetes pod events in Stackdriver? We saw similar behavior with a different CI system (GitLab CI): our builds were also failing randomly. It turned out that the JVM inside the container exceeded its memory limit and was killed by Kubernetes (OOMKilled), and the CI system reported this as a build error.
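If you want to confirm an OOM kill outside Stackdriver, a kubectl-based check works as well; the pod name and namespace below are placeholders for the failing agent pod:
$ kubectl -n jenkins describe pod jnlp-agent-abc123 | grep -A 5 'Last State'
# look for "Reason: OOMKilled" and compare the container memory limit with the JVM heap settings
$ kubectl -n jenkins get events --field-selector involvedObject.name=jnlp-agent-abc123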

Gitlab Runner fails with ERROR: Job failed (system failure): Internal error occurred: connection reset by peer

I get this error from time to time when running builds with my dedicated runners, running on GKE.
What could be the problem here?
Is it related to the Gitlab instance or is it more a problem on the cluster side?
ERROR: Job failed (system failure): Internal error occurred: error executing command in container: error during connect: Get http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.27/exec/f9ee0d021b8a6d7660d2334456a93f61108835077574545bcf00a484b45f5247/json: read unix #->/var/run/docker.sock: read: connection reset by peer
I see the same thing randomly. Some jobs are more likely than others to trigger it, but a retry usually works fine.
