I have a Jenkins job that executes with 'Publish over SSH'. The job connects to the remote server, transfers files, and runs an Ansible playbook.
The playbook runs as intended, as confirmed by the logs. However, at the end of the job an error is returned, failing the job. This is causing problems, as it prevents the pipeline from working correctly.
SSH: EXEC: completed after 402,593 ms
SSH: Disconnecting configuration [server] ...
ERROR: Exception when publishing, exception message [Exec exit status not zero. Status [2]]
Build step 'Send files or execute commands over SSH' changed build result to UNSTABLE
[Run Playbook] $ /bin/sh -xe /tmp/jenkins1528195779014969962.sh
+ echo Finished
Finished
Finished: UNSTABLE
Is there a setting I'm missing that would allow this to pass?
I have never used the 'Publish over SSH' plugin you are referring to, but I can recommend the Jenkins Ansible Plugin. I am successfully running several playbooks in pipeline stages here from labeled build slaves (one dedicated slave has Ansible installed), targeting Linux hosts on cloud infrastructure via SSH.
Especially in combination with the ANSI color plugin, the output is very readable.
If you cannot try that plugin, check what the return code of the playbook's shell call is.
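For example, a minimal sketch of what the remote Exec command could look like if you want to surface the playbook's own exit code before 'Publish over SSH' interprets it; the playbook and inventory paths below are just placeholders, not taken from the job:

# Placeholder playbook and inventory paths; substitute whatever the job actually transfers.
ansible-playbook -i inventory.ini site.yml
rc=$?
echo "ansible-playbook exited with status $rc"
# 'Publish over SSH' fails the step on any non-zero exit status of the Exec command,
# so make sure nothing that runs after the playbook overwrites this code before it is returned.
exit $rc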
Related
I have successfully set up a build with Jenkins (version 2.375.1) that is triggered by a GitHub webhook. Dockerized Jenkins is running on an Ubuntu VM locally.
If I push from my local machine to GitHub, this initiates a build using the build step "Execute shell script on remote host using ssh" on the target AWS host and runs some steps to install the application. However, if I leave it too long, the job times out. If I make a change and push again, or if I just hit Build Now, then the build is successful.
It seems like the connection from Jenkins to AWS is going to sleep and requires the first attempt to "wake it up". I can't find any reference to this behaviour anywhere.
At the end of the console output....
[SSH] executing...
[SSH] Exception:Session.connect: java.net.SocketTimeoutException: Read timed out
com.jcraft.jsch.JSchException: Session.connect: java.net.SocketTimeoutException: Read timed out
at com.jcraft.jsch.Session.connect(Session.java:565)
at org.jvnet.hudson.plugins.CredentialsSSHSite.createSession(CredentialsSSHSite.java:132)
at org.jvnet.hudson.plugins.CredentialsSSHSite.executeCommand(CredentialsSSHSite.java:208)
at org.jvnet.hudson.plugins.SSHBuilder.perform(SSHBuilder.java:104)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:818)
at hudson.model.Build$BuildExecution.build(Build.java:199)
at hudson.model.Build$BuildExecution.doRun(Build.java:164)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:526)
at hudson.model.Run.execute(Run.java:1900)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:44)
at hudson.model.ResourceController.execute(ResourceController.java:107)
at hudson.model.Executor.run(Executor.java:449)
Build step 'Execute shell script on remote host using ssh' marked build as failure
Finished: FAILURE
I have a Jenkins Job DSL job that worked well until about January (it is not used that often). Last week, the job failed with the error message ERROR: java.io.IOException: Failed to persist config.xml (no stack trace, just that message). There were no changes to the job since the last successful execution in January.
[...]
13:06:22 Processing provided DSL script
13:06:22 New run name is '#15 (Branch_B20_2_x)'
13:06:22 ERROR: java.io.IOException: Failed to persist config.xml
13:06:22 [WS-CLEANUP] Deleting project workspace...
13:06:22 [WS-CLEANUP] Deferred wipeout is used...
13:06:22 [WS-CLEANUP] done
13:06:22 Finished: FAILURE
I thought that between January and now, maybe some plugin was updated and the DSL script was now invalid, so I changed my DSL script to the simplest one I could imagine (the example from the job-dsl plugin page):
job('example') {
    steps {
        shell('echo Hello World!')
    }
}
But the job still fails with the exact same error.
I checked the Jenkins logs, but there was nothing to see.
I am running Jenkins in a Docker Swarm container, and each job is executed in its own build agent container using the docker-swarm-plugin (no changes to that either; it worked in January).
The Docker daemon logs also show no errors.
The filesystem for the Jenkins workspace is not full, and the user in the build agent container has write access to that filesystem.
It does not even work when I mount an empty tmpfs to the workspace.
Does anyone have an idea what is going wrong, or at least a hint where to continue searching for this error?
Jenkins version: 2.281
job-dsl plugin version: 1.77
Docker version: 20.10.4
The problem was solved by updating Jenkins to 2.289.
It seems like there was some problem with the previous combination of versions. I will keep you updated if any of the next updates changes anything.
Can you please help? I have the following scenario, and I have gone through many videos and blogs but could not find anything matching my use case.
Requirement:
To write a CI/CD pipeline in GitLab that can run the following stages in this order:
- verify # unit test, sonarqube, pages
- build # package
- publish # copy artifact in repository
- deploy # Deploy artifact on runtime in a test environment
- integration # run postman/integration tests
All the other stages are fine and working, but for the deploy stage, because of a few restrictions, I have to trigger an existing Jenkins job using the Jenkins remote API with the following script. The problem is that the call returns an asynchronous response: it starts the Jenkins job, the deploy stage completes immediately, and the pipeline moves on to the next stage (integration).
Run Jenkins Job:
  image: maven:3-jdk-8
  tags:
    - java
  environment: development
  stage: deploy
  script:
    - artifact_no=$(grep -m1 '<version>' pom.xml | grep -oP '(?<=>).*(?=<)')
    - curl -X POST http://myhost:8081/job/fpp/view/categorized/job/fpp_PREP_party/build --user mkumar:1121053c6b6d19bf0b3c1d6ab604f22867 --data-urlencode json="{\"parameter\":[{\"name\":\"app_version\",\"value\":\"$artifact_no\"}]}"
Note: I am using the GitLab CE edition, and the Jenkins CI project service is not available.
I am looking for a way to trigger the Jenkins job from the pipeline so that my integration stage starts executing only after the Jenkins job has completed successfully.
Thanks for the help!
Retrieving the status of a Jenkins job that is triggered programmatically through the remote access API is notorious for being quite convoluted.
Normally you would expect to receive, in the response header under the Location attribute, a URL that you can poll to get the status of your request, but unfortunately there are some in-between steps to reach that point. You can find a guide in this post. You may also have a look at this older post.
Once you have the URL, you can poll it and parse the job status, and then run either exit 1 or exit 0 in your script to force the job that is invoking the external job to fail or succeed, depending on how you want to assert the result of the remote job.
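As a rough illustration of that flow, the single curl call in the deploy stage could be replaced with something like the sketch below. It is only a sketch under assumptions: it relies on the queue-item and build JSON endpoints of the Jenkins remote API, assumes curl and jq are available in the job image, and APITOKEN stands in for the real credentials; the host and job path are the ones from the question.

# Trigger the remote job and capture the Location header, which points to a queue item.
QUEUE_URL=$(curl -s -i -X POST \
  "http://myhost:8081/job/fpp/view/categorized/job/fpp_PREP_party/build" \
  --user "mkumar:APITOKEN" \
  --data-urlencode json="{\"parameter\":[{\"name\":\"app_version\",\"value\":\"$artifact_no\"}]}" \
  | awk 'tolower($1) == "location:" {print $2}' | tr -d '\r')

# Wait until the queue item has been turned into an actual build and extract its URL.
BUILD_URL=""
while [ -z "$BUILD_URL" ]; do
  sleep 10
  BUILD_URL=$(curl -s --user "mkumar:APITOKEN" "${QUEUE_URL}api/json" | jq -r '.executable.url // empty')
done

# Poll the build until Jenkins sets a result, then pass or fail this GitLab stage accordingly.
RESULT="null"
while [ "$RESULT" = "null" ] || [ -z "$RESULT" ]; do
  sleep 30
  RESULT=$(curl -s --user "mkumar:APITOKEN" "${BUILD_URL}api/json" | jq -r '.result')
done

echo "Remote Jenkins build finished with result: $RESULT"
[ "$RESULT" = "SUCCESS" ] || exit 1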
I am sitting behind a corporate proxy.
So basically the test is just page.goto("https://"). I created a Freestyle project and made it execute this code on the command line:
cd C:/Testing
npm test -- Jenkins
If I run these commands on the machine where Jenkins is running, everything works and the test passes. However, if I run them via Jenkins, this error appears:
Building in workspace C:\Program Files (x86)\Jenkins\workspace\Main Project
[Main Project] $ cmd /c call C:\Windows\TEMP\jenkins7770552654778169643.bat
C:\Program Files (x86)\Jenkins\workspace\Main Project>cd C:/Testing
C:\Testing>npm test -- Jenkins
> testing#1.0.0 test C:\Testing
> jest "Jenkins"
FAIL .\Jenkins.test.js (5.023s)
Describe
× Test (513ms)
● Describe › Test
net::ERR_TUNNEL_CONNECTION_FAILED at https://google.com/
at navigate (node_modules/puppeteer/lib/Page.js:539:37)
Test Suites: 1 failed, 1 total
Tests: 1 failed, 1 total
Snapshots: 0 total
Time: 6.587s
Ran all test suites matching /Jenkins/i.
npm ERR! Test failed. See above for more details.
Build step 'Execute Windows batch command' marked build as failure
Finished: FAILURE
I checked the Jenkins HTTP Proxy Configuration and it was set up correctly (host, port and credentials are all correct), but if I enter https://google.com/ in the Test URL field and press "Validate Proxy", this error appears:
Failed to connect to https://google.com/ (code 407).
Is there something that could be refusing Jenkins' requests to websites? The Windows Defender firewall is disabled.
One of the steps in my Jenkins job is to run the sloccount command.
I get the error "sloccount: command not found", but when I try to run it from the command line it works.
It seems that sloccount isn't installed on the machine where the job is executed.
If you have a master/slave setup, sloccount has to be installed on the slave.
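For example, a quick way to check this directly on the node that runs the job (assuming a Debian/Ubuntu based agent, where the package is simply called sloccount) would be a shell build step like:

# Show whether sloccount is on the PATH of the node actually executing the build
which sloccount || echo "sloccount is not installed on this node"

# If it is missing, install it on a Debian/Ubuntu agent (requires appropriate privileges)
sudo apt-get update && sudo apt-get install -y sloccount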