How to use the Publish over SSH plugin in a Jenkins pipeline

I would like to SSH to a Linux server from Jenkins hosted on Windows and execute a command on the Linux machine. I installed the Publish over SSH plugin and tested the connection in the global config, and it works fine, but I don't know how to proceed next in a pipeline. Any help would be appreciated.

If you are using a pipeline project and a Jenkinsfile, then all you need to do is go into your project in Jenkins and click Configure. In the pipeline section of the configuration, at the bottom, there is a "Pipeline Syntax" link. It will take you to the Snippet Generator. It's self-explanatory, and in our case it lets you generate "publish over ssh" snippets that you can add to your Jenkinsfile (add them to a steps section inside a stage definition). In the generator you can define what to publish, options to run a shell command, and so on.
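For example, a generated snippet dropped into a stage looks roughly like this (a sketch; "my-ssh-server" stands in for whatever configuration name you created in the global config, and the command is a placeholder):

stage('Deploy over SSH') {
    steps {
        sshPublisher(publishers: [
            sshPublisherDesc(
                configName: 'my-ssh-server',
                // execCommand runs on the remote Linux machine after any transfers
                transfers: [sshTransfer(execCommand: 'uptime')]
            )
        ])
    }
}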

In case you were looking for declarative pipeline (Jenkinsfile) syntax for Publish over SSH, instead of the scripted pipeline (which is all I could find), here's what finally worked for me.
pipeline {
    agent any
    environment {
        RELEASENAME = "yourProject-ci"
    }
    stages {
        stage("Get the charts...") {
            steps { checkout scm }
        }
        stage('SSH transfer') {
            steps([$class: 'BapSshPromotionPublisherPlugin']) {
                sshPublisher(
                    continueOnError: false, failOnError: true,
                    publishers: [
                        sshPublisherDesc(
                            configName: "kubernetes_master",
                            verbose: true,
                            transfers: [
                                sshTransfer(execCommand: "/bin/rm -rf /opt/deploy/helm"),
                                sshTransfer(sourceFiles: "helm/**")
                            ]
                        )
                    ]
                )
            }
        }
        stage('Deploy Helm Scripts') {
            steps([$class: 'BapSshPromotionPublisherPlugin']) {
                sshPublisher(
                    continueOnError: false, failOnError: true,
                    publishers: [
                        sshPublisherDesc(
                            configName: "kubernetes_master",
                            verbose: true,
                            transfers: [
                                sshTransfer(execCommand: "cd /opt/deploy/helm; helm upgrade ${RELEASENAME} . --install")
                            ]
                        )
                    ]
                )
            }
        }
    }
}
I have a checkout that happens first, then I copy some Helm charts from the checkout to my Kubernetes master, and then run the charts.
configName: "kubernetes_master" is something I set up in the Publish over SSH plugin configuration section (found under Manage Jenkins > Configure System) so I could reference it. It includes a username, SSH key, destination hostname, and base directory for the destination, which I set to /opt/deploy.
FYI, execCommand does not use the base directory; it assumes you will use full paths.
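To make that concrete, here are the two kinds of transfers from the pipeline above, annotated:

sshTransfer(sourceFiles: "helm/**")                      // copied under the base directory: /opt/deploy/helm/...
sshTransfer(execCommand: "/bin/rm -rf /opt/deploy/helm") // runs verbatim, so the path must be absolute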
Hope that helps.
edit: I should probably mention that there are many more options for sshPublisher than the ones I used. You can find them here: https://jenkins.io/doc/pipeline/steps/publish-over-ssh/

Based on levi's answer, the below has worked for me.
stage('Deploy') {
    agent any
    steps {
        sh 'mv target/my-app-0.0.1-SNAPSHOT.jar my-app.jar'
        sshPublisher(
            continueOnError: false,
            failOnError: true,
            publishers: [
                sshPublisherDesc(
                    configName: "my-ssh-connection",
                    transfers: [sshTransfer(sourceFiles: 'my-app.jar')],
                    verbose: true
                )
            ]
        )
    }
}

I got this question some time ago, and here is the answer. Change the code according to your requirements.
pipeline {
    agent any
    options { timestamps() }
    stages {
        stage('Publish over ssh plugin in pipeline') {
            steps([$class: 'BapSshPromotionPublisherPlugin']) {
                script {
                    List SERVERS_LIST = ["Server_1", "Server_2"]
                    for (cr_server in SERVERS_LIST) {
                        sshPublisher(
                            publishers: [
                                sshPublisherDesc(
                                    configName: cr_server,
                                    transfers: [
                                        sshTransfer(
                                            cleanRemote: false,
                                            excludes: '',
                                            execCommand: '',
                                            execTimeout: 120000,
                                            flatten: false,
                                            makeEmptyDirs: false,
                                            noDefaultExcludes: false,
                                            patternSeparator: '[, ]+',
                                            remoteDirectory: '',
                                            remoteDirectorySDF: false,
                                            removePrefix: '',
                                            sourceFiles: '**/*'
                                        )
                                    ],
                                    usePromotionTimestamp: false,
                                    useWorkspaceInPromotion: false,
                                    verbose: false
                                )
                            ]
                        )
                    }
                }
            }
        }
    }
}

I don't know how helpful this'll be but I found a tutorial on something that should work until they have a nicer way to do it.

Related

How do I dynamically load a Jenkins pipeline library from Perforce? [duplicate]

In continuation to jenkins-pipeline-syntax-for-p4sync, I am not able to get the "Poll SCM" option to work for my pipeline job.
Here is my configuration:
"Poll SCM" is checked and set to poll every 10 minutes
Pipeline script contains the following:
node('some-node') { // not actual value
    stage('checkout') {
        checkout([
            $class: 'PerforceScm',
            credential: '11111111-1111-1111-1111-11111111111', // not actual value
            populate: [
                $class: 'AutoCleanImpl',
                delete: true,
                modtime: false,
                parallel: [
                    enable: false,
                    minbytes: '1024',
                    minfiles: '1',
                    path: '/usr/local/bin/p4',
                    threads: '4'
                ],
                pin: '',
                quiet: true,
                replace: true
            ],
            workspace: [
                $class: 'ManualWorkspaceImpl',
                charset: 'none',
                name: 'jenkins-${NODE_NAME}-${JOB_NAME}',
                pinHost: false,
                spec: [
                    allwrite: false,
                    clobber: false,
                    compress: false,
                    line: 'LOCAL',
                    locked: false,
                    modtime: false,
                    rmdir: false,
                    streamName: '',
                    view: '//Depot/subfolder... //jenkins-${NODE_NAME}-${JOB_NAME}/...' // not actual value
                ]
            ]
        ])
    }
    stage('now do something') {
        sh 'ls -la'
    }
}
Ran the job manually once
Still, polling does not work, and the job does not have a "Perforce Software Polling Log" link like a non-pipeline job has when the Perforce source and Poll SCM are configured in the GUI.
It's like the PerforceScm step is missing a poll: true setting, or I'm doing something wrong.
Currently I have a workaround in which I poll Perforce in a non-pipeline job that triggers the pipeline job, but then I have to pass the changelists manually, and I would rather have the pipeline job do everything.
edit: versions
jenkins - 2.7.4
P4 plugin - 1.4.8
Pipeline plugin - 2.4
Pipeline SCM Step plugin - 2.2
If you go to the Groovy snippet generator and check the "include in polling" checkbox, you'll see that the generated code includes a line item for it:
checkout([
poll: true,
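Merged into the configuration from the question, the change is that one flag at the top of the checkout call (a sketch; the credential and view are the question's placeholder values, and the workspace name drops ${NODE_NAME} per the note below):

checkout([
    poll: true, // enables SCM polling for this pipeline
    $class: 'PerforceScm',
    credential: '11111111-1111-1111-1111-11111111111',
    populate: [$class: 'AutoCleanImpl', delete: true, quiet: true, replace: true],
    workspace: [
        $class: 'ManualWorkspaceImpl',
        charset: 'none',
        name: 'jenkins-${JOB_NAME}',
        pinHost: false,
        spec: [line: 'LOCAL', view: '//Depot/subfolder... //jenkins-${JOB_NAME}/...']
    ]
])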
As an aside, you may run into problems at the moment using ${NODE_NAME} in your workspace name. The polling runs on the master, so it might not properly find the change number of your previous build. If that's the case, I know a fix for it should be coming shortly.
After updating all the plugins to latest (as of this post date) and restarting the jenkins server - the polling appears to be working with the exact same configuration (job now has the poll log link).
I'm not sure what exactly resolved the issue - but I consider it resolved.

Publish over FTP Jenkins plugin is not uploading subfolder

I have a Jenkinsfile that deploys my Angular app code to a site using the Publish over FTP plugin. All of the files in the dist folder are transferred except an assets subfolder. I have tried the following values for the sourceFiles parameter with no success: 'webapp/dist/', 'webapp/dist/**', 'webapp/dist/**/*'.
Here is the publish over FTP part of my Jenkinsfile:
stage('Deploy') {
    steps {
        echo 'Deploying....'
        ftpPublisher paramPublish: null, masterNodeName: '', alwaysPublishFromMaster: true, continueOnError: false, failOnError: true, publishers: [
            [configName: 'Angular app', verbose: true, transfers: [
                [asciiMode: false, cleanRemote: true, makeEmptyDirs: true, excludes: '', flatten: false,
                 noDefaultExcludes: false, patternSeparator: '[, ]+',
                 remoteDirectory: "webapp",
                 removePrefix: "webapp/dist",
                 remoteDirectorySDF: false,
                 sourceFiles: 'webapp/dist/**/*']
            ], usePromotionTimestamp: false, useWorkspaceInPromotion: false]
        ]
    }
}
I've looked at the Publish over FTP pipeline documentation: https://jenkins.io/doc/pipeline/steps/publish-over-ftp/ and couldn't find any parameters that I was missing. I'm stuck.
I was able to solve the issue. I changed the title of the pipeline to all lowercase letters without any spaces. I then changed the file path of the workspace folder to 'C:/jenkinsworkspace/${ITEM_FULL_NAME}' by modifying the workspaceDir entry in the config.xml located in the Jenkins root directory. I stopped the Jenkins service before modifying the config.xml. Both the assets folder and the favicon got generated in the build. It was one of the solutions mentioned in https://github.com/angular/angular-cli/issues/9230. Thanks for your help @Alberto L. Bonfiglio.
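For reference, the entry I changed in config.xml looks like this (the path is specific to my setup):

<workspaceDir>C:/jenkinsworkspace/${ITEM_FULL_NAME}</workspaceDir>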

Jenkins Pipeline sshPublisher: How to get exit code and output of execCommand?

Using Jenkins Pipeline's sshPublisher plugin ("publish over ssh"), is it possible to get the exit code and output of the command ran with execCommand (after artifacts have been transferred over)?
I'm using the plugin as follows:
script {
    echo "Sending artifacts to machine at " + remoteDirectory
    // Use of the ssh publisher plugin over SSH
    sshPublisher(
        failOnError: false,
        publishers: [
            sshPublisherDesc(
                configName: "my-drive",
                transfers: [
                    sshTransfer(
                        sourceFiles: mySourceFilesList,
                        remoteDirectory: remoteDirectory,
                        flatten: true,
                        execCommand: commandToExec,
                        execTimeout: 1800000
                    )
                ],
                sshRetry: [
                    retries: 0
                ]
            )
        ]
    )
    // How can I get the output of execCommand?
    // If the exit code was 1, I want to perform some special steps
    // I'd also like to include the output of the command in these steps
}
The wiki page here says (though this is old, from 2011):
STDOUT and STDERR from the command execution are recorded in the
Jenkins console.
It is a "No" (I can't be sure, but I tried everything I could).
And now I'm happy with this script: ssh user@nas01 su -c "/path/to/command1 arg1 arg2"
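If you need the exit code and output inside the pipeline, here is a sketch of that kind of fallback using the built-in sh step (the host and command are the placeholders from above):

script {
    // returnStatus captures the exit code instead of failing the step on a nonzero exit
    def rc = sh(script: 'ssh user@nas01 su -c "/path/to/command1 arg1 arg2" > cmd.log 2>&1', returnStatus: true)
    def output = readFile('cmd.log').trim()
    if (rc == 1) {
        // react to the failure and reuse the captured output
        echo "Command failed with output: ${output}"
    }
}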

Declarative Jenkinsfile CIFS share

I have another question about the Jenkins pipeline.
How can I publish the build artifacts to a Windows share? In normal build jobs there is a "CIFS Publisher" post-build action. But how can I use it in
post {
    success {
        // publish build artifacts
    }
}
Is there any example?
I've successfully managed it in this way:
cifsPublisher alwaysPublishFromMaster: false, continueOnError: false, failOnError: false, publishers: [[
    configName: 'NAME_OF_THE_CIFS_CONFIG', transfers: [[
        cleanRemote: false,
        excludes: '',
        flatten: false,
        makeEmptyDirs: false,
        noDefaultExcludes: false,
        patternSeparator: '[, ]+',
        remoteDirectory: '$BUILD_NUMBER',
        remoteDirectorySDF: false,
        removePrefix: '',
        sourceFiles: 'myfile'
    ]],
    usePromotionTimestamp: false,
    useWorkspaceInPromotion: false,
    verbose: true
]]
Please use the Snippet Generator in Jenkins to help you. Open the "Pipeline Syntax" link in the job configuration, fill in all the required values, then scroll down and click "Generate Pipeline Script" to get the syntax you need. Last step: copy the generated script into the success clause.
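Putting it together, the generated call ends up inside the success clause, for example (trimmed to the essentials, assuming the remaining parameters keep their defaults; the config name is from the answer above):

post {
    success {
        cifsPublisher publishers: [[
            configName: 'NAME_OF_THE_CIFS_CONFIG',
            transfers: [[sourceFiles: 'myfile']]
        ]]
    }
}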

Reusing stages of a jenkins pipeline in multiple jobs

My team is moving to Jenkins 2, and I am using the pipeline plugin so that our build can live in our repository. Because getting repositories allocated has lots of overhead in our company, we have a single repository with many sub-projects and sub-modules in it.
What I want is separate builds and reporting of JUnit/Checkstyle/etc. results for each sub-module, as well as a final "build and deploy" step for each sub-project putting it all together.
My current plan is to create separate jobs for each sub-module so that they get their own JUnit/Checkstyle/etc. report pages, and then have a multi-job project to orchestrate the sub-module builds for each sub-project. Since all of the sub-projects are simple jar builds, I want to put the bulk of the logic in a common file, let's call it JenkinsfileForJars, at the root of the sub-project. So the repo structure is:
sub-project
    JenkinsfileForJars.groovy
    sub-moduleA
        Jenkinsfile
    sub-moduleB
        Jenkinsfile
My Jenkinsfile contains
def submoduleName = "submoduleA"
def pipeline
node {
    pipeline = load("${env.WORKSPACE}/subproject/JenkinsfileForJars.groovy")
}
pipeline.build()
pipeline.results()
And my JenkinsfileForJars contains
def build() {
    stage('Build') {
        // Run the Gradle build
        dir("subproject") {
            sh "./gradlew ${submoduleName}:build"
        }
    }
}

def results() {
    stage('Results') {
        dir("subproject/${submoduleName}") {
            junit 'build/test-results/TEST-*.xml'
            archive 'build/libs/*.jar'
            publishHTML([allowMissing: false, alwaysLinkToLastBuild: false, keepAll: false, reportDir: 'build/reports/cobertura/', reportFiles: 'frame-summary.html', reportName: 'Cobertura Report'])
            publishHTML([allowMissing: false, alwaysLinkToLastBuild: false, keepAll: false, reportDir: 'build/reports/findbugs/', reportFiles: 'main.html', reportName: 'Findbugs Report'])
            publishHTML([allowMissing: false, alwaysLinkToLastBuild: false, keepAll: false, reportDir: 'build/reports/pmd/', reportFiles: 'main.html', reportName: 'PMD Report'])
            step([$class: 'CheckStylePublisher', pattern: 'build/reports/checkstyle/main.xml', unstableTotalAll: '200', usePreviousBuildAsReference: true])
        }
    }
}
return this;
When I run the Jenkinsfile above I get the following error:
Running on master in /var/lib/jenkins/workspace/jobA
[Pipeline] {
[Pipeline] load
[Pipeline] { (/var/lib/jenkins/workspace/jobA/subproject/JenkinsfileForJars.groovy)
[Pipeline] }
[Pipeline] // load
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
java.lang.NullPointerException: Cannot invoke method build() on null object
As far as I can tell, I am following what is shown in the documentation for loading manual scripts and the example given for a loaded script. I do not understand why my script is null after the load command.
How do I get my Jenkinsfile to load JenkinsfileForJars.groovy?
The problem is related to the SCM checkout, as mentioned by Blake Mitchell in the comment above.
Since you are loading your Groovy functions from a submodule, you will need to check out the submodule first, preferably on a build agent/slave if you would like to keep only bare repos on the master.
def pipeline
node('myAgentLabel') {
    stage('checkout SCM') {
        checkout([
            $class: 'GitSCM',
            branches: scm.branches,
            extensions: scm.extensions + [[$class: 'SubmoduleOption', disableSubmodules: false, parentCredentials: true, recursiveSubmodules: true, reference: '', trackingSubmodules: false]],
            doGenerateSubmoduleConfigurations: false,
            userRemoteConfigs: scm.userRemoteConfigs
        ])
        pipeline = load("${env.WORKSPACE}/path/to/submodule/myGroovyFunctions.groovy")
    }
    pipeline.build()
}
Note that in the checkout example, access to the scm.* attributes also needs to be whitelisted by an administrator in Jenkins (In-process Script Approval).
There might be two possible problems:
1. Why do you put the load inside a node block? Loading the script does not need computational resources, so you do not need it there.
2. The call to build should be put inside a node block, and the call to results should probably also be inside the same node block, to make sure that the correct results are archived (if you use more than one (slave) node).
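Applied to the Jenkinsfile from the question, the second point amounts to moving the calls inside the node block, along these lines:

def submoduleName = "submoduleA"
def pipeline
node {
    pipeline = load("${env.WORKSPACE}/subproject/JenkinsfileForJars.groovy")
    pipeline.build()
    pipeline.results()
}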
(This should probably be a comment below your question but I do not have enough points to add a comment there.)
