How do I use the CloudBees SDK (command line interface) on Jenkins

I am running a job in DEV#cloud on cloudbees - but I want to use the CloudBees SDK/CLI, how do I do this from within the job?

The "Scripting the Bees SDK in Jenkins" documentation includes a good write-up about this.
First, use a freestyle job, and install the Bees SDK as part of it:
# INSTALL AND CONFIGURE BEES SDK
export BEES_HOME=/opt/cloudbees/cloudbees-sdk/
export PATH=$PATH:$BEES_HOME
if [ ! -d ~/.bees ]; then
  bees init -f -a <your account name here> -ep us -k $BEES_API -s $BEES_SECRET
fi
Then set the BEES_API and BEES_SECRET secrets in your Jenkins configuration, and you can use bees SDK commands.
You can install any plugins you need from here as well.
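For example, once the SDK is initialized, later build steps can call it directly (a sketch; the app name and WAR path below are hypothetical placeholders, and exact subcommands depend on your SDK version):
# EXAMPLE USAGE (app name and WAR path are placeholders)
bees app:list
bees app:deploy -a <your account name here>/myapp target/myapp.war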

Related

Trigger step in Bitbucket pipelines

I have a CI pipeline in Bitbucket which is building, testing and deploying an application.
The thing is that after the deploy I want to run selenium tests.
Selenium tests are in an another repository in Bitbucket and they have their own pipeline.
Is there a trigger step in the Bitbucket pipeline to trigger a pipeline when a previous one has finished?
I do not want to do a fake push to the test repository to trigger those tests.
The most "correct" way I can think of doing this is to use the Bitbucket REST API to manually trigger a pipeline on the other repository, after your deployment completes.
There are several examples of how to create a pipeline here: https://developer.atlassian.com/bitbucket/api/2/reference/resource/repositories/%7Bworkspace%7D/%7Brepo_slug%7D/pipelines/#post
Copying the first example, here is how to trigger a pipeline for the latest commit on master:
$ curl -X POST -is -u username:password \
  -H 'Content-Type: application/json' \
  https://api.bitbucket.org/2.0/repositories/jeroendr/meat-demo2/pipelines/ \
  -d '
  {
    "target": {
      "ref_type": "branch",
      "type": "pipeline_ref_target",
      "ref_name": "master"
    }
  }'
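The same endpoint can also start a specific custom pipeline instead of the branch's default one; a sketch, assuming the target repository defines a custom pipeline named run-selenium in its bitbucket-pipelines.yml:
curl -X POST -is -u username:password \
  -H 'Content-Type: application/json' \
  https://api.bitbucket.org/2.0/repositories/<workspace>/<repo_slug>/pipelines/ \
  -d '
  {
    "target": {
      "type": "pipeline_ref_target",
      "ref_type": "branch",
      "ref_name": "master",
      "selector": {
        "type": "custom",
        "pattern": "run-selenium"
      }
    }
  }'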
According to the official documentation there is no "easy way" to do that, because jobs are isolated to the scope of one repository. Yet you can achieve your task in the following way:
create a docker image with the minimum required setup for executing your tests inside
upload it to Docker Hub (or some other registry if you have one)
use the docker image in the last step of your pipeline, after deploy, to execute the tests, as sketched below
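A minimal sketch of that last step, assuming a hypothetical image yourteam/selenium-tests published to Docker Hub:
- step:
    name: Run Selenium tests after deploy
    image: yourteam/selenium-tests:latest
    script:
      - npm test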
Try out the official component, the Bitbucket trigger-pipeline pipe: https://bitbucket.org/product/features/pipelines/integrations?p=atlassian/trigger-pipeline
You can run it in an after-deploy step:
script:
  - pipe: atlassian/trigger-pipeline:4.1.7
    variables:
      BITBUCKET_USERNAME: $BITBUCKET_USERNAME
      BITBUCKET_APP_PASSWORD: $BITBUCKET_APP_PASSWORD
      REPOSITORY: 'your-awesome-repo'
      ACCOUNT: 'teams-in-space'
@BigGinDaHouse I did something more or less like you say.
My step is built on top of a docker image with headless Chrome, npm and git.
I followed the steps below:
I set a private key for the remote repo in the original repo, encoded in base64 (see the Bitbucket documentation). The public key is set on the remote repo under the SSH Access option in the Bitbucket menu.
In the pipeline step I decode it into a file and change its permissions to 400.
I add this key inside the docker image with ssh-add.
Then I am able to do a git clone followed by npm install and npm test.
NOTE: The entry.sh is there because I am starting the headless browser.
- step:
    image: kimy82/headless-selenium-npm-git
    script:
      - echo $key_in_env_variable_in_bitbucket | base64 --decode > priv_key
      - chmod 400 ./priv_key
      - eval `ssh-agent -s`
      - ssh-add priv_key
      - git clone git@bitbucket.org:project.git
      - cd project
      - nohup bash /usr/bin/entry.sh >> out.log &
      - npm install
      - npm test
The top answers (this and this) are correct; they work.
Just adding that we found out (after a LOT of trial and error) that the user executing the pipeline must have WRITE permissions on the repo where the pipeline is invoked (even though the app password permissions were set to "WRITE" for repos and pipelines).
Also, this works for executing pipelines in Bitbucket cloud or on-premise, through local runners.
(Answering as I am lacking reputation for commenting)

Jenkins pipeline job gets triggered as anonymous but not as an user/Admin

A Jenkins Pipeline job doesn't trigger another pipeline job using the Jenkins CLI. When I run Jenkins as anonymous this works, but when I create a user/admin it fails.
I have a job A which has parameters and passes the same to the pipeline job. This is a master-slave setup. This is how I run it:
sudo java -jar /home/user/jenkins-cli.jar -s $JENKINS_URL build pipeline_job -p parameter_Name="$parameter_Name" -p parameter_Name2="$parameter2_Name"
1.) I tried using the options "-auth" and "-username -password", but they don't work.
errors:
No such command: -auth
No such command: -ssh
2.) Another option is to paste the public key in the SSH section at http://jenkin_url/me/configure, but it still fails.
error:
java.io.IOException: Invalid PEM structure, '-----BEGIN...' missing
Is there anything I am missing?
I found the solution.
1.) Using the SSH CLI.
In my case I was using a master-slave environment, with the connection made using SSH keys in both directions. In order to trigger the build using the Jenkins CLI, add the public SSH key at http://jenkinsURL/user/username/configure and keep the matching private key available to the CLI.
Here username = the one used to connect the nodes.
Trigger the job as below:
java -jar /home/username/jenkins-cli.jar -s $JENKINS_URL -i /home/username/.ssh/id_rsa build JOBNAME
Note: This is one way, but CloudBees doesn't encourage this approach.
2.) There is a new approach, i.e. using API token authentication.
Go to http://jenkinsURL/user/username/configure
Copy the API token
Trigger the build as below:
java -jar /home/username/jenkins-cli.jar -s $JENKINS_URL -auth username:apitoken build JOBNAME
Note: To use the API token option, download the latest jenkins-cli.jar.
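If you'd rather keep the token off the command line, the CLI also accepts credentials from a file via -auth @/path/to/file (a sketch; the file path is a placeholder, and the file should contain username:apitoken on a single line):
java -jar /home/username/jenkins-cli.jar -s $JENKINS_URL -auth @/home/username/.jenkins-cli-auth build JOBNAME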

Jenkins Pipeline - Enabling Cloudfoundry deployment

I installed Blue Ocean from the docker image (docker pull jenkinsci/blueocean). I wanted to include a Cloud Foundry deployment step (sh cf push) in my pipeline and got stuck with the error:
script.sh: line 1: cf: not found
I know what's happening: as there is no compatible CF CLI plug-in, the cf command in the script is not found. I tried different things:
In my Jenkinsfile, I tried using the Cloud Foundry plug-in (CloudFoundryPushPublisher), which is supported in non-pipeline builds. That didn't help.
step([$class: 'com.hpe.cloudfoundryjenkins.CloudFoundryPushPublisher',
      target: 'https://api.ng.bluemix.net',
      organization: 'xxxx',
      cloudSpace: 'xxxxx',
      credentialsId: 'xxxxxx',
      selfSigned: true,
      resetIfExists: true]);
That failed with an Invalid Argument exception.
My question is: I heard CloudBees has a commercial version that supports the CF CLI, but that ability is missing from Blue Ocean. So how should I push deployments to Cloud Foundry using a Pipeline job?
I'm not sure whether you already fixed the issue, but I just installed the cf CLI manually on the Jenkins machine and used cf push as a shell script, like:
sh 'cf login -u xxx -p xxx -s space -o org'
sh 'cf push appname ...'
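If you can't install tools on the Jenkins machine (for example when building inside an ephemeral docker agent), a hedged alternative is to download the CLI within the pipeline itself; the URL below is Cloud Foundry's published linux64 binary endpoint, and the app name is a placeholder:
sh '''
  # Download the cf CLI into the workspace and use it directly
  curl -L "https://packages.cloudfoundry.org/stable?release=linux64-binary&source=github" | tar -zx
  ./cf login -u xxx -p xxx -s space -o org
  ./cf push appname
'''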

Syncing TortoiseHG with Jenkins

I'm new to this continuous integration thing. I want to use Jenkins as my CI system, but I can't get it to pull the build every time there's a new one.
Using Mercurial's plugin I'm able to connect to my repository and pull my builds normally, but I don't want Jenkins to keep polling; I want it to update the build only when there's a new one instead. On the plugin's wiki I found this:
As of version 1.38 it's possible to trigger builds using push
notifications instead of polling. In your repository's .hg/hgrc file
add:
[hooks]
commit.jenkins = wget -q -O /dev/null <jenkins root>/mercurial/notifyCommit?url=<repository remote url>
incoming.jenkins = wget -q -O /dev/null <jenkins root>/mercurial/notifyCommit?url=<repository remote url>
For now I'm keeping Jenkins local, so I used this in my hgrc file:
commit.jenkins = wget -q -O /dev/null http://localhost:8080/mercurial/notifyCommit?url=<my repository remote url>
incoming.jenkins = wget -q -O /dev/null http://localhost:8080/mercurial/notifyCommit?url=<my repository remote url>
But builds aren't being triggered. Could someone help me?
[UPDATE]
I didn't pay attention to the wget command, which doesn't exist on Windows by default. I installed it and it's still the same: Jenkins is not pulling the builds.
You must have wget on PATH (I'd recommend the native port from GOW, not Cygwin, or Bash in Win10)
Your hooks must be in a working state
wget ... must produce the expected result
You now have two possible points of failure and have to test each independently:
Do my hooks work?
Replace the current content of your hooks with a dummy placeholder like
commit.jenkins = echo Commit hook here
incoming.jenkins = echo Incoming hook here
and test the hooks (in the console for better visibility) by committing to the repo with the hook added, and by pulling to, pushing to, or unbundling into it. If you see the hook output, the hooks are usable.
Does Jenkins integration work?
After a commit to the repo you can perform the hook's task by hand: run wget -q -O /dev/null ... and check the results in Jenkins.
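For example (the repository URL is a placeholder; it must match the remote URL Jenkins has configured for the job):
wget -q -O /dev/null "http://localhost:8080/mercurial/notifyCommit?url=http://example.com/hg/myrepo"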

How to retain the build number while migrating a Jenkins job?

We have a Jenkins job running on a Jenkins server instance A. The current build number for this job is, say, 58.
We are migrating this Jenkins job to a new Jenkins server, B. However, there is a need to retain the build number (58) from the previous server on this new Jenkins instance B.
Is this possible? If yes, how?
Thank you
If you only intend to keep the build number intact for the job on the new Jenkins server, you can achieve it simply by writing a script that populates the nextBuildNumber file in $JENKINS_HOME/jobs/<job_name>/ with the build number you wish to have.
Something like this (script.sh):
#!/bin/bash -x
JENKINS_HOME=/var/lib/jenkins
# Copy the job's top-level files (config.xml, nextBuildNumber, ...) into the new job
mkdir -p $JENKINS_HOME/jobs/<new_job> && cp $JENKINS_HOME/jobs/<old_job>/* $JENKINS_HOME/jobs/<new_job>/
OLD_BUILD_NO=`cat $JENKINS_HOME/jobs/<old_job>/nextBuildNumber`
NEW_BUILD_NO=`expr $OLD_BUILD_NO - 1`
echo $NEW_BUILD_NO > $JENKINS_HOME/jobs/<new_job>/nextBuildNumber
chown -R jenkins:jenkins $JENKINS_HOME/jobs/<new_job>/
Now run this script as:-
sudo bash script.sh
Although this creates the required job on the same Jenkins server instance, the basic idea is the same: populate the nextBuildNumber file.
The accepted answer, modifying the nextBuildNumber file, sadly didn't work for me, but I found this answer by jayan in another Stack Overflow question:
https://stackoverflow.com/a/34951963
Try running the below script in the Jenkins Script Console. Change "workFlow" to your job name:
def job = Jenkins.instance.getItem("workFlow")
job.nextBuildNumber = 10
job.saveNextBuildNumber()
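If the job lives inside a folder, getItem won't find it by its short name; a variant using the full path (the folder name here is a hypothetical example):
def job = Jenkins.instance.getItemByFullName("my-folder/workFlow")
job.nextBuildNumber = 10
job.saveNextBuildNumber()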
