Jenkins Pipeline - Enabling Cloud Foundry deployment

I installed Blue Ocean from the Docker image (docker pull jenkinsci/blueocean). I wanted to include a Cloud Foundry deployment step (sh 'cf push') in my pipeline and got stuck with this error:
script.sh: line 1: cf: not found
I know what is happening: there is no compatible CF CLI in the image, so the cf command in the shell step cannot be found. I tried a few different things.
In my Jenkinsfile, I tried using the Cloud Foundry plug-in (CloudFoundryPushPublisher), which is supported in non-pipeline builds, but that did not help:
step([$class: 'com.hpe.cloudfoundryjenkins.CloudFoundryPushPublisher',
      target: 'https://api.ng.bluemix.net',
      organization: 'xxxx',
      cloudSpace: 'xxxxx',
      credentialsId: 'xxxxxx',
      selfSigned: true,
      resetIfExists: true]);
That failed with an Invalid Argument exception.
My question is: I have heard that CloudBees has a commercial version that supports the CF CLI, but that ability seems to be missing from Blue Ocean. How can I push deployments to Cloud Foundry from a Pipeline job?

I'm not sure whether you have already fixed the issue, but I simply installed the cf CLI manually on the Jenkins machine and ran cf push as a shell step, like:
sh 'cf login -u xxx -p xxx -s space -o org'
sh 'cf push appname ...'
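To avoid hard-coding the login in the Jenkinsfile, the same shell-based approach can be wrapped in withCredentials. This is only a minimal sketch, assuming the cf CLI is on the agent's PATH and that a username/password credential with the hypothetical id 'cf-credentials' exists in Jenkins:

node {
    // Assumes the cf CLI is installed on this agent and that a Jenkins
    // username/password credential with id 'cf-credentials' exists (hypothetical id).
    withCredentials([usernamePassword(credentialsId: 'cf-credentials',
                                      usernameVariable: 'CF_USER',
                                      passwordVariable: 'CF_PASS')]) {
        // Single quotes so the shell, not Groovy, expands the secrets.
        sh 'cf login -a https://api.ng.bluemix.net -u "$CF_USER" -p "$CF_PASS" -o org -s space'
        sh 'cf push appname'
    }
}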

Related

Running a CI/CD Jenkins pipeline script produces the error: "error yaml: line 53: mapping values are not allowed in this context"

Step 1: I deployed Jenkins, Rancher, and GitLab servers in my local environment; I want to build a CI/CD pipeline.
Step 2: My project is a web-based management system built with Vue and Gin. The source code has been pushed to the GitLab repository. I wrote a Dockerfile and a Jenkinsfile in my local IDE. The important part is the following statement in my Jenkinsfile:
steps {
    sh "kubectl set image deployment/gin-vue gin-vue=myhost/containers/gin-vue.${BUILD_NUMBER} -n web"
}
I then ran git commit and git push.
Step 3: I created a job in Jenkins and clicked the build button to run the corresponding pipeline script.
Step 4: After the run finished, the newer image was not updated in Rancher.
However, when I run the same command manually in the Rancher kubectl terminal, kubectl set image deployment/gin-vue gin-vue=myhost/containers/gin-vue.${BUILD_NUMBER} -n web, it succeeds.
What is the cause of this problem? It has confused me for days and I have not found a solution. Thanks a lot!
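For reference, a sketch of how such a deploy stage is commonly wired up, with an explicit kubeconfig so that the kubectl run by Jenkins targets the same cluster as the Rancher terminal; the credential id 'rancher-kubeconfig' is hypothetical:

stage('Deploy') {
    steps {
        // Hypothetical: the cluster's kubeconfig stored as a Jenkins file credential.
        withCredentials([file(credentialsId: 'rancher-kubeconfig', variable: 'KUBECONFIG')]) {
            // Single quotes: the shell expands $KUBECONFIG and ${BUILD_NUMBER}.
            sh 'kubectl --kubeconfig="$KUBECONFIG" set image deployment/gin-vue gin-vue=myhost/containers/gin-vue.${BUILD_NUMBER} -n web'
        }
    }
}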

Jenkins Pipeline - connect to Docker host using SSH credentials

We would like to use Jenkins Pipelines to work with AWS ECR images on a remote host that has Docker installed, but does not (and will not) expose the Docker socket over port 2376.
A couple of simpler options include:
using the existing Jenkins SSH/scripts
using the pipeline ssh-agent and running commands in-line there
However, the declarative Docker plugin seems to have everything needed, and it would be cleaner to use it, since tags, etc., will then align with other parts of the pipeline.
All examples on the internet show:
docker.withServer("tcp://X:2376","credentialsId") {...}
However, since the Jenkins Cloud configuration -> Docker Templates screen offers SSH connections, we tried the following:
stages {
    stage('Deploy to Remote Host') {
        steps {
            script {
                docker.withServer("ssh://ec2-x-x-x-x.mars-1.compute.amazonaws.com:22", "ssh-credentials-id") {
                    docker.withRegistry("https://1234567890.dkr.ecr.mars-1.amazonaws.com", "ecr:mars-1:ecr-credentials-id") {
                        docker.pull('my-image:latest')
                    }
                }
            }
        }
    }
}
Unfortunately, we get the following connection error:
error during connect: Post http://docker/v1.40/auth: command [ssh -p 22 ec2-x-x-x-x.mars-1.compute.amazonaws.com -- docker system dial-stdio] has exited with exit status 255, please make sure the URL is valid, and Docker 18.09 or later is installed on the remote host: stderr=Host key verification failed.
We have Docker v19 on the server, and the ssh key is fine using ssh-agent.
Any ideas about what we need to do to get this working?
I solved this by adding SSH commands (invoked on the agent) to the remote/target host and then invoking 'docker pull ...', 'docker run ...', etc. via the shell. However, doing this in a Jenkins Pipeline is fragile: the SSH commands turn into scripts, the credential lookups are fairly complex and messy to read, and the overall result, while presenting a nice pipeline in Blue Ocean, becomes difficult to code and maintain.
So, following the advice that doing everything in Jenkins Pipeline is an anti-pattern, I broke the process into two pieces: building and pushing the image using Jenkins Pipeline plus the Docker plugin, which is documented and supported as a first-class concern, and then invoking an Ansible build job from the pipeline, passing dev/stage/live parameters and letting Ansible take care of all the other variables and secrets needed. The end result looks similar in Blue Ocean, but the complex deployment logic is coded more cleanly in a tool designed for this use case, and looping through environments/hosts is much cleaner in Ansible (or substitute your preferred deployment tool).
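As an illustration of that hand-off (the job and parameter names below are hypothetical), the pipeline can trigger the downstream Ansible job with the built-in build step:

stage('Deploy via Ansible') {
    steps {
        // Hypothetical downstream job name and parameter names.
        build job: 'ansible-deploy',
              parameters: [
                  string(name: 'TARGET_ENV', value: 'dev'),
                  string(name: 'IMAGE_TAG', value: env.BUILD_NUMBER)
              ]
    }
}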
To get rid of the "Host key verification failed" error I used this in my pipeline:
sh '''
[ -d ~/.ssh ] || mkdir ~/.ssh && chmod 0700 ~/.ssh && ssh-keyscan HOSTNAME >> ~/.ssh/known_hosts
'''
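Putting the two pieces together, a minimal sketch of the SSH-based flow could look like the following; the hostname, credential ids, and image name are the placeholders from the question, not verified values:

node {
    // Trust the remote host's key first so docker-over-ssh does not fail
    // with "Host key verification failed".
    sh '''
        mkdir -p ~/.ssh && chmod 0700 ~/.ssh
        ssh-keyscan ec2-x-x-x-x.mars-1.compute.amazonaws.com >> ~/.ssh/known_hosts
    '''
    docker.withServer('ssh://ec2-x-x-x-x.mars-1.compute.amazonaws.com:22', 'ssh-credentials-id') {
        docker.withRegistry('https://1234567890.dkr.ecr.mars-1.amazonaws.com', 'ecr:mars-1:ecr-credentials-id') {
            // docker.image(...).pull() is the documented docker-workflow call.
            docker.image('my-image:latest').pull()
        }
    }
}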

Trigger step in Bitbucket pipelines

I have a CI pipeline in Bitbucket which is building, testing and deploying an application.
The thing is that after the deploy I want to run selenium tests.
Selenium tests are in an another repository in Bitbucket and they have their own pipeline.
Is there a trigger step in the Bitbucket pipeline to trigger a pipeline when a previous one has finished?
I do not want to do a fake push to the test repository to trigger those tests.
The most "correct" way I can think of doing this is to use the Bitbucket REST API to manually trigger a pipeline on the other repository, after your deployment completes.
There are several examples of how to create a pipeline here: https://developer.atlassian.com/bitbucket/api/2/reference/resource/repositories/%7Bworkspace%7D/%7Brepo_slug%7D/pipelines/#post
Copy-pasting their first example, here is how to trigger a pipeline for the latest commit on master:
$ curl -X POST -is -u username:password \
  -H 'Content-Type: application/json' \
  https://api.bitbucket.org/2.0/repositories/jeroendr/meat-demo2/pipelines/ \
  -d '
  {
    "target": {
      "ref_type": "branch",
      "type": "pipeline_ref_target",
      "ref_name": "master"
    }
  }'
According to the official documentation there is no easy way to do that, because jobs are isolated within the scope of a single repository. You can still achieve this in the following way:
create a Docker image with the minimum setup required to execute your tests
upload it to Docker Hub (or another registry if you have one)
use that Docker image in the last step of your pipeline, after the deploy, to execute the tests
Try out official component Bitbucket pipeline trigger: https://bitbucket.org/product/features/pipelines/integrations?p=atlassian/trigger-pipeline
You can run it in your after-deploy step:
script:
  - pipe: atlassian/trigger-pipeline:4.1.7
    variables:
      BITBUCKET_USERNAME: $BITBUCKET_USERNAME
      BITBUCKET_APP_PASSWORD: $BITBUCKET_APP_PASSWORD
      REPOSITORY: 'your-awesome-repo'
      ACCOUNT: 'teams-in-space'
@BigGinDaHouse I did something more or less like you say.
My step is built on top of a Docker image with headless Chrome, npm, and git.
I followed the steps below:
I set a private key for the remote repo as a base64-encoded variable in the original repo (see the documentation). The public key is added to the remote repo under the SSH access option in the Bitbucket menu.
In the pipeline step I decode it into a file and change its permissions to 400.
I add this key inside the Docker image with ssh-add.
Then I am able to do a git clone followed by npm install and npm test.
NOTE: entry.sh is there because I start the headless browser.
- step:
    image: kimy82/headless-selenium-npm-git
    script:
      - echo $key_in_env_variable_in_bitbucket | base64 --decode > priv_key
      - chmod 400 ./priv_key
      - eval `ssh-agent -s`
      - ssh-agent $(ssh-add priv_key; git clone git@bitbucket.org:project.git)
      - cd project
      - nohup bash /usr/bin/entry.sh >> out.log &
      - npm install
      - npm test
Top answers (this and this) are correct, they work.
Just adding that we found out (after a LOT of trial and error) that the user executing the pipeline must have WRITE permissions on the repo where the pipeline is invoked (even though their app password permissions were set to WRITE for both repos and pipelines...).
Also, this works for executing pipelines in Bitbucket's cloud or on-premise, through local runners.
(Answering as I am lacking reputation for commenting)

Jenkins Pipeline Build with Docker, Google Registry, and Google Auth Plugin

I'm building a Docker image using a Jenkins pipeline (using a pipeline script that was auto-generated by JHipster). I want to push my final docker image to the Google Container Registry.
Here's what I've done:
I've installed both the CloudBees Docker Custom Build Environment Plugin and the Google Container Registry Auth Plugin.
I've set up Google auth credentials in Jenkins following instructions over here
I've configured my build step to use the Google Registry tag format, like so: docker.build('us.gcr.io/[my-project-id]/[my-artifact-id]', 'target/docker')
I've referenced the id of my Google Auth credentials in my push step:
docker.withRegistry('https://us.gcr.io', '[my-credential-id]') {
    dockerImage.push 'latest'
}
But the build fails with:
ERROR: Could not find credentials matching [my-credential-id]
Finished: FAILURE
I'm basically at the point of believing that these plugins don't work in a pipelines world, but I thought I'd ask if anyone has accomplished this and could give me some pointers.
Try prefixing your credentials id with "gcr:".
Your credential id would look like "gcr:[my-credential-id]".
Complete working example:
stage('Build image') {
    app = docker.build("[id-of-your-project-as-in-google-url]/[name-of-the-artifact]")
}
stage('Push image') {
    docker.withRegistry('https://eu.gcr.io', 'gcr:[credentials-id]') {
        app.push("${env.BUILD_NUMBER}")
        app.push("latest")
    }
}
Please note the name of the image. I was struggling to push the image even with working credentials until I named it using the [id-of-your-project-as-in-google-url]/[name-of-the-artifact] notation.
If you get a message that you need to enable the Google ... API, you probably got your [id-of-your-project-as-in-google-url] wrong.
Images can then be used successfully with a URL like eu.gcr.io/[id-of-your-project-as-in-google-url]/[name-of-the-artifact]:47.
LZ
The previous answers didn't seem to work for me anymore. This is the syntax that works for me:
stage('Push Image') {
    steps {
        script {
            docker.withRegistry('https://gcr.io', 'gcr:my-credential-id') {
                dockerImage.push()
            }
        }
    }
}
Another way of setting up access to the Google Container Registry is shown below, using the withCredentials step with a file-type credential.
withCredentials([file(credentialsId: 'gcr-private-repo-reader', variable: 'GC_KEY')]) {
    sh '''
        chmod 600 $GC_KEY
        cat $GC_KEY | docker login -u _json_key --password-stdin https://eu.gcr.io
        docker ps
        docker pull eu.gcr.io/<reponame>/<image>
    '''
}
Check whether you have the https://plugins.jenkins.io/google-container-registry-auth/ plugin installed.
Once the plugin is installed, use the gcr:credential-id syntax.
The below answer didn't completely work before, and is apparently now deprecated. I'm leaving the text here for historical reasons.
I've ultimately bypassed the problem by using a gcloud script stage in place of a docker pipeline stage, like so:
stage('publish gcloud') {
    sh "gcloud docker -- push us.gcr.io/[my-project-id]/[my-artifact-id]"
}
The gcloud command is able to find the auth credentials that are set up using a gcloud init on the command line of the Jenkins server.

Jenkins Pipeline push Docker image

My Jenkins job is a Pipeline that runs in Docker:
node('docker') {
    // Git checkout
    git url: 'ssh://blah.blah:29411/test.git'
    // Build
    sh 'make'
    // Verify/Run
    sh './runme'
}
I'm working with a kernel, and fetching the sources from Git takes a long time (they are about 2 GB). I'm looking for a way to push the Docker image so it can be reused for the next build and will already contain most of the sources. I probably need to do:
docker push blahdockergit.blah/myjenkinsslaveimage
but it should run outside of the container.
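For illustration, one way to do this is to bake the sources into a dedicated agent image on a node that can reach the Docker daemon directly and push it with the docker-workflow plugin; the node label 'docker-host', the registry credential id, and the Dockerfile assumption below are hypothetical:

node('docker-host') {   // assumed label for an agent with direct access to the Docker daemon
    git url: 'ssh://blah.blah:29411/test.git'
    // The Dockerfile in the repository is assumed to COPY the checked-out
    // sources into the image so the next build already contains them.
    def slaveImage = docker.build('blahdockergit.blah/myjenkinsslaveimage', '.')
    docker.withRegistry('https://blahdockergit.blah', 'registry-credentials-id') {
        slaveImage.push('latest')
    }
}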
I found in the Pipeline Syntax reference that the following class can be used for building external jobs.
