I'm trying to create a Jenkins pipeline where a user creates a resource from a Swagger definition, pushes it to GitHub, and thereby triggers the pipeline to deploy it on the API Manager, but I keep getting the following error:
Error... stat /var/lib/jenkins/workspace/my_pipeline/apis/SwaggerPetstore-1.0.0: no such file or directory
This is my pipeline definition. I've been stuck on this for days now and don't know what to do.
pipeline {
    agent {
        node {
            label 'node'
        }
    }
    environment {
        PATH = "/root/apictl:$PATH"
    }
    options {
        buildDiscarder logRotator(
            daysToKeepStr: '16',
            numToKeepStr: '10'
        )
    }
    stages {
        stage('Setup Environment for APICTL') {
            steps {
                sh """#!/bin/bash
                    ENVCOUNT=\$(apictl get envs --format {{.}} | wc -l)
                    if [ "\$ENVCOUNT" == "0" ]; then
                        apictl add env dev --apim https://am.wso2.com --registration https://am.wso2.com --token https://websub.am.wso2.com/token -k
                    fi
                """
            }
        }
        stage('Deploy APIs To "Dev" Environment') {
            steps {
                sh """
                    apictl set --export-directory /var/lib/jenkins/workspace/my_pipeline
                    apictl set --vcs-deployment-repo-path /var/lib/jenkins/workspace/my_pipeline
                    apictl set --vcs-config-path /var/lib/jenkins/workspace/gitconfig
                    apictl get envs
                    apictl login dev -u admin -p admin -k
                    apictl vcs deploy -e dev -k --verbose
                """
            }
        }
    }
}
The steps under stage('Deploy APIs To "Dev" Environment') in your script seem to be incorrect. Refer to the complete set of steps in the documentation [1] (that document is for APIM 4.1.0 and apictl 4.1.x; if you are using APIM 4.0.0 and apictl 4.0.x, please refer to [2]).
These documents contain a section named "Promoting APIs in a Git repository to upper environments via CI/CD" [3], which explains the commands you should use to perform the apictl vcs related tasks. Another important point: make sure Git is installed in the environment where the pipeline runs.
Apart from that, I can see that you have neither set the source repository (apictl set --vcs-source-repo-path path/to/Source) nor executed apictl vcs init.
Following the correct set of steps from the documentation should solve your problem; a rough sketch of a corrected deploy stage is given after the references below.
[1] https://apim.docs.wso2.com/en/4.1.0/install-and-setup/setup/api-controller/cicd-using-cli/#step-1-prepare-the-environments
[2] https://apim.docs.wso2.com/en/4.0.0/install-and-setup/setup/api-controller/cicd-using-cli/#step-1-prepare-the-environments
[3] https://apim.docs.wso2.com/en/4.1.0/install-and-setup/setup/api-controller/cicd-using-cli/#a-promoting-apis-in-a-git-repository-to-upper-environments-via-cicd
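For illustration only, here is a minimal sketch of what the deploy stage could look like once the missing pieces are added. It reuses the workspace paths from the pipeline above and assumes the checkout serves as both source and deployment repository; treat the documentation [1] as authoritative for the exact sequence.

stage('Deploy APIs To "Dev" Environment') {
    steps {
        sh """
            # assumption: the workspace checkout acts as both source and deployment repo
            apictl set --vcs-source-repo-path /var/lib/jenkins/workspace/my_pipeline
            apictl set --vcs-deployment-repo-path /var/lib/jenkins/workspace/my_pipeline
            apictl vcs init
            apictl login dev -u admin -p admin -k
            apictl vcs deploy -e dev -k --verbose
        """
    }
}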
I have a Laravel application that requires the "yarn" command at initialization, and later only if certain files are changed.
Using the code below, I manage to detect when that file has changed, but I need a suggestion for how to also run it on the very first build (practically, on the first run that file, together with all the others, looks like a new file from the perspective of the Jenkinsfile).
Thanks!
Current try:
stage("Install NodeJS dependencies") {
    when {
        changeset "package.json"
    }
    agent {
        docker {
            image 'node:14-alpine'
            reuseNode true
        }
    }
    steps {
        sh 'yarn --silent'
        sh 'yarn dev --silent'
    }
}
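One possible approach, sketched here as a suggestion rather than a verified answer: widen the when condition with anyOf so the stage also runs when the job has no previous build, i.e. on the very first run. anyOf, changeset, and expression are standard declarative directives, and currentBuild.previousBuild is null only on a job's first build.

stage("Install NodeJS dependencies") {
    when {
        anyOf {
            changeset "package.json"
            // first run of the job: no previous build exists yet
            expression { currentBuild.previousBuild == null }
        }
    }
    agent {
        docker {
            image 'node:14-alpine'
            reuseNode true
        }
    }
    steps {
        sh 'yarn --silent'
        sh 'yarn dev --silent'
    }
}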
We are using PACT (https://pact.io/) in our project. The can-i-deploy check, which determines whether the deployment can be executed, is done like this:
Jenkins Environment Variables (see <JENKINS_URL>/configure)
PACT_BROKER_URL
PACT_RUBY_STANDALONE_VERSION
(PACT_RUBY_STANDALONE_VERSION from https://github.com/pact-foundation/pact-ruby-standalone/releases)
Jenkinsfile:
environment {
    SOURCE_BRANCH_NAME = sourceBranchName(env.BRANCH_NAME, env.CHANGE_BRANCH)
}
...
def sourceBranchName(String branchName, String changeBranchName) {
    return changeBranchName == null ? branchName : changeBranchName
}
...
stage('can-i-deploy') {
    steps {
        withCredentials([usernamePassword(credentialsId: 'XXX', passwordVariable: 'PACT_BROKER_PASSWORD', usernameVariable: 'PACT_BROKER_USERNAME')]) {
            sh "curl -LO https://github.com/pact-foundation/pact-ruby-standalone/releases/download/v${PACT_RUBY_STANDALONE_VERSION}/pact-${PACT_RUBY_STANDALONE_VERSION}-linux-x86_64.tar.gz"
            sh "tar xzf pact-${PACT_RUBY_STANDALONE_VERSION}-linux-x86_64.tar.gz"
            echo "Performing can-i-deploy check"
            sh "pact/bin/./pact-broker can-i-deploy --broker-base-url=${PACT_BROKER_URL} --broker-username=${PACT_BROKER_USERNAME} --broker-password=${PACT_BROKER_PASSWORD} --pacticipant=project-frontend --latest=${env.SOURCE_BRANCH_NAME} --pacticipant=project-backend --latest=${env.SOURCE_BRANCH_NAME} --pacticipant=other-project-backend --latest=${env.SOURCE_BRANCH_NAME}"
        }
    }
}
Is there a more elegant way to do this?
I can't speak for Jenkins, but there are two things worth changing in the arguments sent to can-i-deploy:
It's not recommended to use the --latest flag. You should use the --version flag to indicate the version of the application you are deploying and the --to flag to denote the target environment (--latest runs the risk of race conditions between builds, giving you false positives/negatives).
You don't need to specify the other compatible projects; can-i-deploy will automatically detect all dependent components.
So it would look more like this:
can-i-deploy --broker-base-url=${PACT_BROKER_URL} --broker-username=${PACT_BROKER_USERNAME} --broker-password=${PACT_BROKER_PASSWORD} --pacticipant=project-frontend --version some-sha-1234 --to prod
If you have access to docker, you might prefer to use our container.
P.S. If you simply export the following environment variables, you can also drop them from the argument list:
PACT_BROKER_BASE_URL (please note the minor difference from what you're using)
PACT_BROKER_USERNAME
PACT_BROKER_PASSWORD
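As a rough sketch of how that could look in the Jenkinsfile (an assumption, not a drop-in: the credentials binding mirrors the one above, and any unique version identifier such as the commit SHA can stand in for GIT_COMMIT):

stage('can-i-deploy') {
    steps {
        withCredentials([usernamePassword(credentialsId: 'XXX', passwordVariable: 'PACT_BROKER_PASSWORD', usernameVariable: 'PACT_BROKER_USERNAME')]) {
            // the client picks up PACT_BROKER_BASE_URL / _USERNAME / _PASSWORD from the environment
            withEnv(["PACT_BROKER_BASE_URL=${PACT_BROKER_URL}"]) {
                sh "pact/bin/pact-broker can-i-deploy --pacticipant=project-frontend --version=${GIT_COMMIT} --to prod"
            }
        }
    }
}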
I am stuck trying to get a Jenkinsfile to work. It keeps failing on the sh step and gives the following error:
process apparently never started in /home/jenkins/workspace
...
(running Jenkins temporarily with -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true might make the problem clearer)
I have tried adding
withEnv(['PATH+EXTRA=/usr/sbin:/usr/bin:/sbin:/bin'])
before the sh step in the Groovy file.
I also tried adding
/bin/sh
in Manage Jenkins -> Configure System in the shell section.
I have also tried replacing the sh line in Jenkinsfile with the following:
sh "docker ps;"
sh "echo 'hello';"
sh "./build.sh;"
sh """
#!/bin/sh
echo hello
"""
This is the part of the Jenkinsfile which I am stuck on:
node {
    stage('Build') {
        echo 'this works'
        sh 'echo "this does not work"'
    }
}
The expected output is "this does not work", but it just hangs and then returns the error above.
What am I missing?
It turns out that the default workingDir value for the default jnlp Kubernetes agent nodes is now /home/jenkins/agent, and I was using the old value /home/jenkins.
Here is the config that worked for me:
containerTemplate(name: 'jnlp', image: 'lachlanevenson/jnlp-slave:3.10-1-alpine', args: '${computer.jnlpmac} ${computer.name}', workingDir: '/home/jenkins/agent')
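For context, a minimal sketch of where that containerTemplate line sits inside a Kubernetes plugin pod definition (the pod label and the stage body are placeholders of my own, not from the original setup):

podTemplate(label: 'my-pod', containers: [
    containerTemplate(name: 'jnlp', image: 'lachlanevenson/jnlp-slave:3.10-1-alpine', args: '${computer.jnlpmac} ${computer.name}', workingDir: '/home/jenkins/agent')
]) {
    node('my-pod') {
        stage('Build') {
            // with the corrected workingDir, sh steps start normally
            sh 'echo "hello from the pod"'
        }
    }
}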
It is possible to run into the same trouble with a malformed PATH environment variable, which prevents the sh() step of the Pipeline plugin from calling the shell executable. You can reproduce it with a simple pipeline like this:
node('myNode') {
    stage('Test') {
        withEnv(['PATH=/something_invalid']) {
            /* it hangs and fails later with "process apparently never started" */
            sh('echo Hello!')
        }
    }
}
There are various ways to mangle PATH. For example, you might use withEnv(getEnv()) { sh(...) }, where getEnv() is your own method that evaluates the list of environment variables depending on the OS and other conditions. If you make a mistake in the getEnv() method and PATH gets overwritten, the problem reproduces; a sketch of that failure mode is given below.
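A minimal sketch of that failure mode, with a hypothetical getEnv() helper (the method name and paths are illustrative only):

// hypothetical helper that computes environment overrides for the build
def getEnv() {
    // BUG: 'PATH=...' replaces the variable entirely;
    // 'PATH+EXTRA=...' would prepend to the existing PATH instead
    return ['PATH=/opt/custom/bin']
}

node('myNode') {
    stage('Test') {
        withEnv(getEnv()) {
            sh('echo Hello!')   // hangs: no shell executable on the clobbered PATH
        }
    }
}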
I have a Jenkinsfile that was working and able to deploy some infrastructure automatically with Terraform. Unfortunately, after I added a Terraform module with a git source, it stopped working with the following error:
+ terraform init -input=false -upgrade
Upgrading modules...
- module.logstash
Updating source "git::https://bitbucket.org/*****"
Error downloading modules: Error loading modules: error downloading 'https://bitbucket.org/*****': /usr/bin/git exited with 128: Cloning into '.terraform/modules/34024e811e7ce0e58ceae615c545a1f8'...
fatal: could not read Username for 'https://bitbucket.org': No such device or address
script returned exit code 1
The URLs above were obfuscated after the fact. Below is the cut-down module syntax:
module "logstash" {
  source = "git::https://bitbucket.org/******"
  ...
}
Below is the Jenkinsfile:
pipeline {
    agent {
        label 'linux'
    }
    triggers {
        pollSCM('*/5 * * * *')
    }
    stages {
        stage('init') {
            steps {
                sh 'terraform init -input=false -upgrade'
            }
        }
        stage('validate') {
            steps {
                sh 'terraform validate -var-file="production.tfvars"'
            }
        }
        stage('deploy') {
            when {
                branch 'master'
            }
            steps {
                sh 'terraform apply -auto-approve -input=false -var-file=production.tfvars'
            }
        }
    }
}
I believe the problem is that Terraform internally uses git to check out the module, but Jenkins has not configured the git client within the pipeline job itself. Preferably I would somehow pass the credentials used by the multibranch pipeline job into the job itself and configure git with them, but I am at a loss as to how to do that. Any help would be appreciated.
So I found a non-ideal solution: it requires you to specify the credentials inside your Jenkinsfile rather than automatically reusing the credentials the job used for checkout.
withCredentials([usernamePassword(credentialsId: 'bitbucketcreds', passwordVariable: 'GIT_PASS', usernameVariable: 'GIT_USER')]) {
    sh "git config --global credential.helper '!f() { sleep 1; echo \"username=${env.GIT_USER}\\npassword=${env.GIT_PASS}\"; }; f'"
    sh 'terraform init -input=false -upgrade'
    sh 'git config --global --remove-section credential'
}
The trick is to load the credentials into environment variables using the withCredentials block; I then used the answer from this question to set git's credential helper to read in those creds. You can then run terraform init and it will pull down your modules. Finally, the modified git settings are cleared to avoid contaminating other builds. Note that the --global configuration here is probably not a good idea for most people, but it was required for me due to a quirk in our Jenkins agents.
If anyone has a smoother way of doing this I would be very interested in hearing it.
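One possible variant, sketched here as an untested assumption rather than a confirmed improvement: instead of rewriting the global credential helper, use git's GIT_ASKPASS hook, which git consults whenever it needs credentials, so it also covers the clones Terraform performs. The askpass.sh file name is made up for this sketch.

withCredentials([usernamePassword(credentialsId: 'bitbucketcreds', passwordVariable: 'GIT_PASS', usernameVariable: 'GIT_USER')]) {
    // write a throwaway askpass script; git invokes it with the prompt text as $1
    writeFile file: 'askpass.sh', text: '''#!/bin/sh
case "$1" in
    Username*) echo "$GIT_USER" ;;
    Password*) echo "$GIT_PASS" ;;
esac
'''
    sh 'chmod +x askpass.sh'
    withEnv(["GIT_ASKPASS=${env.WORKSPACE}/askpass.sh"]) {
        sh 'terraform init -input=false -upgrade'
    }
}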
I have a custom tool defined within Jenkins via the Custom Tools plugin. If I create a freestyle project, the "Install custom tools" option correctly finds and uses the tool (Salesforce DX) during execution.
However, I cannot find a way to do the same via a pipeline file. I have used the pipeline syntax snippet generator to get:
tool name: 'sfdx', type: 'com.cloudbees.jenkins.plugins.customtools.CustomTool'
I have put that into my stage definition:
stage('FetchMetadata') {
    print 'Collect Prod metadata via SFDX'
    tool name: 'sfdx', type: 'com.cloudbees.jenkins.plugins.customtools.CustomTool'
    sh('sfdx force:mdapi:retrieve -r metadata/ -u DevHub -k ./metadata/package.xml')
}
but I get an error message stating line 2: sfdx: command not found
Is there some other way I should be using this snippet?
Full Jenkinsfile for info:
node {
    currentBuild.result = 'SUCCESS'
    try {
        stage('CheckoutRepo') {
            print 'Get the latest code from the MASTER branch'
            checkout scm
        }
        stage('FetchMetadata') {
            print 'Collect Prod metadata via SFDX'
            tool name: 'sfdx', type: 'com.cloudbees.jenkins.plugins.customtools.CustomTool'
            sh('sfdx force:mdapi:retrieve -r metadata/ -u DevHub -k ./metadata/package.xml')
        }
        stage('ConvertMetadata') {
            print 'Unzip retrieved metadata file'
            sh('unzip unpackaged.zip .')
            print 'Convert metadata to SFDX format'
            sh('/usr/local/bin/sfdx force:mdapi:convert -r metadata/unpackaged/ -d force-app/')
        }
        stage('CommitChanges') {
            sh('git add --all')
            print 'Check if any changes need committing'
            sh('if ! git diff-index --quiet HEAD --; then echo "changes found - pushing to repo"; git commit -m "Autocommit from Prod # $(date +%H:%M:%S\' \'%d/%m/%Y)"; else echo "no changes found"; fi')
            sshagent(['xxx-xxx-xxx-xxx']) {
                sh('git push -u origin master')
            }
        }
    }
    catch (err) {
        currentBuild.result = 'FAILURE'
        print 'Build failed'
        error(err)
    }
}
UPDATE
I have made some progress using this example Jenkinsfile
My stage now looks like this:
stage('FetchMetadata') {
    print 'Collect Prod metadata via SFDX'
    def sfdxLoc = tool 'sfdx'
    sh script: "cd topLevel; ${sfdxLoc}/sfdx force:mdapi:retrieve -r metadata/ -u DevHub -k ./metadata/package.xml"
}
Unfortunately, although it looks like Jenkins is now finding and running the sfdx tool, I get a new error:
TypeError: Cannot read property 'run' of undefined
at Object.<anonymous> (/var/lib/jenkins/.cache/sfdx/tmp/heroku-script-509584048:20:4)
at Module._compile (module.js:570:32)
at Object.Module._extensions..js (module.js:579:10)
at Module.load (module.js:487:32)
at tryModuleLoad (module.js:446:12)
at Function.Module._load (module.js:438:3)
at Module.runMain (module.js:604:10)
at run (bootstrap_node.js:394:7)
at startup (bootstrap_node.js:149:9)
at bootstrap_node.js:509:3
I ran into the same problem. I got to this workaround:
environment {
    GROOVY_HOME = tool name: 'Groovy-2.4.9', type: 'hudson.plugins.groovy.GroovyInstallation'
}
stages {
    stage('Run Groovy') {
        steps {
            bat "${groovy_home}/bin/groovy <script.name>"
        }
    }
}
Somehow the tool path is not added to PATH by default (as was customary on my Jenkins 1.6 server install). Adding ${groovy_home} when executing the bat command fixes that for me.
This way of calling a tool is basically borrowed from the scripted pipeline syntax.
I am using this for all my custom tools (not only Groovy).
The tool part:
tool name: 'Groovy-2.4.9', type: 'hudson.plugins.groovy.GroovyInstallation'
was generated by the snippet generator like you did.
According to the Jenkins users mailing list, work is still ongoing for a definitive solution, so my solution really is a work around.
This is my first time commenting on Stack Overflow, but I've been looking for this answer for a few days and I think I have a potential solution, expanding on Fholst's answer.
That environment stanza may work for declarative syntax, but in a scripted pipeline you must use the withEnv() equivalent and pass in the tools via a GString, i.e. ${tool 'nameOfToolDefinedInGlobalTools'}.
For my particular use case, for reasons beyond my control, we do not have Maven installed on our Jenkins host machine, but there is one defined within the global tools configuration. This means I need to add mvn to the PATH before executing my sh commands within my steps. What I have been able to do is this:
withEnv(["PATH+MVN=${tool 'NameOfMavenTool'}/bin"]) {
    sh '''
        echo "PATH = ${PATH}"
    '''
}
This should give you what you need. Please ignore the triple single quotes on the sh line; I actually have several environment variables loaded and simply removed them from my snippet.
Hope this helps anyone who has been searching for this solution for days. I feel your pain. I cobbled this together by looking through the console output of a declarative pipeline script (if you use the tools {} stanza, it shows how Jenkins builds those environment variables and wraps your subsequent declarative steps) and the following link: https://go.cloudbees.com/docs/cloudbees-documentation/use/automating-projects/jenkinsfile/
You may be having a problem because of the path to your sfdx install folder if you are on Windows. The Dreamhouse Jenkinsfile was written for a Linux shell or Mac terminal, so some changes are necessary to make it work on Windows.
${sfdxLoc}/sfdx
Should be
\"${sfdxLoc}/sfdx\"
So that the command line handles any spaces in the tool path properly. Applied to the stage from the question, that gives the sketch below.
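For example (reusing the stage from the question's update; unverified on an actual Windows agent):

stage('FetchMetadata') {
    print 'Collect Prod metadata via SFDX'
    def sfdxLoc = tool 'sfdx'
    // quotes around the resolved tool path protect against spaces on Windows
    sh script: "cd topLevel; \"${sfdxLoc}/sfdx\" force:mdapi:retrieve -r metadata/ -u DevHub -k ./metadata/package.xml"
}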
https://wipdeveloper.com/2017/06/22/salesforce-dx-jenkins-jenkinsfile/