I am learning Jenkins. In the Jenkinsfile, I pass the credentials stored in Jenkins using withCredentials:
withCredentials([
    usernamePassword(credentialsId: 'DB_credentials', usernameVariable: 'serviceaccount_user', passwordVariable: 'serviceaccount_pwd')
]) {
    sh "python3 apache.py ${serviceaccount_user} ${serviceaccount_pwd}"
}
But when I run the Jenkins build, it is somehow adding strings to serviceaccount_pwd. When I saved the credentials in Jenkins, I made sure no extra characters were added. Any help would be appreciated on why Jenkins is adding the strings.
Console output:
You should consider upgrading via the '/tmp/sg_virtualenv/bin/python -m pip install --upgrade pip' command.
+ python3 apache.py **** '****'
Thanks
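For context on the console line above, a hedged note: Jenkins's credentials binding masks every occurrence of a bound secret in the console log, printing `****` in its place; the value passed to the script is unchanged. The extra single quotes around the second `****` come from the shell's own `set -x` trace, which re-quotes arguments containing special characters. A minimal illustration of that trace quoting, with a made-up value:

```shell
# `set -x` echoes each command to stderr before running it; arguments with
# spaces or special characters are re-quoted in the trace line, which is why
# a masked password can appear as '****' (quoted) in the Jenkins log.
trace=$( { set -x; echo "hunter 2"; } 2>&1 )
echo "$trace"
```

The captured trace shows the spaced argument re-quoted by the shell, independent of anything Jenkins does to it.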
I’m trying to use the Nexus repository to manage *.deb packages generated by Jenkins, but when trying to upload these packages to Nexus I’m getting a lot of errors. What is the best way to publish the .deb packages that are in Jenkins to a Nexus apt repository, using the Jenkins credential management mechanism to authenticate?
Using “Nexus Repository Manager Publisher” the apt repository does not appear, only maven repositories appear.
I tried to use the following pipeline scripts:
withCredentials([usernamePassword(credentialsId: '3e4ea258-4b41-4379-a0fe-4462ed3e420a', passwordVariable: 'NEXUS_PASSWORD', usernameVariable: 'NEXUS_USERNAME')]) {
sh '''
for i in `/workspace/*.deb`
do
echo "Sending $i\r\n"
curl -u \"$NEXUS_USERNAME:$NEXUS_PASSWORD\" -H "Content-Type: multipart/form-data" --data-binary \"#/$i\" "http://10.224.50.202:8081/repository/nerd4ever-labs/"
done
'''
}
withCredentials([usernamePassword(credentialsId: '3e4ea258-4b41-4379-a0fe-4462ed3e420a', passwordVariable: 'NEXUS_PASSWORD', usernameVariable: 'NEXUS_USERNAME')]) {
sh '''
for i in `/workspace/*.deb`
do
echo "Sending $i\r\n"
curl -u $(NEXUS_USERNAME):$(NEXUS_PASSWORD) --upload-file $i "http://10.224.50.202:8081/repository/nerd4ever-labs/"
done
'''
}
With the above pipelines I get the error:
: Syntax error: newline unexpected
It’s my first contact with the Sonatype Nexus Repository Manager; I’m using OSS version 3.32.0-03.
I plan to use it for apt (Debian), yum (CentOS), poudriere (FreeBSD) and Composer packages, but I’m struggling with the basics.
I’ve already set up the apt repository.
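For what it's worth, `Syntax error: newline unexpected` is consistent with the backticks around the glob in both loops: backticks make the shell try to *execute* `/workspace/*.deb` rather than iterate over it. Below is a minimal sketch of the loop with a plain glob. It uses a scratch directory with empty placeholder package names so it is self-contained; the curl line is commented out and reuses the host and repository name from the question:

```shell
# Demo of the corrected loop shape; in the real pipeline you would glob
# /workspace/*.deb directly instead of a scratch directory.
tmpdir=$(mktemp -d)
touch "$tmpdir/pkg-a_1.0_amd64.deb" "$tmpdir/pkg-b_1.0_amd64.deb"

sent=0
for i in "$tmpdir"/*.deb        # plain glob - no backticks
do
    echo "Sending $i"
    # Real upload (host/repo taken from the question; note "@$i" rather than
    # "#/$i", so curl reads the file instead of a literal string):
    # curl -u "$NEXUS_USERNAME:$NEXUS_PASSWORD" \
    #      -H "Content-Type: multipart/form-data" \
    #      --data-binary "@$i" \
    #      "http://10.224.50.202:8081/repository/nerd4ever-labs/"
    sent=$((sent + 1))
done
echo "uploaded=$sent"
rm -rf "$tmpdir"
```

This matches the `--data-binary "@file"` upload form that Sonatype documents for apt hosted repositories.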
I want to run Danger tests on the CI that runs on Jenkins. I'm using scripted pipelines.
I have installed the GitHub Branch Source plugin in Jenkins. I have also created a personal access token for the builds_user account (which Jenkins uses for GitHub interaction) and stored it in the Jenkins credential store, with the unique ID builds_user_repos_access.
This is the way I'm running danger:
withCredentials([usernamePassword(credentialsId: 'builds_user_repos_access', passwordVariable: 'DANGER_GITHUB_API_TOKEN', usernameVariable: '')]) {
    def dangerEnv = [
        "DANGER_GITHUB_API_TOKEN=${env.DANGER_GITHUB_API_TOKEN}"
    ]
    stage('danger') {
        withEnv(buildEnv + dangerEnv) {
            sh 'bundle exec danger'
        }
    }
}
The buildEnv is a list of env variables that I need to run Ruby gems. Everything works when I run other gems and when I run bundle install.
What I get is a warning:
bundle exec danger
Not a Jenkins Pull Request - skipping 'danger' run.
I have also tried:
stage('danger') {
    withEnv(buildEnv) {
        withCredentials([usernamePassword(credentialsId: 'skbuilds_repos_access', passwordVariable: 'DANGER_GITHUB_API_TOKEN', usernameVariable: '')]) {
            sh 'env'
            sh "bundle exec danger"
        }
    }
}
And I can see that the DANGER_GITHUB_API_TOKEN is there (masked).
Do you see what is wrong? Is there a way to get more info?
Danger only works with pull requests, so you have to create a Multibranch Pipeline or GitHub Organization job in Jenkins and connect your Jenkinsfile.
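A quick way to see why Danger reports "Not a Jenkins Pull Request" is to check for a PR id in the build environment. As far as I know, Danger's Jenkins detection looks for a pull-request number in variables such as CHANGE_ID (set by multibranch PR builds) or ghprbPullId (set by the GitHub Pull Request Builder plugin); an ordinary branch build has neither, so Danger skips. A hedged sketch that just simulates this check with hypothetical values:

```shell
# Simulates the "is this a PR build?" check. CHANGE_ID is what a Jenkins
# multibranch pipeline exports for pull-request builds; a plain branch
# build leaves it unset, and Danger then skips the run.
check_pr() {
    if [ -n "${1:-}" ]; then
        echo "PR build #$1 - Danger will run"
    else
        echo "Not a PR build - Danger will skip"
    fi
}

check_pr "42"   # e.g. CHANGE_ID=42 on a PR build
check_pr ""     # no CHANGE_ID on a plain branch build
```

Running `sh 'env'` inside the job, as the question already does, shows whether CHANGE_ID is present for the build in question.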
I have a CI pipeline in Bitbucket which is building, testing and deploying an application.
The thing is that after the deploy I want to run selenium tests.
Selenium tests are in an another repository in Bitbucket and they have their own pipeline.
Is there a trigger step in the Bitbucket pipeline to trigger a pipeline when a previous one has finished?
I do not want to do a fake push to the test repository to trigger those tests.
The most "correct" way I can think of doing this is to use the Bitbucket REST API to manually trigger a pipeline on the other repository, after your deployment completes.
There are several examples of how to create a pipeline here: https://developer.atlassian.com/bitbucket/api/2/reference/resource/repositories/%7Bworkspace%7D/%7Brepo_slug%7D/pipelines/#post
Copy-pasting the first example, which triggers a pipeline for the latest commit on master:
$ curl -X POST -is -u username:password \
  -H 'Content-Type: application/json' \
  https://api.bitbucket.org/2.0/repositories/jeroendr/meat-demo2/pipelines/ \
  -d '
  {
    "target": {
      "ref_type": "branch",
      "type": "pipeline_ref_target",
      "ref_name": "master"
    }
  }'
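If the Selenium tests live in a *custom* pipeline rather than the default branch pipeline, the same endpoint accepts, per the API reference linked above, a selector naming it. A sketch assuming a hypothetical custom pipeline called run-selenium; the curl call is commented out because the credentials and repository are placeholders:

```shell
# Build the request body for triggering a hypothetical custom pipeline named
# "run-selenium" on master. The "selector" field is what picks the custom
# pipeline instead of the default branch pipeline.
payload=$(cat <<'EOF'
{
  "target": {
    "type": "pipeline_ref_target",
    "ref_type": "branch",
    "ref_name": "master",
    "selector": { "type": "custom", "pattern": "run-selenium" }
  }
}
EOF
)
echo "$payload"
# curl -X POST -u "$BB_USER:$BB_APP_PASSWORD" \
#      -H 'Content-Type: application/json' \
#      -d "$payload" \
#      "https://api.bitbucket.org/2.0/repositories/<workspace>/<repo_slug>/pipelines/"
```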
According to their official documentation there is no "easy way" to do that, because jobs are isolated to the scope of one repository. Yet you can achieve your task in the following way:
create a docker image with the minimum setup required for executing your tests inside
upload it to Docker Hub (or some other registry if you have one)
use the docker image in the last step of your pipeline, after deploy, to execute the tests
Try out the official Bitbucket pipeline trigger component: https://bitbucket.org/product/features/pipelines/integrations?p=atlassian/trigger-pipeline
You can run it in a step after deploy:
script:
  - pipe: atlassian/trigger-pipeline:4.1.7
    variables:
      BITBUCKET_USERNAME: $BITBUCKET_USERNAME
      BITBUCKET_APP_PASSWORD: $BITBUCKET_APP_PASSWORD
      REPOSITORY: 'your-awesome-repo'
      ACCOUNT: 'teams-in-space'
@BigGinDaHouse I did something more or less like you say.
My step is built on top of docker image with headless chrome, npm and git.
I did follow the steps below:
I have set a private key for the remote repo in the original repo, encoded base64 (see the documentation). The public key is set on the remote repo under the SSH Access option in the Bitbucket menu.
In the pipeline step I decode it and write it to a file, and also change its permissions to 400.
I add this key inside the docker image with ssh-add.
Then I am able to do a git clone followed by npm install and npm test.
NOTE: The entry.sh is because I am starting the headless browser.
- step:
    image: kimy82/headless-selenium-npm-git
    script:
      - echo $key_in_env_variable_in_bitbucket | base64 --decode > priv_key
      - chmod 400 ./priv_key
      - eval `ssh-agent -s`
      - ssh-agent $(ssh-add priv_key; git clone git@bitbucket.org:project.git)
      - cd project
      - nohup bash /usr/bin/entry.sh >> out.log &
      - npm install
      - npm test
Top answers (this and this) are correct, they work.
Just adding that we found out (after a lot of trial and error) that the user executing the pipeline must have WRITE permissions on the repo where the pipeline is invoked (even though their app password permissions were set to "WRITE" for repos and pipelines).
Also, this works for executing pipelines in Bitbucket's cloud or on-premise, through local runners.
(Answering as I am lacking reputation for commenting)
I have set up the CMake tool under Global Tool Configuration in Jenkins. I tried to reference it in my Jenkinsfile but the build gives this error:
'cmake is not a recognised command'.
This is how I am referencing it in the Jenkinsfile:
stage('run CMake')
{
    bat '''
    mkdir build
    cd build
    cmake -DBOOST_ROOT=E:/local/boost_1_64_0 -DOPC_UA_FRAMEWORK_ROOT=E:/local/bhi-opcuaframework-1.2.0-win32
    '''
}
And this is the configuration of CMake in the Jenkins dashboard:
This is how my setting in Global Tool Configuration looks.
How do I correctly reference the tool in the pipeline?
Please help!!
Use the tool step:
stage('run CMake')
{
def cmakePath = tool 'CMake'
bat """
mkdir build
cd build
${cmakePath}\\cmake -DBOOST_ROOT=...
"""
}
Do you have CMake installed on the Jenkins master? If not, try to install it using the installer.
Check the Jenkins CMake plugin wiki page (screenshots below), and install it using the installer if you haven't already. Check this help:
Tool Configuration
Global Configuration
CMake Build Configuration
I've got a Jenkins pipeline containing stages for source loading, building, and deploying on a remote machine through SSH. The problem is with the last one. I saved a script based on the following template on the remote server:
#!/bin/bash
bash /<pathTo>/jboss-cli.sh --command="deploy /<anotherPath>/service.war --force"
It works fine if executed in a terminal connected to the remote server.
The best outcome I've received through Jenkins is
/<pathTo>/jboss-cli.sh: line 87: usr/bin/java/bin/java: No such file or directory
in Jenkins console output.
I've tried switching between bash and sh, exporting the path to Java in the pipeline script, etc.
Any suggestions are appreciated.
Thanks!
P.S. The execution call from Jenkins looks like:
sh """
ssh -o StrictHostKeyChecking=no $connectionName 'bash /<pathToTheScript>/<scriptName>.sh'
"""
line 87: **usr/bin/java/bin/java**: No such file or directory
As per the error line, it is resolving the path starting from usr, not /usr. Can you check whether this is the problem?
Sorry, I know this should be in the comments section, but I don't have the right to add comments yet.
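The hint above can be made concrete. jboss-cli.sh typically composes its java binary path from JAVA_HOME, so a relative JAVA_HOME, for example one set without a leading slash in a profile that a non-interactive SSH session skips or mangles, reproduces the exact error path. A hedged sketch (assuming the path really is composed as `$JAVA_HOME/bin/java` on this install):

```shell
# Hypothetical reproduction of the reported error path. A JAVA_HOME missing
# its leading "/" yields the relative path "usr/bin/java/bin/java", which
# fails with "No such file or directory" unless run from /.
JAVA_HOME="usr/bin/java"      # note: no leading "/"
JAVA="$JAVA_HOME/bin/java"
echo "$JAVA"

case "$JAVA" in
    /*) echo "absolute path - OK" ;;
    *)  echo "relative path - will fail with 'No such file or directory'" ;;
esac
```

Echoing `$JAVA_HOME` inside the SSH command (rather than in an interactive login shell) would confirm what the non-interactive session actually sees.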