Update file content in GitLab using Jenkins - jenkins

Is there a way, after updating the file content via a Jenkinsfile, to push it back to GitLab and replace the previous file?
I fetched all the files into the working directory and changed the content with sed.
pipeline {
    agent any
    stages {
        stage('Update deployment file') {
            steps {
                sh 'sed -i "s/source/update/g" file.txt'
            }
        }
        stage('Push to gitlab') {
            steps {
                ?????????
            }
        }
    }
}
Thanks in advance.

You can simply use a shell block for this. If you need credentials to push, you may have to append them to the URL or configure them in the git client.
stage('Push to gitlab') {
    steps {
        sh '''
            git add file.txt
            git commit -m "Update file.txt"
            git push
        '''
    }
}

Extending the answer above, you can set the SSH key for git with the following snippet:
sshagent(['<credentialsID>']) {
    sh("git push origin HEAD:${BRANCH}")
}
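Combining the two answers, a complete push stage might look like the sketch below (the credential ID, branch name, and committer identity are assumptions for illustration):

```groovy
stage('Push to gitlab') {
    steps {
        // 'gitlab-ssh-key' is a hypothetical SSH credential ID stored in Jenkins
        sshagent(['gitlab-ssh-key']) {
            sh '''
                git config user.email "jenkins@example.com"
                git config user.name "Jenkins CI"
                git add file.txt
                git commit -m "Update file.txt"
                git push origin HEAD:master
            '''
        }
    }
}
```

Setting user.name/user.email first avoids the "Please tell me who you are" failure on agents that have no global git identity configured.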

Related

Use GITHUB_ACCESS_TOKEN in Jenkins inside docker for authentication

I've followed the solution provided by @biolauri in this post for using GITHUB_ACCESS_TOKEN in Jenkins inside a Docker container. Below is my stage code:
stage('Push git tag') {
    agent { label 'docker' }
    steps {
        script {
            try {
                container = docker.build("git", "-f git.dockerfile .")
                container.inside {
                    withCredentials([usernamePassword(
                        credentialsId: "<credential-name-stored-in-jenkins>",
                        usernameVariable: "GITHUB_APP",
                        passwordVariable: "GITHUB_ACCESS_TOKEN")]) {
                        withEnv(["GITHUB_TOKEN=$GITHUB_ACCESS_TOKEN"]) {
                            sh "git tag ${version_g}"
                            sh "git push origin ${version_g}"
                        }
                    }
                }
            } catch (Exception e) {
                // sh "git tag -d ${version_g} || true"
                throw e
            }
        }
    }
}
But I am still getting this error:
fatal: could not read Username for 'https://github.com': No such
device or address
What am I doing wrong here?
Just to make sure that I am getting the correct GitHub App ID, I echoed it, and it is indeed correct. I also echoed the generated GITHUB_ACCESS_TOKEN, and it does look like a generated token. But the Docker image that was built does not seem to recognize the GITHUB_TOKEN environment variable, so it cannot do a git push.
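For what it's worth, git itself never reads GITHUB_TOKEN; that variable is a convention honored by tools such as the gh CLI, not by plain git. A hedged sketch of one way to make the push authenticate over HTTPS (x-access-token is the username GitHub documents for app installation tokens; the rest mirrors the stage above):

```groovy
withEnv(["GITHUB_TOKEN=$GITHUB_ACCESS_TOKEN"]) {
    // Rewrite the HTTPS remote to a URL that carries the token, because
    // `git push` has no way to discover GITHUB_TOKEN on its own.
    // Single quotes: the shell, not Groovy, expands ${GITHUB_TOKEN}.
    sh 'git config url."https://x-access-token:${GITHUB_TOKEN}@github.com/".insteadOf "https://github.com/"'
    sh "git tag ${version_g}"
    sh "git push origin ${version_g}"
}
```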

Copy key file to folder using Jenkinsfile

I am using a scripted Jenkinsfile.
I have a .key file stored in Jenkins (where all the env files are present),
and I need to copy that file into the code folder.
I want to store device.key in src/auth/keys,
then run tests on the code in the pipeline.
I am unable to find any way to do this.
node {
    def GIT_COMMIT_HASH
    stage('Checkout Source Code and Logging Into Registry') {
        echo 'Logging Into the Private ECR Registry'
        checkout scm
        sh "git rev-parse --short HEAD > .git/commit-id"
        GIT_COMMIT_HASH = readFile('.git/commit-id').trim()
        // NEED TO COPY device.key to /src/auth/key
    }
    stage('TEST') {
        nodejs(nodeJSInstallationName: 'node') {
            sh 'npm install'
            sh 'npm test'
        }
    }
}
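One straightforward way to fill in the missing copy step, assuming device.key is uploaded to Jenkins as a "Secret file" credential with the hypothetical ID device-key:

```groovy
stage('Copy key file') {
    // 'device-key' is an assumed Secret file credential ID in Jenkins
    withCredentials([file(credentialsId: 'device-key', variable: 'DEVICE_KEY')]) {
        sh 'mkdir -p src/auth/keys'
        sh 'cp "$DEVICE_KEY" src/auth/keys/device.key'
    }
}
```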
How I solved this:
I installed the Config File Provider Plugin.
I added the files as custom files for each environment.
In the Jenkinsfile I replace the configuration file from the project with the one coming from Jenkins:
stage('Add Config files') {
    steps {
        configFileProvider([configFile(fileId: 'ID-of-Jenkins-stored-file', targetLocation: 'relative-path-to-destination-file-in-the-project')]) {
            // some block, maybe a friendly echo for debugging
        }
    }
}
Please see the plugin docs; it is also capable of replacing tokens in XML, JSON, and many other file types.

Why does my jenkins script seem to 'forget' origin even though it pulls code from it?

Below is my script. The idea is to create projects based off of a template project so the developer would not need to repeat this work every time a new project comes along... the template project will have all the necessary scripts to run CI, release automatically, and deploy automatically...
node {
    try {
        stage('Clean Up') {
            deleteDir()
            stage('Verify New Github Repo Exists') {
                sh 'git ls-remote git@github.com:account/${githubProject}.git' // check if repo exists
                stage('Clone Github Repo') {
                    // clone into directory and cd
                    sh 'git clone git@github.com:account/${githubProject}.git .'
                    sh 'git remote -v'
                    sh 'ls -lash'
                    stage('Merge Template Into Project') {
                        // add the template into the project
                        sh 'git remote add template git@github.com:account/${appType}-jenkins-ci-template.git'
                        sh 'git remote -v' // this shows both origin and template repos
                        sh 'git fetch --all'
                        sh 'git merge template/master' // able to merge origin's master with template's
                        sh 'ls -lash'
                        sh 'git log --graph --abbrev-commit --max-count=10'
                        sh 'git push -u origin master' // when the code gets here, it fails to push with: ERROR: Repository not found.
                        // do the work to replace all __APP_NAME__ with the actual app name/service
                        // commit and push to master
                        stage('Configure Project Properties') {
                            // clone and copy property files from template project
                            // do the work to replace all __APP_NAME__ with the actual app name/service
                            // commit and push to master
                            stage('Wrap Up') {
                                // creating jenkins jobs will continue to be manual
                            }
                        }
                    }
                }
            }
        }
    } catch (e) {
        //notifyFailure(e, "Script failure!")
        currentBuild.result = "FAILURE"
    }
}
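No accepted answer is shown here, but the symptom (clone and fetch succeed, push fails with "Repository not found") usually means the push runs with different or missing credentials than the read operations. As a sketch under that assumption, wrapping the write operations in sshagent with the job's SSH credential (the credential ID below is hypothetical) makes the same key available for pushing:

```groovy
// 'github-ssh-key' is an assumed Jenkins SSH credential ID with write access
sshagent(['github-ssh-key']) {
    sh 'git push -u origin master'
}
```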

Jenkins Multibranch Pipeline: How to checkout only once?

I have created a very basic Multibranch Pipeline on my local Jenkins via the Blue Ocean UI. From the default config I removed almost all behaviors except the one for discovering branches. The config looks as follows:
Within my Jenkinsfile I'm trying to set up the following scenario:
Checkout branch
(optionally) Merge it to master branch
Build Back-end
Build Front-end
Snippet from my Jenkinsfile:
pipeline {
agent none
stages {
stage('Setup') {
agent {
label "master"
}
steps {
sh "git checkout -f ${env.BRANCH_NAME}"
}
}
stage('Merge with master') {
when {
not {
branch 'master'
}
}
agent {
label "master"
}
steps {
sh 'git checkout -f origin/master'
sh "git merge --ff-only ${env.BRANCH_NAME}"
}
}
stage('Build Back-end') {
agent {
docker {
image 'openjdk:8'
}
}
steps {
sh './gradlew build'
}
}
stage ('Build Front-end') {
agent {
docker {
image 'saddeveloper/node-chromium'
}
}
steps {
dir ('./front-end') {
sh 'npm install'
sh 'npm run buildProd'
sh 'npm run testHeadless'
}
}
}
}
}
The pipeline itself and the build steps work fine, but the problem is that Jenkins adds a "Check out from version control" step before each stage. The step looks for new branches and fetches refs, but it also checks out the current branch. Here is the relevant output from the full build log:
// stage Setup
> git checkout -f f067047bbdd3a5d5f9d1f2efae274bc175829595
sh git checkout -f my-branch
// stage Merge with master
> git checkout -f f067047bbdd3a5d5f9d1f2efae274bc175829595
sh git checkout -f origin/master
sh git merge --ff-only my-branch
// stage Build Back-end
> git checkout -f f067047bbdd3a5d5f9d1f2efae274bc175829595
sh ./gradlew build
// stage Build Front-end
> git checkout -f f067047bbdd3a5d5f9d1f2efae274bc175829595
sh npm install
sh npm run buildProd
sh npm run testHeadless
So, as you can see, it effectively resets the working directory to a particular commit before every stage: git checkout -f f067...595.
Is there any way to disable this default checkout behavior?
Or any viable option how to implement such optional merging to master branch?
Thanks!
By default, the SCM checkout runs automatically at the start of every stage that allocates an agent in a declarative pipeline. You can disable it by doing:
pipeline {
agent none
options {
skipDefaultCheckout true
}
...
Also, I'd recommend taking a look at the other useful pipeline options: https://jenkins.io/doc/book/pipeline/syntax/#options
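With skipDefaultCheckout set, you check out explicitly once and hand the workspace to later stages yourself, for example with stash/unstash (a sketch; the stash name is arbitrary):

```groovy
pipeline {
    agent none
    options {
        skipDefaultCheckout true
    }
    stages {
        stage('Setup') {
            agent { label 'master' }
            steps {
                checkout scm            // the only checkout in the run
                stash name: 'sources'   // snapshot the workspace for later stages
            }
        }
        stage('Build Back-end') {
            agent { docker { image 'openjdk:8' } }
            steps {
                unstash 'sources'       // restore instead of re-checking out
                sh './gradlew build'
            }
        }
    }
}
```

Note that stash/unstash copies files through the controller, so for large repositories a single-agent pipeline with one checkout may be faster.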

Terraform cannot pull modules as part of jenkins pipeline

I have a Jenkinsfile that was working and able to deploy some infrastructure automatically with Terraform. Unfortunately, after adding a Terraform module with a git source, it stopped working with the following error:
+ terraform init -input=false -upgrade
Upgrading modules...
- module.logstash
Updating source "git::https://bitbucket.org/*****"
Error downloading modules: Error loading modules: error downloading 'https://bitbucket.org/*****': /usr/bin/git exited with 128: Cloning into '.terraform/modules/34024e811e7ce0e58ceae615c545a1f8'...
fatal: could not read Username for 'https://bitbucket.org': No such device or address
script returned exit code 1
The URLs above were obfuscated after the fact. Below is the cut-down module syntax:
module "logstash" {
source = "git::https://bitbucket.org/******"
...
}
Below is the Jenkinsfile:
pipeline {
agent {
label 'linux'
}
triggers {
pollSCM('*/5 * * * *')
}
stages {
stage ('init') {
steps {
sh 'terraform init -input=false -upgrade'
}
}
stage('validate') {
steps {
sh 'terraform validate -var-file="production.tfvars"'
}
}
stage('deploy') {
when {
branch 'master'
}
steps {
sh 'terraform apply -auto-approve -input=false -var-file=production.tfvars'
}
}
}
}
I believe this to be a problem with Terraform internally using git to check out the module, while Jenkins has not configured the git client within the pipeline job itself. Preferably I would somehow pass the credentials used by the multibranch pipeline job into the job itself and configure git, but I am at a loss as to how to do that. Any help would be appreciated.
So I found a non-ideal solution that requires you to specify the credentials inside your Jenkinsfile rather than automatically using the credentials the job used for checkout.
withCredentials([usernamePassword(credentialsId: 'bitbucketcreds', passwordVariable: 'GIT_PASS', usernameVariable: 'GIT_USER')]) {
    // printf interprets \n regardless of which shell runs the helper,
    // unlike echo, whose handling of \n varies between sh implementations
    sh "git config --global credential.helper '!f() { sleep 1; printf \"username=${env.GIT_USER}\\npassword=${env.GIT_PASS}\\n\"; }; f'"
    sh 'terraform init -input=false -upgrade'
    sh 'git config --global --remove-section credential'
}
The trick is to load the credentials into environment variables using the withCredentials block; I then used the answer from this question to set the credential helper for git to read in those creds. You can then run terraform init and it will pull down your modules. Finally, it clears the modified git settings to hopefully avoid contaminating other builds. Note that the --global configuration here is probably not a good idea for most people, but it was required for me due to a quirk in our Jenkins agents.
If anyone has a smoother way of doing this I would be very interested in hearing it.
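One slightly smoother variant, offered as an untested sketch: rewrite the host URL with the credentials embedded via git's insteadOf mechanism, instead of installing a helper function:

```groovy
withCredentials([usernamePassword(credentialsId: 'bitbucketcreds',
                                  usernameVariable: 'GIT_USER',
                                  passwordVariable: 'GIT_PASS')]) {
    // Single-quoted Groovy strings: the shell expands the variables at run
    // time, so the secrets are not interpolated into the build log.
    sh 'git config --global url."https://${GIT_USER}:${GIT_PASS}@bitbucket.org/".insteadOf "https://bitbucket.org/"'
    sh 'terraform init -input=false -upgrade'
    // The credentials were written to ~/.gitconfig, so remove them again.
    sh 'git config --global --remove-section "url.https://${GIT_USER}:${GIT_PASS}@bitbucket.org/"'
}
```

This keeps Terraform's module source untouched while every clone of that host picks up the credentials.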
