Use GITHUB_ACCESS_TOKEN in Jenkins inside Docker for authentication

I've followed the solution provided by @biolauri in this post for using a GITHUB_ACCESS_TOKEN in Jenkins inside a Docker container. Below is my stage code:
stage('Push git tag') {
    agent { label 'docker' }
    steps {
        script {
            try {
                container = docker.build("git", "-f git.dockerfile .")
                container.inside {
                    withCredentials([usernamePassword(
                        credentialsId: "<credential-name-stored-in-jenkins>",
                        usernameVariable: "GITHUB_APP",
                        passwordVariable: "GITHUB_ACCESS_TOKEN")]) {
                        withEnv(["GITHUB_TOKEN=$GITHUB_ACCESS_TOKEN"]) {
                            sh "git tag ${version_g}"
                            sh "git push origin ${version_g}"
                        }
                    }
                }
            } catch (Exception e) {
                // sh "git tag -d ${version_g} || true"
                throw e
            }
        }
    }
}
But I am still getting this error:
fatal: could not read Username for 'https://github.com': No such device or address
What am I doing wrong here?
Just to make sure I am getting the correct GitHub App ID, I echoed it, and it is indeed correct. I also echoed the generated GITHUB_ACCESS_TOKEN, and it does look like a valid generated token. But the Docker image that is built does not seem to recognize the GITHUB_TOKEN environment variable, so the git push still fails.
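One thing worth checking: git itself never reads GITHUB_TOKEN (that convention belongs to tools such as the gh CLI), so exporting the variable is not enough on its own; the token has to reach git through the remote URL or a credential helper. A minimal sketch of the URL approach, assuming an HTTPS remote (the org/repo path is a placeholder):
withEnv(["GITHUB_TOKEN=$GITHUB_ACCESS_TOKEN"]) {
    // git ignores GITHUB_TOKEN by itself; embed the token in the remote
    // URL ('x-access-token' is the username GitHub expects for app
    // installation tokens)
    sh 'git remote set-url origin https://x-access-token:${GITHUB_TOKEN}@github.com/<org>/<repo>.git'
    sh "git tag ${version_g}"
    sh "git push origin ${version_g}"
}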

Related

Update file content in GitLab using Jenkins

Is there a way, after updating the file content via the Jenkinsfile, to push it back to GitLab and replace the previous file?
I fetched all the files to the working dir and changed the content with sed.
pipeline {
    agent any
    stages {
        stage('Update deployment file') {
            steps {
                sh 'sed -i "s/source/update/g" file.txt'
            }
        }
        stage('Push to gitlab') {
            steps {
                ?????????
            }
        }
    }
}
Thanks in advance.
You can simply use a shell block for this. If you need credentials to push, you may have to append them to the URL or configure them in the git client.
stage('Push to gitlab') {
    steps {
        sh '''
            git add file.txt
            git commit -m "Update file.txt"
            git push
        '''
    }
}
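If you go the credentials route mentioned above, here is a minimal sketch using a username/password credential (the credential id 'gitlab-creds' and the project path are placeholders):
stage('Push to gitlab') {
    steps {
        withCredentials([usernamePassword(credentialsId: 'gitlab-creds',
                                          usernameVariable: 'GIT_USER',
                                          passwordVariable: 'GIT_TOKEN')]) {
            // single-quoted block: the shell, not Groovy, expands the
            // credential variables, so they stay out of the build log
            sh '''
                git add file.txt
                git commit -m "Update file.txt"
                git push https://${GIT_USER}:${GIT_TOKEN}@gitlab.com/<group>/<project>.git HEAD:main
            '''
        }
    }
}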
Extending the answer above: you can set the SSH key for git with the following snippet:
sshagent(['<credentialsID>']) {
    sh("git push origin HEAD:${BRANCH}")
}

Authentication problem of my pipeline with my GitLab project

I am using the multibranch option in Jenkins and I have an authentication problem with GitLab. Here is my Jenkinsfile:
pipeline {
    agent any
    environment {
        registry = "*****@gmail.com/test"
        registryCredential = 'test'
        dockerImage = ''
    }
    stages {
        stage('Cloning our Git') {
            steps {
                git 'https://gitlab.com/**********/*************/************.git'
            }
        }
        stage('Build docker image') {
            steps {
                script {
                    dockerImage = docker.build registry + ":$BUILD_NUMBER"
                }
            }
        }
        stage('Deploy our image') {
            steps {
                script {
                    docker.withRegistry('', registryCredential) {
                        dockerImage.push()
                    }
                }
            }
        }
        stage('Cleaning up') {
            steps {
                sh "docker rmi $registry:$BUILD_NUMBER"
            }
        }
    }
}
This is the error I got:
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags --force --progress -- https://gitlab.com/************/*******/***************.git +refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout:
stderr: remote: HTTP Basic: Access denied. The provided password or token is incorrect or your account has 2FA enabled and you must use a personal access token instead of a password. See https://gitlab.com/help/topics/git/troubleshooting_git#error-on-git-fetch-http-basic-access-denied
I would like to know how to authenticate to GitLab from the Jenkinsfile, or, if you have a better solution for me, I am interested. Thanks.
If you follow the link provided in the error message, you end up here:
https://docs.gitlab.com/ee/user/profile/account/two_factor_authentication.html#troubleshooting
You need to create a Personal Access Token, which is a dedicated token that delegates access to a subset of your account's rights.
The documentation for PAT is here:
https://docs.gitlab.com/ee/user/profile/personal_access_tokens.html
In the Gitlab repository interface, it is under Settings > Access Tokens.
Since you are accessing the repository over HTTPS, you need to create a token with at least the read_repository scope.
Then you should be able to access the repository with:
https://<my-user-id>:<my-pat>@gitlab.com/<my-account>/<my-project-name>.git
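Putting the token in the URL also exposes it in the build log; a safer sketch is to store the PAT in Jenkins as a username/password credential (username = your GitLab user, password = the token; the id 'gitlab-pat' is a placeholder) and let the git step use it:
stage('Cloning our Git') {
    steps {
        // the git step looks up the credential by id, so the token
        // never appears in the Jenkinsfile or the console output
        git url: 'https://gitlab.com/<my-account>/<my-project-name>.git',
            credentialsId: 'gitlab-pat'
    }
}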

How do I use Jenkins to build a private GitHub Rust project with a private GitHub dependency?

I have a private GitHub Rust project that depends on another private GitHub Rust project and I want to build the main one with Jenkins. I have called the organization Organization and the dependency package subcrate in the below code.
My Jenkinsfile looks something like
pipeline {
    agent {
        docker {
            image 'rust:latest'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh "cargo build"
            }
        }
        etc...
    }
}
I have tried the following in Cargo.toml to reference the dependency; it works fine on my machine:
[dependencies]
subcrate = { git = "ssh://git@ssh.github.com/Organization/subcrate.git", tag = "0.1.0" }
When Jenkins runs I get the following error
+ cargo build
Updating registry `https://github.com/rust-lang/crates.io-index`
Updating git repository `ssh://git@github.com/Organization/subcrate.git`
error: failed to load source for a dependency on `subcrate`
Caused by:
Unable to update ssh://git@github.com/Organization/subcrate.git?tag=0.1.0#0623c097
Caused by:
failed to clone into: /usr/local/cargo/git/db/subcrate-3e391025a927594e
Caused by:
failed to authenticate when downloading repository
attempted ssh-agent authentication, but none of the usernames `git` succeeded
Caused by:
error authenticating: no auth sock variable; class=Ssh (23)
script returned exit code 101
How can I get Cargo to access this GitHub repository? Do I need to inject the GitHub credentials onto the slave? If so, how can I do this? Is it possible to use the same credentials Jenkins uses to checkout the main crate in the first place?
I installed the ssh-agent plugin and updated my Jenkinsfile to look like this
pipeline {
    agent {
        docker {
            image 'rust:latest'
        }
    }
    stages {
        stage('Build') {
            steps {
                sshagent(credentials: ['id-of-github-credentials']) {
                    sh "ssh -vvv -T git@github.com"
                    sh "cargo build"
                }
            }
        }
        etc...
    }
}
I get the error
+ ssh -vvv -T git@github.com
No user exists for uid 113
script returned exit code 255
Okay, I figured it out: the "No user exists for uid" error is caused by a mismatch between the users in the host's /etc/passwd and the container's /etc/passwd. This can be fixed by mounting /etc/passwd into the container.
agent {
    docker {
        image 'rust:latest'
        args '-v /etc/passwd:/etc/passwd'
    }
}
Then
sshagent(credentials: ['id-of-github-credentials']) {
    sh "cargo build"
}
works just fine.
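An alternative worth noting: Cargo can be told to shell out to the system git client (which honors ssh-agent) instead of its built-in libgit2, via the net.git-fetch-with-cli option. A sketch, assuming the same credentials id as above:
sshagent(credentials: ['id-of-github-credentials']) {
    // make cargo delegate git fetches to the git CLI, which picks up
    // the agent socket that sshagent provides
    sh "CARGO_NET_GIT_FETCH_WITH_CLI=true cargo build"
}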

How to test a git-crypt encrypted repo with Jenkins

The tests I run with Jenkins (multi-branch pipeline) make use of an encrypted file (keys.py) in my repo, via git-crypt. To use that file locally, I normally run git-crypt unlock, but I cannot simply add this step to the Jenkinsfile because of how that command works:
1. gpg decrypts the symmetric key used to encrypt my file (i.e. .git-crypt/keys/default/0/xxxx.gpg). This key is encrypted with RSA using my private key, which has a passphrase you are prompted to enter when using it.
2. keys.py is decrypted using the decrypted symmetric key.
To avoid the prompt, run the git-crypt steps manually, passing your passphrase as a command-line argument to gpg and the decrypted symmetric key to git-crypt unlock. A few extra tricks, such as using Jenkins environment variables like $WORKSPACE, make this easier:
gpg --no-tty --passphrase YOUR_PASSPHRASE_GOES_HERE --output $WORKSPACE/.git-crypt/keys/default/0/decrypted.gpg --decrypt $WORKSPACE/.git-crypt/keys/default/0/YOUR_KEY_FILE_GOES_HERE.gpg && git-crypt unlock $WORKSPACE/.git-crypt/keys/default/0/decrypted.gpg
This raises a second issue: running the command twice fails as well, since the repo should only be decrypted while it is still encrypted. To handle that, first check whether the decrypted symmetric key file, which is generated only by the previous step, already exists. We end up with a stage that looks like:
stage('Unlock repo') {
    steps {
        script {
            sh("[ -f $WORKSPACE/.git-crypt/keys/default/0/decrypted.gpg ] || gpg --no-tty --passphrase YOUR_PASSPHRASE_GOES_HERE --output $WORKSPACE/.git-crypt/keys/default/0/decrypted.gpg --decrypt $WORKSPACE/.git-crypt/keys/default/0/YOUR_KEY_FILE_GOES_HERE.gpg && git-crypt unlock $WORKSPACE/.git-crypt/keys/default/0/decrypted.gpg")
        }
    }
}
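If you would rather not hard-code the passphrase in the Jenkinsfile, a sketch that pulls it from a Jenkins secret text credential instead (the credential id 'gpg-passphrase' is a placeholder; with GnuPG 2 you may also need --batch for a non-interactive passphrase):
stage('Unlock repo') {
    steps {
        withCredentials([string(credentialsId: 'gpg-passphrase', variable: 'GPG_PASSPHRASE')]) {
            // single quotes: the shell expands $GPG_PASSPHRASE, keeping
            // the secret out of Groovy string interpolation and the log
            sh '[ -f $WORKSPACE/.git-crypt/keys/default/0/decrypted.gpg ] || gpg --no-tty --batch --passphrase "$GPG_PASSPHRASE" --output $WORKSPACE/.git-crypt/keys/default/0/decrypted.gpg --decrypt $WORKSPACE/.git-crypt/keys/default/0/YOUR_KEY_FILE_GOES_HERE.gpg && git-crypt unlock $WORKSPACE/.git-crypt/keys/default/0/decrypted.gpg'
        }
    }
}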
I've built another solution for git-crypt by creating a separate container with git-crypt installed and invoking it in stages before and after the main build step:
pipeline {
    environment {
        // $HOME is not set in the build agent
        JAVA_TOOL_OPTIONS = '-Duser.home=/home/jenkins/'
    }
    agent {
        label 'docker'
    }
    stages {
        stage('Decrypt') {
            agent {
                docker {
                    image 'wjung/jenkins-git-crypt:latest'
                    registryUrl 'https://index.docker.io/v1/'
                    registryCredentialsId 'docker-hub'
                }
            }
            steps {
                withCredentials([file(credentialsId: 'git-crypt-key', variable: 'FILE')]) {
                    sh 'cd $WORKSPACE; git-crypt unlock $FILE'
                }
            }
        }
        stage('Build docker image') {
            agent {
                docker {
                    image 'maven:3-jdk-11'
                    args '-v /services/maven/m2:/home/jenkins/.m2 -v /services/maven/m2/cache:/home/jenkins/.cache'
                }
            }
            steps {
                configFileProvider([configFile(fileId: 'mvn-setting-xml', variable: 'MAVEN_SETTINGS')]) {
                    sh 'mvn -s $MAVEN_SETTINGS -B -Dmaven.test.skip clean deploy'
                }
            }
        }
        stage('Lock dir') {
            agent {
                docker {
                    image 'wjung/jenkins-git-crypt:latest'
                    registryUrl 'https://index.docker.io/v1/'
                    registryCredentialsId 'docker-hub'
                }
            }
            steps {
                sh 'cd $WORKSPACE; git-crypt lock'
            }
        }
    }
}
The encryption key is exported from the repository with git-crypt export-key TMPFILE and later added in Jenkins as a secret file credential with the id git-crypt-key.
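For reference, a sketch of that one-time preparation, run locally in an unlocked clone (the temporary path is arbitrary):
# export the symmetric key to a temp file
git-crypt export-key /tmp/git-crypt-key
# then upload /tmp/git-crypt-key in Jenkins under
# Credentials > Add > Secret file, with the id 'git-crypt-key'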

How to save Docker volume from within Cloudbees Pipeline in case of fail

I run a set of API tests in a Docker container started by a Jenkins pipeline stage (CloudBees plugin).
I would like to save the test logs in case the stage (see below) fails.
I tried to do it with a post action in a later stage, but by then I no longer have access to the image.
How would you approach this problem? How can I save the image away in case of a failure?
stage('build Dockerimage and run API-tests') {
    steps {
        script {
            def apitestimage = docker.build('apitestimage', '--no-cache=true dockerbuild')
            apitestimage.inside('-p 5800:5800') {
                dir('testing') {
                    sh 'ctest -V'
                }
            }
            sh 'docker rmi --force apitestimage'
        }
    }
}
Use a post { failure { ... } } block to archive the data of the failing stage directly within that stage, not in a later one.
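A minimal sketch of that shape, assuming the tests leave their output under testing/ (the archive pattern is illustrative):
stage('build Dockerimage and run API-tests') {
    steps {
        script {
            def apitestimage = docker.build('apitestimage', '--no-cache=true dockerbuild')
            apitestimage.inside('-p 5800:5800') {
                dir('testing') {
                    sh 'ctest -V'
                }
            }
        }
    }
    post {
        failure {
            // the workspace is bind-mounted into the container, so test
            // output written there is still available after a failure
            archiveArtifacts artifacts: 'testing/**', allowEmptyArchive: true
        }
        always {
            // clean up the image whether the stage passed or failed
            sh 'docker rmi --force apitestimage'
        }
    }
}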
