I logged in to Docker normally, and the authentication information also checks out, but the Jib build fails.
docker login
cat ~/.docker/config.json
{
  "auths": {
    "https://index.docker.io/v1/": {}
  },
  "credsStore": "desktop"
}
Docker login is successful.
// build.gradle
jib {
    from {
        image = "eclipse-temurin:17"
    }
    to {
        image = "username/${project.name}:${project.version}"
        tags = ["latest"]
    }
}
and run the command ./gradlew jib
The error message:
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':jib-test:jib'.
> com.google.cloud.tools.jib.plugins.common.BuildStepsExecutionException: Build image failed, perhaps you should make sure your credentials for 'registry-1.docker.io/library/eclipse-temurin' are set up correctly. See https://github.com/GoogleContainerTools/jib/blob/master/docs/faq.md#what-should-i-do-when-the-registry-responds-with-unauthorized for help
Looks like a duplicate of these:
How to setup Jib container to authenticate with docker remote registry to pull images?
401 Unauthorized when using jib to create docker image
https://github.com/GoogleContainerTools/jib/issues/3677
Try emptying config.json entirely, or just delete the file. In particular, remove the entry for "https://index.docker.io/v1/" and the credsStore setting.
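If wiping config.json is not an option, credentials can also be handed to Jib explicitly on the command line, bypassing the credential helper entirely; a sketch using Jib's auth system properties (the environment variable names are placeholders for wherever you keep your Docker Hub username and access token):
# Supply registry credentials directly, skipping ~/.docker/config.json
./gradlew jib \
    -Djib.from.auth.username="$DOCKERHUB_USER" \
    -Djib.from.auth.password="$DOCKERHUB_TOKEN" \
    -Djib.to.auth.username="$DOCKERHUB_USER" \
    -Djib.to.auth.password="$DOCKERHUB_TOKEN"
This covers both the base-image pull (from) and the final push (to).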
I'm trying to copy files using the scp command from Jenkins (CI/CD), but I get a permission denied error. When I try the same command manually from different servers it works fine, but when I run it in a Jenkins execute-shell step, I get the error.
Configure > Build > execute shell
and the console output is as below:
[nginx_server] $ /bin/sh -xe /tmp/jenkins7685256768444698.sh
scp 111 ubuntu@34.229.202.9:/home/ubuntu
Permission denied, please try again.
Permission denied, please try again.
ubuntu@34.229.202.9: Permission denied (publickey,password).
lost connection
Build step 'Execute shell' marked build as failure
Finished: FAILURE
If anyone knows the solution, please answer.
Add your ssh key as a secret to Jenkins.
In your pipeline use sshagent:
pipeline {
    stages {
        stage('Stage_name') {
            steps {
                script {
                    sshagent (credentials: ['secret_name']) {
                        sh "scp ... "
                    }
                }
            }
        }
    }
}
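Note that Permission denied (publickey,password) means the remote host rejected every authentication method offered, so the target server must also trust the key whose private half is stored in Jenkins. A one-time setup sketch (the key path is a placeholder; the user and IP are from the question):
# Install the matching public key into the target's authorized_keys
ssh-copy-id -i ~/.ssh/jenkins_key.pub ubuntu@34.229.202.9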
I have my ~/.docker/config.json file like this:
{
  "auths": {
    "gcr.io": {
      "auth": "b2Fhs74nf9s1d.............1SXowjd71nvg4"
    }
  },
  "credsStore": "desktop",
  "experimental": "disabled",
  "stackOrchestrator": "swarm"
}
I am using the auth key name following the second way stated in this GitLab doc.
The value of auths[gcr.io][auth] is a base64 encoded string generated using:
echo -n "<USERNAME>:<PASSWORD>" | base64
For my username & password, I followed this gcloud doc. Hence, the base64 generation command was like this:
echo -n "oauth2accesstoken:$(gcloud auth print-access-token)" | base64
Now when I run in my local machine:
docker pull gcr.io/my_gcp_project/my_gcr_image:my_tag
I get the following error:
Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
What am I doing wrong?
Some other things I tried myself
I tried the gcloud auth configure-docker method to see what it does to the auths field so that I could replicate it. However, all it did was add credHelpers["gcr.io"] = "gcloud" to the JSON file, so it will use my local machine's credential helper and not any auth credentials. That doesn't help my case, since I am looking to make it work by hard-coding the credentials inside the auths["gcr.io"] field in the ~/.docker/config.json file.
I also tried the "docker login with gcloud auth print-access-token" method shown in this gcloud doc to see what it does to the auths field. All it did was add an empty object inside config.json, like auths["gcr.io"] = {}, and nothing else. I assume my OS is storing the actual credentials somewhere in the system and using them from there. But this doesn't help my case either, since I am looking to make it work by hard-coding the credentials inside config.json.
This is what the two methods above produced in my ~/.docker/config.json file:
// ~/.docker/config.json file
{
  "auths": {
    "gcr.io": {}, // <--- this one introduced by `docker login`
    "https://index.docker.io/v1/": {}
  },
  "credsStore": "desktop",
  "experimental": "disabled",
  "stackOrchestrator": "swarm",
  "credHelpers": { // <--- this one introduced by `gcloud auth configure-docker`
    "gcr.io": "gcloud"
  }
}
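To inspect what the OS-level store actually holds (my own check, assuming Docker Desktop's docker-credential-desktop binary is on the PATH), the credential helper speaks a small get/store/erase protocol over stdin/stdout:
# Ask the helper what it has stored for the gcr.io registry
echo "gcr.io" | docker-credential-desktop get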
Runtime versions I have been using:
Google Cloud SDK v341.0.0
Docker v20.10.5, build 55c4c88
I'm trying to use docker with a Jenkins scripted pipeline, and have run into several problems.
If I use it in an sh step, like sh "docker ...", it results in an error:
command not found docker
I tried to fix it by changing the install settings in the Global Tool Configuration, but did not succeed.
I'm trying to use the Docker plugin now.
def run_my_stage(String name, String cmd, String commit) {
    return {
        stage(name) {
            node("builder") {
                docker.withRegistry("192.168.1.33:5000") {
                    def myimg = docker.image("my-img")
                    sh "docker pull ${myimg.imageName()}"
                    sh "docker run ${cmd}"
                }
            }
        }
    }
}
Where cmd == --user=\$UID --rm -t -v ./build/:/home/user/build 192.168.1.33:5000/my-img
I use this code for parallel stages (the list of stages is generated dynamically), and got this error:
java.net.MalformedURLException: no protocol: 192.168.1.33:5000
What is the proper usage of this plugin?
I found a lot of examples with withRun and other methods from docker, but I don't need to run any commands inside this image; the command is in the Dockerfile (so it's built into my container).
The error itself has the answer :).
java.net.MalformedURLException: no protocol: 192.168.1.33:5000
You are missing the protocol in the custom registry URL. Refer to https://jenkins.io/doc/book/pipeline/docker/#custom-registry
def run_my_stage(String name, String cmd, String commit) {
    return {
        stage(name) {
            node("builder") {
                docker.withRegistry("https://192.168.1.33:5000") {
                    def myimg = docker.image("my-img")
                    sh "docker pull ${myimg.imageName()}"
                    sh "docker run ${cmd}"
                }
            }
        }
    }
}
You are missing the protocol; the registry must be https://192.168.1.33:5000
Also I had a problem with a relative path, but a simple fix of prepending pwd to the relative build path fixed it.
Thanks @yzT
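For reference, a sketch of the adjusted run command with the relative path made absolute (my reading of the fix, reusing the cmd value from the question):
# Resolve the bind-mount source against the current working directory
docker run --user=$UID --rm -t \
    -v "$(pwd)/build:/home/user/build" \
    192.168.1.33:5000/my-img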
I have a private GitHub Rust project that depends on another private GitHub Rust project and I want to build the main one with Jenkins. I have called the organization Organization and the dependency package subcrate in the below code.
My Jenkinsfile looks something like
pipeline {
    agent {
        docker {
            image 'rust:latest'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh "cargo build"
            }
        }
        etc...
    }
}
I have tried the following in Cargo.toml to reference the dependency; it works fine on my machine:
[dependencies]
subcrate = { git = "ssh://git@ssh.github.com/Organization/subcrate.git", tag = "0.1.0" }
When Jenkins runs I get the following error
+ cargo build
Updating registry `https://github.com/rust-lang/crates.io-index`
Updating git repository `ssh://git@github.com/Organization/subcrate.git`
error: failed to load source for a dependency on `subcrate`
Caused by:
Unable to update ssh://git@github.com/Organization/subcrate.git?tag=0.1.0#0623c097
Caused by:
failed to clone into: /usr/local/cargo/git/db/subcrate-3e391025a927594e
Caused by:
failed to authenticate when downloading repository
attempted ssh-agent authentication, but none of the usernames `git` succeeded
Caused by:
error authenticating: no auth sock variable; class=Ssh (23)
script returned exit code 101
How can I get Cargo to access this GitHub repository? Do I need to inject the GitHub credentials onto the slave? If so, how can I do this? Is it possible to use the same credentials Jenkins uses to checkout the main crate in the first place?
I installed the ssh-agent plugin and updated my Jenkinsfile to look like this
pipeline {
    agent {
        docker {
            image 'rust:latest'
        }
    }
    stages {
        stage('Build') {
            steps {
                sshagent(credentials: ['id-of-github-credentials']) {
                    sh "ssh -vvv -T git@github.com"
                    sh "cargo build"
                }
            }
        }
        etc...
    }
}
I get the error
+ ssh -vvv -T git@github.com
No user exists for uid 113
script returned exit code 255
Okay, I figured it out. The No user exists for uid error is caused by a mismatch between the host's /etc/passwd and the container's /etc/passwd: Jenkins starts the container with the host user's UID, which has no entry inside the container, and ssh refuses to run for a user it cannot resolve. This can be fixed by mounting /etc/passwd.
agent {
    docker {
        image 'rust:latest'
        args '-v /etc/passwd:/etc/passwd'
    }
}
Then
sshagent(credentials: ['id-of-github-credentials']) {
    sh "cargo build"
}
Works just fine
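As a quick confirmation (my own suggestion), the same ssh probe from earlier, run inside the sshagent block, should now greet the authenticated user instead of aborting:
ssh -T git@github.com
# expected: Hi <user>! You've successfully authenticated, but GitHub does
# not provide shell access.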
I'm working on building Jenkins pipeline for building a project with Gradle.
Jenkins has several slaves. All the slaves are connected to a NAS.
Some of the build steps run Gradle inside Docker containers while others run directly on the slaves.
The goal is to use as much cache as possible, but I have also run into deadlock issues such as:
Could not create service of type FileHasher using GradleUserHomeScopeServices.createCachingFileHasher().
> Timeout waiting to lock file hash cache (/home/slave/.gradle/caches/4.2/fileHashes). It is currently in use by another Gradle instance.
Due to the Gradle issue mentioned in the comment above, I do the following: copy the Gradle cache into the container at startup, and write any changes back at the end of the build:
pipeline {
    agent {
        docker {
            image '…'
            // Mount the Gradle cache in the container
            args '-v /var/cache/gradle:/tmp/gradle-user-home:rw'
        }
    }
    environment {
        HOME = '/home/android'
        GRADLE_CACHE = '/tmp/gradle-user-home'
    }
    stages {
        stage('Prepare container') {
            steps {
                // Copy the Gradle cache from the host, so we can write to it
                sh "rsync -a --include /caches --include /wrapper --exclude '/*' ${GRADLE_CACHE}/ ${HOME}/.gradle || true"
            }
        }
        …
    }
    post {
        success {
            // Write updates to the Gradle cache back to the host
            sh "rsync -au ${HOME}/.gradle/caches ${HOME}/.gradle/wrapper ${GRADLE_CACHE}/ || true"
        }
    }
}
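A note on the filter flags: --include /caches --include /wrapper --exclude '/*' restricts the copy to just those two top-level directories of the Gradle home. A dry run (my own suggestion, using the same container-side paths as the pipeline above) lists what would be transferred before trusting it in a build:
# -n (dry run) plus -v: show what rsync would copy without copying it
rsync -anv --include /caches --include /wrapper --exclude '/*' \
    /tmp/gradle-user-home/ "$HOME/.gradle"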