Use gcloud with Jenkins

I've been trying to write a script that polls Google Cloud Storage periodically. This works fine when I run it normally, but if I include it as a build step in Jenkins, it gives a 403 Forbidden error. This is because there's no gcloud auth login process completed for the Jenkins user, and that login requires a verification code to be copied. How do I do that using Jenkins?
EDIT:
I tried the steps at: https://cloud.google.com/storage/docs/authentication#service_accounts and downloaded a JSON key that looks like:
{"web":{"auth_uri":"https://accounts.google.com/o/oauth2/auth","token_uri":"https://accounts.google.com/o/oauth2/token","client_email":"....#project.googleusercontent.com","client_x509_cert_url":"https://www.googleapis.com/robot/v1/metadata/x509/....#project.googleusercontent.com","client_id":"....project.googleusercontent.com","auth_provider_x509_cert_url":"https://www.googleapis.com/oauth2/v1/certs"}}
which is darn strange, because all of the links point to things like "bad request" or "invalid request". I must be doing something wrong. The command I ran was:
gcloud auth activate-service-account ...@project.googleusercontent.com --key-file /var/lib/jenkins/....project.googleusercontent.com.json

Your best bet is probably to use a "service account" to authenticate gcloud/gsutil with the GCS service. The major steps are to generate a JSON-formatted private key following the instructions here:
https://cloud.google.com/storage/docs/authentication#service_accounts
Copy that key to a place where the Jenkins user can read it, and as the Jenkins user run
gcloud auth activate-service-account ...
(See https://cloud.google.com/storage/docs/authentication#service_accounts). Note that support for JSON key files is pretty new and you'll need an up-to-date gcloud release.
From there your Jenkins process should be able to access GCS as usual.
The key file should have the following format:
{
  "private_key_id": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
  "private_key": "-----BEGIN PRIVATE KEY-----\n ... \n-----END PRIVATE KEY-----\n",
  "client_email": "...@developer.gserviceaccount.com",
  "client_id": "...",
  "type": "service_account"
}
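Note the difference from the "web" key pasted in the question above: a usable key has a top-level "type": "service_account", while an OAuth client download is wrapped in a "web" object. A minimal Python check (a sketch based only on the field names shown above) you could run before handing the file to gcloud:

```python
import json

# Fields a service-account key must carry (per the format shown above).
REQUIRED = {"type", "private_key", "client_email"}

def looks_like_service_account_key(text):
    """True if the JSON is a service-account key, not an OAuth 'web' client."""
    try:
        data = json.loads(text)
    except ValueError:
        return False
    return (isinstance(data, dict)
            and REQUIRED <= set(data)
            and data.get("type") == "service_account")

good = ('{"type": "service_account", "private_key": "...", '
        '"client_email": "x", "client_id": "y", "private_key_id": "z"}')
bad = '{"web": {"client_id": "..."}}'  # an OAuth client download, not a key
print(looks_like_service_account_key(good), looks_like_service_account_key(bad))
# prints: True False
```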

Related

Running sfdx force:auth:web:login on jenkins job

I have a Jenkins job to deploy metadata to a given org. This is meant to be used as a first time setup method for new metadata. I have a jenkinsfile that can run the sfdx commands, and I'm trying to run force:auth:web:login.
agent none
steps {
    script {
        withEnv(["HOME=${env.WORKSPACE}", "MY_TOOL_DIR=${tool name: 'sfdx', type: 'com.cloudbees.jenkins.plugins.customtools.CustomTool'}"]) {
            def sfdx = "SFDX_USE_GENERIC_UNIX_KEYCHAIN=true ${MY_TOOL_DIR}/sfdx"
            sh "${sfdx} force:auth:web:login --setalias deployOrg"
            sh "${sfdx} force:mdapi:deploy -c -d ../MetadataFiles -u deployOrg -w 10"
        }
    }
}
This runs, but it doesn't open the prompt to do the actual login. I was trying to do this before with Ant, which ran but refused to deploy customSite data. So I could use either one; I just have to fix one error or the other. Is there a way to authorize a regular org (not a Dev Hub) as with the JWT flow, or is that fully impossible?
Any help is much appreciated.
Is there a way to authorize a regular org (not devhub) like with JWT flows, or is that fully impossible?
Yes. The JWT flow is in no way specific to Dev Hub orgs. You can authorize those orgs using JWT and a stored certificate by following the instructions in the Salesforce DX Developer Guide.
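For a headless Jenkins job, the interactive force:auth:web:login step can then be replaced with force:auth:jwt:grant. A sketch in Python that assembles the command line (the consumer key, key file, and username values are placeholders you must supply from your connected app):

```python
def sfdx_jwt_auth_cmd(consumer_key, jwt_key_file, username, alias="deployOrg"):
    """Build the non-interactive sfdx JWT auth command.
    The JWT bearer flow works for any org, not just Dev Hubs.
    All argument values passed in here are placeholders."""
    return ["sfdx", "force:auth:jwt:grant",
            "--clientid", consumer_key,
            "--jwtkeyfile", jwt_key_file,
            "--username", username,
            "--setalias", alias]

# Example invocation (placeholder values):
print(" ".join(sfdx_jwt_auth_cmd("CONSUMER_KEY", "server.key", "ci-user@example.com")))
```

The resulting argv list can be handed to a `sh` step or `subprocess.run` on the Jenkins agent.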

docker-compose - how to provide credentials or API key in order to pull image from private repository?

I have a private repo to which I push images outside of Docker.
image: example-registry.com:4000/test
I have that defined in my docker-compose file.
How can I provide credentials or an API key in order to pull from that repository? Is it possible without executing the "docker login" command, or is it required to always run it prior to the docker-compose command?
I have an API key which I use, for example, to call the REST API from PowerShell or other tools.
Can I use that somehow to avoid the constant "docker login" command?
Thank you
docker login creates or updates the ~/.docker/config.json file for you. With just the login part, it looks like:
{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "REDACTED"
    }
  }
}
There can be many things in this file; here is the doc.
So, to answer your question, you can avoid the login command by distributing this file instead. Something like:
Create a dedicated token (a token shouldn't be shared across uses) at https://hub.docker.com/settings/security
Move your current config elsewhere if it exists: mv ~/.docker/config.json /tmp
Execute docker login -u YOUR-ACCOUNT, using the token as the password
Copy the generated ~/.docker/config.json, which you can then distribute to your server(s). This file is as much a secret as your password; don't make it public!
Move back your original config: mv /tmp/config.json ~/.docker/
Having the file as a secret that you distribute isn't much different from running the docker login command, though, especially if you have some scripting to do it.
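The "auth" value in that file is simply base64("username:password"), so the file the steps above produce can also be generated directly. A sketch (the registry URL and token below are placeholder values):

```python
import base64
import json

def docker_config(registry, username, token):
    """Recreate what `docker login` writes: the 'auth' field is the
    base64 encoding of "username:password" (here, an access token)."""
    auth = base64.b64encode(f"{username}:{token}".encode()).decode()
    return json.dumps({"auths": {registry: {"auth": auth}}}, indent=2)

# Placeholder credentials for illustration only:
print(docker_config("https://index.docker.io/v1/", "alice", "dckr_pat_example"))
```

The output can be written to ~/.docker/config.json on the target server, with the same secrecy caveats as above.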

create jenkins ssh username with private key credential via rest xml api

Basically, I am trying to create a credential on Jenkins via the REST API, using the XML data below:
<?xml version='1.0' encoding='UTF-8'?>
<com.cloudbees.jenkins.plugins.sshcredentials.impl.BasicSSHUserPrivateKey>
  <scope>GLOBAL</scope>
  <id>jenkins-github-ssh</id>
  <description>jenkins-github-ssh</description>
  <username>username</username>
  <directEntryPrivateKeySource>
    <privateKey>-----BEGIN OPENSSH PRIVATE KEY-----
*****************************************
-----END OPENSSH PRIVATE KEY-----</privateKey>
  </directEntryPrivateKeySource>
</com.cloudbees.jenkins.plugins.sshcredentials.impl.BasicSSHUserPrivateKey>
I can see the credential after calling REST post request. But when I use this credential for a GitHub repository, Jenkins says:
Failed to connect to repository : Command "git ls-remote -h -- git@github.com:***.git HEAD" returned status code 128:
stdout:
stderr: Load key "/tmp/ssh3978703187838467164.key": invalid format
git@github.com: Permission denied (publickey).
fatal: Could not read from remote repository.
But if I manually update the credential created by the REST API with the same private key above, it works. Somehow the key is broken while posting. Do you have any idea what the solution might be?
Jenkins 2.198 & SSH Credentials Plugin 1.17.3
Thanks
I faced exactly the same problem while pushing private SSH keys to Jenkins by a Python script. I'm using the Requests library to create and update SSH key credential sets in arbitrary credential stores on the Jenkins server.
The general problem is that your XML structure is partially wrong. The tag
<directEntryPrivateKeySource>
must be replaced by
<privateKeySource class="com.cloudbees.jenkins.plugins.sshcredentials.impl.BasicSSHUserPrivateKey$DirectEntryPrivateKeySource">
Getting the basic XML structure
You can get the correct XML structure by yourself from the Jenkins server when you follow these steps:
Create an SSH key credential item manually. In the example below the key's id is test-sshkey. Let's place it in a credential store located in the folder "API-Test", a subfolder of "Playground", i.e. Playground/API-Test.
When you click on the newly created credential item in the Jenkins UI its URL should look like this:
https://JENKINS_HOSTNAME/job/Playground/job/API-Test/credentials/store/folder/domain/_/credential/test-sshkey/
Add /config.xml to the URL above so that it looks like this:
https://JENKINS_HOSTNAME/job/Playground/job/API-Test/credentials/store/folder/domain/_/credential/test-sshkey/config.xml
The XML structure returned by the URL in step 3 has almost the structure that we need for use with the Credentials API, but is partially incomplete:
<com.cloudbees.jenkins.plugins.sshcredentials.impl.BasicSSHUserPrivateKey plugin="ssh-credentials@1.18.1">
  <id>test-sshkey</id>
  <description>DELETE AFTER USE</description>
  <username>test</username>
  <privateKeySource class="com.cloudbees.jenkins.plugins.sshcredentials.impl.BasicSSHUserPrivateKey$DirectEntryPrivateKeySource">
    <privateKey>
      <secret-redacted/>
    </privateKey>
  </privateKeySource>
</com.cloudbees.jenkins.plugins.sshcredentials.impl.BasicSSHUserPrivateKey>
Using the Credentials API
Add the tags <scope> and <passphrase> for a valid XML scaffold that you can POST to the Credentials API:
<com.cloudbees.jenkins.plugins.sshcredentials.impl.BasicSSHUserPrivateKey>
  <scope>GLOBAL</scope>
  <id>CREDENTIAL_ID</id>
  <description>MY_DESCRIPTION</description>
  <username>A_USERNAME</username>
  <passphrase>OPTIONAL_KEY_PASSWORD</passphrase>
  <privateKeySource class="com.cloudbees.jenkins.plugins.sshcredentials.impl.BasicSSHUserPrivateKey$DirectEntryPrivateKeySource">
    <privateKey>YOUR_PRIVATE_SSH_KEY_GOES_HERE</privateKey>
  </privateKeySource>
</com.cloudbees.jenkins.plugins.sshcredentials.impl.BasicSSHUserPrivateKey>
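When filling the scaffold programmatically, it is also worth escaping XML-special characters (&, <, >) in the field values so the posted payload is not corrupted. A minimal Python sketch (all field values below are placeholders):

```python
from xml.sax.saxutils import escape

SCAFFOLD = """<com.cloudbees.jenkins.plugins.sshcredentials.impl.BasicSSHUserPrivateKey>
<scope>GLOBAL</scope>
<id>{0}</id>
<description>{1}</description>
<username>{2}</username>
<passphrase>{3}</passphrase>
<privateKeySource class="com.cloudbees.jenkins.plugins.sshcredentials.impl.BasicSSHUserPrivateKey$DirectEntryPrivateKeySource">
<privateKey>{4}</privateKey>
</privateKeySource>
</com.cloudbees.jenkins.plugins.sshcredentials.impl.BasicSSHUserPrivateKey>"""

def fill_scaffold(cred_id, description, username, passphrase, private_key):
    # Escape each value so characters like '&' survive the POST intact.
    return SCAFFOLD.format(*(escape(v) for v in
                             (cred_id, description, username, passphrase, private_key)))

xml = fill_scaffold("my-key", "key for A & B", "bot", "",
                    "-----BEGIN OPENSSH PRIVATE KEY-----\n...")
```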
Caveat: If the submitted XML has a wrong structure the REST API of the Credentials Plugin will nevertheless accept it and return a misleading HTTP status code 200!
Now we can use this XML structure to POST it to the API endpoints for creating or updating by cURL or similar tools. We assume that all operations are executed in the credential store of the folder "Playground/API-Test".
The following code example in Python is "dumbed down" completely to focus on the general approach:
import requests

def addCredentialSetSshPrivateKey(self, credentialDataObj):
    """
    Adds a credential set with a private SSH key to a credential store
    credentialDataObj: An instance of a simple DTO
    """
    jenkinsRequestUrl = "https://ci-yoda-new.codemanufaktur.com/job/Playground/job/API-Test/credentials/store/folder/domain/_/createCredentials"
    authentication = ("jenkins_admin_user", "API-TOKEN_FOR_THE_USER")
    completeXmlData = """
<com.cloudbees.jenkins.plugins.sshcredentials.impl.BasicSSHUserPrivateKey>
<scope>GLOBAL</scope>
<id>{0}</id>
<description>{1}</description>
<username>{2}</username>
<passphrase>{3}</passphrase>
<privateKeySource class="com.cloudbees.jenkins.plugins.sshcredentials.impl.BasicSSHUserPrivateKey$DirectEntryPrivateKeySource">
<privateKey>{4}</privateKey>
</privateKeySource>
</com.cloudbees.jenkins.plugins.sshcredentials.impl.BasicSSHUserPrivateKey>
""".format(credentialDataObj.id(), credentialDataObj.description(), credentialDataObj.username(), credentialDataObj.key_passphrase(), credentialDataObj.private_ssh_key())
    # When using CSRF protection in Jenkins, an API crumb must be included in the actual REST call.
    # The following method asks the Jenkins crumb issuer for an API crumb and returns a JSON object like this:
    # {'_class': 'hudson.security.csrf.DefaultCrumbIssuer', 'crumb': 'a5d36ef09e063322169888f0b81341fe13b4109482a7936bc08c6f9a01badd39', 'crumbRequestField': 'Jenkins-Crumb'}
    jsonCrumb = self._requestApiCrumb()
    # The actual REST call with headers, XML payload and all other bells and whistles.
    remoteSession = requests.Session()
    return remoteSession.post(jenkinsRequestUrl,
                              auth=authentication,
                              headers={"content-type": "application/xml",
                                       jsonCrumb['crumbRequestField']: jsonCrumb['crumb']},
                              data=completeXmlData)
REST endpoint for creating an SSH credential item:
https://JENKINS_HOSTNAME/job/Playground/job/API-Test/credentials/store/folder/domain/_/createCredentials
REST endpoint for updating an SSH credential item:
https://ci-yoda-new.codemanufaktur.com/job/Playground/job/API-Test/credentials/store/folder/domain/_/credential/credential_ci-yoda-new-project-apex_privatekey/config.xml
Apparently in the latter case you just update the config.xml file of an existing credential item.
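The folder-to-URL mapping used above (each folder level becomes a /job/<name> segment) can be captured in a small helper; a sketch, with JENKINS_HOSTNAME as a placeholder:

```python
def folder_credentials_url(host, folder_path, cred_id=None):
    """Build Credentials REST URLs for a store nested inside folders:
    'Playground/API-Test' -> '/job/Playground/job/API-Test'."""
    jobs = "".join("/job/" + name for name in folder_path.split("/"))
    base = "https://" + host + jobs + "/credentials/store/folder/domain/_"
    if cred_id is None:
        return base + "/createCredentials"  # endpoint for creating
    return base + "/credential/" + cred_id + "/config.xml"  # endpoint for updating

print(folder_credentials_url("JENKINS_HOSTNAME", "Playground/API-Test", "test-sshkey"))
# prints: https://JENKINS_HOSTNAME/job/Playground/job/API-Test/credentials/store/folder/domain/_/credential/test-sshkey/config.xml
```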
Also see the user guide for the Credentials Plugin, section "REST API", especially for constructing the correct REST URLs. For requesting the Jenkins crumb issuer with Python, see this StackOverflow answer.
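For completeness, a sketch of what a crumb helper like _requestApiCrumb might return and how the extra header is built from it. The endpoint /crumbIssuer/api/json is the standard Jenkins crumb issuer; the crumb value below is made up:

```python
import json

def crumb_header(jsonCrumb):
    # Turn the crumb issuer's JSON into the extra request header
    # required by CSRF-protected Jenkins instances.
    return {jsonCrumb['crumbRequestField']: jsonCrumb['crumb']}

# A _requestApiCrumb implementation would GET
# https://JENKINS_HOSTNAME/crumbIssuer/api/json and parse the body:
issuer_response = json.loads(
    '{"_class": "hudson.security.csrf.DefaultCrumbIssuer",'
    ' "crumb": "abc123", "crumbRequestField": "Jenkins-Crumb"}')
print(crumb_header(issuer_response))  # {'Jenkins-Crumb': 'abc123'}
```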
Solution tested with:
Jenkins 2.214
Credentials Plugin 2.3.1
SSH Credentials Plugin 1.18.1
For the people who are having the exact same problem:
I've tried uploading it as a file, uploading it with the API, using the Jenkins CLI, etc. Everything I tried failed. The same issue is also posted here:
https://issues.jenkins.io/browse/JENKINS-60714
The steps that finally worked are as follows:
Install and configure the Jenkins Configuration as Code Plugin.
Upload your configuration similar to the YAML file below.
You might also want to define the private key content as an environment variable on the Jenkins instance and reference it as "${private_key}" instead of pasting it visibly.
jenkins:
  systemMessage: "Example of configuring credentials in Jenkins"
credentials:
  system:
    domainCredentials:
      - credentials:
          - basicSSHUserPrivateKey:
              description: "kro"
              id: "kro"
              scope: GLOBAL
              username: "kro"
              privateKeySource:
                directEntry:
                  privateKey: |
                    -----BEGIN RSA PRIVATE KEY-----
                    MIIG5AIBAAKCAYEAvuiaIDs+ydzR7Xxo5Owvv+G9/arbqN0YwhaGQQlicJjM4ZvI
                    ..........YOUR KEY.............
                    53Zg4QmSb1XGKUTXxIeGd27OIvgkwAn7K/cjQsU9t802iYV3tisnfA==
                    -----END RSA PRIVATE KEY-----

How to pass AWS secret key and id in jenkins build pipeline script?

I am trying to pass the AWS secret key and password into a Jenkins script (which creates the env file).
My code:
node {
    writeFile file: 'temp_env.txt', text: """
AWS_ACCESS_KEY= << Access Key >>
AWS_SECRET_KEY= << password >>
"""
    docker.withRegistry('https://quay.io', 'c5234316dc-dqwqwda1-415645452-b343-406bf8332edb') {
        sh 'docker pull quay.io/docker_image'
        docker.image('quay.io/docker_image').run('-it --env-file temp_env.txt --name test quay.io/docker_image:develop ./code/test1.py test-service')
        sh 'rm temp_env.txt'
    }
}
I am using the actual secret key and id; I would rather have the credentials injected here. How can I achieve that? I read the entire instructions here but was not able to figure it out.
I know how to use the Credentials Binding plugin, but I'm not sure how to add the username and password credential variables to my code.
The idea of those instructions is to first store those credentials in the Credentials section of Jenkins, via the Credentials Binding plugin, as shown here.
Once they are safely stored there, you can declare them in an environment step:
environment {
    AWS_ACCESS_KEY_ID = credentials('jenkins-aws-secret-key-id')
    AWS_SECRET_ACCESS_KEY = credentials('jenkins-aws-secret-access-key')
}
Solution: The best practice for storing credentials, API tokens and secret keys is to store them as global credentials in Jenkins (this applies to all credential scopes in the project/item/object) and retrieve them in pipeline code. This approach prevents users from storing sensitive data in plain text, insecurely, in their code/project.
I'll demonstrate it with a short example: I'll store an API token for my app in the Jenkins credentials, fetch it in a pipeline step, and curl it to perform a REST API HTTP operation against my server.
First Store it:
Jenkins -> Credentials -> Global -> Add Credentials -> Select your
type and fill in the details
In my example I create a secret text with secret xxx; the ID would be MY_SECRET_TOKEN and the description would be "my secret token for my api service". Then I'll save it, and my pipeline would be like below...
script {
    withCredentials([
        string(
            credentialsId: 'MY_SECRET_TOKEN',
            variable: 'TOKEN_VARIABLE')
    ]) {
        sh """#!/bin/bash
            curl -k -L -H 'Authorization: Token ${TOKEN_VARIABLE}' https://myservice/api/users/someaction/
        """
    }
}
For more information, see the documentation.

Error response from daemon: Get https://xxxxxxxxx.dkr.ecr.us-east-2.amazonaws.com/v2/xxxx/manifests/v_50: no basic auth credentials

I'm trying to implement a CI/CD workflow with Jenkins, Docker and AWS. I'm at the point of having the job properly configured, but I'm getting an error at deployment time in EC2.
I face in AWS the following error:
Status reason CannotPullContainerError: API error (404): repository xxxxxxxxx.dkr.ecr.us-east-2.amazonaws.com/xxxxxxxxx not found
My repository exists in AWS ECR. So, while debugging and trying to pull the image that is in the repository, I executed the following commands to confirm everything is fine:
1.- Got "Login Succeeded" by executing the output of:
aws ecr get-login --no-include-email
2.- Checked my ~/.docker/config.json. At first it showed the registry URL without a protocol, but after reading some recommendations I added it:
{
  "auths": {
    "https://xxxxxxxx.dkr.ecr.us-west-1.amazonaws.com": {
      "auth": "long key..."
    }
  },
  "HttpHeaders": {
    "User-Agent": "Docker-Client/17.12.1-ce (linux)"
  }
}
So, after these checks, when I execute the pull command I'm still getting...
[ec2-user@ip-xxxxxx .docker]$ docker pull xxxxxxxxx.dkr.ecr.us-east-2.amazonaws.com/xxxxxxxxx:v_50
Error response from daemon: Get https://xxxxxxxxx.dkr.ecr.us-east-2.amazonaws.com/v2/davidtest/manifests/v_50: no basic auth credentials
