Jenkins on GCE - Deploying a Google Cloud Function to a different GCP project

I have Jenkins installed on a GCE VM (Debian) in project xxxx.
I need to deploy a Cloud Function, with its source in a Google Cloud Source Repository, to project yyyy.
I can do this successfully from the shell of the Jenkins VM.
In order to deploy the function from a pipeline, I did the following:
Create a service account in the yyyy project.
Upload the key (JSON file) to the VM.
Activate the account: gcloud auth activate-service-account yyyy-sa@yyyy.iam.gserviceaccount.com --key-file jenkins-test.json
Define pipeline:
pipeline {
  agent any
  stages {
    stage('Build') {
      steps {
        sh 'gcloud config set account yyyy@yyyy.iam.gserviceaccount.com'
        sh '''
          gcloud functions deploy helloWorld --region=us-central1 --runtime nodejs8 --trigger-http --project yyyy \
            --source https://source.developers.google.com/projects/xxxx/repos/test1/moveable-aliases/master/paths/HelloWorld/
        '''
      }
    }
  }
}
Anyway, I receive:
gcloud functions deploy helloWorld --region=us-central1 --runtime nodejs8 --trigger-http --project yyyy --source https://source.developers.google.com/projects/xxxx/repos/test1/moveable-aliases/master/paths/HelloWorld/
ERROR: (gcloud.functions.deploy) Your current active account [yyyy@yyyy.iam.gserviceaccount.com] does not have any valid credentials
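One common cause of this error is that gcloud auth activate-service-account stores credentials per OS user (under ~/.config/gcloud), so a key activated from an interactive SSH shell is not visible to the account the Jenkins agent runs as. As a hedged sketch only, not a confirmed fix, the activation can be done inside the pipeline itself; the credential ID gcp-yyyy-sa-key below is a hypothetical Jenkins "Secret file" credential holding the JSON key:
pipeline {
  agent any
  stages {
    stage('Build') {
      steps {
        // Hypothetical "Secret file" credential containing the service account key.
        withCredentials([file(credentialsId: 'gcp-yyyy-sa-key', variable: 'SA_KEY')]) {
          sh '''
            gcloud auth activate-service-account --key-file="$SA_KEY"
            gcloud functions deploy helloWorld --region=us-central1 --runtime nodejs8 --trigger-http --project yyyy \
              --source https://source.developers.google.com/projects/xxxx/repos/test1/moveable-aliases/master/paths/HelloWorld/
          '''
        }
      }
    }
  }
}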

Related

Bitbucket pipeline to deploy to Azure VM using Azure CLI command cannot access script file

I have my source code in Bitbucket and I'm using a Bitbucket pipeline to build and deploy my web application to an Azure VM.
I'm not using an Azure Web App because of constraints on the use of third-party tools.
I'm stuck on how to pass a script file to my Azure CLI run command.
The actual error is:
"/opt/atlassian/pipelines/agent/build/SetupSimpleSite.ps1 : The term \n'/opt/atlassian/pipelines/agent/build/SetupSimpleSite.ps1' is not recognized as the name of a cmdlet, function, script \nfile, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct \nand try again.\nAt C:\\Packages\\Plugins\\Microsoft.CPlat.Core.RunCommandWindows\\1.1.8\\Downloads\\script1.ps1:1 char:1\n+ /opt/atlassian/pipelines/agent/build/SetupSimpleSite.ps1
my pipeline code:
test-azure-cli-pipeline:
  - step:
      name: "Display Azure account"
      deployment: staging
      script:
        - pipe: microsoft/azure-cli-run:1.1.0
          variables:
            AZURE_APP_ID: $AZURE_APP_ID
            AZURE_PASSWORD: $AZURE_PASSWORD
            AZURE_TENANT_ID: $AZURE_TENANT_ID
            #CLI_COMMAND: 'az account show'
            CLI_COMMAND: 'az vm run-command invoke --command-id RunPowerShellScript --name $AZURE_VM_NAME --resource-group $AZURE_RESOURCE_GROUP_NAME --scripts $BITBUCKET_CLONE_DIR/SetupSimpleSite.ps1'
The script SetupSimpleSite.ps1 is located at the root of my Git repo, in the same directory as my bitbucket-pipelines.yml.
Note that the Azure CLI is working fine, as az account show displays the account details as expected.
I cannot find any relevant information in the pipe's repository on how to run a script kept in source code from the Azure CLI, link: https://bitbucket.org/microsoft/azure-cli-run/src/master/
I would like my PowerShell script to be kept in my source code.
I finally got it working: you should prefix the file with '@'.
I found the solution here: Using the 'az vm run-command' with a .ps1 file
So here is the final working script:
test-azure-cli-pipeline:
  - step:
      name: "Display Azure account"
      deployment: staging
      script:
        - pipe: microsoft/azure-cli-run:1.1.0
          variables:
            AZURE_APP_ID: $AZURE_APP_ID
            AZURE_PASSWORD: $AZURE_PASSWORD
            AZURE_TENANT_ID: $AZURE_TENANT_ID
            CLI_COMMAND: 'az vm run-command invoke --command-id RunPowerShellScript --name $AZURE_VM_NAME --resource-group $AZURE_RESOURCE_GROUP_NAME --scripts @SetupSimpleSite.ps1'
            DEBUG: 'true'
The DEBUG: 'true' setting is useful, thanks Chase.
First, add DEBUG: 'true'. Also add pwd ; ls ; to your CLI_COMMAND to verify that your path is correct; that is the error you receive, so it is most likely the issue. Fix the path and test again.
test-azure-cli-pipeline:
  - step:
      name: "Display Azure account"
      deployment: staging
      script:
        - pipe: microsoft/azure-cli-run:1.1.0
          variables:
            AZURE_APP_ID: $AZURE_APP_ID
            AZURE_PASSWORD: $AZURE_PASSWORD
            AZURE_TENANT_ID: $AZURE_TENANT_ID
            CLI_COMMAND: 'pwd ; ls ; az vm run-command invoke --command-id RunPowerShellScript --name $AZURE_VM_NAME --resource-group $AZURE_RESOURCE_GROUP_NAME --scripts $BITBUCKET_CLONE_DIR/SetupSimpleSite.ps1'
            DEBUG: 'true'
If this doesn't work, stop using the azure-cli-run pipe and instead use a step with the Azure CLI Docker image, mcr.microsoft.com/azure-cli, and run it as a regular script... (I'm not familiar enough with the az CLI to know how to configure credentials, but you should know.)
- step:
    name: Run AZ script
    image: mcr.microsoft.com/azure-cli
    script:
      # Set up az credentials (service-principal login, since an app ID and tenant ID are used)
      - az login --service-principal -u $AZURE_APP_ID -p $AZURE_PASSWORD --tenant $AZURE_TENANT_ID
      # The @ prefix tells run-command to read the script content from the file
      - az vm run-command invoke --command-id RunPowerShellScript --name $AZURE_VM_NAME --resource-group $AZURE_RESOURCE_GROUP_NAME --scripts @$BITBUCKET_CLONE_DIR/SetupSimpleSite.ps1

Connect Kubernetes with Jenkins pipeline

I am trying to deploy a container using Helm from a Jenkins pipeline. I have installed the Kubernetes plugin for Jenkins and provided it the URL of my locally running Kubernetes cluster and the config file in credentials. When I do 'Test connection', it shows 'Connected to Kubernetes 1.16+'.
But when I run the helm install command from the pipeline, it gives this error:
Error: Kubernetes cluster unreachable: the server could not find the requested resource
Note: I am able to do all operations using the CLI, and also from the Jenkins pipeline by using withCredentials and passing the credentials file variable name (created in Jenkins credentials). I just want to do this without wrapping it inside withCredentials.
Both Jenkins and Kubernetes are running separately on Windows 10. Please help.
Helm uses the kubectl config file. I'm using a step like this:
steps {
  container('helm') {
    withCredentials([file(credentialsId: 'project-credentials', variable: 'PULL_KEYFILE')]) {
      sh """
        gcloud auth activate-service-account --key-file=${PULL_KEYFILE} --project project-name
        gcloud container clusters get-credentials cluster-name --zone us-east1
        kubectl create namespace ${NAMESPACE} --dry-run -o yaml | kubectl apply -f -
        helm upgrade --install release-name .
      """
    }
  }
}
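To avoid the withCredentials wrapper the asker wants to drop, one option (a sketch under assumptions, not a confirmed fix) is to store the kubeconfig as a Jenkins "Secret file" credential and expose it once through the declarative environment directive; both kubectl and Helm honour the KUBECONFIG variable. The credential ID kubeconfig-local and the chart path ./chart below are hypothetical:
pipeline {
  agent any
  environment {
    // 'kubeconfig-local' is a hypothetical "Secret file" credential holding the
    // kubectl config; credentials() exposes it as the path of a temporary file.
    KUBECONFIG = credentials('kubeconfig-local')
  }
  stages {
    stage('Deploy') {
      steps {
        // Both kubectl and helm read KUBECONFIG from the environment.
        sh 'kubectl cluster-info'
        sh 'helm upgrade --install my-release ./chart'
      }
    }
  }
}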

docker buildkit not supported by daemon in AWS EKS kubernetes cluster

I am using BuildKit to build a Docker image for each microservice.
./build.sh
export DOCKER_BUILDKIT=1
# ....
docker build -t ....
# ...
This works on my machine with docker (18.09.2).
However, it does not work with Jenkins, which I set up as follows:
EKS is provisioned with a Terraform module
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "5.0.0"
# ....
}
Jenkins is deployed on EKS (v1.12.10-eks-ffbd9 , docker://18.6.1) via this Helm Chart.
Jenkins plugins as defined in Values of the helm release:
kubernetes:1.18.1
workflow-job:2.33
workflow-aggregator:2.6
credentials-binding:1.19
git:3.11.0
blueocean:1.19.0
bitbucket-oauth:0.9
Jenkins Pipeline is declarative, and it uses a Pod template where the container image is docker:18-dind and the container name is dind.
This is my Jenkinsfile
pipeline {
  agent {
    kubernetes {
      defaultContainer 'jnlp'
      yamlFile 'jenkins-pod.yaml'
    }
  }
  stages {
    stage('Build Backends') {
      steps {
        container('dind') {
          sh 'chmod +x *sh'
          sh './build.sh -t=dev'
        }
        containerLog 'dind'
      }
    }
  }
}
When Jenkins executes this pipeline, it shows this error:
buildkit not supported by daemon
I am not sure which software I should upgrade to make Docker BuildKit work, and to which version:
the Terraform EKS module, which is now 5.0.0?
Or
the docker:18-dind image, which acts as the environment of the ephemeral Jenkins slaves?
Or
the Jenkins plugin kubernetes:1.18.1?
As per the docker-ce sources, there are two requirements for the isSessionSupported check to pass so that a BuildKit session can start:
dockerCli.ServerInfo().HasExperimental
versions.GreaterThanOrEqualTo(dockerCli.Client().ClientVersion(), "1.31")
So:
check the version of your docker CLI library,
and check whether the HasExperimental option is enabled.
To check whether the daemon has experimental support enabled, run:
docker version -f '{{.Server.Experimental}}'
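Run from the pipeline above, a small diagnostic stage (a sketch, not part of the original Jenkinsfile) prints both values from inside the dind container:
stage('Check BuildKit support') {
  steps {
    container('dind') {
      // Prints the daemon version and whether experimental features are enabled.
      sh 'docker version -f "Server: {{.Server.Version}}, experimental: {{.Server.Experimental}}"'
    }
  }
}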
Docker buildkit support came out of experimental in 18.09, so you may need to upgrade docker inside of EKS:
EKS (v1.12.10-eks-ffbd9, docker://18.6.1)
Or perhaps you have an old dind image (the 18-dind should be new enough, but an older version of this tag pointing to 18.06 or 18.03 would not). You can try 18.09-dind and 19-dind, which should both work if the build is actually happening inside dind.
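For illustration, a minimal sketch of the agent block with an inline pod spec pinning a newer dind tag; the original jenkins-pod.yaml is not shown in the question, so the container settings below (privileged mode, tty) are assumptions rather than a copy of it:
pipeline {
  agent {
    kubernetes {
      defaultContainer 'jnlp'
      // Inline equivalent of jenkins-pod.yaml, pinning docker:19-dind so the
      // daemon is new enough for non-experimental BuildKit (>= 18.09).
      yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: dind
    image: docker:19-dind
    securityContext:
      privileged: true   # dockerd inside the container needs privileged mode
    tty: true
"""
    }
  }
  stages {
    stage('Build Backends') {
      steps {
        container('dind') {
          sh 'chmod +x *.sh'
          sh './build.sh -t=dev'
        }
      }
    }
  }
}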

Jenkins: Push to ECR from slave

I'm building a Docker container with Spotify's Maven plugin and trying to push to ECR afterwards.
This happens using the CloudBees Docker Build and Publish plugin, after managing to log in with the Amazon ECR plugin.
This works like a charm on the Jenkins master.
But on the slave I get:
no basic auth credentials
Build step 'Docker Build and Publish' marked build as failure
Is pushing from slaves out of scope for the ECR Plugin or did I miss something?
The answers here didn't work for my pipeline. I found this solution, which works and is also clean:
withCredentials([[$class: 'AmazonWebServicesCredentialsBinding', accessKeyVariable: 'AWS_ACCESS_KEY_ID', credentialsId: 'myCreds', secretKeyVariable: 'AWS_SECRET_ACCESS_KEY']]) {
  sh '''
    aws ecr get-login-password --region ${AWS_REGION} | docker login --username AWS --password-stdin ${REGISTRY}
    ..
  '''
}
This solution doesn't require aws cli v2.
You might be falling foul of the bug reported in the ECR plugin here: https://issues.jenkins-ci.org/browse/JENKINS-44143
Various people in that thread are describing slightly different symptoms, but the common theme is that docker was failing to use the auth details that had been correctly generated by the ECR plugin.
I found in my case this was because the ECR plugin was saving to one docker config and the docker-commons plugin (which handles the actual work of the docker API) was reading from another. Docker changed config formats and locations in an earlier version which caused the conflict.
The plugin author offers a workaround which is to essentially just nuke both config files first:
node {
  // cleanup current user docker credentials
  sh 'rm ~/.dockercfg || true'
  sh 'rm ~/.docker/config.json || true'

  // configure registry
  docker.withRegistry('https://ID.ecr.eu-west-1.amazonaws.com', 'ecr:eu-west-1:86c8f5ec-1ce1-4e94-80c2-18e23bbd724a') {
    // build image
    def customImage = docker.build("my-image:${env.BUILD_ID}")

    // push image
    customImage.push()
  }
}
You might want to try that purely as a debugging step and quick fix (if it works you can be confident this bug is your issue).
My permanent fix was to simply create the new style dockercfg manually with a sensible default, and then set the environment variable to point to it.
I did this in my Dockerfile which creates my Jenkins instance like so:
RUN mkdir -p $JENKINS_HOME/.docker/ && \
echo '{"auths":{}}' > $JENKINS_HOME/.docker/config.json
ENV DOCKER_CONFIG $JENKINS_HOME/.docker
You don't have credentials on the slave; that is the problem. I fix this by injecting the credentials into every pipeline that runs on the on-demand slaves.
withCredentials([[$class: 'AmazonWebServicesCredentialsBinding', accessKeyVariable: 'AWS_ACCESS_KEY_ID', credentialsId: 'AWS_EC2_key', secretKeyVariable: 'AWS_SECRET_ACCESS_KEY']]) {
  sh "aws configure set aws_access_key_id ${AWS_ACCESS_KEY_ID}"
  sh "aws configure set aws_secret_access_key ${AWS_SECRET_ACCESS_KEY}"
  sh '$(aws ecr get-login --no-include-email --region eu-central-1)'
  sh "docker push ${your_ec2_repo}/${di_name}:image_name${newVersion}"
}
Of course, you need to have the AWS CLI installed on the slave.

Managing multi-pod integration tests with kubernetes and jenkins

I am trying to set up a testing framework for my Kubernetes cluster using Jenkins and the Jenkins Kubernetes plugin.
I can get jenkins to provision pods and run basic unit tests, but what is less clear is how I can run tests that involve coordination between multiple pods.
Essentially I want to do something like this:
podTemplate(label: 'pod 1', containers: [containerTemplate(...)]) {
  node('pod1') {
    container('container1') {
      // start service 1
    }
  }
}
podTemplate(label: 'pod 2', containers: [containerTemplate(...)]) {
  node('pod2') {
    container('container2') {
      // start service 2
    }
  }
  stage('Run test') {
    node {
      sh 'run something that causes service 1 to query service 2'
    }
  }
}
I have two main problems:
Pod lifecycle:
As soon as the podTemplate block is exited, the pods are terminated. Is there an accepted way to keep the pods alive until a specified condition has been met?
ContainerTemplate from Docker image:
I am using a Docker image to provision the containers inside each Kubernetes pod; however, the files that should be inside those images do not seem to be visible/accessible inside the 'container' blocks, even though the environments and installed dependencies are correct for the repo. How do I actually get the service defined in the Docker image to run in a Jenkins-provisioned pod?
It has been some time since I asked this question, and in the meantime I have learned some things that let me accomplish what I was asking about, though maybe not as neatly as I would have liked.
The solution to multi-service tests ended up being simply using a pod template that has the Google Cloud SDK, and assigning that worker a service-account credential plus a secret key so that it can run kubectl commands on the cluster.
Dockerfile for worker, replace "X"s with desired version:
FROM google/cloud-sdk:alpine

# Install some utility functions.
RUN apk add --no-cache \
    git \
    curl \
    bash \
    openssl

# Used to install a custom version of kubectl.
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/vX.XX.X/bin/linux/amd64/kubectl &&\
    chmod +x ./kubectl &&\
    mv ./kubectl /usr/local/bin/kubectl

# Helm to manage deployments.
RUN curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh &&\
    chmod 700 get_helm.sh && ./get_helm.sh --version vX.XX.X
Then in the groovy pipeline:
pipeline {
  agent {
    kubernetes {
      label 'kubectl_helm'
      defaultContainer 'jnlp'
      serviceAccount 'helm'
      yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: gcloud
    image: your-docker-repo-here
    command:
    - cat
    tty: true
"""
    }
  }
  environment {
    GOOGLE_APPLICATION_CREDENTIALS = credentials('google-creds')
  }
  stages {
    stage('Do something') {
      steps {
        container('gcloud') {
          sh 'kubectl apply -f somefile.yaml'
          sh 'helm install something somerepo/somechart'
        }
      }
    }
  }
}
Now that I can access both helm and kubectl commands, I can bring pods or services up and down at will. It still doesn't solve the problem of being able to use the internal "context" of them to access files, but at least it gives me a way to run integration tests.
NOTE: For this to work properly you will need a Kubernetes service account with the same name you give in the serviceAccount field, and the credentials stored in the Jenkins credentials store. For the helm commands to work, you will need to make sure Tiller is installed on your Kubernetes cluster. Also, do not change the name of the env key from GOOGLE_APPLICATION_CREDENTIALS, as the Google Cloud tools will be looking for that environment variable.
