Jenkins pipeline - Provisioning with CloudFormation

The documentation talks about provisioning Docker containers, and Ansible can be used for environment provisioning with Jenkins. Using a pipeline script, I would like to provision an AWS EC2 instance on the AWS cloud using an AWS CloudFormation template.
Can a Jenkins pipeline script reuse CloudFormation templates for provisioning on the AWS cloud?

Yes, you can use a Jenkins pipeline to provision resources on the cloud. Store your CloudFormation code in either SVN or Git, write a script that pulls those templates from SVN or Git, and provision the resources using "aws cli" commands in that script. You can also create separate jobs for the different stages of the pipeline.
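A minimal sketch of such a pipeline, assuming the templates live in a Git repository (the repository URL, stack name, and template file name are placeholders; aws cloudformation deploy is the real CLI command and creates or updates the stack idempotently):

pipeline {
    agent any
    stages {
        stage('Checkout templates') {
            steps {
                // Pull the CloudFormation templates from source control
                git url: 'https://github.com/example/cfn-templates.git', branch: 'main'
            }
        }
        stage('Provision EC2') {
            steps {
                // Create or update the stack from the checked-out template
                sh '''
                  aws cloudformation deploy \
                    --template-file ec2-instance.yaml \
                    --stack-name demo-ec2-stack \
                    --capabilities CAPABILITY_NAMED_IAM
                '''
            }
        }
    }
}

The agent running this needs the AWS CLI installed and AWS credentials available, for example from the Jenkins credentials store or an instance profile.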

Related

How can I deploy AWS resources using an external Jenkins and Terraform? (I don't want my Jenkins running on EC2 or in AWS.)

How can I deploy AWS resources using an external Jenkins and Terraform? I don't want my Jenkins running on EC2 or in AWS, because it may terminate at any time, and then I have to rebuild it from an AMI and repeat all the steps I did the first time (restore all settings, credentials, and so on). So I am looking for a solution to install Jenkins on my own VM/VirtualBox, run the pipeline job there, and build AWS resources/services using Terraform.
You can run Terraform or Jenkins from anywhere to create resources in AWS.
Jenkins is just an orchestration tool that uses Terraform to create resources.
You only need to change how Terraform interacts with your AWS environment.
If Terraform is running on an AWS EC2 instance, it can use the EC2 instance metadata to authenticate with AWS.
Once you move to your local system or a VM, you have to change how Terraform authenticates.
For example, you can use the code below in Terraform to authenticate with AWS:
provider "aws" {
region = "us-west-2"
access_key = "my-access-key"
secret_key = "my-secret-key"
}
Please refer to the Terraform documentation for more authentication methods:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs#authentication-and-configuration
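For a Jenkins job on a local VM, one option is to pass the credentials to Terraform through the environment variables the AWS provider reads automatically, instead of hardcoding them in the provider block. A minimal sketch, assuming two hypothetical secret-text entries in the Jenkins credentials store under the placeholder IDs shown:

pipeline {
    agent any
    environment {
        // 'aws-access-key-id' and 'aws-secret-access-key' are
        // hypothetical credential IDs defined in Jenkins
        AWS_ACCESS_KEY_ID     = credentials('aws-access-key-id')
        AWS_SECRET_ACCESS_KEY = credentials('aws-secret-access-key')
        AWS_DEFAULT_REGION    = 'us-west-2'
    }
    stages {
        stage('Terraform apply') {
            steps {
                sh 'terraform init'
                sh 'terraform apply -auto-approve'
            }
        }
    }
}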

Invoking Ansible roles in Jenkins

I am an architect, completely new to DevOps and CI/CD, so my question may not be clear. My application vendor has provided some Ansible roles and playbooks. I have a Jenkins server with a pipeline that runs Terraform scripts to provision the compute engines on GCP. After this provisioning, I have to deploy the application provided by my vendor.
I have a couple of questions here:
Do I need a separate server hosting Ansible, other than Jenkins, to execute the Ansible roles and playbooks? Or does an Ansible plug-in installed on Jenkins suffice to execute these roles?
For the application deployment on the target servers provisioned by Terraform, do we need to fill in those details dynamically in the Ansible hosts file? Has anyone tried this before, or is there another way?
My target servers, where the application will be deployed, will run both Windows and Linux. What integrations are required at the Jenkins level for the application deployment pipeline to work?
Thanks,
Manoj
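On the second question, one common pattern is to have the pipeline write the Terraform outputs into an Ansible inventory file before invoking the playbook. A minimal sketch, assuming a hypothetical instance_ips list output in the Terraform configuration and a vendor playbook at the placeholder path vendor/site.yml:

pipeline {
    agent any
    stages {
        stage('Provision') {
            steps {
                sh 'terraform init && terraform apply -auto-approve'
            }
        }
        stage('Build inventory') {
            steps {
                // Turn the (hypothetical) instance_ips output into a hosts file
                sh '''
                  echo "[app_servers]" > inventory.ini
                  terraform output -json instance_ips | jq -r '.[]' >> inventory.ini
                '''
            }
        }
        stage('Deploy application') {
            steps {
                // Run the vendor-provided playbook against the generated inventory
                sh 'ansible-playbook -i inventory.ini vendor/site.yml'
            }
        }
    }
}

For Windows targets, Ansible connects over WinRM rather than SSH, so those inventory entries would need the appropriate connection variables.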

CI/CD pipeline deployment flow for test and prod environment

I am trying to implement a CI/CD pipeline for my microservice, created in Spring Boot, using an SVN repository, Kubernetes, and Jenkins. While exploring deployment using Kubernetes and Jenkins, I found tutorials and many videos on deploying to both test and prod environments by defining them in the Jenkinsfile, and also by adding a shell script to the Jenkins configuration.
Confusion
Here is my doubt: when we deploy into the test environment, how can we deploy the same build into the prod environment after the testing is finished? Do I need to add a separate shell script for prod? Or do we deploy serially, using one script for both test and prod?
It's completely up to you how you want to do this. In general, we create separate k8s clusters for prod and staging (etc.), and your Jenkins needs to deploy to a different cluster depending on your pipeline. If you want true CI/CD, then one pipeline is enough, and it will deploy to both clusters (or environments).
Most of the time businesses don't want CI on production (for obvious reasons); they want manual testing on QA environments before anything is deployed to prod.
As k8s is container based, deploying the same image to different envs is really easy: you build your Spring Boot app once, and then deploy the resulting image to each env as needed.
A simple pipeline (see the Jenkinsfile sketch after this list):
1. Code is pushed and a build is triggered.
2. Build with unit tests.
3. Generate the docker image and push it to the registry.
4. Run your kubectl / helm / etc. to deploy the newly built image on STAGING.
5. Check whether the deployment was successful.
If you want to deploy the same to prod, continue the pipeline (you can pause here for QA as well: https://jenkins.io/doc/pipeline/steps/pipeline-input-step/):
6. Run your kubectl / helm / etc. to deploy the newly built image on PRODUCTION.
7. Check whether the deployment was successful.
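A minimal declarative Jenkinsfile along those lines (a sketch; the image name, registry, cluster context names, and deployment/container names are placeholders):

pipeline {
    agent any
    environment {
        IMAGE = "registry.example.com/myapp:${env.BUILD_NUMBER}"
    }
    stages {
        stage('Build & test') {
            steps {
                sh './mvnw clean verify'   // unit tests run here
            }
        }
        stage('Docker image') {
            steps {
                sh "docker build -t ${IMAGE} ."
                sh "docker push ${IMAGE}"
            }
        }
        stage('Deploy to staging') {
            steps {
                sh "kubectl --context staging-cluster set image deployment/myapp myapp=${IMAGE}"
            }
        }
        stage('QA gate') {
            steps {
                // The pipeline pauses here until someone approves (the input step)
                input message: 'Deploy this build to production?'
            }
        }
        stage('Deploy to production') {
            steps {
                sh "kubectl --context prod-cluster set image deployment/myapp myapp=${IMAGE}"
            }
        }
    }
}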
If your QA needs more time, you can also create a separate Jenkins job and trigger it manually (even the QA engineers can trigger it).
If your QA and PM are techies, they can also merge branches or close PRs, which can auto-trigger Jenkins and run prod deployments.
EDIT (response to comment):
You are making REST calls to the k8s API; even kubectl apply -f foo.yaml makes such a REST call. It doesn't matter where you make the call from, as long as your kubectl is configured correctly and can communicate with the k8s server. You can have multiple clusters configured for kubectl and use kubectl --context <staging-cluster> apply -f foo.yaml. You can pick the context name from a Jenkins environment variable or some other mechanism.
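For instance, a stage could take the context name from a build parameter (a sketch; TARGET_ENV and the context names are placeholders):

// TARGET_ENV: e.g. a choice parameter with values 'staging-cluster' and 'prod-cluster'
stage('Deploy') {
    steps {
        sh "kubectl --context ${params.TARGET_ENV} apply -f foo.yaml"
    }
}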
We're working on an open source project called Jenkins X which is a proposed sub project of the Jenkins foundation aimed at automating CI/CD on Kubernetes using Jenkins and GitOps for promotion.
When you merge a change to the master branch, Jenkins X creates a new semantically versioned distribution of your app (pom.xml, jar, docker image, helm chart). The pipeline then automates the generation of Pull Requests to promote your application through all of the Environments via GitOps.
Here's a demo of how to automate CI/CD with multiple environments on Kubernetes using GitOps for promotion between environments and Preview Environments on Pull Requests - using Spring Boot and nodejs apps (but we support many languages + frameworks).

Jenkins Pipeline using OpenShift

I am trying to create a Jenkins pipeline on OpenShift that automatically runs a Jenkins service when the pipeline build starts. I referred to a few templates online and created a Jenkins pod and a pipeline, but whenever I try to run the pipeline, it shows the build status as "not started".
Later, I created a standalone Jenkins service in OpenShift, created a Jenkinsfile there, and tried executing it. I encountered authentication issues while connecting to OpenShift from Jenkins.
Can anyone guide me if I am missing something, or point me to other working templates for a pipeline?
Thanks
It's because of permissions: Jenkins runs as the jenkins user, and OpenShift doesn't know how to connect to it. Create a new service account for Jenkins in OpenShift.
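A sketch of that one-time setup using real oc commands (the service account name and project are placeholders); it is shown as an sh step for consistency, but you can run the same commands directly in a terminal:

// One-time setup: create a service account and grant it edit rights
sh '''
  oc create sa jenkins -n my-project
  oc policy add-role-to-user edit -z jenkins -n my-project
'''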

GKE/Kubernetes CI/CD Pipelines With Jenkins: Gcloud Authentication Issue in Deploy stage

As part of a Jenkins pipeline to build and deploy an app to Google's Kubernetes service (GKE), I've created a script that carries out the following deployment steps:
1. check out the code
2. set up authentication to gcloud
3. create the deployment and service using kubectl
Detailed steps implemented by the script are as follows:
a) Create the docker registry authentication file (.json)
b) login to the google docker registry using the authentication file
c) initialise a git repo in the current directory
d) add the remote origin in prep for code pull
e) pull the source code for the microservice container
f) Create a kubectl configuration file and directory to authenticate to the Kubernetes cluster in Gcloud
g) Create a keyfile for a Gcloud service account that needs to authenticate to the container service
h) Activate the service account
i) Get the credentials for the container cluster from Gcloud
j) Run kubectl apply to create the kubernetes services
Full, tested, script at: https://pastebin.com/sZPrQuzD
If I put this sequence of steps in a script on an AWS EC2 instance and run it manually, it works. However, the Jenkins build step fails at the point where kubectl is invoked to run the service, with the following error:
gcloud container clusters get-credentials jenkins-cd --zone europe-west1-b --project noon-prod
Fetching cluster endpoint and auth data.
ERROR: (gcloud.container.clusters.get-credentials) ResponseError: code=403, message=Request had insufficient authentication scopes.
Build step 'Execute shell' marked build as failure
The full error dump from the Jenkins run is as follows:
https://pastebin.com/pSWPQ5Ei
My questions:
a) How can I fix this? Surely it can't be that difficult to get authentication working from Jenkins?
b) Is this the correct way to authenticate to the gcloud container service from a Jenkins system that is not on Gcloud infrastructure at all?
Many thanks in advance for any help!
Traiano
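For reference, steps g) to i) condensed into a single pipeline stage might look like this (a sketch: the credentials ID and manifest path are placeholders, while the cluster, zone, and project names come from the error output above):

stage('Authenticate to GKE') {
    steps {
        // Bind a service-account key file stored in the Jenkins credentials store
        withCredentials([file(credentialsId: 'gke-sa-key', variable: 'KEYFILE')]) {
            sh '''
              gcloud auth activate-service-account --key-file="$KEYFILE"
              gcloud container clusters get-credentials jenkins-cd \
                --zone europe-west1-b --project noon-prod
              kubectl apply -f k8s/
            '''
        }
    }
}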
We're working on an open source project called Jenkins X which is a proposed sub project of the Jenkins foundation aimed at automating CI/CD on Kubernetes using Jenkins and GitOps for promotion.
We worked around some of the issues you've been having by running the Jenkins pipelines inside the kubernetes cluster; so there's no need to authenticate with GKE.
