How can I deploy AWS resources using external Jenkins and Terraform? (I don't want my Jenkins running in EC2 or in AWS)

How can I deploy AWS resources using external Jenkins and Terraform? I don't want my Jenkins running on EC2 or anywhere in AWS, because the instance may terminate at any time, and every time that happens I have to rebuild it from an AMI or repeat all the steps I did the first time, i.e. restore all the settings, credentials, etc. So I am looking for a way to install Jenkins on my own VM/VirtualBox, run the pipeline job there, and build AWS resources/services using Terraform.

You can run Terraform or Jenkins from anywhere to create resources in AWS.
Jenkins is just an orchestration tool that invokes Terraform to create the resources.
The only thing that needs to change is how Terraform authenticates to your AWS environment.
If Terraform runs on an AWS EC2 instance, it can use the EC2 instance metadata to authenticate with AWS.
Once you move to your local system or a VM, you have to change how Terraform authenticates.
For example, you can use the following provider block in Terraform to authenticate with AWS:
provider "aws" {
region = "us-west-2"
access_key = "my-access-key"
secret_key = "my-secret-key"
}
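
Hardcoding keys in .tf files is risky, though. A safer pattern (a minimal sketch, assuming you store the keys as Jenkins credentials and expose them to the build step) is to pass them to Terraform as environment variables, which the AWS provider picks up automatically:

# Sketch: export AWS credentials as environment variables before
# invoking Terraform, so no keys are committed in the .tf files.
# The values here are placeholders for secrets stored in Jenkins.
export AWS_ACCESS_KEY_ID="my-access-key"
export AWS_SECRET_ACCESS_KEY="my-secret-key"
export AWS_DEFAULT_REGION="us-west-2"

terraform init
terraform plan -out=tfplan
terraform apply tfplan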
Please refer to the Terraform documentation for more authentication methods:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs#authentication-and-configuration

Related

Optimal Subnet Configuration for a CI/CD with BeanStalk, GitHub, CodePipeline and Jenkins

I am planning to create a CI/CD configuration inside a VPC which involves AWS BeanStalk (Host Environment), GitHub (Code Repository), CodePipeline, and Jenkins (Code Build). The application located in the GitHub repository is supposed to run inside the BeanStalk environment, while any change to the GitHub repo should be reflected on the frontend.
I created a VPC with 2 public and 2 private subnets. I have provisioned a BeanStalk environment with the NodeJS platform (the application is NodeJS). Then I started configuring the CodePipeline; in the Add Source stage, I successfully managed to connect to the GitHub repo. Now I am at the Code Build stage, in which I want to add a Jenkins instance running on EC2. Therefore I am provisioning an EC2 instance on which I plan to install Jenkins and then reference it in the Code Build stage.
However, I am not sure which subnet the Jenkins EC2 instance should be in. Public or private?
For a more secure architecture, host Jenkins in a private subnet and set up AWS CodePipeline to use VPC endpoints. With VPC endpoints, no public IP addresses are required, and traffic between the VPC and CodePipeline does not leave the Amazon network.
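As a rough sketch of the endpoint setup (all IDs below are placeholders for your own VPC resources, and the region in the service name should match yours):

# Create an interface VPC endpoint for CodePipeline so instances in the
# private subnet can reach it without public IPs.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0example \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-east-1.codepipeline \
  --subnet-ids subnet-0exampleprivate \
  --security-group-ids sg-0example \
  --private-dns-enabled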

Move Jenkins config from one Kubernetes cluster to another

I have inherited a Jenkins installation which is used by multiple remote teams, and running on an Amazon EKS cluster. For reasons that are not really relevant, I need to move this Jenkins workload to a new EKS cluster.
Deploying Jenkins itself is not a major issue; I am doing so using Helm. The persistence of the existing Jenkins deployment is bound to an Amazon EBS volume, and the persistence of the new deployment will be as well. The mount point will be /var/jenkins_home.
I'm trying to find a simple way of migrating everything from the current jenkins installation and configuration to the new one. This includes mainly:
Authorization Strategy (RBAC)
Jobs
Plugins
Cloud and Agent Config
I know that everything required is most likely in Jenkins Home. Could I in theory just dump out the current Jenkins Home folder and import it into the new running Jenkins container using kubectl cp or something like that? Is there an easier way? Is this unsafe in some way?
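
For reference, the kubectl cp route described above would look roughly like this (pod names, namespaces, and kube contexts are placeholders, and Jenkins should be quiesced first so nothing writes to JENKINS_HOME mid-copy):

# Copy JENKINS_HOME out of the old pod to the local machine...
kubectl --context old-cluster -n jenkins cp jenkins-0:/var/jenkins_home ./jenkins_home
# ...then into the new pod at the same mount point...
kubectl --context new-cluster -n jenkins cp ./jenkins_home jenkins-0:/var/jenkins_home
# ...and restart the new pod so Jenkins re-reads the copied config.
kubectl --context new-cluster -n jenkins delete pod jenkins-0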

How to integrate kubernetes cloud plugin with jenkins

I am trying to integrate Jenkins with Kubernetes secrets in a dedicated namespace, but even after creating the service account and secret, I still see Test Connection failures.
You need to create a Jenkins global credential with the secret so that the cluster can be authenticated. Try using the default namespace initially. Also double-check your Kubernetes URL by running kubectl cluster-info.
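A minimal sketch of the service account and token setup (the names jenkins and jenkins-sa are placeholders, and kubectl create token assumes Kubernetes 1.24+):

# Create a dedicated service account for the Jenkins Kubernetes plugin.
kubectl create namespace jenkins
kubectl -n jenkins create serviceaccount jenkins-sa
# Give it rights to manage agent pods in that namespace.
kubectl -n jenkins create rolebinding jenkins-sa-admin \
  --clusterrole=admin --serviceaccount=jenkins:jenkins-sa
# Print a token to paste into a Jenkins "Secret text" credential.
kubectl -n jenkins create token jenkins-sa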

Deployment with Ansible in Jenkins pipeline

I have an Ansible playbook to deploy a Java application (jar) on AWS EC2. I would like to use it inside a Jenkins pipeline as the 'Deploy' step. To deploy on EC2, I need the private SSH key that was downloaded when the instance was created.
I have 2 choices:
Install Ansible on the machine hosting Jenkins, insert the private SSH key into Jenkins, and use the ansible-playbook plugin to deploy my app
Take a base Docker image with Ansible installed, extend it by inserting my private SSH key, and use this Docker image to deploy my app
From a security point of view, what is best?
For option 1, it's recommended to create a new user account, e.g. jenkins, on the EC2 instance without sudo privileges, or at least with password-protected sudo.
It's also a good practice to use Ansible to manage those user accounts, since that limits usage of the master key pair created by AWS.
Option 2, on the other hand, is a good fit for immutable deployment with Docker, where the configuration should be fixed before the image is built, so Ansible is not very useful at runtime in that scenario.
A different configuration means a different image has to be built.
You might still use Ansible to manage the Dockerfiles, rather than having Ansible interact with the application itself.
The two options differ more in how you design your system than in their security implications.
Let me know if you need more clarification.
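
If you go with option 1, the deploy step reduces to something like the following (a sketch; the playbook, inventory, and remote user names are placeholders, and $SSH_KEY_FILE is assumed to be provided by a Jenkins credentials binding):

# Run the playbook from the Jenkins host with a key held in Jenkins.
ansible-playbook deploy-app.yml \
  -i inventory/production \
  -u jenkins \
  --private-key "$SSH_KEY_FILE"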

GKE/Kubernetes CI/CD Pipelines With Jenkins: Gcloud Authentication Issue in Deploy stage

As part of a Jenkins pipeline to build and deploy an app to Google's Kubernetes service (GKE), I've created a script that carries out the following deployment to GKE:
checkout code
set up authentication to gcloud
create the deployment and service using kubectl
Detailed steps implemented by the script are as follows:
a) Create the docker registry authentication file (.json)
b) login to the google docker registry using the authentication file
c) initialise a git repo in the current directory
d) add the remote origin in prep for code pull
e) pull the source code for the microservice container
f) Create a kubectl configuration file and directory to authenticate to the Kubernetes cluster in Gcloud
g) Create a keyfile for a Gcloud service account that needs to authenticate to the container service
h) Activate the service account
i) Get the credentials for the container cluster from Gcloud
j) Run kubectl apply to create the kubernetes services
Full, tested, script at: https://pastebin.com/sZPrQuzD
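The authentication portion (steps g to i) boils down to roughly the following; the service-account email, key-file path, and manifest path are anonymised placeholders:

# g/h) Activate the service account from its JSON keyfile.
gcloud auth activate-service-account \
  jenkins-deployer@noon-prod.iam.gserviceaccount.com \
  --key-file=/var/lib/jenkins/keyfile.json
# i) Fetch cluster credentials for kubectl.
gcloud container clusters get-credentials jenkins-cd \
  --zone europe-west1-b --project noon-prod
# j) Create the kubernetes services.
kubectl apply -f k8s/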
If I put this sequence of steps in a script on an AWS EC2 instance and run it manually, it works. However, the Jenkins build step fails at the point where kubectl is invoked to run the service, with the following error:
gcloud container clusters get-credentials jenkins-cd --zone europe-west1-b --project noon-prod
Fetching cluster endpoint and auth data.
ERROR: (gcloud.container.clusters.get-credentials) ResponseError: code=403, message=Request had insufficient authentication scopes.
Build step 'Execute shell' marked build as failure
The full error dump from the Jenkins run is as follows:
https://pastebin.com/pSWPQ5Ei
My questions:
a) How to fix this? Surely it can't be that difficult to get authentication running from Jenkins?
b) Is this the correct way to authenticate to the gcloud container service from a Jenkins system which is not on Gcloud infrastructure at all?
Many thanks in advance for any help!
Traiano
We're working on an open source project called Jenkins X, which is a proposed subproject of the Jenkins foundation aimed at automating CI/CD on Kubernetes using Jenkins and GitOps for promotion.
We worked around some of the issues you've been having by running the Jenkins pipelines inside the Kubernetes cluster, so there's no need to authenticate with GKE.
When you merge a change to the master branch, Jenkins X creates a new semantically versioned distribution of your app (pom.xml, jar, docker image, helm chart). The pipeline then automates the generation of Pull Requests to promote your application through all of the Environments via GitOps.
Here's a demo of how to automate CI/CD with multiple environments on Kubernetes using GitOps for promotion between environments and Preview Environments on Pull Requests - using Spring Boot and nodejs apps (but we support many languages + frameworks).