Isolating Secrets for Pipelines in Jenkins

We are implementing a GitOps-like CI/CD setup in Jenkins, deploying to OpenShift/Kubernetes. For the sake of simplicity, let's say we have only two repositories:
The first contains the application source code, along with a Jenkinsfile that defines the build (and also pushes images to an image repository).
The second repository defines the deployment pipeline (Jenkinsfile). This pipeline deploys the image to production (think "kubectl apply").
The problem is that pipeline (2) needs access to credentials that are used to authenticate against the production Kubernetes API. We thought of storing these credentials in Jenkins, but we don't want pipeline (1), running on the same Jenkins instance, to have access to these production credentials.
How could we solve this with Jenkins? (How should we store these credentials?)
Thank you.

Just to capture from the comments, there's effectively an answer from @RRT in another thread (https://stackoverflow.com/a/42721809/9705485):
Using the Folders and Credentials Binding plugins, you can define credentials at the folder level that are only available to the job(s) inside that folder. The folder-level credential store becomes available once you have created the folder.
Source: https://support.cloudbees.com/hc/en-us/articles/203802500-Injecting-Secrets-into-Jenkins-Build-Jobs
Another example of adding scoped credentials (this one for dockerhub credentials) is https://liatrio.com/building-docker-jenkins-pipelines/
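To make this concrete, here is a minimal sketch of what the deployment pipeline (repository 2) could look like, assuming the job lives in a restricted "production" folder that owns a Secret file credential; the credential ID 'prod-kubeconfig' and the manifest path are examples only, not names from the question:

```groovy
// Sketch of the deployment pipeline (repository 2), placed in a restricted
// "production" folder. Assumes a folder-scoped Secret file credential with
// ID 'prod-kubeconfig' holding the kubeconfig for the production cluster.
pipeline {
    agent any
    stages {
        stage('Deploy to production') {
            steps {
                // The binding only resolves for jobs inside the folder that
                // owns the credential, so the build pipeline (1), which lives
                // elsewhere, cannot use it.
                withCredentials([file(credentialsId: 'prod-kubeconfig', variable: 'KUBECONFIG')]) {
                    sh 'kubectl apply -f k8s/production/'
                }
            }
        }
    }
}
```

Because the credential is defined on the folder, a job outside that folder (such as the build pipeline) simply cannot resolve the credential ID.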

Related

Deploying configuration via Jenkinsfile to unknown amount of servers

We're setting up multiple more or less static servers in AWS. These are primarily configured via Ansible and that's also the ultimate source of truth when it comes to their existence, grouping, host names and IPs. But then there's Jenkins deploying configuration files to these servers based on new commits added to a git repository.
I'm having an issue with listing the target servers directly in a Jenkinsfile. How shall I proceed? Which are the most common ways of dealing with this?
I understand this is mostly an opinion based topic. But maybe there's a particular Jenkins feature which I don't know about...?
Thank you.
This is very subjective. The following are a few ways to do it:
Store the details somewhere accessible after the Ansible step, for example by committing them to a GitHub repo and retrieving them within the Jenkins job.
Use the AWS APIs/CLI to retrieve the server details. You can either set up the AWS CLI on the Jenkins agent or use something like the AWS Steps plugin (see the sketch below).
Make an API call to Jenkins after the Ansible script has executed and update the server details in the job itself.
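As an illustration of the second option, here is a rough Jenkinsfile sketch. It assumes the AWS CLI is configured on the agent and that the target servers carry a hypothetical Role=config-target tag; the tag, user, and paths are placeholders, not part of the original setup:

```groovy
// Sketch of option 2: discover the target servers from the AWS API at build
// time instead of listing them in the Jenkinsfile.
pipeline {
    agent any
    stages {
        stage('Discover servers') {
            steps {
                script {
                    // Ask EC2 for the private IPs of all running, tagged instances.
                    def ips = sh(
                        script: "aws ec2 describe-instances " +
                                "--filters 'Name=tag:Role,Values=config-target' " +
                                        "'Name=instance-state-name,Values=running' " +
                                "--query 'Reservations[].Instances[].PrivateIpAddress' " +
                                "--output text",
                        returnStdout: true
                    ).trim().split(/\s+/)
                    env.TARGET_IPS = ips.join(',')
                }
            }
        }
        stage('Deploy config') {
            steps {
                script {
                    env.TARGET_IPS.split(',').each { ip ->
                        // Replace with your real copy/deploy mechanism (scp, rsync, Ansible, ...).
                        sh "scp -o StrictHostKeyChecking=no config/* deploy@${ip}:/etc/myapp/"
                    }
                }
            }
        }
    }
}
```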

Publish latest build artifact from "LOCAL" Jenkins to Azure DevOps Release Pipeline?

I have a local Jenkins server running on one of my spare computers (Win10). Note that it is not exposed through any public-facing server and hence is only available within my local network. I have set it up so that it continuously fetches from my remote git repo, builds the artifacts, and archives them for a successful build. I would like to publish these archives to my Azure DevOps release pipeline. How do I do this? (And yes, I have looked through all the tutorials, but they assume that I have Jenkins running on a VM somewhere in the cloud.)
So far I have had no luck with the tutorials on the web, since I do not really have a URL to this instance of Jenkins: it is only available on my local network. I cannot really build these artifacts on a remote Jenkins server, so I am restricted to this setup for running the builds.
I am looking to have these archives that Jenkins builds be directly available within my Azure DevOps release pipeline, on every successful build. Thanks for the help!
Since nobody else has answered this, I am going to detail what I ended up doing (maybe not the best approach, but it works for my setup; suggestions are welcome!).
To interface with the Azure DevOps platform from a local machine you will need to configure a self-hosted agent (based on your specific OS), which will allow you to trigger builds and archive and upload the build artifacts to the Azure DevOps platform. This way you also do not have to poll for SCM changes (which I think is not that elegant anyway).
1. So you will need to go through the setup as outlined here for your local self-hosted agent:
Windows: https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/v2-windows?view=azure-devops
Linux: https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/v2-linux?view=azure-devops
MacOS: https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/v2-osx?view=azure-devops
NOTE: I have chosen to run the agent as a service on Windows for my setup.
2. Next, set up your Jenkins build job as you normally would, with your usual repo access. Things to keep in mind are the following:
Under "Build Triggers", select the Poll SCM option but make sure that the schedule is blank; this ensures that the trigger coming from the agent's post-commit hook works. Example setup shown below:
Under "Post-build Actions", make sure that you are archiving the artifacts as required (a Pipeline-job equivalent is sketched after these steps). Example shown below:
3. Now it is time to set up your project's "Jenkins Service Connection"; this can be accessed from the Project Settings tab at the bottom left of your project view in Azure DevOps. This is basically what lets your self-hosted agent locate and communicate with the Jenkins instance running locally (or at any other network-accessible location!). Go to Pipelines -> Service Connections and add a new service connection for Jenkins. The trick here is to use the URL for the connection as seen by your local self-hosted agent, which means it can be any IP (including localhost) that the agent can reach. The username and password are the same as the ones you set up in Jenkins. Example shown below:
NOTE: You can try to do "Verify and Save" but it will throw an error, so ignore the error or just go ahead and "Save without verification". Also you will have to do this per project, unlike the self-hosted agent setup which is per machine.
4. Now you just need to configure your build pipeline to hand jobs to the right agent and point to the right service endpoint. Under your build pipeline settings, use the agent pool that contains the self-hosted agent(s) which can access your build servers, and choose the Jenkins service connection that you just created in the step above. The rest of the setup is identical to how you would normally set up your project's build pipeline. An example would be as follows:
NOTE: The key here is the correct "Job name" (this should be the same as the one you have set up in your Jenkins build server instance) and the correct "Jenkins service connection".
5. The rest is straightforward: you just need to make sure that you have a "Download artifacts" step (NOT necessary if you do not want the artifacts on the DevOps platform) and a "Publish Artifacts" step (this is needed for your release pipeline to see the build artifact and to trigger on it if you want), after your Jenkins queue job step. Make sure to set up the correct job directories for download from your local self-hosted agent. Example setup for both steps:
NOTE: If you are having trouble with the paths for download and publish refer to this link for predefined variables for the self-hosted agents: https://learn.microsoft.com/en-us/azure/devops/pipelines/build/variables?view=azure-devops&tabs=yaml
6. Now in your release pipeline you should be able to add the artifact sources from your build pipeline. Example shown below:
Now you should be able to get the local artifacts in the cloud on the Azure DevOps platform, in case you cannot use the build agents provided by Microsoft for any reason!
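For reference, if you later convert the local freestyle job into a Pipeline job, the archiving from step 2 can be expressed in a Jenkinsfile roughly like this; the repository URL, build command, and artifact glob are placeholders, not values from the setup above:

```groovy
// Rough Pipeline-job equivalent of step 2: checkout triggered by the
// post-commit hook (no polling schedule) and artifact archiving on success.
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                git url: 'https://your.git.host/your/repo.git'
            }
        }
        stage('Build') {
            steps {
                // Whatever produces your artifacts on the Windows box.
                bat 'msbuild YourSolution.sln /p:Configuration=Release'
            }
        }
    }
    post {
        success {
            // Same effect as the "Archive the artifacts" post-build action.
            archiveArtifacts artifacts: 'build/output/**', fingerprint: true
        }
    }
}
```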

Jenkins CI/CD setup for a microservices system

I have a system with dozens of microservices, all built and released the same way: each is in a Docker container and deployed in a Kubernetes cluster.
There are multiple clusters (Dev1, Dev2, QA ... Prod).
We are using Jenkins to deploy each microservice. Each microservice has its own pipeline, and this pipeline is duplicated for each environment, like so:
DEV1 (view)
  dev1_microserviceA (job / pipeline)
  dev1_microserviceB
  ...
  dev1_microserviceX
DEV2 (view)
  dev2_microserviceA
  dev2_microserviceB
  ...
  dev2_microserviceX
...
PROD (view)
  prod_microserviceA
  prod_microserviceB
  ...
  prod_microserviceX
Each of those pipelines is almost identical; the differences are really just parameters like the environment, the name of the microservice, and the name of the git repo.
Some common code is in libraries that each pipeline uses. Is this the proper / typical setup, and the most refactored one? I'd like to avoid having to create a pipeline for each microservice and each environment, but I am not sure what my further refactoring options are. I am new to Jenkins & DevOps.
I've looked into parameterized pipelines, but I do not want to have to enter a parameter each time I need to build, I also need to be able to chain builds, and I need to see the results of all builds at a glance in each environment.
I would use Declarative Pipelines where you can define your logic in a local Jenkinsfile in your repositories.
Using Jenkins, you can have a "master" Jenkinsfile and/or project that you can inherit by invoking the upstream project. This will allow you to effectively share your instructions and reduce duplication.
What is typically never covered when it comes to CI/CD is the "management" of deployments. Since most CI/CD services are stateless, they have no notion of which applications have been deployed.
GitLab has come a long way with this but Jenkins is far behind.
At the end of the day you will have to either create a separate project for each repository/purpose, due to how Jenkins works, OR (recommended) have a "master" project that lets you pass in things like the project name, git repo URL, application-specific variables and values, and so on.
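As a rough illustration of that "master" project idea, a parameterised declarative pipeline could look like the sketch below. The shared-library name ('deploy-shared-lib') and the deployMicroservice step are hypothetical placeholders for your own common deploy logic, not an existing API:

```groovy
// Sketch of a single parameterised "master" deployment pipeline that every
// microservice/environment combination can reuse.
@Library('deploy-shared-lib') _

pipeline {
    agent any
    parameters {
        string(name: 'SERVICE', defaultValue: 'microserviceA', description: 'Microservice to deploy')
        choice(name: 'ENVIRONMENT', choices: ['dev1', 'dev2', 'qa', 'prod'], description: 'Target cluster')
        string(name: 'GIT_REPO', defaultValue: 'https://git.example.com/org/microserviceA.git', description: 'Source repository')
    }
    stages {
        stage('Checkout') {
            steps {
                git url: params.GIT_REPO
            }
        }
        stage('Deploy') {
            steps {
                // Hypothetical shared-library step wrapping the common logic.
                deployMicroservice(service: params.SERVICE, environment: params.ENVIRONMENT)
            }
        }
    }
}
```

Upstream jobs (or a seed job) can then trigger this project with the right parameter values, so you do not have to type them in by hand for every build.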

Jenkins Docker image building for Different Tenant from same code repository

I am trying to implement a CI/CD pipeline for my Spring Boot microservice deployment. I plan to use Jenkins and Kubernetes to build the CI/CD pipeline, and I have one SVN code repository for version control.
Nature Of Application
One microservice needs to be deployed for multiple tenants. The code is the same, but the database configuration is different for each tenant, and I am managing the configuration using Spring Cloud Config Server.
My Requirement
My requirement is that, when I commit code to my SVN repository, Jenkins needs to pull the code, build the project (Maven), create Docker images for multiple tenants, and deploy them.
The point here is that a commit to one code repository needs to build multiple Docker images from that same repo: one code repo, multiple image builds. The Dockerfile contains different configuration for each image, i.e. for each tenant. So my requirement is to build multiple Docker images for different tenants, with different configuration in the Dockerfile, from one code repo using Jenkins.
My Analysis
I am currently planning to do this by adding multiple Jenkins pipeline jobs connected to the same code repo. Within each pipeline job I can add the tenant-specific configuration, because the image name needs to be different for each tenant and the images need to be pushed to Docker Hub.
My Confusion
My confusion is the following:
Can I add multiple pipeline jobs from the same code repository in Jenkins?
If I can, how do I deploy the image for every tenant to Kubernetes? Do I need to add separate jobs for deployment, or is one single job enough?
You seem to be going about it a bit wrong.
Since your code is the same for all the tenants and the only difference is configuration, you are better off creating a single Docker image and deploying it along with the tenant-specific configuration when deploying to Kubernetes.
So, changes in your repository will trigger one Jenkins build and produce one Docker image. Then you can have either multiple Jenkins jobs or multiple steps in the pipeline which deploy the Docker image with tenant-specific config to Kubernetes.
If you don't want to take the approach above, here are the answers to your questions:
You can create multiple pipelines from same repository in Jenkins. (Select New item > pipeline multiple times).
You can keep a list of tenants and just loop through them, or run all deployments in parallel, in a single pipeline stage (see the sketch below).
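Here is a minimal sketch of that single-pipeline approach; the image name, tenant identifiers, and per-tenant manifest directories are examples only:

```groovy
// Sketch of the recommended approach: build and push one image, then deploy
// it once per tenant with tenant-specific configuration.
def tenants = ['tenant-a', 'tenant-b', 'tenant-c']

pipeline {
    agent any
    stages {
        stage('Build and push image') {
            steps {
                sh 'mvn -B clean package'
                sh "docker build -t myorg/myservice:${env.BUILD_NUMBER} ."
                sh "docker push myorg/myservice:${env.BUILD_NUMBER}"   // assumes docker login has already happened
            }
        }
        stage('Deploy per tenant') {
            steps {
                script {
                    // One parallel branch per tenant, all deploying the same image
                    // but with that tenant's own manifests/namespace.
                    def branches = tenants.collectEntries { tenant ->
                        [(tenant): {
                            sh "kubectl apply -f k8s/${tenant}/ --namespace ${tenant}"
                        }]
                    }
                    parallel branches
                }
            }
        }
    }
}
```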

Export Jenkins User Account and Security Settings?

I'm working with Jenkins servers in three different environments: Development, Staging, and Production.
We work out the kinks in our Jenkins jobs in dev, test them in stage, and then finally move them to production. We do that by either replicating the job in the GUI (cut and paste) or tarring up the job directory and moving it to the next environment via the command line.
I'm wondering if the move option can be done with the service accounts that run these jobs. I can see the user account directories and config files under /var/lib/jenkins/users. What I don't see are the security settings that get applied to the user from the "Configure Global Security" screen in the GUI.
For these service accounts, we have the minimal authorization of READ on Global and READ and BUILD on Jobs.
What I'd like to be able to do is prove a service account in dev and then promote it to Stage and Prod from the command line vs having to manually recreate the account in the GUI for each upstream environment. If the API key could also be moved along with it that would be great.
Any thoughts or ideas?
User permissions are in config.xml under the Jenkins root folder, in the <authorizationStrategy> section.
This file contains other global settings as well, so just copying it wholesale would not be advisable.
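Alternatively, if you only need the permission assignments rather than the whole config.xml, a Script Console sketch like the one below can dump them so they can be re-created in the other environments. This assumes matrix-based security and the older getGrantedPermissions() accessor of the matrix-auth plugin; newer plugin versions expose a different API, so treat this as a starting point only:

```groovy
// Script Console sketch: dump every permission grant so it can be re-created
// on another Jenkins instance. Assumes matrix-based authorization.
import jenkins.model.Jenkins
import hudson.security.GlobalMatrixAuthorizationStrategy

def strategy = Jenkins.instance.authorizationStrategy
if (strategy instanceof GlobalMatrixAuthorizationStrategy) {
    strategy.grantedPermissions.each { permission, sids ->
        sids.each { sid -> println "${sid} -> ${permission.id}" }
    }
} else {
    println "Not a matrix-based strategy: ${strategy.class.name}"
}
```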
Just a wild thought, but why not use a master-slave configuration and trigger builds on the desired remote machine based on some "environment" parameter? You can also look through the plugins section to see if you can find something useful, such as:
the Node Label Parameter plugin, which lets you define and select the label of the node where you want the build to run
the Copy To Slave plugin, which facilitates copying files to and from a slave
That way you'll only have one job configuration which can be executed on different environments without too much hassle.
