I have been trying to share the tfstate file between 2 different pipelines - terraform-provider-azure

I have been trying to share the tfstate file between two different pipelines. I need to deploy the infrastructure with Terraform from one pipeline and the application with Terraform from another pipeline, but using the same tfstate file.
I am able to use separate tfstate files and deploy the infrastructure with one pipeline and the application with another. Is there a way to do it with a single tfstate file?

You should be looking at Terraform backend state management, which is designed for exactly the scenario you're describing - multiple pipelines contributing to a shared state file. Here's what the backend configuration looks like for the Azure provider, using a Storage Account:
terraform {
  backend "azurerm" {
    resource_group_name  = "StorageAccount-ResourceGroup"
    storage_account_name = "abcd1234"
    container_name       = "tfstate"
    key                  = "prod.terraform.tfstate"
  }
}
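As long as both pipelines run terraform init against the same storage account, container, and key, they read and write the same state file (the azurerm backend also handles state locking via blob leases). If you prefer not to hard-code these values in both repositories, Terraform's partial backend configuration lets you pass them at init time instead - a minimal sketch, reusing the placeholder names from the block above:

terraform {
  backend "azurerm" {}
}

terraform init \
  -backend-config="resource_group_name=StorageAccount-ResourceGroup" \
  -backend-config="storage_account_name=abcd1234" \
  -backend-config="container_name=tfstate" \
  -backend-config="key=prod.terraform.tfstate"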

Related

share volume between agent block in jenkins k8s

We have a Jenkins pipeline that looks roughly like this:
pipeline {
    agent {
        kubernetes {
            inheritFrom 'default'
            yamlFile 'pipeline_agent.yaml'
        }
    }
    stages {
        stage('build backend') { ... }
        stage('build frontend') { ... }
        stage('combine to jar file') { ... }
        stage('test') { ... }
    }
}
Our pipeline_agent.yaml then contains containers for all the different stages: one for building with Gradle, one with Node for building the frontend, a separate custom one for testing, and so on.
This "works" and the output is great, but the issue is that we are provisioning a lot of resources that are not used for the entire duration of the pipeline.
For example it would be great to release all unused containers and then increase the resource requests for the test stage.
But I am not entirely sure how to do that.
I can add an agent block in each stage and then I would be able to provision containers as needed, but then I don't think I can share data/build output between them.
Is there any "standard" or good way of doing this sort of dynamic provisioning?
I think the best way is to have an agent block in each stage, freeing up the containers after that stage completes. As for sharing data between containers, you might want to use a shared persistent volume mounted in each container. This can be achieved with the NFS server provisioner.
The NFS server provisioner would create a PersistentVolume (a local one should work in your case) and an NFS StorageClass. Your agent pods can then mount and share the volume provisioned by this StorageClass.
The best and simplest way to share a temporary volume between containers in the same pod is to use an emptyDir volume, as described in the Kubernetes documentation:
Communicate Between Containers in the Same Pod Using a Shared Volume
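If you do go with an agent block per stage, one way to carry build output between the per-stage pods is the Kubernetes plugin's workspaceVolume option, which backs every stage's workspace with the same volume. A minimal sketch, assuming a PersistentVolumeClaim named 'shared-workspace' already exists (for example, provisioned from the NFS StorageClass mentioned above); the claim name, yaml files, container names and commands are placeholders:

pipeline {
    agent none
    stages {
        stage('build backend') {
            agent {
                kubernetes {
                    yamlFile 'gradle_agent.yaml'
                    defaultContainer 'gradle'
                    // Mount the shared PVC as the Jenkins workspace for this pod
                    workspaceVolume persistentVolumeClaimWorkspaceVolume(claimName: 'shared-workspace', readOnly: false)
                }
            }
            steps {
                sh './gradlew build'
            }
        }
        stage('test') {
            agent {
                kubernetes {
                    yamlFile 'test_agent.yaml'
                    defaultContainer 'test'
                    // Same claim again, so the test pod sees the build output
                    workspaceVolume persistentVolumeClaimWorkspaceVolume(claimName: 'shared-workspace', readOnly: false)
                }
            }
            steps {
                sh './gradlew test'
            }
        }
    }
}

Note that if the per-stage pods can land on different nodes, the claim needs a ReadWriteMany-capable backend, which the NFS provisioner provides.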

How to use Jenkins multi-configuration for micro-services build process?

To give some background, we have nearly 20 micro-services, each of which has its own Jenkinsfile. Although each micro-service may do some extra steps in its build, e.g. building an extra Docker image, most of the steps are the same across all micro-services, with the only difference being the parameters for each step, for example the repository path, etc.
Now, looking at Jenkins' Multi Configuration project type, it seems perfect to have one of these jobs and apply the build steps to all these projects. However, there are some doubts that I have:
Are we able to use Multi Configuration to create Multi Branch jobs for each microservice?
Are we able to support extra steps that each micro service may have while the common steps are being generated by Multi Configuration?
To be more clear, let me give you an example.
micro-service-one:
|__ Jenkinsfile
|___ { step1: maven build
       step2: docker build
       step3: docker build (extra Dockerfile)
     }
micro-service-two:
|__ Jenkinsfile
|___ { step1: maven build
       step2: docker build
     }
Now what I'm thinking is that my Multi Configuration job would look something like this:
Axis:
  name: micro-service-one micro-service-two
  docker_repo: myrepo.com/micro-service-one myrepo.com/micro-service-two
DSL Script:
multibranchPipelineJob("folder-build/${name}") {
    branchSources {
        git {
            id("bitbucket-${name}")
            remote("git@bitbucket.org:myproject/${name}.git")
            credentialsId("some_cred_id")
        }
    }
    orphanedItemStrategy {
        discardOldItems {
            daysToKeep(2)
            numToKeep(10)
        }
        defaultOrphanedItemStrategy {
            pruneDeadBranches(true)
            daysToKeepStr("2")
            numToKeepStr("10")
        }
    }
    triggers {
        periodic(5)
    }
}
But I'm not sure how to use the axis variables in the Jenkinsfile for each application. Is it even possible to have the Jenkinsfile generated by Multi Configuration?
Note: You might ask why I need this. The answer is to reduce the time we spend modifying or updating these Jenkinsfiles. When a change is needed, we have to check out nearly 20 or more repositories and modify them one by one as our environment evolves and new features or fixes are needed.

Publish Docker images using Spring Boot Plugin without credentials

I've got a Gradle project with the Spring Boot plugin and am trying to publish the image built by the plugin: gradle bootBuildImage --publishImage
The problem is that publishing fails with "either token or username/password must be provided", and we can't provide them since we have different authentication mechanisms in different environments. For example, on a local machine we're using the ecr-credentials-helper, and the pipeline uses aws ecr get-login-password | docker login.
Is there any way to force the plugin to let Docker handle the authentication? (I'm assuming the plugin uses the Docker daemon on the host.)
Currently I have written a task that generates a token file using aws ecr get-login-password and reads that token file in the bootBuildImage task, but I don't like this solution for security reasons.
Here's a solution that replicates aws ecr get-login-password within Gradle, using the AWS Java SDK. Although you could instead invoke the CLI directly from Gradle, that makes the build script more fragile, as it then depends on having a certain version of the CLI installed - particularly so since the ECR login command was a breaking change between v1 and v2 of the CLI.
This assumes you have your AWS credentials set up in some standard way that the default credentials provider in the SDK will locate.
import software.amazon.awssdk.services.ecr.EcrClient

buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath platform("software.amazon.awssdk:bom:2.17.19")
        classpath "software.amazon.awssdk:ecr"
        classpath "software.amazon.awssdk:sts" // sts is required to use roleArn in aws profiles
    }
}

plugins {
    id "java"
    id "org.springframework.boot"
}

dependencies {
    implementation platform("org.springframework.boot:spring-boot-dependencies:2.5.3")
    implementation "org.springframework.boot:spring-boot-starter"
    // Rest of app dependencies
}

bootBuildImage {
    doFirst {
        // Ask ECR for an authorization token; it is a base64-encoded "username:password" pair
        String base64Token = EcrClient.create().getAuthorizationToken().authorizationData()[0].authorizationToken()
        String[] auth = new String(base64Token.decodeBase64()).split(":", 2)
        docker {
            publishRegistry {
                username = auth[0]
                password = auth[1]
            }
        }
    }
}
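With this in place, the publish command from the question should work unchanged in both environments, for example ./gradlew bootBuildImage --publishImage --imageName=123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-app (the account ID, region and image name here are purely illustrative).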

Good practices for combining multiple projects in one Docker Swarm deployment?

We're developing an application which consists of three parts:
Backend (Java EE) (A)
Frontend (vuejs) (B)
Admin frontend (React) (C)
For each of the above, the status quo is:
Maintained in its own Git repository
Has its own docker-compose.yml
Has its own Jenkinsfile
The Jenkinsfile for each component includes a "Deploy" stage which basically just runs the following command:
docker stack deploy -c docker-compose.yml $stackName.
This approach however doesn't feel "right". We're struggling with some questions like:
How can we deploy the "complete application"? First guess was using a separate docker-compose.yml which contains services of A, B and C.
But where would we keep this file? Definitely not in one of the Git repos as it doesn't belong there. A fourth repo?
How could we start the deployment of this combined docker compose file if there are changes in one of the above repos for A, B, C?
We're aware that these questions might not be very specific, but they show our confusion regarding this topic.
Do you have any good practices how to orchestrate these three service components?
Well, one way to do that is to make the three deployments separate pipelines; then, as the last step in each application's pipeline, you would just call the corresponding deployment job. For example, for the backend:
stage("deploy backend") {
steps {
build 'deploy backend'
}
}
Then a separate pipeline to deploy all the apps would just do:
stage("deploy all") {
steps {
build 'deploy backend'
build 'deploy frontend'
build 'deploy admin frontend'
}
}
The open question would be: where do you keep the docker-compose.yml?
I'm assuming that automatic deployment is available only for your master branch, so I would still keep it in each project. You would also need an additional Jenkins configuration file (Jenkinsfile) for the deployment pipeline - meaning you would have a simple pipeline job 'deploy backend' pointing to this new Jenkinsfile in the master branch of 'backend' (a sketch of such a pipeline follows below). But then it all depends on your gitflow.
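For completeness, a minimal sketch of what the separate 'deploy backend' job's Jenkinsfile could look like, assuming it lives in the backend repository's master branch and reuses the stack-deploy command from above (the stack name 'myapp' is a placeholder):

pipeline {
    agent any
    stages {
        stage('deploy') {
            steps {
                // Deploy (or update) the stack described by this repo's docker-compose.yml
                sh 'docker stack deploy -c docker-compose.yml myapp'
            }
        }
    }
}

The frontend and admin frontend jobs would look the same, each pointing at its own repository's docker-compose.yml.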

How to add config file for config file provider plugin with groovy script in jenkins

I am using Job DSL in Jenkins. There is a seed job that generates some files that should be shared across other jobs, which could run on different nodes. If the files were not generated dynamically, the Config File Provider plugin could be used for this task. However, I need the files to be dynamic, so no Jenkins UI interaction should be needed.
Is it possible to add a file to the plugin with a Groovy script?
The only other option I could think of was to record the UI interaction and let a script replay it with modified data. In the case of a more secured Jenkins, this would also require getting authentication and CSRF tokens right.
You can use Job DSL to create config files that are managed by the Config File Provider plugin:
configFiles {
    customConfig {
        id('one')
        name('Config 1')
        comment('lorem')
        content('ipsum')
        providerId('???')
    }
}
See https://github.com/jenkinsci/job-dsl-plugin/wiki/Job-DSL-Commands#config-file
When you are using job-dsl you can read in data from anywhere that the Groovy runtime can access.
You could store shared config in a hard-coded variable in the script itself.
You could inject the data via a Jenkins parameter to your seed job.
You could retrieve the data from a file in the Git repo where you store your seed job (see the sketch below).
You could retrieve the data from a database, a REST API, and so on.
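For example, a minimal sketch of the file-based variant, assuming the seed job's repository contains a config/app.properties next to the seed script; the id, name and providerId values here are illustrative and need to match the config file type you actually use:

// readFileFromWorkspace is provided by Job DSL and reads a file from the seed job's workspace
String fileContent = readFileFromWorkspace('config/app.properties')

configFiles {
    customConfig {
        id('app-properties')
        name('App Properties')
        comment('Generated by the seed job')
        content(fileContent)
        providerId('org.jenkinsci.plugins.configfiles.custom.CustomConfig')
    }
}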
