CircleCI: conditionally run terraform apply

I am trying to reuse my CircleCI config.yml in the repository and to run terraform apply only when we want to execute a specific Terraform module.
The config would do:
init
validation
plan
apply, but only if a specific Terraform module is being executed
I found some information about this, but nothing specific yet. Is there a possible approach to do this?
The Terraform version in use is 0.12.31.
Thank you
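
One possible sketch, not a verified answer: with CircleCI 2.1 you can gate the apply step on a pipeline parameter. The parameter name apply-network-module, the module target, and the image tag below are illustrative assumptions, not something from the question itself:

version: 2.1

# Hypothetical boolean parameter; set it to true (e.g. when triggering the
# pipeline via the API) only when the specific module should be applied.
parameters:
  apply-network-module:
    type: boolean
    default: false

jobs:
  terraform:
    docker:
      - image: hashicorp/terraform:0.12.31
    steps:
      - checkout
      - run: terraform init -input=false
      - run: terraform validate
      - run: terraform plan -out=tfplan -target=module.network
      # Conditional step: only runs when the pipeline parameter is true.
      - when:
          condition: << pipeline.parameters.apply-network-module >>
          steps:
            - run: terraform apply -input=false tfplan

workflows:
  terraform:
    jobs:
      - terraform

An alternative along the same lines is to put the apply job in a separate workflow guarded by a workflow-level when: clause on the same parameter.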

Related

Implementing a continuous integration pipeline in Jenkins using Zephyr, Bitbucket and Docker (Windows)

First post here, so please excuse the newbie details in the question; the format will get better :)
My question really has two parts: first, is this doable? And second, if it is, any tips or recommendations on how to do it?
I have a piece of software written in C for Zephyr RTOS (on an nRF52840 board) and version-controlled in Bitbucket. I'm trying to implement a Jenkins CI pipeline that fetches the newly pushed changes from Bitbucket, builds the code to check for errors, and then reports back.
Now, to build that code for Zephyr I need a build environment, and my solution is to run a Docker container with a Zephyr image that is able to build the code and report back whether everything looks good or not.
So basically my pipeline in Jenkins will look like:
Fetch code from Bitbucket.
Run a Docker container with the Zephyr image that builds the code.
Report the result back to Jenkins.
What I have done so far:
I got Bitbucket and Jenkins to connect, and I have a container running with a Zephyr image that I got from Docker Hub (zephyrprojectrtos/ci). Inside the container I'm able to git clone my repos, but I'm still trying to figure out how to build the code, and whether it's possible to run something like a git clone inside a Docker container from a Jenkinsfile. Any tips here? Is it possible to pass a git clone command to a Docker container from a Jenkinsfile, or do I have to include everything (if possible) in the docker run command, so that the container checks out the software, builds it, and reports the results back automatically?
I'm new to all of this (Zephyr, Docker, Jenkins), and I have no idea whether this will work or whether there is a much simpler way around it.
Thanks for your attention
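
For what it's worth, here is a minimal Jenkinsfile sketch of the idea described above. It assumes the Docker Pipeline plugin is installed and that the job is a Pipeline/Multibranch job pointed at the Bitbucket repo, so checkout scm fetches the pushed code; the west invocation and board name are illustrative guesses, not a tested setup:

pipeline {
    agent {
        // run every stage inside the Zephyr CI image
        docker { image 'zephyrprojectrtos/ci:latest' }
    }
    stages {
        stage('Checkout') {
            steps {
                checkout scm   // clone the Bitbucket repo inside the container
            }
        }
        stage('Build') {
            steps {
                // build the application; board name and source path are placeholders
                sh 'west build -b nrf52840dk_nrf52840 app'
            }
        }
    }
    post {
        always {
            echo "Build finished with result: ${currentBuild.currentResult}"
        }
    }
}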

How can I create a temporary instance of a docker image and execute a command on the instance within an Azure Pipeline?

I have multiple build and deploy pipelines for my application (User Interface, Internal APIs, External APIs, etc.). I have another build pipeline for my automated tests (which use Node JS, Nightwatch-API, Cucumber, etc.) that builds a Docker image and pushes it to the container registry.
I want to be able to pull the testing image into my deployment pipelines and execute the appropriate test script command (i.e. npm run test:InternalAPIs). My test scripts will publish the results to a separate system. I am trying to find the best way to execute the automated testing from within the deployment pipeline.
This seems like it should be an easy task within the pipeline build, I just cannot find the task that does what I need. Any assistance would be greatly appreciated.
I'd probably write a bash script for it, and then run the script with the Bash@3 task.
Alternatively, you could make use of built-in tasks such as Docker@2 and npm@1.
Refer to Microsoft's documentation for more details.
Edit: You can create a temporary instance of the docker image with the docker run command.
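
As a rough sketch of that suggestion, assuming the deployment pipeline is defined in YAML; the registry, image name, and service connection below are placeholders:

steps:
  - task: Docker@2
    inputs:
      command: login
      containerRegistry: 'my-registry-service-connection'   # hypothetical service connection

  - task: Bash@3
    inputs:
      targetType: inline
      script: |
        # pull the test image and run it as a throwaway container
        docker pull myregistry.azurecr.io/automated-tests:latest
        docker run --rm myregistry.azurecr.io/automated-tests:latest npm run test:InternalAPIs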

Why does gitlab-ci default to using git clone for each job instead of building a docker image first?

GitLab CI's default mode is to use git clone in every job in a pipeline.
This is time-consuming, especially since after cloning we need to install/update all dependencies.
I'd like to flip the order of our pipelines, and start with git clone + docker build, then run all subsequent jobs using that image without cloning and rebuilding for each job.
Am I missing something?
Is there any reason that I'd want to clone the repo separately for each job if I already have an image with the current code?
You are not missing anything. If you know what you are doing, you don't need to clone your repo for each stage in your pipeline. If you set the GIT_STRATEGY variable to none, your test jobs, or whatever they are, will run faster and you can simply run your docker pull commands and the tests that you need. Just make sure that you use the correct docker images, even if you start many parallel jobs. You could for example use CI_COMMIT_REF_NAME as part of the name of the docker image.
As to why GitLab defaults to using git clone, my guess is that this is the least surprising behavior. If you imagine someone new to GitLab and new to CI, it will be much easier for them to get up and running if each job simply clones the whole repo. You also have to remember that not everyone builds Docker images in their jobs. I would guess that the most common setup uses either a language that doesn't need to be compiled, for example Python, or a build job that produces binaries and a test job that runs those binaries. They can then use artifacts to send the binaries from the build job to the test job.
This is easy and it works. When people then realize that a lot of the time of their test jobs is spent just cloning the repository, they might look into how to change the GIT_STRATEGY, and to do other things to optimize their particular build.
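
A rough .gitlab-ci.yml sketch of that approach; the registry, image name, and test command are placeholders, not a drop-in config:

build-image:
  stage: build
  script:
    # bake the cloned repo and its dependencies into a per-branch image
    - docker build -t registry.example.com/myapp:$CI_COMMIT_REF_NAME .
    - docker push registry.example.com/myapp:$CI_COMMIT_REF_NAME

test:
  stage: test
  image: registry.example.com/myapp:$CI_COMMIT_REF_NAME
  variables:
    GIT_STRATEGY: none   # skip the default clone; the code is already in the image
  script:
    - npm test           # placeholder test command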
One of the reasons for using CI is to run against your repo in a fresh state. This cannot be guaranteed if you skip the git clone process in certain jobs. A job may modify the repo's state by deleting files or generating new ones; only the artifacts that are explicitly declared in the pipeline should be shared between jobs, nothing else.

Is it possible to place a breakpoint inside of a Jenkinsfile for debugging?

Right now sorting out a good workflow using Jenkinsfiles is a bit slow since I have to create a job, and run it from the UI in order to get feedback on whether or not it works.
I was wondering if there is a way to place a breakpoint inside of a Jenkinsfile so that I could toy around and get a feel for the libraries, methods, and variables that are available.
Is this something that is possible? Or do I have to stick to my current process of editing a Jenkinsfile in the Jenkins UI and then re-running the build?
--Edit--
I've found a workflow that works a little faster than making changes through the UI. The SSH server within Jenkins exposes a command called declarative-linter and one called replay-pipeline. Now I just develop the script locally and rerun these commands after I make an edit.
So basically, my workflow is like this:
Edit the script to my liking
Run the lint check. I have jenkins set up in my SSH config file, so basically I run this using PowerShell:
gc Jenkinsfile | ssh jenkins declarative-linter
Run the newly changed script by replaying a pipeline build:
gc Jenkinsfile | ssh jenkins replay-pipeline <name of my job with branch name>
Run the console command to tail the logs:
ssh jenkins console <name of my job with branch name>
All I did was wrap these lines in a PowerShell function (a sketch of it is below), and after I edit the script locally I run one command that performs all of this to validate the change. It's definitely more involved, but the turnaround time is a bit faster than using the Jenkins UI, plus I get to edit the script using my favorite editor. Hopefully there will be better tooling around debugging Jenkinsfiles in the future.
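
For reference, a minimal sketch of such a wrapper, assuming the same "jenkins" SSH host alias as above; the job/branch name is a placeholder:

function Invoke-JenkinsfileCheck {
    param([string]$Job = "my-job/my-branch")  # placeholder job and branch name

    # lint the local Jenkinsfile
    Get-Content Jenkinsfile | ssh jenkins declarative-linter

    # replay the last build of the job with the edited script
    Get-Content Jenkinsfile | ssh jenkins replay-pipeline $Job

    # tail the console output of the new run
    ssh jenkins console $Job
}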
This is the only way currently, although I know there are requests for future additions in this direction (a pipeline debugger). There are probably some options to debug this directly from Java, but that is not a trivial setup.
Another option is to add a user input step:
input message: "Continue?"
See the pipeline input step documentation and "Read interactive input in Jenkins pipeline to a variable".

Adding Jenkins Pipelines on Build

Does anyone know if it's possible to add a Jenkins pipeline build into a Jenkins Docker image? For example, I may have a Jenkinsfile that defines my pipeline in Groovy, and I would like to ADD that into my image when building from the Jenkins image.
Something like:
FROM jenkins:latest
ADD ./jobs/Jenkinsfile-pipeline-example $JENKINS_HOME/${someplace}
And have that pipeline ready to go when I run it.
Thanks.
It's a lot cleaner to keep the Jenkinsfile in the repository for this instead. That way, as your repositories develop you can change the build process without needing to rebuild and redeploy your Jenkins instance every time (less work and less CI downtime). Also, having the Jenkinsfile in source control allows for simpler decoupling.
If you have any questions about extending Jenkins on Docker further to handle building NodeJS, Ruby, or something else, I go into how to do all that in an article.
You can create any job in Jenkins by passing in an XML file that describes the job. See https://support.cloudbees.com/hc/en-us/articles/220857567-How-to-create-a-job-using-the-REST-API-and-cURL
The way I've done this is to manually create the job I want in Jenkins, then append config.xml to the job's URL, which shows you the XML content needed to generate the pipeline job. Save that XML and you can deliver it to your newly deployed Jenkins instance.
I use a system similar to this to generate several hundred jobs based on our external build specifications.
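
A sketch of that workflow using curl; the host, credentials, and job name are placeholders:

# Download the XML of an existing, hand-made pipeline job:
curl -u user:api-token "https://jenkins.example.com/job/my-pipeline/config.xml" -o config.xml

# Recreate the job on a freshly deployed Jenkins instance:
curl -u user:api-token -X POST \
     -H "Content-Type: application/xml" \
     --data-binary @config.xml \
     "https://jenkins.example.com/createItem?name=my-pipeline"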
