We're setting up a number of more-or-less static servers in AWS. They are configured primarily via Ansible, which is also the ultimate source of truth for their existence, grouping, host names, and IPs. But then there's Jenkins deploying configuration files to these servers whenever new commits are added to a git repository.
My issue is with listing the target servers directly in the Jenkinsfile, since Ansible already owns that information. How should I proceed? What are the most common ways of dealing with this?
I understand this is mostly an opinion-based topic, but maybe there's a particular Jenkins feature I don't know about...?
Thank you.
This is very subjective; here are a few common ways to do it.
Store the details somewhere accessible after the Ansible step, e.g. commit them to a GitHub repo and retrieve them within the Jenkins job.
Use the AWS APIs/CLI to retrieve the server details. You can either set up the AWS CLI on the Jenkins agent or use something like the Pipeline: AWS Steps plugin (see the sketch after this list).
Make an API call to Jenkins after the Ansible run to update the server details in the job itself.
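As a sketch of the second approach, a pipeline could query EC2 by tag at build time and hand the result to the deployment steps. The tag values and the deploy script below are assumptions for illustration, not anything from the original setup:

```groovy
// Hypothetical sketch: resolve target servers from EC2 tags at build time.
// Assumes the agent has the AWS CLI and credentials, and that the
// Ansible-managed servers carry a Role=config-target tag.
pipeline {
    agent any
    stages {
        stage('Resolve targets') {
            steps {
                script {
                    def ips = sh(
                        script: '''aws ec2 describe-instances \
                            --filters "Name=tag:Role,Values=config-target" \
                                      "Name=instance-state-name,Values=running" \
                            --query "Reservations[].Instances[].PrivateIpAddress" \
                            --output text''',
                        returnStdout: true
                    ).trim()
                    env.TARGET_HOSTS = ips.replaceAll(/\s+/, ',')
                }
            }
        }
        stage('Deploy') {
            steps {
                // Hypothetical deploy script taking a comma-separated host list.
                sh 'scripts/deploy-config.sh "$TARGET_HOSTS"'
            }
        }
    }
}
```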
I have a Jenkins instance with around 1,000 jobs that use multiple repositories from a GitHub Enterprise Server instance, github.xxxx.xxx. We are migrating the repos from Enterprise Server to Enterprise Cloud, i.e. github.com.
We also want to update the configuration of the Jenkins jobs accordingly.
I wrote a Groovy and shell script that fetches all the jobs and their configurations and updates each config via curl, replacing github.xxxx.xxx with github.com.
What I'm looking for is whether there is a better way to make Jenkins use the github.com repos instead of github.xxxx.xxx, maybe something like a DNS-level change on the Jenkins host mapping github.xxxx.xxx to github.com.
Please share any suggestions or thoughts on this.
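For reference, a minimal sketch of the kind of bulk-rewrite script described above, done from the Jenkins Script Console instead of via curl. The host names are the ones from the question; everything else is an assumption, and you'd want a JENKINS_HOME backup before running it:

```groovy
// Hypothetical Script Console sketch: rewrite every job config that
// references the old GitHub Enterprise host. Assumes a plain-text
// replacement is safe wherever the host name appears (clone URLs,
// browser URLs, etc.).
import jenkins.model.Jenkins
import javax.xml.transform.stream.StreamSource

def oldHost = 'github.xxxx.xxx'
def newHost = 'github.com'

Jenkins.instance.getAllItems(hudson.model.AbstractItem).each { item ->
    def xml = item.configFile.asString()
    if (xml.contains(oldHost)) {
        println "Rewriting ${item.fullName}"
        item.updateByXml(new StreamSource(new StringReader(xml.replace(oldHost, newHost))))
    }
}
```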
I work for a small startup. We have three environments (Production, Development, and Staging), and GitHub is our VCS.
All environments run on EC2 with Docker.
Can someone suggest a simple CI/CD solution that can trigger builds automatically after certain branches are merged, with a manual trigger option as well?
For example: if anything is merged into dev-merge, build and deploy to Development, and the same for Staging, pushing the image to ECR and rolling out the Docker update.
We tried Jenkins but we felt it was over-complicated for our small-scale infra.
We also evaluated GitHub Actions (self-hosted runners), but it needs YAML workflows to live in the repos.
We are looking for something that lets us modify the pipeline or overall flow without CI/CD config hosted in the code repo (the way Jenkins offers either a Jenkinsfile or manual job configuration via the GUI).
Any opinions about TeamCity?
I'm a little confused about Jenkins and was hoping someone could clarify a few things for me.
After reading up on Jenkins, both in the official docs and in various tutorials, here is what I've got:
If I want to set up auto deployment or anything Jenkins-related, I can just install the Jenkins Docker image, launch it, and access it via localhost. That much is clear to me.
Then I just put a Jenkinsfile into my repository, so that Jenkins knows what and how to build.
The questions that I have are:
It seems to me that Jenkins needs to be running all the time so that it can watch for repo changes and trigger building, testing, and deploying. If that is the case, I'd have to install Jenkins on my droplet server. But how do I then access the dashboard if all I have is SSH access?
If Jenkins doesn't need to be up and running 24/7, then how does it watch for any changes?
I'll be deploying my backend and frontend apps with a docker-compose file on my server; I'm not sure where Jenkins fits into all that.
How can Jenkins watch for repository changes and trigger building, testing, and deploying?
If Jenkins doesn't need to be up and running 24/7, then how does it watch for any changes?
Jenkins and other automation servers offer two options for watching source code changes (both are sketched below):
Poll SCM: download and compare the source code at predefined intervals. This is simple, but it is resource-hungry and a little outdated.
Webhooks: the optimal mechanism, offered by GitHub, Bitbucket, GitLab, etc., in which the host makes an HTTP request to your automation server on any git event, sending information such as the branch name and commit author. There is more info about webhooks and Jenkins here.
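As a sketch, in a declarative Jenkinsfile the two options map onto the triggers block; the cron spec and the GitHub trigger are just examples:

```groovy
// Hypothetical Jenkinsfile showing both change-detection options.
pipeline {
    agent any
    triggers {
        // Option 1: poll the repository roughly every five minutes.
        pollSCM('H/5 * * * *')
        // Option 2 (GitHub plugin): build when a push webhook arrives.
        // githubPush()
    }
    stages {
        stage('Build') {
            steps {
                sh 'echo building...'
            }
        }
    }
}
```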
If you don't want a 24/7 dedicated server, you can use:
Some serverless platform, or just a simple application able to receive HTTP POSTs, plus the webhook strategy. For instance, GitHub will perform a POST request to your app or serverless function, and at that point you just execute your build, test, or deploy commands (see the sketch after this list).
https://buddy.works/, which is like a mini-Jenkins.
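A minimal sketch of such a receiver, using only the JDK's built-in HTTP server from Groovy; the port, path, and deploy script are assumptions:

```groovy
// Hypothetical webhook receiver: listens for GitHub push events and runs
// a deploy script. No signature verification here; add it for real use.
import com.sun.net.httpserver.HttpServer

def server = HttpServer.create(new InetSocketAddress(8081), 0)
server.createContext('/webhook') { exchange ->
    def payload = exchange.requestBody.text      // raw JSON sent by GitHub
    println "Webhook received (${payload.length()} bytes), deploying..."
    './deploy.sh'.execute()                      // assumed deploy script
    exchange.sendResponseHeaders(200, -1)        // 200 OK, no response body
    exchange.close()
}
server.start()
println 'Listening on :8081/webhook'
```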
If I'd have to install Jenkins on my droplet server, how do I then access my dashboard if all I have is SSH access?
Yes, Jenkins is an automation server, so it needs its own dedicated server.
You can install Jenkins manually or run it with Docker on your droplet. Expose port 8080 for Jenkins. If everything is OK, just browse to the droplet's public IP that DigitalOcean gives you, e.g. http://197.154.458.456:8080; that URL should load the Jenkins dashboard. (SSH access is enough to set this up, and if you'd rather not expose the port publicly you can forward it over an SSH tunnel instead.)
Currently, I am working on a quality process to ensure that the code is acceptable. For that, I'm integrating Jenkins, SonarQube, and GitLab, which run on different servers (actually, in different Docker containers).
The idea is to run a SonarQube check every time code is pushed to GitLab, and to block commits, merges, and so on whenever SonarQube has not passed.
I have already integrated Jenkins with SonarQube, but Jenkins checks the code inside its own workspace; now imagine a developer on their laptop who needs to push their changes.
My conceptual question is simple: is it possible to integrate these technologies to do this? And if the answer is yes, which steps are necessary?
PS: I don't need to see code, configuration files, and so on. I just need something like:
Configure SonarQube to work with Jenkins
Write a script to copy that file into that folder,
...
First, "in Docker" means each tool runs in its own container.
They only need to see each other through the network, which is where a Docker Engine in Swarm mode comes in.
Second "configure Jenkins to work with SonarQube"... that is what I have done in my shop, and there isn't much to it.
Once the Jenkins SonarQube plugin is installed and the address of the SonarQube server is entered, you can configure your job to call sonar (for instance with Maven: $SONAR_MAVEN_GOAL -Dsonar.host.url=$SONAR_HOST_URL).
The analysis done in the Jenkins workspace is then published to the SonarQube server.
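In pipeline form that might look like the sketch below; the installation name 'MySonar' is an assumption (it is whatever you called the server in the Jenkins global configuration), and waitForQualityGate needs a webhook configured in SonarQube pointing back at Jenkins:

```groovy
// Hypothetical stages: run the analysis, then gate the build on the
// SonarQube quality gate result.
pipeline {
    agent any
    stages {
        stage('SonarQube analysis') {
            steps {
                withSonarQubeEnv('MySonar') {   // name from Jenkins global config
                    sh 'mvn clean verify sonar:sonar'
                }
            }
        }
        stage('Quality gate') {
            steps {
                timeout(time: 10, unit: 'MINUTES') {
                    waitForQualityGate abortPipeline: true
                }
            }
        }
    }
}
```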
A swarm server is the more modern version of this 2015 docker-compose.yml file from the marcelbirkner/docker-ci-tool-stack project.
The idea remains the same though: each element is isolated in its own container.
I haven't tried it myself, but https://gitlab.talanlabs.com/gabriel-allaigre/sonar-gitlab-plugin could be interesting in your setup.
I want to use the Amazon EC2 plugin to set up autoscaled slaves.
We aim to script everything using Chef, and so far I haven't found anything for this Jenkins plugin. I want to write a cookbook of my own, but I'm wondering what the best way to do it is.
Generally, management of the build machine is done through the EC2 plugin itself; it already installs the Jenkins remoting jar for you, so all you need to do beyond that is make sure Java is installed.
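Once the plugin is provisioning agents, jobs simply target the label configured on the plugin's AMI template, and the plugin spins instances up and down on demand. A sketch, where the label name 'ec2-chef' is an assumption:

```groovy
// Hypothetical pipeline pinned to an EC2-provisioned agent. The label must
// match the one set on the EC2 plugin's AMI template.
pipeline {
    agent { label 'ec2-chef' }
    stages {
        stage('Build') {
            steps {
                sh 'java -version'   // Java is the only hard prerequisite
            }
        }
    }
}
```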
There are two methods to use Amazon EC2 plugin and Chef together:
Run Chef to do provisioning on each slave launch or build start
Build pre-baked slave images using Chef and something like Packer, and provide them to the Jenkins Amazon EC2 plugin
Cons of the first approach:
It may take a lot of time, depending on what software you are installing with Chef, so it adds latency to build start and an extra bill for machine time.
You can't always get the same build environment you had last time, which may lead to heisenbugs and hard troubleshooting.
The second approach is known as Immutable Server. It has its cons too:
Gives you an extra bill for AMI storage.
Less flexible: you can't just change some version numbers or add a new software requirement and start a new Jenkins build; you have to rebuild your slave images first. And if you need even slightly different environments, you have to build and keep several pre-baked images.
I myself use the second approach right now. You can check the source code here; specifically, the configuration of the Amazon EC2 plugin with Chef is done here.