We are starting to develop a CI workflow for our systems at my company.
Currently we just run a few basic tasks like build, test, and upload to Nexus.
The tech stack is a Java project built with Gradle, and Jenkins runs our builds.
Currently I'm working with some basic Groovy scripts to do what we need, but each time I copy and paste my updated code into Jenkins and run the job from the Jenkins UI to see the results, which doesn't seem like a good approach for developing this kind of automation code.
My question is: what is the best practice for building and running Jenkins jobs?
Is it possible to run them straight from IntelliJ?
Do we need to create a Jenkins project that is saved as a repository and then deployed to the Jenkins machine?
Do we need to use some IntelliJ plugins in order to work with Jenkins?
More best practices are welcome :)
Jenkins has an API - so you can do whatever you want!
But in general, for small to medium teams it's better to use a Jenkinsfile and let Jenkins pull code changes (or pull requests) from SCM and trigger builds. You can also configure hooks to trigger builds if your SCM supports this (both GitHub and Bitbucket do).
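For example, a minimal declarative Jenkinsfile for a Gradle project might look like the sketch below (assuming the Gradle wrapper is committed and the build script already defines a publish task pointing at your Nexus repository):

    // Jenkinsfile (declarative pipeline), versioned at the root of the repo
    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    sh './gradlew clean assemble'
                }
            }
            stage('Test') {
                steps {
                    sh './gradlew test'
                }
            }
            stage('Upload to Nexus') {
                steps {
                    // assumes the Gradle build defines a publish task
                    // configured with your Nexus repository coordinates
                    sh './gradlew publish'
                }
            }
        }
    }

This also answers the IntelliJ question: since the Jenkinsfile is just a text file versioned with your code, you edit it in IntelliJ like any other source file, push, and let Jenkins pick it up via a Multibranch Pipeline or Pipeline-from-SCM job, with no copy-pasting into the UI.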
If you are eventually pushing your artifacts to a Docker image, I would highly recommend Docker multi-stage builds.
If you are completely new to CI/CD, Atlassian has a lot of good resources: https://www.atlassian.com/continuous-delivery/principles/continuous-integration-vs-delivery-vs-deployment
I am a test automation engineer and I have developed an automation code repository to test the functional aspects of the product. I want this code to run whenever any developer pushes a feature or bug fix to the beta environment.
I have built the pipeline on the automation repository, and I am using a Docker image with Selenium and Maven for it. When I push any changes to my repository the pipeline triggers, but I want the same thing to happen from different repositories.
One solution I can think of is triggering the automation pipeline from the developers' pipeline through the REST API (pipeline-initiated). But this is not a foolproof solution, as the automation pipeline image will not be updated after the changes made by developers.
In short: we have automation tests in one repo and development code in another. As part of CI/CD/CT, I want all of these things to run automatically so that we get a bug-free build every time.
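For reference, if the CI server is Jenkins, the pipeline-initiated option above could be a single step in the developers' Jenkinsfile rather than a raw REST call (the job name 'automation-tests' is hypothetical):

    // Hypothetical stage in a developer repo's Jenkinsfile: after a
    // successful build, kick off the automation pipeline on the same
    // Jenkins instance without waiting for its result.
    stage('Trigger automation') {
        steps {
            build job: 'automation-tests', wait: false
        }
    }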
You could try Ansible for this scenario. Since you already have your Docker images, just wrap them with Ansible and use it to trigger the automation on push events from the different repositories.
I have been working with Jenkins for the last week, and it's going well, but at the end of my R&D I am left with a lot of confusion. These are the questions confusing me:
How do I do continuous integration of a website with Jenkins?
How do I view the analysis reports produced by the SonarQube server on my local system?
What happens when all my developers commit and push to the repository at the same time?
How do I deploy my web application to the target server using Jenkins?
How do I use SonarQube to test my website?
If you are building a .NET website, install the MSBuild plugin in Jenkins. Create a Jenkins job, and add a step to check out your website (install the required Git/SVN repo plugin). Schedule it to run nightly or hourly depending on your needs.
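If you prefer a Pipeline job over a freestyle job, the same checkout-and-schedule setup might look roughly like this sketch (the repository URL and solution name are placeholders):

    // Sketch of a scheduled pipeline: check the site out and build it.
    // 'H H * * *' lets Jenkins spread the start time to balance load.
    pipeline {
        agent any
        triggers {
            cron('H H * * *')
        }
        stages {
            stage('Checkout') {
                steps {
                    git url: 'https://example.com/your/website.git'
                }
            }
            stage('Build') {
                steps {
                    // for a .NET site this would be an MSBuild call
                    bat 'msbuild MySite.sln /p:Configuration=Release'
                }
            }
        }
    }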
Type in the URL of your SonarQube server and then you can view the reports.
This is not an issue if your developers are committing simultaneously. Usually a CI tool such as Jenkins will take care of that.
This is called continuous deployment or continuous delivery. Something like Octopus Deploy can help if you are deploying .NET applications. You can use the octo.exe tool and pass the API key as a parameter to deploy to a specific environment.
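As a rough sketch, assuming octo.exe is on the agent's PATH and the API key is stored as a Jenkins credential (the server URL, project, and environment names are placeholders):

    // Hypothetical deploy step using the Octopus CLI; the API key is
    // read from a Jenkins credential rather than hard-coded.
    stage('Deploy') {
        steps {
            withCredentials([string(credentialsId: 'octopus-api-key', variable: 'OCTO_KEY')]) {
                bat 'octo deploy-release --project "MyWebsite" --deployto "Staging" --version latest --server https://octopus.example.com --apiKey %OCTO_KEY%'
            }
        }
    }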
SonarQube is used as a quality gate for code analysis. It is not a test framework. Look into how to use the Selenium framework for automated testing instead.
I hope that I have answered your questions.
My development setup is such that for every SVN check-in, the code is built, unit tested, packaged, and published in Artifactory. Now I want to automate my deployment process and run integration (Selenium) tests as part of it. I am thinking of using Puppet to manage the deployment.
Is Puppet the correct tool for this?
What process should I use to trigger the Puppet master to initiate a fresh installation on the agents? I couldn't find any Jenkins plugin that would actually trigger Puppet. One option is to call
puppet apply ...
as a Jenkins post-build task.
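For reference, such a post-build step might look like the following sketch (the manifest path is a placeholder):

    // Hypothetical post-build step: apply the Puppet manifest locally
    // on the target agent after the artifacts are published.
    stage('Deploy') {
        steps {
            sh 'puppet apply /etc/puppet/manifests/site.pp'
        }
    }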
Any suggestions welcome, thank you.
Have a look at this Selenium Jenkins article from Sauce Labs, a service that automates cross-browser testing. Though they are a vendor with a service to sell, the article covers how to do Selenium testing yourself with Jenkins. It also exposes common pain points you are likely to run into with this approach.
A Puppet master doesn't serve the function of orchestrating client convergences. Take a look at MCollective. This is a tool that will let you trigger Puppet runs on target systems from a Jenkins agent via script commands.
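For illustration, a Jenkins step using MCollective's puppet agent plugin might look like this sketch (the node filter is a placeholder):

    // Hypothetical step: trigger an immediate Puppet run on matching
    // nodes via MCollective from the Jenkins agent.
    stage('Converge') {
        steps {
            sh 'mco puppet runonce -W role::webserver'
        }
    }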
Some MCollective getting-started material:
http://www.slideshare.net/PuppetLabs/presentation-16281121
http://puppetlabs.com/mcollective
Recently, in our company, we decided to use Ansible for deployment and continuous integration. But when I started using Ansible I didn't find modules for building Java projects with Maven, or for running JUnit or JMeter tests.
So I'm in a doubtful state: maybe I'm using Ansible in the wrong way.
When I looked at Jenkins, it can do things like build, run tests, and deploy. The thing missing in Hudson is creating/deleting instances in cloud environments like AWS.
So, in general, for what purposes do we need to use Ansible/Jenkins? For CI do I need to use a combination of Ansible and Jenkins?
Please throw some light on correct usage of Ansible.
First, Jenkins and Hudson are basically the same project. I'll refer to it as Jenkins below. See How to choose between Hudson and Jenkins?, Hudson vs Jenkins in 2012, and What is the most notable difference between Jenkins and Hudson from a user perspective? for more.
Second, Ansible isn't meant to be a continuous integration engine. It (generally) doesn't poll git repos and run builds that fail in a sane way.
When can I simply use Jenkins?
If your machine environment and deployment process are very straightforward (such as Heroku, or iron that is configured outside of your team), Jenkins may be enough. You can write a custom script that does a deploy as the final build step (or a chained step).
When can I simply use Ansible?
If you only need to "deploy" without needing to build/test, Ansible might be enough. For instance, you can run a deploy from the command line or using Ansible Tower. This is great for small projects, static sites, etc.
How do they work together?
A good combination is to use Jenkins to build, test, and save artifacts. Add a step to call Ansible or Ansible Tower to handle the actual deployment process. That allows Ansible to handle machine configuration and lets Jenkins handle the CI process.
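A sketch of that handoff in a Jenkinsfile (the playbook and inventory paths are placeholders):

    // Jenkins builds and tests; the final stage hands deployment off
    // to Ansible running on the build agent.
    stage('Deploy') {
        steps {
            sh 'ansible-playbook -i inventories/production deploy.yml'
        }
    }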
What are the alternatives to Jenkins?
I strongly recommend Thoughtworks Go (not to be confused with Go the language) instead of Jenkins. Others include CruiseControl, TravisCI, and Integrity.
Ansible is just a "glorified SSH loop".
CI is not only the software running, but the whole process of how success and failure are handled, who gets notified, and how the change is merged into the target version control.
If we only focus on the software, CI is a reactive scheduler triggered by code changes, triggering a typical build-validate-release-deploy sequence of "steps".
So with respect to the software, Ansible without additional "sugaring" is just a toolkit to run things, which can be those very steps, but it is not CI.
Ansible (without Tower) totally lacks this reactive nature.
If you want to marry Ansible with CI, you can.
Ansible Tower is a very Ansible-oriented scheduler, but if you need CI software, I don't think you necessarily need it. Any CI app capable of running a shell script would be capable of launching Ansible playbooks.
Yet unlike Ansible Tower, CI tools know how to display test reports from all the test frameworks, trigger notifications, etc.
Ansible Tower can make sense in a complex environment with lots of groups touching Ansible code... The truth is I haven't seen a single real reason to pay for it. But if a manager likes the web interface, nothing can stand against the "but others use it" logic.
I suspect the concept of Ansible Tower was a response to Puppet Enterprise.
:)
I'm prototyping a new build system using Jenkins, Gradle, and Artifactory. There seem to be conflicting, or rather overlapping, features in these tools with regard to specifying the build artifacts and their destination. I see three paths going forward:
Specify the artifact settings on the particular task in Jenkins, using the Jenkins Artifactory plugin.
Specify the artifact settings in the Gradle build scripts, using the Gradle Artifactory plugin.
Specify generic maven repo settings in the Gradle build scripts, using the standard Gradle "maven" plugin.
I see pros and cons to all of these approaches, but none of them is missing a critical feature for our builds, as far as I can see.
To further my confusion, the Gradle Artifactory plugin wiki states:
Build Server Integration - When running Gradle builds in your continuous integration build server, it is recommended to use one of the Artifactory Plugins for Jenkins, TeamCity or Bamboo to configure resolution and publishing to Artifactory with build-info capturing, via your build server UI.
So, some questions to get the conversation going:
Does it make sense to clutter the build scripts with artifact logic? It might help to add that developers don't deploy. Currently, I only see build artifacts being uploaded from the Jenkins task.
Does leaving all of this build logic in the task configuration expose us to issues, in the event that the CI server is down?
What about version control for artifact changes done through the CI interface?
I've seen simple Bamboo configurations that specify the build artifacts through the CI server UI, rather than the POMs. Is this just bad build practice?
Is there a killer tool integration feature that separates one of these approaches from the other?
How useful is the build info object? Is that only available in the Jenkins Artifactory plugin and not the Gradle Artifactory plugin?
I am really hoping to hear from existing users of these tools and what pitfalls/requirements may have led them to one of the approaches above (or perhaps even a better one that I haven't considered yet).
Does it make sense to clutter the build scripts with artifact logic? It might help to add that developers don't deploy. Currently, I only see build artifacts being uploaded from the Jenkins task.
I'd say that's the way to go. Your build server is the single point of truth, and only artifacts built on the build server should be deployed.
Does leaving all of this build logic in the task configuration expose us to issues, in the event that the CI server is down?
That one is simple - you shouldn't deploy while your CI server is down. Building on a local machine might produce wrong artifacts, which shouldn't be deployed.
What about version control for artifact changes done through the CI interface?
Not sure I understood your question.
I've seen simple Bamboo configurations that specify the build artifacts through the CI server UI, rather than the POMs. Is this just bad build practice?
This configuration ignores Maven's ability to deploy, and I am not sure I can find a good scenario to justify it. The only thing I can think of is deferred deploys, but the Artifactory plugin can take care of that.
Is there a killer tool integration feature that separates one of these approaches from the other?
Now we got to the essence :)
Well, the advantage of defining what you deploy in your build script (in the case of Gradle) is the flexibility to fine-tune every aspect of the deployment (think about the dynamic properties you might want to add in certain cases). Another very serious advantage is that your build script is source code, which means it is versioned in your version control.
The advantage of defining the deployment details in the build server configuration is that the build server is the only place the deployment should occur. So, if you don't have the deployment details in your build script, you know for sure it won't be deployed standalone.
So, how can you combine the two to get the best of both worlds?
Code your deployment logic in your Gradle script using the Artifactory plugin DSL. Provide details like username and password from properties, which exist on the build server only.
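A sketch of that setup with the Gradle Artifactory plugin DSL (the URL, repo key, and property names are placeholders; the credentials would live in gradle.properties on the build server only):

    // build.gradle: the deployment logic is versioned with the code,
    // but the secrets exist only on the CI machine.
    artifactory {
        contextUrl = 'https://artifactory.example.com/artifactory'
        publish {
            repository {
                repoKey = 'libs-release-local'
                username = project.findProperty('artifactory_user')
                password = project.findProperty('artifactory_password')
            }
            defaults {
                publications('mavenJava')
            }
        }
    }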
How useful is the build info object?
Extremely useful. The information in buildInfo is harvested during the build process, and buildInfo is the only place it exists. Having this information is the only way you will be able to reproduce this build in the future.
Is that only available in the Jenkins Artifactory plugin and not the Gradle Artifactory plugin?
The 'artifactory' and 'artifactory-publish' Gradle plugins both generate the buildInfo object, regardless of where they are running (be it your local machine or the Jenkins build server).