I am not sure Jenkins is built for this kind of functionality, but I am curious whether it is possible.
Say I have some code that I want to run from 7am until 7pm. Typically a Jenkins job is done when its process completes, like a Python script exiting.
My goal is to have a script that runs indefinitely and is terminated by Jenkins at a certain time. Doing it this way would still give me the nice web UI, remote starting, easy hooks, etc.
Is this possible in Jenkins, or is there another platform like Jenkins that supports this type of functionality? Basically, instead of using Jenkins for 'builds', you would be using it to control services.
I have completely replaced Linux cron with Jenkins, so yes, you can do what you're wanting to do. You're only limited by your imagination :)
I have all of my Linux servers configured as nodes (via ssh) within Jenkins, and the connecting account on each of them has sudo privileges, so I can essentially have Jenkins do anything I want it to, at the OS level.
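For the 7am-to-7pm case specifically, one approach is to let a cron trigger start the job at 7am and have the shell step kill the work at 7pm. A minimal sketch, assuming GNU date is available and the script name long_running_service.sh is hypothetical:
#!/usr/bin/env bash
# Started by a Jenkins cron trigger at 7am (e.g. "H 7 * * *").
end=$(date -d '19:00' +%s)   # 7pm today as a Unix timestamp (GNU date)
now=$(date +%s)
timeout "$(( end - now ))" ./long_running_service.sh   # hypothetical script
Jenkins then records each day's run as a normal build, so the web UI, remote triggering and hooks all still work; the Build-timeout plugin is another option if you prefer configuration over scripting.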
I have recently started to mess about with Jenkins and am unsure how to deploy my web app to a basic server. I've gotten into the Pipeline (https://jenkins.io/doc/book/pipeline/) and it seems like a fantastic way to work.
Where I'm a bit stuck is in two spots:
1. Once my repo is in my workspace within Jenkins, how do I prep it so I am only deploying the files necessary for the application? For example, I don't need my src/ directory or my Vagrantfile when I'm deploying.
2. How do I deploy my app to the server? I see examples all over the place, but I am getting a bit lost since there seem to be so many ways to do this. I'm assuming scp or something like that...?
3. To build off of #2, is there a way to deploy web apps as transactions (in one shot) rather than file-by-file?
Please let me know if I can provide any information for potential answers!
I can't speak to your specific use case, but a common way to do this is the build-and-deploy model, where you have two Jenkins jobs. The "build" job checks out from source, runs build commands such as Maven or make, and lastly "archives" the build artifacts. The latter is an option under the 'Post-build Actions' section at the bottom.
In the "deploy" job, you grab the artifacts of your choice. You can fetch a single file, all of them, and everything in between. This requires the 'Copy Artifact' plug-in, which allows you to copy files generated by other jobs. Now you can run your usual deploy script in the 'Execute Command' box. Most command-line paradigms are supported out of the box, such as setting environment variables.
The instructions above assume that you want to run your application off of a host that you've provisioned as a Jenkins slave.
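As a concrete illustration of that "deploy" job, here is a hedged sketch of a shell build step; the host, paths and archive name are assumptions, and the symlink swap at the end is a common pattern for getting the near-atomic cutover asked about in #3:
#!/usr/bin/env bash
set -euo pipefail
# Only the built files (not src/ or the Vagrantfile) were copied into the
# workspace by the Copy Artifact plug-in, packaged as a single tarball.
HOST=deploy@www.example.com           # assumption: SSH access to the target
RELEASE="app-${BUILD_NUMBER}.tar.gz"  # BUILD_NUMBER is provided by Jenkins

scp "$RELEASE" "$HOST:/tmp/"
ssh "$HOST" "
  mkdir -p /var/www/releases/$BUILD_NUMBER &&
  tar -xzf /tmp/$RELEASE -C /var/www/releases/$BUILD_NUMBER &&
  ln -sfn /var/www/releases/$BUILD_NUMBER /var/www/current
"
Pointing the web server's document root at /var/www/current means each release goes live in a single symlink rename rather than file-by-file.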
Use artifacts as mentioned by Paul Back, or a third-party artifact repository server such as Artifactory.
Deploying in place is always tricky and error-prone. Why not spin up a fresh server with each new release (humanly verified once)?
Jenkins & Ansible is the answer here. This is how I deploy to production, since I am in no need to use anything like Docker (too many issues with particular app) so have to run the app natively. Quick example would be
You monitor a specific branch in gitlab / github or whatever else and then call a webhook on push / merge etc on that branch, at this point you deal with anything you need to do by running a playbook on the jenkins job that monitors that branch (jenkins).
in my case jenkins and ansible run on the same server. Jenkins runs the ansible playbook that does whatever I need to do.
for example with ansible, I copy certain files that need to be there, run configs / change filenames etc. setup nginx, run composer,
you get the point.
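For illustration, the shell step behind that webhook-triggered job could be as small as this; the inventory path and playbook name are assumptions:
#!/usr/bin/env bash
# Jenkins job fired by the GitLab/GitHub webhook on push/merge.
# GIT_COMMIT is set by the Jenkins Git plugin.
ansible-playbook -i /etc/ansible/hosts deploy.yml \
    --extra-vars "app_version=${GIT_COMMIT}"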
I feel it's a little crazy that I couldn't find anything along these lines, especially as it's an incredibly simple requirement: is there a way to deploy from Jenkins using SSH/SCP, yet write only one instance of a transfer-set/exec script?
As it stands, deploying to servers is kind of INSANE, in that I need to create a new "Deploy to SSH" task, choose a different server from the drop-down, and then copy/paste all transfer-sets and execs from the previous entry. Then do it again. And again. And again.
There must be a better way?
This may not be an immediate short-term solution to your question, but it can be used in the long run.
Your requirement sounds like you need a configuration management tool. You could use Chef, Puppet, or Ansible, and the automation of this deployment can be done with Jenkins CI.
One example of how to deploy an application on JBoss using Ansible:
- name: Deploy a hello world application
  jboss: src=/tmp/hello-1.0-SNAPSHOT.war deployment=hello.war state=present
Of course, this will require installing Ansible and a little bit of initial configuration. Ansible is one of the simplest deployment mechanisms around.
Check this for more details - http://docs.ansible.com/ansible/intro.html
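This also addresses the copy/paste problem above: the inventory lists every target server, so one playbook run replaces all the per-server transfer-sets. A hedged sketch, with file and group names as assumptions:
# production.ini lists all web servers under one [webservers] group,
# so a single command applies the same transfer-set everywhere.
ansible-playbook -i production.ini deploy-jboss.yml --limit webservers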
I have many long-running jobs that take almost a day to complete. Splitting them is not possible. If the network fails, all progress is lost.
How can a slave survive disconnections?
EDIT 1
I have around 300 slaves running in Windows tied to one single Jenkins instance.
Slaves are connected using the manual method: java -jar slave.jar -jnlpUrl <serverUrl> <slaveName>. I cannot run them as a regular Windows service because some tests manipulate GUI elements and require a real interactive session, otherwise the tests fail.
EDIT 2
According to the Jenkins Cookbook I should be using the Cygwin + OpenSSH approach instead of a custom script with the JNLP connector. Could this improve stability?
Jenkins was not originally designed for builds to survive across server or slave restarts. There is a CloudBees Long-Running Build plugin that supports long-running builds, but unfortunately it is available only to enterprise users and is still in beta.
I didn't find any free alternative, and would suggest trying to improve your network stability and to split up your long-running jobs. At the very least you can divide your tests into logical groups (test suites).
Jenkins now has a Workflow plugin. It claims to handle server restarts and loss of connectivity with slaves.
From the link
A key feature of a workflow execution is that it's suspendable. That is, while the workflow is running your script, you can shut down Jenkins or lose connectivity to a slave. When it comes back, Jenkins will still remember what it was doing, and your workflow script resumes execution as if it was never interrupted. A technique known as the "continuation-passing style" execution plays a key role in achieving this.
(not tested at all)
Edit: copied from @Jesse Glick's comments:
Workflow is open source and available for anyone running Jenkins 1.580.1 or later. CloudBees Jenkins Enterprise does include a checkpoint feature, but this is not necessary simply to have a build survive slave disconnections and Jenkins restarts: that is automatic.
Recently, in our company, we decided to use Ansible for deployment and continuous integration. But when I started using Ansible I didn't find modules for building Java projects with Maven, or modules for running JUnit tests, or JMeter tests.
So I'm in a doubtful state: maybe I'm using Ansible the wrong way.
When I looked at Jenkins, I saw it can do things like build, run tests, and deploy. The thing missing in Hudson is creating/deleting instances in cloud environments like AWS.
So, in general, for what purposes do we need to use Ansible/Jenkins? For CI do I need to use a combination of Ansible and Jenkins?
Please throw some light on correct usage of Ansible.
First, Jenkins and Hudson are basically the same project. I'll refer to it as Jenkins below. See How to choose between Hudson and Jenkins?, Hudson vs Jenkins in 2012, and What is the most notable difference between Jenkins and Hudson from a user perpective? for more.
Second, Ansible isn't meant to be a continuous integration engine. It (generally) doesn't poll git repos or run builds that can fail in a sane way.
When can I simply use Jenkins?
If your machine environment and deployment process is very straightforward (such as Heroku or iron that is configured outside of your team), Jenkins may be enough. You can write a custom script that does a deploy as the final build step (or a chained step).
When can I simply use Ansible?
If you only need to "deploy" without needing to build/test, Ansible might be enough. For instance, you can run a deploy from the commandline or using Ansible Tower. This is great for small projects, static sites, etc.
How do they work together?
A good combination is to use Jenkins to build, test, and save artifacts. Add a step to call Ansible or Ansible Tower to handle the actual deployment process. That allows Ansible to handle machine configuration and lets Jenkins handle the CI process.
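As a sketch of that handoff, the last build step can launch an Ansible Tower job template over Tower's REST API; the template id, host, and token here are assumptions for illustration:
#!/usr/bin/env bash
# Final Jenkins build step: delegate the actual deploy to Ansible Tower.
curl -sf -X POST \
    -H "Authorization: Bearer ${TOWER_TOKEN}" \
    "https://tower.example.com/api/v2/job_templates/42/launch/"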
What are the alternatives to Jenkins?
I strongly recommend Thoughtworks Go (not to be confused with Go the language) instead of Jenkins. Others include CruiseControl, TravisCI, and Integrity.
Ansible is just a "glorified SSH loop".
CI is not only the software running, but the whole process of how success and failure are handled, who gets notified, and how the change is merged into the target version control.
If we focus only on the software, CI is a reactive scheduler triggered by code changes, which triggers the typical build-validate-release-deploy sequence of "steps".
So in terms of software, Ansible without additional "sugaring" is just a toolkit to run things, which can be those very steps, but it is not CI.
Ansible (without Tower) totally lacks this reactive nature.
If you want to marry Ansible with CI, you can.
Ansible Tower is a very Ansible-oriented scheduler, but if you need CI software, I don't think you necessarily need it. Any CI app capable of running a shell script is capable of launching Ansible playbooks.
Yet unlike Ansible Tower, CI tools know how to display test reports from all the test frameworks, trigger notifications, etc.
Ansible Tower can make sense in a complex environment with lots of groups touching Ansible code... The truth is, I haven't seen a single real reason to pay for it. But if a manager likes the web interface, nothing can stand up to the "but others use it" logic.
I suspect the concept of Ansible Tower was a response to Puppet Enterprise. :)
I have a Jenkins job that runs a bash script.
In the bash script I effectively perform two actions, something like:
java ApplicationA &
PID_A=$!
java ApplicationB
kill $PID_A
but if the job is manually aborted, ApplicationA stays alive (as can be seen with ps -ef on the node machine). I cannot rely on trapping, because that won't work if Jenkins sends signal 9 (SIGKILL cannot be trapped).
It would be ideal if this job could be configured to simply kill all processes that it spawns. How can I do that?
Actually, by default Jenkins has a feature called ProcessTreeKiller which is responsible for making sure there are no processes left running after the job execution.
The link above explains how to disable that feature. Are you sure you haven't disabled it by mistake somehow?
Edit:
Following the comments by the author: based on the information about disabling ProcessTreeKiller, to achieve the inverse one must set the environment variable BUILD_ID to the build id of the Jenkins job. This way, when ProcessTreeKiller looks through the running processes to kill, it will find this one as well:
export BUILD_ID=$BUILD_ID
You can also use the BuildResultTrigger plugin: configure a second job to clean up your applications, and set it to monitor the first job for the ABORTED state as its trigger.
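In that second job, the cleanup itself can stay trivial. A sketch, assuming the stray process is the ApplicationA JVM from the script above:
#!/usr/bin/env bash
# Cleanup job triggered when the first job ends in the ABORTED state.
pkill -f 'java ApplicationA' || true   # ignore exit code 1 when nothing is left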