Does anybody know about (or have experience with) a simple continuous integration system that can be run via cron and produce a static HTML report?
Tools like Jenkins and BuildBot all seem to need their own daemon processes. (Which in my case means I need to get time from sysadmins to set up. Not gonna happen.)
Ideally, I'd like an output report that looks like:
http://buildbot.buildbot.net/#/console
That is, one row per revision and one column per build config, with a green/red status in each cell.
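To make the requirement concrete, here is a rough sketch of the sort of thing I picture cron driving; the checkout path, build commands, and report location are just placeholders:

#!/usr/bin/env python3
# Rough sketch only: cron runs this periodically; it builds each config for the
# current revision and appends one row (one cell per config) to a static HTML page.
# The checkout path, build commands, and report location are placeholders.
import datetime
import pathlib
import subprocess

CHECKOUT = "/path/to/checkout"
REPORT = pathlib.Path("/var/www/ci/report.html")
CONFIGS = {
    "linux-gcc": ["make", "test"],
    "linux-clang": ["make", "test", "CC=clang"],
}

revision = subprocess.run(
    ["git", "rev-parse", "--short", "HEAD"],
    cwd=CHECKOUT, capture_output=True, text=True,
).stdout.strip()

cells = []
for name, cmd in CONFIGS.items():
    ok = subprocess.run(cmd, cwd=CHECKOUT).returncode == 0
    colour = "green" if ok else "red"
    cells.append(f'<td style="background:{colour}">{name}</td>')

timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M")
with REPORT.open("a") as report:  # one row per revision/run, one column per config
    report.write(f"<tr><td>{revision} {timestamp}</td>{''.join(cells)}</tr>\n")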
Generally, the reason why people don't run CI systems in cron is that they rapidly outgrow cron.
Related
I am developing automated tests for one of our GUI-based apps using the Pytest framework. I've created a Docker image with a series of tests for a particular GUI functionality, and it is stored in AWS ECR as an image. I've also set up an AWS Batch compute environment with a cron schedule to trigger the tests (image) at a particular time/day, which is working fine.
I have a couple of questions regarding this:
Is there a way to trigger the tests from AWS without using the cron schedule? This is to enable business users with the necessary AWS rights to run the tests independently, without waiting for the cron schedule to run them.
Secondly, what is the best way to run automated tests for more than one GUI functionality (page)? There are about 15 different types of pages within the app that need to be automated for testing. One way is to create 15 different images to test them and store them in ECR, but that sounds like an inefficient way of doing things. Is there a better alternative, like creating just one image for these 15 different pages? If so, how can I selectively run tests for a particular GUI page?
Thank you.
The answer to your first question is no, you can't manually trigger a cron-scheduled Batch job. If you wanted users to run the tests, you would need to switch to event-driven jobs and have the users create events that trigger the job: drop a file into S3, send a message to an SQS queue, etc. You could also wrap your Batch job in a State Machine, which is then trivial to execute manually.
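As a sketch of that last option, a user (or a thin internal tool) with the right IAM permissions could start the state machine directly, or drop a message on a queue that your own wiring reacts to. The state machine ARN and queue URL below are made-up placeholders, not values from the question:

# Sketch only: the state machine ARN and queue URL are hypothetical placeholders.
import json
import boto3

# Start a Step Functions state machine that wraps the Batch job.
sfn = boto3.client("stepfunctions")
sfn.start_execution(
    stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:gui-tests",
    input=json.dumps({"requested_by": "business-user"}),
)

# ...or publish an event (here an SQS message) that your own event wiring reacts to.
sqs = boto3.client("sqs")
sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/run-gui-tests",
    MessageBody=json.dumps({"suite": "all"}),
)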
I have a couple of questions regarding this: Is there a way to trigger the tests from AWS without using the cron schedule? This is to enable business users with the necessary AWS rights to run the tests independently, without waiting for the cron schedule to run them.
I think you are asking if a user can run this AWS Batch job definition manually, in addition to the cron-scheduled process?
If so, the answer is yes: either with access to the Batch management console or using the CLI (or, if you have some other GUI application, with the SDK). They would need an IAM user with role permissions for Batch, ECS, and the other AWS resources involved.
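For example, a manual run could be submitted with the SDK against the same job queue and job definition the schedule uses. The job, queue, and definition names here are placeholders, not taken from the question:

# Sketch only: job name, queue, and definition are hypothetical placeholders.
import boto3

batch = boto3.client("batch")
response = batch.submit_job(
    jobName="gui-tests-manual-run",
    jobQueue="gui-tests-queue",
    jobDefinition="gui-tests-jobdef",   # the same definition the cron schedule uses
)
print("Submitted Batch job:", response["jobId"])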
Secondly, what is the best way to run automated tests for more than one GUI functionality (page)? There are about 15 different types of pages within the app that need to be automated for testing. One way is to create 15 different images to test them and store them in ECR, but that sounds like an inefficient way of doing things. Is there a better alternative, like creating just one image for these 15 different pages? If so, how can I selectively run tests for a particular GUI page?
I would look into continuous integration testing methods. This is the problem those systems are designed to solve.
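As one concrete illustration of the single-image idea (the marker names below are hypothetical): tag each page's tests with a pytest marker, register the markers in pytest.ini, and pass a marker expression into the container when the job is submitted, e.g. pytest -m login_page via the Batch job's container command override, while the scheduled run uses plain pytest.

# Hypothetical page-specific tests living in a single image; the markers would be
# registered in pytest.ini under a "markers =" section to avoid warnings.
import pytest

@pytest.mark.login_page
def test_login_form_accepts_valid_credentials():
    ...  # page-specific assertions would go here

@pytest.mark.login_page
def test_login_form_rejects_bad_password():
    ...

@pytest.mark.dashboard_page
def test_dashboard_renders_all_widgets():
    ...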
I have a few processes on my machine that I would like to have constantly running. I like, however, how Jenkins organizes job logging, and how I can go and watch a build executing and see its STDOUT in real time.
Would it be an issue to have a job that never finishes? I've heard that over time there would be interruptions. Is there a better tool for something like this? I'd basically love to be able to see the output through the tool's web interface (and add hooks on failures).
For example, if I were hosting a Node.js site, I'd want to be able to see the output of people connecting to the website, or whatever else the site logs. Ideally, the process would run constantly for as long as I want the server running.
I have searched the whole internet for two weeks now, and asked on freenode IRC and on the Jenkins users mailing list, but got no answer, so here I am; you are my last hope (no pressure).
I have a Jenkins scripted pipeline that generates hundreds of parallel branches that have to run simultaneously on hundreds of slave nodes. At the moment it looks like the Jenkins Blue Ocean user interface is not suited for that: we reach a point where not all the steps can be displayed.
I need to provide some background so you understand our need: we have a huge project in our company with thousands of Behat/Selenium tests, which take more than 30 hours to run if done sequentially. We implemented a basic solution some time ago in which we use a queuing system (RabbitMQ) to store all the tests, with consumers that run the tests by downloading the source code from Jenkins and uploading artifacts back to Jenkins, but this is not as scalable as Jenkins native slaves and it is not maintainable enough (e.g. we don't benefit from real-time output logs and usage statistics).
I know there is an open issue that describes the problem here: https://issues.jenkins-ci.org/browse/JENKINS-41205 but, basically, I need a workaround that works by next week (our development team has been waiting for this new pipeline for a long time now).
Our pipeline looks like this at the moment:
Build --- Unit Tests --- Integration Tests --- Functional Tests ---
              |                  |                      |
            tool A            suite A           matrix-A-A-batch 0
            tool B            suite B           matrix-A-A-batch 1
            tool C                              matrix-A-A-batch 2
                                                matrix-A-A-batch 3
                                                ....
                                                "Unable to display more"
You can find a full version of our Jenkinsfile here: https://github.com/willy-ahva/pim-community-dev/blob/086e4ed48ef1a3d880ca16b6f5572f350d26eb03/Jenkinsfile (it may look complicated, but basically the real problem is the "Functional Tests" stage).
My questions are:
Am I using parallel the right way?
Is it only a Jenkins/Blue Ocean issue, and should I contribute to the issue I linked? (If yes, how? I'm not a Java dev at all.)
Should I try to use MultiJob and parallelize jobs instead of steps?
Is there any other tool besides parallel that I can use (some kind of fork or whatever)?
Thanks a lot for your help. I love what Jenkins has become with Pipeline and the Blue Ocean UI, and I really want to make it work for our team.
This is probably a poor way to do the parallel tasks. I would instead treat each parallel map entry as a worker, and put your tests into a queue / stack / data structure. Each worker thread could pop off the queue as required, and then you wouldn't sit there with a million tasks queued. You would have to be more careful with your logging so that it is apparent which test failed, but that shouldn't be too tough.
It's probably not something that's easy to fix, as it is as much a UI design issue as anything else. I would recommend that you give it a poke though! Who knows, maybe a solution will click for you?
Probably not. In my opinion this just makes things muddier.
Parallel is your option for forking.
If you really want to keep doing this, but don't want the UI to be so weird, you can stop defining each test as a stage. It'll be less clear what failed when one fails, but the UI should be happier.
We need to be able to run a Jenkins job that consumes two slaves. (Or two jobs, if we can guarantee that they run at the same time and that at least one can know which node the other is on.) The situation is that we have a heavyweight application that we need to run tests against. The tests run on one machine, the application runs on another; it's not practical to have them on the same host.
Right now, we have a Jenkins job that uses a script to spin up a dedicated application server, install the correct version and the correct data, and then run the tests against it. That means I can't use the dedicated application server for other tasks when heavyweight testing isn't going on. It also pretty much limits us to one loop; being able to assign the app server dynamically would allow more of them.
There's clearly no way to do this in core Jenkins, but I'm hoping there's some plugin or hackery to make this possible. The current test build is a Maven 2 job, but that's configurable if we have to wrap it in something else. It's kicked off by the successful completion of another job, which could be changed to start two jobs, or whatever else is required.
I just learned that the simultaneous allocation of multiple slaves can be done nicely in a pipeline job by nesting node clauses:
node('label1') {        // allocate an executor on a node matching label1
    node('label2') {    // while holding it, allocate a second node matching label2
        // your code here runs with both slaves allocated
        [...]
    }
}
See this question where Mateusz suggested that solution for a similar problem.
Let me see if I understood the problem.
You want to dynamically choose a slave and start the App Server on it.
When the App Server is running on a slave, you do not want that slave to run any other job.
But when the App Server is not running, you want to use that slave like any other slave for other jobs.
One way out would be to label the slaves, and use "Restrict where this project can be run" to make the App Server and the Test Suite jobs run only on machines with that label.
Then, on those slave nodes, set "# of executors" to 1. This will make sure that only one job runs on each at any time.
The next step would be to create a job that starts the App Server, and then kick off the Test job once the App Server start job is successful.
If your test job needs to know the server details of the machine where your App Server is running, then it becomes interesting.
While I'm interested in Jenkins as a means of providing continuous build functionality, I'm really even more interested in Jenkins as a means of exercising my application in its prod environment against unexpected changes in infrastructure beyond my control that may affect my application. I can't find a ton of information on using Jenkins this way, but I was wondering if there are others out there doing this? Essentially, I have a project that runs mvn test parameterized with my prod URL, but for these projects I don't actually do any building. Are there other tools besides Jenkins I should be considering for this? If so, why?
If you've got your tests set up to run via Maven already, I think Jenkins would be a good option. You could set up email, IM or SMS alerts using Jenkins plugins, and keep a record of the results within Jenkins.
The only downsides I can think of are:
You'll probably want to run your monitoring a lot more frequently than a regular CI job, so you might want to keep more build records than the default of 10.
If you already have a system like Nagios or OpenView to monitor system resources, it might be better to integrate app monitoring into that rather than having another source of truth.
Jenkins provides a plugin called the Status Monitor Plugin.
We have ours set to check a specific URL every 5 minutes and email us when something fails. Our problem is that it won't send emails to cell phone carrier email addresses. However, if regular email will suffice, the setup time for the plugin is less than half an hour, and it is reliable as long as the Jenkins server stays up.