Running AWS Batch jobs manually - Docker

I am developing automated tests for one of our GUI-based apps using the Pytest framework. I've created a Docker image with a series of tests for a particular piece of GUI functionality, and it is stored in AWS ECR. I've also set up an AWS Batch compute environment with a cron schedule to trigger the tests (image) at a particular time/day, which is working fine.
I have a couple of questions regarding this:
Is there a way to trigger the tests from AWS without using the cron schedule? This is to enable business users with the necessary AWS rights to run the tests independently, without waiting for the cron to run them.
Secondly, what is the best way to run automated tests for more than one piece of GUI functionality (page)? There are about 15 different types of pages within the app that need to be automated for testing. One way is to create 15 different images to test them and store them in ECR, but that sounds like an inefficient way of doing things. Is there a better alternative, like creating just one image for these 15 different pages? If so, how can I selectively run tests for a particular GUI page?
Thank you.

The answer to your first question is no, you can't manually trigger a cron-scheduled Batch job. If you wanted users to run the tests, you would need to switch to event-driven jobs and have the users create events that trigger the job: drop a file into S3, send a message to an SQS queue, etc. You could also wrap your Batch job in a State Machine, which is then trivial to execute manually.

I've couple of questions regarding this: Is there a way to trigger the
tests from AWS without using the cron schedule? This is to enable
business users with necessary AWS rights so that they can run the
tests independently without waiting for the cron to run the tests.
I think you are asking whether a user can run this AWS Batch job definition manually, in addition to the cron-scheduled process?
If so, the answer is yes: either with access to the Batch management console or using the CLI (or the SDK, if you have some other GUI application). They would need an IAM user or role with permissions for Batch, ECS, and the other AWS resources involved.
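As a sketch of what that manual trigger could look like (the queue and job-definition names below are placeholders, not from the question):

```typescript
// Sketch: manually triggering the same Batch job definition the cron uses.
// With the AWS CLI, a user with batch:SubmitJob permission can run:
//   aws batch submit-job --job-name manual-gui-tests \
//     --job-queue gui-test-queue --job-definition gui-tests
// The equivalent request body for the SubmitJob API, built as plain data:

export interface SubmitJobInput {
  jobName: string;       // any unique-ish name for this run
  jobQueue: string;      // the queue your compute environment serves
  jobDefinition: string; // the same definition the cron schedule uses
}

export function buildSubmitJobInput(
  jobQueue: string,
  jobDefinition: string,
): SubmitJobInput {
  return {
    jobName: `manual-gui-tests-${Date.now()}`,
    jobQueue,
    jobDefinition,
  };
}
```

This object is what you would pass to the SDK's SubmitJob call (e.g. `SubmitJobCommand` from `@aws-sdk/client-batch`); the console's "Submit new job" form does the same thing interactively.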
Secondly, what is the best way to run automation tests for more than one GUI functionalities (pages)? There are about 15 different types of pages within the app that needs to be automated for testing. One way is to create 15 different images to test them and store them in ECR. But it sounds little inefficient way of doing things. Is there a better alternative like creating just one image for these 15 different pages. If so, how can I selectively run tests for a particular GUI page.
I would look into continuous integration testing methods. This is the problem those systems are designed to solve.
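On the second question, one common pattern is a single image containing all 15 suites, selecting a suite at submit time by overriding the container command. A minimal sketch, assuming the tests are tagged with pytest markers (the marker names here are hypothetical):

```typescript
// Sketch: one test image, 15 page suites selected via pytest markers.
// At submit time, AWS Batch lets you override the container command, e.g.:
//   aws batch submit-job ... \
//     --container-overrides 'command=["pytest","-m","login_page"]'
// "login_page" is a hypothetical marker declared in the image's pytest.ini.

export interface ContainerOverrides {
  command: string[];
}

// Build the command override that runs only the tests marked for one page.
export function overridesForPage(pageMarker: string): ContainerOverrides {
  return { command: ["pytest", "-m", pageMarker] };
}
```

Inside the image, each test module would carry a marker such as `@pytest.mark.login_page`, so `pytest -m login_page` runs just that page's suite; with no override, the image's default command could run all 15.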

Related

Prevent multiple cron running in nest.js on docker

In Docker we have used deploy: replicas: 3 for our microservice. We have some cron jobs, and the problem is that each cron job gets called 3 times, which is not what we want. We want it to run only once. Sample of a cron in Nest.js:
@Cron(CronExpression.EVERY_5_MINUTES)
async runBiEventProcessor() {
  const calculationDate = new Date();
  Logger.log(`Bi Event Processor started at ${calculationDate}`);
}
How can I run this cron only once without changing the replicas to 1?
This is quite a generic problem when a cron or background job is part of an application that has multiple instances running concurrently.
There are multiple ways to deal with this kind of scenario. Following are some workarounds if you don't have a concrete solution:
Create a separate service only for the background processing and ensure only one instance is running at a time.
Expose the cron job as an API and trigger the API to start background processing. In this scenario, the load balancer will hand over the request to only one instance. This approach will ensure that only one instance will handle the job. You will still need an external entity to hit the API, which can be in-house or third-party.
Use repeatable jobs feature from Bull Queue or any other tool or library that provides similar features.
Bull will hand over the job to any active processor. That way, it ensures the job is processed only once by only one active processor.
Nest.js has a wrapper for the same. Read more about the Bull queue repeatable job here.
Implement a custom locking mechanism
It is not as difficult as it sounds. Many other schedulers in other frameworks work on similar principles to handle concurrency.
If you are using an RDBMS, make use of transactions and locking. Create cron records in the database and acquire a lock as soon as the first cron enters and processes. Other concurrent jobs will either fail or time out, as they will not be able to acquire the lock. But you will need to handle a few edge cases in this approach to make it bug-free.
If you are using MongoDB or any similar database that supports a TTL (time-to-live) setting and unique indexes, insert a document where one of the fields has a unique constraint; this ensures another job will not be able to insert a second document, as it will fail due to the database-level unique constraint. Also, put a TTL index on the document, so it is deleted after a configured time.
These are workarounds if you don't have any other concrete options.
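As an illustration of the locking idea above, here is a minimal sketch, with an in-memory Map standing in for the database table or collection (a real store would enforce this with a unique constraint plus a TTL):

```typescript
// Sketch of the unique-insert locking idea, with an in-memory Map standing
// in for the database. In production, the store would be a table or
// collection with a unique constraint on the lock name and a TTL for expiry.

type LockStore = Map<string, number>; // lock name -> expiry timestamp (ms)

// Try to acquire a named lock for ttlMs. Returns true only for the first
// caller; concurrent callers see the existing, unexpired entry and fail,
// exactly as a unique-constraint insert would.
export function tryAcquireLock(
  store: LockStore,
  name: string,
  ttlMs: number,
  now = Date.now(),
): boolean {
  const expiry = store.get(name);
  if (expiry !== undefined && expiry > now) return false; // someone holds it
  store.set(name, now + ttlMs); // the "insert" succeeds; TTL bounds the hold
  return true;
}
```

Each replica would call this at the top of the cron handler and return early if it fails to acquire the lock, so only one replica actually does the work.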
There are quite a few options here for how you could solve this, but I would suggest creating a NestJS microservice (or a plain Node.js one) that runs only the cron job and stores the result in a shared database, for example Redis.
Your microservice that runs the cron job does not expose anything; it only starts your cron job:
import { NestFactory } from '@nestjs/core';

const app = await NestFactory.create(WorkerModule);
await app.init();
Your WorkerModule imports the scheduler and configures it there. The result of the cron job you can write to a shared db like Redis.
Now you can still use 3 replicas but prevent registering cron jobs in all of them.

Schedule Mail batch by Rails in Cloud Foundry

I want to send an email batch at a specific time, like cron.
I think the whenever gem (https://github.com/javan/whenever) is not a fit for the Cloud Foundry environment, because Cloud Foundry can't use crontab.
Please let me know what options are available to me.
There's a node.js app here that you could use to schedule a specific rake task.
I haven't worked with Cloud Foundry, so I'm not sure if it'll serve your needs, but you can also try some of the batch-job-processing tools Rails has available: Delayed Job and Sidekiq. Those store data for recurring jobs either in your database (Delayed Job) or in a separate Redis database (Sidekiq), and both need extra processes kept up and running, so review them deeply, along with the changes you'd need to your deployment process, before choosing one. There's also Resque, and here's a tutorial for using it with Rails for scheduling tasks.
There are multiple solutions here, but the short answer is that whatever you end up doing needs to implement its own scheduler. This is because there is no cron service available to your application when it runs on CF, which means there is nothing to trigger or schedule your actions. Any project or solution that depends on cron will not work when deployed to CF. Any project that implements its own scheduler should work fine.
Some specific things I've seen people do successfully:
Use a web service that sends HTTP requests to your app at predefined intervals. The requests trigger your action. It's the service's responsibility to let you define when to trigger and to send the HTTP requests. I'm intentionally avoiding mentioning any specific services, but you can find them by searching for "cron http service" or something like that.
Import a library that has cron-like functionality. I'm not familiar with Ruby, so I don't know the landscape there. @mlabarca has mentioned a couple that you might try out. Again, check that they implement the scheduling functionality themselves and do not depend on cron. I'm more familiar with Java, where you have Quartz and Spring, which has some scheduling functionality too.
Implement a "clock" process or scheduler. This would generally be a second app that you deploy on CF. It would be lightweight and probably not have a web interface. It could be as simple as: do something, sleep, and loop forever, repeating those steps. It really depends on your needs. You could even get fancy and implement something like the first option above, where you send some sort of request to your other apps to trigger the actual events.
There are probably other solutions as well, those are just some examples to get you started.
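The "clock" option really can be that small. A hedged sketch (illustrative, not CF-specific code): a loop that runs the task, sleeps, and repeats, deployed as a second, non-web app:

```typescript
// Sketch of the "clock process" option: run the task, sleep, loop forever.
// Deployed as a second, lightweight, non-web app.

const sleep = (ms: number) =>
  new Promise<void>((resolve) => setTimeout(resolve, ms));

// Runs `task` every `intervalMs` until `ticks` runs out (pass Infinity
// in a real deployment so it loops forever).
export async function clockLoop(
  task: () => Promise<void> | void,
  intervalMs: number,
  ticks = Infinity,
): Promise<void> {
  for (let i = 0; i < ticks; i++) {
    await task();            // do something (e.g. send the mail batch)
    await sleep(intervalMs); // sleep, then loop
  }
}
```

In the mail-batch case, `task` would call whatever rake task or mailer code you already have, so the scheduling lives entirely inside your own process rather than in cron.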
Probably also worth mentioning that the Cloud Controller v3 API will have first-class features to run tasks. In this case, a "task" is some job that runs in a finite amount of time and exits (like a batch job), as opposed to the standard "app", which when run on CF should continue executing forever (i.e., if it exits, it's because of a crash). That said, I do not believe it will include a scheduler, so you'd still need something to trigger the task.

jenkins job on two slaves?

We need to be able to run a Jenkins job that consumes two slaves. (Or two jobs, if we can guarantee that they run at the same time and it's possible for at least one to know what the other is.) The situation is that we have a heavyweight application that we need to run tests against. The tests run on one machine; the application runs on another. It's not practical to have them on the same host.
Right now, we have a Jenkins job that uses a script to bring a dedicated application server up, install the correct version and the correct data, and then run the tests against it. That means I can't use the dedicated application server for other tasks when heavyweight testing isn't going on. It also pretty much limits us to one loop; being able to assign the app server dynamically would allow more of them.
There's clearly no way to do this in core Jenkins, but I'm hoping there's some plugin or hackery to make it possible. The current test build is a Maven 2 job, but that's configurable if we have to wrap it in something else. It's kicked off by the successful completion of another job, which could be changed to start two, or whatever else is required.
I just learned that the simultaneous allocation of multiple slaves can be done nicely in a pipeline job, by nesting node clauses:
node('label1') {
    node('label2') {
        // your code here
        [...]
    }
}
See this question where Mateusz suggested that solution for a similar problem.
Let me see if I understood the problem.
You want to dynamically choose a slave and start the App Server on it.
When the App Server is running on a slave, you do not want that slave to run any other job.
But when the App Server is not running, you want to use that slave like any other slave for other jobs.
One way out would be to label the slaves, and use "Restrict where this project can be run" to make the App Server and the test suite run on the machines with that label.
Then, on the slave nodes, set "# of executors" to 1. This will make sure that at any time only one job will run on each.
The next step will be to create a job to start the App Server, and then kick off the test job once the App Server start job is successful.
If your test job needs to know the details of the machine where your App Server is running, then it becomes interesting.

Simple continuous integration from cron?

Does anybody know about (or have experience with) a simple continuous integration system that can be run via cron and produce a static HTML report?
Tools like Jenkins and BuildBot all seem to need their own daemon processes. (Which in my case means I need to get time from sysadmins to set up. Not gonna happen.)
Ideally, I'd like an output report that looks like:
http://buildbot.buildbot.net/#/console
That is, one row per revision and one column per build config. With a green/red status on each.
Generally, the reason why people don't run CI systems in cron is that they rapidly outgrow cron.
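That said, the minimal version is genuinely small. A sketch of the report-generation half of such a cron-driven script (names and layout here are illustrative, not from any particular tool):

```typescript
// Sketch: the HTML-generation half of a cron-driven CI script. A crontab
// entry such as `*/15 * * * * node ci.js >> ci.log` would run builds and
// call this to regenerate the static report: one row per revision, one
// green/red cell per build config.

export interface BuildResult {
  revision: string;
  config: string;  // e.g. "linux-gcc", "linux-clang" (illustrative)
  passed: boolean;
}

// Render one table row: the revision, then a colored cell per config.
export function reportRow(
  revision: string,
  results: BuildResult[],
  configs: string[],
): string {
  const cells = configs.map((c) => {
    const r = results.find((x) => x.revision === revision && x.config === c);
    const color = r === undefined ? "gray" : r.passed ? "green" : "red";
    return `<td style="background:${color}">${color}</td>`;
  });
  return `<tr><td>${revision}</td>${cells.join("")}</tr>`;
}
```

The rest of the script would just run each build config, collect the results, and write the assembled rows into a static HTML file that any web server (or a shared directory) can serve, with no daemon involved.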

Jenkins for monitoring application in prod

While I'm interested in Jenkins as a means to provide continuous-build functionality, I'm really even more interested in Jenkins as a means to exercise my application in its prod environment against unexpected changes in infrastructure beyond my control that may affect my application. I can't find a ton of information on using Jenkins this way, but I was wondering if there are others out there doing it? Essentially I have a project that runs maven test, parameterized with my prod URL, but for these projects I don't actually do any building. Are there other tools besides Jenkins I should be considering for this? If so, why?
If you've got your tests set up to run via Maven already, I think Jenkins would be a good option. You could set up email, IM or SMS alerts using Jenkins plugins, and keep a record of the results within Jenkins.
The only down sides I can think of are:
You'll probably want to run your monitoring a lot more frequently than a regular CI job, so you might want to keep more build records than the default of 10.
If you already have a system like Nagios or OpenView to monitor system resources, it might be better to integrate app monitoring into that rather than having another source of truth.
Jenkins provides a plugin called the Status Monitor Plugin.
We have ours set to check a specific URL every 5 minutes and email us when something fails. Our problem is that it won't send emails to cell-phone carrier email addresses. However, if regular email will suffice, the setup time for the plugin is less than half an hour, and it is reliable as long as the Jenkins server stays up.
