I am trying to run a Python script from a Rails app. This script can POST progress updates to the Rails server by calling:
requests.post('http://127.0.0.1:3000/progress', data={'progress':'10'})
I've tried a few methods to get the Rails server to start the Python script and immediately continue with the rest of its work. It needs to be free to respond to the user and to receive the above progress updates.
I've tried:
In a controller method that also renders a view:
system `python3 /path/to/script.py`
out, err, st = Open3.capture3('python3', 'path/to/script.py')
Both of the above in an after_action.
Both of the above in an active_job.
In every case, as soon as the script is initiated, the server basically hangs, as if it's waiting. The test script I'm using only sends one progress update then exits, so it shouldn't be causing a major delay even if the server is waiting for it to finish.
What is going on here, and how can I run the script in the background properly so my Rails server stays free to respond to it?
ActiveJob should work. Try this:
In the console, generate a new job:
rails g job SlowPython
Move your slow Python script into app/jobs/script.py.
Open the new file, app/jobs/slow_python_job.rb, and run python3 /app/jobs/script.py from the perform method.
In the controller action that should kick it off, add SlowPythonJob.perform_later.
Now when you open that controller's action in the browser, it will queue the job and run it in the background while the rest of the action runs immediately. ActiveJob also lets you call SlowPythonJob.perform_now, which runs the job and waits for it to finish before the rest of the action continues.
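The perform method described above might look like this; a minimal sketch, assuming the script was moved to app/jobs/script.py as suggested:

```ruby
# app/jobs/slow_python_job.rb -- minimal sketch; the script path matches
# the app/jobs/script.py location suggested above.
class SlowPythonJob < ApplicationJob
  queue_as :default

  def perform(*args)
    # Passing the program and its argument separately avoids shell interpolation.
    # This call still blocks, but it blocks only the job worker, not the web request.
    system("python3", Rails.root.join("app", "jobs", "script.py").to_s)
  end
end
```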
All of that tells Rails to execute the perform method of SlowPythonJob through Active Job's Async queue adapter. This works well in the development environment. Keep in mind that you will have to set up another queue adapter to handle the job queue in production, since the Async adapter drops all queued jobs if the process crashes, and it isn't as fast or as feature-rich as the alternatives.
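Switching adapters is a one-line configuration change; a sketch, assuming you pick Sidekiq for production:

```ruby
# config/environments/production.rb -- assumption: the sidekiq gem is
# installed and a Redis instance is available for it.
config.active_job.queue_adapter = :sidekiq
```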
Here is a list of queue adapters: http://edgeapi.rubyonrails.org/classes/ActiveJob/QueueAdapters.html
Related
I had a request to create a Java client that starts a Jenkins build for a specific job and gets back the result of that build.
The problem is that the system is used by multiple users, and their builds might get mixed up. Also, fetching the latest build may retrieve the previously finished build instead of the current one. Is there any way to do the build and get its result transactionally?
I don't think there's a way to get true transactional functionality (in the way that, say, Postgres is transactional), however, I think you can prevent collisions amongst multiple users by doing the following:
Have your build wrapped in a script (bash, Python, or similar) which takes out an exclusive lock on a semfile before the build and releases it after it's done. That is, a file which serves as a semaphore that the build process must be able to lock exclusively in order to proceed.
That way, if you have a build in progress, and another user triggers one, the in-progress build will have the semfile locked, and the 2nd one will block waiting for the exclusive lock on that file, getting the lock only once the 1st build is complete and has released the lock on the file.
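The wrapper could be as small as this; a Ruby sketch, where the lock path and the echoed placeholder for the real build command are assumptions:

```ruby
# build_wrapper.rb -- sketch of a semfile lock around a build; the lock
# path and the echo placeholder for the real build command are assumptions.
LOCKFILE = "/tmp/jenkins-build.lock"

File.open(LOCKFILE, File::RDWR | File::CREAT) do |f|
  f.flock(File::LOCK_EX)                  # blocks until the in-progress build releases it
  system("echo", "run the real build here")
end                                       # closing the file releases the lock
```

A second copy of this script started while the first is inside the block simply sleeps in flock until the first one finishes.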
Also, to be able to refer to each remote build after the fact, I would recommend you refer to my previous post Retrieve id of remotely triggered jenkins job.
I need to build in Jenkins only if there has been a change in the ClearCase stream. I want to check this also for nightly builds or when someone chooses to build manually, and to stop the build completely if there are no changes.
I tried the poll SCM option but it doesn't seem to work well...
Any suggestion?
If it is possible, you should monitor the update of a snapshot view and, if the log of said update reveals any new files loaded, trigger the Jenkins job.
You find a similar approach in this thread.
You don't want to do something like that in a checkin trigger. It runs on the user's client and will slow things down, not to mention that you'd somehow have to figure out how to give every client access to that snapshot view.
What can work is a cron or scheduled job that runs lshistory and does something when it finds new checkins.
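That scheduled check could be sketched like this in Ruby; the VOB path, the 15-minute window, and the date format passed to cleartool are all assumptions:

```ruby
# check_history.rb -- sketch of a periodic lshistory check; the VOB path
# and the 15-minute window are assumptions.
def new_checkins?(vob = "/vobs/myvob", minutes = 15)
  since = (Time.now - minutes * 60).strftime("%d-%b-%Y.%H:%M:%S")
  # -nco skips checkout events; stderr is discarded so a missing tool
  # or unreachable VOB just reads as "no changes".
  out = `cleartool lshistory -since #{since} -nco #{vob} 2>/dev/null`
  !out.strip.empty?   # any output means new checkin events in the window
end
```

A cron entry would then run this every few minutes and fire the Jenkins job only when it returns true.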
Yes, you could do this via a trigger, but I'd suggest a combination of a trigger and an additional script, since updating the snapshot view might be time-consuming and affect checkins.
Create a simple trigger that fires when the files you are concerned about are changed on a stream.
The trigger script should touch/create a file in some well-known network location (or perhaps write to a pipe).
The other script could be a cron (Unix) or AT (Windows) job that runs continually or each minute; if the well-known file is there, it performs the update of the snapshot view.
That script could also read the pipe written to by the trigger, if you go that route.
This is better than a cron job that has to run lshistory each time. Martina was right to suggest not doing the whole thing in a trigger, for performance and snapshot view accessibility for all clients; but a trigger that writes to a pipe or creates an empty file is cheap, and the cron/AT job that actually does the update is efficient, since it does not have to query the VOB each minute, only check the file (or the pipe).
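The file-flavoured half of that scheme might look like this; a Ruby sketch in which the flag path and the echoed placeholder for the view update are assumptions:

```ruby
# watcher.rb -- the scheduled half of the trigger/flag-file pattern: the
# trigger touches the flag, this job consumes it and updates the view.
def consume_change_flag(flag = "/tmp/clearcase_changed.flag")
  return false unless File.exist?(flag)
  File.delete(flag)     # consume the flag first, so a trigger firing
                        # during the slow update is not lost
  system("echo", "update the snapshot view here")  # placeholder command
  true
end
```

Run from cron or AT each minute, this does nothing at all on quiet minutes and only pays the cost of the view update when the trigger has actually fired.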
Is it possible to run a piece of code automatically in ASP.NET or MVC at a specific time? For example, sending mail to a group of users at exactly 8 in the morning (initiated by the server, not by a user).
No, but there are a few ways you could do it:
Quartz.net
A PowerShell script (which calls into the code you want to run) plus a job scheduler
A console application (which exposes and calls the code you want to run) plus a job scheduler
My Jenkins server is set up with two jobs, say A and B.
Job A is triggered from changes in subversion, runs unit tests and if successful, creates a WAR and deploys it to another environment.
If Job A succeeds, then Job B triggers. This job runs tests against the deployed WAR.
The problem is that the deployment process takes a while and the WAR is not ready in time for when Job B starts and tries to use it.
I'm looking for ideas on how to delay Job B until the WAR is up and running.
Is there a way, once Job B is triggered, to wait for x seconds? I really don't want to put it into the tests in Job B if I can avoid it.
Thanks
There is definitely a way for a job to wait - just put a sleep into the first shell build step. Alternatively, you can set 'Quiet period' - it's in Advanced Project Options when you configure the job.
That, however, is a band-aid solution to be employed only if other approaches fail. You may try the following: if there is a way to make the deployment process (that job A triggers) touch a file Jenkins has access to right before it finishes, then you can use the FSTrigger Plugin. See use case 3 there.
The most reliable way to make this work would be to make job A not complete until the deployment is successful, e.g. by testing for a valid response from the URL of the deployed web app. This blog post describes one way to do that.
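That "valid response from the URL" check can be scripted as a polling loop run as the last build step of job A; a Ruby sketch, where the URL, timeout, and interval are assumptions:

```ruby
# wait_for_app.rb -- poll the deployed WAR's URL until it answers with a
# 2xx status, or give up after the timeout (URL and limits are assumptions).
require "net/http"

def wait_until_up(url, timeout: 300, interval: 5)
  deadline = Time.now + timeout
  uri = URI(url)
  while Time.now < deadline
    begin
      return true if Net::HTTP.get_response(uri).is_a?(Net::HTTPSuccess)
    rescue SystemCallError, Net::OpenTimeout
      # connection refused or timed out -- the app isn't up yet, retry
    end
    sleep interval
  end
  false
end
```

Exiting job A with a non-zero status when this returns false keeps job B from ever being triggered against a dead deployment.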
I just started with Quartz.net and I have it running as a service. I created a Job and moved the resulting .dll to the Quartz folder and added a new entry to the jobs.xml file to kick it off every 3 seconds.
I updated the job .dll but it is in use by Quartz (or is locked).
Is it possible to update the .dll without restarting the Quartz service? If not what would happen to a long running job if I did stop/start the Quartz service?
You cannot update the job dll without restarting the service. Once the server has started, it loads the job dll and the loaded types stay in memory; this is how the .NET runtime works. To achieve something like dynamic reloading you would need to programmatically create app domains, etc.
If you stop the scheduler, you can pass a bool parameter specifying whether to wait for jobs to complete first. Then you would be safe: running jobs would complete and no new ones would spawn while the scheduler is shutting down.