How to make MQL5/MQL4 scripts run continuously? - mql4

How does one make a script run continuously? For example, I want the script to watch for a specific price level that can be triggered several times during the day.

Related

In Slurm, how to submit multiple experimental runs in a batch and execute them consecutively, one by one?

Submitting jobs on a GPU cluster managed by Slurm.
I am doing some experiments and, as you know, the parameters have to be tuned, which means I need to run several similar scripts with different hyperparameters. So I wrote multiple bash scripts (say, named training_n.sh), each of which looks like:
# training_n.sh
srun [command with specific model/training hyperparameters]
Then I use sbatch to execute these scripts; the sbatch script looks like:
# sbatch script
bash training_1.sh
bash training_2.sh
...
bash training_n.sh
If I have a list of srun commands in my sbatch script as shown above, how are they arranged in the queue (assuming I run on a single partition)? Are all these sruns seen as a single job, or are they seen as separate jobs?
In other words, are they queued consecutively in the squeue list and executed consecutively? Or, by contrast, will other users' jobs queue right behind the srun that is running, so that my remaining sruns can only execute after those users' jobs complete?
Additionally, are there any better ways to submit a batch of experiment scripts on a publicly used cluster? Since many people are using it, I want to complete all my planned experiments consecutively once it's my turn, instead of finishing one srun and then waiting for other users' jobs to complete before my next one can start.
If I have a list of srun commands in my sbatch script as shown above, how are they arranged in the queue (assuming I run on a single partition)? Are all these sruns seen as a single job, or are they seen as separate jobs?
In other words, are they queued consecutively in the squeue list and executed consecutively? Or, by contrast, will other users' jobs queue right behind the srun that is running, so that my remaining sruns can only execute after those users' jobs complete?
If you submit all these srun commands in a single sbatch script, you will get only one job. The reason is that srun works differently inside a job allocation than outside. If you run srun inside a job allocation (e.g. in an sbatch script), it will not create a new job, but just a job step. So in your case, you will have a single job with n job steps, which will run consecutively within your allocation.
Additionally, are there any better ways to submit a batch of experiment scripts on a publicly used cluster?
If these runs are completely independent, you should use a job array of size n. This way you create n jobs, each of which can run whenever resources become available.
Since many people are using it, I want to complete all my planned experiments consecutively once it's my turn, instead of finishing one srun and then waiting for other users' jobs to complete before my next one can start.
That might not be a good idea. If these jobs are independent, you are better off submitting them as an array. That way they can take advantage of backfill scheduling and may run sooner. You likely gain nothing by bundling them into one large job.
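The job-array suggestion above can be sketched as a single submission script. The file name, array size, and resource directives below are assumptions for illustration, not taken from the question; adjust them to your cluster:

```shell
#!/bin/bash
# Hypothetical submission script: submit_array.sbatch
#SBATCH --job-name=hparam-tuning
#SBATCH --array=1-4          # four independent array tasks, one per hyperparameter set
#SBATCH --gres=gpu:1         # one GPU per task (assumed; match your partition's setup)

# Each array task selects its own training script by task ID, so
# training_1.sh ... training_4.sh run as separate, independently schedulable jobs.
srun bash "training_${SLURM_ARRAY_TASK_ID}.sh"
```

Submitted with `sbatch submit_array.sbatch`, this shows up in squeue as one array entry, but the scheduler is free to start (and backfill) each task independently instead of serializing them inside one allocation.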

Spreadsheet Data Not Refreshing Prior to Running Google Script

I have a spreadsheet which imports stock prices from Google Finance and other sources, then calculates the portfolio value.
There is also script which saves daily valuation data.
This had been running well for nearly two years, but since early May it seems to save the same data every day, as if the stock prices are not refreshing.
Of course, if I open it manually and run the script, it all works OK.
If I don't open the sheet, the script now saves unrefreshed stock prices. What's the best way to force a refresh?
You can use a time-driven trigger in Apps Script to run your script function daily or hourly. Triggers are one of the most powerful features of Apps Script, and far easier to set up than RPA.

Is there a way to automate JMeter to run a number of test plans

Is there a way to automate JMeter to run a number of test plans?
Suppose I want to perform an experiment and run the same test repeatedly, varying a single field each time, with the report saved for every test individually.
For example: perform a number of tests varying the ramp-up time, so that I can start the run once, leave it for hours, and come back to find the whole experiment completed.
I read somewhere that ANT can be used.
I'd extract the relevant variables to .properties files and execute the tests, passing a different config file name each time. jmeter.properties, user.properties, system.properties and the like are what you are looking for.
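As a rough alternative to juggling whole .properties files, the same sweep can be scripted with JMeter's non-GUI mode and per-run -J properties. This is a sketch with assumed names: plan.jmx is a hypothetical test plan that must read the ramp-up time via ${__P(rampup)}, and the values 10/30/60/120 are placeholders:

```shell
#!/bin/bash
# Run one plan with several ramp-up values; requires JMeter on PATH.
for r in 10 30 60 120; do
  jmeter -n -t plan.jmx \
         -Jrampup="$r" \
         -l "results_rampup_${r}.jtl" \
         -e -o "report_rampup_${r}"
done
```

Here -n runs JMeter headless, -l writes a separate raw results file per run, and -e -o generates a separate HTML report directory per run, so the whole series can run unattended for hours.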

Need to run a job at a specific time

I'm trying to create a job that sends a notification to a Twilio call, so it is important to have a robust solution that makes sure jobs run at a specific time (I don't think being put into a queue is accurate enough).
What is the best solution for this type of a task?
These notifications need to happen at a specific time in the call, such as "1 minute left". Therefore it needs to be able to:
Run at arbitrary times (1:22PM or 2:45AM)
Be defined by user input (they set the time of the call)
(It would be nice if that solution could run on Heroku)
You can use Heroku cron to run jobs either daily or hourly.
Daily cron is free; hourly cron costs $3/month: http://addons.heroku.com/cron.
Typically cron runs at the time you first initiate it (i.e. if you set it up at 3pm, it'll run at 3pm every day), but you can change that by sending an e-mail to support@heroku.com.
To run code in a cron, add your code to a cron.rake file and check out the cron docs here.
FYI
Heroku's own samples for cron suggest doing a time check, i.e.
if Time.now.hour % 4 == 0 # run every four hours
...
But if you are running a daily cron, the code will run at a time that is likely to fail the conditional above. So, unless you are paying for hourly cron and only want it to run at specific hours, leave that part of the sample code out and just include your own code normally.
Running at Specific Times
Try delayed_job's :run_at column, which may give you the flexibility you need to run jobs at very specific times.
Heroku Docs: http://devcenter.heroku.com/articles/delayed-job
You need to add a cron job for that. If you are on a Linux box, you can add an entry to the crontab and specify the time at which it runs. It is very flexible. You can find the details here:
http://en.wikipedia.org/wiki/Cron
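As a sketch of the crontab approach for the 1:22 PM example from the question, assuming a hypothetical notifier script at /app/script/notify.rb:

```shell
# crontab entry format: minute hour day-of-month month day-of-week command
# Runs the notifier every day at 1:22 PM, server local time.
22 13 * * * /usr/bin/ruby /app/script/notify.rb
```

Install it with `crontab -e`. A static crontab suits fixed schedules; for times defined by user input at runtime, a queue that checks a stored timestamp (such as delayed_job's run_at column) is a better fit.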
If you want to do it in a ruby way, try whenever gem:
https://github.com/javan/whenever
For the specific case that you have mentioned, I think that you should give delayed_job a try:
https://github.com/collectiveidea/delayed_job#readme
It has a run_at option where you can specify the time at which you want the job to run.
Go to the cron jobs section in your hosting control panel.

rails backgroundjob running jobs in parallel?

I'm very happy with Bj so far; I just have this one issue:
When one process takes 1 or 2 hours to complete, all other jobs in the queue seem to wait for that one job to finish. Worse still is when uploading to a server that times out regularly.
My question: is Bj running jobs in parallel or one after another?
Thank you,
Damir
BackgroundJob will only allow one worker to run per webserver instance. This is by design to keep things simple. Here is a quote from Bj's README:
If one ignores platform specific details the design of Bj is quite simple: the
main Rails application submits jobs to a table, stored in the database. The act
of submitting triggers exactly one of two things to occur:
1) a new long running background runner to be started
2) an existing background runner to be signaled
The background runner refuses to run two copies of itself for a given
hostname/rails_env combination. For example you may only have one background
runner processing jobs on localhost in development mode.
The background runner, under normal circumstances, is managed by Bj itself -
you need do nothing to start, monitor, or stop it - it just works. However,
some people will prefer to manage their own background process; see the 'External
Runner' section below for more on this.
The runner simply processes each job in a highest-priority, oldest-in fashion,
capturing stdout, stderr, exit_status, etc. and storing the information back
into the database while logging its actions. When there are no jobs to run
the runner goes to sleep for 42 seconds; however, this sleep is interruptible,
such as when the runner is signaled that a new job has been submitted, so
under normal circumstances there will be zero lag between job submission and
job running for an empty queue.
You can learn more on Bj's GitHub page.
