MQL4: Global trend/variable or text files for Single trade per signal/event - mql4

On each new bar/tick my variable is re-initialized. I am trying to execute a trade only once per signal, but the problem is that once TP is achieved, if the same trend continues, another trade gets triggered. I am thinking of storing the variable in a text file, so I am just wondering what would be the best way to handle such a variable. Sorry, I don't have code.

MT4 Global Variable objects
While MT4 does support semi-persistent objects called "Global Variables", which can survive MT4 Terminal restarts for several weeks, these are somewhat awkward to use for the purpose you sketched. The relevant functions are listed next, followed by a minimal usage sketch:
GlobalVariableCheck()
GlobalVariableSet()
GlobalVariableSetOnCondition()
GlobalVariableGet()
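A minimal, hedged sketch of how these calls can gate a one-shot action; the Global Variable name and the 0/1 flag convention are arbitrary examples, and the guarded trade logic stays your own:

bool TryClaimSignalOnce()                       // returns true exactly once per reset
  {
   string gv = "MyEA_traded_" + Symbol() + "_" + IntegerToString(Period());

   if(!GlobalVariableCheck(gv))                 // create the flag if it does not exist yet
      GlobalVariableSet(gv, 0);

   // Atomic 0 -> 1 transition: only the first caller gets true, so the trade
   // guarded by it can be opened only once until the flag is reset again.
   return(GlobalVariableSetOnCondition(gv, 1.0, 0.0));
  }

void ResetSignalClaim()                         // call this once the signal/trend is over
  {
   GlobalVariableSet("MyEA_traded_" + Symbol() + "_" + IntegerToString(Period()), 0);
  }

Because the value lives in the Terminal rather than in your EA, it is not re-initialized on a new bar, a recompile, or even a Terminal restart.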
FileSystem Text-File
While doable, this ought to be a last-resort option only. It is the slowest and least manageable approach, and once you run several, several tens, or several hundreds of MT4 Terminal instances in the same environment, the risk of file-I/O collisions becomes clearly visible.
Solution?
Try to create and maintain a singleton-style flag, to avoid multiple re-entries into a trend you have already put one trade into.
Also set up a clear definition of a trend reversal, which stops/resets that singleton flag once a new trend has formed; a minimal sketch follows below.
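A minimal sketch of such a singleton-style latch; TrendDirection() is a stand-in for your own trend rule, and the MA periods and fixed 0.10 lot are arbitrary. It could be combined with the Global-Variable flag above if the latch must survive restarts:

int lastTradedTrend = 0;                        // file-scope: survives every tick, unlike a local variable

int TrendDirection()                            // placeholder trend rule, purely illustrative
  {
   double fast = iMA(Symbol(), 0, 20, 0, MODE_EMA, PRICE_CLOSE, 0);
   double slow = iMA(Symbol(), 0, 50, 0, MODE_EMA, PRICE_CLOSE, 0);
   if(fast > slow) return(1);
   if(fast < slow) return(-1);
   return(0);
  }

void OnTick()
  {
   int trend = TrendDirection();

   if(trend != 0 && trend != lastTradedTrend)   // first signal of a NEW trend only
     {
      int    cmd = (trend > 0) ? OP_BUY : OP_SELL;
      double px  = (trend > 0) ? Ask    : Bid;
      if(OrderSend(Symbol(), cmd, 0.10, px, 3, 0, 0, "one-per-trend", 0, 0, clrNONE) >= 0)
         lastTradedTrend = trend;               // latch: no re-entry while this trend lasts
     }

   if(trend == 0)
      lastTradedTrend = 0;                      // trend dissolved: re-arm for the next one
  }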

Related

Marking a key as complete in a GroupBy | Dataflow Streaming Pipeline

We want to submit unique GCS files to our streaming pipeline, each file containing information about multiple events, and each event containing a key (for example, device_id). As part of the processing, we want to shuffle by this device_id so as to achieve some form of worker-to-device_id affinity (more background on why we want to do this is in another SO question). Once all events from the same file are complete, we want to reduce (GroupBy) by their source GCS file (which we will make a property of the event itself, something like file_id) and finally write the output to GCS (possibly as multiple files).
The reason we want to do the final GroupBy is that we want to notify an external service once a specific input file has completed processing. The only problem with this approach is that, since the data is shuffled by device_id and then grouped at the end by file_id, there is no way to guarantee that all data from a specific file_id has completed processing.
Is there something we could do about it? I understand that Dataflow provides exactly-once guarantees, which means all the events will eventually be processed, but is there a way to set a deterministic trigger that says all data for a specific key has been grouped?
EDIT
I wanted to highlight the broader problem we are facing here. The ability to mark file-level completeness would help us checkpoint different stages of the data as seen by external consumers. For example, this would allow us to trigger per-hour or per-day completeness, which is critical for us to generate reports for that window. Given that these stages/barriers (hour/day) are clearly defined on the input (GCS files are date/hour partitioned), it is only natural to expect the same of the output. But with Dataflow's model, this seems impossible.
Similarly, although Dataflow guarantees exactly-once, there will be cases where the entire pipeline needs to be restarted since something went horribly wrong - in those cases, it is almost impossible to restart from the correct input marker since there is no guarantee that what was already consumed has been completely flushed out. The DRAIN mode tries to achieve this but as mentioned, if the entire pipeline is messed up and draining itself cannot make progress, there is no way to know which part of the source should be the starting point.
We are considering using Spark, since its micro-batch-based streaming model seems to fit better. We would still like to explore Dataflow if possible, but it seems that we won't be able to achieve this without storing these checkpoints externally from within the application. If there is an alternative way of providing these guarantees from Dataflow, it would be great. The idea behind broadening this question was to see if we are missing an alternate perspective which would solve our problem.
Thanks
This is actually tricky. Neither Beam nor Dataflow has a notion of a per-key watermark, and it would be difficult to implement that level of granularity.
One idea would be to use a stateful DoFn instead of the second shuffle. This DoFn would need to receive the number of elements expected in the file (from either a side-input or some special value on the main input). Then it could count the number of elements it had processed, and only output that everything has been processed once it had seen that number of elements.
This would be assuming that the expected number of elements can be determined ahead of time, etc.

How to discard data from the first sliding window in Dataflow?

I'd like to recognize and discard incomplete windows (independent of sliding) at the start of pipeline execution. For example:
If I'm counting the number of events hourly and I start at :55 past the hour, then I should expect ~1/12th the value in the first window and then a smooth ramp-up to the "correct" averages.
Since data could be arbitrarily late in a user-defined way, the time you start the pipeline up and the windows that are guaranteed to be missing data might be only loosely connected.
You'll need some out-of-band way of indicating which windows they are. If I were implementing such a thing, I would consider a few approaches, in this order I think:
Discarding outliers based on not enough data points. Seems that it would be robust to lots of data issues, if your data set can tolerate it (a statistician might disagree)
Discarding outliers based on data points not distributed in the window (ditto)
Discarding outliers based on some characteristic of the result instead of the input (statisticians will be even more likely to say don't do this, since you are already averaging)
Using a custom pipeline option to indicate a minimum start/end time for interesting windows.
One reason to choose more robust approaches than just "start time" is in the case of down time of your data producer or any intermediate system, etc. (even with delivery guarantees, the watermark may have moved on and made all that data droppable).

How to stop MetaTrader Terminal 4 [MT4] offline chart from updating the prices

How do I stop MetaTrader Terminal 4 offline chart from updating the price on its own?
I want to update the price on my own because of the difference in timezone with my broker. I have checked all the properties and the MQL4 forum. No luck.
For truly offline charts, there is a way
While regular charts process an independent event flow received from the MT4 server, there is a chance to retain your own control over the TOHLCV data records, including time-zone shifts, synthetic bar additions and other adaptations, as needed.
You may create your own, transformed TOHLCV history and import these records via the F2 facility, called the History Centre in MT4; a minimal export sketch follows below.
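A hedged sketch of such an export, run as a Script on the source chart; the file name and the fixed 3-hour shift are arbitrary examples. It writes the chart's bars into MQL4\Files as "yyyy.mm.dd,hh:mm,O,H,L,C,V" rows, which the History Centre import dialog can then read:

void OnStart()
  {
   int shiftHours = 3;                                  // example broker-vs-local offset
   int h = FileOpen("shifted_" + Symbol() + ".csv", FILE_WRITE|FILE_CSV, ',');
   if(h == INVALID_HANDLE) return;

   for(int i = Bars - 1; i >= 0; i--)                   // oldest bar first
     {
      datetime t = Time[i] + shiftHours * 3600;         // apply the time-zone shift
      FileWrite(h,
                TimeToStr(t, TIME_DATE),                // yyyy.mm.dd
                TimeToStr(t, TIME_MINUTES),             // hh:mm
                DoubleToStr(Open[i],  Digits),
                DoubleToStr(High[i],  Digits),
                DoubleToStr(Low[i],   Digits),
                DoubleToStr(Close[i], Digits),
                Volume[i]);
     }
   FileClose(h);
  }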
How to avoid live-quote-stream updates in MetaTrader Terminal 4
The simplest way is not to log in to any trading server at all. This prevents unwanted updates from reaching your local quote-stream processor.
There used to be a way to inject fake quote-stream data into a local MT4. However, this enters a gray, if not black, zone: MetaQuotes, Inc. postulates the Server/Terminal protocol to be protected IP, considers any attempt to reverse-engineer it an unlawful violation of their rights, and legal consequences could follow, so be careful about stepping there. Anyway, it is a doable approach, with the explicit risk warning presented above.
Can't be done. Quotes get fed in from the MT4 server and get "evented" into the MetaTrader terminal.

How can I create a golden master for an MVC 4 application

I was wondering how to create a golden master approach to start creating some tests for my MVC 4 application.
"Gold master testing refers to capturing the result of a process, and
then comparing future runs against the saved “gold master” (or known
good) version to discover unexpected changes." - #brynary
It's a large application with no tests, and it will be good to start development with a golden master to ensure that the changes we are making to increase test coverage (and hopefully decrease complexity in the long run) don't break the application.
I am thinking about capturing a day's worth of real-world traffic from the IIS logs and using that to create the golden master; however, I am not sure of the easiest or best way to go about it. There is nothing out of the ordinary in the app: lots of controllers with postbacks, etc.
I am looking for a way to create a suitable golden master for an MVC 4 application hosted in IIS 7.5.
NOTES
To clarify something in regard to the comments: the "golden master" is a test you can run to verify the output of the application. It is like journalling your application and being able to replay that journal every time you make a change, to ensure you haven't broken anything.
When working with legacy code, it is almost impossible to understand it and to write code that will surely exercise all the logical paths through the code. For that kind of testing, we would need to understand the code, but we do not yet. So we need to take another approach.
Instead of trying to figure out what to test, we can test everything, a lot of times, so that we end up with a huge amount of output, about which we can almost certainly assume that it was produced by exercising all parts of our legacy code. It is recommended to run the code at least 10,000 (ten thousand) times. We will write a test to run it twice as much and save the output.
Patkos Csaba - http://code.tutsplus.com/tutorials/refactoring-legacy-code-part-1-the-golden-master--cms-20331
My question is: how do I go about doing this for an MVC application?
Regards
Basically, you want to compare two large sets of results and control the variations; in practice, an integration test. I believe that real traffic can't give you exactly the control that I think you need.
Before making any change to the production code, you should do the following:
Create X number of random inputs, always using the same random seed, so you can always generate exactly the same set over and over again. You will probably want a few thousand random inputs.
Bombard the class or system under test with these random inputs.
Capture the outputs for each individual random input.
When you run it for the first time, record the outputs in a file (or database, etc.). From then on, you can start changing your code, run the test and compare the execution output with the original output data you recorded. If they match, keep refactoring; otherwise, revert your change and you should be back to green.
This doesn't quite match your approach, though. Imagine a scenario in which a user makes a purchase of a certain product: you cannot determine the outcome of the transaction (insufficient credit, non-availability of the product, and so on), so you cannot trust your input.
However, what you now need is a way to replicate that data automatically, and the automation of the browser in this case cannot help you much.
You can try a different approach, something like the Lightweight Test Automation Framework or the MvcIntegrationTestFramework, which are the most appropriate for your scenario.

Multiple Uploads to website simultaneously

I am building an ASP.NET website that accepts a PDF as input and processes it. I am generating an intermediate file with a particular name. But I want to know: if multiple users are using the same site at the same time, how will the server handle this?
How can I handle this? Will multi-threading do the job? What about the file names of the intermediate files I am generating; how can I make sure they won't overwrite each other? And how do I achieve good performance?
Sorry if the question is too basic for you.
I'm not into .NET but it sounds like a generic problem anyways, so here are my two cents.
Like you said, multithreading (as different requests usually run in different threads) takes care of most of those kinds of problems, as every method invocation involves new objects running in a separate context.
There are exceptions, though:
- Singleton (global) objects, any of whose operations have side effects
- Other resources (files, etc.); this is exactly your case.
So in the case of files, I'd ponder these (mutually exclusive) alternatives:
(1) Never write the uploaded file to disk; instead, hold it in memory and process it there (e.g., in a byte array). In this case you're leveraging the thread-per-request protection. This cannot be applied if your files are really big.
(2) Choose highly randomized names (like a UUID) when writing them to a temporary location, so the names won't clash if two users upload at the same time.
I'd go with (1) whenever possible.
Best
