Interactions between an Expert Advisor and Indicators in MQL4 - trading

Is it possible to read changes in a pre-built indicator (for example, changes in its value) from an Expert Advisor and, of course, automate trades based on those readings?
What is the function that is responsible for doing this?
I have tried to look this up on Google, but it appears I can only do things like track object creation or deletion via so-called Chart Events... maybe I'm missing something?

Yes, it is possible.
MetaTrader4 Terminal is a software platform that allows you to launch:
1x soloist Expert Advisor - an event-driven code-execution algorithm, per each MT4.Graph
Nx concurrent Custom Indicators - event-driven, restricted code-base, per each MT4.Graph
1x soloist Script - an asynchronous code-execution unit, per each MT4.Graph
This inventory is important, as you have no other means of automating a complex trading algorithm than these.
Technical Indicators are all executed in one shared thread, which limits their real-time robustness, and some restrictions apply to the operations permitted in code that is coded / compiled / executed in an Indicator (all of these restrictions aim at avoiding any possible blocking situation, given the single thread shared by all Indicators).
This said, you might have noticed that both Expert Advisors and Technical Indicators are externally synchronised (forget for the moment about the non-parallel, shared-thread execution with its principally nanosecond-scale asynchronicity due to resource / code-execution scheduling) and bound to an externally issued FX-market EVENT arriving as a signal: once a price moves, the MT4 server sends a QUOTE message downstream to the MT4 Terminal, a.k.a. a Tick, which, once (if) received, triggers the MQL4 code-execution facilities on localhost:
void OnTick(){ ... }           // in the case of an Expert Advisor
int  OnCalculate( ... ){ ... } // in the case of a Custom Technical Indicator
What is the function that is responsible for this?
Directly? None.
Indirectly? The one you construct yourself and make responsible for registering / monitoring changes of such a value (be it internally, in the MQL4 domain, or externally via a distributed-processing model, including a GPU-cluster one for more demanding processing where the internal shared-thread execution fails to meet the timing constraints):
bool hasAnIndicatorChanged( double aTol = 0.00001 ){            // DERIVATION
     static double prevVALUE = EMPTY_VALUE;                     // .DEF  last observed value
            double aNewVALUE = iBWMFI( _Symbol,                 // .SYM  current chart symbol
                                       PERIOD_CURRENT,          // .PERIOD current timeframe
                                       0                        // .HOT[0] the current bar
                                       );
     if ( MathAbs( aNewVALUE - prevVALUE ) <= aTol ){
          prevVALUE = aNewVALUE;
          return( false );                                      // no change beyond tolerance
     }
     else {
          prevVALUE = aNewVALUE;
          return( true );                                       // the indicator value has changed
     }
}
The can-do principle
One may create a similarly trivial, or a bit more complex, monitor and query it from the Expert Advisor each time OnTick() is called (thus aligning the code execution with the internal event handler at no additional cost).
void OnTick(){
     if ( hasAnIndicatorChanged() ){
          ...
     }
     ...
}

OK, I found it.
In order to use a custom indicator as a tool for a buy / trade decision inside an Expert Advisor, the function to use is iCustom().

Related

Sharing Beam State across different DoFns

Is Beam State shared across different DoFns?
Let's say I have 2 DoFns:
StatefulDoFn1: { myState.write(1)}
StatefulDoFn2: { myState.read() ; do something ... output}
And then the pipeline in pseudocode:
pipeline = readInput ... applyDoFn(StatefulDoFn1) ... map{ do something else } ... applyDoFn(StatefulDoFn2)
If I annotate myState identically in both StatefulDoFns, will what I write in StatefulDoFn1 be visible to StatefulDoFn2? We implemented a pipeline with the assumption that the answer is yes, but it seems to be no.
No, state is local to each stateful DoFn, and it is also actually local to each key (and window, if you are using a window) inside that DoFn.
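To make the scoping concrete, here is a minimal sketch in the Beam Java SDK (class names, the key type, and the VarIntCoder choice are illustrative assumptions, not from the question): two DoFns both declare a state cell named "myState", but each gets its own independent cell, further scoped per key and window.

import org.apache.beam.sdk.coders.VarIntCoder;
import org.apache.beam.sdk.state.StateSpec;
import org.apache.beam.sdk.state.StateSpecs;
import org.apache.beam.sdk.state.ValueState;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.values.KV;

// Writes into ITS OWN "myState" cell (one cell per key and window, private to this DoFn).
class StatefulDoFn1 extends DoFn<KV<String, Integer>, KV<String, Integer>> {
  @StateId("myState")
  private final StateSpec<ValueState<Integer>> myStateSpec = StateSpecs.value(VarIntCoder.of());

  @ProcessElement
  public void process(ProcessContext c, @StateId("myState") ValueState<Integer> myState) {
    myState.write(1);              // only ever visible to StatefulDoFn1
    c.output(c.element());
  }
}

// Declares a cell with the same id, but it is a DIFFERENT, independent cell.
class StatefulDoFn2 extends DoFn<KV<String, Integer>, Integer> {
  @StateId("myState")
  private final StateSpec<ValueState<Integer>> myStateSpec = StateSpecs.value(VarIntCoder.of());

  @ProcessElement
  public void process(ProcessContext c, @StateId("myState") ValueState<Integer> myState) {
    Integer v = myState.read();    // reads StatefulDoFn2's own cell: still null, never the 1
    c.output(v == null ? -1 : v);  // so this emits -1, not 1
  }
}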

anylogic agent communication and message sending

In my model, I have some agents:
a "Demand" agent,
an "EnergyProducer1" agent,
an "EnergyProducer2" agent.
When my hourly energy demands are created in the Main agent with a function, the priority for satisfying this demand belongs to the "EnergyProducer1" agent. In this agent, I have a function that calculates energy production based on some conditions. Part of this function is the following:
**" if (statechartA.isStateActive(Operating.busy)) && ( main.heatLoadDemandPerHour >= heatPowerNominal) {
producedHeatPower = heatPowerNominal;
naturalGasConsumptionA = naturalGasConsumptionNominal;
send("boilerWorking",boiler);
} else ..... "**
My question is related to the 4th line of the code. If agent1 fails to satisfy the hourly demand, I have to tell agent2 to satisfy the rest of the demand. If I send this message to agent2, its statechart will become active and agent2's function will run. My question is: will all of this happen within the same hour? If it does not, is directly accessing the variables and parameters of agent2 a more appropriate way?
I hope I could explain my problem.
Thanks for your help in advance...
Edited question...
As a general comment on your question, within the AnyLogic environment sending messages is always preferable to directly accessing the variables and parameters of another agent.
Specifically, in the example presented the send() function will schedule message delivery at the next instant, after the completion of the current function.
Update: A message in AnyLogic can be any Java class. Sending strings such as the "boilerWorking" used in the example is good for general control; however, if more information needs to be shared (such as a double value) then it is good practice to create a new Java class (let's call it ModelMessage and follow these instructions) with at least two properties, msgStr and msgVal. With this new class, sending a message changes from this:
...
send("boilerWorking", boiler);
...
to this:
...
send(new ModelMessage("boilerWorking",42.0), boiler);
...
and firing the transitions in the statechart has to be changed to use "if expression is true", with the expression being msg.msgStr.equals("boilerWorking").
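For reference, a minimal sketch of what such a ModelMessage class might look like (only the msgStr and msgVal fields come from the text above; the constructor and Serializable marker are assumptions):

import java.io.Serializable;

// Illustrative message class carrying a control string plus a numeric payload.
public class ModelMessage implements Serializable {
    public String msgStr;   // e.g. "boilerWorking"
    public double msgVal;   // e.g. the remaining demand to satisfy

    public ModelMessage(String msgStr, double msgVal) {
        this.msgStr = msgStr;
        this.msgVal = msgVal;
    }
}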
More information about Agent communication is available here.

Why is GroupByKey in beam pipeline duplicating elements (when run on Google Dataflow)?

Background
We have a pipeline that starts by receiving messages from PubSub, each with the name of a file. These files are exploded to line level, parsed to JSON object nodes and then sent to an external decoding service (which decodes some encoded data). Object nodes are eventually converted to Table Rows and written to Big Query.
It appeared that Dataflow was not acknowledging the PubSub messages until they arrived at the decoding service. The decoding service is slow, resulting in a backlog when many messages are sent at once. This means that lines associated with a PubSub message can take some time to arrive at the decoding service. As a result, PubSub was receiving no acknowledgement and was resending the message. My first attempt to remedy this was adding an attribute to each PubSub message that is passed to the Reader using withAttributeId(). However, on testing, this only prevented duplicates that arrived close together.
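(For later readers: a minimal sketch of attribute-based deduplication on the read side, assuming the Beam 2.x PubsubIO API where the method is named withIdAttribute; the Pipeline variable, subscription path, and attribute name are placeholders.)

import org.apache.beam.sdk.io.gcp.pubsub.PubsubIO;
import org.apache.beam.sdk.values.PCollection;

// 'pipeline' is an existing Pipeline; the publisher must set the same attribute on every message.
PCollection<String> fileNames =
    pipeline.apply("ReadFileNames",
        PubsubIO.readStrings()
                .fromSubscription("projects/<project>/subscriptions/<sub>")
                .withIdAttribute("uniqueMsgId"));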
My second attempt was to add a fusion breaker (example) after the PubSub read. This simply performs a needless GroupByKey and then ungroups, the idea being that the GroupByKey forces Dataflow to acknowledge the PubSub message.
The Problem
The fusion breaker discussed above works in that it prevents PubSub from resending messages, but I am finding that this GroupByKey outputs more elements than it receives (see image).
To try to diagnose this, I have removed parts of the pipeline to get a simple pipeline that still exhibits this behavior. The behavior remains even when:
PubSub is replaced by some dummy transforms that send out a fixed list of messages with a slight delay between each one.
The Writing transforms are removed.
All Side Inputs/Outputs are removed.
The behavior I have observed is:
Some number of the received messages pass straight through the GroupByKey.
After a certain point, messages are 'held' by the GroupByKey (presumably due to the backlog after the GroupByKey).
These messages eventually exit the GroupByKey (in groups of size one).
After a short delay (about 3 minutes), the same messages exit the GroupByKey again (still in groups of size one). This may happen several times (I suspect it is proportional to the time they spend waiting to enter the GroupByKey).
Example job id is 2017-10-11_03_50_42-6097948956276262224. I have not run the beam on any other runner.
The Fusion Breaker is below:
@Slf4j
public class FusionBreaker<T> extends PTransform<PCollection<T>, PCollection<T>> {

  @Override
  public PCollection<T> expand(PCollection<T> input) {
    return group(window(input.apply(ParDo.of(new PassthroughLogger<>(PassthroughLogger.Level.Info, "Fusion break in")))))
            .apply("Getting iterables after breaking fusion", Values.create())
            .apply("Flattening iterables after breaking fusion", Flatten.iterables())
            .apply(ParDo.of(new PassthroughLogger<>(PassthroughLogger.Level.Info, "Fusion break out")));
  }

  private PCollection<T> window(PCollection<T> input) {
    return input.apply("Windowing before breaking fusion", Window.<T>into(new GlobalWindows())
            .triggering(Repeatedly.forever(AfterPane.elementCountAtLeast(1)))
            .discardingFiredPanes());
  }

  private PCollection<KV<Integer, Iterable<T>>> group(PCollection<T> input) {
    return input.apply("Keying with random number", ParDo.of(new RandomKeyFn<>()))
            .apply("Grouping by key to break fusion", GroupByKey.create());
  }

  private static class RandomKeyFn<T> extends DoFn<T, KV<Integer, T>> {
    private Random random;

    @Setup
    public void setup() {
      random = new Random();
    }

    @ProcessElement
    public void processElement(ProcessContext context) {
      context.output(KV.of(random.nextInt(), context.element()));
    }
  }
}
The PassthroughLoggers simply log the elements passing through (I use these to confirm that elements are indeed repeated, rather than there being an issue with the counts).
I suspect this is something to do with windows/triggers, but my understanding is that elements should never be repeated when .discardingFiredPanes() is used - regardless of the windowing setup. I have also tried FixedWindows with no success.
First, the Reshuffle transform is equivalent to your Fusion Breaker, but has some additional performance improvements that should make it preferable.
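As an illustration (not from the original answer), swapping the custom transform for Reshuffle could look roughly like this, assuming a Beam version that provides Reshuffle.viaRandomKey():

import org.apache.beam.sdk.transforms.Reshuffle;
import org.apache.beam.sdk.values.PCollection;

// Instead of: input.apply(new FusionBreaker<>())
PCollection<String> reshuffled =
    input.apply("Break fusion after the PubSub read", Reshuffle.viaRandomKey());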
Second, both counters and logging may see an element multiple times if it is retried. As described in the Beam Execution Model, an element at a step may be retried if anything that is fused into it is retried.
Have you actually observed duplicates in what is written as the output of the pipeline?

Consuming unbounded data in windows with default trigger

I have a Pub/Sub topic + subscription and want to consume and aggregate the unbounded data from the subscription in a Dataflow. I use a fixed window and write the aggregates to BigQuery.
Reading and writing (without windowing and aggregation) work fine. But when I pipe the data into a fixed window (to count the elements in each window), the window never triggers, and thus the aggregates are not written.
Here is my word publisher (it uses kinglear.txt from the examples as input file):
public static class AddCurrentTimestampFn extends DoFn<String, String> {
    @ProcessElement public void processElement(ProcessContext c) {
        c.outputWithTimestamp(c.element(), new Instant(System.currentTimeMillis()));
    }
}

public static class ExtractWordsFn extends DoFn<String, String> {
    @ProcessElement public void processElement(ProcessContext c) {
        String[] words = c.element().split("[^a-zA-Z']+");
        for (String word : words) { if (!word.isEmpty()) { c.output(word); } }
    }
}
// main:
Pipeline p = Pipeline.create(o); // 'o' are the pipeline options
p.apply("ReadLines", TextIO.Read.from(o.getInputFile()))
.apply("Lines2Words", ParDo.of(new ExtractWordsFn()))
.apply("AddTimestampFn", ParDo.of(new AddCurrentTimestampFn()))
.apply("WriteTopic", PubsubIO.Write.topic(o.getTopic()));
p.run();
Here is my windowed word counter:
Pipeline p = Pipeline.create(o); // 'o' are the pipeline options
BigQueryIO.Write.Bound tablePipe = BigQueryIO.Write.to(o.getTable(o))
.withSchema(o.getSchema())
.withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
.withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND);
Window.Bound<String> w = Window
.<String>into(FixedWindows.of(Duration.standardSeconds(1)));
p.apply("ReadTopic", PubsubIO.Read.subscription(o.getSubscription()))
.apply("FixedWindow", w)
.apply("CountWords", Count.<String>perElement())
.apply("CreateRows", ParDo.of(new WordCountToRowFn()))
.apply("WriteRows", tablePipe);
p.run();
The above subscriber will not work, since the window does not seem to trigger using the default trigger. However, if I manually define a trigger the code works and the counts are written to BigQuery.
Window.Bound<String> w = Window.<String>into(FixedWindows.of(Duration.standardSeconds(1)))
.triggering(AfterProcessingTime
.pastFirstElementInPane()
.plusDelayOf(Duration.standardSeconds(1)))
.withAllowedLateness(Duration.ZERO)
.discardingFiredPanes();
I would like to avoid specifying custom triggers if possible.
Questions:
Why does my solution not work with Dataflow's default trigger?
How do I have to change my publisher or subscriber to trigger windows using the default trigger?
How are you determining the trigger never fires?
Your PubSubIO.Write and PubSubIO.Read transforms should both specify a timestamp label using withTimestampLabel, otherwise the timestamps you've added will not be written to PubSub and the publish times will be used.
Either way, the input watermark of the pipeline will be derived from the timestamps of the elements waiting in PubSub. Once all inputs have been processed, it will stay back for a few minutes (in case there was a delay in the publisher) before advancing to real time.
What you are likely seeing is that all the elements are published in the same ~1 second window (since the input file is pretty small). These are all read and processed relatively quickly, but the 1-second window they are put in will not trigger until after the input watermark has advanced, indicating that all data in that 1-second window has been consumed.
This won't happen for several minutes, which may make it look like the trigger isn't working. The trigger you wrote fires after 1 second of processing time, which is much earlier, but with no guarantee that all the data in the window has been processed.
Steps to get better behavior from the default trigger:
Use withTimestampLabel on both the write and read PubSub steps (see the sketch after this list).
Have the publisher spread the timestamps out further (e.g., run for several minutes and spread the timestamps out across that range).
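A minimal sketch of the first step, assuming the Dataflow 1.x-style PubsubIO used in the question, where the method is called timestampLabel (newer Beam releases call it withTimestampAttribute); the attribute name "ts" is arbitrary:

// Publisher: store each element's event timestamp in a PubSub attribute.
p.apply("WriteTopic", PubsubIO.Write.topic(o.getTopic())
                                    .timestampLabel("ts"));

// Subscriber: read that attribute back so the watermark follows your element timestamps.
p.apply("ReadTopic", PubsubIO.Read.subscription(o.getSubscription())
                                  .timestampLabel("ts"));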

Timeout for Futures in Akka

We have a server that processes portfolios and the securities inside them in different actors. For portfolios with a smaller number of securities (<20) this works fine. When I increase the security count to 1000, I encounter the following issues:
akka.dispatch.FutureTimeoutException: Futures timed out after [5000] milliseconds
I could bypass this error by increasing the timeout in the Akka config; is that the right thing to do? In Akka versions earlier than 1.2 I could set self.timeout inside the actor, but that is deprecated.
The other issue I faced (intermittently) is that the entire server hangs while joining the futures in the futures.map code inside my portfolio actor:
// fork out for each security
val listOfFutures = new ListBuffer[Future[Security]]()
for (security <- portfolio.getSecurities.toList) {
  val securityProcessor = actorOf[SecurityProcessor].start()
  listOfFutures += (securityProcessor ? security) map {
    _.asInstanceOf[Security]
  }
}

EventHandler.info(this, "joining results from security processors")

// join for each security
val futures = Future.sequence(listOfFutures.toList)
futures.map {
  listOfSecurities =>
    portfolioResponse = MergeHelper.merge(portfolio, listOfSecurities)
}.get
You do not state which version of Akka you're on, and given my limited time with the crystal ball I'll assume that you're on 1.2.
You can specify a Timeout when you call ask/?
(Also, your code is a bit convoluted, but I have already addressed that in your other question.)
Cheers,
√
