When I optimize my EA, the tester always closes any remaining positions for me.
But I think this yields "inaccurate" results because those trades are "not finished".
The system might close an otherwise profitable trade "prematurely"!
Is there a way to disable the "Close at Stop" mechanism so the final trades are "ignored"?
I just want to know the result "up to the last trade".
Much appreciated!
PS. I tried searching Google for fxcm mt4 how to disable "close at stop" in ea to no avail, and I also looked through the MQL4 Reference without finding anything.
The error usually occurs when the needed margin (plus the current drawdown) gets higher than the account balance.
In most cases, you can avoid the error message by setting the balance much higher and/or lowering the lot size.
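A rough illustration with made-up numbers: at 1:100 leverage, a 1.00-lot EURUSD position at 1.1000 requires roughly 100,000 × 1.1000 / 100 ≈ 1,100 USD of margin, so on a 1,500 USD test balance a floating loss of a few hundred dollars is already enough to exceed what the account can support, whereas the same trade on a 15,000 USD balance, or at 0.10 lots, would be left for the EA to close.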
I am trying to figure out the behavior of Prometheus' increase() querying function with process restarts.
When there is a process restart within a 2m interval and I query:
sum(increase(my_metric_total[2m]))
I get a value less than expected.
For example, in a simple experiment I mock:
3 lcm_restarts
1 process restart
2 lcm_restarts
All within a 2 minute interval.
Upon querying:
sum(increase(lcm_restarts[2m]))
I receive a value of ~4.5 when I am expecting 5.
[screenshots: the lcm_restarts graph and the sum(increase(lcm_restarts[2m])) result]
Could someone please explain?
Pretty concise and well-prepared first question here. Please keep this spirit!
When working with counters, functions such as rate(), irate() and also increase() adjust for resets due to restarts. Contrary to what the name suggests, the increase() function does not calculate the absolute increase in the given time frame; it is just a different way to write rate(metric[interval]) * number_of_seconds_in_interval. The rate() function takes the first and the last measurement in a series and calculates the per-second increase in the given time. This is why you may observe non-integer increases even if you only ever increment by whole numbers, as the measurements almost never fall exactly at the start and end of the interval.
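You can see this equivalence directly in the expression browser (using the metric name from your example; the exact values depend on your data). Since a 2m window is 120 seconds, both of these queries return the same result:
increase(lcm_restarts[2m])
rate(lcm_restarts[2m]) * 120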
For more details, please have a look at the Prometheus docs for the increase() function. There are also some good hints on what to do and what not to do when working with counters in the Robust Perception blog.
Having a look at your label dimensions, I also think that counter resets don't even apply to your constructed example. There is a label called reason that changed between the restarts, which created a second time series instead of continuing the existing one. So you are basically summing up the increases of two different time series, each of which has its own extrapolation happening.
So basically there isn't really anything wrong with what you are doing; you just shouldn't rely on getting highly precise numbers out of Prometheus for this use case.
Prometheus may return unexpected results from the increase() function for the following reasons:
Prometheus may return fractional results from increase() over an integer counter because of extrapolation. See this issue for details, and the note at the end of this answer for a cosmetic workaround.
Prometheus may return lower than expected results from increase(m[d]) because it doesn't take into account the possible counter increase between the last raw sample just before the specified lookbehind window [d] and the first raw sample inside that window. See this article and this comment for details.
Prometheus skips the increase for the first sample in a time series. For example, increase() over the following series of samples would return 1 instead of 11: 10 11 11. See these docs for details.
These issues are going to be fixed according to this design doc. In the meantime it is possible to use other Prometheus-like systems such as VictoriaMetrics, which are free from these issues.
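Note: if you only need whole numbers for display, one simple, purely cosmetic mitigation for the fractional results is to round the query output; this hides the extrapolation artifacts but does not make the underlying calculation any more exact:
round(sum(increase(lcm_restarts[2m])))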
Using Prometheus for things that happen many times per second works really well, and I've had great success with rate and irate. I am just at a loss as to how to graph something that happens very rarely but is a big deal.
So I have a counter I am incrementing called job_failed. Whenever a failure happens it shows up in my instant vector. If I graph it directly it always goes up and I see a bump in the graph, but this isn't giving me a clear enough indication that a job has failed. So I'd like to have it be a spike in a zeroed graph.
If I do a rate(job_failed[15s]) I get my spike - but it's a per-second rate, so its value is 0.1 although the change I want is 1.
I tried increase(job_failed[1m]), but that is also not adding up correctly, occasionally leaving me with values like 2.18 etc.
Is there a way to only see a single spike? This seems like a rather trivial thing but I can't figure it out.
Prometheus is suited more to high-volume than low-volume events, as at low volumes, artifacts from how we keep things accurate on average show up.
So for example rate(job_failed[15s]) with an increase of 1 over the 15 seconds is 1/15 = 0.066/s. Rounding could make that show as 0.1.
https://www.youtube.com/watch?v=67Ulrq6DxwA goes into more detail as to how this all works.
The short version is what you're doing now is the way to do it.
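If you would rather have the graph scaled to roughly "failures per window" instead of failures per second, you can multiply the rate by the window length in seconds (this is purely cosmetic; the values stay approximate for the reasons above):
rate(job_failed[1m]) * 60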
For a similar requirement, I was using the delta() function with a threshold configured as needed.
https://prometheus.io/docs/querying/functions/#delta
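As a rough sketch of that approach (the metric name is taken from the question above, and the window and threshold are arbitrary), an expression like the following only returns a value when the series has moved within the last minute, which can be used for a graph or an alert rule:
delta(job_failed[1m]) > 0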
I've been trying to run a dynamic/explicit simulation of the response of a small building, under a combination of loads, to a scenario where a supporting column is suddenly lost.
However, when the simulation completed I did not get any stresses or deformations in the results. I applied the dead load and a live load of 2,500 Pa to each floor, but I am not getting any reactions or stress distributions in the model at all.
What might be the problem?
I am still quite new to Abaqus, so I might be missing something obvious.
Thank you
One guess could be your loading magnitude. Increase your loading to see whether the problem remains or not!
Make sure you have created a field output.
In CAE:
select the "Step" module
click the "Create Field Output" button
this allows you to select the outputs you want to view from your analysis
I would guess you require displacements and stresses.
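If you would rather request the same outputs from a script instead of the CAE dialogs, a minimal sketch in Abaqus' Python scripting interface looks roughly like this ('Model-1' and 'Step-1' are the default names and may differ in your model, and the variable list is only a guess at what you need):

    # Run inside Abaqus/CAE (kernel command line or a script file)
    from abaqus import *

    # Request stresses (S), displacements (U) and reaction forces (RF)
    # for your analysis step; adjust the model and step names to match.
    mdb.models['Model-1'].FieldOutputRequest(
        name='F-Output-2',
        createStepName='Step-1',
        variables=('S', 'U', 'RF'))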
I've tried every possible field but cannot find the number of times functions are called.
Besides, I don't understand Self and # Self. What do these two numbers mean?
There are several other ways to accomplish this. One is obviously to create a static hit counter and an NSLog call that increments and emits it. This is intrusive, though, and I found a way to do this with lldb.
Set a breakpoint
Execute the program until you hit the breakpoint the first time and note the breakpoint number on the right hand side of the line you hit (e.g. "Thread 1: breakpoint 7.1", note the 7.1)
Context click on the breakpoint and choose "Edit Breakpoint"
Leave condition blank and choose "Add Action"
Choose "Debugger Command"
In the command box, enter "breakpoint list 7.1" (using the breakpoint number for your breakpoint from step 2). I believe you can use "info break" if you are using gdb.
Check the option "Automatically continue after evaluating"
Continue
Now, instead of stopping, lldb will emit info about the breakpoint, including the number of times it has been hit.
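If you prefer to set this up entirely from the lldb console rather than the Xcode breakpoint editor, the rough equivalent is the following (myFunction is a placeholder symbol, and 7 stands for whatever breakpoint number lldb actually assigns):

    breakpoint set --name myFunction
    breakpoint command add 7
    continue
    DONE
    breakpoint list 7

breakpoint command add registers continue as the breakpoint's action so the program keeps running, and breakpoint list then reports the hit count for that breakpoint.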
As for the discussion between Glenn and Mike on the previous answer, I'll describe a performance problem where function execution count was useful: I had a particular action in my app where performance degraded considerably with each execution of the action. The Instruments time profiler showed that each time the action was executed, a particular function was taking twice as long as the time before until quickly the app would hang if the action was performed repeatedly. With the count, I was able to determine that with each execution, the function was called twice as many times as it was during the previous execution. It was then pretty easy to look for the reason, which turned out to be that someone was re-registering for a notification in NotificationCenter on each event execution. This had the effect of doubling the number of response handler calls on each execution and thus doubling the "cost" of the function each time. Knowing that it was doubling because it was called twice as many times and not because the performance was just getting worse caused me to look at the calling sequence rather than for reasons the function itself could be degrading over time.
While it's interesting, knowing the number of times functions are called doesn't have anything to do with how much time is spent in them, which is what Time Profiler is all about. In fact, since it works by sampling, it cannot answer how many times.
It seems you cannot use Time Profiler for counting function calls. This question seems to address potential methods for counting.
With respect to Self and # Self:
Self is "The number of times the symbol calls itself." according to the Apple Docs on the Time Profiler.
From the way the numbers look, though, it seems Self is the summed duration of the samples that had this symbol at the bottom of the stack trace. That would make:
# Self: the number of samples where this symbol was at the bottom of the stack trace
% Self: the percentage of self samples relative to the total samples of the currently displayed call tree (i.e. # Self / total samples).
So this wouldn't tell you how many times a method was called. But it would give you an idea how much time is spent in a method or lower in the call tree.
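As a made-up illustration of that reading: if the currently displayed call tree contains 1,000 samples and a symbol sits at the bottom of the stack in 50 of them, # Self would be 50 and % Self would be 5%.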
NOTE: I too am unsure about the various 'self' meanings though. Would love to see someone answer this authoritatively. Arrived here searching for that...
IF your objective is to find out what you need to fix to make the program as fast as possible, the number of calls and self time may be interesting, but they are irrelevant.
Look at my answer to this question, in particular points 6 and 8.
EDIT: To clarify the point further, suppose some portion of the program's execution time (in this case about 50%) is spent in an activity that can be removed, if you know what it is, such as needless buried I/O, excessive calls to new, runaway notifications, or "insignificant" data validation. If a random-time sample is taken, it has a 50% chance of occurring in that activity, and an examination of the call stack and/or program variables shows that it is doing something that can be removed. Then, if 10 such samples are taken, the activity will be seen on roughly 5 of them, regardless of whether it occurs in a few large chunks of time or many small ones. The activity may be a few lines of code in a function doing something unnecessary, or it may be something much more generalized. Regardless, you recognize it, fix it, and get roughly a factor of 2 speedup. Call counts and self time contribute nothing to this process.
I am searching for ideas/examples on how to store path patterns from users - with the goal of analysing their behaviour and optimizing for the "most used paths" once we can detect them somehow.
E.g. which action they perform after which, so that we can later check whether certain actions are done over and over again - and then develop a shortcut or assemble some of the actions into a combined multi-action.
My first guess would be some sort of "simple log", perhaps stored in some SQL manner, where we keep each action as an index and just record everything.
The problem is that the paths/actions might change dynamically - even while logging - so we need to be able to take care of this fact too when looking for patterns later.
Would you log everything "bigtime" first and then post-process every bit of detail after some time, or do you have good experience with other tactics?
My worry is that this is going to take up space, BIG TIME, while logging 1000 users each day for a month or more.
Hope this makes sense, and I am curious to see if anyone can provide sample code, pseudocode or perhaps links to something useful.
Our tools will be C#, SQL-database, XML and .NET 3.5 - clients could also get .NET 4.0 if needed.
Pattern examples, as we expect them:
...
User #1001: A-B-A-A-A-B-C-E-F-G-H-A-A-A-C-B-A
User #1002: B-A-A-B-C-E-F
User #1003: F-B-B-A-E-C-A-A-A
User #1002: C-E-F
...
etc. There is no real way to know what they will do next, how many actions they will use, or how often they will do them.
A secondary goal, if possible: if we later add a new "action" called G (just a sample to illustrate; there will be hundreds of actions), how could we detect this new behaviour's influence on the previous patterns?
To explain it better, my thought here would be some way to detect "patterns within patterns", sort of like how compression works, so that "repetitive patterns" are spotted. We don't know how long these patterns might be, nor how often they might come. How do we break this down into "small bits and pieces" - what do you think is the best approach?
I am not sure what you mean by path, but if you gave every action in a path a unique symbol, you could reduce the problem to longest common substring or subsequence.
Or keep a map from paths to the number of times each path occurred. Every time a certain path happens, increment the count for that path. Then sort to find the most common.
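A rough sketch of that counting idea, in Python just to keep it short (your stack is C#, where the same structure maps to a Dictionary<string, int>); the action sequences are taken from the examples above and the maximum pattern length is an arbitrary choice:

    from collections import Counter

    # One action sequence per user/session, e.g. parsed from your log.
    sessions = [
        ["A", "B", "A", "A", "A", "B", "C", "E", "F", "G", "H", "A", "A", "A", "C", "B", "A"],
        ["B", "A", "A", "B", "C", "E", "F"],
        ["F", "B", "B", "A", "E", "C", "A", "A", "A"],
        ["C", "E", "F"],
    ]

    MAX_PATTERN_LENGTH = 5  # arbitrary; raise it if you need longer patterns

    counts = Counter()
    for actions in sessions:
        for length in range(1, MAX_PATTERN_LENGTH + 1):
            for start in range(len(actions) - length + 1):
                pattern = "-".join(actions[start:start + length])
                counts[pattern] += 1

    # Most frequent sub-sequences first.
    for pattern, count in counts.most_common(10):
        print(count, pattern)

Note that sorting by raw count alone favours short patterns (a pattern always occurs at least as often as any longer pattern that contains it), so in practice you would probably weight the counts by pattern length or by the importance you assign to certain actions.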
Pseudo idea/implementation so far
Log every user's actions into a list/series of actions, in bulk (text files/SQL - whatever, just store the whole thing for post-processing)
Start counting every "1 action", "2 actions", "3 actions" sequence up to a certain length (let's say 30 levels)
Sort them all, giving importance values to some of the actions (perhaps those producing end results)
A useful result, perhaps?
If we count all [A], [A-A], [A-B], [A-C], [A-A-A], [A-A-B] etc., it's going to make a LONG but fine-grained list of which actions are frequently used in a row, and that's in the right direction, because if some of these counts get too high, we might need a shorter path. The problem then is: how few actions are too few to be worth optimizing, and how long is the longest action list we need to search for? My guess is that we need to do this counting first, then examine the numbers.
The problem is that this would be part of an analysis tool we are developing, and we don't have data until implementation, so we don't know what to look for before it's actually done. Hmm... wondering if there really IS an answer to this one.