MetaStock to MQL4 conversion

I would like to know how I can use MetaStock's Ref() function in MQL4.
In MetaStock, Ref() is used to fetch previous values of a given data array.
For example:
Ref(C,-1)
gives the value of the previous day's close.

iClose(_Symbol,PERIOD_D1,1) for the close. In MQL4, index 0 means the current candle, and the index increases to the left (further back in time), so the -1 in your case becomes 1. This is true when accessing candle data; for regular arrays, e.g. those obtained by CopyBuffer or filled in a loop, array indexes run from 0 to ArraySize()-1.

Related

Lua length operator (#) with nil values

After reading this topic and after experimenting a bit, I am trying to understand how the Lua length operator works when a table contains nil values.
Before I started to investigate, I thought that the length was simply the number of consecutive non-nil elements, starting at index 1:
print(#{nil}) -- 0
print(#{"o"}) -- 1
print(#{"o",nil}) -- 1
print(#{"o","o"}) -- 2
print(#{"o","o",nil}) -- 2
That looks pretty simple, right?
But my headache started when I accidentally added an element after a nil-terminated table:
print(#{"o",nil,"o"})
My guess was that it should probably print 1 because it would stop counting when the first nil is found. Or maybe it should print 2 if the length operator is greedy enough to look for non-nil elements after the first nil. But the above code prints 3.
So I ran several other tests to see what happens:
-- nil before the end
print(#{nil,"o"}) -- 2
print(#{nil,"o","o"}) -- 3
print(#{"o",nil,"o"}) -- 3
-- several nil elements
print(#{"o",nil,nil}) -- 1
print(#{nil,"o",nil}) -- 0
print(#{nil,nil,"o"}) -- 3
I should mention that repl.it currently uses Lua 5.1.5, which is rather old, but if you test with the Lua demo, which currently uses Lua 5.3.5, you'll get the same results.
Looking at those results and at this answer, I assume that:
if the last element is not nil, the length operator returns the full size of the table, including nil entries if any
if the last element is nil, it counts the number of consecutive non-nil elements from the start and stops counting at the first nil
Are those assumptions correct?
Can we predict a 100% well-defined behavior when a table contains one or several nil values?
The Lua documentation states that the length of a table is only defined if the table is a sequence. Does that mean that the length operator has undefined behavior for non-sequences?
Apart from the length operator, can nil values cause any trouble in a table?
We can predict some behaviour, but it is not standardised, and as such you should never rely on it. It's quite possible that the behaviour may change within this major version of Lua.
Should you ever need to store nil values in a table, I suggest wrapping access to the table and replacing holes with a unique placeholder value (e.g. NIL={}; if v==nil then t[k]=NIL end). Such a sentinel is quite cheap to test against, and safe.
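A minimal sketch of that idea (NIL, put, and get are illustrative names, not standard Lua):
local NIL = {} -- unique sentinel; no other value compares equal to it
local function put(t, k, v)
  if v == nil then v = NIL end -- replace the hole with the placeholder
  t[k] = v
end
local function get(t, k)
  local v = t[k]
  if v == NIL then return nil end -- translate the placeholder back
  return v
end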
That said...
As the result of # differs even depending on how the table is defined, you'll have to distinguish between statically defined (constant) tables and dynamically defined (mutated) tables.
Static table definitions:
#{nil,nil,nil,nil,nil, 1} -- 6
#{3, 2, nil, 1} -- 4
#{nil,nil,nil, 1, 1,nil} -- 0
#{nil,nil, 1, 1, 1,nil} -- 5
#{nil, 1, 1, 1, 1,nil} -- 5
#{nil,nil,nil,nil, 1,nil} -- 0
#{nil,nil, 1,nil, 1,nil,nil} -- 5
#{nil,nil,nil, 1,nil,nil, 1,nil} -- 4
Using this kind of definition, as long as the last value is non-nil, you will get a length equal to the position of the last value. If the last value is nil, Lua starts a (non-linear) search from the tail until it finds the first non-nil value.
Dynamic table definitions:
local x={}; x[5]=1;print(#x) -- 0
local x={}; x[1]=1;x[2]=1;x[3]=1;x[5]=1;print(#x) -- 3
local x={}; x[1]=1;x[2]=1;x[4]=1;x[5]=1;print(#x) -- 5
#{[5]=1} -- 0
local x={nil,nil,nil,1};x[5]=1;print(#x) -- 0
As soon as the table has been modified once, the operator works the other way (that includes static definitions with [] keys). If the first element is nil, # always returns 0; otherwise it starts a search, which I did not investigate further (I guess you can check the sources, though I don't think it's a standard binary search), until it finds a nil value that is preceded by a non-nil value.
As said before, relying on this behaviour is not a good idea, and invites lots of issues down the road. Though if you want to make a nasty unmaintainable program to mess with a colleague, that's a sure way to do it.
When a table is a sequence (its positive integer keys are exactly 1..n, with no nil gaps), # is defined to be precisely n, the count of those elements.
For non-sequence tables, it is a bit more complicated. Lua 5.2 seems to leave the result as undefined. For 5.1 and 5.3, the result of the operation is a border.
A border of a table t is any positive integer n such that t[n] is non-nil and t[n+1] is nil, or 0 if t[1] is nil. # is defined to return any value that satisfies these conditions.
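For example, the following table has two borders, 1 and 3, and # may legally return either one (the tests above got 3):
local t = {"o", nil, "o"}
print(#t) -- 1 and 3 are both valid borders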
Looking at it from another perspective, since tables contain an "array" part and a "map" part, Lua has no way of knowing where the "map" indices start. For example, you can create a table with 1000 values and then set the first 999 of them to nil; that could leave you with a table of "size" 1000. However, you can also start with an empty table and set the 1000th element, having a table of "size" 0 but still structurally equivalent to the first one. The result of # is then simply the first valid value the internal algorithm finds.
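A sketch of that scenario; by the border definition above, both 0 and 1000 are acceptable answers in each case, and which one you get depends on the internal layout:
local big = {}
for i = 1, 1000 do big[i] = true end
for i = 1, 999 do big[i] = nil end -- only big[1000] is left
print(#big) -- 0 and 1000 are both valid borders here
local sparse = {}
sparse[1000] = true -- structurally the same table, built the other way
print(#sparse) -- again 0 and 1000 are both valid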
The length operator produces undefined behaviour for tables that aren't sequences (i.e. tables with nil elements in the middle of the array). This means that even if the Lua implementation always behaves in a certain way, you shouldn't rely on that behaviour, as it may change in future versions of Lua, or in different implementations like LuaJIT.
You can use nils in tables - there is nothing wrong with that - just don't use the length operator on a table which might have nils before non-nil values.
The post you linked to contains more details about how the actual algorithm works. It mentions counting elements with a "binsearch", i.e. a binary search. This is not the same as just counting the elements one by one - if there are nils in the table, then depending on their exact position, the binary search algorithm may treat them as the end of the table, or may just ignore them.
To sum up, the algorithm is harder to predict than you were assuming, and even though it is technically possible to predict what will happen in any given case, you shouldn't rely on that behaviour as it is liable to change.
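If what you actually need is the number of non-nil entries regardless of holes, avoid # altogether and count with pairs; a minimal helper:
-- counts every non-nil entry; safe for tables with holes
local function table_size(t)
  local n = 0
  for _ in pairs(t) do n = n + 1 end
  return n
end
print(table_size({"o", nil, "o"})) -- 2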

Logic behind COBOL code

I am not able to understand the logic behind these lines:
COMPUTE temp = RESULT - 1.843E19.
IF temp IS LESS THAN 1.0E16 THEN
Data definition:
000330 01 VAR1 COMP-1 VALUE 3.4E38. // 3.4 x 10 ^ 38
Here are those lines in context (the sub-program returns a square root):
MOVE VAR1 TO PARM1.
CALL "SQUAREROOT_ROUTINE" USING
BY REFERENCE PARM1,
BY REFERENCE RESULT.
COMPUTE temp = RESULT - 1.843E19.
IF temp IS LESS THAN 1.0E16 THEN
DISPLAY "OK"
ELSE
DISPLAY "False"
END-IF.
These lines are just testing whether the result returned by SQUAREROOT_ROUTINE is correct. Since the program uses float values and rather large numbers, this may look a bit complicated. Let's just do the math:
You start with 3.4E38; its square root is 1.84390889...E19.
By subtracting 1.843E19 (i.e. the approximate result) and comparing the difference against 1.0E16, the program tests whether the result is between 1.843E19 and 1.843E19 + 1.0E16 = 1.844E19.
Note that this test would not catch an error where the result from SQUAREROOT_ROUTINE is too low instead of too high. To catch both kinds of wrong results, you should compare the absolute value of the difference against the tolerance.
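For instance, a symmetric version of the check, sketched here in Lua only because the test itself is language-neutral (result stands for the value returned by SQUAREROOT_ROUTINE):
-- symmetric tolerance test: also catches results that are too low
local expected = 1.8439e19 -- approximate square root of 3.4E38
local tolerance = 1.0e16
if math.abs(result - expected) < tolerance then
  print("OK")
else
  print("False")
end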
You might ask: "Why make things so complicated?" The thing is that float values are usually not exact, and depending on the precision used you will get slightly different results due to rounding errors.
The logic itself is very straightforward: you subtract 1.843*(10^19) from the result you get from SQUAREROOT_ROUTINE and put that value in the variable called temp; then, if the value of temp is less than 1.0*(10^16), you print a line to SYSOUT that says "OK", otherwise you print "False" (if the value was equal or greater).
If you mean the logic as to why this code exists, you will need to talk to the author of the code, but it looks like a debugging display that was left in the program.

What is the correct constant to use when comparing with the Minimal Single Number in Delphi?

In a loop like this:
cur := -999999; // meant to represent the minimal possible value held by a Single
while ... do
begin
if some_value > cur then
cur := some_value;
end;
There are MaxSingle and NegInfinity defined in System.Math:
MaxSingle = 340282346638528859811704183484516925440.0;
NegInfinity = -1.0 / 0.0;
So should I use -MaxSingle or NegInfinity in this case?
I assume you are trying to find the largest value in a list.
If your values are in an array, just use the library function MaxValue(). (If you look at the implementation of MaxValue, you'll see that it takes the first value in the array as the starting point.)
If you must implement it yourself, use -MaxSingle as the starting value, which is approximately -3.40e38. This is the most negative value that can be represented in a Single.
Special values like Infinity and NaN have special rules in comparisons, so I would avoid these unless you are sure about what those rules are. (See also How do arbitrary floating point values compare to infinity?. In fact, it seems NegInfinity would work OK.)
It might help to understand the range of values that can be represented by a Single. In order, most negative to most positive, they are:
NegInfinity
-MaxSingle .. -MinSingle
0
MinSingle .. MaxSingle
Infinity

What value should be given to a Tcl dict for minimum memory?

I need a dictionary, "just for the keys", that is, the values are of lesser importance.
Which value will consume the least amount of memory: "0", "", or something else?
Thanks.
Tcl shares constants under the covers, so you can use pretty much anything as long as it is a literal. But the empty string is almost certainly going to be a pre-defined constant in your script anyway (even if you don't notice it) and is pretty short. Or go with a single-character alphanumeric string, which will generate a shorter string form of dictionary (and have practically no difference otherwise). A 0 is a single-character alphanumeric string, of course.
In my own code, I would mostly use something like "dummy value" for the value. The cost is only a few bytes more in total most of the time, and yet it's much clearer to me that the value doesn't mean anything, so if I come back to the code later I don't try to figure out what I was doing with it…
Not an answer as such, but I believe it is better expressed as one rather than as a continuation of the discussion in the comments...
Following the argument in the comments above about the comparative efficiency of list and dict key searching, I thought I would run some tests and present the results.
I constructed a dictionary with a million keys, an array with a million entries and a list with the dictionary keys via:
for {set i 0} {$i < 1000000} {incr i} {
set idx [expr rand()]
incr arr($idx)
dict incr dic $idx
}
set rands [dict keys $dic]
Incidentally, this took about 5.88 seconds excluding the final set rands....
I then timed searching for different random numbers in the list with lsearch, the dictionary keys with dict exists and the array with info exists arr() with the following results for 1000 iterations:
lsearch $rands [expr rand()] 26349.115 microseconds per iteration
dict exists $dic [expr rand ()] 14.357 microseconds per iteration
info exists arr([expr rand()]) 14.652 microseconds per iteration
Interesting. Not surprising, but I do like to see some numbers.
I then reread the OP's last comment, noticed the reference to sorting, tried sorting the list and using lsearch -sorted. Sorting the list took about a second.
lsearch -sorted $sortedRands [expr rand ()] 19.304 microseconds per iteration
It's reassuring that lsearch works pretty well with a sorted list when you tell it that the list is sorted.
However, if I leave out the -sorted option, it takes much longer
lsearch $sortedRands [expr rand()] 120206.369 microseconds per iteration
123604.209 microseconds per iteration
I was so surprised by this that I did it twice. I also repeated the info exists arr(... and found the same results as earlier, so it isn't simply that my machine has got much slower.
Does anyone have any ideas why the normal search of a sorted list is so slow?

Adding a big offset to an os.time{} value

I'm writing a Wireshark dissector in lua and trying to decode a time-based protocol field.
I have two components: 1)
local ref_time = os.time{year=2000, month=1, day=1, hour=0, sec=0}
and 2)
local offset_time = tvbuffer(0,5):bytes()
This is a 5-byte ByteArray() (larger than the uint32 range) containing the number of milliseconds (in network byte order) since ref_time. Now I'm looking for a human-readable date. I didn't know this would be so hard, but first it seems I cannot simply add an offset to an os.time value, and second the offset exceeds the Int32 range, and most functions I tested seem to truncate values beyond it.
Any ideas on how I get the date from ref_time and offset_time?
Thank you very much!
Since ref_time is in seconds and offset_time is in milliseconds, just try:
os.date("%c",ref_time+offset_time/1000)
I assume that offset_time is a number. If not, just reconstruct it using arithmetic. Keep in mind that Lua uses doubles for numbers and so a 5-byte integer fits just fine.
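If offset_time is still a ByteArray rather than a number, one way to rebuild it with plain arithmetic (a sketch: tvbuffer is the buffer from the question, and calling uint() on a single-byte range is standard Wireshark Lua):
-- rebuild the 5-byte big-endian millisecond count as a Lua number
local offset_ms = 0
for i = 0, 4 do
  offset_ms = offset_ms * 256 + tvbuffer(i, 1):uint()
end
print(os.date("%c", ref_time + offset_ms / 1000))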
