Wrong values received by MQL5 iCustom function - mql4

I am trying to call an MQL4 indicator from MQL5 using iCustom().
int test = iCustom(_Symbol,_Period,"ADXmagic.ex4");
I have copied the two buffers, numbers 0 and 1, into the arrays a and b of type double:
CopyBuffer(test,0,0,5,a);
CopyBuffer(test,1,0,5,b);
ArrayPrint(a);
ArrayPrint(b);
But the output I receive is complete garbage rather than real values:
6E+39 2E+92 0.00000 +0.00000 +0.00000
4E+230 0.00000 +0.00000 +0.00000 +0.00000
Kindly let me know: am I calling the indicator correctly, or is there a flaw in what I did?

You cannot call .ex4 files from MT5, I am afraid. And even if it were possible, it would be a bad idea, because MT5 indicators run faster than MT4 indicators.


How do I round a number in lua to a specific decimal digit?

I need a way to round my decimal in lua.
Sometimes my number has many decimal places.
When I search for it online, I only find solutions that round to a whole number, but I don't want to round my variable to 0.00, 1.00, or 2.00. How would I round it to a specific decimal digit?
Code:
health = 1
maxhp = 2
function hp_showcase()
    makeLuaText("hpcounter", "HP: "..health.."/"..maxhp.."", 2250, 30, 350)
    addLuaText("hpcounter")
end

function opponentNoteHit(id, noteData, noteType, isSustainNote)
    hp_showcase();
end
You can define a function that takes the value to be rounded and the digit position you want to round to. In this example, positions in front of the decimal point are positive and positions behind it are negative, so 2 rounds to the nearest 100 and -2 rounds to the nearest 0.01.
local value = 0.79200750000001

local function round(number, digit_position)
    local precision = math.pow(10, digit_position)
    -- adding half of the precision makes values of #.5 and up round up
    -- and #.4 and lower round down
    number = number + (precision / 2)
    return math.floor(number / precision) * precision
end
print(value)
print(round(value, -2))
print(round(value, -1))
print(round(value, 0))
Results:
0.79200750000001
0.79
0.8
1
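Two notes on this approach: math.pow is deprecated in recent Lua versions and removed by default in Lua 5.4, so 10 ^ digit_position is the more portable way to compute the precision. Also, because the function adds half the precision and then floors, exact halves are rounded towards positive infinity (for example round(-0.5, 0) gives 0 rather than -1), which is usually fine for display purposes.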

How to assign names to the bands in Dask.array when importing Geotiff files?

I am trying to import a GeoTIFF with multiple bands using Dask and xarray, with the following code:
import xarray as xr
chunks = {'x': 15886, 'y': 2400, 'band': 1}
df = xr.open_rasterio('multiband.tif', chunks=chunks)
df
and df looks like:
<xarray.DataArray (band: 6, y: 2400, x: 15886)>
dask.array<open_rasterio-b9dd4de67eb722145cdc7b5a3510e05e, shape=(6, 2400, 15886), dtype=uint8, chunksize=(1, 2400, 15886), chunktype=numpy.ndarray>
Coordinates:
* band (band) int32 1 2 3 4 5 6
* y (y) float64 70.0 69.99 69.99 69.99 69.98 ... 60.01 60.01 60.01 60.0
* x (x) float64 -146.2 -146.2 -146.2 -146.2 ... -80.01 -80.0 -80.0
Attributes:
transform: (0.0041666666662862895, 0.0, -146.190219951, 0.0, -0.0...
crs: +init=epsg:4326
res: (0.0041666666662862895, 0.0041666666662862895)
is_tiled: 0
nodatavals: (nan, nan, nan, nan, nan, nan)
scales: (1.0, 1.0, 1.0, 1.0, 1.0, 1.0)
offsets: (0.0, 0.0, 0.0, 0.0, 0.0, 0.0)
AREA_OR_POINT: Area
TIFFTAG_SOFTWARE: HEG-Modis Reprojection Tool Nov 4, 2004
The bands are stored in a Dask array. How can I give a name to each band (similar to "data variables" in xarray), so that I can then access each band with, for example:
df['band1name']
Currently, to access the bands I do something like:
df.isel(band=1)
which is not that intuitive.
Thanks
For anyone else stumbling upon this issue: I found a simple way to do this in the Digital Earth Australia documentation (https://docs.dea.ga.gov.au/notebooks/Frequently_used_code/Opening_GeoTIFFs_NetCDFs.html).
We simply convert our DataArray to a Dataset
ds = df.to_dataset('band')
to get all bands as separate variables.
Afterwards, we rename the variables to the correct names:
ds = ds.rename({i + 1: name for i, name in enumerate(bandnames)})
The i + 1 accounts for the band indexing starting at 1, and bandnames is a list of band names that you need to supply.
If you download your GeoTiffs from GEE, you can get all bandnames in ds.attrs['long_name'].
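Putting it together, here is a minimal sketch of the whole workflow (the band names below are made up purely for illustration; replace them with whatever your six bands actually are):
import xarray as xr

bandnames = ['red', 'green', 'blue', 'nir', 'swir1', 'swir2']  # hypothetical names
da = xr.open_rasterio('multiband.tif', chunks={'band': 1})

# split the 'band' dimension into separate data variables named 1, 2, ..., 6
ds = da.to_dataset(dim='band')

# map the integer band labels to the chosen names
ds = ds.rename({i + 1: name for i, name in enumerate(bandnames)})

print(ds['red'])  # access a band by name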

Calculating the bearing change from latitude/longitude

I have at hand a dataset of GPS logs that also contains GPS speeds. Here's what the dataset looks like:
id | gpstime | lat | lon | speed
--------+------------+------------+------------+---------
157934 | 1530099776 | 41.1825026 | -8.5996864 | 3.40901
157934 | 1530099777 | 41.1825114 | -8.599722 | 3.43062
157934 | 1530099778 | 41.1825233 | -8.5997594 | 3.45739
157934 | 1530099779 | 41.1825374 | -8.5997959 | 3.40025
157934 | 1530099780 | 41.1825519 | -8.5998337 | 3.41673
(5 rows)
Now I want to compute the bearing change for each point with respect to true north.
But I have these questions I am yet to find answers to:
Based on my reading, I came across this formula (as in this answer):
Bearing = atan(y,x)
where x and y are the quantities
y = sin(Blon - Alon) * cos(Blat)
x = cos(Alat) * sin(Blat) - sin(Alat) * cos(Blat) * cos(Blon - Alon)
for points A and B respectively. Then from another source (the formula here), it is written as:
Bearing = atan2(y,x)
So I'm confused: which of the formulas should I use?
Also, lat and lon should be converted from degrees to radians before being used in x and y. Given that the lon values in my dataset are negative, should I take the absolute value of each?
I think that for GPS tracks this would be overkill. As long as the distance between two points is not too big (say a few hundred meters), I assume this simplified calculation is sufficient.
The latitude/longitude differences are approximately
Δlat = 111 km * (lat1 - lat2)
Δlon = 111 km * cos(lat) * (lon1 - lon2)
So the bearing would be
bearing = atan(Δlon / Δlat) * 180/π
bearing = atan(cos(lat) * (lon1 - lon2) / (lat1 - lat2)) * 180/ACOS(-1)
For lat, use either lat1 or lat2, or the midpoint if you like, converted to radians:
lat = (lat1 + lat2)/2 * π/180 = (lat1 + lat2)/2 * ACOS(-1)/180
Also consider that Δlat or Δlon could be 0; the sketch below uses atan2, which handles those cases.
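Here is a minimal sketch of that simplified calculation (a hypothetical Python helper, not from any library). It uses atan2, so the signs of both components are taken into account and the negative longitudes in the dataset need no special handling:
import math

def approx_bearing(lat1, lon1, lat2, lon2):
    # approximate bearing (degrees clockwise from true north) from point 1 to point 2,
    # valid only when the two points are close together
    lat_mid = math.radians((lat1 + lat2) / 2)
    d_lat = lat2 - lat1                        # northward difference in degrees
    d_lon = (lon2 - lon1) * math.cos(lat_mid)  # eastward difference, scaled by cos(lat)
    # the 111 km per degree factor cancels in the ratio, so degrees can be used directly
    return math.degrees(math.atan2(d_lon, d_lat)) % 360

# two consecutive fixes from the sample data
print(approx_bearing(41.1825026, -8.5996864, 41.1825114, -8.599722))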

Regarding precision and recall

Suppose we have 99% non-spam and 1% spam. Here I have written a function as below:
function y = predictSpam(x)
    % always predict non-spam
    y = 0;
return
Here the true positives are zero, and accuracy is 99%. In this case precision and recall are zero. Is my understanding right? Please help me fill in the table below for this scenario:
                  | actual class 1 | actual class 0
------------------+----------------+---------------
predicted class 1 |       0        |       0
predicted class 0 |       1        |      99
m = 100. Is the above table filled in correctly?
When using precision and recall, I always go back to the standard definitions:
So we have:
precision = true_positive / (true_positive + false_positive)
recall = true_positive / (true_positive + false_negative)
In your data, 99 samples are correctly classified as 0, and 1 sample is classified as 0 when it should be 1.
With your data:
- true_positive = 0
- true_negative = 99
- false_positive = 0
- false_negative = 1
Your true positive count is 0, so yes, both recall and precision will be 0 (strictly speaking, precision is 0/0 here, which is conventionally reported as 0).
Accuracy is indeed 99%.
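For completeness, here is a minimal sketch (plain Python, just the arithmetic) that computes the three metrics from the counts above, guarding against the 0/0 case:
tp, tn, fp, fn = 0, 99, 0, 1

accuracy = (tp + tn) / (tp + tn + fp + fn)            # 99 / 100 = 0.99
precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0  # 0 / 0, reported as 0
recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0     # 0 / 1 = 0

print(accuracy, precision, recall)  # 0.99 0.0 0.0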

Highchart axis 2 decimal places

I have a question about formatting my axis in Highcharts. I want to display the ticks on the Y axis with 2 decimal places.
The current situation:
0
0.05
0.1
0.15
0.2
I want to display my number like this:
0.00
0.05
0.10
0.15
0.20
Is there an easy way to do this?
Use the yAxis labels format option (see the API):
yAxis: {
    labels: {
        format: '{value:.2f}'
    }
}
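The :.2f in the format template prints each tick value with two decimal places. If you need more control, a labels.formatter callback returning something like this.value.toFixed(2) should produce the same result.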
