The multi-currency indicator doesn't draw and work consistently in MQL4

I wrote an MQL4 multi-currency indicator (later I'll port it to MQL5) which plots 8 buffers, one for each currency (AUD, CAD, etc.). The resulting value is then passed through a double smoothing (like the TSI indicator); basically I'm trying to reproduce the FOTSI indicator (here is the TradingView code: https://it.tradingview.com/script/xjLnpOze/).
The problems I encounter are:
- If I load the indicator on two different charts (e.g. EURUSD and GBPAUD), it draws differently, even though nothing in the code references the symbol of the chart it runs on.
- If I refresh the chart after n bars (right mouse button), the indicator changes values and redraws itself. It is not repainting in bad faith; evidently it just reloads the new values.
- If I try to backtest with visualization, the indicator goes bust, i.e. it draws badly and goes haywire... Even without visualization, when the backtest ends and the indicator is plotted on the chart, it is slightly different from the one on the demo.
In the MT4 platform I only have H1 data (all other timeframes are empty) for all 28 symbols, and the terminal is disconnected from the internet; each bar takes the closing values from the 7 charts involved (EUR from EURAUD, EURCAD, EURCHF, etc.).
I also thought about the asynchronous arrival of ticks: when a tick arrives on the main chart (where the indicator runs), it may not yet have arrived on some of the other charts, so incorrect values are created which are then fixed upon refreshing, right? But I can't figure out whether that is really the problem...
Maybe it's better to run the calculations only when all the charts have started a new bar, and then take Close[1]?
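Something like this synchronisation check is what I have in mind (just a sketch; AllBarsInSync() and the shortened symbol list are illustrative, not my real code):
// Sketch only: recalculate just once per bar, after every involved symbol
// has opened the same new H1 bar. The symbol list is shortened here.
bool AllBarsInSync( const datetime barTime )
{
   string symbols[] = { "EURAUD", "EURCAD", "EURCHF" }; // ... all 28 pairs
   for( int s = 0; s < ArraySize( symbols ); s++ )
      if( iTime( symbols[s], PERIOD_H1, 0 ) != barTime )
         return( false );                               // this feed still lags
   return( true );
}
// inside OnCalculate():
//    if( !AllBarsInSync( Time[0] ) )
//       return( prev_calculated );                     // wait for a later tick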
Here is part of the code (only for AUD); the formula() function calculates the value Close[i] - Open[i], and I have several different formulas.
int OnCalculate(const int rates_total,
                const int prev_calculated,
                const datetime &time[],
                const double &open[],
                const double &high[],
                const double &low[],
                const double &close[],
                const long &tick_volume[],
                const long &volume[],
                const int &spread[])
  {
   int i, total;
   total = prev_calculated == 0 ? rates_total - 1 : rates_total - prev_calculated;
//---- AUD
   for(i = total; i >= 0; i--)
     {
      mtmAUD[i]        = formula(formulas, currencies[0], i);
      mtmAbsAUD[i]     = MathAbs(mtmAUD[i]);
      smooth1AUD[i]    = iMAOnArray(mtmAUD, 0, firstR, 0, MODE_EMA, i);
      smoothAbs1AUD[i] = iMAOnArray(mtmAbsAUD, 0, firstR, 0, MODE_EMA, i);
      smooth2AUD[i]    = iMAOnArray(smooth1AUD, 0, (uchar)MathFloor(firstR / divPeriod), 0, MODE_EMA, i);
      smoothAbs2AUD[i] = iMAOnArray(smoothAbs1AUD, 0, (uchar)MathFloor(firstR / divPeriod), 0, MODE_EMA, i);
      if(smoothAbs2AUD[i] == 0)
        {
         Print("Zero divide of 2nd smoothing at index: #" + (string)i);
         continue;
        }
      aud[i] = 100 * smooth2AUD[i] / smoothAbs2AUD[i];
     }
The attached image shows the indicator (with only AUD plotted), which is different on the two charts.
I've tried on demo and in the backtest, but nothing... :-(

Related

Can't calculate the right Volume RSI in MQL4 with a functioning Pine-Script Example

I want to "translate" a Pine-Script to MQL4 but I get the wrong output in MQL4 compared to the Pine-Script in Trading-view.
I wrote the Indicator in Pine-Script since it seems fairly easy to do so.
After I got the result that I was looking for I shortened the Pine-Script.
Here the working Pine-Script:
// Pinescript - whole Code to recreate the Indicator
study( "Volume RSI", shorttitle = "VoRSI" )
periode = input( 3, title = "Periode", minval = 1 )
VoRSI = rsi( volume, periode )
plot( VoRSI, color = #000000, linewidth = 2 )
Now I want to translate that code to MQL4 but I keep getting different outputs.
Here is the MQL4 code I wrote so far:
// MQL4 Code
input int InpRSIPeriod = 3; // RSI Period

double sumn  = 0.0;
double sump  = 0.0;
double VoRSI = 0.0;
int    i     = 0;

void OnTick() {
   for ( i; i < InpRSIPeriod; i++ ) {
      // Check if the Volume is buy or sell
      double close     = iClose( Symbol(), 0, i );
      double old_close = iClose( Symbol(), 0, i + 1 );
      if ( close - old_close < 0 )
      {
         // If the Volume is positive, add it up to the positive sum "sump"
         sump = sump + iVolume( Symbol(), 0, i + 1 );
      }
      else
      {
         // If the Volume is negative, add it up to the negative sum "sumn"
         sumn = sumn + iVolume( Symbol(), 0, i + 1 );
      }
   }
   // Get the MA of the sump and sumn for the Input Period
   double Volume_p = sump / InpRSIPeriod;
   double Volume_n = sumn / InpRSIPeriod;
   // Calculate the RSI for the Volume
   VoRSI = 100 - 100 / ( 1 + Volume_p / Volume_n );
   // Print Volume RSI for comparison with Tradingview
   Print( VoRSI );
   // Reset the Variables for the next "OnTick" Event
   i    = 0;
   sumn = 0;
   sump = 0;
}
I already checked that the period, symbol and timeframe are the same, and I also have a screenshot of the different outputs.
I already tried to follow the function explanations in the Pine Script docs for the rsi, max, rma and sma functions, but I can't get any results that are even halfway right.
I expect to translate the Pine Script into MQL4.
I do not want to draw the whole Volume RSI as an indicator on the chart.
I just want to calculate the value of the Volume RSI of the last whole period (when a new candle opens) to check whether it reaches higher than 80.
After that I want to check when it comes back below 80 again and use that as a threshold for whether a trade should be opened or not.
I want a simple function that gets the period as an input and takes the current pair and timeframe to return the desired value between 0 and 100.
Up to now my translation keeps producing the wrong output value.
What am I missing in the calculation? Can someone tell me the right way to calculate my TradingView indicator in MQL4?
Q : Can someone tell me what is the right way to calculate my Tradingview-Indicator with MQL4?
Your main miss of the target is in putting the code into a wrong type of MQL4 program. MetaTrader Terminal can place an indicator on a chart via a Custom Indicator type of MQL4 code.
There you have to declare so-called IndicatorBuffer(s), which contain pre-computed values of the said indicator; these buffers are separately mapped onto indicator lines (depending on the type of the GUI presentation style: lines, area between lines, etc.).
In case you insist on having a Custom-Indicator-less indicator, which is pretty legal and needed in some use cases, then you need to implement your own "mechanisation" of drawing lines into a separate sub-window of the GUI in Expert-Advisor code, where you will manage all the settings and plotting "manually", as you wish, segment by segment. (We use this for many reasons during prototyping, so as to avoid all the Custom Indicator dependencies and calling-interface nitty-gritty during complex trading-ecosystem integration, so I am pretty well sure about the doability and the performance benefits & costs of going this way.)
The decision is yours; MQL4 can do it either way.
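For completeness, here is a minimal sketch of the Custom-Indicator route (an illustration only, not your final code: the buffer and input names are mine, tick_volume stands in for Pine's volume, and the plain sums below are NOT Pine's rsi(), which applies Wilder's RMA smoothing, so values will still differ from TradingView):
#property indicator_separate_window
#property indicator_buffers 1
#property indicator_color1  Black

input int InpRSIPeriod = 3;                        // RSI Period

double VoRSIBuffer[];                              // an IndicatorBuffer, mapped onto line 0

int OnInit()
{
   SetIndexBuffer( 0, VoRSIBuffer );
   return( INIT_SUCCEEDED );
}

int OnCalculate( const int rates_total,    const int prev_calculated,
                 const datetime &time[],   const double &open[],
                 const double &high[],     const double &low[],
                 const double &close[],    const long &tick_volume[],
                 const long &volume[],     const int &spread[] )
{
   int limit = rates_total - prev_calculated;
   if( prev_calculated > 0 ) limit++;              // re-do the still-forming bar
   if( limit > rates_total - InpRSIPeriod - 1 )
       limit = rates_total - InpRSIPeriod - 1;     // keep [k+1] inside the arrays
   for( int i = limit - 1; i >= 0; i-- )
   {
      double sump = 0, sumn = 0;
      for( int k = i; k < i + InpRSIPeriod; k++ )  // classify each bar's volume
         if( close[k] > close[k+1] ) sump += (double)tick_volume[k];
         else                        sumn += (double)tick_volume[k];
      VoRSIBuffer[i] = ( sump + sumn == 0 ) ? 50.0 // 100 - 100/(1+p/n) == 100*p/(p+n)
                     : 100.0 * sump / ( sump + sumn );
   }
   return( rates_total );
}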
Q : What am I missing in the Calculation?
BONUS PART : A hidden gem for improving The Performance ...
Whichever way you decide to go, Custom-Indicator-type or Expert-Advisor-type MQL4 code, it is possible to avoid a per-QUOTE-arrival re-calculation of the whole "depth" of the RSI. There is a frozen part and one hot end of the indicator line, and performance-wise it is more than wise to keep static records of the "old", frozen data and just update the "live" hot end of the indicator line. That saves a lot of the response latency your GUI consumes from any real-time response loop...
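As a sketch of that principle (names and seeding are illustrative; pre-loading the averages from history once at start-up is omitted for brevity), an Expert-Advisor-side Wilder-style update freezes the averages once per bar and touches only the live end on every quote:
input int InpRSIPeriod = 3;

datetime lastBarTime = 0;                         // bookkeeping for the frozen part
double   avgUp = 0, avgDn = 0;                    // Wilder averages up to Bar[1]

void OnTick()
{
   if( Time[0] != lastBarTime )                   // a new bar has opened:
   {                                              // fold the just-closed Bar[1] in, once
      double dUp = ( Close[1] > Close[2] ) ? (double)Volume[1] : 0;
      double dDn = ( Close[1] > Close[2] ) ? 0 : (double)Volume[1];
      avgUp = ( avgUp * ( InpRSIPeriod - 1 ) + dUp ) / InpRSIPeriod;
      avgDn = ( avgDn * ( InpRSIPeriod - 1 ) + dDn ) / InpRSIPeriod;
      lastBarTime = Time[0];
   }
   // the hot end: Bar[0] is layered on top, never folded into the frozen part
   double hUp = ( Close[0] > Close[1] ) ? (double)Volume[0] : 0;
   double hDn = ( Close[0] > Close[1] ) ? 0 : (double)Volume[0];
   double up  = ( avgUp * ( InpRSIPeriod - 1 ) + hUp ) / InpRSIPeriod;
   double dn  = ( avgDn * ( InpRSIPeriod - 1 ) + hDn ) / InpRSIPeriod;
   double VoRSI = ( up + dn == 0 ) ? 50.0 : 100.0 * up / ( up + dn );
   Print( "VoRSI = ", VoRSI );
}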

MQL4 iCustom always returns the same (wrong) value (0x7FFFFFFF)

I wrote a custom indicator Speed.mq4 as follows:
double SpeedBuffer[];                   // a Custom Indicator BUFFER
int OnInit() {
   SetIndexBuffer( 0, SpeedBuffer );    // an access INDEX 0->BUFFER
   ...
}
int OnCalculate( const int rates_total,
                 const int prev_calculated,
                 const datetime &time[],
                 const double &open[],
                 const double &high[],
                 const double &low[],
                 const double &close[],
                 const long &tick_volume[],
                 const long &volume[],
                 const int &spread[]
                 ) {
   int start;
   if ( prev_calculated == 0 ) start = 1;
   else                        start = prev_calculated - 1;
   for ( int i = start; i < rates_total - 1; i++ ) {  // CPU-WASTED BY AN INEFFICIENT BACK-STEPPING IN TIME
      double curTypical  = typical( high[i],   low[i],   close[i]   );
      double prevTypical = typical( high[i+1], low[i+1], close[i+1] ); // CPU-WASTED, NO NEED TO RECALC ...
      double curSpeed    = curTypical - prevTypical;
      SpeedBuffer[i]     = curSpeed;
   }
   //--- return value of prev_calculated for next call
   return( rates_total );
}
The indicator works fine in the application and the chart is plotted correctly.
When I try retrieving the last value in the Expert Advisor, I always receive the same value:
double speed = iCustom( NULL, 0, "Speed", 2, 0, 0 );
Print( "speed is: " + speed );
prints:
speed is: 2147483647
It's always the same number, and I'm not sure where the problem is.
From the Print in the indicator I can see the values are calculated correctly, but when I use iCustom I receive only that value.
MQL4 Custom Indicator iCustom() mechanics
MQL4 and even the New-MQL4 (sometimes called MQL4.5) use a rather complicated interfacing model to handle Expert Advisor calls into Custom Indicator computations.
The first thing to realise is that iCustom() is not a call to a function, but rather a method that indirectly "asks" a Custom Indicator, referred to by file name, to retrieve one specific value from a "pre-calculated" DataSTORE.
While it may sound complicated, this is the very nature of the CPU-efficient calculation factory that Custom Indicators were designed to be in the early days of the MQL4 world.
iCustom() is thus just syntax sugar to initiate the retrieval method that gets the relevant piece of pre-calculated value back into the hands of the Expert Advisor.
Custom Indicators put all the pre-calculated values into BUFFER(s), co-aligned with the TimeSeries style of ordering the DataSTORE cells ( [0] == "Now, the current Bar", going [1], [2], [3], ... deeper and deeper back into the history ).
The iCustom() call passes the shift value as a number of bars -- i.e. how deep into the history the retrieving method has to go so as to pick the requested value from the respective BUFFER -- and also the BUFFER identification INDEX ( 0 in our case above, as there is just a single BUFFER, with INDEX == 0 ). This keeps the EA fully agnostic of the Custom Indicator's internal variable names et al.
Just ask which BufferINDEX and which BarNUMBER the value is to be read from.
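Applied to the Speed case above (assuming the 2 is the value for the indicator's first extern/input parameter, as in the question's call), the retrieval decomposes like this:
//                       symbol  TF   file     input   BUFFER  bar
double speed = iCustom(  NULL,   0,   "Speed", 2,      0,      1 );
// NULL, 0 ... current chart's symbol and timeframe
// "Speed" ... file name of the Custom Indicator being "asked"
// 2       ... value handed into the indicator's extern/input parameter(s)
// 0       ... which BUFFER to read ( the one behind SetIndexBuffer( 0, ... ) )
// 1       ... the BarNUMBER shift ( 1 == the last closed bar )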
The Buffer is the one to be blamed for the (wrong) value
The first line of the code says:
double SpeedBuffer[]; // a Custom Indicator BUFFER
If not handled otherwise in OnInit(){}, all cells in the SpeedBuffer will contain
EMPTY_VALUE
An empty value in an indicator buffer has by default exactly this value == 2147483647 (0x7FFFFFFF), as has been reported above.
Q.E.D.
One may state in OnInit(){ ArrayInitialize( SpeedBuffer, 0.123456 ); } to set any other value for cell initialisation (which, for TimeSeries-aligned BUFFERs, happens upon each new-Bar event: all cells get reshuffled one position backwards and cell[0] becomes "empty", pre-loaded with the default value discussed here).
One may also add a step in the indicator's OnCalculate(){ ... SpeedBuffer[0] = -9.87654; ... } so as to avoid leaving cell[0] in a context-inconsistent, "just-initialised" state/value.
Caller-side interface ( how to reduce a risk on a weak-integration interface )
Nevertheless, the responsibility for the value retrieval is on the Expert Advisor side, as it fills in the parameters of the iCustom() interface proxy.
One may use preventive steps as shown in >>> https://stackoverflow.com/a/26389823/3666197
to minimise the risk of improperly ordered parameters / values when calling an external Custom Indicator to retrieve a set of values.
This simple method may save you literally tens of man*days of testing/debugging once a richly extern-parametrised Custom Indicator with several indicator buffers has fallen under suspicion of serving "wrong" numbers (just due to iCustom() call parameters being "invisibly" out of their proper order/context).
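As another cheap preventive step (a sketch, not from the original post), the Expert Advisor can refuse to act on a cell that still holds the initialiser:
// inside OnTick(), sketch only
double speed = iCustom( NULL, 0, "Speed", 2, 0, 1 );
if( speed == EMPTY_VALUE )               // 2147483647 ( 0x7FFFFFFF ), the default filler
{
   Print( "Speed[1] not calculated yet, skipping this tick" );
   return;                               // try again on the next quote
}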
Another thing to keep in mind when a custom indicator shows different values on the chart than are reported to the Expert Advisor: the execution flow through OnCalculate() is different for an Expert Advisor than it is for a chart. Specifically, the chart initially kicks off a call to OnCalculate() with prev_calculated = 0, whereas an EA (whether run in the Strategy Tester or live) will always have prev_calculated = rates_total - 1, so that the number of bars to compute the indicator value for is rates_total - prev_calculated = 1 (i.e. just the current bar).
You do account for this in your code by setting start, but in general, for a complicated indicator (one that often involves referencing more than just the previous bar), one needs to be mindful of this difference and never assume that if the indicator looks good on the chart it is actually working. It needs to be tested separately with an EA, e.g. with the probe below.
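A trivial probe (a sketch for diagnosis only) makes the difference visible: log both counters at the top of OnCalculate() and compare a chart run against a Strategy Tester run:
// first statement inside OnCalculate() -- diagnosis sketch
PrintFormat( "rates_total = %d, prev_calculated = %d, bars to do = %d",
             rates_total, prev_calculated, rates_total - prev_calculated );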
I reviewed my code and finally figured out that:
double speed = iCustom( NULL, 0, "Speed", 2, 0, 0 );
was requesting the latest value, which the custom indicator had not yet calculated;
changing it to:
double speed = iCustom( NULL, 0, "Speed", 2, 0, 1 );
did the trick.

Set the minimum grid resolution in AChartEngine?

I am using the AChartEngine library to plot measurements from a sensor. The values are on the order of 1E-6.
When I plot the values they are shown correctly, but as I zoom into the plot, the maximum resolution I can see in the X labels is on the order of 1E-4. I am using the following code to change the number of labels:
mRenderer.setXLabels(20);
mRenderer.setYLabels(20);
I am also changing the range of the Y axis, but the resolution remains unchanged. Has anyone encountered this problem before?
EDIT
I do not have enough reputation to post images, but the following link shows the chart view that I am getting:
https://dl.dropboxusercontent.com/u/49921111/measurement1.png
What I want is to have more grid lines between 3.0E-5 and 4.0E-5. Unfortunately I have not found how to do that. I also tried changing the renderer pan, the initial range of the plot, and the zoom limits, all without success. I was thinking the only option left would be to override some of the draw methods, but I have no clue how to do that.
I have dug into the source code of AChartEngine and found the problem it has when small numbers are to be plotted. It is in a static function used to compute the labels for every chart:
private static double[] computeLabels(final double start, final double end,
    final int approxNumLabels) {
  // The problem is right here in this condition.
  if (Math.abs(start - end) < 0.000001f) {
    return new double[] { start, start, 0 };
  }
  double s = start;
  double e = end;
  boolean switched = false;
  if (s > e) {
    switched = true;
    double tmp = s;
    s = e;
    e = tmp;
  }
  double xStep = roundUp(Math.abs(s - e) / approxNumLabels);
  // Compute x starting point so it is a multiple of xStep.
  double xStart = xStep * Math.ceil(s / xStep);
  double xEnd = xStep * Math.floor(e / xStep);
  if (switched) {
    return new double[] { xEnd, xStart, -1.0 * xStep };
  }
  return new double[] { xStart, xEnd, xStep };
}
So this function basically takes the start (minimum) and end (maximum) values of the plot and the approximate number of labels. Then it rounds the values and computes the step of the grid (xStep). If the difference between start and end is too small (less than 0.000001f), then the returned start and end are the same and the step is 0. That is why it's not showing any labels between these small values, nor any grid lines! So I just need to replace the 0.000001f with a smaller number, or with a variable, in order to control the resolution of the grid. I hope this can help someone.

What format should the data be for vDSP_ctoz in iOS Accelerate framework

I am trying to display a spectrum analyser for iOS and am stuck after two weeks. I have read pretty much every post about FFT and the Accelerate Frameworks on here and have downloaded the aurioTouch2 example from Apple.
I think I understand the mechanism of FFT (did it in Uni 20 years ago) and am a fairly experienced iOS programmer but I have hit a wall.
I am using AudioUnit to play mp3, m4a, and wav files and have that working beautifully. I have attached a render callback to the AUGraph and I can plot waveforms to the music. The waveform follows the music nicely.
When I take the data from the render callback, which is in float form in the range 0..1, and attempt to pass it through the FFT code (either my own or aurioTouch2's FFTBufferManager.mm), I get something that's not completely wrong, but not correct either. For instance, this is a 440 Hz sine wave:
The peak value is -6.1306, followed by -24, -31, -35, and the values towards the end are around -63.
Animated gif for "Black Betty":
Animated gif for "Black Betty
The format I receive from the Render callback:
AudioStreamBasicDescription outputFileFormat;
outputFileFormat.mSampleRate = 44100;
outputFileFormat.mFormatID = kAudioFormatLinearPCM;
outputFileFormat.mFormatFlags = kAudioFormatFlagsNativeFloatPacked | kAudioFormatFlagIsNonInterleaved;
outputFileFormat.mBitsPerChannel = 32;
outputFileFormat.mChannelsPerFrame = 2;
outputFileFormat.mFramesPerPacket = 1;
outputFileFormat.mBytesPerFrame = outputFileFormat.mBitsPerChannel / 8;
outputFileFormat.mBytesPerPacket = outputFileFormat.mBytesPerFrame;
Looking at the aurioTouch2 example, it seems they receive their data in a signed int format and then run an AudioConverter to convert it to float. Their format is hard to decipher, but it uses a macro:
drawFormat.SetAUCanonical(2, false);
drawFormat.mSampleRate = 44100;
XThrowIfError(AudioConverterNew(&thruFormat, &drawFormat, &audioConverter), "couldn't setup AudioConverter");
In their render callback they copy the data out of the AudioBufferList into mAudioBuffer (Float32*) and pass it to the CalculateFFT method, which calls vDSP_ctoz:
//Generate a split complex vector from the real data
vDSP_ctoz((COMPLEX *)mAudioBuffer, 2, &mDspSplitComplex, 1, mFFTLength);
I think this is where my problem is. What format does vDSP_ctoz expect? The buffer is cast to (COMPLEX*), but I cannot find anywhere in the aurioTouch2 code that puts the mAudioBuffer data into the (COMPLEX*) format. So it must be coming from the render callback in this format?
typedef struct DSPComplex {
    float real;
    float imag;
} DSPComplex;
typedef DSPComplex COMPLEX;
If I don't have the format correct at this point (or understand the format) then there is no point in debugging the rest of it.
Any help would be greatly appreciated.
Code from AurioTouch2 that I am using:
Boolean FFTBufferManager::ComputeFFTFloat(Float32 *outFFTData)
{
    if (HasNewAudioData())
    {
        // Added after Hotpaw2 comment.
        UInt32 windowSize = mFFTLength;
        Float32 *window = (float *) malloc(windowSize * sizeof(float));
        memset(window, 0, windowSize * sizeof(float));
        vDSP_hann_window(window, windowSize, 0);
        vDSP_vmul( mAudioBuffer, 1, window, 1, mAudioBuffer, 1, mFFTLength);

        // Added after Hotpaw2 comment.
        DSPComplex *audioBufferComplex = new DSPComplex[mFFTLength];
        for (int i = 0; i < mFFTLength; i++)
        {
            audioBufferComplex[i].real = mAudioBuffer[i];
            audioBufferComplex[i].imag = 0.0f;
        }

        // Generate a split complex vector from the real data
        vDSP_ctoz((COMPLEX *)audioBufferComplex, 2, &mDspSplitComplex, 1, mFFTLength);

        // Take the fft and scale appropriately
        vDSP_fft_zrip(mSpectrumAnalysis, &mDspSplitComplex, 1, mLog2N, kFFTDirection_Forward);
        vDSP_vsmul(mDspSplitComplex.realp, 1, &mFFTNormFactor, mDspSplitComplex.realp, 1, mFFTLength);
        vDSP_vsmul(mDspSplitComplex.imagp, 1, &mFFTNormFactor, mDspSplitComplex.imagp, 1, mFFTLength);

        // Zero out the nyquist value
        mDspSplitComplex.imagp[0] = 0.0;

        // Convert the fft data to dB
        vDSP_zvmags(&mDspSplitComplex, 1, outFFTData, 1, mFFTLength);

        // In order to avoid taking log10 of zero, an adjusting factor is added in to make the minimum value equal -128dB
        vDSP_vsadd( outFFTData, 1, &mAdjust0DB, outFFTData, 1, mFFTLength);
        Float32 one = 1;
        vDSP_vdbcon(outFFTData, 1, &one, outFFTData, 1, mFFTLength, 0);

        delete [] audioBufferComplex;   // allocated with new[], so delete[] rather than free()
        free( window );

        OSAtomicDecrement32Barrier(&mHasAudioData);
        OSAtomicIncrement32Barrier(&mNeedsAudioData);
        mAudioBufferCurrentIndex = 0;
        return true;
    }
    else if (mNeedsAudioData == 0)
        OSAtomicIncrement32Barrier(&mNeedsAudioData);

    return false;
}
After reading the answer below, I tried adding the DSPComplex zero-filling loop (the one already shown at the top of the method above).
And the result I got was this:
I am now rendering the last 5 results; they are the faded ones behind.
After adding the Hann window:
It now looks a lot better after applying the Hann window (thanks hotpaw2). I'm not worried about the mirror image.
My main problem now is that with a real song it doesn't look like other spectrum analysers. Everything is always pushed high on the left, no matter what music I push through it. After applying the window it does seem to follow the beat a lot better, though.
The AU render callback only returns the real part of the complex input required. To use a complex FFT, you need to fill an equal number of imaginary components with zeros yourself, and copy over the elements of the real part, if needed.

How to stop a for loop (OpenCV)

I am using Processing (processing.org) for a project that requires face tracking. The problem is that the program runs out of memory because of a for loop. I want to stop the loop or at least solve the out-of-memory problem. This is the code:
import hypermedia.video.*;
import java.awt.Rectangle;

OpenCV opencv;

// contrast/brightness values
int contrast_value   = 0;
int brightness_value = 0;

void setup() {
  size( 900, 600 );
  opencv = new OpenCV( this );
  opencv.capture( width, height );                  // open video stream
  opencv.cascade( OpenCV.CASCADE_FRONTALFACE_ALT ); // load detection description, here-> front face detection : "haarcascade_frontalface_alt.xml"
  // print usage
  println( "Drag mouse on X-axis inside this sketch window to change contrast" );
  println( "Drag mouse on Y-axis inside this sketch window to change brightness" );
}

public void stop() {
  opencv.stop();
  super.stop();
}

void draw() {
  // grab a new frame and convert to gray
  opencv.read();
  opencv.convert( GRAY );
  opencv.contrast( contrast_value );
  opencv.brightness( brightness_value );
  // proceed detection
  Rectangle[] faces = opencv.detect( 1.2, 2, OpenCV.HAAR_DO_CANNY_PRUNING, 40, 40 );
  // display the image
  image( opencv.image(), 0, 0 );
  // draw face area(s)
  noFill();
  stroke( 255, 0, 0 );
  for ( int i = 0; i < faces.length; i++ ) {
    rect( faces[i].x, faces[i].y, faces[i].width, faces[i].height );
  }
}

void mouseDragged() {
  contrast_value   = (int) map( mouseX, 0, width,  -128, 128 );
  brightness_value = (int) map( mouseY, 0, height, -128, 128 );
}
Thank you!
A few points...
1. As George mentioned in the comments, you can reduce the size of the capture area, which will dramatically reduce the amount of RAM your sketch uses to analyze the face tracking. Try making two global variables called CaptureWidth and CaptureHeight and setting them to 320 and 240 - which is totally sufficient for this.
2. You can increase the amount of memory that your sketch uses by default in the Java Virtual Machine. Processing defaults to 128 MB I think, but if you go to the Preferences you will see a checkbox to "Increase maximum available memory to [x]"... I usually make mine 1500 MB, but it depends on your machine what you can handle. Don't try to make it bigger than 1800 MB unless you are on a 64-bit machine and are using Processing 2.0 in 64-bit mode...
3. To actually break the loop, use the 'break' command: http://processing.org/reference/break.html ... but please understand why you want to use it first, as it will simply jump you out of your loop.
4. If you only want to show a certain number of faces, you can test the loop index i against that limit, et cetera, which might help....
But I think the loop itself isn't the culprit here; it's more likely the memory footprint. Start with suggestions 1 & 2 and report back...
