MQL create new candles - mql4

The bid prices in my MT4 are roughly 1 pip below my broker's trade prices. The reason is that the price server feeding the charts has a spread of 2.5 pips, whereas my broker's spreads are much narrower (0.2 pips).
I need to redraw the candles so that everything sits about 1 pip higher than where it is currently drawn.
I've studied histogram drawing, but it doesn't help me with drawing the entire candle.
Is it possible to redraw each candle in its entirety about 1 pip higher than what the price server is giving me?
If the candles could be drawn at the halfway mark between bid and ask, the result would be very close to correct.

Add all candles to an array (use CopyRates, or loop over them from Bars-1 down to 0), then create a new symbol as shown in the PeriodConverter script and write all candles from the first to the last to get an offline chart; apply your price manipulations at that point. Then keep updating the chart if you need it live, or re-save it under the existing symbol name in order to backtest (do not forget to disconnect first, otherwise MT4 will download quotes from the server and overwrite your file).

Related

MQL4 gets different data than the MT4 History Center shows

I'm quite new to MQL so this may be a misunderstanding.
This code:
Print( TimeToStr( iTime(NULL,PERIOD_M5,i), TIME_DATE|TIME_MINUTES), iOpen(NULL,PERIOD_M5,i), iHigh(NULL,PERIOD_M5,i), iLow(NULL,PERIOD_M5,i), iClose(NULL,PERIOD_M5,i) );
Prints this:
2023.01.05 23:25, 0.91648, 0.91678, 0.91636, 0.91676
If I export data from the History Center I get slightly different numbers:
2023.01.05T23:25, 0.91646, 0.91675, 0.91633, 0.91675, 146
The values are just a tiny bit smaller. This is consistent - ALL of the values exported from the History Center are a bit lower than what I see when I examine the data with iOpen() etc. The difference seems to range from 0.1 to 0.3 pips.
I assume this is a Bid vs Ask thing. Is it? I understand that iOpen() etc. return Bid prices. But does the History database contain both Bid and Ask? Is there a way to see both prices?
Or is it just guessing? If so, it does not seem to be adjusting the prices consistently.
Edit: I am running in the Strategy Tester.
Edit2: Today I downloaded a bunch of M1 data from Dukascopy and imported it into MT4 History Center. Now I get exactly the results I expect from my program.
But my question stands, about the data MT4 downloads from the broker.

Sound Wave Peak Detection (Cricket Chirping)

I am looking for a way to code or find a program that can record the chirps of crickets, either live or from a prerecorded audio file (large, ~24 hours), for a lab experiment.
I'm not too sure how to approach this as I'm a web developer, but I have experience with JS and Python, along with their libraries. My initial idea was to use Matplotlib to produce an audio visualizer, and then count each time a certain dB range is reached that matches the dB of a cricket chirp, but I have no idea how to go about it.
I have successfully visualized the chirps in an online spectrum analyzer (Spectrum Visualizer of Audio Chirps) and can see them clearly; however, I don't know how I can use code to count each "chirp" and record it, along with its date and time, in a table of values / dataset of some sort.
Any guidance or help would be greatly appreciated!
Really naive solution: For every unit of time (column of pixels in the spectrogram), you can calculate the sum of all the values in a column. For example, if all the pixels in a column are black, the sum for that column will be 0, if some of them are colored, the sum will be >0.
Then loop over the sums: if at a certain step you go from 0 to > 0, and then after a while you go back to 0, you've hit a peak.
If the output is too "noisy", you can use a small threshold value instead of 0. Tweak it until the results seem OK. This works well when the background noise is pretty much the same all the way through; but if it drifts up and down over the course of the recording, you need something more complicated (for example, a threshold that constantly changes with the average of the last n sums).
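A minimal sketch of this column-sum idea in Python, assuming NumPy/SciPy are available; the file name cricket_recording.wav and the threshold are placeholders to tune, not values from the question:

# Naive chirp counter: sum the spectrogram columns, then treat each
# below-threshold -> above-threshold transition as the start of a chirp.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, samples = wavfile.read("cricket_recording.wav")   # hypothetical file name
if samples.ndim > 1:                                     # mix stereo down to mono
    samples = samples.mean(axis=1)

freqs, times, spec = spectrogram(samples, fs=rate, nperseg=1024)
column_sums = spec.sum(axis=0)            # one value per time step ("column of pixels")

THRESHOLD = column_sums.mean() * 2        # crude starting point; tweak until results look right
active = column_sums > THRESHOLD          # True wherever something is sounding
chirp_starts = np.flatnonzero(~active[:-1] & active[1:]) + 1   # False -> True transitions

for idx in chirp_starts:
    print(f"chirp at {times[idx]:.2f} s into the recording")
print(f"total chirps: {len(chirp_starts)}")

Adding the recording's start date/time to times[idx] gives an absolute timestamp per chirp, and replacing the fixed THRESHOLD with a rolling mean of the last n column sums gives the adaptive variant mentioned above.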
Or you can Google "peak detection algorithm" and implement one of them. Or you can Google "peak detection library" in your favourite language and use one of them.

Plot raw time series with Kibana Timelion

I might be missing something. How can I plot a raw time series with Timelion without applying any further aggregation? Just the raw data of a field over time that I have in an index. Of course I select the proper time window for the data.
I was trying to achieve the same thing and didn't fully get what I wanted, but maybe these steps will help you.
My data was on a per-minute basis, so I didn't want any finer bucketing. Selecting Interval = 1m in the UI helps only for short time ranges, but adding interval=1m inside the .es() block works for long ranges, too.
To keep lines from dropping back to 0 between points, use .es().fit(carry).
.es().scale_interval(1m).fit(scale) - this makes my chart return to 0 when there is no data for a period, rather than carrying the line along at the same level.
.es(metric=max:value_field) avoids summing up the values and instead shows the max of the aggregated set.
My charts are still weirdly aggregated, but maybe it'll help someone.
Useful links:
Sparse time series in timelion
https://www.elastic.co/blog/sparse-timeseries-and-timelion
Scaling issue 1
https://discuss.elastic.co/t/diferent-value-on-y-axis-depending-on-time-interval/67785
Scaling issue 2
https://discuss.elastic.co/t/timelion-giving-wrong-metric-aggregate-value-on-enlarging/132789
Scaling issue 3
https://discuss.elastic.co/t/re-timelion-giving-wrong-metric-aggregate-value-on-enlarging/132925

How does OpenTSDB downsample data

I have a 2 part question regarding downsampling on OpenTSDB.
The first is that I was wondering whether OpenTSDB treats the last end point as inclusive or exclusive when it calculates downsampling, or whether it counts the end data point twice.
For example, if my time interval is 12:30pm-1:30pm, I get DPs every 5 min starting at 12:29:44pm, and my downsample interval sums every 10 minute block, does the system take the DPs from 12:30-12:39 and sum them, then 12:40-12:49 and sum them, etc., or does it take the DPs from 12:30-12:40, then from 12:40-12:50, etc.? Yes, I know my data is off by 15 sec, but I don't control that.
I've tried to calculate it by hand, but the data I have isn't helping me. The numbers I'm calculating aren't adding up to the above, nor are they matching what the graph is showing. I don't have access to the system that's pushing numbers into OpenTSDB, so I can't set up dummy data to check.
The second question is how downsampling plots its points on the graph given my time range and downsample interval. I set downsample to sum 10 min blocks and set my range to 12:30pm-1:30pm. The graph shows the first point of the downsampled graph at 12:35pm, which makes logical sense. I then changed the range to 12:24pm-1:29pm and expected the first point to start at 12:30, but the first point shown is 12:25pm.
Hopefully someone can answer these questions for me. In the meantime, I'll continue trying to find some data in my system that helps show/prove how downsampling should work.
Thanks in advance for your help.
Downsampling isn't currently working the way you expect, although since this is a reasonable and commonly made expectation, we are thinking of changing this in a later release of OpenTSDB.
You're assuming that if you ask for a "10 min sum", the data points will be summed up within each "round" (or "aligned") 10 minute block (e.g. 12:30-12:39 then 12:40-12:49 in your example), but that's not what happens. What happens is that the code will start a 10-minute block from whichever data point is the first one it finds. So if the first one is at time 12:29:44, then the code will sum all subsequent data points until 600 seconds later, meaning until 12:39:44.
Within each 600 second block, there may be a varying number of data points. Some blocks may have more data points than others. Some blocks may have unevenly spaced data points, e.g. maybe all the data points are within one second of each other at the beginning of the 600s block. So in order to decide what timestamp will result from the downsampling operation, the code uses the average timestamp of all the data points of the block.
So if all your data points are evenly spaced throughout your 600s block, the average timestamp will fall somewhere in the middle of the block. But if, say, all the data points are within one second of each other at the beginning of the 600s block, then the returned timestamp will reflect that, by virtue of being an average. Just to be clear, the code takes an average of the timestamps regardless of which downsampling function you picked (sum, min, max, average, etc.).
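To make that concrete, here is a small Python toy model of the behaviour described above (not OpenTSDB code, just a simulation of the stated rules: a block starts at the first data point found, spans the downsample interval, and is reported at the average timestamp of its points):

# Toy model of the described downsampling (not OpenTSDB source code).
def downsample(points, interval=600, agg=sum):
    points = sorted(points)                  # (unix_ts, value) pairs
    out, i = [], 0
    while i < len(points):
        block_start = points[i][0]           # the first point opens the block
        block = []
        while i < len(points) and points[i][0] < block_start + interval:
            block.append(points[i])
            i += 1
        avg_ts = sum(ts for ts, _ in block) / len(block)
        out.append((avg_ts, agg(v for _, v in block)))
    return out

# Data points every 5 minutes starting at 12:29:44, expressed as seconds after 12:00:00.
pts = [(29 * 60 + 44 + k * 300, 1.0) for k in range(6)]
print(downsample(pts))   # first block covers 12:29:44 .. 12:39:43, not the aligned 12:30 .. 12:39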
If you want to experiment quickly with OpenTSDB without writing to your production system, consider setting up a single-node OpenTSDB instance. It's very easy to do as is shown in the getting started guide.

Esper (C.E.P.) query to calculate candlesticks every full minute

I am using Complex Event Processing (Esper) technology to provide real-time candlestick calculations in my system. I am doing fine with calculating the values, however I find it difficult to ensure that each candle window starts at a full minute (for a one-minute candle) and ends just before the next minute starts (i.e. candle 1 [06:00.000 - 06:00.999], candle 2 [06:01.000 - 06:01.999], etc.).
Is there a pattern or command in Esper's query language that is able to provide such functionality?
I'd appreciate constructive comments and directions.
In Esper you can use a pattern to fire every minute at the zero second, i.e.
insert into TriggerEvent select * from pattern [every timer:interval(1 min)]
// named window to hold candle data, compute next candle
on TriggerEvent select * from NamedWindowCandle ....
// delete old data
on TriggerEvent delete from NamedWindowCandle
-rg
Local time is often different from exchange time, and there is also latency in delivering tick data. Minute bars are therefore often computed using the exchange timestamp, which must be extracted from the tick events. A new minute-bar event is sent when the tick timestamps enter a new minute.
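That last part is independent of Esper itself; here is a rough Python sketch of the idea, assuming ticks arrive in non-decreasing exchange-timestamp order (the names exch_ts and price are illustrative, not from the question):

# Build minute bars keyed by the exchange timestamp carried on each tick.
# A bar is emitted as soon as a tick's timestamp rolls into the next minute,
# so local clock drift and delivery latency do not matter.
class MinuteBarBuilder:
    def __init__(self, on_bar):
        self.on_bar = on_bar          # callback invoked with each finished bar
        self.minute = None            # exchange-time minute currently being built
        self.bar = None               # dict with open/high/low/close

    def on_tick(self, exch_ts, price):       # exch_ts = epoch seconds, exchange clock
        minute = int(exch_ts // 60)
        if self.bar is not None and minute > self.minute:
            self.on_bar(self.minute * 60, self.bar)    # previous minute is complete
            self.bar = None
        if self.bar is None:
            self.minute = minute
            self.bar = {"open": price, "high": price, "low": price, "close": price}
        else:
            self.bar["high"] = max(self.bar["high"], price)
            self.bar["low"] = min(self.bar["low"], price)
            self.bar["close"] = price

# Ticks at 06:00:59.9 and 06:01:00.1 exchange time land in different bars.
builder = MinuteBarBuilder(lambda start, bar: print(start, bar))
builder.on_tick(6 * 3600 + 59.9, 1.1001)
builder.on_tick(6 * 3600 + 60.1, 1.1003)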
