Implementing a news filter in MQL4 programs

I am trying to implement a news filter in a FOREX robot I am creating using MQL4.
My main issue is getting the news and the time it will occur. I have seen posts that suggest using the ffcal indicator. However, I want a more direct approach: I want to get the news details and have the EA ( robot ) process them on a case-by-case basis.
Any ideas on how to achieve this, or something similar, would be appreciated.

Doable . . .
Best done, though, outside the MQL4/5 QUOTE-stream event-loop OnTick(){...} event-handler.
Any idea ?
One may create a persistent autonomous News-[Collector|Processor|Analyser|TradingResponder] pipeline and let MQL4/5 EA-s just ask and reflect the TradingResponder-delivered advice.
This way your solution will not block / degrade the MetaTrader Terminal platform's processing stability, and your trading infrastructure may enjoy the performance and robustness of a distributed architecture, if you will.
Having a similar architecture implemented and operated for externalised QuantFX-processing of both TA and FA input factors ( yes, also the News ), there is less than ~ 80 [ms] Turn-Around-Time to jump into an XTO after receiving each QUOTE-event, so unless your target is an HFT-grade latency-threshold of TAT << units of [us], you may enjoy the same.
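As an illustration of the EA-side "ask and reflect" role, here is a minimal sketch, assuming the external News-pipeline periodically drops a one-word advice file into the Terminal's Files sandbox; the file name, the tiny "TRADE"/"BLOCK" protocol and the fail-open default are illustrative assumptions, not any standard API:

bool NewsAdviceAllowsTrading()                             // EA only reads the advice, never computes it
{
   string advice = "TRADE";                                // fail-open default, adjust to taste
   int    fh     = FileOpen( "news_advice.txt",            // hypothetical file, written by the external pipeline
                             FILE_READ | FILE_TXT );
   if ( fh != INVALID_HANDLE )
   {  advice = FileReadString( fh );                       // e.g. "TRADE" or "BLOCK"
      FileClose( fh );
   }
   return( StringFind( advice, "BLOCK" ) < 0 );            // any "BLOCK" advice vetoes new trades
}

void OnTick()
{
   if ( !NewsAdviceAllowsTrading() ) return;               // reflect the externally delivered advice
   // ... regular trading logic ...
}

The cheap local read keeps OnTick() non-blocking, while all the slow News collection and analysis stays in the external process.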


How to use LogiCORE DSP48 Macro?

I want to learn how to use the LogiCORE DSP48 Macro. I am reading the Xilinx documentation but I cannot quite understand how to start my first design with the DSP48 Macro. Can anyone help me make a simple design to get a better understanding of this IP core, please?
Thanks in advance!
In many cases you would use the DSP48 implicitly, by writing Verilog/VHDL expressions containing add, subtract, and multiply operations.
x = a * b + c
A problem with the above expression is that the multiplication and addition take place in a single cycle. You can run the expression at a higher frequency if the operation is pipelined. Vivado can sometimes retime these expressions across registers in order to make use of the DSP48 pipeline registers.
However, I understand wanting to use the DSP48 directly. You instantiate DSP48s just like other RTL modules. The ports, parameters, and behaviors are described in the DSP Slice User Guide for the FPGA family that you are using.
wire [47:0] c;
wire [24:0] a;
wire [17:0] b;
wire [47:0] x;                     // the P output of the slice is 48 bits wide

// Port names on the Xilinx DSP48E1 primitive are upper-case; the clock-enable
// and reset connections are omitted here for brevity.
DSP48E1 #() dsp (
    .CLK     (clk),                // design clock, required once the pipeline registers are enabled
    .A       (a),
    .B       (b),
    .C       (c),
    .P       (x),
    .OPMODE  (7'b0110101),         // X,Y = partial products of A*B, Z = C  =>  P = A*B + C
    .ALUMODE (4'b0000)             // P = Z + X + Y + CIN
);
This instance is copied from one of my inner-product implementations. It is fully pipelined because I was aiming for 500 MHz operation; I only achieved 400 MHz due to other combinational paths.
For Xilinx 7 Series:
DSP48E1 Slice User Guide
For Xilinx UltraScale:
DSP48E2 Slice User Guide

Wrapper for slow indicator for backtesting

If a technical indicator works very slowly and I wish to include it in an EA ( using iCustom() ), is there some "wrapper" that could cache the indicator results to a file, based on the particular indicator inputs?
This way I could get better speed the next time I backtest with the same set of parameters, since the "wrapper" could read the results from the file rather than recalculate them from the indicator.
I heard that some developers did that for their own needs in order to speed up backtesting, but as far as I know there is no publicly available solution.
If I had to solve this problem, I would create a class with two fields ( datetime and indicator value, or N buffers of the indicator ), and a collection class similar to CArrayObj.mqh but with an option to apply binary search, or to start looking for an element from a specific index rather than from the very beginning of the array.
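For illustration only, a minimal sketch of such a cache, keyed by bar time and searched with a binary search over the sorted datetime field; the class and method names are mine, not any library API, and a real version would grow the arrays in chunks rather than per element:

class IndicatorCache
{
private:
   datetime m_time[];                          // bar times, kept sorted ascending
   double   m_value[];                         // cached indicator values
   int      m_count;

public:
   IndicatorCache() { m_count = 0; }

   void Add( const datetime t, const double v )
   {  ArrayResize( m_time,  m_count + 1 );
      ArrayResize( m_value, m_count + 1 );
      m_time [ m_count ] = t;
      m_value[ m_count ] = v;
      m_count++;
   }

   int Find( const datetime t ) const          // binary search, returns index or -1
   {  int lo = 0, hi = m_count - 1;
      while ( lo <= hi )
      {  int mid = ( lo + hi ) / 2;
         if      ( m_time[ mid ] == t ) return( mid );
         else if ( m_time[ mid ] <  t ) lo = mid + 1;
         else                           hi = mid - 1;
      }
      return( -1 );
   }

   double ValueAt( const int i ) const { return( m_value[ i ] ); }
};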
Recent MT4 Builds added VERY restrictive conditions for Indicators
In the early years of MT4, this was not as harsh as it is these days.
FACT#1: fileIO is 10,000x ~ 100,000x slower than memIO:
This means there is no benefit from "pre-caching" values to disk.
FACT#2: Processing Performance has HARD CEILING:
All, yes ALL, Custom Indicators that are being used in a MetaTrader4 Terminal ( be it directly in the GUI, or indirectly via Template(s), or called via iCustom() calls, and in the Strategy Tester via .tpl + iCustom() ): ALL THESE SHARE A SINGLE THREAD ...
FACT#3: Strategy Tester has the most demanding needs for speed:
Thus: eliminate all, indeed ALL, non-core indicators from the tester.tpl template and save it as "blank", to avoid any part of such non-core processing.
Next, re-design the Custom Indicator, where possible, so as to avoid any CPU-ops & MEM-allocation(s) that are not necessary.
I remember a Custom Indicator design with indeed deep convolutions, which could be re-engineered so as to keep just a triangular sparse-matrix with the necessary updates; that increased the speed of the Indicator processing more than 10,000x, so code-revision is the way.
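For illustration, the classical incremental pattern in an MT4 Custom Indicator recomputes only the bars that were not processed yet; the per-bar formula below is just a placeholder for whatever heavy math the real indicator performs:

#property indicator_chart_window
#property indicator_buffers 1

double ExtBuffer[];

int init()
{
   SetIndexBuffer( 0, ExtBuffer );
   SetIndexStyle(  0, DRAW_LINE );
   return( 0 );
}

int start()
{
   int counted = IndicatorCounted();            // bars already processed on previous calls
   if ( counted < 0 ) return( -1 );
   if ( counted > 0 ) counted--;                // always redo the still-forming bar
   int limit = Bars - counted;
   for ( int i = 0; i < limit; i++ )            // series indexing: [0] == current bar
      ExtBuffer[i] = ( High[i] + Low[i] + Close[i] ) / 3.0;   // placeholder for the heavy calculation
   return( 0 );
}

Together with pruning the unnecessary allocations, this alone often removes the bulk of the per-tick cost.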
So, rather run a separate MetaTrader4 Terminal just for BackTesting than wait for many hours due to the incompressible nature of numerical processing under the traffic-jam congestion in the shared use of the CustomIndicator-solo-Thread, which no scheduling could improve.
FACT#4: O/S can increase a process priority:
Having got to the Devil's zone, it is common practice to spin-up the PRIO for the StrategyTester MT4 up to the "RealTime PRIO" in the O/S tools.
One may even additionally "lock" this MT4-process onto certain CPU-core(s) and set up all other processes with an adjacent CPU-core-AFFINITY, so that these two distinct groups of processes do not jump onto the other group's CPU-core(s). Hard, but if squeezing the performance to the bleeding edge, this is a must.

Dynamic Process parameter adjustment in Semiconductor manufacturing data

I have process parameter data from semiconductor manufacturing. The requirement is to suggest the best adjustments to make to the process parameters to get better yield, i.e. the best path to high yield. Which machine learning / statistical models best suit this requirement?
Note: I have thought of using a decision tree, which can give us the best path to high yield.
I would like to know if there are any other methods that could be more efficient.
The data looks like:
lotno x1 x2 x3 x4 x5 yield(%)
Yield < 95% is coded as 0 and > 95% as 1.
I'm not really sure of the question here, but as a former semiconductor process engineer, here is how I look at the yield-improvement approach, from a process engineer's perspective.
Process Development.
DOE: Typically, I would run structured DOEs to understand my process ( category 5 below ). I would first identify "potential" 'factors' and run various "screening" experiments to identify statistical significance. The goal here is basically to identify the most statistically significant (and, for that matter, the least significant) factors. These are inherently simple experiments with a low number of "levels"; they don't target understanding of the curvature of the response surface, they just look for the magnitude of the change in response vs. factor. Generally, I am most concerned with 'Process' factors, but it is important to recognize that the influence of variable inputs can come from more than just "machine knobs". Variability can arise from 1) People, 2) Environment (moisture, temp, etc.), 3) Consumables (used in the process), 4) Equipment (is 40 psi on this tool really 40 psi, and the same as 40 psi on a different tool?), 5) Process variable settings.
With the most statistically significant factors, I would run a more elaborate DOE using the major factors and analyze this data to develop a model. Generally more 'levels' are used here, to allow for insight into the curvature of the response surface via the analysis. There are many types of well-known standard experimental design structures here, and there is software such as JMP that is specifically set up to do this analysis.
From here, the idea would be to generate a model in the form of Response = F (Factors). That allows you to essentially optimize the response based upon these factors where the response is a reflection of your yield criteria.
From here, the engineer would typically execute confirmation runs with optimized factors to confirm optimized response.
Note that the software analysis typically allows the engineer to illuminate any run-order dependence. The execution of the DOE is typically performed in a randomized cell fashion (each 'cell' is a set of conditions for the experiment). Similarly, the experiments include some level of repetition to gauge the 'repeatability' of the 'system'. This inclusion can be explicit (run the same cell twice), but there is also some level of repeatability inherent in the design since you are running multiple cells, albeit at different settings. But generally, the experiment includes explicitly repeated cells.
And finally there is the concept of manufacturability, which includes constraints of time, cost, physical limits, equipment capability, etc. (The ideal process works great, but it takes 10 years, costs 1 million dollars and requires projected settings outside the capability of the tool.)
Since you have manufacturing data, hopefully you also have data that captures the other types of factors (1, 2, 3), so you should specifically analyze the data to try to identify such effects. This is typically done as A vs. B comparisons: Person A vs. B, Tool A vs. B, Consumable A vs. B, Consumable lot A vs. B, Summer vs. Winter, etc.
Basically, there are all sorts of comparisons you could envision here, checking for statistical differences across two sets of populations.
A comment on the response: what is the yield criterion? You should know this in order to formulate the model. For semiconductors, we have line yield (process yield), but there is also device yield. I assume for your work you are primarily concerned with line yield. So minimizing variability in the factors (from categories 1-5) to achieve the desired response (target response(s) with minimal variability) is the primary goal.
APC (Advanced Process Control).
In many cases there is significant trending that results from whatever reason: crappy tool control (the tool heats up), crappy consumables (the target material wears, the polishing pad wears, the chemical bath gets loaded, whatever), and so the idea here is how to adjust the next batch/lot/wafer based upon the history of what came before. Either improve the manufacturing to avoid/minimize this trending (run-order dependence) or adjust the process to accommodate it so as to achieve the desired response.
Time for lunch, hope this helps...if you post on the specific process module type, and even equipment and consumables, I might be able to provide more insight.

How to download data from an SQL-database and annotate an MT4-chart

I have recently downloaded the MetaTrader Terminal platform ( MT4 ).
I have my own back testing engine which stores some output in my SQL-server database. The output depends on the model I am testing. However, the output can just be as simple as the time of entry of a trade.
What I would like to know
Is it possible in MQL4 to download data from a SQL-server database and then annotate the chart with a simple "B" for a buy entry or "S" for a sell entry?
So, say I have run a back-test simulation ( e.g. EURUSD from 2010 to 2011 ) and stored the times of the buy and sell entries. I would then like to go to my MetaTrader 4 platform and run a script which would download the times of all the buy and sell entries from my SQL-database and label these XTO-s on my EURUSD chart.
Yes, this is possible
The MQL4 language, incl. the "New"-MQL4 ( aka MQL4.5 ), has syntactical support for importing DLL-based services, which allows the re-integration of tools that the closed syntax of MQL4 does not let you gain in a more natural way.
//+------------------------------------------------------------------+ // msMOD(s) 2014 >>> [dev]_test_(python)_.PUB__(mql).SUB_with_KBD_and_SIG___StatefullGrammarFSA
//| Ver 4.00, Build 509 [dev]__********.mq4 | // msMOD(s) 2013
//+------------------------------------------------------------------+ //
#property copyright "[dev] msMOD(s) (c) 1987-2014" //
// ---------------------------------------------------------------------<#import>.start
#import "msLIB_services.ex4"
void msLIB.aSnapshot.MAKE();
#import
// ---------------------------------------------------------------------<#import>.end
// ZMQ LIBRARY |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
#include <mql4zmq.mqh> // Include the libzmq.dll abstraction wrapper
// |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
This way your code, be it an MQL4-Script or an MQL4-ExpertAdvisor, can communicate with external processes, incl. any reasonably working DBMS.
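Once the buy/sell entry times are back on the MQL4 side, the chart annotation itself is plain object handling. A minimal sketch, assuming the times were already fetched through such an imported service layer; the helper name and colours are arbitrary:

void MarkEntry( const datetime when, const bool isBuy )
{
   string name  = StringFormat( "XTO_%s_%d", isBuy ? "B" : "S", (int) when );
   int    bar   = iBarShift( Symbol(), Period(), when );          // nearest bar for that entry time
   double price = iClose(    Symbol(), Period(), bar );

   if ( ObjectCreate( name, OBJ_TEXT, 0, when, price ) )          // anchor a text object at time/price
      ObjectSetText( name, isBuy ? "B" : "S", 12, "Arial",
                     isBuy ? clrLime : clrRed );                  // "B" in green, "S" in red
}

Calling MarkEntry() once per record returned from the database is all the annotation that the question asks for.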
Beware
There are a few important features a code-design and integration architecture should bear in mind. MQL4 has, since the earliest days, not been a plain sequential processor; the MQL-CustomIndicator in particular is far distant from this paradigm. The code ( except for the case of an MQL4-Script ) acts as an event-driven factory that is initiated by an asynchronous flow of incoming Market Events. The user is responsible for all measures needed not to violate the real-time stability of this Alpha&Omega principle-of-MQL4-principles. In other words, a poor design that runs into some I/O-blocking ( due to RDBMS processing et al ) will most probably be a reason for Trading Terminal crash(es), which is the last thing anybody is willing to experience ( be it in a live-trading or a back-testing phase ), isn't it?
So, a sound non-blocking, heterogeneous, parallel multi-processing integration architecture & code-design is to be used for this task.
It works great, if done professionally
Keeping the above in mind allows very smart, fast and (almost) unlimited architectures to work together with the Trading Terminal(s). We have had multiple cases with near-real-time messaging between MT4/MQL4 code and an AI/ML engine via python, a fast FIX-Protocol streaming engine for real-time data input from a Liquidity Pool provider, a remote NVIDIA/GPU computation fabric, and remote co-integrated IRC/skype/email Signal-provisioning channels.
So a can-do philosophy is in place. SQL is nothing exotic in this sense, and putting labels is trivial in the same sense. It is just a matter of your imagination: MQL4 allows one to build ( again, using a near-real-time design ) a responsive/interactive GUI-layer that allows, within a few [msec] stability barrier, working with the Trading Terminal in a pure-graphical manner ( long before the One-Click-Trading marketing tag came with just a click-To-Buy / click-To-Sell ), working fully interactively with line-controlled / graphical-object visual trading aids, be it for a fully automated trade-execution ( with an indirect GUI-configuration of the rules-set ) or for an Augmented Trading style.
Yes, you can make your MT4 Trading Terminal a sort of "remote-programmable charting display", driven not by the FX-Market but from a Cloud-processor, where your remote Strategy Testing Engine rulez ...

Omniture: Creating Specific Context Variables

I was wondering if anyone out there can help.
My company works in the travel industry and one of the products we provide is the option of buying a flight and hotel together.
One of the advantages of this is that sometimes a visitor can save on a hotel if they buy the package together.
What I want to be able to track is the following:
The hotel which has the saving on it (accommodation code), the saving that they will make, and the price of the package that they will pay.
I am new to implementing but have been told by a colleague that I can use a context variable.
Would anyone be able to tell me how I should write this please?
Kind Regards
Yaser
Here is the documentation entry for Context Data Variables.
For example, in the custom code section of the on-page code, within s_doPlugins or via some wrapper function that ultimately makes an s.t() or s.tl() call, you would have:
s.contextData['package.code'] = "accommodation code";
s.contextData['package.savings'] = "savings";
s.contextData['package.price'] = "price";
Then in the interface you can go to processing rules and map them to whatever props or eVars you want.
Having said that... processing rules are pretty basic at the moment and, to be honest, it's not really worth it IMO. First, you have to get certified (take an exam and pass) to even access processing rules. It's not that big a deal, but it's IMO a pointless hoop to jump through (tip: if you are going to go ahead and take this step, be sure to study up on more than just processing rules. Despite the fact that the exam/certification is supposed to be about processing rules, there are several questions that have little to nothing to do with them).
Second, context data doesn't show up in reports by itself. You must assign the values to actual props/eVars/events through processing rules (or get ClientCare to use them in a VISTA rule, which is significantly more powerful than a processing rule but costs lots of money).
Third, the processing rules are pretty basic. Seriously, you're limited to just simple stuff like straight duping, concatenating values, etc.
Fourth, processing rules are limited in setting events, and won't let you set the products string. IOW, you can set a basic (counter) event, but not a numeric or currency event (an event with a custom value associated with it). The reason I mention this is that those price and savings values might be good as a numeric or currency event for calculated metrics. Since you can't set such an event via processing rules, you'd have to set the events in your page code anyway.
The only real benefit here is if you're simply looking to dupe them into a prop/eVar and that prop/eVar varies from report suite to report suite (which, FYI, most people try to keep consistent across report suites anyway, and people rarely repurpose them).
So if you are already consistent across multiple report suites (or only have 1 report suite in the first place), then, since you're already having to put some code on the site, there's no real incentive not to just pop the values into the props/eVars directly in the first place.
I guess the overall point here is that since the overall goal is to get the values into actual props, eVars and possibly events, and processing rules fail on a lot of levels, there's no compelling reason not to just pop them in the first place.
