Silver RSI indicator - zero gradient for one hour - trading

I have been following the value of silver recently. My question concerns the RSI indicator for silver US$ / OZ (symbol: SILVER) on 9th July, 21:00 - 21:59. The indicator gradient was zero for that hour. The 1-minute chart below, copied from TradingView, shows equal opening and closing values with no variation. Interestingly, silver was 'overbought' (above 70 RSI) after this period.
I'm new to trading but I'd like to know what situation would cause the silver RSI indicator to behave that way.
Thanks.

Q : "... what situationwould cause the silver RSI indicator to behave that way ( zero gradient for one hour ) ?"
Silver-trading (an XAGUSD on a typical FX-Broker-mediated access to market, used as an example) exhibits, during a regular trading week, a one-hour period each day when the market is closed, i.e. not producing any Quotes and not receiving any XTO-orders to be executed AtMarket. The exact time of the market being closed is given by the Terms & Conditions of your market-access mediator (and GMT offsets, Daylight Savings Time shifts and Bank Holidays create more surprises for us if we are unaware or less than pedantic in making the software deploy only with TZ-fully-qualified times, both for the respective Banking Holiday calendars and for the Broker T&C and our trading-platform automation ... at least be warned about these known troubles).
This is the WHY behind the known "gap" in the flow of the TimeSeries data. Given that your data processing assumed absolute time (not Open-Market relative time), your indicator must have produced the RSI values that surprised you (as the "gap" is both principal and long enough to "visually" remind you of this processing artifact).
Knowing this, the rest is easy ...

The relative strength index (RSI) is based on price changes; the original formula can be simplified to:
diff[t] = close[t] - close[t-1]
rsi[t] = RMA(max(diff[t],0),length)/RMA(abs(diff[t]),length)*100
where RMA is Wilder's moving average with period length (the RMA can be described as an exponential moving average whose smoothing coefficient is 1/length).
So the RSI can be described as the ratio between the RMA of positive price changes and the RMA of absolute price changes, multiplied by 100.
In the chart you are showing, during the period where price changes are equal to 0, the values of both the RMA of positive price changes and the RMA of absolute price changes decay exponentially without reaching 0 (this is because the RMA is an IIR filter). Since both decay by the same factor on each bar, their ratio, and therefore the RSI, stays constant, which is exactly the zero gradient you observed.
Finally, when the bullish candle occurs, the RMA of positive price changes and the RMA of absolute price changes are approximately equal to each other (the decayed history contributes almost nothing), thus giving an RSI approximately equal to 100.
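A minimal sketch of this mechanism in Python (the function names and the toy price series are illustrative, not the question's data):

# Wilder's RMA: an EMA with smoothing coefficient 1/length (an IIR filter).
def rma(prev, value, length):
    return prev + (value - prev) / length

def rsi_series(closes, length=14):
    up, dn, out = 0.0, 0.0, []
    for prev, close in zip(closes, closes[1:]):
        diff = close - prev
        up = rma(up, max(diff, 0.0), length)    # RMA of positive changes
        dn = rma(dn, abs(diff), length)         # RMA of absolute changes
        out.append(100.0 * up / dn if dn else 50.0)
    return out

# A few ordinary moves, then 60 flat 1-minute closes (market closed),
# then one bullish candle:
closes = [10.0, 10.2, 10.1, 10.3] + [10.3] * 60 + [10.8]
rsi = rsi_series(closes)
print(rsi[10], rsi[50])   # identical: zero gradient during the flat hour
print(rsi[-1])            # ~99.8: jumps to almost 100 on the bullish candle

During the 60 flat bars both RMAs shrink by (13/14) per bar, so the printed RSI values are identical; the single bullish candle then dominates both decayed averages, pushing the ratio to nearly 1 and the RSI to nearly 100.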

Related

What is the meaning of OneMinuteRate in JMX?

I am trying to calculate the reads/second and writes/second in my Cassandra 2.1 cluster. After searching and reading, I came to know about the JMX bean
org.apache.cassandra.metrics:type=ClientRequest,scope=Write,name=Latency
Here I can see oneMinuteRate. I have started a brand new cluster and started collecting these metrics from 0.
When I wrote my first record, I could see
Count = 1
OneMinuteRate = 0.01599111...
Does it mean that my write/s is 0.0159911?
Or does it mean that, based on 1 minute of data, my write latency is 0.01599, where write latency refers to the response time for writing a record?
Please help me understand the value.
Thanks.
It means that in the last minute your writes were occurring at a rate of 0.01599 writes per second. Think about it this way: the rate of writes in the last 60 seconds would be
WritesInLastMinute ÷ 60
So in your case
1 ÷ 60 = 0.0166
Or, more precisely, 0.01599.
If you observed no further writes after that, the value would descend down to zero over the next minute.
OneMinuteRate, FiveMinuteRate, and FifteenMinuteRate are exponential moving averages, meaning they are not simply readings divided by time; instead, as the name implies, they take an exponentially weighted series of averages, as below:
result(t) = (1 - w) * result(t - 1) + (w) * event_this_period
where w is the weighting factor and t is the ticking time. In other words, they simply take, say, 20% of the new reading and 80% of the old readings; it is the same way UNIX systems measure CPU load.
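A simplified sketch of that update rule in Python (this mirrors the formula above, not Dropwizard's exact initialization; the 5-second tick interval matches the Metrics library default, and the other names are illustrative):

import math

def ewma_rates(events_per_tick, interval_s=5.0, window_s=60.0):
    # Weight chosen so the average "remembers" roughly the last window_s.
    w = 1.0 - math.exp(-interval_s / window_s)
    rate, rates = 0.0, []
    for events in events_per_tick:
        instant = events / interval_s            # events/sec during this tick
        rate = (1.0 - w) * rate + w * instant    # result(t) from the formula
        rates.append(rate)
    return rates

# One write, then silence: the one-minute rate spikes and decays toward 0.
print([round(r, 5) for r in ewma_rates([1] + [0] * 11)])

With these defaults the first printed value is 0.01599, which matches the OneMinuteRate the question observed after a single write; each subsequent idle tick shrinks it by the same factor, which is the descent toward zero described above.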
However, if this applies to the requests a server receives, consider a chart of the rate following a single request to a server, with the measures taken by Dropwizard.
As you can see, a single request draws a curve over time. This is really useful for determining trends, but it is not clear that these rates are great for monitoring live traffic, especially critical traffic.

My EA loses money when I breakeven

I have a very successful EA which is designed to move my stop loss to breakeven when I get 50 pips "in the money". So, pretty basic stuff; however, I still lose a small amount of money on the trades that hit the new breakeven price, the breakeven price of course being equal to the OrderOpenPrice.
Granted, I don't lose as much as I would if the price hit the original S/L, but my net profit on a trading position that hit the breakeven price was NIL. I've made no modifications to the EA code.
I'm thinking my broker may have moved the stopLevel figures so that my breakeven price can no longer reach the OrderOpenPrice, but I can't be sure.
Does anyone have this issue, and how would I go about solving it?
Here is the code. The relevant code starts on line 537:
https://github.com/indomtrading/ea/commit/5de74283f02ebee634952d5d204e21749ea25714
To reach a B/E state, there are two distinct processes to watch:
one is the PriceDOMAIN distance between the XTO OrderOpenPrice() and the "new" value one wishes to set as a "future" XTO OrderStopLoss();
the other is the Broker-side accrued sum of all Commissions + Fees + Swaps.
While
OrderCommission() + OrderSwap() can be inspected explicitly (as they have already accrued and form part of the "just-theoretical" OrderProfit()), any additional costs that your Broker's "Terms & Conditions" associate with an XTO on OrderClose(), or on a materialised { OrderStopLoss() | OrderTakeProfit() }, do not show up until the XTO operation is finished; such costs become visible only after the position has been terminated.
If the EA does not precisely account for both of these principal P/L drivers when evaluating BreakEven,
it may systematically move your money into losses.
Check both of these in the EA's B/E-driving policies against your Broker's "Terms & Conditions" so as to avoid the losses observed so far.
Nota Bene:
while slippage may appear during a live trading session, the nature of slippage mechanics ought to be (outside of major Fundamental events, sure) principally symmetric ... sometimes gaining, sometimes losing a pip or a few. In case your Broker does not exhibit a symmetrical nature, some investigation is in order, but that does not explain a systematically losing EA trade automation.
As discussed, when you move your OrderStopLoss() to OrderOpenPrice(), the position may close with slippage, i.e. at a loss instead of breakeven. To fight that, OrderModify() your OrderStopLoss() to OrderOpenPrice() + 2*Point; if there is a small slippage you will end up with a tick of gain, or zero.
You need to factor in Swap and Commission.
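A hedged sketch of the arithmetic, in Python for illustration (the inputs stand in for the MQL4 calls OrderOpenPrice(), OrderLots(), OrderCommission() and OrderSwap(); the point size, point value and buffer are assumptions, not broker data):

# True break-even stop for a LONG position once accrued costs are included.
def breakeven_stop(open_price, lots, commission, swap,
                   point=0.0001,                  # assumed point size
                   value_per_point_per_lot=10.0,  # assumed USD per point/lot
                   buffer_points=2):              # cushion, per the answer above
    costs = -(commission + swap)                  # both usually arrive negative
    points_to_cover = costs / (value_per_point_per_lot * lots)
    return open_price + (points_to_cover + buffer_points) * point

# A 1-lot long opened at 1.1000, with $7 commission and $1 swap accrued:
print(breakeven_stop(1.1000, 1.0, commission=-7.0, swap=-1.0))
# -> 1.10028: a stop at the bare OrderOpenPrice() would still net about -$8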

How to read an actual price of another currency pair, not available directly in a [ Strategy Tester ] - for a Multi-Currency Strategy?

On the Internet, across many boards, one can read that it is impossible to use the MarketInfo() function in the Strategy Tester. It is a limitation of the platform.
I haven't found any workaround for this on the web. However, since necessity is the mother of invention, and my need was to make USDJPY market decisions with an EA that depends on the state of the EURUSD market, I've found a workaround (which is good enough for me). I use iMA() with a period of one and M1 resolution.
iMA( "EURUSD", PERIOD_M1, 1, 0, MODE_SMA, PRICE_MEDIAN, i )
The question is: since MetaTrader is able to calculate a Moving Average for another currency pair (which is surely based on the actual price of that pair!), Q1: why can't one access the current value directly?
And a follow-up question, Q2: Is there any other (more accurate) workaround for this limitation?
REASON: The reason lies in the "Tick". Because a "tick" on one currency pair happens independently of a "tick" on another, it is not possible to accurately determine the price of another pair based on the current "tick" of the pair you are on. The iMA is calculated using the OHLC of the M1 candle, not the actual "Tick" (which is not the same as "Tick" data).
Rephrased: let's say we are on USDJPY, and a "tick" happens at 12:00:00.210 (midnight, at the 210th millisecond). When that "tick" happens, the start() event gets triggered. In that function we look up the Bid of EURUSD. However, there is no EURUSD "Tick" at that exact time (USDJPY and EURUSD do not "tick" at the same moments), thus it is not possible to determine the exact price of EURUSD at that point in time.
There is no workaround, because it is impossible to determine the price at the "Tick" level: MQL4's datetime type is an integer accurate only to the second, and the History Center > Export data are OHLC only.
Your iMA() is as good as it gets.
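To illustrate the tick asynchrony, here is a small Python sketch (timestamps and prices are invented): at a USDJPY tick time, the best one can do is take the most recent earlier EURUSD quote, which is essentially what the candle-based iMA() workaround delivers at M1 granularity.

import bisect

# Two symbols tick at independent times; find the latest EURUSD quote
# at (or before) the moment a USDJPY tick fires start().
eurusd_ticks = [(1.000, 1.0850), (1.420, 1.0851), (2.975, 1.0849)]  # (sec, bid)
usdjpy_tick_time = 2.210          # EURUSD produced no tick at this instant

times = [t for t, _ in eurusd_ticks]
i = bisect.bisect_right(times, usdjpy_tick_time) - 1
print(eurusd_ticks[i])            # -> (1.42, 1.0851): latest known, not exact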
Q1: was well served by #JosephLee; plus there is one more option (ref. below).
Q2: deserves a word:
Yes, there is a workaround.
While MetaTrader Terminal 4 has many weaknesses, not worth one's time to dwell on here, there are also nice things one can do with it.
Some five years ago, there was a project need to integrate distributed processing for MT4, so as to circumvent its given weaknesses.
That happened. This way you can benefit from a distributed-processing framework and can have, down to a few nanoseconds (latency-wise), exact prices of all instruments at will (based on remote QUOTE-stream processing), independently of your localhost MT4 graph's _Symbol.
Do not hesitate to ask more & welcome to the Worlds of MQL4.
New-MQL4.56789 has an iClose() functionality for multi-currency
Returns Close price value for the bar of specified symbol with timeframe and shift.
double iClose( string symbol,    // symbol
               int    timeframe, // timeframe
               int    shift      // shift
             );
Parameters
symbol [in] Symbol name. NULL means the current symbol.
timeframe [in] Timeframe. It can be any of the ENUM_TIMEFRAMES enumeration values. 0 means the current chart timeframe.
shift [in] Index of the value taken from the indicator buffer (shift relative to the current bar the given amount of periods ago).
Returned value
Close price value for the bar of the specified symbol with timeframe and shift. If the local history is empty (not loaded), the function returns 0. To check errors, one has to call the GetLastError() function.
Be cautious in its use in the Strategy Tester, with due care taken over error handling for the cases where the history data is not in the local database and remedy-handling procedures for remote retrieval from the server become necessary.
Print( "A first date in the history for the EURUSD on the [MT4SERVER] = ",
(datetime) SeriesInfoInteger( "EURUSD", 0, SERIES_SERVER_FIRSTDATE )
);
Similarly, one needs to provide some measures for the cases where the above-indicated ERR_HISTORY_WILL_UPDATED appears while the remote server is not online, where the market is closed (ERR_MARKET_CLOSED), where the requested date lies before the SERIES_SERVER_FIRSTDATE, etc.
As a corner case, there is always the possibility to create a special setup with step-wise updating of both the local CCY_PAIR and a REMOTE_CCY_PAIR, totally independent of the Broker-side equipment status.
All these are important aspects of this new MQL4 feature.

What should I do to maintain the performance of a mobile app that uses a database?

I'm building an app that uses a database.
I have a words table, and every time the user types something, the app records it and updates the word in the database.
The frequency field is auto-incremented each time the user enters a matching word.
But the trouble is that users keep typing day after day, and I am afraid the search performance will degrade over time, and also that the Int field will someday reach its limit (the max Int value).
So I limit the database to fewer than about 50,000 records.
I delete less-used records after a certain time.
But I don't know how to deal with the frequency Int field of each word.
How can I track exactly how frequently each word is used without increasing the field forever?
I recommend that you use a logarithmic scale for the frequency values. That's what is often done in situations like this. See Wikipedia to learn about logarithmic scales.
For example, if you have a word MAN that has a frequency of 15, the value you store in the database would be log(15) ~= 1.17609125906.
If you then find 4 new occurrences of MAN, you want the stored value to reflect 4 more occurrences. You cannot add to the log values directly, because log(x)+log(y) = log(x*y), not log(x+y). (See the Logarithm Rules section of this article for more information on log rules.)
Instead, assuming you use a base-10 logarithm, you would use this formula:
SET frequency = log(10^frequency+4)
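A quick sketch of that update in Python (the word and counts come from the example above; the function name is illustrative):

import math

# The column stores log10(count); adding k occurrences means returning
# to the linear scale, adding, and taking the log again.
def bump(stored_log10, new_occurrences):
    return math.log10(10 ** stored_log10 + new_occurrences)

freq = math.log10(15)       # MAN seen 15 times -> ~1.17609
freq = bump(freq, 4)        # 4 more sightings
print(round(10 ** freq))    # -> 19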
Depending on the length of your words, the few bytes for the frequency don't matter. With an unsigned four-byte integer, you can count up to more than four billion, which is way above the number of words the user can type in their whole lifespan.
So you may want to go for two or three bytes, but the savings may be negligible.
Anyway, there are the following approaches for preventing overflow:
You can detect it, and then undo the operations, scale everything down by some factor of two, and then redo.
You can periodically check all your numbers and do the scaling when approaching the limit.
You can do a probabilistic update like below.
Probabilistic update
Instead of simply incrementing the frequency by one every time, you increment it only with a probability that gets lower and lower as the counter grows. For example, you can do the increment with a probability of 1.0 / (oldValue + 1) or 2 ** -oldValue. The latter leads to logarithmic growth, but, unlike the idea in the other answer, it works.
There are obviously some disadvantages due to the randomness and precision loss, but when all you care about is the relative frequency, it should be good enough.
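A hedged sketch of the 2 ** -oldValue variant in Python (this is a Morris-style approximate counter; the names are illustrative):

import random

def probabilistic_increment(counter):
    # Increment with probability 2**-counter, so the stored value
    # grows roughly logarithmically in the number of real events.
    if random.random() < 2.0 ** -counter:
        counter += 1
    return counter

def estimate(counter):
    return 2 ** counter - 1      # unbiased estimate of the true count

c = 0
for _ in range(100_000):         # 100,000 real events
    c = probabilistic_increment(c)
print(c, estimate(c))            # e.g. 17 131071: tiny field, usable estimate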

Mahout Recommender: What relative preference values are suitable for a GenericUserBasedRecommender?

In Mahout, I'm setting up a GenericUserBasedRecommender, pretty straightforward for now, typical settings.
In generating a "preference" value for an item, we have the following 5 data points:
Positive interest
User converted on item (highest possible sign of interest)
Normal like (user expressed interest, e.g. like buttons)
Indirect expression of interest (clicks, cursor movements, measuring "eyeballs")
Negative interest
Indifference (items the user ignored when active on other items, a vague expression of disinterest)
Active dislike (thumbs down, remove item from my view, etc)
Over what range should I express these different attributes? Let's use a 1-100 scale for discussion.
Should I be keeping 'Active dislike' and 'Indifference' clustered close together, for example at 1 and 5 respectively, with all the likes clustered in the 90-100 range?
Should 'Indifference' and 'Indirect expressions of interest' be closer to the center? As in 'Indifference' in the 20-35 range and 'Indirect like' in the 60-70 range?
Should 'User conversion' blow the scale away and be head and shoulders above the others? As in: 'User Conversion' at 100, 'Lesser likes' at ~65, 'Dislikes' clustered in the 1-10 range?
On a scale of 1-100, is 50 effectively "null", or equivalent to no data point at all?
I know the final answer lies in trial and error and in the meaning of our data, but as far as the algorithm goes, I'm trying to understand at what point I need to tip the scales between interest and disinterest for the algorithm to function properly.
The actual range does not matter, not for this implementation. 1-100 is OK, 0-1 is OK, etc. The relative values are all that really matters here.
These values are estimated by a simple (linearly) weighted average, therefore the response ought to be "linear": it ought to match the intuition that if action X gets a score twice as high as action Y, then X should be an indicator of twice as much interest in real life.
A decent place to start is to simply size them relative to their frequency. If click-to-conversion rate is 2%, you might make a click worth 2% of a conversion.
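As a toy illustration in Python (the funnel rates here are invented, not data from the question):

# Derive relative preference weights from observed per-conversion rates,
# normalized so that a conversion scores 100.
signal_vs_conversion = {
    "conversion": 1.00,   # the reference signal
    "like": 0.10,         # assume a like signals ~10% of a conversion
    "click": 0.02,        # 2% click-to-conversion rate, as above
}
weights = {s: 100.0 * r for s, r in signal_vs_conversion.items()}
print(weights)            # {'conversion': 100.0, 'like': 10.0, 'click': 2.0}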
I would ignore the "Indifference" signal you propose. It is likely going to be too noisy to be of use.
