I'm trying to add an if condition to my EA for MT4, like this: if the loss amount is higher than the profit amount, then keep applying the martingale until the losses are recovered. Can anyone give me an idea for the script?
So if the loss amount is higher than the profit, set recovering to true and multiply the trade lot size by 3.
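One possible way to sketch that condition, in Python-style pseudocode rather than MQL4 (the names `closed_profit`, `closed_loss` and `base_lot` are hypothetical placeholders, not MT4 functions):

```python
def next_lot(closed_profit, closed_loss, base_lot, multiplier=3.0):
    """Return the lot size for the next trade under the described recovery rule.

    If accumulated losses exceed accumulated profits, switch to 'recovering'
    mode and multiply the lot size; otherwise trade the base lot.
    """
    recovering = closed_loss > closed_profit
    return base_lot * multiplier if recovering else base_lot

# Example: losses (300) exceed profits (120), so the next lot is tripled.
print(next_lot(closed_profit=120.0, closed_loss=300.0, base_lot=0.1))  # 0.3
```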
I'm trying to build a classifier to predict stock prices. I generated extra features using some well-known technical indicators and fed these values, along with values at past points, to the machine learning algorithm. I have about 45k samples, each representing an hour of OHLCV data.
The problem is actually a 3-class classification problem, with buy, sell and hold signals. I built these three classes as my targets based on the (%) change at each time point: I classified only the largest positive (%) changes as buy signals, the largest negative ones as sell signals, and the rest as hold signals.
However, presenting this 3-class target to the algorithm resulted in poor accuracy on the buy and sell classes. To improve this, I chose to assign classes manually based on the probabilities of each sample. That is, I set the targets to 1 or 0 based on whether there was a price increase or decrease.
The algorithm then returns a probability between 0 and 1 (usually between 0.45 and 0.55) for its confidence about which class each sample belongs to. I then select a probability bound for each class within that range. For example, I classify p > 0.53 as a buy signal, p < 0.48 as a sell signal, and anything in between as a hold signal.
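A minimal sketch of that thresholding step, assuming the model's predicted "price up" probabilities are already in a NumPy array (the 0.53/0.48 bounds are just the example values above):

```python
import numpy as np

def probs_to_signals(probs, buy_bound=0.53, sell_bound=0.48):
    """Map predicted 'price-up' probabilities to buy/sell/hold signals."""
    signals = np.full(len(probs), "hold", dtype=object)
    signals[probs > buy_bound] = "buy"
    signals[probs < sell_bound] = "sell"
    return signals

print(probs_to_signals(np.array([0.55, 0.50, 0.47])))  # ['buy' 'hold' 'sell']
```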
This method has drastically improved the classification accuracy, at some points to above 65%. However, I'm failing to come up with a method to select these probability bounds without a large validation set. I've tried finding the best probability values within a validation set of 3,000 samples, and this improved the classification accuracy, yet it is clear that as the validation set grows, the prediction accuracy on the test set decreases.
So, what I'm looking for is any method by which I could determine what the specific decision probabilities for each training set should be, without large validation sets. I would also welcome any other ideas on how to improve this process. Thanks for the help!
What you are experiencing is called a non-stationary process: the market's behavior depends on when the event occurs.
One way I have dealt with it is to build the model on data from different time chunks.
For example, use data from day 1 to day 10 for training and day 11 for testing/validation, then move up one day: day 2 to day 11 for training and day 12 for testing/validation.
You can pool all your test results to compute an overall score for your model. This way you have lots of test data and a model that adapts over time.
You also get three more parameters to tune: (1) how much data to use for training, (2) how much data to use for testing, and (3) how often (every how many days/hours/data points) you retrain your model. A minimal sketch of this rolling scheme is shown below.
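A minimal sketch of this rolling (walk-forward) scheme, assuming the samples are ordered in time and using days as the chunk unit (the 10-day/1-day window sizes are just the example values above):

```python
import numpy as np

def walk_forward_splits(n_days, train_days=10, test_days=1, step=1):
    """Yield (train_day_indices, test_day_indices) for a rolling walk-forward scheme."""
    start = 0
    while start + train_days + test_days <= n_days:
        train = np.arange(start, start + train_days)
        test = np.arange(start + train_days, start + train_days + test_days)
        yield train, test
        start += step

# Days 1-10 train / day 11 test, then days 2-11 train / day 12 test, etc.
for train, test in walk_forward_splits(n_days=13):
    print(train + 1, test + 1)
```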
What is the exact definition of error rate in classification? Why do some researchers report their results with error rate instead of accuracy?
I'm trying to compare my results for text classification with other methods in the literature, but they used error rate instead of accuracy, and I can't find the exact definition/equation to compute the error rate of my method.
For classification, your output is discrete (as if you were putting items into buckets) so accuracy has a really straightforward definition:
accuracy = (# classified correct) / (# classified total)
Error rate is equally simple:
error rate = 1 - accuracy = 1 - (# classified correct) / (# classified total)
= (# classified incorrect) / (# classified total)
Note that things are much more complicated for tasks with continuous output. If instead of placing items into buckets, I'm asking a model to place items on a number line, accuracy is no longer a matter of "right" and "wrong" but rather how close my model is to right. This could be the average closeness, median closeness, etc. There are more complex measures which differ mainly in how heavily they weigh distance as it increases. Perhaps being off by a little bit is much much less bad than being off by a lot, so a Root Mean Square error measure is appropriate. On the other hand, it may be that being off by more than a small amount is awful whether it's off by a little or off by a lot, so a logarithmic error measure would be better.
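To make the definitions concrete, here is a small sketch of these measures in plain NumPy; mean squared log error is used as one common logarithmic variant and assumes non-negative values:

```python
import numpy as np

def accuracy(y_true, y_pred):
    return np.mean(np.asarray(y_true) == np.asarray(y_pred))

def error_rate(y_true, y_pred):
    return 1.0 - accuracy(y_true, y_pred)

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

def msle(y_true, y_pred):
    # one common "logarithmic" error: mean squared log error (non-negative values only)
    return np.mean((np.log1p(y_true) - np.log1p(y_pred)) ** 2)

print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]))    # 0.75
print(error_rate([1, 0, 1, 1], [1, 0, 0, 1]))  # 0.25
print(rmse([1.0, 2.0], [1.5, 2.5]))            # 0.5
```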
To answer the last part of your question: in the discrete case, why would one choose accuracy vs. error? Optics is one thing: "99% accurate" sends a different psychological message than "has an error rate of 1%". Furthermore, an increase in accuracy from 99% to 99.9% is a relative improvement of less than 1%, but a decrease in error from 1% to 0.1% is a 90% relative reduction, even though the two express the same real-world change.
Otherwise, it may be personal preference or writing style.
EDIT: you may also be interested in this post on the Statistics Stack Exchange
When training a net, does it matter if the number of samples in the epoch is not an exact multiple of the batch size? My training code doesn't seem to mind if this is the case, though my loss curve is pretty noisy at the moment (in case that is a related issue).
This would be useful to know: if it is not an issue, it saves messing around with the dataset to make its size a multiple of the batch size, and it may also be less wasteful of captured data.
does it matter if the number of samples in the epoch is not an exact multiple of the batch size
No, it does not. Your number of samples can be, say, 1000, and your batch size can be 400.
You can decide the total number of iterations (where each iteration = sampling a batch and doing gradient descent) based on the overall number of epochs you want to cover. Say you want roughly 5 epochs; then your number of iterations ≈ 5 * 1000 / 400 = 12.5, so round up to 13. You will sample a random batch 13 times to get roughly 5 epochs.
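A small sketch of that sampling scheme, assuming the data fits in a NumPy array (the 1000/400/5 numbers are the ones from above; the gradient step itself is omitted):

```python
import math
import numpy as np

n_samples, batch_size, target_epochs = 1000, 400, 5
n_iterations = math.ceil(target_epochs * n_samples / batch_size)  # 13

rng = np.random.default_rng(0)
X = rng.normal(size=(n_samples, 8))  # toy data with 8 features

for it in range(n_iterations):
    idx = rng.choice(n_samples, size=batch_size, replace=False)  # random batch
    batch = X[idx]
    # ... compute gradients on `batch` and update the weights here ...

print(n_iterations)  # 13 iterations ≈ 5 epochs' worth of samples
```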
In the context of Convolutional Neural Networks (CNNs), the batch size is the number of examples that are fed to the algorithm at a time. This is normally some small power of 2, like 32, 64, 128, etc. During training, an optimization algorithm computes the average cost over a batch and then runs backpropagation to update the weights. In a single epoch the algorithm is run $n_{batches} = \frac{n_{examples}}{\text{batch size}}$ times. Generally the algorithm needs to train for several epochs to achieve convergence of the weight values. Every batch is normally sampled randomly from the whole example set.
The idea is this: a mini-batch optimization step wrt (x1, ..., xn) is equivalent to consecutive optimization steps wrt the individual inputs x1, ..., xn, because the gradient is a linear operator. This means that the mini-batch update equals the sum of its individual updates. An important note here: I assume that the NN doesn't apply batch norm or any other layer that adds an explicit variation to the inference model (in that case the math is a bit more hairy).
So the batch size can be seen as a purely computational idea that speeds up the optimization through vectorization and parallel computing. Assuming that one can afford arbitrarily long training and the data are properly shuffled, the batch size can be set to any value. But the same isn't automatically true for all hyperparameters; for example, a very high learning rate can easily force the optimization to diverge, so don't make the mistake of thinking hyperparameter tuning isn't important in general.
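A tiny NumPy sketch of that linearity argument on a toy least-squares model (all names here are hypothetical, just for illustration): the gradient of the summed batch loss equals the sum of the per-sample gradients.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)          # weights of a toy linear model
X = rng.normal(size=(4, 3))     # a mini-batch of 4 samples
y = rng.normal(size=4)

def grad_single(w, x, t):
    # gradient of (w.x - t)^2 with respect to w
    return 2.0 * (w @ x - t) * x

# per-sample gradients summed up ...
summed = sum(grad_single(w, X[i], y[i]) for i in range(len(y)))
# ... equal the gradient of the batch loss sum((Xw - y)^2)
batch_grad = 2.0 * X.T @ (X @ w - y)

print(np.allclose(summed, batch_grad))  # True
```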
I implemented a ResNet for CIFAR-10 following this paper: https://arxiv.org/pdf/1512.03385.pdf
But my accuracy is significantly different from the accuracy reported in the paper:
Mine - 86%
The paper's - 94%
What's my mistake?
https://github.com/slavaglaps/ResNet_cifar10
Your question is a little too generic. My opinion is that the network is overfitting to the training data set: as you can see, the training loss is quite low, but after epoch 50 the validation loss is not improving anymore.
I didn't read the paper in depth, so I don't know how they solved the problem, but increasing regularization might help (see the sketch after the summary below). The following link will point you in the right direction: http://cs231n.github.io/neural-networks-3/
Below I copied the summary of the text:
Summary
To train a Neural Network:
Gradient check your implementation with a small batch of data and be aware of the pitfalls.
As a sanity check, make sure your initial loss is reasonable, and that you can achieve 100% training accuracy on a very small portion of the data.
During training, monitor the loss, the training/validation accuracy, and if you’re feeling fancier, the magnitude of updates in relation to parameter values (it should be ~1e-3), and when dealing with ConvNets, the first-layer weights.
The two recommended updates to use are either SGD+Nesterov Momentum or Adam.
Decay your learning rate over the period of the training. For example, halve the learning rate after a fixed number of epochs, or whenever the validation accuracy tops off.
Search for good hyperparameters with random search (not grid search). Stage your search from coarse (wide hyperparameter ranges, training only for 1-5 epochs) to fine (narrower ranges, training for many more epochs).
Form model ensembles for extra performance.
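As a hedged illustration of two of these points (weight decay as regularization, plus step learning-rate decay with SGD+Nesterov), here is a minimal PyTorch sketch; the placeholder model and the specific values (0.1, 5e-4, 30 epochs) are just common choices, not the settings from the paper:

```python
import torch
import torch.nn as nn

# `model` stands in for whatever network you are training (tiny placeholder here).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
criterion = nn.CrossEntropyLoss()

# SGD with Nesterov momentum plus weight decay (L2 regularization).
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9,
                            nesterov=True, weight_decay=5e-4)
# Cut the learning rate by 10x every 30 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

x = torch.randn(8, 3, 32, 32)      # dummy CIFAR-sized batch
y = torch.randint(0, 10, (8,))     # dummy labels

for epoch in range(90):
    optimizer.zero_grad()
    loss = criterion(model(x), y)  # replace with your real data loader
    loss.backward()
    optimizer.step()
    scheduler.step()               # decay the learning rate once per epoch
```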
I would argue that the difference in data preprocessing makes the difference in performance. The paper uses padding and random crops, which in essence increases the number of training samples and decreases the generalization error. Also, as the previous poster said, you are missing regularization features such as weight decay.
You should take another look at the paper and make sure you implement everything like they did.
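For reference, a sketch of the kind of CIFAR-10 augmentation being referred to (4-pixel padding, random 32x32 crops, horizontal flips), using torchvision; the normalization statistics are commonly used CIFAR-10 values, not taken from the paper:

```python
import torchvision
import torchvision.transforms as transforms

# 4-pixel padding + random 32x32 crop + random horizontal flip, as commonly
# used for CIFAR-10; the Normalize stats are the usual CIFAR-10 means/stds.
train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)),
])

train_set = torchvision.datasets.CIFAR10(root="./data", train=True,
                                         download=True, transform=train_transform)
```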
I am designing a new network architecture for semantic segmentation. The training loss decreases as the number of training iterations increases. However, when I measure the test accuracy, I get the figure below.
From 0 to 20,000 iterations, the accuracy increases. However, after 20,000 iterations, the test accuracy decreases. I guess it is an overfitting issue.
I tried adding dropout to the network, but the trend of the graph is similar. Could you suggest the reason and how I can solve it? I think early stopping is not a good solution. Thanks.
Be sure to randomize (shuffle) your training data. You can also start with a higher learning rate (say 0.1) to get out of local minima, then decrease it to a very small value to let things settle down. To do this, set the step size to, say, 1000 iterations, so that the learning rate is reduced every 1000 iterations; a sketch of such a step schedule is below.
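A minimal sketch of that kind of step schedule (the 0.1 base rate, the halving factor and the 1000-iteration step are just the example values from above; adapt them to your framework's solver settings):

```python
def step_learning_rate(iteration, base_lr=0.1, gamma=0.5, step_size=1000):
    """Reduce the learning rate by `gamma` every `step_size` iterations."""
    return base_lr * (gamma ** (iteration // step_size))

# 0.1 for iterations 0-999, 0.05 for 1000-1999, 0.025 for 2000-2999, ...
for it in (0, 1000, 2000, 5000):
    print(it, step_learning_rate(it))
```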