How can I get local minimum points?

I'm trying to find the local minimum points of the graph below.
I can get candidate local minima at 28, 63, 97, 130, and so on, but I don't know how to filter the real local minima out of these candidates. Is there an algorithm for finding local minima?
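One common filter is to require each candidate to have a minimum prominence, i.e. a minimum depth relative to its surroundings, so that shallow dips caused by noise are discarded. A minimal sketch using SciPy's find_peaks, with placeholder data standing in for the graph and an arbitrary prominence threshold of 0.5 that you would tune for your data:
import numpy as np
from scipy.signal import find_peaks

# Placeholder signal standing in for the graph in the question.
x = np.linspace(0, 10, 500)
y = np.sin(2 * x) + 0.1 * np.random.randn(500)

# Local minima of y are peaks of -y. The prominence threshold filters
# out shallow candidates that are likely just noise.
minima, properties = find_peaks(-y, prominence=0.5)

print("indices of local minima:", minima)
print("values at those minima:", y[minima])
Candidates that survive the prominence filter are "real" minima in the sense that they are sufficiently deep relative to their neighbourhood; everything below the threshold is dropped.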

Related

From an input vector of parabolic-shaped values, predict a scalar value using machine learning

I was wondering whether you could train a neural network model that predicts a scalar value from a vector of parabolic-shaped values.
For example: let's say the input vector is [5, 10, 15, 20, 22, 25, 22, 15, 10, 5]; then the output should be 23.
To train it, I would just give the model lots of input vectors like the one in the example, along with the value that should be returned for each of these vectors.
I looked it up on the internet but didn't find anything matching my case; I'm a newbie at this, though, so maybe I just don't understand certain algorithms.
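This is a standard regression setup: a small feed-forward network with one linear output unit, trained on (vector, scalar) pairs with a mean-squared-error loss. A minimal sketch in Keras, assuming fixed-length input vectors of 10 values; the one-row training set here is just the example from the question, a placeholder for a real dataset:
import numpy as np
from tensorflow import keras

# Placeholder training data: each row is a parabolic-shaped vector of
# 10 values, and y_train holds the scalar target for each row.
X_train = np.array([[5, 10, 15, 20, 22, 25, 22, 15, 10, 5]], dtype=float)
y_train = np.array([23.0])

model = keras.Sequential([
    keras.Input(shape=(10,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1),  # single linear unit: the predicted scalar
])
model.compile(optimizer="adam", loss="mse")

# With a real dataset you would pass many (vector, scalar) pairs here.
model.fit(X_train, y_train, epochs=100, verbose=0)
print(model.predict(X_train, verbose=0))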

Time series forecasting (DeepAR): Prediction results seem to have basic flaw

I'm using the DeepAR algorithm to forecast survey response progress over time. I want the model to predict the next 20 data points of the survey progress. Each survey is a time series in my training data, and the length of each time series is the number of days for which the survey ran. For example, the series below indicates that the survey started on 29-June-2011 and the last response was received on 24-Jul-2011 (a length of 25 days).
{"start":"2011-06-29 00:00:00", "target": [37, 41.2, 47.3, 56.4, 60.6, 60.6,
61.8, 63, 63, 63, 63.6, 63.6, 64.2, 65.5, 66.1, 66.1, 66.1, 66.1, 66.1, 66.1,
66.1, 66.1, 66.1, 66.1, 66.7], "cat": 3}
As you can see, the values in the time series can stay the same or increase; the training data never shows a downward trend. Surprisingly, when I generated predictions, I noticed that they had a downward trend. When there is no trace of a downward trend in the training data, I wonder how the model could possibly have learned one. To me, this seems to be a basic flaw in the predictions. Can someone please shed some light on why the model might behave this way? I built the DeepAR model with the hyperparameters below. The model was tested and the RMSE is about 9. Would it help if I changed any of the hyperparameters? Any recommendations?
time_freq= 'D',
context_length= 30,
prediction_length= 20,
cardinality= 8,
embedding_dimension= 30,
num_cells= 40,
num_layers= 2,
likelihood= 'student-T',
epochs= 20,
mini_batch_size= 32,
learning_rate= 0.001,
dropout_rate= 0.05,
early_stopping_patience= 10
If there is an up-trend in all time series, there should not be a problem learning this. If your time series usually have a rising and then falling period, then the algorithm may learn this and thus generate a similar pattern, even though the example you forecast only had an up-trend so far.
How many time series do you have and how long are they on average?
All your hyperparameters look reasonable, and it is a bit hard to tell what to improve without knowing more about the data. If you don't have that many time series, you can try increasing the number of epochs (perhaps to a few hundred) and increasing the early stopping patience to 20-30.
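As a sketch of how those two suggestions might look with the SageMaker Python SDK (assuming the model was built via SageMaker's built-in DeepAR image; the role ARN and instance type below are placeholders):
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerRole"  # placeholder role ARN
image_uri = sagemaker.image_uris.retrieve("forecasting-deepar", session.boto_region_name)

estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=1,
    instance_type="ml.c5.2xlarge",  # placeholder instance type
    sagemaker_session=session,
)

# The question's settings, with the two suggested changes applied.
estimator.set_hyperparameters(
    time_freq="D",
    context_length=30,
    prediction_length=20,
    cardinality=8,
    embedding_dimension=30,
    num_cells=40,
    num_layers=2,
    likelihood="student-T",
    epochs=300,                  # was 20; try a few hundred
    mini_batch_size=32,
    learning_rate=0.001,
    dropout_rate=0.05,
    early_stopping_patience=25,  # was 10; try 20-30
)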

TensorFlow Concat

I am trying to rebuild the 3D U-Net in this paper:
https://arxiv.org/pdf/1606.06650.pdf
And unfortunately, when I get to the first merge, I get the following error from Keras:
ValueError: "concat" mode can only merge layers with matching output shapes except for the concat axis. Layer shapes: [(None, 512, 14, 8, 10), (None, 256, 15, 8, 10)]
Based on this thread:
https://github.com/fchollet/keras/issues/633
I understand the following to be true:
the concat axis is the axis along which to concatenate the two tensors.
Let's say you have two three-dimensional tensors of shape (2,3,5) and (2,3,7). Then you can only concatenate them along the third (zero-based index: 2) axis, because then – figuratively – the two "faces" of the "cuboid" that you "glue together" are each 2-by-3, and only those fit. So you need to set concat_axis = 2 (or -1, since it is the last one), resulting in a new tensor (2,3,12).
Typically in a NN you would merge along the axis of the features, which depends on the type of layers you use and the implementation in Keras. If you are not sure, you can try out a few; most likely only one will work, for the reason given above. If the figurative "faces don't fit" you will get an error message like the one in my opening post.
Does that mean I should be merging on the 14 and 15, which are axis = 0?
Can someone help explain what I am missing in this setup?
Thanks!
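To make the quoted rule concrete, here is a small NumPy sketch using the (2,3,5) and (2,3,7) shapes from the explanation:
import numpy as np

# Two tensors whose 2-by-3 "faces" match, so they can only be glued
# together along the last axis.
a = np.zeros((2, 3, 5))
b = np.zeros((2, 3, 7))

c = np.concatenate([a, b], axis=2)  # axis=-1 works as well
print(c.shape)                      # (2, 3, 12)

# Any other axis fails, analogous to the Keras error above:
try:
    np.concatenate([a, b], axis=0)
except ValueError as err:
    print(err)
By the same rule, for the shapes in the error message the feature axis would be axis 1 (512 vs. 256), and a merge along it can only succeed once the remaining dimensions agree; here one tensor has 14 where the other has 15, which is exactly what the error is complaining about.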

How to work out z-score normalisation?

I am confused about how to do z-score normalisation. I have found the equation for it, which requires the mean and standard deviation, but I'm not sure how to work these out in my situation.
I have 2 classifiers in my system. To use their scores together, I know that I need to normalise them because they will differ in scale, etc. I wish to use z-score normalisation for this. My question is: given the 2 scores from the two classifiers, what do I need to do with the scores to z-score normalise them? I want to be able to combine/compare them.
My (probably flawed!) understanding is that for a classifier score set we use the mean and the standard deviation. But we can't always assume we will already have a score set to get the mean and standard deviation from, can we?
To compute the z-scores of a given set of numbers, you need the sample mean and the sample standard deviation. From each score, subtract the mean and divide by the standard deviation; that is, z = (x - mean) / sd.
Consider the set of numbers below, where each observation is a test score ranging from 0 to 100.
{40, 50, 60, 55, 70, 80, 90}
If you wanted to compare them with another set of test scores, where the scores ranged from 0 to 250, such as:
{100, 115, 214, 50, 200, 80, 90}
You couldn't directly compare them. For example, a score of 80 in the second set is clearly worse than an 80 in the first set (80/250 vs. 80/100). One way to handle this is with z-scores, which are computed as follows:
Find the mean
mean of the first set is: 63.57143
mean of the second set is: 121.2857
Subtract the sample mean from each score. This will give you a set of numbers that are centered on zero.
{-23.571429, -13.571429, -3.571429, -8.571429, 6.428571, 16.428571, 26.428571}
{-21.285714, -6.285714, 92.714286, -71.285714, 78.714286, -41.285714, -31.285714}
Compute the standard deviation from the original set and divide the "centered" scores by that number:
Set 1 sigma = 17.49149
Set 2 sigma = 61.98041
The resulting z-scores are:
{-1.3475937, -0.7758873, -0.2041809, -0.4900341, 0.3675256, 0.9392320, 1.5109384}
{-0.3434265, -0.1014145, 1.4958643, -1.1501330, 1.2699865, -0.6661091, -0.5047678}
Now you have two sets of numbers that are directly comparable. A value of zero means the score is at the set's average. A value of 1 means it is one standard deviation above the set's average, a value of -1 means it is one standard deviation below the average, and so on.
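The whole procedure is a couple of lines with NumPy (ddof=1 gives the sample standard deviation used in the worked example above):
import numpy as np

set1 = np.array([40, 50, 60, 55, 70, 80, 90])
set2 = np.array([100, 115, 214, 50, 200, 80, 90])

def z_scores(x):
    # Subtract the sample mean, then divide by the sample standard deviation.
    return (x - x.mean()) / x.std(ddof=1)

print(z_scores(set1))  # matches the first set of z-scores above
print(z_scores(set2))  # matches the second set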

Guessing next K values of a sequence

Say we have sampled a function at a constant rate and received x1, ..., xn, and we are then asked to guess the next k values xn+1, ..., xn+k. Is this a known problem? Are there known algorithms or approaches for dealing with this kind of problem?
This problem is not well specified.
Who says the next elements are not 42, 42, 42, 42, 42, 42, 42, pi, ...?
Any continuation of the sequence is mathematically equally likely unless you specify your problem more precisely.
(Also, "data mining" is probably the wrong terminology. This is a TV "intelligence test" puzzle problem, not so much a real data problem.)
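That said, once you state an assumption about the underlying function (e.g. that it is smooth, or follows a trend plus noise), this becomes ordinary time-series forecasting. A minimal sketch fitting an autoregressive model with statsmodels to placeholder data; the lag order of 3 is an arbitrary value to tune:
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

# Placeholder samples x1, ..., xn from an assumed smooth function.
n, k = 100, 10
x = np.sin(np.linspace(0, 8, n)) + 0.05 * np.random.randn(n)

# Fit an AR model; the lag order is a placeholder to tune.
model = AutoReg(x, lags=3).fit()

# Forecast the next k values xn+1, ..., xn+k.
forecast = model.predict(start=n, end=n + k - 1)
print(forecast)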
