HMM: learning states in a signal (using the hmmlearn Python library) - time-series

I have time series data for which I am trying to learn 3 states with my HMM model. The output I am getting is this. I want three separate states for the 3 distinct clusters of data points, but the HMM assigns a mix of states to the first cluster instead of a single state.
Here is my code for the same.
from hmmlearn import hmm
import matplotlib.pyplot as plt

model = hmm.GaussianHMM(n_components=3, covariance_type="full", verbose=True, n_iter=1000)
model.fit(train_array1.reshape(-1, 1))

Z1 = model.predict(test_array1_s.reshape(-1, 1))
p = model.predict_proba(test_array1_s.reshape(-1, 1))
plt.plot(p)
The plot of the probabilities of each state for the test data is this.
How can I get three distinct states for the data?
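One way to check what the model actually learned is to inspect the fitted state parameters (a quick sanity check, assuming the model above has already been fit):

# Overlapping means or very wide covariances suggest the states do not
# separate the three clusters cleanly.
print("state means:\n", model.means_)
print("state covariances:\n", model.covars_)
print("transition matrix:\n", model.transmat_)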

How to use DeepQLearning in Julia for very large states?

I would like to use the DeepQLearning.jl package from https://github.com/JuliaPOMDP/DeepQLearning.jl. In order to do so, we have to do something similar to
using DeepQLearning
using POMDPs
using Flux
using POMDPModels
using POMDPSimulators
using POMDPPolicies
# load MDP model from POMDPModels or define your own!
mdp = SimpleGridWorld();
# Define the Q network (see Flux.jl documentation)
# the gridworld state is represented by a 2 dimensional vector.
model = Chain(Dense(2, 32), Dense(32, length(actions(mdp))))
exploration = EpsGreedyPolicy(mdp, LinearDecaySchedule(start=1.0, stop=0.01, steps=10000/2))
solver = DeepQLearningSolver(qnetwork = model, max_steps=10000,
                             exploration_policy = exploration,
                             learning_rate=0.005, log_freq=500,
                             recurrence=false, double_q=true, dueling=true, prioritized_replay=true)
policy = solve(solver, mdp)
sim = RolloutSimulator(max_steps=30)
r_tot = simulate(sim, mdp, policy)
println("Total discounted reward for 1 simulation: $r_tot")
In the line mdp = SimpleGridWorld(), we create the MDP. When I tried to create my own MDP, I ran into the problem of a very large state space. A state in my MDP is a vector in {1,2,...,m}^n for some m and n. So, when defining the function POMDPs.states(mdp::myMDP), I realized that I would have to iterate over all the states, of which there are m^n.
Am I using the package in the wrong way? Or must the states be enumerated even when there are exponentially many of them? If the latter, then what is the point of using Deep Q-Learning? I thought Deep Q-Learning was meant to help precisely when the action and state spaces are very large.
DeepQLearning does not require enumerating the state space and can handle continuous-space problems.
DeepQLearning.jl only uses the generative interface of POMDPs.jl. As such, you do not need to implement the states function but just gen and initialstate (see the link on how to implement the generative interface).
However, due to the discrete action nature of DQN you also need POMDPs.actions(mdp::YourMDP) which should return an iterator over the action space.
By making those modifications to your implementation you should be able to use the solver.
The neural network in DQN takes as input a vector representation of the state. If your state is an m-dimensional vector, the neural network input will be of size m. The output size of the network will be equal to the number of actions in your model.
In the case of the grid world example, the input size of the Flux model is 2 (x, y positions) and the output size is length(actions(mdp))=4.
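For intuition, a rough PyTorch analogue of that Flux chain might look like this (a sketch only; m and n_actions are placeholders, not values taken from the question):

import torch.nn as nn

m = 2          # size of the state vector (x, y for the grid world)
n_actions = 4  # length(actions(mdp)) for the grid world

# Input size matches the state vector, output size matches the number of actions.
q_network = nn.Sequential(
    nn.Linear(m, 32),
    nn.Linear(32, n_actions),
)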

Size Mismatch when passing a state batch to network

Since I'm a beginner in ML, this question or the overall design may sound silly; sorry about that. I'm open to any suggestions.
I have a simple network with three linear layers one of which is output layer.
self.fc1 = nn.Linear(in_features=2, out_features=12)
self.fc2 = nn.Linear(in_features=12, out_features=16)
self.out = nn.Linear(in_features=16, out_features=4)
My states consist of two values, the x and y coordinates. That's why the input layer has two features.
In main.py I’m sampling and extracting memories in ReplayMemory class and pass them to get_current function:
experiences = memory.sample(batch_size)
states, actions, rewards, next_states = qvalues.extract_tensors(experiences)
current_q_values = qvalues.QValues.get_current(policy_net, states, actions)
Since a single state consists of two values, the shape of the states tensor is batch_size x 2, while the actions tensor has length batch_size. (Maybe that's the problem?)
When I pass "states" to my network in the get_current function to obtain the predicted Q-values for the states, I get this error:
size mismatch, m1: [1x16], m2: [2x12]
It looks like the network is treating the states tensor as if it were a single state tensor. I don't want that. In the tutorials I follow, they pass a states tensor which is a stack of multiple states, and there is no problem. What am I doing wrong? :)
This is how I store an experience:
memory.push(dqn.Experience(state, action, next_state, reward))
This is my extract tensors function:
def extract_tensors(experiences):
    # Convert a batch of Experiences to an Experience of batches
    batch = dqn.Experience(*zip(*experiences))
    state_batch = torch.cat(tuple(d[0] for d in experiences))
    action_batch = torch.cat(tuple(d[1] for d in experiences))
    reward_batch = torch.cat(tuple(d[2] for d in experiences))
    nextState_batch = torch.cat(tuple(d[3] for d in experiences))
    print(action_batch)
    return (state_batch, action_batch, reward_batch, nextState_batch)
The tutorial I follow is this project's tutorial:
https://github.com/nevenp/dqn_flappy_bird/blob/master/dqn.py
Look between lines 148 and 169, especially line 169, where the states batch is passed to the network.
SOLVED. It turned out that I didn't know how to properly create a 2D tensor.
A 2D tensor must look like this:
states = torch.tensor([[1, 1], [2,2]], dtype=torch.float)
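As a quick shape check (a sketch using the layer sizes from the question), a batch built this way passes through the network and yields one row of Q-values per state:

import torch
import torch.nn as nn

fc1 = nn.Linear(in_features=2, out_features=12)
fc2 = nn.Linear(in_features=12, out_features=16)
out = nn.Linear(in_features=16, out_features=4)

states = torch.tensor([[1, 1], [2, 2]], dtype=torch.float)  # shape [2, 2]
q_values = out(fc2(fc1(states)))                            # shape [2, 4]
print(q_values.shape)                                       # torch.Size([2, 4])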

How do the initial k-means points work in BigQuery ML?

I'm using BigQuery for machine learning, more specifically the k-means method on an unlabeled dataset where I'm trying to find clusters.
I'd like to know if someone has discovered how BQ ML initializes the centroids.
I already tried looking at the documentation, but either there is nothing or I couldn't find it.
CREATE MODEL `project.dataset.model_name`
OPTIONS(
  model_type = "kmeans",
  num_clusters = 3,
  distance_type = "euclidean",
  early_stop = TRUE,
  max_iterations = 20,
  standardize_features = TRUE)
AS
(SELECT * FROM `project.dataset.sample_date_to_train`)
The results differ a little each time I run it.
Does someone have experience with this subject?
For someone who is still looking for an answer: recently there has been an update to BigQuery ML on this topic. Two new parameters have been added to the CREATE MODEL statement, i.e.:
KMEANS_INIT_METHOD
KMEANS_INIT_COL
Basically, you can set K custom observations (belonging to the data table) that will serve as initial centroids for your k-means algorithm. You can find the relevant documentation at this link. Maybe it's not the most exciting solution to your problem, but it's still something you can work with if you need reproducibility.
If I had to guess, it probably uses a similar logic to TensorFlow (BQML might be using TF under the hood as it is). Random partitioning seems to be the TensorFlow default, so that would be my guess.
The reason you are seeing different results each time you train the model is the random nature of the initial values assigned to the centroids. The k-means algorithm begins by randomly selecting a value (position) for each of the k centroids chosen. If you review this documentation, it explains the exact process the k-means algorithm uses.
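As an analogue outside BigQuery (a scikit-learn sketch, not BQML itself), the same reproducibility issue can be handled by pinning down the initial centroids explicitly, which is what the custom-initialization options described above make possible:

import numpy as np
from sklearn.cluster import KMeans

X = np.random.RandomState(0).rand(100, 2)

# Random initialization: cluster centers can differ between runs.
km_random = KMeans(n_clusters=3, init="random", n_init=1).fit(X)

# Custom initialization: explicit starting centroids give reproducible results.
init_centroids = X[[0, 50, 99]]
km_fixed = KMeans(n_clusters=3, init=init_centroids, n_init=1).fit(X)
print(km_fixed.cluster_centers_)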

Learning a Sin function

I'm new to Machine Learning.
I'm building a simple model that should be able to predict a simple sin function.
I generated some sin values and am feeding them into my model.
import numpy as np
from math import sin
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

xs = np.arange(-10, 40, 0.1)
squarer = lambda t: sin(t)
vfunc = np.vectorize(squarer)
ys = vfunc(xs)

model = Sequential()
model.add(Dense(units=256, input_shape=(1,), activation="tanh"))
model.add(Dense(units=256, activation="tanh"))
# ..a number of layers here
model.add(Dense(units=256, activation="tanh"))
model.add(Dense(units=1))
model.compile(optimizer="sgd", loss="mse")
model.fit(xs, ys, epochs=500, verbose=0)
I then generate some test data, which overlaps my training data but also introduces some new data:
test_xs = np.arange(-15, 45, 0.01)
test_ys = model.predict(test_xs)
plt.plot(xs, ys)
plt.plot(test_xs, test_ys)
The predicted data and the training data look as follows. The more layers I add, the more curves the network is able to learn, but the training time increases.
Is there a way to make it predict the sine for any number of curves? Preferably with a small number of layers.
With a fully connected network I guess you won't be able to get arbitrarily long sequences, but with an RNN it looks like people have achieved this. A Google search will turn up many such efforts; I found this one quickly: http://goelhardik.github.io/2016/05/25/lstm-sine-wave/
An RNN learns a sequence based on a history of inputs, so it's designed to pick up these kinds of patterns.
I suspect the limitation you observed is akin to performing a polynomial fit. If you increase the degree of the polynomial you can better fit a function like this, but a polynomial can only represent a fixed number of inflection points depending on the degree you choose. Your observation here appears to be the same: as you increase layers you add more non-linear transitions, but you are limited by the fixed number of layers you chose as the architecture of a fully connected network.
An RNN does not work on the same principles, because it maintains a state and can make use of the state being passed forward in the sequence to learn the pattern of a single period of the sine wave and then repeat that pattern based on the state information.
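As a rough illustration of that idea (a sketch only; the window size, layer sizes, and training settings are arbitrary choices, and this is not the code from the linked post), an LSTM can be trained on sliding windows of the wave and then rolled forward to extrapolate:

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

window = 50
t = np.arange(0, 100, 0.1)
wave = np.sin(t)

# Build (previous `window` samples -> next sample) training pairs.
X = np.array([wave[i:i + window] for i in range(len(wave) - window)])
y = wave[window:]
X = X[..., np.newaxis]                      # shape (samples, window, 1)

model = Sequential()
model.add(LSTM(32, input_shape=(window, 1)))
model.add(Dense(1))
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, verbose=0)

# Roll the model forward to extrapolate beyond the training range.
seed = wave[-window:].copy()
preds = []
for _ in range(200):
    nxt = model.predict(seed[np.newaxis, :, np.newaxis], verbose=0)[0, 0]
    preds.append(nxt)
    seed = np.append(seed[1:], nxt)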

Predictive modelling

How do I perform regression (Random Forest, Neural Networks) for this kind of data?
The data contains several features, and we need to predict the sales quantity based on the week and the other attributes.
Here I am attaching the sample data.
Here we are trying to predict the sales quantity based on the other attributes.
Multivariate linear regression
Assuming
input variables x[][] (each row corresponds to a sample, each column corresponds to a variable such as week, season, ..)
expected output y[] (as many rows as x)
parameters being learned theta[] (as many as there are input variables + 1)
you are minimizing the cost function h:
h(theta) = sum over all samples j of ( sum over all variables i of x[j][i] * theta[i] - y[j] )^2
(with x[j][0] = 1 so that theta[0] acts as the intercept)
This can easily be achieved through gradient descent.
You can also include combinations of input variables as extra features (and simply add more thetas for those pseudo-variables).
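A minimal NumPy sketch of that gradient-descent fit (a hypothetical helper, not the repository code linked just below):

import numpy as np

def fit_linear_regression(x, y, lr=0.01, n_iter=5000):
    # x: (n_samples, n_features), y: (n_samples,)
    X = np.hstack([np.ones((x.shape[0], 1)), x])  # column of 1s so theta[0] is the intercept
    theta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        residuals = X @ theta - y                  # per-sample error of h(theta)
        theta -= lr * (X.T @ residuals) / len(y)   # gradient step on the squared-error cost
    return theta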
I have some code lying around in a GitHub repository that performs basic multivariate linear regression (for a course I sometimes teach).
https://github.com/jorisschellekens/ml/tree/master/linear_regression
