create a loop in random forest - machine-learning

I received some advice from a friend about how to improve my model, but I can't quite follow it. Could someone please help me work it out? Below is my prediction model:
fit1 <- cforest(SWL ~ affect + negemo + future + swear + sad + negate +
                  sexual + death + filler + leisure + conj + funct +
                  discrep + i + past + bio + body + cogmech + cause +
                  quant + incl + motion + tentat + excl + insight +
                  percept + posemo + ppron + relativ + space + article +
                  age + s_all + s_sad + gender,
                data = trainset1,
                controls = cforest_unbiased(ntree = 500, mtry = 1))
test_predict1 <- predict(fit1, newdata = testset1, type = "response")
My friend's advice:
You simply run a loop, say 500 times. In each iteration you create a temporary copy of the data frame and randomly shuffle all the SWL values. Then you run the whole fitting process on that shuffled data frame and save the resulting accuracy. When it finishes you have, say, 500 saved accuracies that you can compare to your best result.
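This is a permutation test. A minimal sketch of the idea in Python, with scikit-learn's RandomForestRegressor standing in for cforest and synthetic data standing in for trainset1/testset1 (adapt the data loading and model to your own setup):

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                 # synthetic stand-in data
y = X[:, 0] + rng.normal(size=200)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

real_score = (RandomForestRegressor(n_estimators=500, random_state=0)
              .fit(X_train, y_train).score(X_test, y_test))

null_scores = []
for i in range(500):                           # 500 shuffled refits
    y_perm = rng.permutation(y_train)          # destroy the X -> y link
    m = RandomForestRegressor(n_estimators=100, random_state=i)
    null_scores.append(m.fit(X_train, y_perm).score(X_test, y_test))

# If real_score beats (almost) all permuted scores, the model has real signal.
p_value = (sum(s >= real_score for s in null_scores) + 1) / (len(null_scores) + 1)
print(real_score, p_value)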

Related

Quadratic cost using np.linalg.norm doesn't work

Why does this line:
prog.AddQuadraticErrorCost(np.identity(len(q)), q0, q)
work, but this one:
prog.AddCost(np.linalg.norm(q_variables - q_nominal)**2)
fail with the following error?
RuntimeError: Expression pow(sqrt((pow(q(0), 2) + pow(q(2), 2) +
pow(q(4), 2) + pow(q(6), 2) + pow(q(7), 2) + pow(q(8), 2) + pow((-1 +
q(5)), 2) + pow((-0.59999999999999998 + q(1)), 2) + pow((1.75 + q(3)),
2))), 2) is not a polynomial. ParseCost does not support
non-polynomial expression.
Aren't the two expressions mathematically identical?
They are mathematically identical, but our symbolic engine is not yet powerful enough to recognize that sqrt(x)**2 should be simplified to x.
You can also write the expression using the symbolic form
prog.AddQuadraticCost((q-q0).dot(q-q0))
if you prefer readable code.
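Putting the pieces together, a minimal sketch of the working and failing forms side by side (assuming a recent pydrake; the 3-element q0 here is an arbitrary nominal vector chosen for illustration, not the one from the question):

import numpy as np
from pydrake.solvers import MathematicalProgram

prog = MathematicalProgram()
q = prog.NewContinuousVariables(3, "q")
q0 = np.array([0.0, -0.6, 1.75])

# Works: the cost is handed over directly in quadratic form (q-q0)' I (q-q0).
prog.AddQuadraticErrorCost(np.identity(len(q)), q0, q)

# Also works: a symbolic expression that is already a polynomial.
prog.AddQuadraticCost((q - q0).dot(q - q0))

# Fails: np.linalg.norm introduces sqrt(...), so squaring it afterwards
# leaves an expression that ParseCost cannot recognize as a polynomial.
# prog.AddCost(np.linalg.norm(q - q0) ** 2)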

Number of parameters for Keras SimpleRNN

I have a SimpleRNN like:
model.add(SimpleRNN(10, input_shape=(3, 1)))
model.add(Dense(1, activation="linear"))
The model summary says:
simple_rnn_1 (SimpleRNN) (None, 10) 120
I am curious about the parameter number 120 for simple_rnn_1.
Could someone answer my question?
When you look at the header of the table you see the column Param #:
Layer (type)              Output Shape    Param #
=================================================
simple_rnn_1 (SimpleRNN)  (None, 10)      120
This number represents the number of trainable parameters (weights and biases) in the respective layer, in this case your SimpleRNN.
Edit:
The formula for calculating the weights is as follows:
recurrent_weights + input_weights + biases
i.e. (num_features + num_units) * num_units + num_units
Explanation:
num_units: the number of units in the RNN
num_features: the number of features in your input
Now you have two things happening in your RNN.
First you have the recurrent loop, where the state is fed recurrently into the model to generate the next step. Weights for the recurrent step are:
recurrent_weights = num_units*num_units
Second, you have the new input from your sequence at each step:
input_weights = num_features*num_units
(Usually the last RNN state and the new input are concatenated and then multiplied with one single weight matrix; nevertheless, the input and the last RNN state still use different blocks of weights.)
So now we have the weights; what's missing are the biases, one bias per unit:
biases = num_units*1
So finally we have the formula:
recurrent_weights + input_weights + biases
or
num_units * num_units + num_features * num_units + biases
=
(num_features + num_units) * num_units + biases
In your case this means the trainable parameters are:
10*10 + 1*10 + 10 = 120
I hope this is understandable, if not just tell me - so I can edit it to make it more clear.
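If you want to verify this yourself, here is a small sketch (assuming TensorFlow 2.x Keras) that rebuilds the model from the question and compares the summary to the formula:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import SimpleRNN, Dense

model = Sequential()
model.add(SimpleRNN(10, input_shape=(3, 1)))
model.add(Dense(1, activation="linear"))
model.summary()  # the simple_rnn row reports 120 params

num_units, num_features = 10, 1
print((num_features + num_units) * num_units + num_units)  # 120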
It might be easier to understand visually with a simple network like this, with 4 units and 3 input dimensions:
The number of weights is 16 (4 * 4) + 12 (3 * 4) = 28 and the number of biases is 4.
Here 4 is the number of units and 3 is the number of input dimensions, so the formula is just like in the first answer: num_units^2 + num_units * input_dim + num_units, or simply num_units * (num_units + input_dim + 1), which yields 10 * (10 + 1 + 1) = 120 for the parameters given in the question.
I visualized the SimpleRNN you added; I think the figure can explain a lot.
SimpleRNN layer (I'm a newbie here and can't post images directly, so you need to click the link).
In the unrolled view, a SimpleRNN layer can be seen as a dense layer whose input is the concatenation of the current input and the layer's own output from the previous step.
So the number of parameters of SimpleRNN can be computed as a dense layer:
num_para = units_pre * units + num_bias
where:
units_pre is the sum of input neurons (1 in your settings) and units (see below),
units is the number of neurons (10 in your settings) in the current layer,
num_bias is the number of bias terms in the current layer, which equals units.
Plugging in your settings, we get num_para = (1 + 10) * 10 + 10 = 120.
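To see the individual tensors behind that count, you can also inspect the layer's weights directly (a sketch, assuming the two-layer model from the question has been built, e.g. as in the snippet in the first answer):

kernel, recurrent_kernel, bias = model.layers[0].get_weights()
print(kernel.shape)            # (1, 10)  input weights
print(recurrent_kernel.shape)  # (10, 10) recurrent weights
print(bias.shape)              # (10,)    one bias per unit
print(kernel.size + recurrent_kernel.size + bias.size)  # 120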

implementing LSTM with theano scan, way slower than using loops

I am using Theano/Pylearn2 to implement an LSTM model inside my own network. However, I've found that Theano's scan is much, much slower than using plain loops. Here is the output of the Theano profiler:
<% time> <sum %> <apply time> <time per call> <type> <#call> <#apply> <Class name>
95.4% 95.4% 25.255s 4.31e-02s Py 586 3 theano.scan_module.scan_op.Scan
1.8% 97.2% 0.466s 4.72e-05s C 9864 41 theano.sandbox.cuda.basic_ops.GpuElemwise
0.8% 97.9% 0.199s 8.75e-05s C 2276 10 theano.sandbox.cuda.basic_ops.GpuAlloc
0.7% 98.7% 0.196s 1.14e-04s C 1724 8 theano.sandbox.cuda.blas.GpuDot22
0.3% 99.0% 0.087s 1.06e-04s C 828 3 theano.sandbox.cuda.basic_ops.GpuIncSubtensor
0.2% 99.2% 0.051s 1.66e-04s Py 310 2 theano.sandbox.cuda.basic_ops.GpuAdvancedSubtensor1
and the Ops:
<% time> <sum %> <apply time> <time per call> <type> <#call> <#apply> <Op name>
77.2% 77.2% 20.433s 7.40e-02s Py 276 1 forall_inplace,gpu,grad_of_lstm__layers}
18.2% 95.4% 4.822s 1.56e-02s Py 310 2 forall_inplace,gpu,lstm__layers}
So lots and lots of time is spent on Scan (which is sort of expected, but I didn't expect it to be so slow).
The main body of my code is:
def fprop(self, state_below, state_prev=None, cell_prev=None):
    if state_prev is None:
        state_prev = self.state_prev
    if cell_prev is None:
        cell_prev = self.cell_prev
    # input and forget gates
    i_gate = T.nnet.sigmoid(T.dot(state_below, self.Wi) +
                            T.dot(state_prev, self.Ui))
    f_gate = T.nnet.sigmoid(T.dot(state_below, self.Wf) +
                            T.dot(state_prev, self.Uf))
    # candidate cell state, then the gated cell update
    C = T.tanh(T.dot(state_below, self.Wc) +
               T.dot(state_prev, self.Uc))
    C = i_gate * C + f_gate * cell_prev
    # output gate, with a peephole connection through Vo
    o_gate = T.nnet.sigmoid(T.dot(state_below, self.Wo) +
                            T.dot(state_prev, self.Uo) +
                            T.dot(C, self.Vo))
    h_out = o_gate * T.tanh(C)
    return h_out, C
And I wrote my scan as:
[h, c, out], _ = theano.scan(
    fn=self.fprop_with_output,
    sequences=[X.T, Y[:, 1:].T],
    outputs_info=[dict(initial=h_, taps=[-1]),
                  dict(initial=c_, taps=[-1]),
                  None],
    n_steps=X.shape[1] - 1)
One thing I've noticed is that the Theano scan op is listed with type Py (a Python implementation?). Is that the reason this is so ridiculously slow, or did I do something wrong? Why does Theano use a Python implementation of Scan instead of a C one?
(I said using loops is faster, but only at runtime; for a large model I didn't manage to compile the loop-based version within a reasonable amount of time.)
This was asked a while ago, but I had (and still have) the same problem. The answer is that scan is slow on the GPU.
See: https://github.com/Theano/Theano/issues/1168
It takes time for the Theano developers to implement scan and gradient-of-scan in C and on the GPU, because they are much more complicated than other functions. That is why, when you profile, you see GpuElemwise, GpuGemv, GpuDot22, etc., but no GpuScan or GpuGradofScan.
Meanwhile, you can only fall back to for loops.
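For reference, a minimal sketch of what falling back to a loop can look like here: unroll the recurrence at graph-construction time with an ordinary Python loop (this assumes n_steps is a fixed Python integer, that x_steps[t] is the step-t input, and that it runs inside the same class as the fprop method from the question; h_ and c_ are the initial states from the scan call):

# Build the graph step by step instead of using theano.scan.
h, c = h_, c_
outs = []
for t in range(n_steps):
    h, c = self.fprop(x_steps[t], state_prev=h, cell_prev=c)
    outs.append(h)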

Aggregating timeseries from sensors

I have about 500 sensors which each emit a value about once a minute. It can be assumed that a sensor's value remains constant until the next value is emitted, thus creating a time series. The sensors are not synchronized in terms of when they emit data (so the observation timestamps vary), but it's all collected centrally and stored per sensor (to allow filtering by subsets of sensors).
How can I produce an aggregate time series that gives the sum of the data across sensors?
(I need to create a time series over one day's observations, so roughly 24 x 60 x 500 = 720,000 observations per day. The calculations also need to be fast, preferably running in under 1 second.)
Example - raw input:
q)n:10
q)tbl:([]time:n?.z.t;sensor:n?3;val:n?100.0)
q)select from tbl
time sensor val
----------------------------
01:43:58.525 0 33.32978
04:35:12.181 0 78.75249
04:35:31.388 0 1.898088
02:31:11.594 1 16.63539
07:16:40.320 1 52.34027
00:49:55.557 2 45.47007
01:18:57.918 2 42.46532
02:37:14.070 2 91.98683
03:48:43.055 2 41.855
06:34:32.414 2 9.840246
The output I'm looking for should show the same timestamps and the sum across sensors. If a sensor doesn't have a record at a matching timestamp, its previous value should be used (records are only written when a sensor's output changes).
Expected output, sorted by time
time aggregatedvalue
----------------------------
00:49:55.557 45.47007 / 0 (sensor 0) + 0 (sensor 1) + 45.47007 (sensor 2)
01:18:57.918 42.46532 / 0 (sensor 0) + 0 (sensor 1) + 42.46532 (new value on sensor 2)
01:43:58.525 75.7951 / 33.32978 + 0 + 42.46532
02:31:11.594 92.43049 / 33.32978 + 16.63539 + 42.46532
02:37:14.070 141.952 / 33.32978 + 16.63539 + 91.98683
03:48:43.055 91.82017 / 33.32978 + 16.63539 + 41.855
04:35:12.181 137.24288 / 78.75249 + 16.63539 + 41.855
04:35:31.388 60.388478 / 1.898088 + 16.63539 + 41.855
06:34:32.414 28.373724 / 1.898088 + 16.63539 + 9.840246
07:16:40.320 64.078604 / 1.898088 + 52.34027 + 9.840246
I'm assuming the records come in time order, so tbl will be sorted by time. If this is not the case, sort the table by time first.
d is a dictionary of the last value by sensor at each time. The solution below is probably not the most elegant, and I can imagine a more performant method is available that would not require the each.
q)d:(`long$())!`float$()
q)f:{d[x]::y;sum d}
q)update agg:f'[sensor;val] from tbl
time sensor val agg
-------------------------------------
00:34:28.887 2 53.47096 53.47096
01:05:42.696 2 40.66642 40.66642
01:26:21.548 1 41.1597 81.82612
01:53:10.321 1 51.70911 92.37553
03:42:39.320 1 17.80839 58.47481
05:15:26.418 2 51.59796 69.40635
05:47:49.777 0 30.17723 99.58358
11:32:19.305 0 39.27524 108.6816
11:37:56.091 0 71.11716 140.5235
12:09:18.458 1 78.5033 201.2184
Your data set of 720k records is relatively small, so any aggregations should run well under a second. If you are storing many days of data you may want to consider some of the techniques (splaying, partitioning, etc.) outlined here.
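For comparison, the same carry-forward-and-sum logic sketched in Python/pandas (a toy illustration of the idea, not the q solution above; the values are the first rows of the question's example):

import pandas as pd

df = pd.DataFrame({
    "time":   ["00:49:55.557", "01:18:57.918", "01:43:58.525", "02:31:11.594"],
    "sensor": [2, 2, 0, 1],
    "val":    [45.47007, 42.46532, 33.32978, 16.63539],
}).sort_values("time")

wide = (df.pivot(index="time", columns="sensor", values="val")
          .ffill()        # each sensor keeps its previous value until it changes
          .fillna(0.0))   # sensors with no reading yet contribute 0
aggregated = wide.sum(axis=1)
print(aggregated)  # 45.47007, 42.46532, 75.7951, 92.43049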
It's been a while since I spent a lot of time on this. Would it help to go back after you have a large batch, perform linear interpolation at specific intervals, and store that data? I have worked on sensor data that comes in ordered by time, where the sensors only send data when the value actually changes. To accelerate reporting and other calculations, we aggregated the data into fixed periods (like 1 second, 30 seconds, 1 minute), often doing the averaging you are talking about along the way, and performed linear interpolation as well.
The downside is that it requires additional storage space, but the performance gains are significant.
Looks like you have a great proposed solution already.

Matrix of several perspective transformations

I am writing an image processing program using OpenCV.
I need to transform the image using several perspective transformations.
A perspective transformation is defined by a matrix. I know that we can get a complex affine transform by multiplying the simple transform matrices (rotation, translation, etc.).
But when I multiplied two perspective transformation matrices, I didn't get the transformation matrix that corresponds to applying the first and then the second transformation.
So, how can I get the matrix of several consecutive perspective transformations?
Suppose you have two perspective matrices C: (x,y) -> (u,v) and D: (u,v) -> (r,g),
and you want M: (x,y) -> (r,g).
Substitute ui and vi from (1) and (2) into equations (3) and (4):
 ui = (c00*xi + c01*yi + c02) / (c20*xi + c21*yi + c22) (1)
 vi = (c10*xi + c11*yi + c12) / (c20*xi + c21*yi + c22) (2)
 ri = (d00*ui + d01*vi + d02) / (d20*ui + d21*vi + d22) (3)
 gi = (d10*ui + d11*vi + d12) / (d20*ui + d21*vi + d22) (4)
After that you can see that M = D*C (note the order: C, which is applied first, stands on the right).
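A small numerical check of this composition rule with NumPy (the two homographies here are arbitrary example matrices):

import numpy as np

C = np.array([[1.0, 0.2, 5.0],
              [0.1, 1.0, 3.0],
              [1e-4, 2e-4, 1.0]])
D = np.array([[0.9, -0.1, 2.0],
              [0.2, 1.1, -1.0],
              [3e-4, 1e-4, 1.0]])

def apply_h(H, p):
    """Apply homography H to 2D point p via homogeneous coordinates."""
    x, y, w = H @ np.array([p[0], p[1], 1.0])
    return np.array([x / w, y / w])

p = np.array([10.0, 20.0])
two_steps = apply_h(D, apply_h(C, p))   # D applied after C
one_step = apply_h(D @ C, p)            # single combined matrix M = D*C
assert np.allclose(two_steps, one_step)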
