Vectorizing the labels for a multi-layer perceptron

I am trying to build an MLP model on a .csv dataset that has 9 classes and some textual data. These are the labels and their counts:
7:953,
4:686,
1:568,
2:452,
6:275,
5:242,
3:89,
9:37,
8:19.
For an MLP, I have read that we need to vectorize (one-hot encode) the class labels. Is that so?
And when I run the following code to do so, I get the following error:
y_tr = keras.utils.to_categorical(y_tr, num_classes = 9) # y_tr is numpy array with class labels
y_te = keras.utils.to_categorical(y_te, num_classes = 9) # y_te also a numpy array
**IndexError:** index 9 is out of bounds for axis 1 with size 9
Help me resolve this. Thank you!
Sample data:
ID Gene Variation Class Text
1 1 CBL W802* 2 tumorigenesis
2 2 CBL Q249E 2 tumorigenesis
3 3 CBL N454D 3 attractive
4 4 CBL L399V 4 cancer
5 5 CBL V391I 4 cancer
6 6 CBL V430M 5 cancer
7 7 CBL Deletion 1 56
8 8 CBL Y371H 4 cancer
9 9 CBL C384R 4 mutations
10 10 CBL P395A 4 cancer
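For reference, keras.utils.to_categorical indexes classes from 0, so with num_classes=9 it only accepts labels 0 through 8, and a label of 9 falls out of bounds. A minimal sketch of the usual fix, shifting the 1-based labels down to 0-based before encoding (assuming y_tr and y_te are integer NumPy arrays as in the question):
import numpy as np
from tensorflow import keras  # plain `import keras` works the same for standalone Keras

# Labels run 1..9 but to_categorical expects 0..8 when num_classes=9,
# so shift them down by one before one-hot encoding.
y_tr = keras.utils.to_categorical(np.asarray(y_tr) - 1, num_classes=9)
y_te = keras.utils.to_categorical(np.asarray(y_te) - 1, num_classes=9)
The alternative is to keep the labels as-is and pass num_classes=10, at the cost of an always-zero column for the unused class 0.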

Related

Calculate Positional Difference based on row for string values for two tables

Table 1:
Position  Team
1         MCI
2         LIV
3         MAN
4         CHE
5         LEI
6         AST
7         BOU
8         BRI
9         NEW
10        TOT
Table 2:
Position  Team
1         LIV
2         MAN
3         MCI
4         CHE
5         AST
6         LEI
7         BOU
8         TOT
9         BRI
10        NEW
The output I'm looking for is a total positional difference of 10, i.e. the sum of every team's positional change. The positional difference is always positive whether a team moves up or down; think of it as a league table. How can I do this in Excel/Google Sheets?
Table 2 New (using a formula to find the positional difference):
Position  Team  Positional Difference
1         LIV   1
2         MAN   1
3         MCI   2
4         CHE   0
5         AST   1
6         LEI   1
7         BOU   0
8         TOT   2
9         BRI   1
10        NEW   1
Assuming that Table 1 is in columns A:B and Table 2 in columns D:E, try this in the Positional Difference column:
=IFNA(ABS(INDEX(A:B,MATCH(E2,B:B,0),1)-D2),"-")
MATCH finds the Table 2 team (E2) in Table 1's team column, INDEX returns that team's Table 1 position, ABS keeps the difference positive, and IFNA shows "-" for teams missing from Table 1. Summing the column gives the total of 10.
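The question asks for a spreadsheet formula, but for reference, here is the same computation sketched in pandas (variable names are illustrative, not from the original question):
import pandas as pd

t1 = pd.DataFrame({"Position": range(1, 11),
                   "Team": ["MCI", "LIV", "MAN", "CHE", "LEI",
                            "AST", "BOU", "BRI", "NEW", "TOT"]})
t2 = pd.DataFrame({"Position": range(1, 11),
                   "Team": ["LIV", "MAN", "MCI", "CHE", "AST",
                            "LEI", "BOU", "TOT", "BRI", "NEW"]})

# Join the two tables on Team, take the absolute position change per team,
# and sum it for the total difference.
merged = t2.merge(t1, on="Team", suffixes=("_new", "_old"))
merged["Positional Difference"] = (merged["Position_new"] - merged["Position_old"]).abs()
print(merged["Positional Difference"].sum())  # 10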

Clustering to achieve heterogeneous groups

I want to group 100 users based on a categorical variable (which can be low, medium, or high). The group size should be 3. I want to get the maximal heterogeneity within groups, assuming that users are distributed equally. I wonder if I can use some clustering algorithm to group based on the dissimilarity? Any suggestions?
I don't believe you need a clustering algorithm to group the data based upon a categorical variable.
Based on your question, I think this should work.
# Code
from sklearn.model_selection import train_test_split

# First split carves off one third of the rows as group1; stratifying on 'lab' keeps the L/M/H mix equal.
group1, group23 = train_test_split(data, test_size=2/3., stratify=data['lab'])
# Split the remaining two thirds in half to get group2 and group3.
group2, group3 = train_test_split(group23, test_size=1/2., stratify=group23['lab'])
Stratify makes sure that the maximum heterogeneity is maintained for the given categorical value.
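For the snippet to run end to end, `data` needs to be a DataFrame with a label column; here is a minimal construction matching the sample output below (assumed here, not part of the original answer):
import pandas as pd

data = pd.DataFrame({"val1": range(1, 10),
                     "val2": range(1, 10),
                     "lab":  ["L", "L", "L", "M", "M", "M", "H", "H", "H"]})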
# Sample output
print(data)
val1 val2 lab
0 1 1 L
1 2 2 L
2 3 3 L
3 4 4 M
4 5 5 M
5 6 6 M
6 7 7 H
7 8 8 H
8 9 9 H
print(group1)
val1 val2 lab
4 5 5 M
1 2 2 L
6 7 7 H
print(group2)
val1 val2 lab
8 9 9 H
2 3 3 L
3 4 4 M
print(group3)
val1 val2 lab
0 1 1 L
7 8 8 H
5 6 6 M
train_test_split() Documentation

How to use multi inputs or multi features for RNN or LSTM

I have a time series dataframe like the one below (the numbers in it are meaningless), and I am having some problems applying an LSTM.
I have seen some LSTM demos, and most use this pattern: [y_{t-2}, y_{t-1}, y_{t}] to predict [y_{t+1}]. But as the dataframe below shows, I also have featureA, featureB, and featureC, so my question is: how do I use multiple inputs or multiple features with an RNN or LSTM?
time featureA featureB featureC target
1 2 5 6 1
2 4 1 7 3
3 6 2 1 5
4 2 4 0 7
5 7 6 1 5
6 9 3 2 8
7 1 2 3 5
8 2 9 5 10
9 1 10 7 6
10 3 2 2 11
For RNN/LSTM, it is more like this: [..., y_{t-2}(x_{t-2}), y_{t-1}(x_{t-1})] to predict [y_{t}(x_{t})]
Or more succinctly:
y_{t} = f(y_{t-1}, x_{t})
So in the forward pass you still use your inputs x_{t} (i.e. your features) plus the outputs from previous timesteps to make the prediction at the current timestep.
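To make the shapes concrete, here is a minimal Keras sketch (layer sizes, the window length, and the random data are illustrative): each training sample is a window of the last few feature vectors, so the LSTM input has shape (samples, timesteps, features), with the features playing the role of featureA/B/C from the question.
import numpy as np
from tensorflow import keras

T, n_features, window = 100, 3, 5
rng = np.random.default_rng(0)
X_raw = rng.random((T, n_features))  # stand-in for featureA/B/C over time
y_raw = rng.random(T)                # stand-in for the target column

# Slide a window over the series: each sample holds the previous
# `window` feature vectors; the label is the target at the next step.
X = np.stack([X_raw[i:i + window] for i in range(T - window)])  # (95, 5, 3)
y = y_raw[window:]                                              # (95,)

model = keras.Sequential([
    keras.layers.LSTM(16, input_shape=(window, n_features)),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, verbose=0)
If you also want previous targets y_{t-1} as inputs, append the target column to the features before windowing.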

Apply function to each row in Torch

I know that tensors have an apply method, but this only applies a function to each element. Is there an elegant way to do row-wise operations? For example, can I multiply each row by a different value?
Say
A =
1 2 3
4 5 6
7 8 9
and
B =
1
2
3
and I want to multiply each element in the ith row of A by the ith element of B to get
1 2 3
8 10 12
21 24 27
how would I do that?
See this link: Torch - Apply function over dimension
(Thanks to Alexander Lutsenko for providing it. I just moved it to the answer.)
One possibility is to expand B as follows:
1 1 1
2 2 2
3 3 3
[torch.DoubleTensor of size 3x3]
Then you can use element-wise multiplication directly:
local A = torch.Tensor{{1,2,3},{4,5,6},{7,8,9}}
local B = torch.Tensor{1,2,3}
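-- view B as a 3x1 column and expand it to a 3x3 view (no data copied),
-- then multiply element-wise; note that cmul modifies A in place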
local C = A:cmul(B:view(3,1):expand(3,3))

Clustering unique datasets based on similarities (equality)

I just entered the space of data mining, machine learning, and clustering. I have a specific problem and do not know which technique to use to solve it.
I want to perform clustering of observations (objects, or whatever) in a specific data format. All variables in each observation are numeric. My data input looks like this:
1 2 3 4 5 6
1 3 5 7
2 9 10 11 12 13 14
45 1 22 23 24
Let's say that n represents a row (observation, or 1D vector, ...) and m represents a column (variable index within each vector). n could be a very large number, and 0 < m < 100. A key point is that values cannot repeat within an observation (in any row, a given value can appear only once).
So, I want to perform clustering that puts observations in the same cluster based on the number of identical values the rows contain.
If there are two rows like:
1
1 2 3 4 5
they should be clustered into the same cluster; if there is no match, then definitely not. Also, the number of rows in one cluster should not go above 100.
A tough problem? If not, just for information: I haven't mentioned the time dimension. Let's skip that for now.
So, any directions from you guys?
Thanks and best regards,
JDK
It's hard to recommend anything since your problem is totally vague and we have no information on the data. Data mining (and in particular explorative techniques like clustering) is all about understanding the data, so we cannot provide the ultimate answer.
Two things for you to consider:
1. if the data indicates the presence of species or traits, Jaccard similarity (and other set-based metrics) is worth a try; see the sketch after this answer.
2. if absence is less informative, maybe you should be mining association rules, not clusters.
Either way, without understanding your data these numbers are as good as random numbers. You can easily cluster random numbers, and spend weeks to get the best useless result!
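As a small illustration of the first suggestion, Jaccard similarity treats each row as a set and scores the overlap; a sketch using two rows from the question:
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

print(jaccard([1, 2, 3, 4, 5, 6], [1, 3, 5, 7]))  # 3 shared of 7 total -> ~0.43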
Can your problem be treated as a bag-of-words model, where each article (observation row) has no more than 100 terms?
Anyway, I think you have to give more information and examples of "why" and "how" you want to cluster these data. For example, say we have:
1 2 3
2 3 4
2 3 4 5
1 2 3 4
3 4 6
6 7 8
9 10
9 11
10 12 13 14
What is your expected clustering? How many clusters are in this clustering: only two?
Before you give more information, based on your current description I think you do not need a clustering algorithm, but a connected-components structure. In a first round you process the dataset to build the connected components, and in a second round you check which connected component each row belongs to. Taking the example above, the first round:
1 2 3 : 1 <- 1, 1 <- 2, 1 <- 3 (all points link to the smallest point to
show they belong to the same cluster as that smallest point)
2 3 4 : 2 <- 4 (2 and 3 are already linked to 1, which is <= 2, so they do
not need to change)
2 3 4 5 : 2 <- 5
1 2 3 4 : 1 <- 4 (in fact this change is not essential because we already have
1 <- 2 <- 4, but making it can speed up the second round)
3 4 6 : 3 <- 6
6 7 8 : 6 <- 7, 6 <- 8
9 10 : 9 <- 9, 9 <- 10
9 11 : 9 <- 11
10 12 13 14 : 10 <- 12, 10 <- 13, 10 <- 14
Now we have a forest structure representing the connected components of the points. In the second round you can easily pick one point from each row (the smallest is best) and trace its root in the forest. Rows that share the same root are in the same, in your words, cluster. For example:
1 2 3 : 1 <- 1, cluster root 1
2 3 4 5 : 1 <- 1 <- 2, cluster root 1
6 7 8 : 1 <- 1 <- 3 <- 6, cluster root 1
9 10 : 9 <- 9, cluster root 9
10 12 13 14 : 9 <- 9 <- 10, cluster root 9
This process takes O(k) space, where k is the number of distinct points, and O(nm + nh) time, where h is the height of the forest structure; typically h << m.
I am not sure if this is the result you want.
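For what it's worth, the two-round idea above is exactly union-find over the points; here is a minimal sketch using the example rows (variable names are mine, not from the answer):
rows = [
    [1, 2, 3], [2, 3, 4], [2, 3, 4, 5], [1, 2, 3, 4],
    [3, 4, 6], [6, 7, 8], [9, 10], [9, 11], [10, 12, 13, 14],
]

parent = {}

def find(x):
    # Follow parent links to the root, compressing the path as we go.
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(a, b):
    ra, rb = find(a), find(b)
    if ra != rb:
        # Link the larger root under the smaller, as in the walkthrough above.
        if ra > rb:
            ra, rb = rb, ra
        parent[rb] = ra

# First round: link every value in a row to the row's smallest value.
for row in rows:
    for v in row[1:]:
        union(row[0], v)

# Second round: a row's cluster is the root of any of its values.
clusters = {}
for row in rows:
    clusters.setdefault(find(min(row)), []).append(row)

for root, members in sorted(clusters.items()):
    print(root, members)
This prints two clusters, rooted at 1 and at 9, matching the walkthrough.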
