How can I do caret training with cross-validation on predefined (grouped) splits in the training data?

I would like to train an ML model with caret based on training data.
My training data has the following structure:
df <- data.frame(
  Label      = c("A","A","A","B","A", "A","A","B","B","A", "B","B","A","A","A"),
  EXPERIMENT = c("X","X","X","X","X", "Y","Y","Y","Y","Y", "Z","Z","Z","Z","Z"),
  VALUE1     = c(  1,  2,  1,  5,  1,   3,  1,  5,  6,  1,   7,  5,  1,  2,  2),
  VALUE2     = c(  9,  7,  8,  1,  8,   2,  1,  9,  8,  2,   7,  7,  2,  1,  1)
)
I want to use train and split the data by experiment for cross-validation (in this example, 3 cross-validation splits), that is:
Split 1: training = X,Y and validation = Z
Split 2: training = X,Z and validation = Y
Split 3: training = Y,Z and validation = X
How can I do that? With trainControl?
I found an index option in trainControl, but did not understand whether it can do this.
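A minimal sketch of one way to do this, assuming caret's groupKFold helper to build one fold per EXPERIMENT and feeding the result to trainControl's index argument (the rpart model is only a placeholder):
library(caret)

df$Label <- factor(df$Label)

# One resample per experiment: each element of 'folds' holds the row indices
# used for training in that resample (i.e. the rows of the other experiments);
# the held-out experiment is used for validation.
folds <- groupKFold(df$EXPERIMENT, k = 3)

ctrl <- trainControl(method = "cv", index = folds)

fit <- train(Label ~ VALUE1 + VALUE2, data = df,
             method = "rpart",   # placeholder model; use whatever method you need
             trControl = ctrl)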

Related

How can we reduce the size of the graph generated by Maximal Clique and remove the nodes of specific cliques?

I am using networkx's find_cliques function to find the maximal cliques in a graph. I want to reduce the size of this graph based on its maximal cliques.
Here is the code:
import torch
import networkx as nx
from torch_geometric.utils.convert import to_networkx
from torch_geometric.data import Data

edge_list = torch.tensor([
    [0, 1, 1, 2, 2, 2, 3, 4, 5, 6, 6, 6, 7, 7, 8],  # source nodes
    [1, 2, 3, 4, 5, 3, 9, 5, 6, 7, 8, 9, 8, 9, 9]   # target nodes
], dtype=torch.long)
node_features = torch.tensor([
[-8, 1, 5, 8, 2, -3], # Features of Node 0
[-1, 0, 2, -3, 0, 1], # Features of Node 1
[1, -1, 0, -1, 2, 1], # Features of Node 2
[0, 1, 4, -2, 3, 4], # Features of Node 3
],dtype=torch.long)
data = Data(x=node_features, edge_index=edge_list)
G_directed = to_networkx(data)
G_undirected = G_directed.to_undirected()
no_cliques = list(nx.find_cliques(G_undirected))
print(no_cliques)
List of maximal cliques = {1: [1, 2], 2: [2, 3, 4], 3: [4, 5, 6], 4: [6, 7], 5: [7, 8, 9, 10], 6: [10, 3]}
In the next step, I reduce the original graph to a coarsened graph by treating each clique as one node and joining two clique-nodes whenever the cliques are not disjoint. I also want to remove any clique whose nodes have all already appeared in other cliques. In the example above, the nodes of clique 6 are already assigned to cliques 2 and 5, so this clique should be removed from the clique list before building the new graph.
For better understanding, here is a picture: the hierarchy of the graph, with the coarsened graph at each level.
I want to build this type of graph hierarchy based on maximal cliques. Does anyone know how I can do it?
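Here is a minimal sketch of the coarsening step described above, under the stated rules (the function name and the greedy largest-clique-first order are my own choices): drop every maximal clique whose nodes are all covered by the cliques kept so far (clique 6 in the example), make each remaining clique a node, and join two clique-nodes whenever the cliques share a vertex.
import networkx as nx

def coarsen_by_maximal_cliques(G):
    cliques = [set(c) for c in nx.find_cliques(G)]

    # Greedily keep cliques (largest first); drop any clique whose nodes
    # are already covered by the cliques kept so far.
    kept, covered = [], set()
    for c in sorted(cliques, key=len, reverse=True):
        if not c <= covered:
            kept.append(c)
            covered |= c

    # Coarsened graph: one node per kept clique, an edge between two
    # clique-nodes whenever the cliques share at least one original node.
    H = nx.Graph()
    H.add_nodes_from(range(len(kept)))
    for i in range(len(kept)):
        for j in range(i + 1, len(kept)):
            if kept[i] & kept[j]:
                H.add_edge(i, j)
    return H, kept

# e.g. H1, cliques1 = coarsen_by_maximal_cliques(G_undirected), then repeat
# on H1 to obtain the next level of the hierarchy.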

How to generate conditions within constraints in Z3py

Let us assume there are 5 time slots and at each time slot I have 4 options to choose from, each with a known reward, e.g. rewards = [5, 2, 1, -3]. At every time step, at least one of the four options must be selected, with the condition that if option 3 (with reward -3) is chosen at time t, then none of the options may be selected in the remaining time steps. As an example, with the options indexed from 0, both [2, 1, 1, 0, 3] and [2, 1, 1, 3, 99] are valid solutions; the second solution has option 3 selected in the 3rd time step, and 99 is some arbitrary value representing that no option was chosen.
The Z3py code I tried is here:
from z3 import *

T = 6  # total time slots
s = Solver()
pick = [[Bool('t%d_ch%d' % (j, i)) for i in range(4)] for j in range(T)]

# Rewards of each option
Rewards = [5, 2, 1, -3]

# Select at most one of the 4 options as True
for i in range(T):
    s.add(Or(Not(Or(pick[i][0], pick[i][1], pick[i][2], pick[i][3])),
             And(Xor(pick[i][0], pick[i][1]), Not(Or(pick[i][2], pick[i][3]))),
             And(Xor(pick[i][2], pick[i][3]), Not(Or(pick[i][0], pick[i][1])))))

# If option 3 is picked, then none of the 4 options should be selected for the
# future time slots; else, exactly one should be selected.
for i in range(len(pick) - 1):
    for j in range(4):
        s.add(If(And(j == 3, pick[i][j]),
                 Not(Or(pick[i+1][0], pick[i+1][1], pick[i+1][2], pick[i+1][3])),
                 Or(And(Xor(pick[i+1][0], pick[i+1][1]), Not(Or(pick[i+1][2], pick[i+1][3]))),
                    And(Xor(pick[i+1][2], pick[i+1][3]), Not(Or(pick[i+1][0], pick[i+1][1]))))))

if s.check() == unsat:
    print("unsat")
else:
    m = s.model()
    print(m)
With this implementation, I am not getting solutions such as [2, 1, 1, 3, 99]. All of them either do not have option 3 or have it in the last time slot.
I know there is an error inside the If part but I'm unable to figure it out. Is there a better way to achieve such solutions?
It's hard to decipher what you're trying to do. From a basic reading of your description, I think this might be an instance of the XY problem; see https://xyproblem.info/ for details, and try to cast your question in terms of your original goal instead of the particular solution you're trying to implement. (It seems to me that the solution you came up with is unnecessarily complicated.)
Having said that, you can solve your problem as stated if you get rid of the 99 requirement and simply indicate -3 as the terminator. Once you pick -3, then all the following picks should be -3. This can be coded as follows:
from z3 import *

T = 6
s = Solver()

Rewards = [5, 2, 1, -3]
picks = [Int('pick_%d' % i) for i in range(T)]

def pickReward(p):
    return Or([p == r for r in Rewards])

for i in range(T):
    if i == 0:
        s.add(pickReward(picks[i]))
    else:
        s.add(If(picks[i-1] == -3, picks[i] == -3, pickReward(picks[i])))

while s.check() == sat:
    m = s.model()
    picked = []
    for i in picks:
        picked += [m[i]]
    print(picked)
    s.add(Or([p != v for p, v in zip(picks, picked)]))
When run, this prints:
[5, -3, -3, -3, -3, -3]
[1, 5, 5, 5, 5, 1]
[1, 2, 5, 5, 5, 1]
[2, 2, 5, 5, 5, 1]
[2, 5, 5, 5, 5, 1]
[2, 1, 5, 5, 5, 1]
[1, 1, 5, 5, 5, 1]
[2, 1, 5, 5, 5, 2]
[2, 5, 5, 5, 5, 2]
[2, 5, 5, 5, 5, 5]
[2, 5, 5, 5, 5, -3]
[2, 1, 5, 5, 5, 5]
...
I interrupted the above as it keeps enumerating all the possible picks. There are a total of 1093 of them in this particular case.
(You can get different answers depending on your version of z3.)
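If you only want the total number of solutions rather than the listing, a small variation of the loop above (used in place of the printing loop, on the same solver and picks) just counts the models:
count = 0
while s.check() == sat:
    m = s.model()
    picked = [m[p] for p in picks]
    # Block the current assignment so the next check finds a different one.
    s.add(Or([p != v for p, v in zip(picks, picked)]))
    count += 1
print(count)  # 1093 for this encoding, matching the total mentioned above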
Hope this gets you started. Should you have further questions, stating your original goal directly is usually much more helpful.

How to compute mean/max of HuggingFace Transformers BERT token embeddings with attention mask?

I'm using the HuggingFace Transformers BERT model, and I want to compute a summary vector (a.k.a. embedding) over the tokens in a sentence, using either the mean or max function. The complication is that some tokens are [PAD], so I want to ignore the vectors for those tokens when computing the average or max.
Here's an example. I initially instantiate a BertTokenizer and a BertModel:
import torch
import transformers
from transformers import AutoTokenizer, AutoModel
transformer_name = 'bert-base-uncased'
tokenizer = AutoTokenizer.from_pretrained(transformer_name, use_fast=True)
model = AutoModel.from_pretrained(transformer_name)
I then input some sentences into the tokenizer and get out input_ids and attention_mask. Notably, an attention_mask value of 0 means that the token was a [PAD] that I can ignore.
sentences = ['Deep learning is difficult yet very rewarding.',
'Deep learning is not easy.',
'But is rewarding if done right.']
tokenizer_result = tokenizer(sentences, max_length=32, padding=True, return_attention_mask=True, return_tensors='pt')
input_ids = tokenizer_result.input_ids
attention_mask = tokenizer_result.attention_mask
print(input_ids.shape) # torch.Size([3, 11])
print(input_ids)
# tensor([[ 101, 2784, 4083, 2003, 3697, 2664, 2200, 10377, 2075, 1012, 102],
# [ 101, 2784, 4083, 2003, 2025, 3733, 1012, 102, 0, 0, 0],
# [ 101, 2021, 2003, 10377, 2075, 2065, 2589, 2157, 1012, 102, 0]])
print(attention_mask.shape) # torch.Size([3, 11])
print(attention_mask)
# tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
# [1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0],
# [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0]])
Now, I call the BERT model to get the 768-D token embeddings (the top-layer hidden states).
model_result = model(input_ids, attention_mask=attention_mask, return_dict=True)
token_embeddings = model_result.last_hidden_state
print(token_embeddings.shape) # torch.Size([3, 11, 768])
So at this point, I have:
token embeddings in a [3, 11, 768] matrix: 3 sentences, 11 tokens, 768-D vector for each token.
attention mask in a [3, 11] matrix: 3 sentences, 11 tokens. A 1 value indicates non-[PAD].
How do I compute the mean / max over the vectors for the valid, non-[PAD] tokens?
I tried using the attention mask as a mask and then called torch.max(), but I don't get the right dimensions:
masked_token_embeddings = token_embeddings[attention_mask==1]
print(masked_token_embeddings.shape) # torch.Size([29, 768]) <-- WRONG. SHOULD BE [3, 11, 768]
pooled = torch.max(masked_token_embeddings, 1)
print(pooled.values.shape) # torch.Size([29]) <-- WRONG. SHOULD BE [3, 768]
What I really want is a tensor of shape [3, 768]. That is, a 768-D vector for each of the 3 sentences.
For max, you can multiply with attention_mask:
pooled = torch.max((token_embeddings * attention_mask.unsqueeze(-1)), axis=1)
For mean, you can likewise multiply by the mask, sum along the axis, and divide by the number of real tokens along that axis:
mean_pooled = (token_embeddings * attention_mask.unsqueeze(-1)).sum(axis=1) / attention_mask.sum(axis=-1).unsqueeze(-1)
In addition to @Quang's answer, you can have a look at the sentence_transformers Pooling layer.
For max pooling, they do this:
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
token_embeddings[input_mask_expanded == 0] = -1e9 # Set padding tokens to large negative value
pooled = torch.max(token_embeddings, 1)[0]
And for mean pooling they do the following:
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
sum_mask = input_mask_expanded.sum(1)
sum_mask = torch.clamp(sum_mask, min=1e-9)
pooled = sum_embeddings / sum_mask
The max pooling presented in the accepted answer will suffer when the max is negative, and the implementation from sentence transformers modifies token_embeddings in place, which throws an error when you want to use the embeddings for backpropagation:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation:
If you're interested in anything backprop related, you can do something like this:
input_mask_expanded = torch.where(attention_mask == 0, -1e9, 0.).unsqueeze(-1).expand(token_embeddings.size()).float()
pooled = torch.max(token_embeddings + input_mask_expanded, 1)[0]  # padding tokens pushed to a large negative value
It's the same idea of making all masked tokens very small, but it doesn't modify token_embeddings along the way.
Alex is right. Look at the hidden states for the strings that go into the tokenizer: for different strings, the padding tokens will have different embeddings.
So, in order to properly pool the embeddings, you need to ignore those padding vectors.
Let's say you want to get embeddings out of the last 4 layers of BERT (as it yields the best classification results):
# iterate over the last 4 layers and get embeddings for
# strings without having embeddings from PAD tokens
m = []
for i in range(len(hidden_states[0])):
    m.append([hidden_states[j+9][i, :, :][tokens["attention_mask"][i] != 0] for j in range(4)])

# average over all token embeddings
means = []
for i in range(len(hidden_states[0])):
    means.append(torch.stack(m[i]).mean(dim=1))

# stack embeddings for all strings
pooled = torch.stack(means).reshape(-1, 1, 3072)
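Putting the pieces from the answers above together, here is a minimal, self-contained sketch of mask-aware mean and max pooling that avoids in-place modification (the function name and the -1e9 fill value are my own choices):
import torch

def masked_mean_max(token_embeddings, attention_mask):
    # token_embeddings: [batch, seq_len, hidden]; attention_mask: [batch, seq_len]
    mask = attention_mask.unsqueeze(-1).float()                       # [batch, seq_len, 1]
    # Mean: zero out padded positions, then divide by the number of real tokens.
    mean_pooled = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
    # Max: push padded positions to a large negative value, out of place.
    max_pooled = token_embeddings.masked_fill(mask == 0, -1e9).max(dim=1).values
    return mean_pooled, max_pooled                                    # each [batch, hidden]

# For the three example sentences this yields tensors of shape [3, 768]:
# mean_vec, max_vec = masked_mean_max(token_embeddings, attention_mask)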

Data shuffling for Image Classification

I want to develop a CNN model to identify 24 hand signs in American Sign Language. I created a custom dataset that contains 3000 images for each hand sign i.e. 72000 images in the entire dataset.
For training the model, I would be using 80-20 dataset split (2400 images/hand sign in the training set and 600 images/hand sign in the validation set).
My question is:
Should I randomly shuffle the images when creating the dataset, and why?
Based on my previous experience, shuffling led to the validation loss being lower than the training loss and the validation accuracy higher than the training accuracy. Check this link.
Random shuffling of data is a standard procedure in all machine learning pipelines, and image classification is not an exception; its purpose is to break possible biases during data preparation - e.g. putting all the cat images first and then the dog ones in a cat/dog classification dataset.
Take for example the famous iris dataset:
from sklearn.datasets import load_iris
X, y = load_iris(return_X_y=True)
y
# result:
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2])
As you can clearly see, the dataset has been prepared in such a way that the first 50 samples are all of label 0, the next 50 of label 1, and the last 50 of label 2. Try to perform a 5-fold cross validation in such a dataset without shuffling and you'll find most of your folds containing only a single label; try a 3-fold CV, and all your folds will include only one label. Bad... BTW, it's not just a theoretical possibility, it has actually happened.
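To see this concretely, here is a small sketch (assuming scikit-learn) that prints which labels land in each test fold of a 3-fold CV on iris, without and with shuffling:
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import KFold

X, y = load_iris(return_X_y=True)

for shuffle in (False, True):
    kf = KFold(n_splits=3, shuffle=shuffle, random_state=0 if shuffle else None)
    print("shuffle =", shuffle)
    for _, test_idx in kf.split(X):
        # Without shuffling, each test fold contains exactly one label.
        print("  labels in test fold:", np.unique(y[test_idx]))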
Even if no such bias exists, shuffling never hurts, so we do it always just to be on the safe side (you never know...).
Based on my previous experience, it led to validation loss being lower than training loss and validation accuracy more than training accuracy. Check this link.
As noted in the answer there, it is highly unlikely that this was due to shuffling. Data shuffling is not anything sophisticated - essentially, it is just the equivalent of shuffling a deck of cards; it may have happened once that you insisted on "better" shuffling and subsequently you ended up with a straight flush hand, but obviously this was not due to the "better" shuffling of the cards.
Here are my two cents on the topic.
First of all, make sure to extract a test set that has an equal number of samples for each hand sign (hand sign #1: 500 samples, hand sign #2: 500 samples, and so on). I think this is referred to as stratified sampling.
When it comes to the training set, there is no huge mistake in shuffling the entire set. However, when splitting the training set into training and validation sets, make sure that the validation set is a good representation of the test set.
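A minimal sketch of such a stratified, shuffled split (assuming scikit-learn; the image_paths and labels arrays are hypothetical stand-ins for the real dataset):
from sklearn.model_selection import train_test_split

# Hypothetical stand-ins: one path and one label (0..23) per image.
image_paths = [f"img_{i:05d}.png" for i in range(240)]
labels = [i % 24 for i in range(240)]

# Stratified, shuffled 80/20 split: each hand sign keeps the same proportion
# in the training and validation parts.
train_paths, val_paths, train_labels, val_labels = train_test_split(
    image_paths, labels, test_size=0.2, stratify=labels, shuffle=True, random_state=42)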
One of my personal experiences with shuffling:
After splitting the training set into training and validation sets, the validation set turned out to be very easy to predict. Therefore, I saw good learning metric values. However, the performance of the model on the test set was horrible.

Tensorflow conv2d_transpose size error "Number of rows of out_backprop doesn't match computed"

I am creating a convolution autoencoder in tensorflow. I got this exact error:
tensorflow.python.framework.errors.InvalidArgumentError: Conv2DBackpropInput: Number of rows of out_backprop doesn't match computed: actual = 8, computed = 12
[[Node: conv2d_transpose = Conv2DBackpropInput[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/cpu:0"](conv2d_transpose/output_shape, Variable_1/read, MaxPool_1)]]
Relevant code:
l1d = tf.nn.relu(tf.nn.conv2d_transpose(l1da, w2, [10, 12, 12, 32], strides=[1, 1, 1, 1], padding='SAME'))
where
w2 = tf.Variable(tf.random_normal([5, 5, 32, 64], stddev=0.01))
I checked the shape of the input to conv2d_transpose, i.e. l1da, and it is correct (10x8x8x64). The batch size is 10, the input to this layer has shape 8x8x64, and the output is supposed to be 12x12x32.
What am I missing?
Found the error: padding should be 'VALID', not 'SAME'.
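For reference, a back-of-the-envelope check of the shapes (not part of the original code): conv2d_transpose validates its output_shape by applying the forward-convolution size formula to the requested 12x12 output, which must reproduce the 8x8 input. With a 5x5 kernel and stride 1:
padding='SAME':  ceil(12 / 1) = 12 rows expected in the input, but l1da has 8, which is exactly the "actual = 8, computed = 12" in the error.
padding='VALID': 12 - 5 + 1 = 8 rows expected, matching the 8x8 input, so 'VALID' makes the shapes consistent.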
