RoBERTa on local CPU: tensor mismatch at non-singleton dimension 1

I downloaded the https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment model to my local PC.
When I load the model directly from the Hugging Face Hub it works perfectly fine, but locally it gives me a tensor mismatch error.
self.MODEL = "C:/Users/metehan/project1/MLTools/twitter-roberta-base-sentiment"
self.model = AutoModelForSequenceClassification.from_pretrained(self.MODEL)
self.tokenizer = AutoTokenizer.from_pretrained(self.MODEL)
self.labels = ['Negative', 'Neutral', 'Positive']
The vocabulary sizes of the model and the tokenizer are the same, and I don't use a GPU, so the model, the tokenizer, and the inputs are all on the same device (CPU).
encoded_tweet = self.tokenizer(eng_tweet, return_tensors='pt')
output = self.model(**encoded_tweet)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
max_value = max(scores)
(base) C:\Users\metehan\project1>python test.py
Traceback (most recent call last):
File "C:\Users\metehan\project1\MLTools\analyze_tweets.py", line 34, in analyze
output = self.model(**encoded_tweet)
File "C:\Users\metehan\AppData\Roaming\Python\Python39\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\metehan\AppData\Roaming\Python\Python39\site-packages\transformers\models\roberta\modeling_roberta.py", line 1206, in forward
outputs = self.roberta(
File "C:\Users\metehan\AppData\Roaming\Python\Python39\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\metehan\AppData\Roaming\Python\Python39\site-packages\transformers\models\roberta\modeling_roberta.py", line 814, in forward
buffered_token_type_ids_expanded = buffered_token_type_ids.expand(batch_size, seq_length)
RuntimeError: The expanded size of the tensor (685) must match the existing size (514) at non-singleton dimension 1. Target sizes: [1, 685]. Tensor sizes: [1, 514]
I tried adding padding and truncation to the tokenizer, but an index error occurred. Passing a max length to the tokenizer didn't work either.
Any idea how to fix this?
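For reference, the error says the tokenized tweet is 685 tokens long while the model only has 514 position embeddings, so the input has to be truncated to fit. A minimal sketch of the kind of tokenizer call that usually avoids this, reusing the attribute names from the snippet above (an assumption, not a confirmed fix for the index error mentioned):

# Cap the encoded length at 512 so it fits within the model's 514-position limit
encoded_tweet = self.tokenizer(
    eng_tweet,
    truncation=True,
    max_length=512,
    return_tensors='pt',
)
output = self.model(**encoded_tweet)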

Related

Pytorch ResNet152 Model Not Predicting

I have a Pytorch resnet152 model, initialized with the following:
model = torchvision.models.resnet152()
model.load_state_dict(torch.load("resnet152_weights.pth"))
for parameter in model.parameters():
    parameter.requires_grad = False
model.fc = torch.nn.Linear(2048, 10)
And "resnet152_weights.pth" contains the weights of the model, exactly the same as torchvision.models.ResNet152_Weights.IMAGENET1K_V2. I downloaded the file manually because my IDE (PyCharm) could not fetch it from the URL.
When I train the model, the line output = model(images) raises the following error:
Traceback (most recent call last):
File "deep_learning_model.py", line 184, in <module>
main()
File "deep_learning_model.py", line 168, in main
model = train(model, 2)
File "deep_learning_model.py", line 141, in train
output = model(images)
File "torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "torchvision\models\resnet.py", line 285, in forward
return self._forward_impl(x)
File "torchvision\models\resnet.py", line 268, in _forward_impl
x = self.conv1(x)
File "torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "torch\nn\modules\conv.py", line 463, in forward
return self._conv_forward(input, self.weight, self.bias)
File "torch\nn\modules\conv.py", line 459, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: expected scalar type Byte but found Float
Can you please help me fix this bug? (If you need more code, please specify which block.)
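For what it's worth, this error from F.conv2d usually indicates a dtype mismatch: the image batch is still a uint8 (Byte) tensor while the model weights are float32. Since the data-loading code isn't shown this is only a guess, but a hedged sketch of the usual conversion looks like this:

# Convert uint8 images (0-255) to float32 in [0, 1] before the forward pass
images = images.float() / 255.0
output = model(images)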

How can I fix the “TypeError: 'Tensor' object is not callable” error in Pytorch?

I am trying to compute a linear function of an image's pixels, followed by a log softmax (it's for a classification task). I am not sure how to do this without getting errors. Here is the relevant code:
...
torch.nn.functional.nll_loss(output, target) # error happens here
...
def __init__(self):
    super(NetLin, self).__init__()
    self.in_out = torch.nn.Linear(28, 2)

def forward(self, input):
    out_sum = self.in_out(input)
    output = torch.nn.LogSoftmax(out_sum)
    return output
and the full error message I get is:
Traceback (most recent call last):
File "copy.py", line 98, in <module>
main()
File "copy.py", line 94, in main
train(args, net, device, train_loader, optimizer, epoch)
File "copy.py", line 21, in train
loss = torch.nn.functional.nll_loss(output, target)
File "/usr/local/lib/python3.7/site-packages/torch/nn/functional.py", line 2107, in nll_loss
dim = input.dim()
TypeError: 'Tensor' object is not callable
I have tried a few different solutions based on other answers online, but they just result in different error messages. Clearly I am doing something fundamentally wrong, but I haven't used PyTorch before, so I'm not sure what it is. Thank you.
Edit:
My code is now:
def train(args, model, device, train_loader, optimizer, epoch):
    if args.net == 'lin':
        model = NetLin()
    model.train()
    loss = nn.NLLLoss()
    for batch_idx, (data, target) in enumerate(train_loader):
        data.requires_grad = True
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        output = loss(model(input), target)
        F.nll_loss(output, target)
        loss.backward()
        optimizer.step()
        if batch_idx % 100 == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item()))

class NetLin(nn.Module):
    def __init__(self):
        super(NetLin, self).__init__()
        self.in_out = torch.nn.Linear(28 * 28, 2)

    def forward(self, input):
        input = input.view(-1, 28 * 28)
        out_sum = self.in_out(input)
        output = torch.nn.LogSoftmax(out_sum, dim=1)
        return output
and my error message is now:
Traceback (most recent call last):
File "copy.py", line 102, in <module>
main()
File "copy.py", line 98, in main
train(args, net, device, train_loader, optimizer, epoch)
File "copy.py", line 24, in train
output = loss(model(input), target)
File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/Users/.../copy.py", line 15, in forward
input = input.view(-1, 28 * 28)
AttributeError: 'builtin_function_or_method' object has no attribute 'view'
As you can kind of see, the data and target are read in from a file (they are from KMNIST, actually), so I can't control their format exactly, but I do know the image sizes are all [1, 28, 28], i.e. a 28x28 greyscale image. Also, the batch size is 64, in case that matters.
Did you remember to set your model to training mode in your train loop with model.train()? Also, nll_loss takes in 2 tensors, but the first entry (the input tensor) needs to have requires_grad=True before it goes through the model, which is also why you need to set model.train() before training.
So you would have something like this:
model = NetLin()
model.train()
loss = nn.NLLLoss()
input = torch.randn(7, 4, requires_grad=True) # your input image (tensor)
target = torch.tensor([1, 0]) # image label for image belonging to first class
output = loss(model(input), target)
I am also a bit concerned about your self.in_out = torch.nn.Linear(28, 2). This says that your linear layer is expecting 28 features, implying that your input images are either 7x4, 14x2 or 28x1, which doesn't seem right in my opinion? Aren't you using images of size 28x28 (very typical size in this context)? In which case, you would have your linear layer modified as self.in_out = torch.nn.Linear(28*28, 2), and your forward pass will have to be modified as follows:
def forward(self, input):
    input = input.view(-1, 28 * 28)
    out_sum = self.in_out(input)
    # use the functional form so log-softmax is actually applied,
    # instead of constructing an (uncalled) LogSoftmax module
    output = torch.nn.functional.log_softmax(out_sum, dim=1)
    return output
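As a side note, the edited train loop in the question also calls loss.backward() on the nn.NLLLoss module itself rather than on the computed loss value. A minimal sketch of how one training step is usually wired up, reusing names from the question (an illustration under those assumptions, not the original author's code):

criterion = nn.NLLLoss()
optimizer.zero_grad()
output = model(data)                    # `data` comes from the DataLoader, not `input`
loss_value = criterion(output, target)  # scalar loss tensor
loss_value.backward()                   # backpropagate through the computed loss
optimizer.step()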

Stack overflow on dask __array__

I have a rather simple program using dask:
import dask.array as darray
import numpy as np
X = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])
arr = darray.from_array(X)
arr = arr[:,0]
a = darray.min(arr)
b = darray.max(arr)
quantiles = darray.linspace(a, b, 4)
print(np.array(quantiles))
Running this program results in an error like this:
Traceback (most recent call last):
File "discretization.py", line 12, in <module>
print(np.array(quantiles))
File "/Users/zhujun/job/adf/local_training/venv/lib/python3.7/site-packages/dask/array/core.py", line 1341, in __array__
x = np.array(x)
File "/Users/zhujun/job/adf/local_training/venv/lib/python3.7/site-packages/dask/array/core.py", line 1341, in __array__
x = np.array(x)
File "/Users/zhujun/job/adf/local_training/venv/lib/python3.7/site-packages/dask/array/core.py", line 1341, in __array__
x = np.array(x)
[Previous line repeated 325 more times]
File "/Users/zhujun/job/adf/local_training/venv/lib/python3.7/site-packages/dask/array/core.py", line 1337, in __array__
x = self.compute()
File "/Users/zhujun/job/adf/local_training/venv/lib/python3.7/site-packages/dask/base.py", line 166, in compute
(result,) = compute(self, traverse=False, **kwargs)
File "/Users/zhujun/job/adf/local_training/venv/lib/python3.7/site-packages/dask/base.py", line 434, in compute
dsk = collections_to_dsk(collections, optimize_graph, **kwargs)
File "/Users/zhujun/job/adf/local_training/venv/lib/python3.7/site-packages/dask/base.py", line 220, in collections_to_dsk
[opt(dsk, keys, **kwargs) for opt, (dsk, keys) in groups.items()],
File "/Users/zhujun/job/adf/local_training/venv/lib/python3.7/site-packages/dask/base.py", line 220, in <listcomp>
[opt(dsk, keys, **kwargs) for opt, (dsk, keys) in groups.items()],
File "/Users/zhujun/job/adf/local_training/venv/lib/python3.7/site-packages/dask/array/optimization.py", line 42, in optimize
dsk = optimize_blockwise(dsk, keys=keys)
File "/Users/zhujun/job/adf/local_training/venv/lib/python3.7/site-packages/dask/blockwise.py", line 547, in optimize_blockwise
out = _optimize_blockwise(graph, keys=keys)
File "/Users/zhujun/job/adf/local_training/venv/lib/python3.7/site-packages/dask/blockwise.py", line 572, in _optimize_blockwise
if isinstance(layers[layer], Blockwise):
File "/anaconda3/lib/python3.7/abc.py", line 139, in __instancecheck__
return _abc_instancecheck(cls, instance)
RecursionError: maximum recursion depth exceeded in comparison
Python is version 3.7.1 and dask is version 2.15.0.
What is wrong with this program?
Thanks in advance.
linspace does not (yet) accept lazy inputs from other dask objects; you need real numbers. Use compute to materialize these numbers as follows:
a, b = dask.compute(darray.min(arr), darray.max(arr))
quantiles = darray.linspace(a, b, 4)
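Put together with the original program, a minimal end-to-end sketch of that fix might look like this (assuming the same toy array; dask.compute returns concrete NumPy scalars, which linspace accepts):

import dask
import dask.array as darray
import numpy as np

X = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])
arr = darray.from_array(X)[:, 0]

# Materialize min and max as concrete numbers before handing them to linspace
a, b = dask.compute(darray.min(arr), darray.max(arr))
quantiles = darray.linspace(a, b, 4)
print(quantiles.compute())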
With either one of these package combinations:
dask==2.15.0
numpy<1.16.0
toolz==0.9.0

dask==2.16.0
numpy<1.17.0
toolz==0.9.0
The following program can be executed without an issue:
import dask.array as darray
import numpy as np
X = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])
arr = darray.from_array(X)
arr = arr[:,0]
a = darray.min(arr)
b = darray.max(arr)
q0 = darray.linspace(a, b, 4)
print(np.array(q0))
The key in the package lists above is numpy; newer versions of numpy may cause an error.
As @mdurant suggested, the implementation of linspace does not yet accept lazy inputs, so the fact that these package combinations work may actually be a coincidence.
I will leave this question open until I fully understand what is happening here.

Neural Network Dense Layer Error in Shape attribute

I have created a feed-forward neural network, but it is giving a TypeError despite my changing the datatype of the parameter. I am really new to Keras and machine learning, so I would appreciate as detailed help as possible. I am attaching the code snippet and the error log below.
CODE:
num_of_features = X_train.shape[1]
nb_classes = Y_train.shape[1]

def baseline_model():
    def branch2(x):
        x = Dense(np.floor(num_of_features*50), activation='sigmoid')(x)
        x = Dropout(0.75)(x)
        x = Dense(np.floor(num_of_features*20), activation='sigmoid')(x)
        x = Dropout(0.5)(x)
        x = Dense(np.floor(num_of_features), activation='sigmoid')(x)
        x = Dropout(0.1)(x)
        return x

    main_input = Input(shape=(num_of_features,), name='main_input')
    x = main_input
    x = branch2(x)
    main_output = Dense(nb_classes, activation='softmax')(x)
    model = Model(input=main_input, output=main_output)
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy', 'categorical_crossentropy'])
    return model

model = baseline_model()
ERROR:
Traceback (most recent call last):
File "h2_fit_neural.py", line 143, in <module>
model = baseline_model()
File "h2_fit_neural.py", line 137, in baseline_model
x = branch2(x)
File "h2_fit_neural.py", line 124, in branch2
x = Dense(np.floor(num_of_features*50), activation='sigmoid')(x)
File "/home/shashank/tensorflow/lib/python3.6/site-packages/keras/engine/base_layer.py", line 432, in __call__
self.build(input_shapes[0])
File "/home/shashank/tensorflow/lib/python3.6/site-packages/keras/layers/core.py", line 872, in build
constraint=self.kernel_constraint)
File "/home/shashank/tensorflow/lib/python3.6/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "/home/shashank/tensorflow/lib/python3.6/site-packages/keras/engine/base_layer.py", line 249, in add_weight
weight = K.variable(initializer(shape),
File "/home/shashank/tensorflow/lib/python3.6/site-packages/keras/initializers.py", line 218, in __call__
dtype=dtype, seed=self.seed)
File "/home/shashank/tensorflow/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 4077, in random_uniform
dtype=dtype, seed=seed)
File "/home/shashank/tensorflow/lib/python3.6/site-packages/tensorflow/python/ops/random_ops.py", line 242, in random_uniform
rnd = gen_random_ops.random_uniform(shape, dtype, seed=seed1, seed2=seed2)
File "/home/shashank/tensorflow/lib/python3.6/site-packages/tensorflow/python/ops/gen_random_ops.py", line 674, in random_uniform
name=name)
File "/home/shashank/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 609, in _apply_op_helper
param_name=input_name)
File "/home/shashank/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 60, in _SatisfiesTypeConstraint
", ".join(dtypes.as_dtype(x).name for x in allowed_list)))
TypeError: Value passed to parameter 'shape' has DataType float32 not in list of allowed values: int32, int64
Why are you using np.floor for the sizes in your Dense layers? It produces a float, and you need an int there. Removing np.floor should solve your problem.
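For illustration, a sketch of branch2 with the unit counts cast explicitly to int (same layer sizes as in the question; wrapping with int() is just an alternative to dropping np.floor altogether):

def branch2(x):
    x = Dense(int(num_of_features * 50), activation='sigmoid')(x)
    x = Dropout(0.75)(x)
    x = Dense(int(num_of_features * 20), activation='sigmoid')(x)
    x = Dropout(0.5)(x)
    x = Dense(int(num_of_features), activation='sigmoid')(x)
    x = Dropout(0.1)(x)
    return x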

Dimension mismatch error with scikit pipeline FeatureUnion

This is my first post. I've been trying to combine features with FeatureUnion and Pipeline, but when I add a TF-IDF + SVD pipeline the test fails with a 'dimension mismatch' error. My task is simply to create a regression model to predict search relevance. Code and errors are reported below. Is there something wrong in my code?
df = read_tsv_data(input_file)
df = tokenize(df)
df_train, df_test = train_test_split(df, test_size=0.2, random_state=2016)
x_train = df_train['sq'].values
y_train = df_train['relevance'].values
x_test = df_test['sq'].values
y_test = df_test['relevance'].values

# char ngrams
char_ngrams = CountVectorizer(ngram_range=(2, 5), analyzer='char_wb', encoding='utf-8')
# TFIDF word ngrams
tfidf_word_ngrams = TfidfVectorizer(ngram_range=(1, 4), analyzer='word', encoding='utf-8')
# SVD
svd = TruncatedSVD(n_components=100, random_state=2016)
# SVR
svr_lin = SVR(kernel='linear', C=0.01)

pipeline = Pipeline([
    ('feature_union',
     FeatureUnion(
         transformer_list=[
             ('char_ngrams', char_ngrams),
             ('char_ngrams_svd_pipeline', make_pipeline(char_ngrams, svd)),
             ('tfidf_word_ngrams', tfidf_word_ngrams),
             ('tfidf_word_ngrams_svd', make_pipeline(tfidf_word_ngrams, svd))
         ]
     )
    ),
    ('svr_lin', svr_lin)
])

model = pipeline.fit(x_train, y_train)
y_pred = model.predict(x_test)
When adding the pipeline below to the FeatureUnion list:
('tfidf_word_ngrams_svd', make_pipeline(tfidf_word_ngrams, svd))
The exception below is generated:
2016-07-31 10:34:08,712 : Testing ... Test Shape: (400,) - Training Shape: (1600,)
Traceback (most recent call last):
File "src/model/end_to_end_pipeline.py", line 236, in <module>
main()
File "src/model/end_to_end_pipeline.py", line 233, in main
process_data(input_file, output_file)
File "src/model/end_to_end_pipeline.py", line 175, in process_data
y_pred = model.predict(x_test)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn/utils/metaestimators.py", line 37, in <lambda>
out = lambda *args, **kwargs: self.fn(obj, *args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn/pipeline.py", line 203, in predict
Xt = transform.transform(Xt)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn/pipeline.py", line 523, in transform
for name, trans in self.transformer_list)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.py", line 800, in __call__
while self.dispatch_one_batch(iterator):
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.py", line 658, in dispatch_one_batch
self._dispatch(tasks)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.py", line 566, in _dispatch
job = ImmediateComputeBatch(batch)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.py", line 180, in __init__
self.results = batch()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.py", line 72, in __call__
return [func(*args, **kwargs) for func, args, kwargs in self.items]
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn/pipeline.py", line 399, in _transform_one
return transformer.transform(X)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn/utils/metaestimators.py", line 37, in <lambda>
out = lambda *args, **kwargs: self.fn(obj, *args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn/pipeline.py", line 291, in transform
Xt = transform.transform(Xt)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn/decomposition/truncated_svd.py", line 201, in transform
return safe_sparse_dot(X, self.components_.T)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn/utils/extmath.py", line 179, in safe_sparse_dot
ret = a * b
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/sparse/base.py", line 389, in __mul__
raise ValueError('dimension mismatch')
ValueError: dimension mismatch
What if you change the second svd usage to a new svd object?
transformer_list = [
    ('char_ngrams', char_ngrams),
    ('char_ngrams_svd_pipeline', make_pipeline(char_ngrams, svd)),
    ('tfidf_word_ngrams', tfidf_word_ngrams),
    ('tfidf_word_ngrams_svd', make_pipeline(tfidf_word_ngrams, clone(svd)))
]
It seems your problem occurs because you're using the same object twice. It is fitted the first time on the CountVectorizer output and the second time on the TfidfVectorizer output (or vice versa), so when you call predict on the whole pipeline this single svd object cannot handle the CountVectorizer's output, because it was last fitted on the TfidfVectorizer's output (or, again, vice versa).
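For completeness, a sketch of the FeatureUnion with a fresh TruncatedSVD per sub-pipeline, reusing the names from the question (note that clone comes from sklearn.base; cloning the SVD for both sub-pipelines keeps every fitted state independent):

from sklearn.base import clone

feature_union = FeatureUnion(transformer_list=[
    ('char_ngrams', char_ngrams),
    ('char_ngrams_svd_pipeline', make_pipeline(char_ngrams, clone(svd))),
    ('tfidf_word_ngrams', tfidf_word_ngrams),
    ('tfidf_word_ngrams_svd', make_pipeline(tfidf_word_ngrams, clone(svd))),
])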
