scikit-neuralnetwork mismatch error in dataset size - machine-learning

I'm trying to train an MLP classifier for the XOR problem using sknn.mlp:
import numpy
from sknn.mlp import Classifier, Layer

X = numpy.array([[0, 1], [0, 0], [1, 0]])
print X.shape
y = numpy.array([[1], [0], [1]])
print y.shape
nn = Classifier(layers=[Layer("Sigmoid", units=2), Layer("Sigmoid", units=1)], n_iter=100)
nn.fit(X, y)
This results in:
No handlers could be found for logger "sknn"
Traceback (most recent call last):
File "xorclassifier.py", line 10, in <module>
nn.fit(X,y)
File "/usr/local/lib/python2.7/site-packages/sknn/mlp.py", line 343, in fit
return super(Classifier, self)._fit(X, yp)
File "/usr/local/lib/python2.7/site-packages/sknn/mlp.py", line 179, in _fit
X, y = self._initialize(X, y)
File "/usr/local/lib/python2.7/site-packages/sknn/mlp.py", line 37, in _initialize
self._create_specs(X, y)
File "/usr/local/lib/python2.7/site-packages/sknn/mlp.py", line 64, in _create_specs
"Mismatch between dataset size and units in output layer."
AssertionError: Mismatch between dataset size and units in output layer.

scikit-neuralnetwork seems to turn your y vector into a one-hot matrix of shape (n_samples, n_classes); n_classes is two in your case, so the output layer needs two units. So try
nn = Classifier(layers=[Layer("Sigmoid", units=2), Layer("Sigmoid", units=2)], n_iter=100)
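If that's the case, a minimal corrected version of the snippet (everything else unchanged) would be:
import numpy
from sknn.mlp import Classifier, Layer

X = numpy.array([[0, 1], [0, 0], [1, 0]])
y = numpy.array([[1], [0], [1]])   # internally expanded to a (3, 2) one-hot matrix, one column per class
nn = Classifier(layers=[Layer("Sigmoid", units=2), Layer("Sigmoid", units=2)], n_iter=100)
nn.fit(X, y)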

Related

I'm getting incomprehensible errors with a U-Net

C:\Users\Viktor\miniconda3\lib\site-packages\torch\utils\data_utils\collate.py:172: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at C:\cb\pytorch_1000000000000\work\torch\csrc\utils\tensor_numpy.cpp:205.)
return collate([torch.as_tensor(b) for b in batch], collate_fn_map=collate_fn_map)
(the same warning is repeated for each subsequent batch)
the indiex is : 0 rest is: torch.Size([64, 240, 320, 3]) torch.Size([64, 240, 320, 3])
Traceback (most recent call last):
File "c:\Users\Viktor\Desktop\Infrarens.py", line 174, in
outputs = model(inputs)
File "C:\Users\Viktor\miniconda3\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "c:\Users\Viktor\Desktop\Infrarens.py", line 135, in forward
x = self.encoder(x)
File "C:\Users\Viktor\miniconda3\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\Viktor\miniconda3\lib\site-packages\torch\nn\modules\container.py", line 204, in forward
input = module(input)
File "C:\Users\Viktor\miniconda3\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\Viktor\miniconda3\lib\site-packages\torch\nn\modules\conv.py", line 463, in forward
return self._conv_forward(input, self.weight, self.bias)
File "C:\Users\Viktor\miniconda3\lib\site-packages\torch\nn\modules\conv.py", line 459, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [64, 1, 3, 3], expected input[64, 240, 320, 3] to have 1 channels, but got 240 channels instead
I'm trying to train a U-Net on an image set, and I don't know how to interpret this output.
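The error message itself points to a layout/channel mismatch: the first convolution's weights are [64, 1, 3, 3], so it expects NCHW input with a single channel, while the batch is [64, 240, 320, 3], i.e. NHWC with 3 channels. A minimal sketch of one possible fix (the tensor below is only a stand-in for the real batch, and the grayscale conversion assumes the U-Net really is meant to take one-channel input):
import torch

inputs = torch.zeros(64, 240, 320, 3)          # stand-in batch, NHWC with 3 channels
inputs = inputs.permute(0, 3, 1, 2).float()    # -> NCHW: (64, 3, 240, 320)
inputs = inputs.mean(dim=1, keepdim=True)      # -> (64, 1, 240, 320), crude grayscale conversion
# Alternatively, keep 3 channels and change the first Conv2d to in_channels=3.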

ZeroDivisionError: float division by zero during net_segment inference patch aggregation

I ran (on Ubuntu 16.04 in a Google Cloud VM Instance):
net_segment inference -c <path-to-config>
for a binary segmentation problem using unet_2d with softmax and a (96,96,1) spatial window.
This was after I trained my model for 10 epochs and saved the checkpoint. I'm not sure why it's raising a zero division error
from windows_aggregator_resize.py. What is the cause of this issue and what can I do to fix it?
Here are some inference settings and the corresponding error:
pixdim: (1.0, 1.0, 1.0)
[NETWORK]
batch_size: 1
cutoff: (0.01, 0.99)
name: unet_2d
normalisation: False
volume_padding_size: (96, 96, 0)
reg_type: L2
window_sampling: resize
multimod_foreground_type: and
[INFERENCE]
border = (96,96,0)
inference_iter = -1
output_interp_order = 0
spatial_window_size = (96,96,2)
INFO:niftynet: Accessing /home/xchaosfailx1/niftynet/models/MSD/heart_la_seg/models/model.ckpt-10 ...
INFO:niftynet: Restoring parameters from /home/xchaosfailx1/niftynet/models/MSD/heart_la_seg/models/model.ckpt-10
INFO:niftynet: Cleaning up...
WARNING:niftynet: stopped early, incomplete loops
INFO:niftynet: stopping sampling threads
INFO:niftynet: SegmentationApplication stopped (time in second 17.07).
Traceback (most recent call last):
File "/home/xchaosfailx1/.local/bin/net_segment", line 11, in <module>
sys.exit(main())
File "/home/xchaosfailx1/.local/lib/python3.5/site-packages/niftynet/__init__.py", line 139, in main
app_driver.run_application()
File "/home/xchaosfailx1/.local/lib/python3.5/site-packages/niftynet/engine/application_driver.py", line 275, in run_application
self._inference_loop(session, loop_status)
File "/home/xchaosfailx1/.local/lib/python3.5/site-packages/niftynet/engine/application_driver.py", line 493, in _inference_loop
self._loop(iter_generator(itertools.count(), INFER), sess, loop_status)
File "/home/xchaosfailx1/.local/lib/python3.5/site-packages/niftynet/engine/application_driver.py", line 442, in _loop
iter_msg.current_iter_output[NETWORK_OUTPUT])
File "/home/xchaosfailx1/.local/lib/python3.5/site-packages/niftynet/application/segmentation_application.py", line 390, in interpret_output
batch_output['window'], batch_output['location'])
File "/home/xchaosfailx1/.local/lib/python3.5/site-packages/niftynet/engine/windows_aggregator_resize.py", line 55, in decode_batch
self._save_current_image(window[batch_id, ...], resize_to_shape)
File "/home/xchaosfailx1/.local/lib/python3.5/site-packages/niftynet/engine/windows_aggregator_resize.py", line 82, in _save_current_image
[float(p) / float(d) for p, d in zip(window_shape, image_shape)]
File "/home/xchaosfailx1/.local/lib/python3.5/site-packages/niftynet/engine/windows_aggregator_resize.py", line 82, in <listcomp>
[float(p) / float(d) for p, d in zip(window_shape, image_shape)]
ZeroDivisionError: float division by zero
For reproducing the error:
changed the padding in niftynet.network.unet_2d.py from valid to same
dataset [Task2_Heart] : https://drive.google.com/drive/folders/1HqEgzS8BV2c7xYNrZdEAnrHk7osJJ--2
updated config:
https://drive.google.com/open?id=1RI111BZLv4Lhf9cGvHo_sAHRt_k5Xt0I
Didn't check the inference data, but I think spatial_window_size under [INFERENCE] should be (96, 96, 1), as that's what you set in training.
The mistake I made was that I set the border (96,96,0) under [INFERENCE] to the same shape as my spatial window (96,96,1), so when the batch was cropped in decode_batch, the cropped image had zeros in its shape. Hence, when the zoom ratio was calculated in _save_current_image, it led to a ZeroDivisionError. The temporary fix was to remove volume padding and change the border to (0,0,0).
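Roughly illustrated (the numbers are only indicative; the exact cropping in decode_batch isn't reproduced here), a border as large as the spatial window leaves a zero-sized dimension, and the zoom-ratio computation from windows_aggregator_resize.py then divides by zero:
window_shape = (96, 96, 1)    # spatial window at inference
image_shape = (0, 0, 1)       # cropped shape once the border consumes the whole window
zoom_ratio = [float(p) / float(d) for p, d in zip(window_shape, image_shape)]   # ZeroDivisionError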

Saving extracted features in CNN

I've just started learning machine learning algorithms. I would like to train a VGG-16 network on my own dataset, and I am using tflearn.DNN to simulate the VGG net.
I want to save the output (a tensor) of the fully connected layer, which extracts 4096 features, to a file. I wanted to know how to save these features.
When I ran the following lines:
feed_dict = feed_dict_builder(X, Y, model.inputs, model.targets)
output = model.predictor.evaluate(feed_dict, convnet1)
print(output)
output.save('features.npy')
I got the following exception and error:
Exception in thread Thread-48:
Traceback (most recent call last):
File "/home/anupama/anaconda3/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/home/anupama/anaconda3/lib/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "/home/anupama/anaconda3/lib/python3.6/site-packages/tflearn/data_flow.py", line 187, in fill_feed_dict_queue
data = self.retrieve_data(batch_ids)
File "/home/anupama/anaconda3/lib/python3.6/site-packages/tflearn/data_flow.py", line 222, in retrieve_data
utils.slice_array(self.feed_dict[key], batch_ids)
File "/home/anupama/anaconda3/lib/python3.6/site-packages/tflearn/utils.py", line 180, in slice_array
return [x[start] for x in X]
File "/home/anupama/anaconda3/lib/python3.6/site-packages/tflearn/utils.py", line 180, in <listcomp>
return [x[start] for x in X]
IndexError: index 2 is out of bounds for axis 1 with size 2
[0.0]
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-23-f2d62c020964> in <module>()
4 output = model.predictor.evaluate(feed_dict, convnet1)
5 print(output)
----> 6 output.save('/home/anupama/Internship/feats')
AttributeError: 'list' object has no attribute 'save'
You should save the FC layer of the network as a separate tensor and use DNN.predictor to evaluate it. Sample code:
import numpy as np
import tflearn
from tflearn.utils import feed_dict_builder
# VGG model definition
...
previous_layer = ...
fc_layer1 = tflearn.fully_connected(previous_layer, 4096, activation='relu', name='fc1')
fc_layer2 = tflearn.fully_connected(fc_layer1, 4096, activation='relu', name='fc2')
network = ...
# Training
model = tflearn.DNN(network)
model.fit(x, y)
# Evaluation
feed_dict = feed_dict_builder(x, y, model.inputs, model.targets)
output = model.predictor.evaluate(feed_dict, [fc_layer2])
np.save('features.npy', output)

tensorflow.python.framework.errors_impl.InvalidArgumentError: Negative dimension size caused by subtracting 2 from 1 for 'max_pooling2d_1/MaxPool'

I am trying to implement a neural network used for image classification with Keras and Tensorflow, according to the tutorial from here.
I added the following code:
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(3, 150, 150)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
However, the problem is that I am getting:
Traceback (most recent call last):
File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/common_shapes.py", line 671, in _call_cpp_shape_fn_impl
input_tensors_as_shapes, status)
File "/opt/conda/lib/python3.6/contextlib.py", line 89, in __exit__
next(self.gen)
File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 466, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Negative dimension size caused by subtracting 2 from 1 for 'max_pooling2d_1/MaxPool' (op: 'MaxPool') with input shapes: [?,1,148,32].
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "../src/script.py", line 49, in <module>
model.add(MaxPooling2D(pool_size=(2, 2)))
File "/opt/conda/lib/python3.6/site-packages/Keras-2.0.5-py3.6.egg/keras/models.py", line 469, in add
File "/opt/conda/lib/python3.6/site-packages/Keras-2.0.5-py3.6.egg/keras/engine/topology.py", line 596, in __call__
File "/opt/conda/lib/python3.6/site-packages/Keras-2.0.5-py3.6.egg/keras/layers/pooling.py", line 154, in call
File "/opt/conda/lib/python3.6/site-packages/Keras-2.0.5-py3.6.egg/keras/layers/pooling.py", line 217, in _pooling_function
File "/opt/conda/lib/python3.6/site-packages/Keras-2.0.5-py3.6.egg/keras/backend/tensorflow_backend.py", line 3378, in pool2d
File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/ops/nn_ops.py", line 1769, in max_pool
name=name)
File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/ops/gen_nn_ops.py", line 1605, in _max_pool
data_format=data_format, name=name)
File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op
op_def=op_def)
File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2508, in create_op
set_shapes_for_outputs(ret)
File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1873, in set_shapes_for_outputs
shapes = shape_func(op)
File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1823, in call_with_requiring
return call_cpp_shape_fn(op, require_shape_fn=True)
File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/common_shapes.py", line 610, in call_cpp_shape_fn
debug_python_shape_fn, require_shape_fn)
File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/common_shapes.py", line 676, in _call_cpp_shape_fn_impl
raise ValueError(err.message)
ValueError: Negative dimension size caused by subtracting 2 from 1 for 'max_pooling2d_1/MaxPool' (op: 'MaxPool') with input shapes: [?,1,148,32].
After that, I looked at a possible answer and changed the last line to this:
model.add(MaxPooling2D(pool_size=(2, 2), dim_ordering="tf"))
But after this change I am still getting the same error.
Any idea what could be wrong?
The code you provided was written assuming that your backend is Theano (channels-first). With the TensorFlow backend your input should have shape (width, height, channels), so you should change this line:
model.add(Conv2D(32, (3, 3), input_shape=(150, 150, 3)))
Your problem comes from the fact that after the convolution (with 'valid' padding) your output has shape (1, 148, 32), so it's impossible to apply MaxPooling2D with a (2, 2) pool (and matching default stride) along a dimension of size 1.
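Putting that together with the snippet from the question, a minimal corrected version for the TensorFlow backend would be:
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Activation

model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(150, 150, 3)))   # channels-last: (width, height, channels)
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))                  # conv output is (148, 148, 32), so pooling works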

Unsupervised loss function in Keras

Is there any way in Keras to specify a loss function which does not need to be passed target data?
I attempted to specify a loss function which omitted the y_true parameter like so:
def custom_loss(y_pred):
But I got the following error:
Traceback (most recent call last):
File "siamese.py", line 234, in <module>
model.compile(loss=custom_loss,optimizer=Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0))
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 911, in compile
sample_weight, mask)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 436, in weighted
score_array = fn(y_true, y_pred)
TypeError: custom_loss() takes exactly 1 argument (2 given)
I then tried to call fit() without specifying any target data:
model.fit(x=[x_train,x_train_warped, affines], batch_size = bs, epochs=1)
But it looks like not passing any target data causes an error:
Traceback (most recent call last):
File "siamese.py", line 264, in <module>
model.fit(x=[x_train,x_train_warped, affines], batch_size = bs, epochs=1)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1435, in fit
batch_size=batch_size)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1322, in _standardize_user_data
in zip(y, sample_weights, class_weights, self._feed_sample_weight_modes)]
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 577, in _standardize_weights
return np.ones((y.shape[0],), dtype=K.floatx())
AttributeError: 'NoneType' object has no attribute 'shape'
I could manually create dummy data in the same shape as my neural net's output but this seems extremely messy. Is there a simple way to specify an unsupervised loss function in Keras that I am missing?
I think the best solution is to customize the training loop instead of using the model.fit method.
A complete walkthrough is published on the TensorFlow tutorials page.
Write your loss function as if it had two arguments:
y_true
y_pred
If you don't have y_true, that's fine: you don't need to use it inside the function to compute the loss, but leave a placeholder in your function signature so Keras won't complain.
def custom_loss(y_true, y_pred):
# do things with y_pred
return loss
Adding custom arguments
You may also need another parameter, such as margin, inside your loss function; even then, your custom function should take only those two arguments. The workaround is to use a lambda function:
def custom_loss(y_pred, margin):
# do things with y_pred
return loss
and use it like this:
model.compile(loss=lambda y_true, y_pred: custom_loss(y_pred, margin), ...)
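For completeness, a minimal end-to-end sketch of how the pieces fit together; the optimizer settings, margin value, and dummy-target shape below are placeholders and would need to match your own network:
import numpy as np

margin = 1.0   # example hyperparameter passed in through the lambda

model.compile(loss=lambda y_true, y_pred: custom_loss(y_pred, margin),
              optimizer=Adam(lr=0.001))
# Keras still standardizes a target array, so pass dummy targets whose shape matches the model output:
dummy_y = np.zeros((len(x_train), 1))
model.fit(x=[x_train, x_train_warped, affines], y=dummy_y, batch_size=bs, epochs=1)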
