Unsupervised loss function in Keras - machine-learning

Is there any way in Keras to specify a loss function which does not need to be passed target data?
I attempted to specify a loss function which omitted the y_true parameter like so:
def custom_loss(y_pred):
But I got the following error:
Traceback (most recent call last):
File "siamese.py", line 234, in <module>
model.compile(loss=custom_loss,optimizer=Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0))
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 911, in compile
sample_weight, mask)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 436, in weighted
score_array = fn(y_true, y_pred)
TypeError: custom_loss() takes exactly 1 argument (2 given)
I then tried to call fit() without specifying any target data:
model.fit(x=[x_train,x_train_warped, affines], batch_size = bs, epochs=1)
But it looks like not passing any target data causes an error:
Traceback (most recent call last):
File "siamese.py", line 264, in <module>
model.fit(x=[x_train,x_train_warped, affines], batch_size = bs, epochs=1)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1435, in fit
batch_size=batch_size)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1322, in _standardize_user_data
in zip(y, sample_weights, class_weights, self._feed_sample_weight_modes)]
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 577, in _standardize_weights
return np.ones((y.shape[0],), dtype=K.floatx())
AttributeError: 'NoneType' object has no attribute 'shape'
I could manually create dummy data in the same shape as my neural net's output, but this seems extremely messy. Is there a simple way to specify an unsupervised loss function in Keras that I am missing?

I think the best solution is to customize the training loop instead of using the model.fit method.
A complete walkthrough is published on the TensorFlow tutorials page.
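For reference, here is a minimal sketch of such a custom loop in TF2 style. The tiny model, the random data, and the mean-square penalty on the predictions are placeholder assumptions; the point is that no target data appears anywhere:
import numpy as np
import tensorflow as tf

# Placeholder model and unlabeled data, purely for illustration
model = tf.keras.Sequential([tf.keras.layers.Dense(8, activation="relu"),
                             tf.keras.layers.Dense(1)])
optimizer = tf.keras.optimizers.Adam(1e-3)
dataset = tf.data.Dataset.from_tensor_slices(
    np.random.rand(256, 4).astype("float32")).batch(32)

@tf.function
def train_step(x):
    with tf.GradientTape() as tape:
        y_pred = model(x, training=True)
        # Loss computed from predictions alone -- no y_true needed
        loss = tf.reduce_mean(tf.square(y_pred))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

for epoch in range(3):
    for x_batch in dataset:
        loss = train_step(x_batch)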

Write your loss function as if it had two arguments:
y_true
y_pred
If you don't have y_true, that's fine: you don't need to use it inside the function to compute the loss, but leave a placeholder in your function prototype so Keras won't complain.
def custom_loss(y_true, y_pred):
# do things with y_pred
return loss
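For instance, a minimal sketch where the loss depends only on the predictions (the mean-square penalty here is purely illustrative):
from keras import backend as K

def custom_loss(y_true, y_pred):
    # y_true is a required placeholder but is never used
    return K.mean(K.square(y_pred))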
Adding custom arguments
You may also need another parameter, such as a margin, inside your loss function, yet the custom function must still take only those two arguments. There is a workaround: use a lambda function.
def custom_loss(y_pred, margin):
# do things with y_pred
return loss
but use it like this:
model.compile(loss=lambda y_true, y_pred: custom_loss(y_pred, margin), ...)
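Keras will still demand some target array at fit() time even though the loss never reads it, so pass a dummy array shaped like the model's output. A minimal sketch, assuming a single-input model with output shape (batch, 1) and a margin already defined:
import numpy as np

margin = 1.0  # illustrative value
model.compile(loss=lambda y_true, y_pred: custom_loss(y_pred, margin),
              optimizer='adam')
dummy_y = np.zeros((len(x_train), 1))  # shaped like the output, never read by the loss
model.fit(x_train, dummy_y, batch_size=bs, epochs=1)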

Related

I'm getting incomprehensible errors with a U-Net

C:\Users\Viktor\miniconda3\lib\site-packages\torch\utils\data\_utils\collate.py:172: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at C:\cb\pytorch_1000000000000\work\torch\csrc\utils\tensor_numpy.cpp:205.)
return collate([torch.as_tensor(b) for b in batch], collate_fn_map=collate_fn_map)
(the same warning is printed four times)
the indiex is : 0 rest is: torch.Size([64, 240, 320, 3]) torch.Size([64, 240, 320, 3])
Traceback (most recent call last):
File "c:\Users\Viktor\Desktop\Infrarens.py", line 174, in
outputs = model(inputs)
File "C:\Users\Viktor\miniconda3\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "c:\Users\Viktor\Desktop\Infrarens.py", line 135, in forward
x = self.encoder(x)
File "C:\Users\Viktor\miniconda3\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\Viktor\miniconda3\lib\site-packages\torch\nn\modules\container.py", line 204, in forward
input = module(input)
File "C:\Users\Viktor\miniconda3\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\Viktor\miniconda3\lib\site-packages\torch\nn\modules\conv.py", line 463, in forward
return self._conv_forward(input, self.weight, self.bias)
File "C:\Users\Viktor\miniconda3\lib\site-packages\torch\nn\modules\conv.py", line 459, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [64, 1, 3, 3], expected input[64, 240, 320, 3] to have 1 channels, but got 240 channels instead
I'm trying to train a U-Net on an image set, and I don't know how to interpret this output.
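No fix was posted, but the error message itself points at the cause: PyTorch convolutions expect NCHW layout, while the printed batch shape [64, 240, 320, 3] is NHWC, and the first conv layer (weight [64, 1, 3, 3]) was built for 1 input channel rather than 3. A hedged sketch of the usual remedies:
import torch

x = torch.rand(64, 240, 320, 3)          # NHWC, as in the traceback
x = x.permute(0, 3, 1, 2).contiguous()   # -> NCHW: [64, 3, 240, 320]

# Either build the first conv for 3 input channels...
conv = torch.nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, padding=1)
out = conv(x)

# ...or collapse the images to a single channel to match in_channels=1
x_gray = x.mean(dim=1, keepdim=True)     # [64, 1, 240, 320]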

no error while fitting the model over train data but NotFittedError while predicting over test set

A NotFittedError comes up when using .predict, even though there is no error during fit.
I tried converting the dataframe into arrays, but I still get the same error.
Input:
rfg(n_estimators=500,random_state=42).fit(X=data_withoutnull1.iloc[:,1:8],y=data_withoutnull1['LotFrontage'])
rfg(n_estimators=500,random_state=42).predict(datawithnull1.iloc[:,1:8])
Output:
Traceback (most recent call last):
File "<ipython-input-477-10c6d72bcc12>", line 2, in <module>
rfg(n_estimators=500,random_state=42).predict(datawithnull1.iloc[:,1:8])
File "/home/sinikoibra/miniconda3/envs/pv36/lib/python3.6/site-packages/sklearn/ensemble/forest.py", line 691, in predict
check_is_fitted(self, 'estimators_')
File "/home/sinikoibra/miniconda3/envs/pv36/lib/python3.6/site-packages/sklearn/utils/validation.py", line 914, in check_is_fitted
raise NotFittedError(msg % {'name': type(estimator).__name__})
NotFittedError: This RandomForestRegressor instance is not fitted yet. Call 'fit' with appropriate arguments before using this method.
Try like this (the code above creates two separate RandomForestRegressor instances, so the one calling predict was never fitted; keep the fitted estimator in a single variable):
# Define X and y
X=data_withoutnull1.iloc[:,1:8].values
y=data_withoutnull1['LotFrontage']
You can use train test split to split the data into training set and testing set then pass the testing set into predict.
# pass X_train to fit -- training the model: fit(X_train, y_train)
# pass X_test to predict -- used for prediction: predict(X_test)
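A sketch of that split, reusing X and y defined above:
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
rfg = RandomForestRegressor(n_estimators=500, random_state=42)
rfg.fit(X_train, y_train)     # the same fitted instance...
y_pred = rfg.predict(X_test)  # ...is the one used to predict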
Or fit the Random Forest regression to the whole dataset:
from sklearn.ensemble import RandomForestRegressor
rfg= RandomForestRegressor(n_estimators = 500, random_state = 42)
rfg.fit(X, y)
# Predicting a new result
y_pred = rfg.predict(X_test)  # or any rows to predict, e.g. datawithnull1.iloc[:, 1:8]

TypeError: len() of unsized object in Python Extreme Learning Machine (ELM) library

I have installed the elm library for Python. There is an example provided at this link: http://elm.readthedocs.io/en/latest/usage.html. The code is:
import elm
# download an example dataset from
# https://github.com/acba/elm/tree/develop/tests/data
# load dataset
data = elm.read("iris.data")
# create a classifier
elmk = elm.ELMKernel()
# search for best parameter for this dataset
# define "kfold" cross-validation method, "accuracy" as a objective function
# to be optimized and perform 10 searching steps.
# best parameters will be saved inside 'elmk' object
elmk.search_param(data, cv="kfold", of="accuracy", eval=10)
# split data in training and testing sets
# use 80% of dataset to training and shuffle data before splitting
tr_set, te_set = elm.split_sets(data, training_percent=.8, perm=True)
#train and test
# results are Error objects
tr_result = elmk.train(tr_set)
te_result = elmk.test(te_set)
print(te_result.get_accuracy)
When I run the code, I get the error below. It would be a great help if someone could point out what is causing the problem. I downloaded the dataset from the URL provided in the link. My elm package version is 0.1.1 and my Python version is 3.5.2. Thanks in advance.
Error is:
Traceback (most recent call last):
File "F:\7th semester\machine language\thesis work\python\Applying ELM in iris dataset\elm1.py", line 17, in <module>
elmk.search_param(data, cv="kfold", of="accuracy", eval=10)
File "C:\Users\maisha\AppData\Local\Programs\Python\Python35\lib\site-packages\elm\elmk.py", line 489, in search_param
param_kernel=param_ranges[1])
File "C:\Users\maisha\AppData\Local\Programs\Python\Python35\lib\site-packages\optunity\api.py", line 212, in minimize
pmap=pmap)
File "C:\Users\maisha\AppData\Local\Programs\Python\Python35\lib\site-packages\optunity\api.py", line 245, in optimize
solution, report = solver.optimize(f, maximize, pmap=pmap)
File "C:\Users\maisha\AppData\Local\Programs\Python\Python35\lib\site-packages\optunity\solvers\CMAES.py", line 139, in optimize
sigma=self.sigma)
File "C:\Users\maisha\AppData\Local\Programs\Python\Python35\lib\site-packages\deap\cma.py", line 90, in __init__
self.dim = len(self.centroid)
TypeError: len() of unsized object

Keras: ValueError: No data provided for "input_1". Need data for each key

I am using the keras functional API with input images of dimension (224, 224, 3). I have the following model using the functional API, although a similar problem seems to arise with sequential models:
input = Input(shape=(224, 224, 3,))
shared_layers = Dense(16)(input)
model = KerasModel(input=input, output=shared_layers)
model.compile(loss='binary_crossentropy', optimizer='sgd', metrics=['accuracy'])
I am calling model.fit_generator, where my generator yields
yield ({'input_1': image}, {'output': classification})
image is the input (224, 224, 3) image and classification is in {-1,1}.
On fitting the model, I get an error
ValueError: No data provided for "dense_1". Need data for each key in: ['dense_1']
One strange thing is that if I switch the input_1 target of the dict to dense_1, the error switches to missing an input for input_1, but goes back to missing dense_1 if both keys are in the data generator.
This happens whether I call fit_generator or get batches from the generator and call train_on_batch.
Does anyone know what's going on? From what I can tell, this should be the same as what is given in the documentation, although with a different input size.
Full traceback:
Traceback (most recent call last):
File "pymask.py", line 303, in <module>
main(sys.argv)
File "pymask.py", line 285, in main
keras.callbacks.ProgbarLogger()
File "/home/danielunderwood/virtualenvs/keras/lib/python3.6/site-packages/keras/engine/training.py", line 1557, in fit_generator
class_weight=class_weight)
File "/home/danielunderwood/virtualenvs/keras/lib/python3.6/site-packages/keras/engine/training.py", line 1314, in train_on_batch
check_batch_axis=True)
File "/home/danielunderwood/virtualenvs/keras/lib/python3.6/site-packages/keras/engine/training.py", line 1029, in _standardize_user_data
exception_prefix='model input')
File "/home/danielunderwood/virtualenvs/keras/lib/python3.6/site-packages/keras/engine/training.py", line 52, in standardize_input_data
str(names))
ValueError: No data provided for "input_1". Need data for each key in: ['input_1']
I encountered this error in three cases (in R):
The input data does not have the same dimension as was declared in the first layer
The input data includes missing values
The input data is not a matrix (for example, a data frame)
Please check all of the above.
Maybe this code in R can help:
library(keras)
#The network should identify the rule that a row sum greater than 1.5 should yield an output of 1
my_x=matrix(data=runif(30000), nrow=10000, ncol=3)
my_y=ifelse(rowSums(my_x)>1.5,1,0)
my_y=to_categorical(my_y, 2)
model = keras_model_sequential()
layer_dense(model,units = 2000, activation = "relu", input_shape = c(3))
layer_dropout(model,rate = 0.4)
layer_dense(model,units = 50, activation = "relu")
layer_dropout(model,rate = 0.3)
layer_dense(model,units = 2, activation = "softmax")
compile(model,loss = "categorical_crossentropy",optimizer = optimizer_rmsprop(),metrics = c("accuracy"))
history <- fit(model, my_x, my_y, epochs = 5, batch_size = 128, validation_split = 0.2)
evaluate(model,my_x, my_y,verbose = 0)
predict_classes(model,my_x)
I have encountered this issue as well, and none of the above answers worked for me. According to the Keras documentation, you can pass the arguments either as a dictionary, like this:
model.fit({'main_input': headline_data, 'aux_input': additional_data},
{'main_output': labels, 'aux_output': labels},
epochs=50, batch_size=32)
or as a list, like this:
model.fit([headline_data, additional_data], [labels, labels],
epochs=50, batch_size=32)
The dictionary version didn't work for me with keras version 2.0.9. I have used the list version as a workaround for now.
This was due to my misunderstanding of how Keras outputs work: the keys in the target dictionary must match the names of the model's output layers. I had assumed that the output key in the data dictionary would automatically be routed to the layer specified by the output argument of Model, but Keras matches on layer names instead.
yield ({'input_1': image}, {'output': classification})
Replace output with dense_1 (the default name Keras gave the Dense layer) and it will work.
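That is, assuming the Dense layer kept its auto-generated name:
yield ({'input_1': image}, {'dense_1': classification})
Alternatively, name the output layer explicitly when building the model so the original output key matches:
shared_layers = Dense(16, name='output')(input)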

scikit-neuralnetwork mismatch error in dataset size

I'm trying to train an MLP classifier for the XOR problem using sknn.mlp:
import numpy
from sknn.mlp import Classifier, Layer
X=numpy.array([[0,1],[0,0],[1,0]])
print X.shape
y=numpy.array([[1],[0],[1]])
print y.shape
nn=Classifier(layers=[Layer("Sigmoid",units=2),Layer("Sigmoid",units=1)],n_iter=100)
nn.fit(X,y)
This results in:
No handlers could be found for logger "sknn"
Traceback (most recent call last):
File "xorclassifier.py", line 10, in <module>
nn.fit(X,y)
File "/usr/local/lib/python2.7/site-packages/sknn/mlp.py", line 343, in fit
return super(Classifier, self)._fit(X, yp)
File "/usr/local/lib/python2.7/site-packages/sknn/mlp.py", line 179, in _fit
X, y = self._initialize(X, y)
File "/usr/local/lib/python2.7/site-packages/sknn/mlp.py", line 37, in _initialize
self._create_specs(X, y)
File "/usr/local/lib/python2.7/site-packages/sknn/mlp.py", line 64, in _create_specs
"Mismatch between dataset size and units in output layer."
AssertionError: Mismatch between dataset size and units in output layer.
scikit-neuralnetwork seems to turn your y vector into a one-hot matrix of shape (n_samples, n_classes); n_classes in your case is two. So try
nn=Classifier(layers=[Layer("Sigmoid",units=2),Layer("Sigmoid",units=2)],n_iter=100)
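Putting that together with the full XOR truth table (the snippet above only lists three of the four rows), a sketch:
import numpy
from sknn.mlp import Classifier, Layer

X = numpy.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = numpy.array([[0], [1], [1], [0]])

# Output layer sized to the number of classes (2), as suggested above
nn = Classifier(layers=[Layer("Sigmoid", units=2), Layer("Sigmoid", units=2)], n_iter=100)
nn.fit(X, y)
print(nn.predict(X))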
