I am training my own model on some MR images in Caffe. When I try to train the model, I get the following error:
Unknown bottom blob 'label' (layer 'accuracy', bottom index 1)
I looked at a similar question https://stackoverflow.com/questions/32241193/error-with-caffe-c-example-with-different-deploy-prototxt-file. The accepted answer for that question asked the OP to remove the "Accuracy" and loss layers from the deploy.prototxt file. I have done that as well, but the error persists. I can't figure out where I am going wrong.
As the error message suggests, you have a layer with bottom: "label", that is, a layer that expects "label" as (one of) its inputs. However, it seems that no layer in your model produces "label" as an output: no layer has "label" among its "top"s.
Please review your model's prototxt to find where "label" should come from, or alternatively, eliminate the layers that require "label" as an input.
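If the prototxt is long, a quick way to spot such dangling blobs is to parse it with Caffe's Python protobuf bindings and list every blob that is consumed as a "bottom" but never produced as a "top". A rough sketch (the file name is an assumption):
from google.protobuf import text_format
from caffe.proto import caffe_pb2

net = caffe_pb2.NetParameter()
with open('deploy.prototxt') as f:   # hypothetical file name
    text_format.Merge(f.read(), net)

# Blobs produced either by an old-style "input:" declaration or by a layer's "top".
produced = set(net.input)
for layer in net.layer:
    produced.update(layer.top)

# Report blobs that some layer expects as "bottom" but nothing produces.
for layer in net.layer:
    for blob in layer.bottom:
        if blob not in produced:
            print('layer "%s" expects blob "%s", but no layer produces it' % (layer.name, blob))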
I'm working with the RetinaNet model for object detection and I'm facing an overfitting problem.
One possible solution is to add "Dropout".
I'm using the Keras code here.
I want to add dropout to the last layers, but I don't know how.
Can anyone tell me which file I should change, and how?
After a while, I tried many suggestions, but none of them explained exactly how to add the layer, so I experimented until I found out how. I decided to answer my own question!
You just need to add a line like this:
outputs = keras.layers.SpatialDropout1D(rate=dropout_rate)(outputs)
You can also use other dropout layer types, such as SpatialDropout2D.
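For context, here is a minimal, self-contained sketch of where such a line fits in a functional Keras model; the layer sizes and dropout_rate are arbitrary assumptions, not values taken from the keras-retinanet code:
import keras

dropout_rate = 0.3   # assumed value

inputs = keras.layers.Input(shape=(64, 64, 3))
x = keras.layers.Conv2D(16, 3, padding='same', activation='relu')(inputs)
x = keras.layers.SpatialDropout2D(rate=dropout_rate)(x)   # drops whole feature maps
x = keras.layers.GlobalAveragePooling2D()(x)
outputs = keras.layers.Dense(10, activation='softmax')(x)

model = keras.models.Model(inputs=inputs, outputs=outputs)
model.summary()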
You could try storing the fully connected layers and the prediction layer in variables like:
fc1 = model.layers[-3]
fc2 = model.layers[-2]
predictions = model.layers[-1]
Then create your dropout layers and reconnect everything to build a new Model, as shown in this post: Add dropout layers between pretrained dense layers in keras.
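For illustration, a rough, self-contained sketch of that reconnection idea on a toy model (the layer sizes and the 0.5 dropout rate are assumptions):
from keras.layers import Dense, Dropout, Input
from keras.models import Model

# Toy stand-in for the pretrained model: ... -> fc1 -> fc2 -> predictions.
inp = Input(shape=(128,))
x = Dense(64, activation='relu')(inp)
x = Dense(64, activation='relu', name='fc1')(x)
x = Dense(64, activation='relu', name='fc2')(x)
out = Dense(10, activation='softmax', name='predictions')(x)
model = Model(inputs=inp, outputs=out)

# Grab the existing layers.
fc1 = model.layers[-3]
fc2 = model.layers[-2]
predictions = model.layers[-1]

# Reconnect them with Dropout in between and build a new model.
y = model.layers[-4].output          # output of the layer feeding fc1
y = Dropout(0.5)(fc1(y))
y = Dropout(0.5)(fc2(y))
y = predictions(y)

new_model = Model(inputs=model.input, outputs=y)
new_model.summary()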
Hope this helps.
I want to create a classifier for an image dataset in which each image belongs to several of the classes, so the target values are k-hot vectors. I created a text file where each line contains the path of an image file, a space, and a k-hot vector, but when I try to run the scripts that create the LMDB files, they raise errors saying the files cannot be opened or found. When I try the same process with the same data and just a single number as the class label, everything works fine. So I think the .txt file cannot be parsed correctly when the labels are vectors.
Any suggestions?
Thank you
Caffe "Data" layers and convert_imageset script were written with a very specific use case in mind: image classification. Therefore the basic element stored in (and fetched from) LMDB by caffe is Datum that has a room for a single integer label.
You can see a more lengthy discussion on this subject here
It does not mean Caffe cannot facilitate different types of inputs/tasks.
You can use "HDF5Data" layer instead. When it comes to hdf5 inputs caffe has almost no restrictions on the input shape and size.
See, e.g., this answer and this one for more details on how to actually make it work.
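For example, a minimal sketch of writing k-hot labels to HDF5 with h5py; the shapes, file names and preprocessing are assumptions, fill in your own data:
import h5py
import numpy as np

num_images, num_classes = 100, 20                                # assumed sizes
data = np.zeros((num_images, 3, 224, 224), dtype=np.float32)     # N x C x H x W images
label = np.zeros((num_images, num_classes), dtype=np.float32)    # k-hot target vectors

# ... fill `data` with preprocessed images and `label` with 0/1 entries ...

with h5py.File('train.h5', 'w') as f:
    f.create_dataset('data', data=data)
    f.create_dataset('label', data=label)

# The "HDF5Data" layer's source is a text file listing the .h5 files, one per line.
with open('train_h5_list.txt', 'w') as f:
    f.write('train.h5\n')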
I used a pre-trained GoogLeNet and then fine-tuned it on my dataset for a binary classification problem. The validation dataset gives a "loss3/top1" of 98.5%, but when I evaluate the performance on my evaluation dataset I get only 50% accuracy. Whatever changes I made in train_val.prototxt, I made the same changes in deploy.prototxt, and I am not sure what changes I should make in these lines:
name: "GoogleNet"
layer {
name: "data"
type: "Input"
top: "data"
input_param { shape: { dim:10 dim:3 dim:224 dim:224 } }
}
Any suggestions???
You do not need to change anything further in your deploy.prototxt*; what needs to change is the way you feed the data to the net. You must transform your evaluation images in the same way you transformed your training/validation images.
See, for example, how classifier.py puts the input images through a properly initialized caffe.io.Transformer class.
The "Input" layer you have in the prototxt is merely a declaration for caffe to allocate memory according to an input blob of shape 10-by-3-by-224-by-224.
* Of course, you must verify that train_val.prototxt and deploy.prototxt are exactly the same (apart from the input layer(s) and loss layer(s)); that includes making sure the layer names are identical, as caffe uses layer names to assign weights from the 'caffemodel' file to the corresponding parameters it loads. Mismatched names will cause caffe to use random weights for some of the layers.
I ran caffe and got this output:
Who can tell me what the problem is?
I would really appreciate any help!
It seems like one (or more) of your label values is invalid; see this PR for more information:
If you have an invalid ground truth label, "SoftmaxWithLoss" will silently access invalid memory [...] The old check only worked in DEBUG mode and also only worked for CPU.
Make sure your prediction vector length matches the number of labels you try to predict.
From your comments, it seems you have labels in the range 0..10575, but on the other hand, your classification layer, "fc7", only predicts probabilities for 1000 classes. Thus, the "SoftmaxWithLoss" layer tries to compute the loss for a label l >= 1000 and accesses memory outside the probability array, resulting in a segmentation fault.
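As a quick sanity check, you can scan your training list file and compare the largest label with the classifier's num_output; a small sketch, assuming each line holds an image path followed by an integer label (the file name is hypothetical):
num_output = 1000   # num_output of the classification layer ("fc7" in this case)

max_label = -1
with open('train.txt') as f:               # hypothetical list file: "path label" per line
    for line in f:
        path, label = line.rsplit(None, 1)
        max_label = max(max_label, int(label))

print('largest label:', max_label)
assert max_label < num_output, 'labels exceed the length of the prediction vector'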
This is my train.prototxt. And this is my deploy.prototxt.
When I try to load my deploy file I get this error:
File "./python/caffe/classifier.py", line 29, in __init__
in_ = self.inputs[0]
IndexError: list index out of range
So, I removed the data layer:
F1117 23:16:09.485153 21910 insert_splits.cpp:35] Unknown bottom blob 'data' (layer 'conv1', bottom index 0)
*** Check failure stack trace: ***
Then I removed bottom: "data" from the conv1 layer.
After that, I got this error:
F1117 23:17:15.363919 21935 insert_splits.cpp:35] Unknown bottom blob 'label' (layer 'loss', bottom index 1)
*** Check failure stack trace: ***
I removed bottom: "label" from the loss layer, and got this error:
I1117 23:19:11.171021 21962 layer_factory.hpp:76] Creating layer conv1
I1117 23:19:11.171036 21962 net.cpp:110] Creating Layer conv1
I1117 23:19:11.171041 21962 net.cpp:433] conv1 -> conv1
F1117 23:19:11.171061 21962 layer.hpp:379] Check failed: MinBottomBlobs() <= bottom.size() (1 vs. 0) Convolution Layer takes at least 1 bottom blob(s) as input.
*** Check failure stack trace: ***
What should I do to fix it and create my deploy file?
There are two main differences between a "train" prototxt and a "deploy" one:
1. Inputs: While for training the data is fixed to a pre-processed training dataset (lmdb/HDF5 etc.), deploying the net requires it to process other inputs in a more "random" fashion.
Therefore, the first change is to remove the input layers (the layers that push "data" and "label" during the TRAIN and TEST phases). To replace the input layers you need to add the following declaration:
input: "data"
input_shape: { dim:1 dim:3 dim:224 dim:224 }
This declaration does not provide the actual data for the net, but it tells the net what shape to expect, allowing caffe to pre-allocate necessary resources.
2. Loss: the topmost layers in a training prototxt define the loss function for the training. This usually involves the ground-truth labels. When deploying the net, you no longer have access to these labels. Thus, loss layers should be converted into "prediction" outputs. For example, a "SoftmaxWithLoss" layer should be converted to a simple "Softmax" layer that outputs class probabilities instead of the log-likelihood loss. Some other loss layers already have predictions as inputs, so it is sufficient to simply remove them.
Update: see this tutorial for more information.
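Once you have made these two changes, a quick way to verify the deploy prototxt is well formed is to load it in Python and push a dummy blob through the net; a small sketch, assuming file names:
import caffe
import numpy as np

net = caffe.Net('deploy.prototxt', 'snapshot.caffemodel', caffe.TEST)   # assumed file names
net.blobs['data'].data[...] = np.random.rand(1, 3, 224, 224)            # dummy input
out = net.forward()
print({name: blob.shape for name, blob in out.items()})                 # output blob shapes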
Besides the advice from @Shai, you may also want to disable the dropout layers.
Although Jia Yangqing, the author of Caffe, once said that dropout layers have a negligible impact on the testing results (Google Groups conversation, 2014), other deep learning tools suggest disabling dropout in the deploy phase (for example, Lasagne).