I have a 5D blob like 1x8x128x128 and a Convolution layer that is able to process it. When I want to use a Pooling layer, though, it does not work. How do you use a Pooling layer with a 5D blob? The error is:
Check failed: 4 == bottom[0]->num_axes() (4 vs. 5) Input must have 4
axes, corresponding to (num, channels, height, width)
I think it is just not supported by Caffe yet. Could I just use a Convolution layer to do the pooling?
If you want to pool only over the two spatial dimensions (height and width), leaving the temporal dimension untouched, you can "Reshape" to 4D ("squashing" the channel and temporal dimensions into one), pool, and then "Reshape" back to 5D:
layer {
name: "pool/reshape4D"
type: "Reshape"
bottom: "in"
top: "pool/reshape4D"
reshape_param { axis: 1 num_axes: 2 shape { dim: -1 } } # collapse axes 1 and 2 (channel and temporal) into one
}
layer {
name: "pool"
type: "Pooling"
bottom: "pool/reshape4D"
top: "pool"
# pooling params here...
}
layer {
name: "pool/reshape5D"
type: "Reshape"
bottom: "pool"
top: "pool/reshape5D"
reshape_param { axis: 1 num_axes: 1 shape { dim: -1 dim: <temporal_dim> } } # replace <temporal_dim> with the actual temporal dimension size; axis 1 is split back into (channels, temporal)
}
See the definition of ReshapeParameter in caffe.proto for more details.
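To make the mechanics concrete, here is a minimal sketch of the whole chain. The input shape 1x8x16x128x128 (batch x channels x temporal x height x width, with a temporal size of 16) and the 2x2 MAX pooling are assumptions picked purely for illustration:
layer {
name: "pool/reshape4D"
type: "Reshape"
bottom: "in"
top: "pool/reshape4D"
# 1x8x16x128x128 -> 1x128x128x128 (channel and temporal axes collapsed into one)
reshape_param { axis: 1 num_axes: 2 shape { dim: -1 } }
}
layer {
name: "pool"
type: "Pooling"
bottom: "pool/reshape4D"
top: "pool"
# 1x128x128x128 -> 1x128x64x64
pooling_param { pool: MAX kernel_size: 2 stride: 2 }
}
layer {
name: "pool/reshape5D"
type: "Reshape"
bottom: "pool"
top: "pool/reshape5D"
# 1x128x64x64 -> 1x8x16x64x64 (channel and temporal axes restored)
reshape_param { axis: 1 num_axes: 1 shape { dim: -1 dim: 16 } }
}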
I need to use an upscale layer in Caffe which "doubles" the pixels: a 10x10 image becomes 20x20, with each pixel duplicated in both the horizontal and vertical direction. I heard that a Deconvolution layer might help with a stride of 2, no padding, and a kernel size of 1x1, but that puts zeros between the pixels. Can anyone help me? Thanks
I would try a kernel size of 2, with the weights initialized to 1 and fixed (not learned):
layer {
name: "upsample"
type: "Deconvolution"
bottom: "x"  # name of the input blob
top: "y"     # name of the output blob
convolution_param {
num_output: # same as number of input channels
group:      # same as number of input channels
bias_term: false  # no need for bias
kernel_size: 2
stride: 2
pad: 0
weight_filler { type: "constant" value: 1 }
}
param { lr_mult: 0 }  # keep the constant weights fixed during training
}
Note that group and num_output should be equal, so that the same kernel acts on each channel independently.
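For a concrete instance, here is a minimal sketch assuming the input blob is called "x", it has 3 channels, and the upsampled output should be called "y" (adjust num_output and group to your actual channel count):
layer {
name: "upsample"
type: "Deconvolution"
bottom: "x"
top: "y"
convolution_param {
num_output: 3  # = number of input channels
group: 3       # one constant kernel per channel, acting independently
bias_term: false
kernel_size: 2
stride: 2
pad: 0
weight_filler { type: "constant" value: 1 }
}
param { lr_mult: 0 }  # keep the constant weights fixed
}
Because the stride equals the kernel size, the 2x2 kernel applications do not overlap, so every input pixel is simply copied into a 2x2 block of the output, i.e. nearest-neighbour upsampling by a factor of 2 with no zeros inserted.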
I've got a huge data set in LMDB (40 GB) that I use for training a binary classifier with Caffe.
The Data layer in Caffe provides integer labels.
Are there any ready-made layers that could transform them into floats and add some random jitter, so I could apply the label smoothing technique described in section 7.5.1 here?
I have seen examples with HDF5, but they require regenerating the data set, which I would like to avoid.
You can use a DummyData layer to generate the random noise you wish to add to the labels. Once you have the noise, use an Eltwise layer to sum the two:
layer {
name: "noise"
type: "DummyData"
top: "noise"
dummy_data_param {
shape { dim: 10 dim: 1 dim: 1 dim: 1 } # assuming batch size = 10
data_filler { type: "uniform" min: -0.1 max: 0.1 } # noise ~U(-0.1, 0.1)
}
}
layer {
name: "label_noise"
type: "Eltwise"
bottom: "label" # the input integer labels
bottom: "noise"
top: "label_noise"
eltwise_param { operation: SUM }
}
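One thing to watch out for (this depends on your Caffe version): the Eltwise layer requires both bottoms to have exactly the same shape, and in recent Caffe versions the Data layer emits the label blob as a one-dimensional blob of length batch_size rather than Nx1x1x1. If that is your case, declare the noise blob with a matching 1-D shape, e.g.:
layer {
name: "noise"
type: "DummyData"
top: "noise"
dummy_data_param {
shape { dim: 10 }  # 1-D blob matching a label blob of length 10 (batch size = 10)
data_filler { type: "uniform" min: -0.1 max: 0.1 }  # noise ~U(-0.1, 0.1)
}
}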
I am currently reading the paper 'CMS-RCNN: Contextual Multi-Scale Region-based CNN for Unconstrained Face Detection'. It uses skip connections to fuse conv3-3, conv4-3, and conv5-3 together; the steps are shown below:
Extract the feature maps of the face region (at multiple scales: conv3-3, conv4-3, conv5-3) and apply RoI pooling to them (i.e. convert them to a fixed height and width).
L2-normalize each feature map.
Concatenate the (RoI-pooled and normalized) feature maps of the face (at multiple scales) with each other (creates one tensor).
Apply a 1x1 convolution to the face tensor.
Apply two fully connected layers to the face tensor, creating a vector.
I used Caffe and made a prototxt based on the Faster-RCNN VGG16 model; the following parts were added to the original prototxt:
# roi pooling the conv3-3 layer and L2 normalize it
layer {
name: "roi_pool3"
type: "ROIPooling"
bottom: "conv3_3"
bottom: "rois"
top: "pool3_roi"
roi_pooling_param {
pooled_w: 7
pooled_h: 7
spatial_scale: 0.25 # 1/4
}
}
layer {
name:"roi_pool3_l2norm"
type:"L2Norm"
bottom: "pool3_roi"
top:"pool3_roi"
}
-------------
# roi pooling the conv4-3 layer and L2 normalize it
layer {
name: "roi_pool4"
type: "ROIPooling"
bottom: "conv4_3"
bottom: "rois"
top: "pool4_roi"
roi_pooling_param {
pooled_w: 7
pooled_h: 7
spatial_scale: 0.125 # 1/8
}
}
layer {
name:"roi_pool4_l2norm"
type:"L2Norm"
bottom: "pool4_roi"
top:"pool4_roi"
}
--------------------------
# roi pooling the conv5-3 layer and L2 normalize it
layer {
name: "roi_pool5"
type: "ROIPooling"
bottom: "conv5_3"
bottom: "rois"
top: "pool5"
roi_pooling_param {
pooled_w: 7
pooled_h: 7
spatial_scale: 0.0625 # 1/16
}
}
layer {
name:"roi_pool5_l2norm"
type:"L2Norm"
bottom: "pool5"
top:"pool5"
}
# concat roi_pool3, roi_pool4, roi_pool5 and apply 1*1 conv
layer {
name:"roi_concat"
type: "Concat"
concat_param {
axis: 1
}
bottom: "pool5"
bottom: "pool4_roi"
bottom: "pool3_roi"
top:"roi_concat"
}
layer {
name:"roi_concat_1*1_conv"
type:"Convolution"
top:"roi_concat_1*1_conv"
bottom:"roi_concat"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
convolution_param {
num_output: 128
pad: 1
kernel_size: 1
weight_filler{
type:"xavier"
}
bias_filler{
type:"constant"
}
}
}
layer {
name: "fc6"
type: "InnerProduct"
bottom: "roi_concat_1*1_conv"
top: "fc6"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
inner_product_param {
num_output: 4096
}
}
During training, I ran into the following issue:
F0616 16:43:02.899025 3712 net.cpp:757] Cannot copy param 0 weights from layer 'fc6'; shape mismatch. Source param shape is 1 1 4096 25088 (102760448); target param shape is 4096 10368 (42467328).
To learn this layer's parameters from scratch rather than copying from a saved net, rename the layer.
I could not figure out what is going wrong. I need some help from you if you can spot the problem or offer an explanation.
Really appreciated!!
The error message you got is quite clear. You are trying to fine-tune the weights of the layers, but for the "fc6" layer you have a problem:
The original net you copied the weights from had an "fc6" layer with an input dimension of 25088. Your "fc6" layer, on the other hand, has an input dimension of 10368. You cannot use the same W matrix (aka param 0 of this layer) when the input dimension is different.
Now that you know the problem, look at the error message again:
Cannot copy param 0 weights from layer 'fc6'; shape mismatch.
Source param shape is 1 1 4096 25088 (102760448);
target param shape is 4096 10368 (42467328).
Caffe cannot copy the W matrix (param 0) of the "fc6" layer because its shape does not match the shape of W stored in the .caffemodel you are trying to fine-tune from.
What can you do?
Simply read the next line of the error message:
To learn this layer's parameters from scratch rather than copying from a saved net, rename the layer.
Just rename the layer, and Caffe will learn the weights from scratch (only for this layer).
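As a side note, here is my reading of where the two numbers come from (double-check against your prototxt): 25088 = 512 x 7 x 7, the flattened output of the original VGG16 "roi_pool5", while 10368 = 128 x 9 x 9: your 1x1 convolution has pad: 1, which grows the 7x7 RoI-pooled maps to 9x9. If that padding is unintended, a sketch of the convolution without it would be:
layer {
name: "roi_concat_1*1_conv"
type: "Convolution"
bottom: "roi_concat"
top: "roi_concat_1*1_conv"
param { lr_mult: 1 }
param { lr_mult: 2 }
convolution_param {
num_output: 128
pad: 0  # a 1x1 convolution needs no padding
kernel_size: 1
weight_filler { type: "xavier" }
bias_filler { type: "constant" }
}
}
Even then the fc6 input would be 128 x 7 x 7 = 6272, still different from 25088, so the layer must be renamed (or resized) either way.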
I'm working with an older branch of Caffe. Now I need to modify the prototxt file by slicing the input layer.
I know that in the new syntax it looks like this:
layer {
name: "slice"
type: "Slice"
bottom: "labelAndMask"
## Example of layer with a shape N x 5 x Height x Width
top: "label"
top: "mask"
slice_param {
axis: 1
slice_point: 1
}
}
What would be the equivalent in the old prototxt format? Also, where in the Caffe sources could I look this up myself?
You should look at the bottom of $CAFFE_ROOT/src/caffe/proto/caffe.proto, where you'll see the V1LayerParameter definition.
The old-syntax Slice layer looks like this:
layers {
type: SLICE # this is NOT a string, but an enum
name: "slice"
bottom: "labelAndMask"
## Example of layer with a shape N x 5 x Height x Width
top: "label"
top: "mask"
slice_param {
axis: 1
slice_point: 1
}
}
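If your branch is old enough that SliceParameter does not yet have the axis field, a minimal sketch using the older (now deprecated) slice_dim field would look like this (assuming you still want to slice along the channel axis):
layers {
type: SLICE
name: "slice"
bottom: "labelAndMask"
top: "label"
top: "mask"
slice_param {
slice_dim: 1  # old name for the axis to slice along
slice_point: 1
}
}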
How can I define a multiply-by-constant layer in Caffe (like MulConstant in Torch)? I need to add it, with a predefined constant, to an existing network.
Caffe fails to parse my attempt to scale everything by 0.85:
layers {
name: "caffe.ConstantMul_0"
type: "Eltwise"
bottom: "caffe.SpatialConvolution_0"
top: "caffe.ConstantMul_0"
eltwise_param {
op: MUL
coeff: 0.85
}
}
It is possible to do this with a Power layer. The Power layer computes (shift + scale * x) ^ power, so just set power to 1, shift to 0, and scale to whatever constant you need:
layer {
name: "caffe.ConstantMul_1"
bottom: "caffe.SpatialConvolution_3"
top: "caffe.ConstantMul_1"
type: "Power"
power_param {
power: 1
scale: 0.85
shift: 0
}
}
The Eltwise layer can do three types of operations: PROD, SUM, and MAX. You can see more about this here.
In your case the operation parameter should be set to PROD (the field is called operation, not op, and MUL is not a valid value). Note also that coeff is only supported for the SUM operation, and Eltwise needs at least two bottom blobs, so the constant has to be supplied as a second bottom blob (named "const_085" here for illustration):
layer {
name: "caffe.ConstantMul_0"
type: "Eltwise"
bottom: "caffe.SpatialConvolution_0"
bottom: "const_085"  # a blob filled with the constant 0.85 (see the DummyData sketch below)
top: "caffe.ConstantMul_0"
eltwise_param {
operation: PROD
}
}
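For completeness, here is a minimal sketch of how that constant bottom could be produced with a DummyData layer. The blob name "const_085" and its shape are assumptions: the shape must exactly match the shape of "caffe.SpatialConvolution_0" for Eltwise to accept it, which is why the Power layer shown above is usually the simpler choice for a plain scalar multiplication:
layer {
name: "const_085"
type: "DummyData"
top: "const_085"
dummy_data_param {
shape { dim: 10 dim: 64 dim: 32 dim: 32 }  # must match the shape of "caffe.SpatialConvolution_0"
data_filler { type: "constant" value: 0.85 }  # every element is 0.85
}
}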