I want to create a classifier for an image dataset where each image belongs to multiple classes out of all the classes, so the target values are k-hot vectors. I created a text file where each line contains the path of an image file, a space, and a k-hot vector, but when I try to run the scripts to create lmdb files it raises errors that it cannot open or find the files. I tried the same process with the same data and just a single number as the class label and everything went well. So I think it cannot parse the .txt file correctly when the labels are vectors.
Any suggestions?
Thank you
Caffe "Data" layers and convert_imageset script were written with a very specific use case in mind: image classification. Therefore the basic element stored in (and fetched from) LMDB by caffe is Datum that has a room for a single integer label.
You can see a more lengthy discussion on this subject here
It does not mean Caffe cannot facilitate different types of inputs/tasks.
You can use "HDF5Data" layer instead. When it comes to hdf5 inputs caffe has almost no restrictions on the input shape and size.
See, e.g., this answer and this one for more details on how to actually make it work.
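For example, here is a minimal sketch of preparing a multi-label HDF5 input in python with h5py (the array names, shapes and file names here are placeholders, not taken from the question):

import h5py
import numpy as np

# placeholder data: 100 RGB 32x32 images, each belonging to several of 10 classes
images = np.random.rand(100, 3, 32, 32).astype(np.float32)            # N x C x H x W
khot_labels = np.random.randint(0, 2, (100, 10)).astype(np.float32)   # N x K k-hot vectors

with h5py.File('train.h5', 'w') as f:
    f['data'] = images        # fetched by the layer's top: "data"
    f['label'] = khot_labels  # fetched by top: "label"; a full vector per image, not a scalar

# the "HDF5Data" layer's source is a text file listing the .h5 files
with open('train_h5_list.txt', 'w') as f:
    f.write('train.h5\n')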
Related
I want to build a dataset for Yolo and this is my first time working with deep learning.
How can I build a dataset for Yolo and train on it?
How do I get the weights of the classifications?
Is it too difficult for someone new to Deep Learning?
Yes, you can do it with ease! Welcome to the Deep Learning community.
First, download the darknet folder from Link.
Go inside the folder and type make in the command prompt:
git clone https://github.com/pjreddie/darknet
cd darknet
make
Define these files:
data/custom.names
data/images
data/train.txt
data/test.txt
Now it's time to label the images using LabelImg and save them in YOLO format, which will generate the corresponding .txt label files for the image dataset.
Labels of our objects should be saved in data/custom.names.
Using the script below you can split the dataset into train and test sets:
import glob, os

dataset_path = '/media/subham/Data1/deep_learning/usecase/yolov3/images'

# Percentage of images to be used for the test set
percentage_test = 20

# Create and/or truncate train.txt and test.txt
file_train = open('train.txt', 'w')
file_test = open('test.txt', 'w')

# Populate train.txt and test.txt: every index_test-th image goes to the test set
counter = 1
index_test = round(100 / percentage_test)
for pathAndFilename in glob.iglob(os.path.join(dataset_path, "*.jpg")):
    title, ext = os.path.splitext(os.path.basename(pathAndFilename))
    if counter == index_test:
        counter = 1
        file_test.write(dataset_path + "/" + title + '.jpg' + "\n")
    else:
        file_train.write(dataset_path + "/" + title + '.jpg' + "\n")
        counter = counter + 1

file_train.close()
file_test.close()
To train our object detector we can use existing pre-trained weights that were already trained on huge datasets. From here we can download the pre-trained weights into the root directory.
Create a yolo-custom.data file in the custom_data directory, which should contain information about the train and test data sets:
classes=2
train=custom_data/train.txt
valid=custom_data/test.txt
names=custom_data/custom.names
backup=backup/
Now we have to make changes in our yolov3.cfg for training our model on two classes. Based on the required performance we can select the YOLOv3 configuration file; for this example we will be using yolov3.cfg. Duplicate the file cfg/yolov3.cfg to custom_data/cfg/yolov3-custom.cfg.
The maximum number of iterations for which our network should be trained is set with the param max_batches=4000. Also update steps=3200,3600, which are 80% and 90% of max_batches.
We will need to update the classes param of the [yolo] layers and the filters param of the [convolutional] layers that are just before the [yolo] layers.
In this example, since we have two classes, we will update the classes param in the [yolo] layers to 2 at line numbers 610, 696 and 783 of the cfg file.
Similarly we will need to update the filters param based on the class count: filters=(classes + 5) * 3. For two classes we should set filters=21 at line numbers 603, 689 and 776 of the cfg file.
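As a quick sanity check of that formula (a throwaway sketch, not part of the original walkthrough):

# filters of the [convolutional] layer just before each [yolo] layer
classes = 2
filters = (classes + 5) * 3   # (2 + 5) * 3 = 21; a single class would give 18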
All the configuration changes are made to custom_data/cfg/yolov3-custom.cfg
Now we have defined all the necessary items for training the YOLOv3 model. To train:
./darknet detector train custom_data/yolo-custom.data custom_data/cfg/yolov3-custom.cfg darknet53.conv.74
You can also mark bounding boxes of objects in images for training Yolo right in your web browser: just open the url. This tool is deployed to GitHub Pages.
Use this popular forked darknet repository: https://github.com/AlexeyAB/darknet. The author describes many steps that will help you build and use your own Yolo detector model.
How to build your own custom dataset and train it? Follow this step. He suggests using the Yolo Mark labeling tool to build your dataset, but you can also try other tools as described here and here.
How to get the weights? The weights will be stored in darknet/backup/ directory after every 1000 iterations (you can adjust this value later). The link above explains everything about how to make and use the weights file.
I don't think it will be so difficult if you already know math, statistics and programming. Learning about basic neural networks like the perceptron and MLP, then moving on to modern machine learning, is a good start. Then you might want to expand your knowledge to Computer Vision or NLP related areas.
Depending on what kind of OS you have, you can either hit up https://github.com/AlexeyAB/darknet [especially for Windows] or stick to https://github.com/pjreddie/darknet.
Steps to do so:
1) Set up darknet as detailed in the posts.
2) I used LabelImg to label my images. Make sure that the format you save the labels in is YOLO. If you save using the PascalVOC format or others, you can write scripts to convert it to the format that darknet expects [YOLO]. Also, make sure that you do not change your labels file. If you want to add new labels, add them at the end of the file, not in between. The YOLO format is quite different, so your previously labelled images may get messed up if you make changes in between the classes.
3) The weights will be generated in a specific folder in darknet as you train your model. [If you need more details, I am happy to answer that.] You can download the .74 file in YOLO and start training. The input to train needs a built darknet executable, a cfg file, a .74 file and your training data location/access.
The setup is draconian, the process itself is not.
To build your own dataset, you should use LabelImg. It's free and very easy software which will produce all the files you need to build a dataset. In fact, because you are working with yolo, you need a txt file for each of your images which will contain important information like bbox coordinates and the label name. All these txt files are produced automatically by LabelImg, so all you have to do is open the directory which contains all your images with LabelImg and start labeling. Then, once you have all your txt files, you will also need to create some other files in order to start training (see https://blog.francium.tech/custom-object-training-and-detection-with-yolov3-darknet-and-opencv-41542f2ff44e).
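For reference, each YOLO-format label file that LabelImg produces contains one line per bounding box, in the form "class_id x_center y_center width height" with coordinates normalized to the image dimensions. The numbers below are purely illustrative:

0 0.512 0.430 0.200 0.350
1 0.250 0.700 0.120 0.080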
I'm now using Facebook's torch library from github (fb torch resnet).
It's my first time using torch and lua, so I'm encountering some problems.
My goal is to save the feature vector of a specific layer (the last avg pooling of resnet) into one file together with the class of the input image. All input images are from the cifar-10 db.
The file format that I want to get is like below:
image1.txt := class index of image and feature vector of image 1 of cifar-10
image2.txt := class index of image and feature vector of image 2 of cifar-10
// and so on through all images of cifar-10
Now I have seen some sample code from that github repo: extract-features.lua.
Because it's my first time with lua, I find it hard to understand this code and to modify it the way I want. And I don't want my data saved in the t7 file format.
How can i access only one specific layer from network in torch via lua? (last average pooling)
How can i access values of the layer and classification result index?
How can I read each image from the cifar-10 db file (t7 batch)?
Sorry for asking so many questions, but I'm finding torch hard to use because of the small number of community threads and posts about torch. Please understand.
How can i access only one specific layer from network in torch via lua? (last average pooling)
To access a specific layer you just have to load the model and get the layer using an integer index. If you print the model you will be able to see in which position the last average pooling layer is.
model = torch.load(path_to_model):cuda()
avg_pooling_layer = model:get(position_of_the_avg_pooling_layer)
How can i access values of the layer and classification result index?
I do not quite understand what you mean by this. If you want to see the output or the weights of a specific layer (following the code above), you need to get these elements from the layer table. Again, to see which elements are available to get, print avg_pooling_layer.
weights = avg_pooling_layer.weight -- get the weights of the layer
output = avg_pooling_layer.output -- get the output of the layer
How can I read each image from the cifar-10 db file (t7 batch)?
To read the images from a t7 file, use the torch function torch.load (used above to load the model).
cifar_10 = torch.load("path_to_cifar-10.t7")
Once loaded, you should have the training and test sets in subtables or functions. Again, print the table and look for which values are the ones you need to get.
Hope this helps!
I have seen lots of examples showing how to insert image data for model training in Caffe.
I am trying to train a model using data that is not images. I can reshape it as a matrix or a vector (for every example), but I don't understand how to make my Caffe network read it.
I know that Caffe can work with lmdb/hdf5 databases, and I can additionally use a Python data layer.
I guess a Python data layer will be my best choice. Can someone provide an example of how to create some kind of array in Python and use it as training data for a Caffe model?
You do not need to create a python layer for simple vector inputs. The HDF5 layer is probably the easiest to work with. Just create HDF5 files with your favorite tool (refer to this for creating HDF5 using matlab, or this for creating it using python).
Both examples are fairly easy to follow. The matlab example gives you a more advanced version of HDF5 file creation (creating batches and all), but at its heart you just need to call:
store2hdf5(filename, data, labels) %others are optional
Similarly, the python example also goes over a complete example, all of which you may or may not need. At its core, creating an HDF5 file is simply:
import h5py
with h5py.File('filename.h5', 'w') as f:
    f['data'] = your_data
    f['label'] = your_labels
You can then use the file thus created in an HDF5Data layer as follows; you just need to create a text file with the list of HDF5 files you want to use.
layer {
  name: "data"
  type: "HDF5Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  hdf5_data_param {
    source: "path_to_text_file_containing_list_of_HDF5_Files.txt"
    batch_size: 128
    shuffle: true
  }
}
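The list file itself is plain text with one HDF5 file path per line, for example:

/path/to/train1.h5
/path/to/train2.h5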
How can I use caffe convnet to detect facial expressions?
I have an image dataset, Cohn-Kanade, and I want to train a caffe convnet on this dataset. Caffe has a documentation site, but it doesn't explain how to train on my own data, only with pre-trained data.
Can someone teach me how to do it?
Caffe supports multiple formats for the input data (HDF5/lmdb/leveldb). It's just a matter of picking the one you feel most comfortable with. Here are a couple of options:
caffe/build/tools/convert_imageset:
convert_imageset is one of the command line tools you get from building caffe.
Usage is along the lines of:
specifying a list of image and label pairs in a text file, one row per pair.
specifying where the images are located.
Choosing a backend db (which format). Default is lmdb which should be fine.
You need to write a text file where each line starts with the filename of an image followed by a scalar label (e.g. 0, 1, 2, ...).
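For example, a sketch of such a list file and a typical invocation (all paths and file names here are placeholders):

# listfile.txt contains one "<image filename> <label>" pair per row, e.g.:
#   img_0001.jpg 0
#   img_0002.jpg 1
./build/tools/convert_imageset \
    --backend=lmdb \
    --shuffle \
    /path/to/images/root/ \
    listfile.txt \
    /path/to/output_lmdb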
Construct your lmdb in python using Caffe's Datum class:
This requires building caffe's python interface. Here you write some python code that:
iterates through a list of images.
loads each image into a numpy array.
constructs a caffe Datum object.
assigns the image data to the Datum object.
sets the Datum's label member to the AU class from your CK dataset, if that is what you want your network to classify.
writes the Datum object to the db and moves on to the next image.
Here's a code snippet for converting images to an lmdb, from a blog post by Gustav Larsson. In his example he constructs an lmdb of image and label pairs for image classification.
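In the same spirit, here is a minimal sketch of such a loop (not Larsson's exact code; it assumes caffe's python interface and the lmdb package are installed, and image_list is a placeholder for your own pairs of a C x H x W uint8 array and an integer label):

import lmdb
import caffe

# image_list is a placeholder: [(uint8 numpy array in C x H x W order, integer label), ...]
env = lmdb.open('train_lmdb', map_size=int(1e12))
with env.begin(write=True) as txn:
    for i, (img, label) in enumerate(image_list):
        datum = caffe.proto.caffe_pb2.Datum()
        datum.channels, datum.height, datum.width = img.shape
        datum.data = img.tobytes()    # raw pixel bytes
        datum.label = int(label)      # the single integer label caffe's Datum expects
        txn.put('{:08d}'.format(i).encode('ascii'), datum.SerializeToString())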
Loading the lmdb into your network:
This is done exactly like in the LeNet example, via the "Data" layer at the beginning of the network prototxt that describes the LeNet model:
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "examples/mnist/mnist_train_lmdb"
    batch_size: 64
    backend: LMDB
  }
}
The source field is where you point caffe to the location of the lmdb you just created.
Something more related to performance, and not critical to getting this to work, is specifying how to normalize the input features. This is done through the transform_param field. CK+ has fixed-size images, so there is no need for resizing. One thing you do need, though, is to normalize the grayscale values. You can do this through mean subtraction: set the transform_param's mean_value to the mean of the grayscale intensities in your CK+ dataset.
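For example (128 here is just a placeholder; compute the actual mean over your CK+ training images):

transform_param {
  scale: 0.00390625
  mean_value: 128  # placeholder: mean grayscale intensity of the CK+ training set
}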
Hello everyone, I am new to caffe. Currently, I am trying to use the trained GoogleNet downloaded from the model zoo to classify some images. However, the network's output seems to be a vector rather than a real label (like dog or cat).
Where can I find the mapping between a trained model's output (like googleNet's) and the real class labels?
Thanks.
If you got caffe from git, you should find in the data/ilsvrc12 folder a shell script named get_ilsvrc_aux.sh.
This script downloads several files used for ilsvrc (a subset of imagenet used for the large-scale image recognition challenge) training.
The most interesting file (for you) that will be downloaded is synset_words.txt. This file has 1000 lines, one line per class identified by the net.
The format of the line is
nXXXXXXXX description of class
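The i-th line corresponds to the i-th entry of the net's output vector, so the lookup is a simple argmax. A minimal sketch in python (prob is assumed to hold the net's 1000-dimensional output):

import numpy as np

# one label per line, in the same order as the entries of the output vector
labels = [line.strip() for line in open('data/ilsvrc12/synset_words.txt')]

# prob is assumed to be the 1000-dim output of the net's final layer
pred_label = labels[int(np.argmax(prob))]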