Can ImageMagick assign layer names to layers when writing multi-layer PSD files?
The only thing I've found on SE is an "I don't know" from nine years ago:
Add layers to a PSD file using ImageMagick
Related
I am trying to process data to train a model.
I have a dataset saved in an HDF5 file (the original HDF file) that I want to split into two non-overlapping HDF files at a 90:10 ratio: one HDF file for training, containing 90% of the original dataset, and another for validation, containing the remaining 10%.
If you have any ideas on how to do this, please guide me.
Thank you so much in advance.
You don't have to split the data into separate files for training and testing. (In fact, to properly train your model, you would have to do this multiple times -- randomly dividing the data into different training and testing sets each time.)
One option is to randomize the input when you read the data. You can do this by creating two lists of indices (or datasets): one list for the training data and the other for the test data. Then iterate over the lists to load the desired data.
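A minimal sketch of that approach with h5py and NumPy, assuming hypothetical 'images' and 'labels' datasets (adjust the names to your file's layout):

    import h5py
    import numpy as np

    # Hypothetical file and dataset names -- adjust to your HDF5 layout.
    with h5py.File('original.h5', 'r') as f:
        n = f['images'].shape[0]
        idx = np.random.default_rng(seed=0).permutation(n)
        split = int(0.9 * n)
        # h5py fancy indexing requires indices in increasing order.
        train_idx = np.sort(idx[:split])
        val_idx = np.sort(idx[split:])

        x_train, y_train = f['images'][train_idx], f['labels'][train_idx]
        x_val, y_val = f['images'][val_idx], f['labels'][val_idx]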
Alternatively (and probably simpler), you can use h5imagegenerator from PyPI. Link to the package description here: pypi.org/project/h5imagegenerator/#description
If you search SO, you will find more answers on this topic:
Keras: load images batch wise for large dataset
How to split dataset into K-fold without loading the whole dataset at once?
Reading large dataset from HDF5 file into x_train and use it in keras model
Hope that helps. If you still want to know how to copy data from one file to another, take a look at this answer, which shows multiple ways to do that: How can I combine multiple .h5 file? You probably want to use Method 2a; it copies the data as-is. A sketch of that idea for your 90:10 split follows.
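If you do want two physical files, a minimal sketch (reusing the hypothetical names and the train_idx/val_idx arrays from the earlier snippet) could copy each slice into its own file:

    import h5py

    # Assumes every top-level object is a dataset indexed by sample.
    with h5py.File('original.h5', 'r') as src, \
         h5py.File('train.h5', 'w') as tr, \
         h5py.File('val.h5', 'w') as va:
        for name in src:
            tr.create_dataset(name, data=src[name][train_idx])
            va.create_dataset(name, data=src[name][val_idx])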
I am trying to use BERT for images. I'm considering the following steps for this approach:
Create an embedding of an image using VGGNet (extracting the avgpool layer from the network).
Use PCA for dimensionality reduction from 4096 to 768 on the embedding matrix we got from VGGNet.
As this is now a sequence of numbers, pass it to the transformer encoder, BERT.
Does it seem like a sensible thing to do?
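For what it's worth, steps 1 and 2 might look roughly like the sketch below (PyTorch and scikit-learn, with a hypothetical preprocessed batch `images`). Note that in torchvision's VGG16 the 4096-dimensional vector comes from the fully connected layers after avgpool, so the classifier is truncated there:

    import torch
    import torchvision.models as models
    from sklearn.decomposition import PCA

    vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
    # Keep the classifier only up through the second 4096-d fully connected layer.
    vgg.classifier = torch.nn.Sequential(*list(vgg.classifier.children())[:4])
    vgg.eval()

    images = torch.randn(1000, 3, 224, 224)  # hypothetical preprocessed batch
    with torch.no_grad():                     # in practice, run this in batches
        feats = vgg(images).numpy()           # shape: (1000, 4096)

    feats_768 = PCA(n_components=768).fit_transform(feats)  # shape: (1000, 768)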
I am learning to create a learning model using TensorFlow.
I have successfully run the MNIST tutorial, and now I would like to test the model with my own images. They are same-size images (224x224) classified into folders.
Now I would like to use those images as input for my model, as in the MNIST example. I tried to open the MNIST dataset, but it's unreadable. I guess it has been converted into some binary format. From the example, I think the MNIST dataset has a structure something like this:
mnist
  test
    images
    labels
  train
    images
    labels
How can I make a dataset look like the MNIST data from my own image files?
Thank you very much!
MNIST is not stored in image format. From the MNIST website (http://yann.lecun.com/exdb/mnist/) you can see that it has a specific format which is already close to a tensor or numpy array, and which can be used in TensorFlow with minimal adjustment. It is essentially a matrix of numbers.
To work with ordinary images (.jpg, for instance), use any Python library for image processing to convert them into np.arrays. For example, PIL will work, as here:
PIL and numpy
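A minimal sketch with Pillow (the file name is hypothetical):

    from PIL import Image
    import numpy as np

    img = Image.open('my_image.jpg').convert('RGB')   # hypothetical path
    arr = np.asarray(img, dtype=np.float32) / 255.0   # shape: (224, 224, 3)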
Another option is to use built-in functions from TensorFlow to convert your images straight into tensors supported by TensorFlow; check this out:
https://www.tensorflow.org/versions/r0.9/api_docs/python/image.html
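That link is for an old TensorFlow release; with a modern version the same idea looks roughly like this (again, the path is hypothetical):

    import tensorflow as tf

    raw = tf.io.read_file('my_image.jpg')             # hypothetical path
    img = tf.io.decode_jpeg(raw, channels=3)          # uint8 tensor, (H, W, 3)
    img = tf.image.resize(img, [224, 224]) / 255.0    # float32 tensor in [0, 1]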
I have been working on training a pedestrian-detection classifier based on HOG features. So far I have done the following:
a) Extracted HOG features of all files, i.e. positive and negative, and saved those features with labels, i.e. +1 for positive and -1 for negative, in a file.
b) Downloaded svmlight and extracted the binaries, i.e. svm_learn and svm_classify.
c) Passed the training file (the features file) to the svm_learn binary, which produced a model file for me.
d) Passed the test file to the svm_classify binary and got results in a predictions file.
Now my question is: what to do next, and how? I think I know that I now need to use that model file (and not the predictions file) in OpenCV for detection of pedestrians in video, but somewhere I read that OpenCV uses only 1 support vector, while I got 295 SVs. So how do I convert them into the proper format and use it, and are there any further compulsory steps?
I do appreciate your kindness!
It is not true that OpenCV (presumably you are talking about CvSVM) uses only one support vector. As pointed out by QED, what OpenCV does do is optimize a linear SVM down to one support vector. I think the idea here is that the support vectors define the classification margin, but to do the actual classification only the separating hyperplane is needed, and that can be defined with one vector.
Since you have an svmlight model file, and CvSVM can't read that, you have the following options:
train a CvSVM and save the model as a CvStatModel file that you can load later to get the support vectors;
write some code to convert an svmlight model file into a CvStatModel file (but for this you have to understand both formats);
get the source for svmlight, specifically the bit that reads the model file, and integrate it into your OpenCV application.
You may use LIBSVM instead, but really you are then faced with the same problems as with svmlight.
For ideas on how to convert the support vectors so you can use them with the HOG detector, see Training custom SVM to use with HOGDescriptor in OpenCV.
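As a rough sketch of that conversion using the modern cv2.ml API (which replaced CvSVM; the feature and label arrays here are random placeholders), a trained linear SVM can be collapsed into the single detecting vector that HOGDescriptor expects:

    import cv2
    import numpy as np

    # Placeholder data: 3780 is the descriptor length of the default
    # 64x128 HOG window; replace with your real HOG features and labels.
    rng = np.random.default_rng(0)
    features = rng.standard_normal((200, 3780), dtype=np.float32)
    labels = np.repeat(np.int32([1, -1]), 100).reshape(-1, 1)

    svm = cv2.ml.SVM_create()
    svm.setType(cv2.ml.SVM_C_SVC)
    svm.setKernel(cv2.ml.SVM_LINEAR)
    svm.train(features, cv2.ml.ROW_SAMPLE, labels)

    # For a linear SVM, OpenCV stores one compressed support vector.
    sv = svm.getSupportVectors()
    rho, _, _ = svm.getDecisionFunction(0)
    detector = np.append(sv[0], -rho).astype(np.float32)

    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(detector)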
Hello everybody. I am doing a project that consists in detecting objects using Kinect and SVM and ANN machine learning. If it is possible, I would like the names of libraries for SVM and ANN with a graphical tool, because I only want to train the ANN with that library, save it as .xml, and then load the .xml with OpenCV!
SVM is a classifier used to classify samples based upon their feature vectors. So your task is to convert the images into feature vectors which can be used by SVM for its training and testing.
OK, to create a feature vector from your images there are several possibilities, and I am going to mention some very common techniques:
A very easy method is to create a normalized hue histogram of each of your images. Let's say you have created a hue histogram with 5 bins. Based upon your image's colors there will be some values in these 5 bins. Let's say the values look like this: { 0.32 0.56 0 0 0.12 }. So now this is one input vector with 5 dimensions (i.e. the number of bins). You have to do the same procedure for all training samples, and then you will do it for the test image too.
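In OpenCV that histogram might be computed like this (a sketch; the image path is hypothetical):

    import cv2

    img = cv2.imread('sample.jpg')                     # hypothetical path
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    # 5-bin histogram over the hue channel (hue range is [0, 180) in OpenCV).
    hist = cv2.calcHist([hsv], [0], None, [5], [0, 180])
    feature = (hist / hist.sum()).ravel()              # normalized 5-d vector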
Extract some features from your input samples (e.g. by using SIFT or SURF) and then create their descriptors. You can then use these descriptors as the input to your SVM for training.
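A sketch of the descriptor extraction (SIFT only, since SURF is not included in default OpenCV builds; the path is hypothetical). Note that the number of descriptors varies per image, so in practice they are usually aggregated into a fixed-size vector (e.g. with a bag-of-visual-words) before training the SVM:

    import cv2

    gray = cv2.imread('sample.jpg', cv2.IMREAD_GRAYSCALE)  # hypothetical path
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    # descriptors: float32 array of shape (n_keypoints, 128)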