Extract vectors from LDA2vec model - machine-learning

I want to implement lda2vec, extract the resulting vectors, and then feed them to an ML classifier. I tried the implementation at https://github.com/TropComplique/lda2vec-pytorch but failed:
I get the error No such file 'model_state.pytorch' when I run explore_trained_model.ipynb.
Can anyone help me?
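(For reference, that error usually just means the checkpoint file the notebook tries to load has not been written yet, e.g. because the training step was skipped or saved its output elsewhere. Below is a minimal sketch of the usual PyTorch save/load pattern, using a stand-in embedding module rather than the repository's actual lda2vec model class.)
import torch
import torch.nn as nn

# Stand-in for the trained lda2vec model; the real class and its
# hyperparameters live in the repository and are not shown here.
model = nn.Embedding(num_embeddings=1000, embedding_dim=50)

# After training: persist the parameters under the filename the
# exploration notebook expects to find on disk.
torch.save(model.state_dict(), 'model_state.pytorch')

# Later (e.g. inside explore_trained_model.ipynb): rebuild the model
# with the same shapes, reload the weights, and extract the vectors.
model.load_state_dict(torch.load('model_state.pytorch'))
doc_vectors = model.weight.data.numpy()  # features for an ML classifier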

Related

model.predict_classes vs model.predict_generator in keras

I understand that predict_generator outputs probabilities. To get the class, I then find the index of the greatest probability, which should be the most probable class. However, I find that after doing this I get a different output than if I call predict_classes. I do not understand why. Can someone explain this, please?
The generator in Keras uses glob to list folders, which are sorted alphabetically; that order defines the class indices. You can save the class mapping used during training with:
import json

# save the class-name -> index mapping to JSON
class_json = json.dumps(train_generator.class_indices)
with open("class.json", "w") as class_file:
    class_file.write(class_json)
The samples are shuffled within the batch generator (here), so when a batch is requested by fit_generator or evaluate_generator, random samples are returned.
Another possibility, if this is being done on images, is to not use rescale=1./255 in ImageDataGenerator, as mentioned in https://github.com/fchollet/keras/issues/3477
Hope that helps!
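To reproduce what predict_classes does on top of predict_generator, you can take the argmax of each probability row and map it back through the inverted class_indices dictionary. A minimal sketch, assuming Keras 2's predict_generator signature, the model and train_generator from the setup above, and a test_generator created with shuffle=False (otherwise the prediction order will not match the sample order):
import numpy as np

# one probability row per sample, in the order the (non-shuffled)
# generator yields them
probs = model.predict_generator(test_generator, steps=len(test_generator))

# invert {"class_name": index} into {index: "class_name"}
index_to_class = {v: k for k, v in train_generator.class_indices.items()}

# argmax over the rows gives the same result predict_classes would
# produce on the raw arrays
predicted_indices = np.argmax(probs, axis=1)
predicted_labels = [index_to_class[i] for i in predicted_indices]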

deeplearning4j with SVHN dataset

I am trying to build a CNN with deeplearning4j using the SVHN dataset (http://ufldl.stanford.edu/housenumbers/); in particular, I'm using
Format 2: Cropped Digits
These are MATLAB files, and each one contains a struct with a 4-D tensor and an array of labels. I would like to load them in my deeplearning4j code, so I looked around and found the class MatlabRecordReader.java in deeplearning4j/DataVec (https://github.com/deeplearning4j/DataVec/blob/master/datavec-api/src/main/java/org/datavec/api/records/reader/impl/misc/MatlabRecordReader.java), but I can't understand how to use it. Does anybody have experience with this?
Thanks in advance
Here is a reference for "datavec":
http://deeplearning4j.org/DataVec
And if you look at:
http://nd4j.org/tensor
All of deeplearning4j's neural nets are written using ND4J ("MATLAB for Java"), so this should be pretty easy to map; you'll see it more or less corresponds to MATLAB.
What might be easier is to write the values out as a CSV and reshape them to the proper shape instead; if you use C ordering it should work fine. If you do that you can just use the CSVRecordReader (see the sketch after this answer).
The MatlabRecordReader hasn't been used by many people, and I think it may only work with matrices (it's been a while), so I would try the CSV route first.
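The CSV route needs the MATLAB files flattened first. A minimal sketch of that conversion in Python (assuming scipy is available and relying on the Format 2 field names X and y in the SVHN .mat files); the resulting file can then be consumed with DataVec's CSVRecordReader:
import numpy as np
import scipy.io as sio

# Format 2 "Cropped Digits": X is 32x32x3xN uint8, y is Nx1 labels
mat = sio.loadmat("train_32x32.mat")
images, labels = mat["X"], mat["y"].ravel()

# move the sample axis first, then flatten each image in C order so
# the row layout is predictable when reshaping on the Java side
n = images.shape[3]
images = np.transpose(images, (3, 0, 1, 2)).reshape(n, -1)

# SVHN encodes the digit 0 as label 10; remap it to 0..9
labels = np.where(labels == 10, 0, labels)

# one row per sample: label followed by the 32*32*3 pixel values
rows = np.column_stack([labels, images])
np.savetxt("svhn_train.csv", rows, fmt="%d", delimiter=",")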

Unable to find feed input error while predicting on re-trained Inception-V3 in TensorFlow

I'm currently trying to make predictions on a re-trained Inception-V3 model in TensorFlow.
When I try to run inference on an image with
bazel-bin/tensorflow/examples/label_image/label_image \
--graph=/path/output_graph.pb --labels=/path/output_labels.txt \
--output_layer=final_result \
--image=/path/to/test/image
I'm getting an error
E tensorflow/examples/label_image/main.cc:303] Running model failed: Not found: FeedInputs: unable to find feed output Mul
I used transfer learning to fine-tune Inception (trained on the ImageNet dataset) on my own 1000+ classes. The training and evaluation processes were fine. I exported the graph with tf.train.write_graph() and froze it with https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py
Has anyone faced this problem?
It seems that in the graph you are using, the feed or "input_layer" node has been renamed and is no longer called "Mul". You need to find the name of the node where inputs should be injected into your saved graph, and pass the node name via the --input_layer flag.
The easiest way to find the node name is just to make sure to set it explicitly to something you know when you build the graph in the first place.
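If the graph was produced by a retraining script, one way to discover the actual feed node name is to parse the frozen GraphDef and print the names of likely input ops. A minimal sketch using the TF 1.x API (the .pb path is a placeholder):
import tensorflow as tf

# load the frozen graph and list candidate input nodes;
# Placeholder and DecodeJpeg ops are the usual feed points
graph_def = tf.GraphDef()
with tf.gfile.GFile("/path/output_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    if node.op in ("Placeholder", "DecodeJpeg"):
        print(node.name, node.op)
Whatever name comes out is what should be passed to --input_layer.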

OpenCV 2.4.6.1 - FaceRecognition - Predicting Several Candidates

I am trying to output several candidates (labels) when doing a prediction with the FaceRecognizer class in OpenCV. The FaceRecognizer::predict() function outputs only one choice. What I wanted is to get several answers (good candidates) within some threshold/range. I was wondering if this is possible at all?
Thanks for reading.

OpenCV - training new LatentSVMDetector Models

I haven't found any method to train new latent SVM detector models using OpenCV. I'm currently using the existing models given in the xml files, but I would like to train my own.
Is there any method for doing so?
Thank you,
Gil.
As of now only DPM-detection is implemented in OpenCV, not training.
If you want to train your own models, the most reliable approach is to use Felzenszwalb's and Girshick's MATLAB code (most of the heavy lifting is implemented in C) (http://www.cs.berkeley.edu/~rbg/latent/) (http://www.rossgirshick.info/latent/). It is reliable and works reasonably fast.
If you want to do it in C only, there is an implementation here (http://libccv.org/doc/doc-dpm/) that I haven't tried myself.
I think there is a function in the octave version of the author's code here
(Octave Version of DPM). It is in step #5,
mat2opencvxml('./INRIA/inriaperson_final.mat', 'inriaperson_cascade_cv.xml');
I will try it and let you know about the result.
EDIT
I tried to convert the .mat file from the Octave version I mentioned before into an .xml file and compared the result with the built-in OpenCV .xml model: the structure of the two xmls was different (tags, #components, ...). It seems this version of the Octave DPM code generates xml files for a later OpenCV version (I am using 2.4).
VOC-release3.1 is the one that matches OpenCV 2.4.14. I converted an already trained model from this version using the mat2xml function available in OpenCV, and the resulting xml file loads and works with OpenCV successfully. Here are some helpful links:
mat2xml code
VOC-release-3.1
How To Train DPM on a New Object
