I am trying to output several candidates (labels) when doing prediction with the FaceRecognizer class in OpenCV. The function FaceRecognizer::predict() outputs only one choice. What I want is to get several answers (good candidates) within some threshold/range. I was wondering if this is possible at all?
Thanks for reading.
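In recent OpenCV contrib builds (3.3+, if I recall correctly), the face module exposes a collector-based predict that returns every candidate under a distance threshold. A hedged sketch, assuming opencv-contrib-python and an already-trained LBPH model (the model path and gray_face input are placeholders):

import cv2

model = cv2.face.LBPHFaceRecognizer_create()
model.read('trained_model.yml')  # hypothetical path to a trained model

# collect every (label, distance) pair with distance below the threshold
collector = cv2.face.StandardCollector_create(threshold=100.0)
model.predict_collect(gray_face, collector)  # gray_face: aligned grayscale crop

for label, distance in collector.getResults(sorted=True):
    print(label, distance)  # best candidates first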
A superimposed display for train/val splits using StatisticsGen
Hi,
I'm currently using a TFX pipeline inside Kubeflow. I'm struggling to get StatisticsGen to show a single graph with the train and validation split curves superimposed, allowing a better comparison of the distributions. This is exactly how tfdv.visualize_statistics(lhs_statistics=train_stats, rhs_statistics=eval_stats, lhs_name='train', rhs_name='eval') behaves (see illustration 1), and I would like StatisticsGen to also provide such a superimposed-splits graph.
Thanks for any reference or help so that I can move forward.
Regards
You can use something like
# docs-infra: no-execute
# Compare evaluation data with training data
tfdv.visualize_statistics(lhs_statistics=eval_stats, rhs_statistics=train_stats,
                          lhs_name='EVAL_DATASET', rhs_name='TRAIN_DATASET')
From the TensorFlow Data Validation tutorial.
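To get the train/eval statistics out of a StatisticsGen run in the first place, you can load them from the component's output artifact. A hedged sketch, assuming an interactive TFX context; the exact loader function and per-split file names ('Split-train/FeatureStats.pb' here) vary across TFX/TFDV versions:

import os
import tensorflow_data_validation as tfdv

stats_uri = statistics_gen.outputs['statistics'].get()[0].uri  # StatisticsGen output
train_stats = tfdv.load_stats_binary(os.path.join(stats_uri, 'Split-train', 'FeatureStats.pb'))
eval_stats = tfdv.load_stats_binary(os.path.join(stats_uri, 'Split-eval', 'FeatureStats.pb'))
# then feed both into tfdv.visualize_statistics as shown above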
I am currently implementing a CNN with a custom error function.
The problem I am trying to solve is physics-based, so I can calculate the maximal achievable precision, or to put it another way, I know the best possible (i.e. minimal) standard deviation I can achieve. Those best possible precisions are calculated during the generation of the training data using the Cramér-Rao lower bound (CRLB).
Right now, my error function looks something like this (in Keras):
from keras import backend as K

def customLoss(yTrue, yPred):
    STD = yTrue[:, 10:20]   # the 10 CRLB values packed into the target vector
    yTrue = yTrue[:, 0:10]  # the 10 actual target parameters
    dev = K.mean(K.abs(K.abs(yTrue - yPred) - STD))
    return dev
In this case, I have 10 parameters, so I want to estimate with 10 CRLBs. I put the CRLBs in the target vector just to be able to handle them in the error function.
To my question: this method works, but it is not what I want. The problem is that the error is calculated from a single prediction of the network, but to be correct the network would have to predict the same dataset/batch multiple times. That way I would be able to see the standard deviation of the predictions and use it to calculate the error (I'm using a Bayesian CNN).
Does someone have an idea how to implement such a function in Keras or TensorFlow (I would also not mind switching to PyTorch)?
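One common way to get the "predict the same batch multiple times" behaviour is Monte-Carlo dropout; a hedged sketch (not the original poster's method), assuming a tf.keras model that contains Dropout layers:

import numpy as np
import tensorflow as tf

def mc_predict(model, x, n_samples=50):
    # run the model with dropout active (training=True) n_samples times
    preds = np.stack([model(x, training=True).numpy() for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)

# the per-parameter std can then be compared against the CRLB targets in a loss
mean_pred, std_pred = mc_predict(model, x_batch)  # x_batch: a batch of inputs (hypothetical)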
I have an image with 8 channels. I have a conventional algorithm where weights are applied to each of these channels to get an output of '0' or '1'. This works fine with several samples and complex scenarios. I would like to implement the same thing in machine learning using a CNN.
I am new to ML and started looking at tutorials, which seem to deal exclusively with image-processing problems: handwriting recognition, feature extraction, etc.
http://cv-tricks.com/tensorflow-tutorial/training-convolutional-neural-network-for-image-classification/
https://leonardoaraujosantos.gitbooks.io/artificial-inteligence/content/neural_networks.html
I have set up Keras with Theano as the backend. Basic Keras samples are working without problems.
What steps do I need to follow to achieve the same result using a CNN? I do not understand the use of filters, kernels, and strides in my use case. How do we provide training data to Keras if the pixel channel values and outputs are in the form below (see the sketch after the list)?
Pixel#1: f(C1, C2, ..., C8) = 1
Pixel#2: f(C1, C2, ..., C8) = 1
Pixel#3: f(C1, C2, ..., C8) = 0
...
Pixel#N: f(C1, C2, ..., C8) = 1
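For concreteness, a minimal sketch of how such data could be laid out as NumPy arrays (placeholder random values stand in for the real channel values and labels):

import numpy as np

N = 1000                                    # number of pixels
X = np.random.rand(N, 8).astype('float32')  # C1..C8 per pixel (placeholders)
y = np.random.randint(0, 2, size=(N,))      # f(C1..C8) per pixel (placeholders)
# a model taking 8 inputs can then be trained with model.fit(X, y, ...)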
I think you should treat this the same way you would use a CNN for semantic segmentation. For an example, look at
https://people.eecs.berkeley.edu/~jonlong/long_shelhamer_fcn.pdf
You can use the same architecture as they use, but for the first layer, instead of using filters for 3 channels, use filters for 8 channels.
For the loss function you can use the same loss function, or something more specific to binary loss.
There are several implementations for Keras with a TensorFlow backend (see also the sketch after the links):
https://github.com/JihongJu/keras-fcn
https://github.com/aurora95/Keras-FCN
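A minimal fully-convolutional sketch of that idea, assuming Keras with a TensorFlow backend (filter counts are illustrative, not tuned):

from keras.models import Model
from keras.layers import Input, Conv2D

inp = Input(shape=(None, None, 8))                          # 8-channel input image
x = Conv2D(32, 3, padding='same', activation='relu')(inp)
x = Conv2D(32, 3, padding='same', activation='relu')(x)
out = Conv2D(1, 1, activation='sigmoid')(x)                 # per-pixel 0/1 probability
model = Model(inp, out)
model.compile(optimizer='adam', loss='binary_crossentropy')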
Since the input is in the form of channel values, and in sequence at that, I would suggest you use Convolution1D. Here you take each pixel's channel values as the input and predict per pixel. Try something like this:
from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, Flatten, Dense

model = Sequential([
    Conv1D(filters=32, kernel_size=3, strides=1, padding='valid',
           activation='relu', input_shape=(8, 1)),
    MaxPooling1D(pool_size=2),
    # ... add as many layers as you want ...
    Flatten(),
    Dense(1, activation='sigmoid'),
])
Use binary_crossentropy as the loss function.
I'm trying to model a CNN with deeplearning4j using the SVHN dataset (http://ufldl.stanford.edu/housenumbers/); in particular, I'm using
Format 2: Cropped Digits
These are Matlab files, and each one contains a struct with a 4-D tensor and an array of labels. I would like to load these in my deeplearning4j code, so I searched and found the class MatlabRecordReader.java in deeplearning4j/DataVec (https://github.com/deeplearning4j/DataVec/blob/master/datavec-api/src/main/java/org/datavec/api/records/reader/impl/misc/MatlabRecordReader.java), but I can't understand how to use it. Does anybody have experience with this?
Thanks in advance.
Here is a reference for DataVec:
http://deeplearning4j.org/DataVec
And if you look at:
http://nd4j.org/tensor
All of deeplearning4j's neural nets are written using ND4J (Matlab for Java), so this should be pretty easy to map; you'll see it more or less maps to Matlab.
What might be easier is to write out the values as a CSV and reshape them to the proper shape instead. If you use C ordering, it should work fine. If you do that, you can just use the CSVRecordReader.
The MatlabRecordReader hasn't been used by a lot of people, and I think it may only work with matrices (it's been a while). I would try the CSV one first.
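A hedged sketch of that CSV route, assuming the SVHN Format 2 files (X: a 32x32x3xN uint8 tensor, y: Nx1 labels) and Python with SciPy for the export:

import numpy as np
import scipy.io as sio

data = sio.loadmat('train_32x32.mat')
X, y = data['X'], data['y'].ravel()   # X: (32, 32, 3, N); y: labels 1..10

# put the sample axis first and flatten each image in C order, so the
# CSVRecordReader side can reshape with C ordering as well
n = X.shape[3]
flat = np.transpose(X, (3, 0, 1, 2)).reshape(n, -1)
rows = np.column_stack([y, flat])     # label first, then 3072 pixel values
np.savetxt('svhn_train.csv', rows, fmt='%d', delimiter=',')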
I'm working on an implementation of an LSTM neural network for sequence classification. I want to design a network with the following parameters:
Input: a sequence of n one-hot vectors.
Network topology: a two-layer LSTM network.
Output: the probability that a given sequence belongs to a class (binary classification). I want to take into account only the last output from the second LSTM layer.
I need to implement this in CNTK, but I'm struggling because its documentation is not written very well. Can someone help me with that?
There is a sequence classification example that does exactly what you're looking for.
The only difference is that it uses just a single LSTM layer. You can easily change this network to use multiple layers by changing:
LSTM_function = LSTMP_component_with_self_stabilization(
    embedding_function.output, LSTM_dim, cell_dim)[0]
to:
num_layers = 2  # for example
encoder_output = embedding_function.output
for i in range(num_layers):
    encoder_output = LSTMP_component_with_self_stabilization(
        encoder_output, LSTM_dim, cell_dim)[0]
However, you'd be better served by using the new layers library. Then you can simply do this:
encoder_output = Stabilizer()(input_sequence)
for i in range(num_layers):
    encoder_output = Recurrence(LSTM(hidden_dim))(encoder_output)
Then, to get your final output that you'd put into a dense output layer, you can first do:
final_output = sequence.last(encoder_output)
and then
z = Dense(vocab_dim)(final_output)
Here you can find a straightforward approach; just add the additional layer like:
Sequential([
    Recurrence(LSTM(hidden_dim), go_backwards=False),
    Recurrence(LSTM(hidden_dim), go_backwards=False),
    Dense(label_dim, activation=sigmoid)
])
Train it, test it, and apply it...
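Since the question asks for only the last output of the second LSTM layer, a hedged variant of the above (assuming the CNTK layers library, cntk >= 2.0; hidden_dim is a hypothetical size) replaces the second Recurrence with Fold, which keeps only the final state:

from cntk.layers import Sequential, Recurrence, Fold, LSTM, Dense
from cntk.ops import sigmoid

hidden_dim = 128  # hypothetical, not from the original answer

model = Sequential([
    Recurrence(LSTM(hidden_dim)),  # first layer: emits the full output sequence
    Fold(LSTM(hidden_dim)),        # second layer: keeps only the last output
    Dense(1, activation=sigmoid),  # probability of the positive class
])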
CNTK published a hands-on tutorial for language understanding that has an end-to-end recipe:
This hands-on lab shows how to implement a recurrent network to process text, for the Air Travel Information Services (ATIS) task of slot tagging (tagging individual words with their respective classes, where the classes are provided as labels in the training data set). We will start with a straightforward embedding of the words, followed by a recurrent LSTM. This will then be extended to include neighboring words and run bidirectionally. Lastly, we will turn this system into an intent classifier.
I'm not familiar with CNTK. But since the question has been left unanswered for so long, perhaps I can offer some advice to help you with the implementation.
I'm not sure how experienced you are with these architectures, but before moving to CNTK (which seemingly has a less active community), I'd suggest looking at other popular frameworks (like Theano, TensorFlow, etc.).
For instance, a similar task in Theano is given here: kyunghyuncho tutorials. Just look for "def lstm_layer" for the definitions.
A Torch example can be found in Karpathy's very popular tutorials.
Hope this helps a bit.