I have been following the Turi Create tutorial linked here.
I am able to train a model successfully per the instructions, and I am also able to consume the model in the iOS app with the help of the code provided there. But I am not able to figure out how to get the actual most similar images from the distances this model returns. Also, when I run it on an iPhone, it returns an offset and an element, which I am not able to interpret. Please see the screenshot.
My aim in the iOS application is to take an input image, pass it to the model, and then show the actual 5 or 10 most similar images, not just the distances.
Be sure that you have an id column, or add one to your SFrame using:
reference_data = reference_data.add_row_number()
Here is a code snippet to create the image_similarity model and get the ten most similar images for a sample image. I use fifty as the k value to demonstrate how to pull the n most similar images out of a larger group in your similarity graph.
import turicreate as tc

# assumes reference_data is an SFrame of images with the "id" column added above
model = tc.image_similarity.create(reference_data)

# build a similarity graph with the 50 nearest neighbors per image
similarity_graph = model.similarity_graph(k=50)
similar_images = similarity_graph.edges

# pick a sample image from reference_data
sample_index = 3
sample_similar_images = similar_images[similar_images["__src_id"] == sample_index].sort("rank")

# get the 10 most similar images
most_similar_ids = sample_similar_images["__dst_id"][0:10]
most_similar_images = reference_data.filter_by(most_similar_ids, "id")
For details on interpreting the distance MultiArray from the Core ML model, see the sample code in the Turi Create documentation. In the output you saw on the iPhone, the offset is the index into the MultiArray (which corresponds to a reference image id), and the element is the distance from your input image to that reference image.
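As a side note, if you only need the neighbors on the Python side rather than on the device, you can also query the model directly instead of building the full similarity graph. A minimal sketch, assuming reference_data was saved as an SFrame (the path here is illustrative):

import turicreate as tc

reference_data = tc.SFrame("reference_data.sframe")  # illustrative path
reference_data = reference_data.add_row_number()     # adds the "id" column
model = tc.image_similarity.create(reference_data)

# query the 10 nearest reference images for the image in row 3
sample = reference_data[3:4]
neighbors = model.query(sample, k=10)

# map the neighbor labels back to the actual images
most_similar_images = reference_data.filter_by(neighbors["reference_label"], "id")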
I am working on an e-learning platform using PHP. It recommends videos if you fail a specific question. How do I go about creating the recommender system that takes in tags and recommends relevant videos?
import pandas as pd

videos = pd.read_csv("/file_path/vid_com_dup.csv", sep=',',
                     names=['vid_id', 'ques_id', 'vid_name', 'vid_tags'])
videos.head()
The CSV file includes the following columns:
vid_id - primary key and id for the videos.
ques_id - foreign key.
vid_name - the name of the video.
vid_tags - tags in the form (1+1, single digit, addition, grade 1).
The same tags also appear in the question table.
If a question has the tags (1+1, single digit, addition, grade 1), I want to build a recommender that takes those tags, compares them with videos that have similar tags, and gives recommendations.
I finally got around it; hope it will help someone else.
Load the dataset.
Split the tags: build a binary feature matrix with one column per tag, where the value is 1 if the tag is present for a video and 0 otherwise.
Scale and transform the feature matrix.
Apply scikit-learn's unsupervised nearest neighbors; you get back an indices matrix and a distances matrix. Why unsupervised nearest neighbors? For this problem we only want the nearest neighbours based on distance so we can recommend them; we are not classifying anything.
You're all done. All that is needed now is a function to get the nearest videos; a sketch of the whole pipeline is below.
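Here is a minimal sketch of the steps above using pandas and scikit-learn. The CSV layout follows the question; the recommend helper, the cosine metric, and the sample tags are illustrative assumptions:

import pandas as pd
from sklearn.neighbors import NearestNeighbors

videos = pd.read_csv("/file_path/vid_com_dup.csv", sep=',',
                     names=['vid_id', 'ques_id', 'vid_name', 'vid_tags'])

# split the comma-separated tags into a binary feature matrix:
# one column per tag, 1 if the video has the tag, 0 otherwise
features = videos['vid_tags'].str.get_dummies(sep=',')
features.columns = features.columns.str.strip()  # trim spaces after commas

# fit unsupervised nearest neighbors on the tag matrix
# (0/1 features are already on one scale, so extra scaling is optional)
nn = NearestNeighbors(n_neighbors=5, metric='cosine')
nn.fit(features.values)

def recommend(question_tags, n=5):
    # build the same kind of binary vector for the question's tags
    query = pd.Series(0, index=features.columns)
    query[query.index.isin(question_tags)] = 1
    distances, indices = nn.kneighbors([query.values], n_neighbors=n)
    return videos.iloc[indices[0]][['vid_id', 'vid_name']]

print(recommend(['single digit', 'addition', 'grade 1']))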
I am trying to create a handwritten digit recogniser that uses a Core ML model.
I am taking the code from another similar project:
https://github.com/r4ghu/iOS-CoreML-MNIST
But I need to incorporate my ML model into this project.
This is my model (the input image is 299x299):
https://github.com/LOLIPOP-INTELLIGENCE/createml_handwritten
My question is: what changes need to be made in that similar project so that it incorporates my Core ML model?
I tried changing the shapes to 299x299, but that gives me an error.
In viewDidLoad, you should change the number 28 to 299 in the call to CVPixelBufferCreate(). In the original app, the mlmodel expects a 28x28 image but your model uses 299x299 images.
However, there is something else you need to change as well: replace kCVPixelFormatType_OneComponent8 with kCVPixelFormatType_32BGRA or kCVPixelFormatType_32RGBA. The original model uses grayscale images but yours expects color images.
P.S. Next time include the actual error message in your question. That's an important piece of information for people who are trying to answer. :-)
I want to do binary image classification and I thought ImageJ would be a good platform to do it, but alas, I am at a loss. The pseudocode for what I want to do is below. How do I implement this in ImageJ? It does not have to follow the pseudocode exactly, I just hope that gives an idea of what I want to do. (If there is a plugin or a more suitable platform that does this already, can you point me to that?)
For training:

    train_directory = get folder from user
    train_set = get all image files in train_directory
    class_file = get text file from user
    // each row of class_file contains an image name and a "1" or "-1"
    // a "1" indicates that the image belongs to the positive class
    model = build_model_from_train_set_and_class_file
    write_model_to_output_file
    // end train

For testing:

    test_directory = get folder from user
    test_set = get all images in test_directory
    if (user wants to)
        class_file = get text file from user
    // normally for testing one would always know the class of the image
    // the if statement is just in case the user does not
    model = read_model_file
    output = classify_images_as_positive_or_negative
    write_output_to_file
Note that there should not be any preprocessing done by the user: no additional set of masks, no drawings, no additional labels, etc. The algorithm must be able to figure out what is common among the training images from the training images alone and build up a model appropriately. Of course, the algorithm can do any preprocessing it wants to; it just cannot rely on that preprocessing being done already.
I tried using CVIPtools for this, but it wants a mask on all of the images to do feature extraction.
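For what it's worth, here is what that pipeline might look like sketched in Python with scikit-learn rather than ImageJ, just to make the desired behavior concrete. The directory names and class-file format follow the pseudocode; the raw-pixel features and LinearSVC model are placeholder assumptions:

import os
import pickle
import numpy as np
from PIL import Image
from sklearn.svm import LinearSVC

def load_images(directory, size=(64, 64)):
    # load every image as grayscale, resize, and flatten into a feature row
    names = sorted(os.listdir(directory))
    rows = [np.asarray(Image.open(os.path.join(directory, n))
                       .convert('L').resize(size), dtype=float).ravel()
            for n in names]
    return names, np.array(rows)

def read_class_file(path):
    # each row: "<image name> 1" or "<image name> -1"
    with open(path) as f:
        return {name: int(label) for name, label in
                (line.split() for line in f if line.strip())}

# training
names, X = load_images("train_directory")
labels = read_class_file("class_file.txt")
y = np.array([labels[n] for n in names])
model = LinearSVC().fit(X, y)
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)

# testing
test_names, X_test = load_images("test_directory")
with open("model.pkl", "rb") as f:
    model = pickle.load(f)
for name, prediction in zip(test_names, model.predict(X_test)):
    print(name, prediction)  # 1 = positive class, -1 = negative class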
The use of OpenCV is fairly simple: create a FaceRecognizer object, train it, give it a source of images, and then check your given image against those images using the trained eigenfaces.
The problem is that the predict method has only two modes: find a match, or find a match and get the confidence of that match.
What I'd like instead is a list of, say, the top ten matches along with their confidences (my reference images are often of low quality/lighting, so I don't expect many high-confidence matches, but rather many low-confidence ones).
If your opencv_contrib version is newer than 19 Jan, you can. This functionality was merged on 19 Jan (#465).
To use it:
Mat testImage = ...; // load your test image
Ptr<TopNPredictCollector> collector = TopNPredictCollector::create(10, THRESH);
model->predict(testImage, collector);
Ptr<std::list<std::pair<int, double>>> result = collector->getResult();
Before that, the only collector available was MinDistancePredictCollector(), which is why you could only get the closest match (see here).
I need to build a face recognition system using OpenCV LBP, and here is the link to the facerec code.
In this code, a CSV file has to be generated for multiple users, and the code recognizes whether or not the input face is in the CSV list.
My intention is to do face verification against a single user: the user registers his face the first time (I will write it to the CSV), and whenever the same user tries to authenticate, I will collect a few images of the user and compare them with the previous CSV file. How can I do this with the above code?
Use a threshold on the value returned by the predict function to decide whether the face is known: if the value is smaller than the threshold, treat the face as known.
The first time, the user's face will be predicted as unknown, so put it in the DB (the CSV file too). On the next pass it should be predicted with a somewhat better value, which you can accept, and so on, until you decide the prediction is good enough for you.
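A minimal sketch of that loop with OpenCV's LBPH recognizer in Python (requires opencv-contrib-python). The file names, the single label 0, and the threshold value are illustrative assumptions:

import cv2
import numpy as np

recognizer = cv2.face.LBPHFaceRecognizer_create()

# enrollment: train on a few same-sized grayscale images of the single user
enroll_paths = ["user_0.png", "user_1.png", "user_2.png"]
enroll = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in enroll_paths]
recognizer.train(enroll, np.array([0] * len(enroll), dtype=np.int32))

# verification: for LBPH, a smaller value from predict() means a closer match
THRESHOLD = 60.0  # tune this on your own data
probe = cv2.imread("probe.png", cv2.IMREAD_GRAYSCALE)
label, confidence = recognizer.predict(probe)

if confidence < THRESHOLD:
    print("verified, confidence:", confidence)
    # optionally retrain with the new sample, as suggested above
else:
    print("rejected, confidence:", confidence)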