Dataset for a drone landing place - machine-learning

Good morning. I am trying to build a machine learning project that will recognize a drone landing place (stairs to a home, or a doormat).
I need a dataset of such images, but I do not know how to process them.

You can try looking for a dataset in Google Dataset Search:
https://datasetsearch.research.google.com/
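On the processing side, a common starting point is to organize the images into one folder per class, with the folder name serving as the label. A minimal sketch using only the standard library (the folder names `stairs` and `doormat` are assumptions taken from the question, not a fixed convention):

```python
import os

def collect_dataset(root):
    """Walk root/<class_name>/ and return (path, label) pairs.

    The folder name is used as the class label, so a layout like
    root/stairs/img001.jpg and root/doormat/img002.jpg yields the
    labels "stairs" and "doormat".
    """
    samples = []
    for label in sorted(os.listdir(root)):
        class_dir = os.path.join(root, label)
        if not os.path.isdir(class_dir):
            continue
        for name in sorted(os.listdir(class_dir)):
            # keep only common image extensions, skip stray files
            if name.lower().endswith((".jpg", ".jpeg", ".png")):
                samples.append((os.path.join(class_dir, name), label))
    return samples
```

From there, each image can be loaded and resized with a library such as Pillow or OpenCV and fed to whatever classifier you train.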

Related

Creating a dataset for a machine learning project

I am working to create a video from a transcript, where the idea is to choose a series of images based on the meaning of the text. I need to create a model that can pick out images based on the text, but I am struggling with how to choose images in a meaningful way, and also with how to format the dataset of images and text so that it can be used to train a model. Has anyone done anything similar to this?

Image classification roadmap for PUBG guns

I am trying to create a personal project using 4-5 guns from PUBG Mobile along with their different skins. I want to create an image classifier that classifies all of these guns separately. Can you please help me with how to start and proceed? For example: how to create the dataset and take the images; what data augmentation to apply (scaling, shifting, rotating, etc.); which model to use (AlexNet? A VGG model?); key points to keep in mind; Python libraries; everything.
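On the data augmentation part of this question: in practice the transforms mentioned (scaling, shifting, rotating) are usually applied with a library such as Keras' ImageDataGenerator or torchvision's transforms, but the core idea can be sketched directly on NumPy arrays. This is an illustrative toy, not a complete pipeline:

```python
import numpy as np

def augment(img):
    """Return simple geometric variants of an H x W x C image array.

    Covers flips, 90-degree rotations, and small shifts; augmentation
    libraries add random scaling, brightness changes, etc. on top.
    """
    variants = [
        img,                       # original
        np.fliplr(img),            # horizontal flip
        np.rot90(img),             # 90-degree rotation
        np.roll(img, 5, axis=1),   # shift 5 pixels right (wraps around)
    ]
    return variants
```

Each training image then contributes several samples, which helps a small dataset of gun screenshots generalize better.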

OpenFace: how to import pictures of people and compare any picture later on

I am trying to understand OpenFace.
I have installed the Docker container, run the demos, and read the docs.
What I am missing is, how to start using it correctly.
Let me explain to you my goals:
I have an app on a Raspberry Pi with a webcam. When I start the app, it takes a picture of the person in front of it.
It should then send this picture to my OpenFace app and check if the face is known. Known in this context means that I have already added pictures of this person to OpenFace before.
My questions are:
Do I need to train OpenFace beforehand, or could I just put the images of the people in a directory or something and compare the webcam picture on the fly with these directories?
Do I compare with images or with a generated .pkl?
Do I need to train OpenFace for each new person?
It feels like I am missing a big thing that would make the required workflow clearer to me.
Just for the record: With help of the link I mentioned I could figure it out somehow.
Do I need to train OpenFace beforehand, or could I just put the images of the people in a directory or something and compare the webcam picture on the fly with these directories?
Yes, training is required in order to compare any images.
Do I compare with images or with a generated .pkl?
Images are compared against the generated classifier .pkl file.
The .pkl file is generated when training OpenFace:
./demos/classifier.py train ./generated-embeddings/
This will generate a new file called ./generated-embeddings/classifier.pkl. This file contains the SVM model you'll use to recognize new faces.
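For intuition about what that classifier works with: OpenFace maps each face to a 128-dimensional embedding, and recognition boils down to asking which known embedding is closest (the SVM in classifier.pkl is a more robust version of this idea). A toy nearest-neighbor sketch, where the function name and threshold are illustrative assumptions, not OpenFace's actual API:

```python
import numpy as np

def identify(embedding, known, threshold=0.8):
    """Return the closest known person, or None if nothing is close.

    known: dict mapping person name -> 128-d numpy embedding.
    Uses squared L2 distance between embeddings; the threshold value
    here is only a placeholder for the sketch.
    """
    best_name, best_dist = None, float("inf")
    for name, ref in known.items():
        dist = float(np.sum((embedding - ref) ** 2))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None
```

This also explains why training is needed: the embeddings of the known people have to be computed and stored (in OpenFace's case, inside the generated .pkl) before any on-the-fly comparison can happen.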
Do I need to train OpenFace for each new person?
For this question I don't have an answer yet, but only because I have not looked deeper into this topic.

How to do continuous video classification with TensorFlow Inception and a Raspberry Pi?

I have retrained the TensorFlow Inception image classification model on my own collected dataset, and it is working fine. Now I want to run a continuous image classifier on live camera video. I have a Raspberry Pi camera for input.
Here is the Google I/O 2017 link (https://www.youtube.com/watch?v=ZvccLwsMIWg&index=18&list=PLOU2XLYxmsIJqntMn36kS3y_lxKmQiaAS). I want to do the same as shown in the video at 3:20.
Is there any tutorial to achieve this?
Step one
Put your TensorFlow model aside for this first step. Follow tutorials online, like this one, that show how to get an image from your Raspberry Pi.
You should be able to prove to yourself that your code works by displaying the images on a device or FTPing them to another computer that has a screen.
You should also be able to benchmark the rate at which you can capture images, and it should be about 5 per second or faster.
Step two
Look up and integrate image resizing as needed. Google and Stack Overflow are great places to search for how to do that. Again, verify that you are able to resize the image to exactly what your TensorFlow model needs.
Step three
Copy some of the images over to your dev environment and verify that they work as is.
Step four
FTP your trained TensorFlow model to the Pi and install the supporting libraries. Integrate the pieces into one codebase and turn it on.
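The four steps above come together as one capture-resize-classify loop. This is a hardware-free sketch: `capture` and `classify` are placeholders for the Pi camera and the retrained TensorFlow model, and the nearest-neighbor resize stands in for a proper library call such as cv2.resize:

```python
import numpy as np

def resize_nearest(img, height, width):
    """Nearest-neighbor resize of an H x W x C array to match the
    model's expected input size (step two)."""
    ys = np.arange(height) * img.shape[0] // height
    xs = np.arange(width) * img.shape[1] // width
    return img[ys][:, xs]

def run_pipeline(capture, classify, n_frames, size=(224, 224)):
    """Step one: capture; step two: resize; step four: classify."""
    labels = []
    for _ in range(n_frames):
        frame = capture()                     # e.g. a frame from the Pi camera
        small = resize_nearest(frame, *size)  # match the model's input shape
        labels.append(classify(small))        # run the retrained model
    return labels
```

Benchmarking this loop (as suggested in step one) tells you the classification rate you can sustain on the Pi.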

Comparing images using OpenCV or something more useful

I need to compare two images in a project.
The images would be two fruits of the same kind, let's say two different images of two different apples.
To be more clear, the database will have images of the stages an apple goes through from the day it was picked from a tree until it gets rotten.
The user would upload an image of the apple they have, and the software should compare it to all those images in the database, retrieve the data of the matching image, and tell the user which stage it is at.
I have compared images before using OpenCV (Emgu), but I really don't know whether that is the best way.
I need expert advice: is what I described even possible, or will the whole database of images match the user's image?
And is this "image processing" or something else?
And are there any suggested tutorials to learn how to do this?
I know it doesn't seem totally clear yet, but it's a crazy idea that I wish I could find a way to bring to life!
N.B. The project will be an Android application.
This is an example of a supervised image classification problem, which is a pretty broad field. You can read up on image classification here.
The way that you would approach this problem would be to define a few stages of decay (fresh, starting to rot, half rotten, completely rotten), put together a dataset of many images of the fruit in each stage, and train an image classifier on each stage. The sample dataset should contain images of many different pieces of fruit in many different settings. If you want to support different types of fruit, you would need to train a different classifier for each fruit.
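As a toy illustration of that train-per-stage idea (an assumption-laden sketch, not production advice): if each decay stage shifts the apple's overall colour, even a nearest-centroid classifier on the mean colour can separate stages. Real systems would use one of the tools below or a convolutional network:

```python
import numpy as np

def train_stages(stage_images):
    """stage_images: dict mapping stage name -> list of H x W x 3 arrays.

    Returns one mean-colour centroid per decay stage, averaged over
    all training images for that stage.
    """
    return {stage: np.mean([img.mean(axis=(0, 1)) for img in imgs], axis=0)
            for stage, imgs in stage_images.items()}

def predict_stage(img, centroids):
    """Assign the query image to the stage with the closest centroid."""
    feat = img.mean(axis=(0, 1))
    return min(centroids, key=lambda s: np.sum((feat - centroids[s]) ** 2))
```

This also answers the worry about the whole database matching: the classifier does not look for an exact match, it assigns the query to whichever stage it resembles most.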
There are many image classification tools out there. To name a few:
OpenCV's Haar classifier
dlib's HOG classifier
MATLAB's Computer Vision System Toolbox
VLFeat
It would be up to you to look into which approach would work best for your situation.
Given that this is a fairly broad problem, I wouldn't expect to come up with a solid solution quickly unless you've had experience with image classification. If you are trying to develop a product, I would recommend getting in touch with a computer vision expert that you could contract to solve it.
If you are just looking to learn more about image classification, however, this could be a fun way to play around with different tools and get a feel for what's out there. You may want to start by learning about Machine Learning in general. Caltech offers a free online course that gives a pretty good intro to the subject.
