Custom image feature extraction before detection using YOLO V4 - image-processing

I’m working on an object detection application that uses a camera and a sensor. I want to extract the features of a custom image with YOLO v4 before the detection stage and save them to a text file for further clustering analysis together with the sensor data. Would it be possible to extract these features before detection? If so, please suggest the steps.
Thanks in advance.
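One possible route for the question above is OpenCV's DNN module, which can load Darknet YOLOv4 weights and return the activations of any intermediate layer instead of the detection output. The sketch below assumes you have `yolov4.cfg` and `yolov4.weights` locally; the layer name `"conv_104"` is a placeholder — call `net.getLayerNames()` to pick the backbone layer you actually want, then save the flattened activations as plain text for clustering.

```python
import numpy as np

def save_features(features, path):
    """Flatten a feature array and save it as one value per line of text."""
    np.savetxt(path, np.asarray(features, dtype=np.float32).ravel())

def load_features(path):
    """Read the feature vector back for the later clustering step."""
    return np.loadtxt(path, dtype=np.float32)

def extract_yolo_features(image_path, cfg="yolov4.cfg", weights="yolov4.weights",
                          layer="conv_104"):
    """Run an image through YOLOv4 and return one intermediate layer's
    activations (i.e. features computed before the detection heads).
    The cfg/weights paths and the layer name are assumptions, not fixed."""
    import cv2  # local import so the text-file helpers work without OpenCV
    net = cv2.dnn.readNetFromDarknet(cfg, weights)
    img = cv2.imread(image_path)
    blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    return net.forward(layer)  # net.forward(name) returns that layer's output
```

Usage would be `save_features(extract_yolo_features("frame.png"), "features.txt")`, after which `features.txt` can be joined with the sensor readings for clustering.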

Related

How to use OpenCV for OCR: text detection and recognition

I am working on a test application to develop a small text detection and recognition app in Python using Google Colab. Can you suggest any code examples to achieve this? My requirement is to be able to detect and recognize text in an image using OpenCV.
Please advise.
You need to build a pipeline with the following steps if you are working with OpenCV only:
Pre-processing: use OpenCV morphological operations.
Text detection: use the CRAFT model, or find contours in your image.
Recognition: use Tesseract OCR.
In my personal experience, EasyOCR is also very good: solid accuracy, easy to use, and you can train it on your own text as well.
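The OpenCV-only pipeline above can be sketched as below. The morphology/contour part follows the answer directly; the kernel size and the `row_tol` row-grouping tolerance are my assumptions and will need tuning, and the recognition step (not shown) would pass each sorted crop to Tesseract, e.g. via `pytesseract.image_to_string`.

```python
def sort_boxes(boxes, row_tol=10):
    """Order (x, y, w, h) boxes top-to-bottom then left-to-right,
    grouping boxes whose y differs by at most row_tol into one text row,
    so the recognised strings come out in reading order."""
    boxes = sorted(boxes, key=lambda b: b[1])
    rows, current = [], [boxes[0]]
    for b in boxes[1:]:
        if abs(b[1] - current[-1][1]) <= row_tol:
            current.append(b)
        else:
            rows.append(sorted(current, key=lambda b: b[0]))
            current = [b]
    rows.append(sorted(current, key=lambda b: b[0]))
    return [b for row in rows for b in row]

def detect_text_regions(bgr):
    """Pre-process with Otsu thresholding + morphology, then take the
    external contours as candidate text regions (the contour-based
    detection route from the answer above)."""
    import cv2
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # a wide, short kernel merges neighbouring characters into line blobs
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 3))
    dilated = cv2.dilate(binary, kernel)
    contours, _ = cv2.findContours(dilated, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return sort_boxes([cv2.boundingRect(c) for c in contours])
```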

Apple Vision Framework Identify face

Is it possible in the Apple Vision framework to compare faces and recognise whether a person in a picture matches a reference image of that person?
Something like Facebook's face recognition.
Thomas
From the Vision Framework Documentation:
The Vision framework performs face and face landmark detection, text
detection, barcode recognition, image registration, and general
feature tracking. Vision also allows the use of custom Core ML models
for tasks like classification or object detection.
So, no, the Vision Framework does not provide face recognition, only face detection.
There are approaches out there to recognize faces. Here is an example of face recognition in an AR app:
https://github.com/NovatecConsulting/FaceRecognition-in-ARKit
They trained a model that can recognise about 100 people, but you have to retrain it for every new person you want to recognise. Unfortunately, you cannot just feed in two images and have the faces compared.
According to Face Detection vs Face Recognition article:
Face detection just means that a system is able to identify that there is a human face present in an image or video. For example, Face Detection can be used to auto focus functionality for cameras.
Face recognition describes a biometric technology that goes far beyond a way when just a human face is detected. It actually attempts to establish whose face it is.
But...
In a case you need an Augmented Reality app, like Facebook's FaceApp, the answer is:
Yes, you can create an app similar to FaceApp using ARKit.
That is because the simple form of face recognition you need is accessible via the ARKit or RealityKit framework. You do not even need to create an .mlmodel as you would with the Vision and Core ML frameworks.
All you need is a device with a front camera; ARKit 3.0 and RealityKit 1.0 let you detect up to three faces at a time. With ARKit's face tracking you receive an ARFaceAnchor whenever a face has been detected.
Additionally, if you want to use reference images for simple detection, put several reference images in an .arresourcegroup folder in your Xcode asset catalog and add a condition for ARImageAnchor (anchored at the centre of a detected image).

Extract hand from kinect dataset using image processing

I downloaded a Kinect sensor dataset (depth as a text file, plus images) because a Kinect is expensive, and I don't know how to proceed with it. I have to extract the hand from the image. I can't use the Kinect SDK because it only works when a Kinect sensor is connected, so I decided to extract the hand from the image using image processing. Can anyone suggest an algorithm for that, or can I extract the hand by other means?
Thanks in advance.
The colour image and the depth information can be used together for hand detection.
I think you can take the skin region nearest to the camera as the hand, because in this dataset the hand is placed in front of the body.
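The "nearest region to the camera" idea from the answer above can be sketched in pure NumPy: find the closest valid depth value and keep every pixel within a small band of it. The 80 mm band width and the zero-means-invalid convention are assumptions to tune against the actual dataset; the resulting mask would then be intersected with a skin-colour mask from the colour image.

```python
import numpy as np

def nearest_region_mask(depth, band_mm=80, invalid=0):
    """Keep only pixels within band_mm of the closest valid depth value.
    If the hand is held in front of the body, those pixels are the hand.
    Kinect depth maps typically report `invalid` (0) where depth is unknown."""
    depth = np.asarray(depth, dtype=np.float32)
    valid = depth != invalid
    nearest = depth[valid].min()                 # closest surface to the camera
    return valid & (depth <= nearest + band_mm)  # boolean hand-candidate mask
```

A connected-components pass (e.g. `cv2.connectedComponents`) on this mask would then isolate the largest blob as the hand.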

Recognise Faces using opencv

I'm using CascadeClassifier to detect faces in an image.
Here is how the iOS simulator detects faces.
I want to improve this by adding some more features: how can I extend it to recognise similar faces? For example, if I use a photo of one person and then another photo of the same person, how can the application identify that both are the same person?
Is there a method in OpenCV to do it, or can you please give me some tips on where to start learning?
You can use OpenCV's face recognition module:
http://docs.opencv.org/trunk/modules/contrib/doc/facerec/
Object detection and object recognition are two different games. You want to do object recognition, so look at local feature descriptors in OpenCV, such as SIFT and SURF.
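To make the local-feature suggestion concrete, here is a minimal sketch using ORB (a free alternative to the patented SIFT/SURF) with Lowe's ratio test to keep only distinctive matches. The `min_good = 20` threshold is an arbitrary assumption, not a calibrated recognition score — proper face recognition would use OpenCV's face module or an embedding model instead.

```python
def ratio_test(matches, ratio=0.75):
    """Lowe's ratio test: keep a match only when its best distance is
    clearly smaller than the second-best. matches = [(d1, d2), ...]."""
    return [m for m in matches if m[0] < ratio * m[1]]

def same_person(img_a, img_b, min_good=20):
    """Crude similarity check between two face crops:
    ORB keypoints + brute-force kNN matching + ratio test."""
    import cv2
    orb = cv2.ORB_create()
    _, des_a = orb.detectAndCompute(img_a, None)
    _, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(des_a, des_b, k=2)
    good = ratio_test([(m.distance, n.distance) for m, n in pairs])
    return len(good) >= min_good  # True = probably the same person
```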

How to detect an object after the sobel detection in OpenCV

Is there any way to detect a car (object) after Sobel edge detection in OpenCV?
This is my current work, as shown in the image below:
Thanks in advance for any guidelines or solutions!
An object detection task needs quite a bit of effort.
First, extract relevant features from the object you want to detect: a histogram of oriented gradients (HOG) should work well for car detection, so you need a database of car images. Second, learn a model with a machine learning algorithm: boosting or a support vector machine should do the job. You will find implementations of both in OpenCV.
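The HOG-plus-classifier recipe above is usually applied with a sliding window: crop every window position, compute its HOG descriptor, and score it with the trained SVM. This sketch shows the two building blocks; the 64x64 window, 32-pixel step, and HOG cell/block sizes are assumed values, and training the SVM itself (e.g. with `cv2.ml.SVM_create()` on descriptors from the car database) is left out.

```python
def sliding_windows(width, height, win=(64, 64), step=32):
    """Yield the (x, y) top-left corner of every window position.
    Each crop would be described by HOG and scored by the classifier."""
    for y in range(0, height - win[1] + 1, step):
        for x in range(0, width - win[0] + 1, step):
            yield x, y

def hog_of(gray_crop):
    """HOG descriptor of one 64x64 grayscale crop, using OpenCV's
    built-in HOGDescriptor with standard 8x8 cells and 9 bins."""
    import cv2
    hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)
    return hog.compute(gray_crop).ravel()
```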
