Hi, I use OpenCV to detect objects, and that works without problems.
The problem is that when I move the camera, everything is detected as moving, because I detect motion (not color) in real time. How can I recognize whether it is the object that is moving or the camera? I thought about this and came up with an idea:
First, add a point at the center of the image (the image comes from video).
Then, when I check for a moving object, if its distance to that point did not change, the object did not move and the motion came from the camera. Is my idea good, and how do I add an object or point to the image?
I assume you would like to tell whether the object is moving or the camera is. With a single camera, the usual solutions are to use a reference (non-moving) object or a mechanical sensor that reports camera movement. If you use two cameras, you can usually calibrate them and use stereo vision formulations to solve the problem.
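With a single camera you could also try something like this: estimate the global (camera) motion as the median optical-flow displacement of many tracked points, and flag points whose motion deviates from it as independently moving objects. A minimal, untested OpenCV/C++ sketch; the two thresholds are guesses you would have to tune:

```cpp
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>
#include <vector>

int main() {
    cv::VideoCapture cap(0);                       // default camera
    cv::Mat frame, gray, prevGray;
    if (!cap.read(frame)) return 1;
    cv::cvtColor(frame, prevGray, cv::COLOR_BGR2GRAY);

    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

        // Track corners from the previous frame into the current one.
        std::vector<cv::Point2f> prevPts, nextPts;
        cv::goodFeaturesToTrack(prevGray, prevPts, 200, 0.01, 10);
        if (prevPts.size() < 8) { prevGray = gray.clone(); continue; }
        std::vector<uchar> status;
        std::vector<float> err;
        cv::calcOpticalFlowPyrLK(prevGray, gray, prevPts, nextPts, status, err);

        // The median displacement approximates the global (camera) motion.
        std::vector<float> dx, dy;
        for (size_t i = 0; i < prevPts.size(); ++i)
            if (status[i]) {
                dx.push_back(nextPts[i].x - prevPts[i].x);
                dy.push_back(nextPts[i].y - prevPts[i].y);
            }
        if (dx.empty()) { prevGray = gray.clone(); continue; }
        std::nth_element(dx.begin(), dx.begin() + dx.size() / 2, dx.end());
        std::nth_element(dy.begin(), dy.begin() + dy.size() / 2, dy.end());
        cv::Point2f global(dx[dx.size() / 2], dy[dy.size() / 2]);

        bool cameraMoving = std::hypot(global.x, global.y) > 1.0;  // assumed threshold (px)
        // Points whose motion differs a lot from the global motion are
        // likely independently moving objects.
        for (size_t i = 0; i < prevPts.size(); ++i) {
            if (!status[i]) continue;
            cv::Point2f d = nextPts[i] - prevPts[i] - global;
            if (std::hypot(d.x, d.y) > 3.0)                        // assumed threshold (px)
                cv::circle(frame, nextPts[i], 4, cv::Scalar(0, 0, 255), -1);
        }
        cv::putText(frame, cameraMoving ? "camera moving" : "camera static",
                    cv::Point(10, 30), cv::FONT_HERSHEY_SIMPLEX, 1.0,
                    cv::Scalar(0, 255, 0), 2);
        cv::imshow("motion", frame);
        if (cv::waitKey(1) == 27) break;                           // Esc quits
        prevGray = gray.clone();
    }
}
```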
Thanks in advance for reading my question. I am really new to ARKit and have followed several tutorials that showed me how to use plane detection and apply different textures to the planes. The feature is really amazing, but here is my question: would it be possible for the player to place the plane all over the desired area first and then interact with the new ground? For example, could I use plane detection to put a grass texture over an area and then drive a real RC car over it, just like driving it on real grass?
I have tried out plane detection on my iPhone 6s, but I found that when I put a real-world object on top of the plane surface, it simply gets covered by the plane. Could you please give me some clue as to whether it is possible to make the plane stay on the ground without covering real-world objects?
I think this is what you are searching for:
ARKit hide objects behind walls
Another way, I think, is to track the position of the real-world object, for example with Apple's Turi Create or CoreML (or both), and then avoid drawing your content at the affected position.
Tracking moving objects is not supported, and that is actually what would be needed to make a real object interact with a virtual one.
That said, I would recommend using 2D image recognition and "reading" every camera frame to detect the object while it moves in the camera's view space. Look for the AVCaptureVideoDataOutputSampleBufferDelegate protocol on Apple's developer site.
Share your code and I could help with some ideas.
I understand that with a moving object and a stationary camera, it is easy to detect objects by subtracting the previous and current camera frames. It is also possible to detect moving objects when the camera is moving freely around the scene.
But is it possible to detect stationary objects with a camera rotating around the object? The movement of the camera is predefined and the camera is only restricted to the specified path around the object.
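(For reference, the static-camera case mentioned above can be handled with plain frame differencing. A minimal OpenCV/C++ sketch; the threshold value is a guess you would tune:)

```cpp
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);
    cv::Mat frame, gray, prev, diff, mask;
    if (!cap.read(frame)) return 1;
    cv::cvtColor(frame, prev, cv::COLOR_BGR2GRAY);

    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::absdiff(gray, prev, diff);                          // pixel-wise difference
        cv::threshold(diff, mask, 25, 255, cv::THRESH_BINARY);  // assumed threshold
        cv::imshow("moving pixels", mask);
        if (cv::waitKey(1) == 27) break;
        prev = gray.clone();
    }
}
```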
Try the camshift demo, which is located in the OpenCV source code at samples/cpp/camshiftdemo.cpp, or other algorithms like MeanShift, KCF, etc. These are all object tracking algorithms.
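To illustrate the tracker route, a minimal KCF sketch using the opencv_contrib tracking module might look like this (the initial box is selected by hand; on OpenCV 3.x the box type is cv::Rect2d instead of cv::Rect):

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/tracking.hpp>   // opencv_contrib tracking module

int main() {
    cv::VideoCapture cap(0);
    cv::Mat frame;
    if (!cap.read(frame)) return 1;

    // Draw the initial box around the object by hand.
    cv::Rect box = cv::selectROI("tracking", frame);
    cv::Ptr<cv::Tracker> tracker = cv::TrackerKCF::create();
    tracker->init(frame, box);

    while (cap.read(frame)) {
        if (tracker->update(frame, box))               // false = target lost
            cv::rectangle(frame, box, cv::Scalar(0, 255, 0), 2);
        cv::imshow("tracking", frame);
        if (cv::waitKey(1) == 27) break;
    }
}
```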
I am new to computer vision, and need some advice on where to start.
The project is to estimate the speed of a moving object (A) relative to the moving object (B) that is tracking it.
What should I do in each of these cases:
if the background appears to be static (making the background a single color)
if the background is moving (harder)
I want to do this using OpenCV and C++.
Any advice on where to start, general steps would be very appreciated. Thanks in advance!
If your camera is attached to object B, first you will have to design an algorithm to detect and track object A. A simplified algorithm can be:
Loop the steps below:
Capture video frame from the camera.
If object A was not in the previous frame, detect object A (manual initialisation, detection using known features, etc.). Otherwise, track the object using the previous position and a tracking algorithm (OpenCV offers quite a few).
Detect and record the current location of the object in image coordinates.
Convert the location to real world coordinates.
If previous locations and timestamps for the object were available, calculate its speed.
The best way to do this is to get started with at least a simple C++ program that captures frames from a camera, and keep adding steps for detection and tracking.
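A minimal sketch of that loop in C++/OpenCV could look like the following. The KCF tracker choice and the pixels-to-metres factor are assumptions, and a real conversion to world coordinates would need camera calibration:

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/tracking.hpp>   // opencv_contrib, for TrackerKCF
#include <chrono>
#include <cmath>
#include <cstdio>

int main() {
    cv::VideoCapture cap(0);          // camera mounted on object B
    cv::Mat frame;
    if (!cap.read(frame)) return 1;

    // Manual initialisation: draw a box around object A once.
    cv::Rect box = cv::selectROI("speed", frame);
    cv::Ptr<cv::Tracker> tracker = cv::TrackerKCF::create();
    tracker->init(frame, box);

    const double metresPerPixel = 0.01;  // ASSUMPTION: needs real calibration
    cv::Point2d prevCentre(box.x + box.width / 2.0, box.y + box.height / 2.0);
    auto prevTime = std::chrono::steady_clock::now();

    while (cap.read(frame)) {
        if (!tracker->update(frame, box)) continue;  // re-detect in a real system

        cv::Point2d centre(box.x + box.width / 2.0, box.y + box.height / 2.0);
        auto now = std::chrono::steady_clock::now();
        double dt = std::chrono::duration<double>(now - prevTime).count();

        // Speed of A relative to B's camera, converted from pixels per second.
        cv::Point2d d = centre - prevCentre;
        double speed = std::hypot(d.x, d.y) * metresPerPixel / (dt > 0 ? dt : 1e-6);
        std::printf("relative speed: %.2f m/s\n", speed);

        cv::rectangle(frame, box, cv::Scalar(0, 255, 0), 2);
        cv::imshow("speed", frame);
        if (cv::waitKey(1) == 27) break;
        prevCentre = centre;
        prevTime = now;
    }
}
```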
Can anyone help me detect objects in real time from the iPhone camera using OpenCV?
My actual objective is to alert the user when an object interferes with a specific location in my application's camera view.
My current thinking is to capture an image corresponding to my camera overlay view, which represents a specific location of my camera view, and then process that image using OpenCV to detect objects by color. If I can identify an object in that image, I will alert the user in the camera overlay itself. What I don't know is how to detect an object from a UIImage.
Please point me to some other good way to achieve my goal if anyone knows one. Thanks in advance.
I solved my issue in the following way:
Created an image capture module with AVFoundation classes (AVCaptureSession)
Capturing image buffers continuously through a timer working alongside the camera module.
Processing captured frames to find objects through OpenCV
(Cropping, grayscale, threshold, feature detection etc...)
Referral Link: http://docs.opencv.org/doc/tutorials/tutorials.html
Alerting user through animated camera overlay view
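The per-frame OpenCV processing in step 3 can be sketched in C++ as below; on iOS you would first convert the captured buffer or UIImage to a cv::Mat (for example with UIImageToMat from OpenCV's iOS headers). The watched region and the minimum blob area are assumptions:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Returns true if an object-like blob is found inside the watched region.
bool objectInRegion(const cv::Mat& bgrFrame) {
    cv::Rect region(100, 100, 200, 200);        // ASSUMPTION: overlay location
    cv::Mat roi = bgrFrame(region);             // 1. crop

    cv::Mat gray, mask;
    cv::cvtColor(roi, gray, cv::COLOR_BGR2GRAY);           // 2. grayscale
    cv::threshold(gray, mask, 0, 255,
                  cv::THRESH_BINARY | cv::THRESH_OTSU);    // 3. threshold

    std::vector<std::vector<cv::Point>> contours;          // 4. blob detection
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    for (const auto& c : contours)
        if (cv::contourArea(c) > 500.0)         // ASSUMPTION: minimum blob area
            return true;                        // -> show the overlay alert
    return false;
}
```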
That said, detecting objects through image processing alone is not very accurate. To detect objects reliably in a real live-streaming scenario we would need an object sensor (like the depth sensor in a Kinect camera or similar), or perhaps a trained AI model, for it to work perfectly.
I want to track the head of a player in order to move the camera inside XNA.
When the player rotates left or right, the camera inside XNA will respond to this action and will also rotate.
I tried using the head joint from the Skeleton Data and taking the X and Y vector values, but this is not an accurate solution. I need another solution that can rotate the camera inside XNA.
Any suggestions?
You could use the Face Tracking API and watch how a certain point on the user's face (like the nose) moves to decide whether or not the user looked in a different direction; the SDK returns the tracked face points in a fixed, indexed layout.
Then you can check whether the X coordinate changed, and by how much, to drive the rotation.
(You might want to see Facial Recognition with Kinect)
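A rough sketch of that idea (in C++ for illustration; the nose X coordinate would come from the Face Tracking API each frame, and the pixel-to-radians factor is an assumption you would calibrate):

```cpp
#include <cmath>
#include <cstdio>

// Estimate a yaw delta (radians) from the horizontal displacement of one
// tracked face point (e.g. the nose tip) between two frames.
// ASSUMPTION: radiansPerPixel must be calibrated for your camera and distance.
double yawDelta(float prevNoseX, float currNoseX, double radiansPerPixel = 0.005) {
    return (currNoseX - prevNoseX) * radiansPerPixel;
}

int main() {
    double cameraYaw = 0.0;   // yaw you would feed into the XNA view matrix
    // In the real app these X values come from the face tracker each frame.
    float prevX = 320.0f, currX = 328.0f;
    cameraYaw += yawDelta(prevX, currX);
    std::printf("camera yaw: %f rad\n", cameraYaw);
}
```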