How to add the accuracy circle to an MKAnnotation - iOS

I have a custom annotation that represents a moving bus, and I'd love to have the "pulse" effect that the user-location dot has. I believe it's also referred to as the accuracy circle. Is there a way to use that with an image overlaid on top of it?

Related

How to find the location of an object with respect to another object in an image?

We are doing a project in which we detect runway debris (using YOLOv4) with a fixed camera mounted on a pole at the side of the runway. We want to find the position of each detected object with respect to the runway surface. How can we find the distance of an object from the runway sides?
I would recommend using reliable sensors such as "light curtains" to make sure there is no debris on a runway. AI can fail, especially if things are hard to see.
As for mapping image coordinates to world coordinates on a plane, look into "homography". OpenCV has getPerspectiveTransform. It's easiest if you pick a large rectangle of known dimensions in your scene (the runway), measure its corners in world units, find those same corners in your picture, and pass the four point pairs to getPerspectiveTransform. You then have a homography matrix that transforms coordinates back and forth between the image and the runway plane (you can invert it).
Check out the tutorials section of OpenCV's documentation for more on homographies.
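A minimal sketch of that mapping in Python; the pixel and metre coordinates below are made-up placeholders you would replace with your own measured runway rectangle:

```python
import cv2
import numpy as np

# Four corners of a known rectangle on the runway, in image pixels
# (placeholder values -- locate these in your own footage).
image_pts = np.float32([[412, 710], [1508, 698], [1793, 1005], [130, 1022]])

# The same four corners in runway coordinates (metres), e.g. a
# 40 m x 10 m patch with its origin at the first corner.
world_pts = np.float32([[0, 0], [40, 0], [40, 10], [0, 10]])

# Homography from the image plane to the runway plane.
H = cv2.getPerspectiveTransform(image_pts, world_pts)

# Map a detection (e.g. the bottom-centre of a YOLO box, where the
# object touches the ground) into runway coordinates.
debris_px = np.float32([[[950, 880]]])            # shape (1, 1, 2)
debris_world = cv2.perspectiveTransform(debris_px, H)[0, 0]
print("debris at %.2f m along, %.2f m across" % tuple(debris_world))

# The inverse homography maps runway coordinates back into the image.
H_inv = np.linalg.inv(H)
```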

How to detect and visualize ceiling with RealityKit?

RealityKit is new to me, but apparently plane anchors now have a classification (floor, wall, ceiling, ...). Does anyone know the procedure for detecting ceilings (or any class of plane) with RealityKit and ARKit and visualizing them in a specific color (for instance red)? Any information regarding this would be welcome.
ARKit/RealityKit can detect horizontal surfaces, so yes, it can definitely detect a ceiling. Of course, ARKit won't inherently know that it is a ceiling rather than a floor or a table. You could compare the ARPlaneAnchor's y value to the phone's current y value, determine that the plane is likely a ceiling, and add geometry and coloration based on that.

Tracking of rotating objects using OpenCV

I need to track cars on the road from top-view video.
My application contains two main parts:
Detecting cars on the frame (Tensorflow trained network)
Tracking detected cars (opencv trackers)
I am having trouble with the OpenCV trackers. Initially I tried several different trackers, but only MOSSE is fast enough. This tracker works almost perfectly for the straight-road case, but I ran into problems with rotating cars; this situation arises at crossroads.
As I understand it, the axis-aligned bounding box of a rotated object is bigger than the bbox of a horizontally or vertically aligned one. As a result, the bbox contains a large amount of static background, and the tracker loses the target object.
Are there any alternative trackers which can track contours (not bounding boxes)?
Can I improve the quality of the existing OpenCV trackers' results through any settings, or by preprocessing the picture?
Schema and real image: (screenshots from the original post omitted)
If your camera is stationary, the following pipeline is feasible (a minimal sketch follows these steps):
Use background subtraction to separate the background image from the foreground blobs.
Improve the foreground mask using morphological operations.
Detect the car blobs and discard the other blobs.
Track the foreground blobs through the video, i.e. binary tracking (or even apply a Kalman filter).
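A minimal sketch of the subtraction, morphology, and blob-filtering steps using OpenCV's MOG2 subtractor; the video path, thresholds, and area cutoff are placeholder values you would tune for your own footage:

```python
import cv2

cap = cv2.VideoCapture("topview.mp4")           # placeholder path
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # 1. Foreground mask; drop shadow pixels (MOG2 marks them as 127).
    mask = subtractor.apply(frame)
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)

    # 2. Morphology: open to remove speckle, close to fill holes in blobs.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel, iterations=2)

    # 3. Keep only car-sized blobs (the area threshold is scene-dependent).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cars = [c for c in contours if cv2.contourArea(c) > 400]

    for c in cars:
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) == 27:                    # Esc to quit
        break
```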
A very basic but effective approach in this scenario might be to track the center coordinates of the bounding box: if the center only changes along one axis (with a small tolerance on the other axis), it is linear motion, not a rotation. If both x and y change, the car is moving through the roundabout.
The only weakness is that diagonal motion would also be flagged, but since you are looking at a centered roundabout, that shouldn't be an issue.
It will also be very efficient memory-wise.
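A minimal sketch of that center-coordinate heuristic; `classify_motion` and its tolerance value are hypothetical names chosen for illustration:

```python
import numpy as np

def classify_motion(centers, tol=3.0):
    """Label a track of bbox centers as 'linear' or 'turning'.

    centers: list of (x, y) tuples from consecutive frames.
    tol:     pixels of jitter allowed on the non-moving axis (tune this).
    """
    pts = np.asarray(centers, dtype=float)
    spread = pts.max(axis=0) - pts.min(axis=0)   # total movement per axis
    moved_x, moved_y = spread > tol
    if moved_x and moved_y:
        return "turning"     # both axes change -> car is in the roundabout
    return "linear"          # movement confined to one axis

# e.g. classify_motion([(100, 50), (120, 51), (140, 49)]) -> 'linear'
```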
You could use the PCA method, which can calculate the orientation of a detected object, i.e. which way it is facing. You can also tune the detection threshold so that you select objects that look more like the cars in your picture (based on shape and colour, e.g. after an HSV conversion, which in your case would key on red).
See an introduction to Principal Component Analysis (PCA) for background.
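A minimal sketch of that idea, computing a blob's orientation via PCA on its contour points with NumPy's eigendecomposition (the same computation OpenCV's PCA tutorial performs); `blob_orientation` is a hypothetical helper name:

```python
import cv2
import numpy as np

def blob_orientation(contour):
    """Orientation (radians) of a blob, via PCA on its contour points."""
    pts = contour.reshape(-1, 2).astype(np.float64)
    pts -= pts.mean(axis=0)                      # centre the points
    cov = np.cov(pts, rowvar=False)              # 2x2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues ascending
    major = eigvecs[:, -1]                       # axis of largest variance
    # Note: PCA gives the major axis; the facing direction is ambiguous
    # by 180 degrees without extra cues (e.g. motion direction).
    return np.arctan2(major[1], major[0])

# Usage: segment/threshold the frame, then
# contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
# angle = blob_orientation(contours[0])
```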
Method 1 :
- Detect blobs via background subtraction and fit rotated rectangles to them (see the sketch after this list).
Method 2 :
- Implement your own version of the detector with rotated boxes.
Method 3 :
- Use segmentation instead ... U-Net, for example.
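A minimal sketch of Method 1, fitting rotated rectangles with cv2.minAreaRect to the blobs in a foreground mask (e.g. from the MOG2 sketch above); `rotated_boxes` and the area cutoff are illustrative:

```python
import cv2
import numpy as np

def rotated_boxes(mask, min_area=400):
    """Fit a rotated rectangle to each sizable blob in a binary mask."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        if cv2.contourArea(c) < min_area:        # skip small noise blobs
            continue
        # ((cx, cy), (w, h), angle): a rectangle that rotates with the car,
        # so it stays tight instead of swallowing static background.
        boxes.append(cv2.minAreaRect(c))
    return boxes

# To draw one: corners = cv2.boxPoints(rect)
#              cv2.polylines(frame, [np.int32(corners)], True, (0, 0, 255), 2)
```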
There are no trackers available other than the ones found in the library.
Your best bet is to filter the image and use findContours.
Optical flow and background subtraction will help with this. You can combine optical flow with your car detector to rule out false positives.
https://docs.opencv.org/3.4/d4/dee/tutorial_optical_flow.html
https://docs.opencv.org/3.4/d1/dc5/tutorial_background_subtraction.html
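A minimal sketch combining the two, using the sparse Lucas-Kanade API from the linked optical-flow tutorial; the video path and the detector box coordinates are placeholders:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("topview.mp4")            # placeholder path
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Seed corner features inside one detector box (x, y, w, h) -- placeholder.
x, y, w, h = 300, 200, 80, 40
roi_mask = np.zeros_like(prev_gray)
roi_mask[y:y + h, x:x + w] = 255
p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50, qualityLevel=0.01,
                             minDistance=5, mask=roi_mask)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Lucas-Kanade: follow the seeded points into the new frame.
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
    good_new = p1[status.flatten() == 1]

    # A detection with no coherent flow behind it is a likely false positive.
    if len(good_new) == 0:
        print("track lost -- re-run the detector here")
        break

    prev_gray, p0 = gray, good_new.reshape(-1, 1, 2)
```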

Convert ARKit SCNNode's bounding extent

I have an ARKit app that uses plane detection and successfully places objects on those planes. I want to use some information about what's sitting below the object in my approach to shading it - something a bit similar to the WWDC demo where the chameleon blended in with the color of the table. I want to grab the rectangular region of the screen around the footprint of the object (or, in this case, the bounding volume of the whole node would work just as well), so I can take the camera capture data for the region of interest and use it in the image processing - like a metal sphere that reflects the ground it's sitting on. I'm just not sure what combination of transforms to apply: I've tried various combinations of convertPoint and projectPoint, and I occasionally get the origin, height, or width right, but never all three. Is there an easy helper method I'm missing? I assume what I'm basically looking for is a way of going from SCNNode -> extent.

How to initialize a 3D aruco board's object points?

I have a 3D object (a helmet) with a bunch of aruco markers on it. I'd like to treat these markers as a board. The markers are not coplanar with each other, but that is fine, per my understanding of aruco boards. The problem is: how do I initialize the board's object coordinates (objPoints)?
It's not easy to take a ruler and measure their relative locations, since they do not all exist in the same plane. I could take a photo, detect markers, estimate the pose for each marker, and then figure out their relative locations from that. But I think doing this with a single photo wouldn't be very precise, nor would a single photo necessarily capture every marker.
Is there a common way to obtain objPoints from multiple photos for higher precision? Or is there any better way to do it?
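A minimal sketch of the multi-photo approach described above, assuming a calibrated camera and the legacy cv2.aruco pose API from opencv-contrib-python; marker 0 is arbitrarily chosen as the reference frame, the file names and intrinsics are placeholders, and note that naively averaging 4x4 transforms is only an approximation (rotations should really be averaged on SO(3), e.g. via quaternions):

```python
import cv2
import numpy as np

MARKER_LEN = 0.03    # marker side length in metres (placeholder)
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

# Placeholder intrinsics -- substitute your real calibration.
camera_matrix = np.array([[800.0, 0, 640], [0, 800.0, 360], [0, 0, 1]])
dist_coeffs = np.zeros(5)

def marker_poses(image):
    """Return {marker id: 4x4 camera-from-marker transform} for one photo."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
    if ids is None:
        return {}
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, MARKER_LEN, camera_matrix, dist_coeffs)
    poses = {}
    for i, marker_id in enumerate(ids.flatten()):
        T = np.eye(4)
        T[:3, :3], _ = cv2.Rodrigues(rvecs[i])   # rotation vector -> matrix
        T[:3, 3] = tvecs[i].flatten()
        poses[int(marker_id)] = T
    return poses

# Accumulate each marker's pose relative to reference marker 0,
# averaged over many photos for precision and coverage.
REF_ID = 0
relative = {}                                    # id -> list of 4x4 transforms
for path in ["helmet_01.jpg", "helmet_02.jpg"]:  # placeholder photo list
    poses = marker_poses(cv2.imread(path))
    if REF_ID not in poses:
        continue                                 # reference must be visible
    cam_from_ref = poses[REF_ID]
    for mid, cam_from_marker in poses.items():
        ref_from_marker = np.linalg.inv(cam_from_ref) @ cam_from_marker
        relative.setdefault(mid, []).append(ref_from_marker)

# Board objPoints: the four corners of each marker, expressed in the
# reference marker's frame (corner order matches OpenCV's convention:
# top-left, top-right, bottom-right, bottom-left).
h = MARKER_LEN / 2
local = np.array([[-h, h, 0, 1], [h, h, 0, 1],
                  [h, -h, 0, 1], [-h, -h, 0, 1]]).T
obj_points = {mid: (np.mean(Ts, axis=0) @ local)[:3].T.astype(np.float32)
              for mid, Ts in relative.items()}
```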
