How to distinguish between different license plates using OpenCV

Currently working on a license plate detection system and need some guidance on how to proceed.
I can capture frames (via video playback) and, with the help of an open-source library called OpenALPR, display the license plates directly in the terminal. The issue is that it works on a frame-by-frame basis, so it captures the same license plate multiple times. I added a frame-skip variable and now it skips however many frames I want, but the problem is still there.
Furthermore, I'd like to distinguish between different license plates if possible, but I don't know how to work around that; I've attempted basic object detection but failed miserably.
Below is an image of the program running. As seen, it detects a single license plate and displays multiple instances of it. I expect it to move on to the next car and display Plate #1, but unfortunately it does not and keeps feeding into Plate #0.
Program Running
The function that actually displays the license plate text is below; really, the first line does all the work. OpenALPR is pretty powerful.
results = alpr.recognize_ndarray(frame)        # run OpenALPR on the current frame
for i, plate in enumerate(results['results']):
    best_candidate = plate['candidates'][0]    # candidates are sorted by confidence
    print('Plate #{}: {} ({}%)'.format(i,
          best_candidate['plate'].upper(),
          best_candidate['confidence']))
I'd like some guidance on how to solve this problem, which is basically distinguishing between different license plates.

It is a general problem without a general solution, because it depends heavily on context. Some thoughts:
If it is a video feed, you can track the plate's movement; the track will "jump" when another plate is detected. Say the maximum optical-flow velocity is 100 px/frame: if it jumps more than this threshold, you can assume it is a new plate (see the sketch after these notes).
Depending on your video quality and detector, there may be spurious jumps; I would add a Kalman filter or any other simple filter.
Perhaps there is a minimum time lapse between one plate leaving the image and the next arriving. You can use a time threshold to trigger the "plate changed" event.
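A rough sketch of the first idea in Python (the 100 px jump threshold and the frame-gap value are assumptions to tune, and it assumes each OpenALPR result carries a 'coordinates' list of corner points, as the Python bindings normally report):
import math

MAX_JUMP_PX = 100      # assumed maximum plate movement between processed frames
MIN_GAP_FRAMES = 15    # assumed maximum absence before we call it a new plate

def plate_center(plate):
    # Average the four corner points OpenALPR reports for the plate region.
    xs = [p['x'] for p in plate['coordinates']]
    ys = [p['y'] for p in plate['coordinates']]
    return sum(xs) / len(xs), sum(ys) / len(ys)

class PlateCounter:
    def __init__(self):
        self.plate_id = -1
        self.last_center = None
        self.last_frame = None

    def assign_id(self, plate, frame_index):
        cx, cy = plate_center(plate)
        is_new = self.last_center is None
        if not is_new:
            jump = math.hypot(cx - self.last_center[0], cy - self.last_center[1])
            gap = frame_index - self.last_frame
            is_new = jump > MAX_JUMP_PX or gap > MIN_GAP_FRAMES
        if is_new:
            self.plate_id += 1
        self.last_center, self.last_frame = (cx, cy), frame_index
        return self.plate_id
In the recognition loop from the question, the plate number would then come from counter.assign_id(plate, frame_index) instead of the enumerate index i.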

Related

Methods to track marked points in a stationary video?

Not sure where to ask this. Please redirect me if SO is not the place.
I want to make a web app that accurately tracks pose in a stationary video of someone pedaling a stationary bike. The joints can be marked with stickers to make the process easier and more accurate. Basically, I want to do what this app does.
First I tried markerless tracking using pose-estimation models such as MediaPipe's BlazePose and Google's MoveNet. However, these are not accurate enough. I would also like to track some additional landmarks (ball of the foot, ...).
Then I tried OpenCV.js's Lucas-Kanade optical flow method, but the algorithm lost the tracked point quickly, even when I placed colored tape on the part of the body that I wanted to track.
I also tried template matching on a single marked point in OpenCV, but it was not very robust, and it would probably not work well when using more markers.
What other methods can I try? Since the app I send the video to requires stickers to be placed, I thought it was using something like Lucas-Kanade. But as I said, when I tried it, it wasn't able to track the marked point. Because the app is only on iOS, I thought it may be using this API. However, this is only my speculation.
Edit: added example video: https://www.youtube.com/watch?v=eCNyyABfWSE
I tried shooting in slow motion to have more fps, but the quality suffered because of this. Also, I didn't have blue or green tape, so I had to use yellow, which is not very visible on the sweater or on my wrist. But the markers on the pants should be trackable, right?
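For reference, a minimal Lucas-Kanade point-tracking loop with OpenCV in Python (the desktop counterpart of the OpenCV.js attempt; the video path and marker coordinates below are placeholders) looks roughly like this:
import cv2
import numpy as np

cap = cv2.VideoCapture('pedaling.mp4')              # hypothetical input video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Hand-picked marker positions in the first frame (placeholder coordinates).
points = np.array([[[320.0, 240.0]], [[400.0, 300.0]]], dtype=np.float32)

lk_params = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    new_points, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                       points, None, **lk_params)
    # Keep only the points that were tracked successfully.
    points = new_points[status.flatten() == 1].reshape(-1, 1, 2)
    if len(points) == 0:
        break                                       # all markers lost; re-detect here
    prev_gray = gray
Because pure Lucas-Kanade drifts and loses points, one common fix is to re-detect the colored markers every few frames (for example with an HSV threshold on the tape color) and re-seed the tracker from those detections.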

How to detect scrolling speed of a video/How to detect differences in images

I have some screen-recording videos from which I want to extract some information. My thinking is to use cv2.VideoCapture() to grab screenshots and then use OCR to extract the information. But there is a limit to how many times I can call the OCR service (a paid business service), so I want to use only the critical screenshots that don't have much information overlap. For example, I get 300 screenshots from cv2, but I can already extract all the information needed from 20 of them, since the scrolling speed is slow and most of the screenshots overlap.
See a real example: I want to get all the app names in a screen recording video of AppStore.
The question is:
How can I find out the scrolling speed of the video so that I can adjust how often I capture a screenshot? Or, to put it another way: how can I measure how much consecutive screenshots change, which effectively implies the scrolling speed?
You can use optical flow to detect scrolling. Since the detected flow has only one dimension (Y), it is easy to estimate the average scrolling speed by averaging the norms of the flow vectors.
You can find a Python example here that is easy to adapt to your case:
https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_video/py_lucas_kanade/py_lucas_kanade.html
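A rough sketch in Python using dense Farnebäck flow instead of the sparse Lucas-Kanade example from the link (the file name and the scroll-distance threshold are assumptions):
import cv2
import numpy as np

cap = cv2.VideoCapture('screen_recording.mp4')      # hypothetical recording
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

scrolled = 0.0                 # accumulated vertical scroll in pixels
CAPTURE_EVERY_PX = 800         # assumed: save a shot after ~one screen of scrolling

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # For pure scrolling the motion is vertical, so the median y-component
    # estimates how many pixels this frame scrolled.
    dy = float(np.median(flow[..., 1]))
    scrolled += abs(dy)
    if scrolled >= CAPTURE_EVERY_PX:
        cv2.imwrite('shot_%d.png' % int(cap.get(cv2.CAP_PROP_POS_FRAMES)), frame)
        scrolled = 0.0
    prev_gray = gray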

TensorFlow video processing, changes detection

I'm a newbie in machine learning, and I have only basic knowledge of neural networks.
I have a pretty clear task:
1. The video stream shows a static picture (a white area with yellow squares)
(in different videos the squares are located in different places).
2. At some moment the content of the video changes and starts to show the white area without some of the yellow squares.
3. I need to create a mechanism which can determine and somehow indicate that change.
I'm going to use the TensorFlow framework for this task. Could anybody push me in the right direction? Or I'd be very happy to see a list of steps to overcome the problem.
Thanks in advance.
If you know how the static picture looks beforehand, maybe some background subtraction would work? Basically, you just subtract the static picture from every frame and check the content of the result. If the resulting picture is empty (zeros, or close to it up to some threshold), there is no change to detect. If the resulting picture contains a region that is non-zero (maybe above or below a certain manually tuned threshold), you have detected a change in that region.
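A minimal sketch of that idea with OpenCV in Python (file names and both thresholds are assumptions; the reference image must have the same size as the frames):
import cv2

reference = cv2.imread('static_picture.png', cv2.IMREAD_GRAYSCALE)  # known static picture
THRESH = 30                  # assumed per-pixel difference threshold
MIN_CHANGED_PIXELS = 500     # assumed minimum area to count as a real change

cap = cv2.VideoCapture('stream.mp4')                # hypothetical video source
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(reference, gray)             # subtract the static picture
    _, mask = cv2.threshold(diff, THRESH, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > MIN_CHANGED_PIXELS:
        print('change detected')                    # e.g. a yellow square disappeared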

openCV: is it possible to time cvQueryFrame to synchronize with a projector?

When I capture camera images of projected patterns using openCV via 'cvQueryFrame', I often end up with an unintended artifact: the projector's scan line. That is, since I'm unable to precisely time when 'cvQueryFrame' captures an image, the image taken does not respect the constant 30Hz refresh of the projector. The result is that typical horizontal band familiar to those who have turned a video camera onto a TV screen.
Short of resorting to hardware sync, has anyone had some success with approximate (e.g., 'good enough') informal projector-camera sync in openCV?
Below are two solutions I'm considering, but I was hoping this is a common enough problem that an elegant solution might exist. My less-than-elegant thoughts are:
Add a slider control in the cvWindow displaying the video for the user to control a timing offset from 0 to 1/30th second, then set up a queue timer at this interval. Whenever a frame is needed, rather than calling 'cvQueryFrame' directly, I would request a callback to execute 'cvQueryFrame' at the next firing of the timer. In this way, theoretically the user would be able to use the slider to reduce the scan line artifact, provided that the timer resolution is sufficient.
After receiving a frame via 'cvQueryFrame', examine the frame for the tell-tale horizontal band by looking for a delta in HSV values for a vertical column of pixels. Naturally this would only work when the subject being photographed contains a fiducial strip of uniform color under smoothly varying lighting.
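As a very rough illustration of that second idea (the column index and the step threshold are assumptions, and it presumes a vertical strip of roughly uniform color in the scene):
import cv2
import numpy as np

def has_scanline_band(frame, column=10, step_threshold=40):
    # Flag a scan-line band by looking for a large brightness jump along
    # one vertical pixel column of a uniformly lit strip.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    values = hsv[:, column, 2].astype(np.int32)     # V channel down one column
    deltas = np.abs(np.diff(values))
    return deltas.max() > step_threshold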
I've used several cameras with OpenCV, most recently a Canon SLR (7D).
I don't think your proposed solution will work. cvQueryFrame basically copies the next available frame from the camera driver's buffer (or advances a pointer in a memory-mapped region, or something similar, depending on your driver implementation).
In any case, the timing of the cvQueryFrame call has no effect on when the image was captured.
So as you suggested, hardware sync is really the only route, unless you have a special camera, like a Point Grey camera, which gives you explicit software control of the frame-integration start trigger.
I know this has nothing to do with synchronizing, but have you tried extending the exposure time? Or doing so by intentionally "blending" two or more images into one?

Recognize the moving objects and differentiate them from the background?

I am working on a project where I take a video with a camera and convert the video to frames (this part of the project is done).
What I am facing now is how to detect moving objects in these frames and differentiate them from the background, so that I can distinguish between them.
I recently read an awesome CodeProject article about this. It discusses several approaches to the problem and then walks you step by step through one of the solutions, with complete code. It's written at a very accessible level and should be enough to get you started.
One simple way to do this (if little noise is present; otherwise I recommend a smoothing kernel) is to compute the absolute difference of two consecutive frames. You'll get an image of the things that have "moved". The background needs to be fairly static for this to work. If you always take the absolute difference between the current frame and the nth frame, you'll get a grayscale image containing the object that moved. The object has to be different from the background color, or it will disappear...
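A minimal sketch of that frame-differencing approach with OpenCV 4 in Python (the file name, blur kernel, and thresholds are assumptions to tune):
import cv2

cap = cv2.VideoCapture('input.avi')                 # hypothetical frame source
ok, prev = cap.read()
prev_gray = cv2.GaussianBlur(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY), (5, 5), 0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (5, 5), 0)
    diff = cv2.absdiff(prev_gray, gray)             # what moved between frames
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 200:                # ignore small noise blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    prev_gray = gray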
