I'm currently analysing frames of a camera video stream using OpenCV for Augmented Reality markers on iOS. I also need to be able to analyse each frame to see if it contains a QR code. I'm currently using the ZBarSDK for iOS for this task, but my performance has decreased from around 30fps to between 7 and 11fps while it is in use. I know ZBarSDK can be configured to increase the frame rate by looking only for QR codes and ignoring other barcode types, and by adjusting the stride of each sweep, but this has had no real effect.
I noticed that ZXing is an alternative, though it seems to be deprecated now, and that AVCaptureMetadataOutput on an AVCaptureSession is the way forward, but I cannot tell whether it processes frame by frame.
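From the docs, the setup would look something like the sketch below (`session` is assumed to be an already-configured AVCaptureSession), though I can't tell from this whether every frame gets scanned:

```swift
import AVFoundation

// Sketch only: `session` is assumed to be an AVCaptureSession
// that already has a camera input attached.
final class QRScanner: NSObject, AVCaptureMetadataOutputObjectsDelegate {
    func attach(to session: AVCaptureSession) {
        let metadataOutput = AVCaptureMetadataOutput()
        guard session.canAddOutput(metadataOutput) else { return }
        session.addOutput(metadataOutput)
        // Callbacks are delivered on the queue passed here.
        metadataOutput.setMetadataObjectsDelegate(self, queue: .main)
        // Restrict detection to QR codes (set after adding the output).
        metadataOutput.metadataObjectTypes = [.qr]
    }

    func metadataOutput(_ output: AVCaptureMetadataOutput,
                        didOutput metadataObjects: [AVMetadataObject],
                        from connection: AVCaptureConnection) {
        for case let code as AVMetadataMachineReadableCodeObject in metadataObjects {
            print("QR payload: \(code.stringValue ?? "")")
        }
    }
}
```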
Is there any other library that allows processing a video stream frame by frame to detect QR codes on iOS? Or could anyone point me in the direction of an OpenCV tutorial for writing QR code detection and QR content extraction from scratch?
Any help will be appreciated.
Related
So my question is rather generic. I am implementing a barcode scanner with the Vision framework for iOS, and I came across a problem: low-opacity or crumpled barcodes do not have a high success rate with the Vision framework. It is still the highest of all the barcode scanners I took a closer look at.
Now I'm wondering if there is an option to increase the black scale on the live camera to bring up the opacity of fading barcodes?
I am rather certain that won't be able to change much when it comes to barcodes that have a crinkle across two lines so that they get combined.
Would a CGRect be an option to increase the reading rate and speed by setting boundaries?
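To make that concrete, here's a sketch of what I mean using Vision's regionOfInterest (the rect values are just an example band, and `pixelBuffer` is assumed to come from my camera output):

```swift
import CoreGraphics
import Vision

// Sketch: limit barcode detection to a band of the frame via regionOfInterest.
func scanBarcodes(in pixelBuffer: CVPixelBuffer) {
    let request = VNDetectBarcodesRequest { request, _ in
        guard let results = request.results as? [VNBarcodeObservation] else { return }
        for barcode in results {
            print(barcode.symbology, barcode.payloadStringValue ?? "")
        }
    }
    // Normalized coordinates (0...1), origin at the bottom-left.
    request.regionOfInterest = CGRect(x: 0.1, y: 0.4, width: 0.8, height: 0.2)

    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .right)
    try? handler.perform([request])
}
```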
I'm making a video-call Android app with augmented face effects using ARCore and WebRTC.
However, the frame structures of WebRTC and ARCore are different.
So I use PixelCopy to convert ARCore frames to a Bitmap and then convert that to WebRTC frames.
However, the audio and video end up out of sync with this method.
Is there another way? Any advice would be a great help. Thanks.
Does anyone know how to reproduce the new Notes scanning feature in iOS 11?
Is AVFoundation used for the camera?
How is the camera detecting the shape of the paper/document/card?
How do they place the overlay in real time?
How does the camera know when to take the photo?
What's that animated overlay and how can we achieve this?
Does anyone know how to reproduce this?
Not exactly :P
Is AVFoundation used for the camera? Yes
How is the camera detecting the shape of the paper/document/card?
They are using the Vision Framework to do rectangle detection.
It's stated in this WWDC session by one of the demonstrators.
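A minimal sketch of that kind of rectangle detection with Vision (here `pixelBuffer` is assumed to come from an AVCaptureVideoDataOutput callback):

```swift
import Vision

// Sketch: detect the most prominent rectangle (e.g. a document) in a frame.
func detectDocument(in pixelBuffer: CVPixelBuffer) {
    let request = VNDetectRectanglesRequest { request, _ in
        guard let rect = (request.results as? [VNRectangleObservation])?.first else { return }
        // Corners come back in normalized image coordinates;
        // convert to view coordinates before drawing an overlay.
        print(rect.topLeft, rect.topRight, rect.bottomRight, rect.bottomLeft)
    }
    request.minimumConfidence = 0.8
    request.maximumObservations = 1

    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .right)
    try? handler.perform([request])
}
```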
How do they place the overlay in real time?
You should check out the above video for this, as he talks about doing something similar in one of the demos.
How does the camera know when to take the photo?
I'm not familiar with this app but it's surely triggered in the capture session, no?
What's that animated overlay and how can we achieve it?
Not sure about this, but I'd imagine it's some kind of CALayer with an animation; something like the speculative sketch below.
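Purely a guess on my part: a CAShapeLayer whose path animates between successive detected outlines. All the names and values here are illustrative, not what Apple actually does:

```swift
import UIKit

// Speculative sketch: animate a CAShapeLayer between detected-rectangle outlines.
let overlay = CAShapeLayer()
overlay.strokeColor = UIColor.yellow.cgColor
overlay.fillColor = UIColor.yellow.withAlphaComponent(0.3).cgColor

func moveOverlay(to corners: [CGPoint]) {
    let path = UIBezierPath()
    path.move(to: corners[0])
    corners.dropFirst().forEach { path.addLine(to: $0) }
    path.close()

    // Animate from the previous outline to the new one, then commit it.
    let animation = CABasicAnimation(keyPath: "path")
    animation.fromValue = overlay.path
    animation.toValue = path.cgPath
    animation.duration = 0.2
    overlay.add(animation, forKey: "path")
    overlay.path = path.cgPath
}
```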
Is the Tesseract framework used for the image afterwards?
Isn't Tesseract OCR for text?
If you're looking for handwriting recognition, you might want to look for an MNIST model.
Use Apple's rectangle detection SDK, which provides an easy-to-use API that can identify rectangles in still images or video sequences in near-realtime. The algorithm works very well in simple scenes with a single prominent rectangle in a clean background, but is less accurate in more complicated scenes, such as capturing small receipts or business cards in cluttered backgrounds, which are essential use-cases for our scanning feature.
An image processor that identifies notable features (such as faces and barcodes) in a still image or video.
https://developer.apple.com/documentation/coreimage/cidetector
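For example, a rough sketch of rectangle detection with CIDetector on a still image (the calls are standard Core Image API, but treat the wiring as illustrative):

```swift
import CoreImage
import UIKit

// Sketch: CIDetector-based rectangle detection on a still image.
let detector = CIDetector(ofType: CIDetectorTypeRectangle,
                          context: nil,
                          options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])

func detectRectangle(in image: UIImage) {
    guard let ciImage = CIImage(image: image),
          let feature = detector?.features(in: ciImage).first as? CIRectangleFeature
    else { return }
    // Corner points are in Core Image coordinates (origin at the bottom-left).
    print(feature.topLeft, feature.topRight, feature.bottomRight, feature.bottomLeft)
}
```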
I am working on an iOS app that should trigger an event when the camera detects a change in the image, i.e. motion in the frame. Here I am not asking about face recognition or the motion of a particular colored object; I found plenty of OpenCV results when I searched. I also found that this can be achieved using the gyroscope and accelerometer together, but how?
I am a beginner in iOS. So my question is: is there a framework, or an easy way, to detect motion with the camera, and how do I achieve it?
For example, if I move my hand in front of the camera, the app should show a message or alert.
Please also give me some useful and easy-to-understand links about this.
Thanks
If all you want is some kind of crude motion detection, my open source GPUImage framework has a GPUImageMotionDetector within it.
This admittedly simple motion detector does frame-to-frame comparisons, based on a low-pass filter, and can identify the number of pixels that have changed between frames and the centroid of the changed area. It operates on live video and I know some people who've used it for motion activation of functions in their iOS applications.
Because it relies on pixel differences and not optical flow or feature matching, it can be prone to false positives and can't track discrete objects as they move in a frame. However, if all you need is basic motion sensing, this is pretty easy to drop into your application. Look at the FilterShowcase example to see how it works in practice.
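Wiring it up looks roughly like this in Swift (GPUImage's Objective-C API bridged over; double-check the property names and thresholds against the version you're using):

```swift
import AVFoundation
import GPUImage

// Rough sketch: GPUImage 1.x (Objective-C) bridged to Swift.
let camera = GPUImageVideoCamera(sessionPreset: AVCaptureSession.Preset.vga640x480.rawValue,
                                 cameraPosition: .back)
let motionDetector = GPUImageMotionDetector()
motionDetector.lowPassFilterStrength = 0.5  // how quickly the comparison baseline adapts
motionDetector.motionDetectionBlock = { centroid, intensity, frameTime in
    // `intensity` reflects how much of the frame changed; tune the threshold per scene.
    if intensity > 0.01 {
        print("Motion around \(centroid), intensity \(intensity)")
    }
}
camera.addTarget(motionDetector)
camera.startCameraCapture()
```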
I don't exactly understand what you mean here:
Here I am not asking about face recognition or the motion of a particular colored object; I found plenty of OpenCV results when I searched
But I would suggest going with OpenCV, since you can use OpenCV on iOS. Here is a good link which helps you set up OpenCV on iOS.
There are lots of OpenCV motion detection examples online, and here is one of them, which you can make use of.
You need to convert the UIImage (the image type on iOS) to cv::Mat or IplImage and pass that to the OpenCV algorithms. You can convert using this link or this.
As many of you know, Tesseract does character recognition on still photos or images. I'm using Xcode for my iOS app and I have this problem: how can I use Tesseract to scan the live camera preview? An app that does this is the Word Lens app; it does frame-by-frame live recognition and translation of the text being previewed by the camera. I'm trying to do this live character recognition without the translation part. What is the best approach? How can I do a real-time scan of the camera preview, frame by frame, using Tesseract OCR? Thanks.
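For the frame-by-frame part, AVCaptureVideoDataOutput seems to be the usual route; here's a sketch of what I'm trying, where recognize(_:) is just a placeholder for the Tesseract step:

```swift
import AVFoundation

// Sketch: grab frames for OCR, dropping frames while a recognition pass is running.
final class FrameGrabber: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let ocrQueue = DispatchQueue(label: "ocr")
    private var isProcessing = false

    func attach(to session: AVCaptureSession) {
        let output = AVCaptureVideoDataOutput()
        output.alwaysDiscardsLateVideoFrames = true
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "frames"))
        if session.canAddOutput(output) { session.addOutput(output) }
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard !isProcessing,
              let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        isProcessing = true
        ocrQueue.async {
            self.recognize(pixelBuffer)   // placeholder for the Tesseract call
            self.isProcessing = false
        }
    }

    private func recognize(_ buffer: CVPixelBuffer) {
        // Hypothetical: convert the buffer to a UIImage and hand it to Tesseract here.
    }
}
```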
I have tested this and performance is too low: the camera outputs eight frames per second, but OCR needs about two seconds to process one frame.
Useful links:
A (quasi-) real-time video processing on iOS
tesseract-ios
How can I make tesseract on iOS faster
Maybe we need to use OpenCV.
Alternatively, you can use another free product that does OCR on the camera preview: ABBYY Real-Time Recognition OCR.
Disclaimer: I work for ABBYY.