Is there a way to improve the Vision Framework's barcode detection rate? - ios

So my question is rather generic. I am implementing a barcode scanner with the Vision framework for iOS, and I came across this problem:
Low-opacity or crumpled barcodes do not have a high success rate with the Vision framework. It still has the highest success rate of all the barcode scanners I took a closer look at.
Now I'm wondering if there is an option to increase the black scale (contrast) on the live camera feed to boost the visibility of fading barcodes?
I am rather certain that won't change much for barcodes that have a crinkle across two lines so that they get combined.
Would a CGRect be an option to increase the reading rate and speed by setting boundaries?
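To make the CGRect idea concrete, here is roughly what I have in mind: a sketch (the region values are made up) using Vision's regionOfInterest, which takes a normalized CGRect with a lower-left origin:

    import Vision
    import CoreGraphics

    // Sketch: run barcode detection on a CGImage, restricted to a normalized
    // region of interest. The region below (middle band of the frame) is only
    // an example value.
    func detectBarcodes(in cgImage: CGImage) {
        let request = VNDetectBarcodesRequest { request, error in
            guard let barcodes = request.results as? [VNBarcodeObservation] else { return }
            for barcode in barcodes {
                print(barcode.symbology, barcode.payloadStringValue ?? "<no payload>")
            }
        }
        // Normalized coordinates, origin at the bottom-left of the image.
        request.regionOfInterest = CGRect(x: 0.0, y: 0.3, width: 1.0, height: 0.4)

        let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
        do {
            try handler.perform([request])
        } catch {
            print("Vision request failed: \(error)")
        }
    }

My understanding is that this mainly buys speed by ignoring pixels outside the region; I don't expect it to recover barcodes where crinkled lines have merged.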

Related

I want to detect motion/movement with the live camera. How can I do it?

I'm creating a motion detection app for iOS. When the camera is live and any object passes in front of it, like a person or an animal, I want to detect that motion. How is this possible?
I suggest you get familiar with the AVFoundation framework to understand how to get live video frames using the camera of an iOS device. A good starting point is Apple's famous sample AVCam, which should get you familiar with all the camera concepts.
As the next step, figure out how to do the movement detection. The simplest algorithm for that is background subtraction. The idea is to subtract two consecutive frames from one another: the areas without movement cancel each other out and become black, while the areas with movement show nonzero values.
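To make the idea concrete, here is a rough Core Image sketch of that frame differencing (this is not the OpenCV version linked below, and the threshold is a placeholder you would have to tune):

    import CoreImage

    // Sketch: background subtraction by differencing two consecutive frames.
    // Static areas go to ~black; if the mean difference exceeds a threshold,
    // report motion. The threshold value is a placeholder.
    let ciContext = CIContext()

    func motionDetected(previous: CIImage, current: CIImage, threshold: Double = 0.05) -> Bool {
        // |current - previous|
        guard let diffFilter = CIFilter(name: "CIDifferenceBlendMode") else { return false }
        diffFilter.setValue(current, forKey: kCIInputImageKey)
        diffFilter.setValue(previous, forKey: kCIInputBackgroundImageKey)
        guard let diff = diffFilter.outputImage else { return false }

        // Average the difference image down to a single pixel.
        guard let averageFilter = CIFilter(name: "CIAreaAverage") else { return false }
        averageFilter.setValue(diff, forKey: kCIInputImageKey)
        averageFilter.setValue(CIVector(cgRect: diff.extent), forKey: kCIInputExtentKey)
        guard let averaged = averageFilter.outputImage else { return false }

        // Read back the single averaged pixel (RGBA, 8 bits per channel).
        var pixel = [UInt8](repeating: 0, count: 4)
        ciContext.render(averaged,
                         toBitmap: &pixel,
                         rowBytes: 4,
                         bounds: CGRect(x: 0, y: 0, width: 1, height: 1),
                         format: .RGBA8,
                         colorSpace: CGColorSpaceCreateDeviceRGB())

        let meanBrightness = (Double(pixel[0]) + Double(pixel[1]) + Double(pixel[2])) / (3.0 * 255.0)
        return meanBrightness > threshold
    }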
Here's an example of background subtraction in the OpenCV framework.
If in the end you decide to use OpenCV (a classic computer vision library that I definitely recommend), then you'll need to integrate OpenCV into your iOS app. You can see a short tutorial here.
I tried to show you some pointers which could get you going. The problem (how you presented it) is definitely not an easy one, so good luck!

How does the Google Measure app work on Android?

I can see that it can measure horizontal and vertical distances with +/-5% accuracy. I have a use case scenario in which I am trying to formulate an algorithm to detect distances between two points in an image or video. Any pointers to how it could be working would be very useful to me.
I don't think the source is available for the Android Measure app, but it is ARCore-based, and I would expect it uses a combination of triangulation and knowledge it reads from the 'scene' (to use the Google ARCore term) it is viewing.
Like a human estimating distance to a point by basic triangulation between the two eyes and the point being looked at, a measurement app is able to look at multiple views of the scene and to measure, using its sensors, how far the device has moved between the different views. Even a small movement allows the same triangulation techniques to be used.
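As a toy illustration of that triangulation idea (a simplified stereo/pinhole model; all the numbers are made up):

    import Foundation

    // Two views separated by a known baseline observe the same point; the shift
    // of that point between the views (the disparity) gives its depth:
    //     depth = focalLength * baseline / disparity
    // All values below are hypothetical and in consistent units.
    let focalLengthPixels = 1_500.0   // focal length expressed in pixels
    let baselineMeters    = 0.05      // how far the device moved between the views
    let disparityPixels   = 25.0      // how far the point shifted between the images

    let depthMeters = focalLengthPixels * baselineMeters / disparityPixels
    print("Estimated distance to the point: \(depthMeters) m")   // 3.0 m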
The reason for mentioning all this is to highlight that you do not have the same tools or information available to you if you are analysing image or video files without any position or sensor data. Hence, the Google measure app may not be the best template for you to look to for your particular problem.

Detect sound level in a certain frequency range in iOS SDK

I am working on my iOS app and I need to detect the sound level in a certain frequency range. Here is a good tutorial for detecting sound level, but how do I do that for a specific frequency range in the iOS SDK?
You need to capture audio from the microphone (AVAudioEngine is a good API for doing that), calculate its Fourier transform (the Accelerate framework will do that with blazing speed) and examine the amplitude of the frequency bin corresponding to your frequency (bin index ≈ targetFrequency * fftSize / sampleRate). If it's large then you've got a match.
A possibly simpler and more efficient technique would be a Goertzel filter, which is good at detecting a single given frequency.
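A minimal sketch of that approach: tap the microphone with AVAudioEngine and run a Goertzel filter over each buffer. The target frequency and the threshold are placeholders, and you still need microphone permission and an audio session configured:

    import AVFoundation

    // Sketch: tap the microphone and estimate the power at one target frequency
    // per buffer using the standard Goertzel recurrence.
    let engine = AVAudioEngine()
    let targetFrequency = 440.0   // Hz, placeholder

    func goertzelPower(samples: [Float], sampleRate: Double, frequency: Double) -> Double {
        // s[n] = x[n] + coeff*s[n-1] - s[n-2]
        let omega = 2.0 * Double.pi * frequency / sampleRate
        let coeff = 2.0 * cos(omega)
        var sPrev = 0.0, sPrev2 = 0.0
        for x in samples {
            let s = Double(x) + coeff * sPrev - sPrev2
            sPrev2 = sPrev
            sPrev = s
        }
        // Squared magnitude of the DFT bin at `frequency`.
        return sPrev * sPrev + sPrev2 * sPrev2 - coeff * sPrev * sPrev2
    }

    func startListening() throws {
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)

        input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
            guard let channel = buffer.floatChannelData?[0] else { return }
            let samples = Array(UnsafeBufferPointer(start: channel, count: Int(buffer.frameLength)))
            let power = goertzelPower(samples: samples,
                                      sampleRate: format.sampleRate,
                                      frequency: targetFrequency)
            if power > 1.0 {   // threshold is an assumption; calibrate for your input
                print("Tone near \(targetFrequency) Hz detected, power \(power)")
            }
        }
        try engine.start()
    }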
Without knowing the exact use case, I imagine that code similar to what's used in a musical instrument tuner app would work. A quick search found this guitar tuning app, which uses fast Fourier transforms.
Note that FFTs are implemented in the Accelerate framework, and in this app they appear to be imported from some other library.

iOS Panorama UI

I am trying to create a Panorama app for iPhone/iPad.
The image stitching bit is OK; I'm using the OpenCV libraries and the results are pretty acceptable.
But I'm a bit stuck on developing the UI for assisting the user while capturing the panorama.
Most apps (even on Android) provide the user with some sort of marker that translates/rotates exactly matching the movement of the user's camera.
[I'm using the iOS 7 default camera's panorama feature as a preliminary benchmark.]
However, I'm still way off the mark to date.
What I've tried:
I've tried using the accelerometer and gyro data for tracking the marker. With this approach:

- I've applied an LPF to the accelerometer data and used simple Newtonian mechanics (with a carefully tuned damping factor) to translate the marker on the screen. Problem with this approach: very erratic data; the marker tends to jump and wobble between points, and it's hard to tell smooth movement from a jerk.
- I've tried using a complementary filter between the LPF-ed gyro and accelerometer data to translate the blob (see the sketch right after this list). Problem with this approach: slightly better than the first approach, but still quite random.

I've also tried using image processing to compute optical flow. I'm using OpenCV's

    goodFeaturesToTrack(firstMat, cornersA, 30, 0.01, 30);

to get the trackable points from a first image (sampled from the camera picker) and then calcOpticalFlowPyrLK to get the positions of these points in the next image. Problem with this approach: the motion vectors obtained from tracking these points are too noisy to compute the resultant direction of motion accurately.
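For clarity, by complementary filter I mean essentially this single per-axis blend; a minimal sketch, where the 0.98 blend factor is only a placeholder to tune:

    import Foundation

    // Minimal complementary-filter sketch for one axis (e.g. pitch), fusing a
    // gyro rate (smooth but drifting) with an accelerometer-derived angle
    // (noisy but drift-free). alpha close to 1 trusts the gyro short-term.
    struct ComplementaryFilter {
        var angle = 0.0    // current fused estimate, radians
        var alpha = 0.98   // blend factor (placeholder; tune per device)

        mutating func update(gyroRate: Double, accelAngle: Double, dt: Double) -> Double {
            let gyroEstimate = angle + gyroRate * dt   // integrate angular velocity
            angle = alpha * gyroEstimate + (1 - alpha) * accelAngle
            return angle
        }
    }

    // Hypothetical usage with CoreMotion device-motion updates (per sample):
    // fusedPitch = filter.update(gyroRate: motion.rotationRate.x,
    //                            accelAngle: atan2(motion.gravity.y, motion.gravity.z),
    //                            dt: 1.0 / 100.0)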
What I think I should do next:
- Perhaps compute the DCT matrix from accelerometer and gyro data and use some algorithm to filter one output with the other.
- Work on the image processing algorithms, using some different techniques (???).
- Use a Kalman filter to fuse the state prediction from accelerometer+gyro with that of the image processing block (a toy scalar version is sketched right after this list).
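To illustrate the Kalman option in the last point, a toy scalar predict/update step (a real tracker would use a position/velocity state vector, and the noise values here are placeholders):

    import Foundation

    // Toy 1-D Kalman filter: the accel+gyro estimate drives the prediction and
    // the optical-flow estimate acts as the (noisy) measurement.
    struct SimpleKalman {
        var estimate = 0.0            // fused state estimate
        var errorCovariance = 1.0     // uncertainty of the estimate
        let processNoise = 1e-3       // placeholder: drift of the true state per step
        let measurementNoise = 1e-1   // placeholder: noise of each optical-flow reading

        mutating func update(prediction: Double, measurement: Double) -> Double {
            // Predict: take the inertial (accel+gyro) prediction and grow the uncertainty.
            estimate = prediction
            errorCovariance += processNoise

            // Update: blend in the optical-flow measurement, weighted by the Kalman gain.
            let gain = errorCovariance / (errorCovariance + measurementNoise)
            estimate += gain * (measurement - estimate)
            errorCovariance *= (1 - gain)
            return estimate
        }
    }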
The help that I need:
Can you suggest an easier way to get this job done?
If not, can you highlight any possible mistake in my approach? Does it really have to be this complicated?
Please help.

Motion Sensing by Camera in iOS

I am working on an iOS app that should trigger an event when the camera detects some changes in the image, in other words motion in the image. I am not asking about face recognition or tracking a particular colored object here; I got plenty of OpenCV results when I searched, and I also found that this can be achieved using the gyroscope and accelerometer together, but how?
I am a beginner in iOS. So my question is: is there any framework or any easy way to detect motion (motion sensing) with the camera, and how do I achieve it?
For example, if I move my hand in front of the camera, the app should show some message or alert.
Please also give me some useful and easy-to-understand links about this.
Thanks
If all you want is some kind of crude motion detection, my open source GPUImage framework has a GPUImageMotionDetector within it.
This admittedly simple motion detector does frame-to-frame comparisons, based on a low-pass filter, and can identify the number of pixels that have changed between frames and the centroid of the changed area. It operates on live video and I know some people who've used it for motion activation of functions in their iOS applications.
Because it relies on pixel differences and not optical flow or feature matching, it can be prone to false positives and can't track discrete objects as they move in a frame. However, if all you need is basic motion sensing, this is pretty easy to drop into your application. Look at the FilterShowcase example to see how it works in practice.
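Roughly, the wiring looks like this from Swift, following the pattern used in FilterShowcase (a rough sketch: the session preset and the intensity threshold are arbitrary example values, so adjust them for your setup):

    import UIKit
    import AVFoundation
    import GPUImage

    // Sketch: feed live camera frames into a GPUImageMotionDetector and react
    // when the motion intensity passes a (placeholder) threshold.
    let camera: GPUImageVideoCamera = GPUImageVideoCamera(sessionPreset: AVCaptureSession.Preset.vga640x480.rawValue,
                                                          cameraPosition: .back)
    let motionDetector = GPUImageMotionDetector()

    func startMotionSensing() {
        camera.outputImageOrientation = .portrait
        camera.addTarget(motionDetector)

        // Called per frame with the centroid of the changed pixels and an
        // overall intensity (roughly, how much of the frame changed).
        motionDetector.motionDetectionBlock = { centroid, intensity, frameTime in
            if intensity > 0.01 {   // threshold is an assumption; tune for your scene
                print("Motion around \(centroid), intensity \(intensity)")
            }
        }
        camera.startCameraCapture()
    }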
I don't exactly understand what you mean here:

    Here I am not asking about face recognition or a particular colored image motion, because I got all result for OpenCV when I searched

But I would suggest going for OpenCV, as you can use OpenCV on iOS. Here is a good link which helps you set up OpenCV on iOS.
There are a lot of OpenCV motion detection examples online, and here is one of them which you can make use of.
You need to convert the UIImage (the image type in iOS) to cv::Mat or IplImage and pass it to the OpenCV algorithms. You can convert using this link or this one.
