I want to capture an image from a webcam, and I want to print that captured image at a desired size.
Consider taking a screenshot programmatically and cropping it. If this is not possible, you should talk to the hardware directly by using the driver.
Also, please provide more info on the OS, webcam make, and programming platform.
I am developing a custom camera in which the camera is set to image capture mode. I need to change the zoom level of the camera preview according to the app requirements. The preview currently being displayed is perfect; I just need to zoom out further in the current preview. I searched the internet but didn't find any solution. Please tell me how I can do this. I am attaching example images for better understanding: the first image is from my camera app and the second is from the Scanner Pro app, which shows a view with more covered area when I point both apps at the same object from the same distance. My camera view doesn't have any spacing around the object, but the Scanner camera has spacing all around the image. Both cameras are at the same distance from the paper.
I don't know whether you still need this answer. Probably not, but still, for you and everyone else looking for it:
When you set the session preset, try using AVCaptureSessionPresetPhoto on the capture session. This should resolve the odd zoom issue.
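In case it helps, here is roughly what that looks like (a minimal sketch assuming a standard AVCaptureSession setup):

    AVCaptureSession *session = [[AVCaptureSession alloc] init];

    // Use the photo preset so the preview matches the photo's full field of view.
    if ([session canSetSessionPreset:AVCaptureSessionPresetPhoto]) {
        session.sessionPreset = AVCaptureSessionPresetPhoto;
    }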
Your preview view is probably spilling over the edges of the screen. Make sure it has a 4:3 aspect ratio and that it doesn't overflow the screen edges. With that you should see more of your image.
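For example (a sketch; `session` and the hosting view controller are assumed to exist already):

    AVCaptureVideoPreviewLayer *previewLayer =
        [AVCaptureVideoPreviewLayer layerWithSession:session];

    // Letterbox the 4:3 frame inside the view instead of filling (and cropping) it.
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspect;
    previewLayer.frame = self.view.bounds;
    [self.view.layer addSublayer:previewLayer];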
I'm trying to capture an image using the AVFoundation framework both with and without flash, but without flash I cannot capture the original image; it gives me a black image instead.
With the same code I can capture an image with auto flash, but in that case I can only see the part of the image that is fully lit, not the actual image.
I've set flashMode to AVCaptureFlashModeAuto.
Even with AVCaptureFlashModeOn, the captured image appears darker than the original preview.
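For reference, this is roughly how the flash mode is being configured (a simplified sketch of the setup, not the full code):

    AVCaptureDevice *device =
        [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    NSError *error = nil;
    if ([device hasFlash] && [device isFlashModeSupported:AVCaptureFlashModeAuto]
        && [device lockForConfiguration:&error]) {
        device.flashMode = AVCaptureFlashModeAuto;   // also tried AVCaptureFlashModeOn
        [device unlockForConfiguration];
    }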
Please help me solve this, as I'm not able to find any solution.
Thanks.
I'm currently using AVCaptureSessionPresetPhoto to take my pictures, and I'm adding filters to them. The problem is that the resolution is so big that I have memory warnings ringing all over the place. The picture is simply way too large to process. It crashes my app every single time. Is there any way I can specify the resolution to shoot at?
EDIT:
Photography apps like Instagram or the Facebook Camera app, for example, can do this without any problems. These applications can take pictures at high resolutions, scale them down, and process them without any delay. I did a comparison check: the native iOS camera maintains a much higher-quality resolution compared to pictures taken by other applications. That extreme level of quality isn't really required on a mobile platform, so it seems as if these images are being taken at a lower resolution to allow for faster processing and quick upload times. Thus there must be a way to shoot at a lower resolution. If anyone has a solution to this problem, it would be greatly appreciated!
You need to resize the image after capturing it using AVCaptureSession and store the resized version (see the sketch after the links below).
You can find lots of similar questions on Stack Overflow; I'm putting some links below that should help you.
One more thing, as a suggestion: use SDWebImage to display images asynchronously so the app keeps working smoothly. There are also other ways to handle asynchronous tasks in iOS, for example Grand Central Dispatch (GCD) and NSOperationQueue.
Resize image:
How to resize an image in iOS?
UIImage resizing not working properly
How to ReSize Image with Good Quality in iPhone
How to resize the image programmatically in objective-c in iphone
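Here is a rough sketch of such a resize helper (the method name and target size are just examples; adjust to whatever your filters need):

    // Draws the image into a smaller context and returns the scaled copy.
    - (UIImage *)resizedImage:(UIImage *)image toSize:(CGSize)newSize {
        UIGraphicsBeginImageContextWithOptions(newSize, NO, 1.0);
        [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
        UIImage *resized = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return resized;
    }

And, following the GCD suggestion above, you could call it off the main thread something like this (`capturedImage` is just a placeholder for whatever image your capture callback gives you):

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // Resize (and filter) in the background, then come back to the main thread for UI work.
        UIImage *small = [self resizedImage:capturedImage toSize:CGSizeMake(1280, 960)];
        dispatch_async(dispatch_get_main_queue(), ^{
            // use `small` for display or further processing
        });
    });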
Currently I am working on an image app where I am able to capture an image using the AVFoundation framework. But what I am looking for is to capture an image with a certain resolution and DPI (maybe 300 DPI or greater).
How can I do this?
There have been numerous posts on here about trying to do OCR on camera-generated images. The problem is not that the resolution is too low, but that it's too high. I cannot find the link right now, but there was a question a year or so ago where, in the end, the OCR engine worked better once the image size was reduced by a factor of four or so. If you examine the image in Preview, what you want is for the number of pixels per character to be, say, 16x16 or 32x32, not 256x256. Frankly, I don't know the exact number, but I'm sure you can research this and find posts from actual framework users telling you the best size.
Here is a nice response on how to best scale a large image (with a link to code).
I'm using Tesseract OCR 3.01 in my iOS application; it shows 90% accuracy for my data when I pick an image from my phone's library. But if I use the same image from the camera, it shows jumbled letters. I followed this tutorial; kindly guide me if something can be done to make sure it works from the camera as it does for gallery images.
Yup, there are three things to be specific. First of all, OCR works better with black-and-white images than with colored ones, so if you convert your image to B&W it will increase accuracy (see the sketch at the end of this answer).
The second thing is size and orientation. You need to force the image to 640x480 or a 320-pixel size; this will increase both the speed of recognition and the accuracy. For orientation, there are a lot of ways to manage it.
Finally, if you can somehow allow the user to specify exactly where, or on which part of the image, they want to perform the OCR, this greatly improves accuracy and processing time, since the library does not need to check the entire image for text; you have already specified the part to be searched.
PS: I have been working on creating an OCR app for the past few weeks.
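For the black-and-white conversion mentioned in the first point, one common approach is to redraw the image into a grayscale bitmap context. This is only a rough sketch (the helper name is just for illustration):

    // Redraws the image into a device-gray context, discarding colour information.
    - (UIImage *)grayscaleImage:(UIImage *)image {
        CGSize size = image.size;
        CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
        CGContextRef ctx = CGBitmapContextCreate(NULL, (size_t)size.width, (size_t)size.height,
                                                 8, 0, gray, (CGBitmapInfo)kCGImageAlphaNone);
        CGContextDrawImage(ctx, CGRectMake(0, 0, size.width, size.height), image.CGImage);
        CGImageRef grayCG = CGBitmapContextCreateImage(ctx);
        UIImage *result = [UIImage imageWithCGImage:grayCG];
        CGImageRelease(grayCG);
        CGContextRelease(ctx);
        CGColorSpaceRelease(gray);
        return result;
    }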
Almost for sure the problem is "orientation". Apple tends to create images in one bitmap form: the image bits are laid out as if the camera were on its side with the volume buttons at the top right. Images that appear taller than they are wide are still laid out as above, but there is an "orientation" entry in the EXIF data included with the image.
I'm going to guess that Tesseract does not look at the EXIF data, but expects the image in a "standard" format so that the text is in the position it would be for a person reading it.
You can test my hypothesis by using camera images taken with the volume buttons at the top right.
If those work, then what you will need to do is process the image yourself and rearrange the bits per the orientation setting. This is not all that hard to do, but it will require you to read up on vImage and/or bitmap contexts.
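One way to do that without dropping all the way down to vImage is to let UIKit redraw the image into a new bitmap context, which bakes the EXIF orientation into the pixel data before you hand it to Tesseract. This is just a sketch (the helper name is illustrative):

    // Redraws the image so the underlying bitmap matches the "up" orientation.
    - (UIImage *)normalizedImage:(UIImage *)image {
        if (image.imageOrientation == UIImageOrientationUp) {
            return image;   // already laid out the way a reader would see it
        }
        UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
        [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
        UIImage *normalized = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return normalized;
    }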