Capturing an image at higher resolution in iOS

I am currently working on an image app where I capture images using the AVFoundation framework. What I am looking for is a way to capture an image at a certain resolution and DPI (maybe 300 DPI or greater).
How can I do this?

There have been numerous posts here about trying to run OCR on camera-generated images. The problem is not that the resolution is too low, but that it's too high. I cannot find the link right now, but there was a question a year or so ago where, in the end, the OCR engine worked better once the image size was reduced by a factor of four or so. If you examine the image in Preview, what you want is something like 16x16 or 32x32 pixels per character, not 256x256. Frankly, I don't know the exact number, but I'm sure you can research this and find posts from actual framework users telling you the best size.
Here is a nice response on how to best scale a large image (with a link to code).
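A minimal sketch of that downscaling step (UIKit assumed; the function name and the one-quarter factor are illustrative, not tuned values):

```swift
import UIKit

/// Returns a copy of `image` scaled down by `factor` (e.g. 0.25),
/// which is often a better input for an OCR engine than the full-size photo.
func downscaled(_ image: UIImage, by factor: CGFloat) -> UIImage {
    let newSize = CGSize(width: image.size.width * factor,
                         height: image.size.height * factor)
    let renderer = UIGraphicsImageRenderer(size: newSize)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: newSize))
    }
}

// Usage: let smaller = downscaled(photo, by: 0.25)
```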

Related

iOS ideal image resolution

I'm having a really hard time understanding this, but let's say I have an iOS app for both iPad and iPhone, and I want to download an image from a server and display it full screen.
I have read that the iPad Pro has a resolution of 2732x2048, and that if we want to display an image full screen we would need to download the image at this size, right? However, I have also read that an image should never be over 300KB. I was not able to get an image of this size under 2MB (I used JPEGmini, for example, to reduce the file size).
And I don't think an iPhone user would need to download such a huge image. So my questions are: what resolution should my images be on the server, and how can I keep them at a reasonable file size? Also, should I upload multiple images for different devices? If so, how many and at what resolutions?
Isn't the problem merely that you are holding incompatible beliefs? This is the belief that is giving you trouble:
I also read that the image should never be over 300KB.
Let go of it.
Clearly it is right to say that the image should be no larger than needed for display. But to fill the 12.9-inch iPad Pro's screen at native resolution, the image needs to be 2732x2048 pixels (it is a 2x device, so that's 1366x1024 points). So that's that.
(You could, alternatively, use a smaller image and let it be scaled up to fill the screen. It wouldn't look quite as good as the full-resolution image, but it might be acceptable.)
On a smaller device, yes, you should scale the image down in code, so that you are not holding an image in memory that is larger than needed for display. But in this case, you need the large image for display.
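One way to decide which size to request is to compute the screen's pixel dimensions at runtime. A minimal sketch (the function name is illustrative, and it assumes your server offers several size variants to choose from):

```swift
import UIKit

/// The full-screen image size in pixels for the current device:
/// the point size multiplied by the screen's scale factor (2x, 3x, ...).
func fullScreenPixelSize() -> CGSize {
    let points = UIScreen.main.bounds.size
    let scale = UIScreen.main.scale
    return CGSize(width: points.width * scale,
                  height: points.height * scale)
}

// On a 12.9-inch iPad Pro in portrait this yields 2048x2732;
// request the closest variant your server offers.
```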

Which image format should I use for native iOS development? SVG or PNG?

I have been doing iOS development for the past month or so, and what I have experienced is that I have to supply images at 1x, 2x, and 3x for iPhone, and then 2x retina for iPad. An experienced designer suggested that I go with the SVG format, since it scales itself according to the screen size.
So my elaborated questions are:
Can I use SVG instead of PNG?
Is it still necessary to provide images at 2x and 3x for iPhone and iPad if I'm using SVG?
Will SVG images scale according to the screen size without losing quality?
If you have any other advice from your experience, please share.
Thank you.
The official iOS developer documentation says "the PNG format is the one most recommended for use in your apps"; you can read it here for a lot more information. Taking your questions in order:
1. Yes, although the supported file types table doesn't list it. Apple values user experience: SVG scaling consumes a few more CPU cycles, which they don't like, and PNG rendering is more efficient than SVG.
2. Yes, Apple explicitly recommends supplying multiple versions of the image at different sizes. Scaling can then be done from the file with the nearest dimensions.
3. See answer 1. There are cases, such as zoom-in/out scenarios, where SVGs would be better, though.
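For reference, here is how those multiple versions are conventionally named, and how UIKit picks among them at runtime (a minimal sketch; the file names are illustrative):

```swift
import UIKit

// Asset catalog / bundle naming convention (illustrative names):
//   icon.png      -> 1x devices
//   icon@2x.png   -> 2x (retina) devices
//   icon@3x.png   -> 3x devices (Plus/Pro-class iPhones)
//
// UIImage(named:) automatically selects the variant matching
// the current screen's scale factor.
let icon = UIImage(named: "icon")
```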
Alternatively, you could use vectorized PDFs; you can read more here. The approach isn't without limitations, but with vectorized PDFs, Xcode automatically generates the scaled versions, which should make life easier. Note that sometimes the scaled results look quite poor.

Supplying the right image size when not knowing what the size will be at runtime

I am displaying a grid of images (3 rows x 3 columns) in a collection view. Each image is a square, and its width is set to 1/3 of the collection view's width. The collection view is pinned to the left and right margins of the main view.
I do not know what the image height and width will be at runtime, because of the different screen sizes of the various iPhones. For example, each image will be 100x100 display pixels on a 5S, but 130x130 on a 6+. I was advised to supply images that exactly match the size on screen. Bigger images often tend to become pixelated and overly sharp when downsized. How does one tackle this problem?
The usual solution is to supply three versions, for single-, double-, and triple-resolution screens, and downsize in real time when the image is first needed, by redrawing it with drawInRect into a graphics context.
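A sketch of that redraw step (the function name is illustrative; drawInRect is `draw(in:)` in Swift, and passing 0 for the scale uses the device's screen scale):

```swift
import UIKit

/// Redraws `image` at exactly `targetSize` (in points) the first time
/// a cell needs it, so you never keep an oversized bitmap around.
func rendered(_ image: UIImage, at targetSize: CGSize) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(targetSize, false, 0)
    defer { UIGraphicsEndImageContext() }
    image.draw(in: CGRect(origin: .zero, size: targetSize))
    return UIGraphicsGetImageFromCurrentImageContext()
}
```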
I do not know what the image height and width will be at runtime, because of the different screen sizes of the various iPhones. For example, each image will be 100x100 display pixels on a 5S, but 130x130 on a 6+.
Okay, so your first sentence is a lie. The second sentence proves that you do know what the size will be on the different screen sizes. Clearly, if I tell you the name of a device, you can tell me what you think the image size should be. So, if you don't want to downscale a larger image at runtime because you don't like the resulting quality, simply supply images at the correct size and resolution for every device, and use the correct image for the actual device type you find yourself running on.
If your images are photos or other raster images created with a raster drawing tool, then somewhere you will have to scale the original to the sizes you want. You can either do this at runtime on iOS, or create the sets up front using a tool that gives you better scaling results. Unfortunately, the only perfect image is the original; everything else is a distortion of the truth.
For icons, the only accurate rendering solution is to use vector graphics. Tools like Adobe Illustrator let you create images that you can scale to different sizes without losing clarity. Unfortunately, this still leaves you generating images up front. You can script this generation with most tools, and since you said your images are all square, the total number needed is not huge: at most 3 for iPhone (the 4 and 5 share a width, then the 6 and 6+) and 2 for iPad (one for the mini/iPad 1 and one for retina).
Although iOS has no direct support that I know of for vector image rendering, there are some third-party tools. http://www.paintcodeapp.com/ is one example; it lets you import or draw vector images and then generates drawing code to run in your app. This kind of tool would give you what you want, since the images become vector drawings rendered at whatever scale you choose at run time. It costs $99, though.
There is also SVGKit (https://github.com/SVGKit/SVGKit), though I'm not sure how good or bad it is. It seems to let you load and render directly from SVG files. It might be worth trying.
So, in summary: either generate the relatively small set of images up front with a tool whose output you can control, take the hit on iOS and let it scale the images, or use a third-party vector-to-image rendering kit that gives you what you want.

UIImage size management

I have an image that is used in multiple places within my iOS app, at various sizes. Should I be using one large image in conjunction with UIViewContentModeScaleAspectFit to scale it (currently this approach seems to cause distortion in the smaller images, despite them being scaled to proportionate sizes), or should I have multiple versions of the image at different sizes?
It seems a little extreme to have so many copies of one image, e.g. my_image_ipad, my_image_small, my_image_large, my_image_medium.
In general, it depends on what it takes to get the look and quality you want. Some images scale from large to small with little or no visible loss in quality. Others, as you've found, scale poorly, especially at extreme scales. You're really going to have to try various solutions and see what works. One way you can almost always cut down application size is to remove the non-@2x artwork and ship just the @2x versions.
Using a single image is better in that it makes your app "lighter" (a smaller bundle), but it may make your code bigger.

Tesseract OCR Camera

I'm using Tesseract OCR 3.01 in my iOS application. It shows 90% accuracy for my data when I pick an image from my phone's library, but if I use the same image from the camera, it shows jumbled letters. I followed this tutorial; kindly guide me on what can be done to make sure it works from the camera as it does for gallery images.
Yup. There are three things, to be specific. First of all, OCR works better with black-and-white images than with color ones, so if you can convert your image to B&W, it will increase accuracy.
The second thing is size and orientation. You need to force the image down to a 640x480 or 320-wide size; this increases both the speed of recognition and the accuracy. For orientation, there are a lot of ways to manage it.
Finally, if you can somehow let the user specify exactly which part of the image to run the OCR on, that greatly improves both accuracy and speed, since the library does not need to scan the entire image for text; you have already specified the part to be searched.
PS: I have been working on an OCR app for the past few weeks.
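A sketch of the black-and-white step using Core Image (the filter choice here is just one option; a proper threshold or adaptive binarization pass would be a further refinement):

```swift
import UIKit
import CoreImage

/// Desaturates an image as a simple first pass toward the
/// black-and-white input that OCR engines tend to prefer.
func grayscale(_ image: UIImage) -> UIImage? {
    guard let ciImage = CIImage(image: image),
          let filter = CIFilter(name: "CIColorControls") else { return nil }
    filter.setValue(ciImage, forKey: kCIInputImageKey)
    filter.setValue(0.0, forKey: kCIInputSaturationKey) // drop all color
    filter.setValue(1.1, forKey: kCIInputContrastKey)   // mild contrast boost
    guard let output = filter.outputImage else { return nil }
    let context = CIContext()
    guard let cgImage = context.createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}
```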
Almost for sure the problem is orientation. Apple tends to create images in one bitmap form: the image bits are laid out as if the camera were on its side, with the volume buttons at the top right. Images that appear taller than they are wide are still laid out that way, but carry an "orientation" value in the EXIF metadata included with the image.
I'm going to guess that Tesseract does not look at the EXIF data, but expects the image in a "standard" format, with the text positioned as it would be for a person reading it.
You can test my hypothesis by using camera images taken with the volume buttons at the top right.
If those work, then what you need to do is process the image yourself and rearrange the bits according to the orientation setting. This is not all that hard to do, but it will require you to read up on vImage and/or bitmap contexts.
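If you'd rather not drop down to vImage, a higher-level approach that achieves the same normalization is to let UIKit redraw the image, which bakes the EXIF orientation into the pixel data. A minimal sketch, assuming UIKit (the function name is illustrative):

```swift
import UIKit

/// Returns an image whose pixel data matches its display orientation.
/// `draw(in:)` applies the UIImage's orientation transform, so the
/// result is always .up -- the layout an OCR engine typically expects.
func normalizedOrientation(_ image: UIImage) -> UIImage {
    guard image.imageOrientation != .up else { return image }
    return UIGraphicsImageRenderer(size: image.size).image { _ in
        image.draw(in: CGRect(origin: .zero, size: image.size))
    }
}
```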
