I am working on a project to recognize text in business cards and map it to appropriate fields. I am using OpenCV for image processing, and I need to feed the preprocessed image to the Tesseract OCR engine for text recognition. This link
states that images should have at least a DPI of 300. My image is 2560x1536 pixels at 72 DPI.
How do I increase the DPI to 300?
It is also said that it is beneficial to resize the image. How do I resize my image optimally for good OCR results?
"Tesseract works best on images which have a DPI of at least 300 dpi, so it may be beneficial to resize images." What does 'so' imply here? What is the relation between resizing an image and DPI?
For OCR, what really matters is the resolution in pixels, because the physical characters can range from tiny to huge independently of the DPI of the acquisition device.
As a rule of thumb, a stroke width of around 3 pixels is a good start. If it is lower, resizing might not be helpful because the information is already missing. If it is much higher, the running time might be excessive (or the OCR function may not be tailored to deal with it).
Also check that the package will not attempt to resize internally, based on its own assumption about stroke width and the DPI info stored in the header, if there is a mismatch.
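If it helps to see it end to end, here is a minimal sketch (assuming OpenCV plus pytesseract; the file name, the 2x scale factor and the Otsu threshold are placeholders to tune, not a recipe):

import cv2
import pytesseract

img = cv2.imread("card.png", cv2.IMREAD_GRAYSCALE)

# Upscaling in pixels is what actually adds resolution for the OCR engine;
# 2x is only a guess -- tune it until stroke width is roughly 3 px.
img = cv2.resize(img, None, fx=2.0, fy=2.0, interpolation=cv2.INTER_CUBIC)

# Simple Otsu binarization often helps Tesseract on business cards.
_, img = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# The "300 DPI" part is metadata: tell Tesseract what density to assume
# so it does not second-guess the value stored in the image header.
text = pytesseract.image_to_string(img, config="--dpi 300")
print(text)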
For any given file data size, I want to be able to resize (or compress) a UIImage to fit within that data limit. This question is NOT about how to resize, or how to check file sizes... it is about an algorithm for doing this in a performant way.
Searching here already, I found this thread which talks about stepping down the image jpeg quality in a linear, or binary algorithm. This isn't very performant, taking dozens of seconds at best.
I am working on iOS so images can be close to 10MB (from iPhone 4S). My target, although variable, is currently 3145728 bytes.
I am currently using UIImageJPEGRepresentation to compress a little, but to get down to my target it appears I would have to lose a lot of quality for such a large photo. Is there a relation between UIImage size and NSData size? Is there some function where I can say something like:
area * X = dataSize
...and solve for a scale factor so I can resize in one shot?
One idea I just had after looking at the thread you linked to: compressing a 10MB image is going to be relatively slow. How about resizing to something much smaller (so that compression is much faster), then performing the compression algorithm (from the link) on that? The result can then be used as a guide to the quality needed when compressing the 10MB image, the idea being that the compression ratio should be similar for the same image, independent of size.
Let's say 1000x1000 pixels compressed is 10MB, and the target size is 3MB.
Then say a smaller 100x100-pixel version (for example), compressed with the same quality, is C MB. Perform the binary-search algorithm on the 100x100 image until its size is C * (3/10), then use that compression quality on the 1000x1000 image to get a ~3MB image.
Note: I have no idea how well this will work - it's just a suggestion. What size to pick (I've used 100x100) for the smaller image is also just a guess and something that would need to be experimented with.
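To make the suggestion concrete, here is a rough sketch of that idea in Python with Pillow (purely to illustrate the algorithm, not iOS code; the proxy width, the reference quality of 90 and the file names are guesses to experiment with):

import io
from PIL import Image

def jpeg_bytes(img, quality):
    """Encoded JPEG size in bytes at a given quality setting."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return buf.tell()

def estimate_quality(full_img, current_bytes, target_bytes, proxy_width=100):
    # Small proxy: cheap to compress repeatedly.
    h = max(1, full_img.height * proxy_width // full_img.width)
    proxy = full_img.resize((proxy_width, h))

    # Assume the proxy shrinks by roughly the same ratio as the full image,
    # so aim the proxy at C * (target / current).
    proxy_target = jpeg_bytes(proxy, 90) * (target_bytes / current_bytes)

    best, lo, hi = 1, 1, 95
    while lo <= hi:                      # binary search on quality
        mid = (lo + hi) // 2
        if jpeg_bytes(proxy, mid) <= proxy_target:
            best, lo = mid, mid + 1      # fits: try a higher quality
        else:
            hi = mid - 1                 # too big: lower the quality
    return best

img = Image.open("photo.jpg")            # hypothetical ~10MB source image
q = estimate_quality(img, current_bytes=10_000_000, target_bytes=3_145_728)
img.save("photo_small.jpg", format="JPEG", quality=q)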
Are there libraries, scripts or any techniques to increase image size in height and width....
or do you need to have a super high-resolution image to begin with?.....
Bicubic interpolation is pretty much the best you're going to get when it comes to increasing image size while maintaining as much of the original detail as possible. It's not yet possible to work the actual magic that your question would require.
The Wikipedia link above is a pretty solid reference, but there was a question asked about how it works here on Stack Overflow: How does bicubic interpolation work?
This is the highest quality resampling algorithm that Photoshop (and other graphic software) offers. Generally, it's recommended that you use bicubic smoothing when you're increasing image size, and bicubic sharpening when you're reducing image size. Sharpening can produce an over-sharpened image when you are enlarging an image, so you need to be careful.
As far as libraries or scripts go, it's difficult to recommend anything without knowing what language you intend to do this in. But I can guarantee that there's an image processing library that already includes this algorithm for any of the popular languages; I wouldn't advise reimplementing it yourself.
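For instance, if you end up doing this in Python, OpenCV already exposes bicubic resampling; a minimal sketch (the 2x factor and file names are arbitrary):

import cv2

# Bicubic upscaling; INTER_CUBIC is OpenCV's bicubic interpolation.
img = cv2.imread("input.png")
big = cv2.resize(img, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)
cv2.imwrite("output.png", big)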
Increasing height & width of an image means one of two things:
i) You are increasing the physical size of the image (e.g. in cm or inches), without touching its content.
ii) You are trying to increase the image's pixel content (i.e. its resolution).
So:
(i) has to do with rendering. As the image's physical size goes up, you are drawing larger pixels (the DPI goes down). Good if you want to look at the image from far away (say on a really large screen). If you look at it from up close, you are going to see mostly large dots.
(ii) is just plainly impossible. Say your image is 100x100 pixels and you want to make it 200x200. This means you start from 10,000 pixels and end up with 40,000... what are you going to put in the 30,000 new pixels? Whatever your answer, you are going to end up with 30,000 invented pixels, and the image you get is going to be either fuzzier or faker, and usually both. All the techniques that increase an image's size use some sort of average among neighboring pixel values, which amounts to "fuzzier".
Cheers.
I have a bunch of images which are way too big. I need to decrease their size from 30 kb to 10 or 5 kb without losing quality. I tried to change the DPI and pixels without success: the images got blurred, and as they contain text I can't read anything after the changes. Is there any way I can accomplish this without losing quality? I have almost a dozen images in my application.
Thanks in advance and have a nice day.
For batch resizing I use IrfanView (despite its "lite-ness" it's very powerful).
It has a nice batch dialog, with a lot of options.
If you're working with PNG files, try using better compression and/or different color depth settings (if you're not using transparency you could try converting them to JPEG, although you might lose some quality).
Changing color depth/range/compression might not affect image quality (not visibly anyway, if used with moderation) and it will decrease the size of the picture - in most cases anyway.
If you want to stick with GIMP (I've never personally used it), it should have some export features where you can select settings for the image, like format and options.
You cannot leave out data without reducing quality. Data has meaning.
You may try using improved compression; pngcrush is a tool that automatically tries several approaches for you and picks the best.
Reducing colour depth will reduce the file size (while reducing colour quality). You can also turn on dithering in some image editors, but that's another loss in quality.
If your image has photographic content rather than graphical content, convert to JPEG and experiment a bit with the JPEG quality settings.
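If the images are produced or post-processed by a script anyway, here is a rough Pillow sketch of both options; the file names, the 64-colour palette and the quality values are arbitrary starting points, not recommendations:

from PIL import Image

img = Image.open("screenshot.png")   # hypothetical input

# Option 1: reduce colour depth (graphical content such as UI text).
# Fewer palette entries -> smaller PNG, at the cost of colour fidelity.
img.quantize(colors=64).save("screenshot_small.png", optimize=True)

# Option 2: photographic content -> JPEG, experimenting with quality.
for q in (85, 70, 50):
    img.convert("RGB").save(f"photo_q{q}.jpg", quality=q)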
It seems that if I have a large PNG 2500px wide and I want to resize it down to 100px wide, and I scale the image all at once to the desired size, the image becomes way too distorted to use.
However, if I scale the image in small increments of 200 pixels and repeat until I reach the desired width, the image does not get as distorted. So if I'm at 2500px I would scale the image to 2300px, then to 2100px, and so on. The smaller the step, the less distortion.
Any resize method will have some loss, no matter how small; the following steps will still lose some quality.
Steps for a single layer:
layer->scale layer
image->scale image
image->fit canvas to layer
file->export as
Steps for multiple layers:
layer->new layer group
move all layers to layer group
select layer group
layer->scale layer
image->scale image
image->fit canvas to layer
file->export as
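If you are scripting this instead of using GIMP, the distortion from a one-shot downscale usually comes from the resampling filter; a sketch using Pillow's LANCZOS filter (which averages over the source pixels), with the widths taken from the question:

from PIL import Image

img = Image.open("big.png")                      # e.g. 2500 px wide
new_w = 100
new_h = max(1, img.height * new_w // img.width)  # keep the aspect ratio

# LANCZOS anti-aliases while downscaling, so a single resize step
# stays close to the quality of the incremental approach.
img.resize((new_w, new_h), Image.LANCZOS).save("small.png")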
I have a 1000x1000 300dpi image that I need to convert to a 100x100 96dpi thumbnail. How do I do this in ImageMagick? I'm after the smallest possible file size at the highest possible quality.
Doing something like this:
convert myimage.png -quality 100 -resize 100 PNG8:mynewimage.png
.... does change the dimensions, but still maintains the DPI. If I can get this to change to 96dpi, I should get a smaller file size.
I've tried -density, etc., but can't seem to make them work for me. Maybe I put the commands in the wrong order or passed the wrong parameters. Any assistance is greatly appreciated. Thanks.
The short version is: if you want a 100x100 image in PNG format, the line you have will already give you the best quality at the smallest file size. You can't do any better than that without a) encoding to a lossy format (JPEG) or b) reducing the color depth of your image.
For a slightly longer explanation, straight from Wikipedia: "Dots per inch (DPI) is a measure of spatial printing or video dot density, in particular the number of individual dots that can be placed in a line within the span of 1 inch (2.54 cm). The DPI value tends to correlate with image resolution, but is related only indirectly."
DPI has nothing to do with getting a smaller file size; your 100x100 image measures 100x100 pixels, no matter whether you see it on a 300dpi screen or a 96dpi one (it will just look smaller on the 300dpi screen). The amount of information is the same either way.
"-density" won't help either, as it only works when "encoding a raster image while rendering (reading) vector formats such as Postscript, PDF, WMF, and SVG into a raster image". Those formats are resolution-independent, so it makes sense to tell ImageMagick the DPI at which you want the image rasterized; that DPI should be a function of the output device you plan to use. In your case, since you're starting with an already-rasterized image, it has no effect.
PNG is a lossless format, so the -quality parameter only controls the zlib compression level; any gains in file size will be minimal, but it's worth using.
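If you want to convince yourself, a small Pillow check (hypothetical output file names) shows that the DPI value is metadata only and barely affects the PNG size:

import os
from PIL import Image

# Write the same 100x100 pixels with two different DPI tags; the pixel
# data is identical, so the file sizes differ by a few bytes at most.
thumb = Image.open("mynewimage.png")
thumb.save("thumb_300dpi.png", dpi=(300, 300))
thumb.save("thumb_96dpi.png", dpi=(96, 96))

print(os.path.getsize("thumb_300dpi.png"), os.path.getsize("thumb_96dpi.png"))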