Does EPS have a DPI?

Some of what I've read about EPS (Encapsulated PostScript) sounds like it assumes 72 dpi. Other sources say that because it's vector-based there is no dpi, and that I can use any units when creating the file, since the EPS is positioned and sized when it is imported/used.
So which is it?

Related

Eliminate hairlines from a vector graphic by converting to an oversampled bitmap and then downscaling - how with ImageMagick?

I used Apple Numbers (a Spreadsheet app with styling options) to create a UX flowchart of various user interfaces of an app.
Apple Numbers has a PDF export option.
The problem is that even though some border lines in the table have been set to "none", the exported PDF nevertheless shows small visible hairlines, see this cutout:
[cutout showing the unwanted hairlines]
I want to eliminate the hairlines by image processing before creating a flyover video over the graphic.
My basic idea is:
Convert the vector to a bitmap at very high resolution (oversampling, e.g. to 600 or 1200 DPI)
Then downsample to the target resolution (e.g. 150 DPI) with an algorithm that eliminates the hairlines (letting them disappear into the dominance of neighboring pixels) while overall remaining as crisp and sharp as possible.
Step 1 I have already figured out; there are these two possibilities:
a. Apple Preview has a PDF to PNG export option where you can specify the DPI.
b. ImageMagick convert -density 600 source.pdf export.png
But for step 2 there are so many possibilities:
-resample <DPI>, or -filter <FilterName> -resize 25%, or -scale 12.5% (when going from 1200 to 150 DPI)
Please tell me which method (resample, resize, scale) and which interpolation algorithm or filter I should use to achieve my goal of eliminating the hairlines by dissolving them into their neighboring pixels, while the rest (normal 1px lines, rendered text and symbols, etc.) remains as crisp as possible.
ImageMagick PDF to PNG conversion with different DPI settings:
convert -density XXX flowchart.pdf flowchart-ImageMagick-XXX.png
flowchart-ImageMagick-150.png ; flowchart-ImageMagick-300.png ; flowchart-ImageMagick-600.png
Apple Preview PDF to PNG export with different DPI settings:
flowchart-ApplePreview-150.png ; flowchart-ApplePreview-300.png ; flowchart-ApplePreview-600.png
Different downscaling approaches
a) convert -median 3x3 -resize 50% flowchart-ApplePreview-300.png flowchart-150-from-ApplePreview-300-median-3x3.png (thanks to the hint from @ChristophRackwitz)
b) convert -filter Box -resize 25% flowchart-ImageMagick-600.png flowchart-150-from-ImageMagick-600-resize-box.png
Comparison
flowchart-ApplePreview-150.png
flowchart-150-from-ApplePreview-300-median-3x3.png
✅ Hairlines gone
❌ But the font is not as crisp anymore; the median filter destroyed that.
flowchart-150-from-ImageMagick-600-resize-box.png
🆗 Overall still quite crisp
🆗 Hairlines only very, very faint, even when zoomed in
Both variants are good enough for my Ken Burns / dolly cam ride over them. Still, I wish there were an algorithm that keeps crispness but still eliminates 1px lines in very-high-DPI bitmaps. But I guess that jack-of-all-trades exists only in my imagination.
Processing Durations
MacBook Pro 15'' (Mid 2014, 2.5 GHz Quad-Core Intel Core i7)
ImageMagick PDF to PNG
PDF source: ca. 84 × 60 cm (33 × 23'')
300dpi -> 27s
600dpi -> 1m58s
1200dpi -> 37m34s
ImageMagick Downscaling
time convert -filter Box -resize 25% 1#600.png 1#150-from-600.png
# PNG # 39700 × 28066: 135.57s user 396.99s system 109% cpu 8:08.08 total
time convert -median 3x3 -resize 50% 2#300.png 2#150-from-300-median3x3.png
# PNG # 19850 × 14033: 311.48s user 9.42s system 536% cpu 59.76 total
time convert -median 3x3 -resize 50% 3#300.png 3#150-from-300-median3x3.png
# PNG # 19850 × 14033: 237.13s user 8.33s system 544% cpu 45.05 total

Scaling images before doing conversion or vice versa?

I wonder which of the methods below preserves more image detail:
Downscaling BGRA images and then converting them to NV12/YV12.
Converting BGRA images to NV12/YV12 and then downscaling them.
Thanks for your recommendation.
Updated 2020-02-04:
To make my question clearer, I want to describe it a little more.
The images come from a video stream like this:
Video stream
-> decoded to YV12
-> converted to BGRA
-> text stamped on
-> scaled down (or converted to YV12/NV12)
-> converted to YV12/NV12 (or scaled down)
-> H.264 encoder
-> video stream
The whole sequence of tasks takes 300 to 500 ms.
The issue I have is that the text stamped over the images does not look very clear after conversion and scaling. I am wondering about the order of steps 4 and 5: scale down then convert, or convert then scale down?
Noting that the RGB data is very likely non-linear (e.g. in an sRGB format), ideally you need to (see the sketch after this list):
Convert from the non-linear "R'G'B'" data to linear RGB (note this needs higher bit precision per channel; see the transfer-function spec on Wikipedia)
Apply your downscaling filter
Convert the linear result back to non-linear R'G'B' (ie. sRGB)
Convert this to YCbCr/NV12
Ideally you should always do filtering/blending/shading in linear space. To give you an intuitive justification for this, the average of black (0) and white (255) in linear colour space will be ~128 but in sRGB this mid grey is represented as (IIRC) 186. If you thus do your maths in sRGB space, your result will look unnaturally dark/murky.
(If you are in a hurry, you can sometimes get away with just using squaring (and sqrt()) as a kludge/hack to convert from sRGB to linear (and vice versa))
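A minimal sketch of those four steps in Python with OpenCV/NumPy (my own illustration, not code from this answer; frame.png and the 50% scale factor are placeholder choices, while the two transfer functions follow the sRGB spec):
import cv2
import numpy as np

def srgb_to_linear(x):
    # Exact sRGB decoding curve (per channel, input in [0, 1])
    return np.where(x <= 0.04045, x / 12.92, ((x + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(x):
    # Inverse: linear light back to sRGB-encoded values in [0, 1]
    return np.where(x <= 0.0031308, x * 12.92, 1.055 * np.power(x, 1 / 2.4) - 0.055)

bgr = cv2.imread("frame.png").astype(np.float32) / 255.0    # 8-bit sRGB -> float
linear = srgb_to_linear(bgr)                                # step 1: linearise (higher precision)
small = cv2.resize(linear, None, fx=0.5, fy=0.5,            # step 2: filter in linear light
                   interpolation=cv2.INTER_AREA)
srgb = np.clip(linear_to_srgb(small), 0.0, 1.0)             # step 3: re-encode as sRGB
out = (srgb * 255.0 + 0.5).astype(np.uint8)
ycrcb = cv2.cvtColor(out, cv2.COLOR_BGR2YCrCb)              # step 4: on to YCbCr, then pack NV12 as needed

# The black/white example above: a 50% linear mix encodes to roughly 187-188 in 8-bit sRGB, not 128
print(round(float(linear_to_srgb(np.float32(0.5))) * 255))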
To avoid two phases of spatial interpolation, the following order is recommended (a sketch in code follows below):
Convert RGBA to YUV444 (YCbCr) without resizing.
Resize Y channel to your destination resolution.
Resize U (Cb) and V (Cr) channels to half resolution in each axis.
The result format is YUV420 in the resolution of the output image.
Pack the data as NV12 (NV12 is YUV420 in specific data ordering).
It is possible to do the resize and NV12 packing in a single pass (if efficiency is a concern).
If you don't convert to YUV444 first, the U and V channels are going to be interpolated twice:
First interpolation when downscaling RGBA.
Second interpolation when U and V are halved during the conversion to the 4:2:0 format.
When downscaling, it's recommended to blur the image first (sometimes referred to as an "anti-aliasing" filter).
Remark: since the eye is less sensitive to chromatic resolution, you are probably not going to see any visible difference (unless the image has fine-resolution graphics like colored text).
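The sketch referenced above, in Python with OpenCV/NumPy (my own illustration; the helper name bgr_to_nv12, the 1280x720 target and the file names are made up, and the single-pass resize-plus-pack optimisation is not shown):
import cv2
import numpy as np

def bgr_to_nv12(bgr, dst_w, dst_h):
    # 1. Full-resolution conversion to YCbCr first (OpenCV orders the planes Y, Cr, Cb)
    y, cr, cb = cv2.split(cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb))

    # 2. Resize Y to the destination resolution (INTER_AREA also acts as the anti-aliasing filter)
    y_out = cv2.resize(y, (dst_w, dst_h), interpolation=cv2.INTER_AREA)

    # 3. Resize Cb/Cr straight to half resolution in each axis (one chroma interpolation only)
    cb_half = cv2.resize(cb, (dst_w // 2, dst_h // 2), interpolation=cv2.INTER_AREA)
    cr_half = cv2.resize(cr, (dst_w // 2, dst_h // 2), interpolation=cv2.INTER_AREA)

    # 4.-5. Pack as NV12: the full Y plane followed by interleaved U (Cb) and V (Cr) samples
    uv = np.empty((dst_h // 2, dst_w), dtype=np.uint8)
    uv[:, 0::2] = cb_half
    uv[:, 1::2] = cr_half
    return np.vstack([y_out, uv])           # shape (dst_h * 3 // 2, dst_w)

nv12 = bgr_to_nv12(cv2.imread("frame.png"), 1280, 720)   # dst_w and dst_h must be even
nv12.tofile("frame_1280x720.nv12")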
Remarks:
Simon's answer is better in terms of color accuracy.
In most cases you are not going to see the difference.
The gamma information is lost when converting to NV12.
Update: regarding "text stamped over the images looks not so clear after conversion and scaling":
If getting clear text is the main issue, the following stages are suggested (sketched in code below):
Downscale BGRA.
Stamp text (using smaller font).
Convert to NV12.
Downsampling an image with stamped text is going to result in unclear text.
A better solution is to stamp the text with a smaller font after downscaling.
Modern fonts use vector graphics rather than raster graphics, so stamping text with a smaller font gives a better result than downscaling an image that already has text stamped on it.
The NV12 format is YUV420: the U and V channels are downscaled by a factor of 2 in each axis, so text quality will be lower compared to RGB or YUV444.
Encoding the image is also going to damage the text.
For subtitles, the solution is to attach them as a separate stream and add the text after decoding the video.
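A short sketch of that order in Python with OpenCV (again my own illustration; the text, target resolution and file names are placeholders, and it stops at I420, since NV12 only differs in how the two chroma planes are interleaved):
import cv2

bgr = cv2.imread("frame.png")                          # decoded frame, already converted to BGR

# 1. Downscale first (INTER_AREA doubles as a mild anti-aliasing filter)
small = cv2.resize(bgr, (1280, 720), interpolation=cv2.INTER_AREA)

# 2. Stamp the text at the final resolution, using a correspondingly small font
cv2.putText(small, "CAM 01  2020-02-04 12:00:00", (16, 40),
            cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255, 255, 255), 2, cv2.LINE_AA)

# 3. Only then drop to 4:2:0 (I420 here; NV12 additionally interleaves the U and V planes)
i420 = cv2.cvtColor(small, cv2.COLOR_BGR2YUV_I420)     # shape (720 * 3 // 2, 1280)
i420.tofile("frame_1280x720_i420.yuv")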

How to split a BGR (raw) image into N equal images

The task at hand is to split an available BGR (raw) image into N equal images. Can someone give me a hint on how BGR raw images are stored in memory?
For example:
If I have a 1920 × 1080 pixel BGR image and I would like to split it into 8 equal parts, is there any framework available that can help me? I'm trying to write native C++ code on Android, and working with OpenCV would be expensive. Is there any other alternative?
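For what it's worth, a raw BGR frame is just a contiguous, header-less block of height x width x 3 bytes in row-major order (assuming no per-row stride/padding, which some APIs add). A small NumPy sketch of that layout and of slicing it into 8 horizontal strips, purely as an illustration (file names made up; the same row arithmetic translates directly into memcpy offsets in native C++):
import numpy as np

w, h, n = 1920, 1080, 8

# Raw BGR: h rows, each row w pixels of 3 bytes (B, G, R) -- h * w * 3 bytes total
frame = np.fromfile("frame_1920x1080.bgr", dtype=np.uint8).reshape(h, w, 3)

# N equal horizontal strips are just contiguous runs of (h // n) * w * 3 bytes each
for i, strip in enumerate(np.array_split(frame, n, axis=0)):
    strip.tofile(f"part_{i}_{w}x{strip.shape[0]}.bgr")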

Image conversion to PNG vs JPG using ImageMagick

I'm trying to convert some images to either PNG or JPG and want to find out which format will result in the smaller file size. In most cases PNG gives me the best compression, but for some odd images I get better compression out of JPG. I have two questions:
What characteristics of an image cause one format to give better results?
Is there a way to pre-determine which format will give better results without converting to both first?
This photo gives better compression result using PNG
This photo provides substantially better compression using JPG
I have absolutely no time to develop this line of thought further but the image entropy is probably a good discriminant for selecting JPEG or PNG - see my earlier comment on your question.
If you use ImageMagick, you can calculate the Entropy easily like this:
identify -verbose -features 1 image.jpg | grep -i -A1 entropy
Your top image gives output like this:
identify -verbose -features 1 t.jpg | grep -i -A1 entropy
Sum Entropy:
0.703064, 0.723437, 0.733147, 0.733015, 0.723166
Entropy:
1.01034, 1.12974, 1.14983, 1.15122, 1.11028
Difference Entropy:
0.433414, 0.647495, 0.665738, 0.671079, 0.604431
and your bottom image gives output like this:
identify -verbose -features 1 b.jpg | grep -i -A1 entropy
Sum Entropy:
1.60934, 1.62512, 1.65567, 1.65315, 1.63582
Entropy:
2.19687, 2.33206, 2.44111, 2.43816, 2.35205
Difference Entropy:
0.737134, 0.879926, 0.980157, 0.979763, 0.894245
I suspect images with a higher entropy will compress better as JPEGs and those with a lower entropy will fare better as PNGs - but I have to dash now :-)
There are 5 values for each type of entropy - horizontal, vertical, left diagonal, right diagonal and overall. I think the last value is the only one you need to consider.
Updated
Ok, I have had a little more time to spend on this now. I do not have a pile of sample images to test my theory on, so I did it a different way. I made a little script to calculate the following for a given input file:
ratio of JPEG size to PNG size
entropy
Here it is:
#!/bin/bash
f="$1"
# Size in bytes when re-saved as JPEG (metadata stripped) and as PNG
jsize=$(convert "$f" -strip JPG:- | wc -c)
psize=$(convert "$f" PNG:- | wc -c)
# JPEG size as a percentage of the PNG size
jpratio=$(echo $jsize*100/$psize | bc)
# Make greyscale version for entropy calculation
rm temp*.jpg 2> /dev/null
convert "$f" -colorspace gray temp.jpg
# Take the overall (5th) value from the line following "Entropy:"
entropy=$(identify -verbose -features 1 temp.jpg | grep -A1 " Entropy:" | tail -n 1 | awk -F, '{print $5}')
echo $jpratio:$entropy
So, for a given image, you would do this:
./go image.jpg
8:3.3 # JPEG is 8% of the PNG size and the entropy is 3.3
Then I took your image and added different amounts of noise to it to increase its entropy, like this
for i in {1..99}; do convert bottom.jpg +noise Gaussian -evaluate add ${i}% xx${i}.jpg;done
that gives me files called xx1.jpg with 1% noise, xx2.jpg with 2% noise and so on, up to xx99.jpg with 99% noise.
Then I ran each of the files through the first script, like this:
for f in xx*.jpg; do ./go $f;done > data.txt
to give me data.txt.
Then I created the following gnuplot command file plot.cmd:
set title 'Plotted with Gnuplot'
set ylabel 'Entropy'
set xlabel 'JPEG size/PNG Size'
set grid
set terminal postscript color landscape dashed enhanced 'Times-Roman'
set output 'file.eps'
plot 'data.txt'
and ran it with
gnuplot plot.cmd
And I got the following plot which shows that as ImageMagick's entropy number increases, the ratio of JPEG size to PNG size improves in favour of JPEG... not very scientific, but something at least. Maybe you could run the script against the type of images you normally use and see what you get.
That depends very much on your use case.
1) JPG is usually not as good for text because the artifacts tend to "smear" or blur the image. For photos, this is usually not a problem; also for high-resolution textual images the problem will be much less pronounced (because the blur radius is smaller relative to image size).
Note that PNG is usually used to losslessly compress images, while JPG is inherently lossy. With a higher compression ratio, JPG files will be much smaller, but the artifacts will be more pronounced. Note also that there are programs that are able to do lossy compression in PNG (which could well beat JPG compression in some cases).
In short: PNGs will work well with computer-generated images, because those tend to be quite regular and thus easy to deflate. JPGs will fare better with photos which tend to have more jitter, which is hard to compress. When you move away from ImageMagick and libpng, there are other possibilities.
2) While it would be possible to train a neural network to decide whether JPG or PNG would compress better, it would probably take longer and be less exact than just trying both and looking at the output. Note also that there are some approximative measurements that can tell you if an image is too blurry (which may help you setting the correct compression level if you want to tune further).
One big difference is that PNG allows for alpha transparency, so you can see parts of what is behind the image. JPG will block out a rectangle.

ImageMagick JPEG quality/size

I'm using the following command to generate a thumbnail:
mogrify -resize 128x128 -quality 75 "some thumb file"
For a sample file:
If I don't specify -quality 75, I get a 40 KB file.
If I specify -quality 75, I get a 36 KB file and it looks awful.
The same file resized in Photoshop is < 10 KB - and it looks awesome!
Is it possible to use ImageMagick to resize a thumbnail down to such a low file size without the resulting image looking terrible?
Maybe I'm missing some other setting here?
What you've got without -quality was probably quality 92, or the quality of the input image (which, if large, could look OK despite a low quality setting).
https://imagemagick.org/script/command-line-options.php#quality
JPEG quality mostly depends on two things:
the quantization matrix used (or separate QMs: one for Y, another for Cb and Cr)
whether there is chroma subsampling, i.e. whether the image uses one 8x8 block (coding unit) to store the color information for a 16x16 block of pixels (in the 4:2:0 case)
Your preferred quality, 90, is the lowest for which there is no subsampling. It may be that for small images, like thumbnails, high-resolution color information is important.
Final note - Photoshop has its own choice of quantization matrices for its "quality" settings. These are generally different from mogrify's and libjpeg's.
You should find the correct quality level in mogrify, and not rely on the number from Photoshop.
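As a rough illustration of those two knobs (using Pillow rather than mogrify, with made-up file names): the quality level scales the quantization matrices, while the chroma subsampling mode can be forced independently:
from PIL import Image

img = Image.open("thumb_source.png").convert("RGB")
img.thumbnail((128, 128))                 # resize down, preserving aspect ratio

# Same quality, different chroma subsampling: 2 = 4:2:0, 0 = 4:4:4 (no subsampling)
img.save("thumb_q75_420.jpg", quality=75, subsampling=2, optimize=True)
img.save("thumb_q75_444.jpg", quality=75, subsampling=0, optimize=True)
Comparing the two output sizes, and how small text and sharp edges survive, quickly shows which knob matters more for your thumbnails.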
If you want to emulate the PS compression, you can get their QM-s:
$ djpeg -v -v saved_by_photoshop.jpg >/dev/null
And then compress some image using these matrices. cjpeg can do it using -qtables file_with_QMs.txt.
