Batch converting .eps to .jpg in imagemagick and making .mkv - imagemagick

I am trying to convert multiple .eps files into .jpg ones. By looking at answers here on SO, I was able to do it for single, separate files.
The problem is that, when I try to do it for all the files at once, no .jpg files are produced.
I am currently using ImageMagick with the command
convert -density 300 outputs-000.eps -flatten outputs-000.jpg
I believe the problem is because my files are written as
outputs-000.eps
outputs-001.eps
outputs-002.eps
outputs-003.eps
...
outputs-145.eps
...
and so on. I tried putting %d (as in outputs-%d.eps and outputs-%d.jpg), but with no success.
Apart from that, I intend to take all those files and "convert" them into an .mkv or .gif or a similar format (they are images of the time configuration of a particle collision system, so each image is a frame, and the goal is to make a 10-second movie out of them). If there is a way to do that directly from the .eps, even better. Any help is welcome, since I've been trying to do this for several hours now. Thank you.

You should be able to make an animated GIF in one go like this:
convert -density 300 outputs-*eps -delay 200 animated.gif
Failing that, you should be able to convert all your eps files to, say PNG with:
mogrify -density 300 -format png outputs-*eps
Be careful with mogrify - it overwrites your input files unless you specify -path for an output directory, or you change format - like I just did to PNG.
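If you go the PNG route, ffmpeg can then assemble the frames into the roughly 10-second movie you asked about. This is a sketch of my own rather than a tested recipe; with ~146 frames, a framerate of about 15 fps gives a duration close to 10 seconds:
ffmpeg -framerate 15 -i outputs-%03d.png -c:v libx264 -pix_fmt yuv420p movie.mkv
If ffmpeg complains about odd frame dimensions, scale or pad the frames to even width and height first.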

For anyone who lands here trying to figure out how to work around ImageMagick's convert: not authorized error without reverting the change that was made to the system-wide security policy to close a vulnerability, here's how you'd use Ghostscript to do a batch EPS-to-JPEG conversion directly, without bringing ImageMagick into the mix:
gs -dSAFER -dEPSCrop -r300 -sDEVICE=jpeg -o outputs-%03d.jpg outputs-*.eps
-dSAFER puts Ghostscript in a sandboxed mode where PostScript code can only interact with the files you specified on the command line. (Yes, the parts of EPS, PS, and PDF files that define the page contents are written in a Turing-complete programming language.)
-dEPSCrop asks for the rendered output to be cropped to the bounding box of the drawing rather than padded out to whatever size page Ghostscript expects you to be printing to. (See the manual for details.)
The -r300 requests 300 DPI (-r600 for 600 DPI, etc.)
The -sDEVICE specifies the output format (See the Devices section of the manual for other choices.)
-o is a shorthand for -dBATCH -dNOPAUSE -sOutputFile=
This section of the Ghostscript manual gives some example formats for multi-file filename output but, for the actual syntax definition, it points you at the documentation for the C printf(3) function.
Once you've got your JPEGs, you can follow the instructions in this answer over on the Video Production Stack Exchange to combine them into an MKV file.
The TL;DR is this command here:
ffmpeg -framerate 30 -i outputs-%03d.jpg -codec copy output.mkv
Check out the other answers if you want something that performs inter-frame compression rather than aiming to avoid transcoding the JPEGs again.
(If you want the best compromise, have Ghostscript output PNGs and then let ffmpeg handle switching to lossy compression.)
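A sketch of that compromise, reusing the flags from the gs command above (png16m is Ghostscript's 24-bit colour PNG device):
gs -dSAFER -dEPSCrop -r300 -sDEVICE=png16m -o outputs-%03d.png outputs-*.eps
Then point the ffmpeg command above at outputs-%03d.png instead of the JPEGs, dropping -codec copy so the frames actually get re-encoded (e.g. with -c:v libx264).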

Related

Converting AI to EPS / JPEG without "Use artboards"

I'm trying to read (preview) an AI (Adobe Illustrator) file in my web application. My web app is on a Linux machine and mainly uses Python.
I couldn't find any native Python code that can preview an AI file, so I kept searching for a solution and found Ghostscript, which gives the option to convert AI to JPG/PNG, and those formats I have no problem previewing.
The issue I have is that I need the preview to include the whole document and not just the artboard. In Illustrator this is possible by unchecking "Use Artboards" when saving, see screenshot: https://helpx.adobe.com/content/dam/help/en/illustrator/how-to/export-svg/_jcr_content/main-pars/image0/5286-export-svg-fig1.jpg
But when I try to export from Ghostscript, I can't make it work...
From my understanding, it's best to first convert to EPS and then from that to JPG/PNG, but I failed doing that as well: the items that are outside the artboard are not showing.
On Linux, these are the commands I basically tried, after installing Ghostscript:
gs -dNOPAUSE -dBATCH -sDEVICE=eps2write -sOutputFile=out.eps input.ai
gs -dNOPAUSE -dBATCH -sDEVICE=jpeg -r300 -sOutputFile=out.jpeg input.ai
gs -dNOPAUSE -dBATCH -sDEVICE=pngalpha -r300 -sOutputFile=out.png input.ai
If it's not possible with Ghostscript and I need ImageMagick instead, I don't mind using it... I tried it for 10 minutes and just got a bunch of errors, so I left it...
AI file for example: https://drive.google.com/open?id=1UgyLG_-nEUL5FLTtD3Dl281YVYzv0mUy
Jpeg example of the output I want: https://drive.google.com/open?id=1tLT2Uj1pp1gKRnJ8BojPZJxMFRn6LJoM
Thank you
Some updates on the topic: I've found this:
https://gist.github.com/moluapple/2059569
This is an AI PGF extractor which should theoretically help to extract the additional data from the PDF. Currently it seems quite old and is written for Win32, so I cannot test it at the moment, but it's at least some kind of lead.
Firstly, Adobe Illustrator native files are not technically supported by Ghostscript at all. They might work, because they are normally either PostScript or PDF files with custom bits that can be ignored for the purposes of drawing the content. But it's not a guarantee.
Secondly: no, do not convert the files multiple times! That's a piece of cargo-cult mythology that's been doing the rounds for ages. There are sometimes reasons for doing so, but in general this will simply magnify problems, not solve them. Really, don't do that.
You haven't quoted the errors you are getting and you haven't supplied any files to look at, so it's not really possible to tell what your problem is. I have no clue what an 'artboard' is, and a picture of the Illustrator dialog doesn't help.
Perhaps if you could supply an example file, and maybe a picture of what you expect, it might be possible to figure it out. My guess is that your '.ai' file is a PDF file, and that it has a MediaBox (which is what Ghostscript uses by default) and an ArtBox which is what you actually want to use. Or something like that. Hard to say without more information.
Edit
Well, I'm afraid the answer here is that you can't easily get what you want from that file without using Illustrator.
The file is a PDF file (if you rename input.ai to input.pdf then you can open it with a PDF reader). But Illustrator doesn't use most of the PDF file when it opens it. Instead the PDF file contains a '/PieceInfo' key, which is a key in the Page dictionary. That points to a dictionary which has a /Private key, which (finally!) points to a dictionary with a bunch of Illustrator stuff:
52 0 obj
<<
/AIMetaData 53 0 R
/AIPrivateData1 54 0 R
/AIPrivateData10 55 0 R
/AIPrivateData11 56 0 R
/AIPrivateData2 57 0 R
/AIPrivateData3 58 0 R
/AIPrivateData4 59 0 R
/AIPrivateData5 60 0 R
/AIPrivateData6 61 0 R
/AIPrivateData7 62 0 R
/AIPrivateData8 63 0 R
/AIPrivateData9 64 0 R
/ContainerVersion 11
/CreatorVersion 23
/NumBlock 11
/RoundtripStreamType 1
/RoundtripVersion 17
>>
endobj
That's the actual saved file format of the Illustrator file. You can think of the PDF file as a 'preview' wrapped around the Illustrator native file. Illustrator reads the PDF file to find its own data, then throws the PDF file away and uses the native file format stored within it instead.
The problem is that the PDF part of the file simply doesn't contain the content you want to see. That's stored in the Illustrator native data. Ghostscript just renders what's in the PDF file, it doesn't look at the Illustrator native file.
Looking at the Illustrator private data, some of it is uncompressed, but most is compressed. The file doesn't say how it is compressed, but applying the FlateDecode filter produces a good old-fashioned Illustrator PostScript file, one that will work with Ghostscript.
But you would have to manually parse the PDF file, extract all the compressed AIPrivateData streams, concatenate them together, apply the FlateDecode filter to decompress them, and only then send the resulting output to Ghostscript with the -dEPSCrop switch set. That will result in the output you want.
But neither Ghostscript nor ImageMagick (which generally uses Ghostscript to render PDF files) will do any of that for you, you would have to do it all yourself.
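For the adventurous, here is a rough, untested sketch (mine, not part of the answer above) of what that manual extraction might look like in Python with pikepdf. The /Illustrator sub-key under /PieceInfo and the treatment of the uncompressed blocks are assumptions; inspect your own file before relying on it.

import subprocess
import zlib

import pikepdf

with pikepdf.open("input.ai") as pdf:
    private = pdf.pages[0]["/PieceInfo"]["/Illustrator"]["/Private"]

    # Gather the /AIPrivateDataN streams in numeric order.
    keys = sorted(
        (str(k) for k in private.keys() if str(k).startswith("/AIPrivateData")),
        key=lambda k: int(k[len("/AIPrivateData"):]),
    )
    raw = b"".join(private[k].read_raw_bytes() for k in keys)

try:
    data = zlib.decompress(raw)   # most of the blocks are Flate-compressed
except zlib.error:
    data = raw                    # fall back if they turn out to be stored plain

with open("extracted.ps", "wb") as f:
    f.write(data)

# Render the recovered Illustrator PostScript with -dEPSCrop set.
subprocess.run(
    ["gs", "-dSAFER", "-dBATCH", "-dNOPAUSE", "-dEPSCrop",
     "-r300", "-sDEVICE=png16m", "-sOutputFile=out.png", "extracted.ps"],
    check=True,
)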

How to use mogrify (ImageMagick) to convert from NEF to *.jpg and retain EXIF data?

I'm converting about 9000 photos from .NEF to .jpg.
I'd like to retain all EXIF data, most importantly date and time created, latitude and longitude.
I'd like the JPGs to be at the highest possible quality.
I've just gotten started using ImageMagick from the command line. I also have Exiftool installed. I'm using the mogrify command because it handles batches nicely.
When I run
mogrify -format jpg *.NEF
All of my .NEF files are successfully converted to JPGs, but all EXIF data are lost.
I've searched around quite a bit to try and find a solution to this and it seems like I may have to install ufraw, but if possible I'd like a solution that uses software I already have - ImageMagick and Exiftool.
Thanks in advance for any advice you have about how to do this.
Update:
The images I converted using mogrify are slightly smaller (~1-2 MB) than those output by my colleague using Picasa to convert NEF to JPG. But when I specify -quality 100 in ImageMagick, the image sizes gain about 45 MB! Why?
The command exiftool -tagsfromfile %d%f.NEF -ext jpg -overwrite_original . adds the EXIF information to the JPGs.
Think twice before doing this - you really are discarding a lot of information - and if you don't want it, why not shoot JPEG instead of RAW in the first place?
FWIW, you can use ImageMagick to get the JPEG:
convert somefile.NEF somefile.jpg
Then you can copy the tags across from the original to the file newly created by ImageMagick:
exiftool -tagsfromfile somefile.NEF -all:all somefile.jpg
If you have thousands of images, and are on macOS or a decent Linux/Unix-based OS, I would recommend GNU Parallel like this and it will keep busy all those lovely cores that you paid Intel so dearly for:
parallel --dry-run 'convert {} {.}.jpg; exiftool -tagsfromfile {} -all:all {.}.jpg' ::: *nef
Sample Output
convert a.nef a.jpg; exiftool -tagsfromfile a.nef -all:all a.jpg
convert b.nef b.jpg; exiftool -tagsfromfile b.nef -all:all b.jpg
and if that looks good remove the --dry-run so it actually runs the command.
If you are on Windows, you will have to do some ad-hoc jiggery-pokery to get it done in any reasonable time frame. You can use the mogrify command and get all the conversions done to JPEG and then do all the exiftool re-embedding of the EXIF data later. If your files are named with some sort of system with incrementing numbers, you can start two or three copies of mogrify in parallel - say one doing files whose names end in [0-4] and another one doing files whose names end in [5-9]. I don't speak Windows, but that would probably look like these two commands each running in its own Command Prompt:
mogrify -format jpg *0.NEF *1.NEF *2.NEF *3.NEF *4.NEF
mogrify -format jpg *5.NEF *6.NEF *7.NEF *8.NEF *9.NEF
Then you would do the exiftool stuff when they had all finished but you would have to use a FOR loop like this:
FOR %%G IN (*.NEF) DO (
exiftool -tagsfromfile %%G -all:all %%~dpnG.jpg
)
The %%~dpnG part is a guess based on this answer.

ImageMagick to verify image integrity

I'm using ImageMagick (with Wand in Python) to convert images and to get thumbnails from them. However, I noticed that I need to verify whether a file is an image or not ahead of time. Should I do this with Identify?
So I would assume that checking the integrity of a file requires the whole file to be read into memory. Is it better to try to convert the file and, if there is an error, conclude that the file wasn't good?
Seems like you answered your own question:
$ ls -l *.png
-rw-r--r-- 1 jsp jsp 526254 Jul 20 12:10 image.png
-rw-r--r-- 1 jsp jsp 10000 Jul 20 12:12 image_with_error.png
$ identify image.png &> /dev/null; echo $?
0
$ identify image_with_error.png &> /dev/null; echo $?
0
$ convert image.png /dev/null &> /dev/null ; echo $?
0
$ convert image_with_error.png /dev/null &> /dev/null ; echo $?
1
If you specify the -regard-warnings flag with the ImageMagick identify tool
magick identify -regard-warnings myimage.jpg
it will throw an error if there are any warnings about the file. This is good for checking images, and seems to be a lot faster than using -verbose.
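If you want to sweep a whole directory this way, a small loop along these lines (my own sketch, not from the answer above) prints the files that identify rejects:
for f in *.jpg; do
  magick identify -regard-warnings "$f" > /dev/null 2>&1 || echo "suspect: $f"
done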
In case you use Python, you can also consider the Pillow module.
In my experiments I have used both the Python Pillow module (PIL) and the ImageMagick wrapper Wand (for the psd and xcf formats) in order to detect broken images; the original answer with code snippets is here.
Update:
I also implemented this solution in my Python script here on GitHub.
I also verified that damaged files (jpg) frequently are not 'broken' images, i.e., a damaged picture file sometimes remains a legitimate picture file: the original image is lost or altered, but you are still able to load it.
End Update
I quote the full answer for completeness:
You can use Python Pillow(PIL) module, with most image formats, to check if a file is a valid and intact image file.
In the case you aim at detecting also broken images, @Nadia Alramli correctly suggests the im.verify() method, but this does not detect all the possible image defects; e.g., im.verify() does not detect truncated images (which most viewers often load with a greyed area).
Pillow is able to detect these types of defects too, but you have to apply image manipulation or an image decode/recode in order to trigger the check. Finally, I suggest using this code:
from PIL import Image
import PIL

try:
    im = Image.open(filename)
    im.verify()   # verify() is also performed; it does not catch every type of defect
    im.close()    # reloading is necessary in my case
    im = Image.open(filename)
    im.transpose(PIL.Image.FLIP_LEFT_RIGHT)
    im.close()
except Exception:
    # manage exceptions here
    pass
In case of image defects this code will raise an exception.
Please consider that im.verify is about 100 times faster than performing the image manipulation (and I think that flip is one of the cheaper transformations).
With this code you are going to verify a set of images at about 10 MB/s (using a single thread of a modern 2.5 GHz x86_64 CPU).
For the other formats (psd, xcf, ...) you can use the ImageMagick wrapper Wand; the code is as follows:
import wand.image

im = wand.image.Image(filename=filename)
im.flip()   # force a decode of the pixel data
im.close()
But, from my experiments, Wand does not detect truncated images; I think it loads the missing parts as a greyed area without complaining.
I read that ImageMagick has an external command, identify, that could do the job, but I have not found a way to invoke that function programmatically and I have not tested this route.
I suggest always performing a preliminary check: verifying that the file size is not zero (or very small) is a very cheap test:
import os

statfile = os.stat(filename)
filesize = statfile.st_size
if filesize == 0:
    # manage the 'faulty image' case here
    pass
Here's another solution using identify, but without convert:
identify -verbose *.png 2>&1 | grep "corrupt image"
identify: corrupt image 'image_with_error.png' # error/png.c/ReadPNGImage/4051.
I use identify:
$ identify image.tif
00000005.tif TIFF 4741x6981 4741x6981+0+0 8-bit DirectClass 4.471MB 0.000u 0:00.010
$ echo $?

What is the fastest way to convert PostScript to GIF?

I am using the ImageMagick convert utility right now. I have a PostScript file that takes about 90 seconds to convert to GIF.
I am looking for a faster way to do this, preferably by modifying the options to "convert".
When I say "fast", ideally a few seconds but I'll take any significant speed up. Something suitable for an interactive GUI.
I only need this in black and white or greyscale (specifically it is an image of seismic data "wiggle traces", so B&W is fine.)
Other acceptable formats are BMP, GIF, JPEG, JPG, PCX, PGM, PNG, PNM, PPM, RAS, TGA, TIF, or TIFF.
Trying to stick with ImageMagick as that is already installed and trying to avoid selling my boss on anything new. Still happy to hear other suggestions.
My suggestion is: Use Ghostscript.
Since you have a working ImageMagick already installed, that means Ghostscript is also there: ImageMagick cannot convert PDF or PostScript to raster images all on its own -- it has to call Ghostscript as its delegate to do this anyway.
Ghostscript can directly convert PDF/PostScript input to TIFF/TIF/TIFFg4, JPEG, PBM, PCX, PNG, PNM, PPM, BMP raster image output.
The advantages are: you don't need to have ImageMagick involved. So it's faster and also gives you more direct control over the conversion parameters. If you run Ghostscript via ImageMagick that's a level of indirection which isn't always required. (Sometimes it may be required to add some fine-tuning and post-processing manipulations to the raster image data that Ghostscript generated -- but that doesn't seem to be the case for you.)
The only disadvantage is: Ghostscript cannot produce GIF. If you required GIF (which you don't seem to), you need ImageMagick for post-processing the raster output of Ghostscript to GIF.
You can see how ImageMagick calls Ghostscript (and which parameters it uses for the call -- look for a printed line on stderr containing gs, gsx or gswin32c or gswin64c) by running for example:
convert -verbose some.pdf[0] some.gif
Update
I ran a very, very un-scientific 'benchmark', running the following two commands 100 times each; they convert the randomly picked page 333 of the official PDF specification (ISO version of PDF-1.7) to GIF, and I measured the time consumed. I ran the two commands concurrently, so both had to deal with the same overall system load, making the results better comparable:
'Comfortably' using ImageMagick's convert to directly produce GIF:
time for i in $(seq -w 1 100); do
convert \
PDF32000_2008.pdf[333] \
p333-im-no_${i}.gif ;
done
Using Ghostscript to create from the same page grayscale PNGs, piping Ghostscript's output to ImageMagick's convert in order to get GIFs:
time for i in $(seq -w 1 100); do
gs \
-q \
-o - \
-dFirstPage=333 \
-dLastPage=333 \
-sDEVICE=pnggray \
PDF32000_2008.pdf \
| \
convert \
- \
p333-gs-no_${i}.gif ;
done
Timing results for the first command (running the 'comfortable' convert to achieve the PDF->GIF transformation, which uses Ghostscript only 'behind our backs'):
real 2m29.282s
user 2m22.526s
sys 0m5.647s
Timing results for the second command (running gs directly and openly, piping its output to convert):
real 1m27.370s
user 1m23.447s
sys 0m3.435s
One more thing:
The total size of the 100 'Ghostscript' GIFs was 1.6 MB -- but they were 8-bit grayscale.
The total size of the 100 'ImageMagick-direct' GIFs was 1.2 MB -- but they were 2-bit black+white.
I don't have the motivation currently to tweak the test commandline parameters more for even closer comparability of the resulting files.
This result (149 seconds vs. 87 seconds) gives me enough confidence into my guess that you can gain significant performance improvements when you follow my recommendation. :-)
You can start with Ghostscript:
gs -dSAFER -dBATCH -dNOPAUSE \
-sDEVICE=pnggray -r300 -sOutputFile=seismic.png seismic.pdf
A much longer but more interesting way would be to analyze exactly what is in those PDFs.
I had to do something similar with the PDF output of an EKG workflow. The original data were unavailable; we only had the PDF, but I discovered that the PDF was vector-based and not raster. After a little hacking it was very easy to decode the labels, the legend and the individual lines making up the EKG diagram, and I came up with an option to recolor the tracks starting from what appeared to be a grayscale image. It did take several days, though.
It is possible that your PDF is generated in a similar way, and the data could be decoded (at first I had to use pdftk to get me a non-compressed PDF, then I found a library that I could use - it implemented the Deflate algorithm). It would be really cool to have output in SVG format :-)
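If you want to try the same kind of spelunking, the pdftk step mentioned above looks roughly like this (the filenames are placeholders of mine):
pdftk input.pdf output uncompressed.pdf uncompress
After that the page content streams are plain text, and you can inspect the drawing operators in an ordinary editor.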

Compressing multipage TIF files with varied page formats via command line

We have a large number of multipage TIF files (mainly document scans) contained in our document management system. Through various historical issues and end user misunderstandings a large number of these are considerably larger than they need to be (for example they will be scanned at a higher resolution than required, or stored without compression).
What I have been looking at doing is working through some of these documents and doing some optimisation in order to claim back some valuable storage space (I have already recovered 25GB just taking out the very low hanging fruit).
So far I have been using a combination of ImageMagick and IrfanView, but I would really like to automate this process a lot more, as it is pretty labour-intensive at the moment. I have had a crack at creating a few scripts, but unfortunately the nature of the TIFs in question is proving problematic.
In particular, the majority of them contain mixed page formats; bilevel/1 bit pages for basic letter pages and full colour RGB pages for images / maps / plans. Most documents will have a mixture of these types and not always in any particular order (indeed they may go back and forth between these two formats).
Ideally I want to use group 4 fax compression on the bilevel pages and JPEG compression on the colour pages (so the -compress group4 / -compress jpeg flags in ImageMagick) but there does not appear to be any way (that I can tell - I have limited experience with IM) to set the compression on a per page format basis. Does anyone know if this is possible? Or can anyone recommend a scriptable tool that does have this capability?
Irfanview can do per page compression but it must be manually set page by page through the GUI, which is clearly not ideal.
Any tips would be greatly appreciated!
Since I don't have a sample TIFF file around showing the characteristics you describe (mixed formats, different compression schemes and color spaces for different pages...), here's a first shot.
To automate the processing of multipage TIFFs you need to know that you can access each picture individually by attaching its zero-based index number [n] to the file name.
Also, you should look up the list of ImageMagick escape shortcuts, so you can construct an identify -format <%escapestrings> command that automatically extracts the interesting bits from the file, which you'll then use as the basis for your further processing.
So start your project with identifying the various characteristics between the different TIFF pages by running such an identify with a customized -format string, for example:
for i in $(seq 0 $(( $(identify -format "%n\n" multipage.tiff | head -n 1) - 1 ))); do
identify -format \
"scene-number:%s \
image-width-in-pixels:%w \
image-height-in-pixels:%h \
x-resolution:%x \
y-resolution:%y \
image-depth:%z \
imageclass+colorspace:%r \
image-compression-type:%C \
image-compression-quality:%Q \
page-width:%W \
page-height:%H" \
multipage.tiff[$i];
done
(For educational reasons deliberately made more verbose than it needs to be...)
Based on that, you should be able to come up with a shell script that does what you need.
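As a starting point, here is a rough, untested sketch of such a script: it splits the pages, picks Group4 for 1-bit pages and JPEG for everything else based on the %z depth, and rejoins the result with libtiff's tiffcp. As far as I recall tiffcp keeps each page's existing compression when no -c option is given, but verify that on your system before trusting it with real documents.

#!/bin/bash
src=multipage.tiff
pages=$(identify -format "%n\n" "$src" | head -n 1)   # %n is printed once per page, so take the first

for ((i=0; i<pages; i++)); do
  depth=$(identify -format %z "$src[$i]")
  if [ "$depth" -eq 1 ]; then
    convert "$src[$i]" -compress Group4 "$(printf 'page-%04d.tif' "$i")"
  else
    convert "$src[$i]" -compress JPEG -quality 75 "$(printf 'page-%04d.tif' "$i")"
  fi
done

# Rejoin the per-page files into one multipage TIFF.
tiffcp page-*.tif optimised.tif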
