Command-line image converter/resizer - ImageMagick

I'm looking for a command-line image converter/resizer.
What I need to do is convert bitmap and TIFF files into PNG files, as well as create a thumbnail. The images are relatively large; the largest is approximately 13,000 x 10,000 pixels and around 200 MB.
I've tried ImageMagick. It used too much memory, was too slow, and couldn't handle the largest files without using its disk cache, which made it unbearably slow.
Currently I'm using GraphicsMagick, which uses less memory and can handle the larger files, but it is still a little slow: around 15s per image.
Are there any other programs out there that could maybe offer a little better performance?

You could try libvips. It's a streaming image processing library, so it's able to read the input, process, and write the output as a single pipeline, with no separate loading phase and no temporary files. It's got a fancy threaded IO system too, so performance is good and memory use is low.
I timed it on this machine (iMac with ImageMagick 6.9.6-3 Q16, gm 1.3.25, vips 8.4.2):
$ vips black test.tif 13000 10000 --bands 3
$ ls -l test.tif
-rw-r--r-- 1 john staff 390000854 22 Nov 09:43 test.tif
So that's a 13000 x 10000, 3-band, 8-bit uncompressed TIFF. With vipsthumbnail, the image shrinker that comes with vips, I see:
$ /usr/bin/time -l vipsthumbnail test.tif -s 128x128 -o small.png
0.54 real 0.42 user 0.11 sys
77635584 maximum resident set size
I ran three times and picked the fastest, so that should just be a test of vipsthumbnail and not my disk system. That's 0.54s real time and 77 MB of peak memory.
With convert I see:
$ /usr/bin/time -l convert test.tif -resize 128x128 small.png
4.87 real 4.28 user 0.55 sys
1432182784 maximum resident set size
Again, the fastest of three runs: 4.87s real time, 1.4 GB of memory. GraphicsMagick is a little faster; I see:
$ /usr/bin/time -l gm convert test.tif -resize 128x128 small.png
3.95 real 3.41 user 0.51 sys
1264369664 maximum resident set size
So that's 3.95s real time and 1.2 GB of peak memory.
On this test, then, libvips is 7x faster and uses 15x less memory than GraphicsMagick.
libvips is packaged by most Linux distributions, it's in Homebrew and MacPorts, and there are 64-bit Windows binaries on the vips website.
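For the exact task in the question (a full-size PNG plus a thumbnail), a minimal sketch might look like this; the filenames are placeholders:
$ vips copy input.tif output.png
$ vipsthumbnail input.tif -s 128x128 -o thumb.png
vips copy streams the pixels straight from the TIFF into the PNG encoder, so neither step should need to hold the whole image in memory.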

There are many image-handling programs that can convert from your source formats to your desired output format. If you're open to a graphical tool rather than a command-line one, the software linked below handles video, image, and audio conversion, needs no commands (its graphical interface provides everything you need), and runs with relatively little memory. You can convert as many images as you have, at the size you want, and follow the progress while doing something else.
Official link: http://www.pcfreetime.com/

With either ImageMagick or GraphicsMagick you can speed up PNG encoding by using a lower "-quality" instead of accepting the default quality of 75. This trades compression performance (file size) for speed. Try -quality 40 for line art, or -quality 41 for photos. (For PNG output, the tens digit of -quality selects the zlib compression level and the ones digit selects the PNG filter type, so 40 means zlib level 4 with no filtering and 41 means level 4 with the "sub" filter.) Here are some results for a JPEG out of my camera, using ImageMagick-7.0.3-8 built with libpng-1.2.54:
glenn.rp> time magick D*88.JPG d88-q75.png
real 0m13.494s user 0m11.252s sys 0m2.060s
glenn.rp> time magick -quality 41 D*88.JPG d88-q41.png
real 0m7.377s user 0m4.728s sys 0m1.908s
glenn.rp> time magick -quality 40 D*88.JPG d88-q40.png
real 0m3.842s user 0m3.200s sys 0m0.584s
glenn.rp> ls -lt d88*
-rw-rw-r-- 1 glennrp glennrp 24352041 Nov 29 15:45 d88-q40.png
-rw-rw-r-- 1 glennrp glennrp 17072518 Nov 29 15:45 d88-q41.png
-rw-rw-r-- 1 glennrp glennrp 15788794 Nov 29 15:44 d88-q75.png

Related

How to start U-Boot from an SD card's FAT partition on Beaglebone Black

I'm currently reading Mastering Embedded Linux Programming and I'm on the chapter that goes into bootloaders, more specifically U-Boot for the Beaglebone Black.
I have built a cross-compiler and I'm able to build U-Boot, however I can't make it run the way it is described in the book.
After some experimentation and Googling, I can make it work by writing MLO and u-boot.img in raw mode (using these commands).
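(For reference, the raw-mode write is typically done along these lines; the offsets follow the usual Beaglebone recipe and /dev/sda is my SD card, so treat the exact commands as an assumption and adjust for your setup:)
$ sudo dd if=MLO of=/dev/sda count=1 seek=1 bs=128k
$ sudo dd if=u-boot.img of=/dev/sda count=2 seek=1 bs=384k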
However, if I put the files in a FAT32 MBR boot partition, the Beaglebone will not boot; it only shows a string of C's, which indicates that it is trying to get its bootloader from the serial interface and has decided it cannot boot from the SD card.
I have also studied this answer. According to that answer I should be doing everything correctly. I've tried to experiment with the MMC raw mode options in the U-Boot build configuration, but I've not been able to find a change that works.
I feel like there must be something obvious I'm missing, but I can't figure it out. Are there any things I can try to debug this further?
Update: some more details on the partition tables.
When using the "raw way" of putting MLO and u-boot.img on the SD card, I have not created any partitions at all. This works:
$ sudo sfdisk /dev/sda -l
Disk /dev/sda: 117,75 GiB, 126437294080 bytes, 246947840 sectors
Disk model: MassStorageClass
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
When trying to use a boot partition, that does not work, I have this configuration:
$ sudo sfdisk /dev/sda -l
Disk /dev/sda: 117,75 GiB, 126437294080 bytes, 246947840 sectors
Disk model: MassStorageClass
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x3d985ec3
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 133119 131072 64M c W95 FAT32 (LBA)
Update 2: The contents of the boot partition are the exact same two files that I use for the raw writes, so they are confirmed to work:
$ ls -al
total 1000
drwxr-xr-x 2 peter peter 16384 Jan 1 1970 .
drwxr-x---+ 3 root root 4096 Jul 18 08:44 ..
-rw-r--r-- 1 peter peter 108184 Jul 14 13:56 MLO
-rw-r--r-- 1 peter peter 893144 Jul 14 13:56 u-boot.img
Update 3: I have already tried the following U-Boot options to try to get it to work (in the SPL / TPL menu):
"Support FAT filesystems": this is enabled by default. I can't really find a good reference for the U-Boot options, but I am guessing this is what enables booting from a FAT partition (which is what I'm trying to do).
"MMC raw mode: by sector": I have disabled this. As expected, this indeed breaks booting in raw mode, which is the only thing I have gotten working up till now.
"MMC raw mode: by partition": I have tried enabling this and using partition 1 to load U-Boot from. I'm not sure how to understand this option; I assume raw mode does not require partitions, but it asks for a partition to use...
In general, if anyone can point me to a U-Boot configuration reference, that would already be very helpful. Right now, I'm just randomly turning things on and off that sound like they may help.

Huge memory usage when running huggingface transformers run_language_modeling.py with GPT2

I tried to test run_language_modeling.py with a small test file and it ran out of memory after using more than 32 GB of RAM. Why does it need so much RAM, or what am I doing wrong?
Command line:
python run_language_modeling.py --output_dir foo --model_type gpt2 --model_name_or_path gpt2 --do_train --train_data_file test.txt --no_cuda --eval_data_file test.txt
Test file size: 29600 bytes, 546 lines.
With the original OpenAI implementation I have no problem running the training script.

How does ImageMagick pass options to cwebp on Linux

I'm running
$ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
NAME="Debian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=debian
I'm also running ImageMagick 6.9.
I'd like to convert a PDF image into WebP. AFAIK, out of the box, ImageMagick on Linux cannot convert to WebP, so I ran sudo apt-get install webp, which installs cwebp.
cwebp lets you specify a -q parameter, and ImageMagick lets you specify a -quality parameter.
When I run $ cwebp -q 90 image.png -o image.webp, it takes cwebp around 8 seconds to convert it. If I run convert image.png -quality 90 image.webp, it takes ImageMagick around 30 seconds to convert it. It seems like the -quality parameter is not passed through to cwebp. It also may be the case that convert attempts to run a lossless conversion, which in cwebp is achieved with an explicit -lossless flag.
I ran the test commands on a 10 MB test PNG image.
I would like to achieve 8-second conversion times with the convert command. How can I do it?
I realize you want ImageMagick, but if you are able to consider alternatives, libvips can do PDF -> WebP quickly at the command line, and without any configuring.
For example, with this PDF (Audi R8 brochure) on my 2015 laptop, I see:
$ time convert -density 600 r8.pdf[3] -quality 90 x.webp
real 0m36.699s
user 0m23.787s
sys 0m1.628s
$ vipsheader x.webp
x.webp: 9921x4961 uchar, 3 bands, srgb, webpload
Which I think is broadly in line with the times you are seeing.
With libvips, I see:
$ time vips copy r8.pdf[dpi=600,page=3] x.webp[Q=90]
real 0m7.195s
user 0m6.861s
sys 0m0.505s
$ vipsheader x.webp
x.webp: 9921x4961 uchar, 3 bands, srgb, webpload
The same result, but within your 8s time budget.
You can set a lot of other webp options if you want more control over compression.
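For example (Q and lossless are standard webpsave options; check the vips webpsave documentation for the full list in your version):
$ vips copy r8.pdf[dpi=600,page=3] x.webp[Q=80]
$ vips copy r8.pdf[dpi=600,page=3] x.webp[lossless]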
It turns out that the delegates are invoked using the rules in /etc/ImageMagick-6/delegates.xml.
It lists a bunch of rules on how to convert between different types of images.
For my case, the png->webp conversion, I needed the string:
<delegate decode="png" encode="webp" command="&quot;cwebp&quot; -quiet %Q &quot;%i&quot; -o &quot;%o&quot;"/>
In this file, though, I don't know the value of the -quality parameter, and there seems to be no way to capture it.
However, if you wish to keep the value of the -q parameter for cwebp, you have the option of hard-coding the -q $YOUR_VALUE right into the command inside the delegate tag.
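For example, with a hard-coded quality of 90 (the 90 here is just an illustrative value), the delegate line would become something like:
<delegate decode="png" encode="webp" command="&quot;cwebp&quot; -quiet -q 90 %Q &quot;%i&quot; -o &quot;%o&quot;"/>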
This solution is still slower than invoking cwebp directly, since ImageMagick can take up to 8 seconds before invoking the delegate.

ImageMagick convert pdf to jpeg has poor text quality after upgrading ImageMagick version to 6.7.8

After upgrading ImageMagick, text quality degraded when converting PDF to JPEG:
Old image
New Image
Conversion command: convert foo.pdf foo.jpeg
Old ImageMagick version:
[root@home]# convert -version
Version: ImageMagick 6.2.8 05/07/12 Q16 file:/usr/share/ImageMagick-6.2.8/doc/index.html
Copyright: Copyright (C) 1999-2006 ImageMagick Studio LLC
Generated file sizes:
-rw-r--r-- 1 root root 139K Apr 2 16:11 foo-0.jpeg
-rw-r--r-- 1 root root 130K Apr 2 16:11 foo-1.jpeg
-rw-r--r-- 1 root root 334K Mar 24 14:27 foo.pdf
After upgrading ImageMagick
[root@home]# convert -version
Version: ImageMagick 6.7.8-10 2012-08-17 Q16 http://www.imagemagick.org
Copyright: Copyright (C) 1999-2012 ImageMagick Studio LLC
Features: OpenMP
Generated file sizes:
-rw-r--r-- 1 root root 60K Apr 2 16:11 foo-0.jpeg
-rw-r--r-- 1 root root 55K Apr 2 16:11 foo-1.jpeg
-rw-r--r-- 1 root root 334K Mar 24 14:27 foo.pdf
I've tried using the -antialias flag:
convert -antialias foo.pdf foo.jpeg
That did nothing. I've also tried setting a higher quality:
convert -quality 100 foo.pdf foo.jpeg
and super sampling:
convert -density 288 -background white -alpha off foo.pdf -resize 25% foo.jpeg
Both gave bigger files and better results, but took longer and still had lower quality than the old ImageMagick version.
Any advice?
Link to the file
I see the same problem with your sample file. It looks like ImageMagick's delegates for the PDF conversion may have changed with the new install.
If you try convert -verbose foo.pdf foo.jpeg, do you see -sDEVICE=pngalpha in the command that gets sent to gs? The pnmraw device has been used in the past, and switching back to that seems to fix the problem for me.
In ImageMagick's delegates.xml file (which may be in /etc/ImageMagick, but could be somewhere else depending on your setup), look for the decode="ps:alpha" delegate line and change -sDEVICE=pngalpha in the command to -sDEVICE=pnmraw. (You can probably just search for pngalpha in the file.)
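If you'd rather script the change, something along these lines should work (the path is an assumption, and -i.bak keeps a backup of the original file):
$ sudo sed -i.bak 's/-sDEVICE=pngalpha/-sDEVICE=pnmraw/' /etc/ImageMagick/delegates.xml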
It seems the problem is the DPI. When converting a PDF, ImageMagick uses Ghostscript, so you can skip ImageMagick and call Ghostscript directly:
$ gs -q -dQUIET -dSAFER -dBATCH -dNOPAUSE -dNOPROMPT -dMaxBitmap=500000000 -dGridFitTT=2 -dUseCropBox -dTextAlphaBits=4 -dGraphicsAlphaBits=4 -r200x200 -sDEVICE=jpeg -dJPEGQ=100 -sOutputFile=foo-%05d.jpg foo.pdf
Set the -r option to a higher value; Ghostscript's default is 100 DPI.
Or use convert's -density option, which sets the DPI at which the PDF is rasterized:
$ convert -density 200x200 foo.pdf foo.jpg
PDF files are vector files and have no specific size. Their size is controlled by defining the density and units before reading in the PDF file. You can get better quality for the same desired output file size by supersampling. That means rasterize the PDF to a large size and then resize to your desired actual size. For example in ImageMagick:
convert -units pixelsperinch -density 288 image.pdf -resize 25% output.jpg
The nominal density if left off is 72 dpi, so 72*4=288. Then resizing by 1/4 = 25% gets back to the same default size, but it should look much better. Change the density or the resize percentage to trade off quality and final size as desired.
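For example, if you wanted an effectively 150 dpi result instead, you could rasterize at 600 dpi and still resize by 25% (600 * 0.25 = 150); the numbers here are only an illustration:
convert -units pixelsperinch -density 600 image.pdf -resize 25% output.jpg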

Creating a tar stream of arbitrary data and size

I need to create an arbitrarily large tarfile for testing but don't want it to hit the disk.
What's the easiest way to do this?
You can easily use Python to generate such a tarfile:
mktar.py:
#!/usr/bin/python3
import datetime
import sys
import tarfile

# Stream the archive to stdout (mode "w|"); use the binary buffer so no
# text-mode translation gets in the way.
tar = tarfile.open(fileobj=sys.stdout.buffer, mode="w|")

# Describe a 16 MiB member; the data itself is read from /dev/urandom below.
info = tarfile.TarInfo(name="fizzbuzz.data")
info.mode = 0o644
info.size = 1048576 * 16
info.mtime = int(datetime.datetime.now().timestamp())

with open('/dev/urandom', 'rb') as rand:
    tar.addfile(info, rand)
tar.close()
michael@challenger:~$ ./mktar.py | tar tvf -
-rw-r--r-- 0/0 16777216 2012-08-02 13:39 fizzbuzz.data
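To confirm that nothing ever lands on disk, you can pipe straight into any consumer, for example just counting the bytes:
michael@challenger:~$ ./mktar.py | wc -c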
You can use tar with the -O option, like this: tar -xOzf foo.tgz bigfile | process
https://www.gnu.org/software/tar/manual/html_node/Writing-to-Standard-Output.html
PS: However, it could be that you will not get the benefit you intend, as tar may start writing to stdout only after it has read through the entire compressed file. You can demonstrate this behaviour by starting a large file extraction and following the output file size over time; it may stay at zero for most of the processing time and only start growing at a very late stage. On the other hand, I haven't researched this extensively; there might be some workaround, or I might be just plain wrong based on my first-hand out-of-memory experience.
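A rough way to run that experiment (foo.tgz and bigfile are the placeholders from the command above, and watch is assumed to be available):
$ tar -xOzf foo.tgz bigfile > bigfile.out &
$ watch -n1 ls -l bigfile.out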
