ImageMagick fails on PHP but works in shell

I have this command:
/usr/local/bin/convert -density 200 /singlePage.pdf -colorspace RGB -verbose -geometry 1155 -quality 10 -limit area 100mb singlePicture.jpg
When I execute it from PHP (via the browser, using PHP's exec() function), it produces no output.
When I run the same command in the shell, it works perfectly.
I tried another PDF file, which works from both PHP and the shell. The only difference is the file size:
1.0806 MB => works
1.0962 MB => does not work
Any ideas?

So this:
/usr/local/bin/convert -density 200 /singlePage.pdf -colorspace RGB -verbose -geometry 1155 -quality 10 -limit area 100mb singlePicture.jpg
implies that the singlePage.pdf file is located at the root of your filesystem. I doubt that is true. My guess is that the "/singlePage.pdf" path is wrong.
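One quick way to see what exec() is hiding is to run the exact same command as the web-server user and capture stderr and the exit code. A minimal sketch, assuming the web server runs as www-data (it may be apache or httpd on your system):
sudo -u www-data /usr/local/bin/convert -density 200 /singlePage.pdf -colorspace RGB -verbose -geometry 1155 -quality 10 -limit area 100mb singlePicture.jpg 2>&1
echo "exit code: $?"
If the path really is wrong, this should print an error along the lines of "unable to open image" instead of failing silently.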

Related

Thumbnail image from DNG file is incorrect using ImageMagick or ExifTool

I have a DNG image and a cropped monochromatic version of the same image. Both generate the same file as the thumbnail when I run any of the commands below:
magick.exe C:\sample1crop.dng -resize 500x375 C:\crop-T.JPG
or
magick.exe dng:C:\sample1crop.dng -intent relative -sample 500x375> -strip -auto-orient -density 72 C:\crop-T.JPG
or
magick.exe convert dng:C:\sample1original.dng -thumbnail 500x375 -filter -auto-orient -density 72 C:\orig-T.JPG
As I am not allowed to upload the DNG files here, I uploaded the images to Hightail; the link is: https://spaces.hightail.com/space/ThDEDYZVey
The generated thumbnail is identical in both cases.
I tried to get the thumbnail with exiftool as well:
exiftool -b -PreviewImage C:\86854\SLS\Issues\ART-73712\crop-T.JPG > C:\86854\SLS\Issues\ART-73712\thumbnail.jpg
exiftool -b -ThumbnailImage C:\86854\SLS\Issues\ART-73712\crop-T.JPG > C:\86854\SLS\Issues\ART-73712\thumbnail.jpg
but the resulting file seems corrupted. When I dump the metadata with exiftool I see:
"ThumbnailTIFF": "(Binary data 42194 bytes, use -b option to extract)"
My requirement here is a generic command that produces a cropped monochromatic thumbnail similar to the original image.
Using the exiftool commands above, I was able to extract four images from those files.
You don't mention which OS or shell you're using, but if it's Windows PowerShell, be aware that PowerShell is known to corrupt binary data when piping or redirecting. Use CMD instead and you should be able to extract the images properly.
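If you must stay in PowerShell, one way to sidestep the shell redirection entirely is to let exiftool write the binary data to a file itself via its -w option; a minimal sketch, reusing the path from the question:
exiftool -b -PreviewImage -w _preview.jpg C:\86854\SLS\Issues\ART-73712\crop-T.JPG
This should write crop-T_preview.jpg next to the source file, with no pipe or redirection involved.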

ffmpeg: resizing a large, high-resolution image

I tried to resize a very big image (457 MB, 21600x21600) with the following command:
ffmpeg -i test.png -vf scale=320:-1 out.png
but it fails with the error "Picture size 21600x21600 is invalid". How can I find out the biggest resolution ffmpeg supports? Is there a way to resize such a high-resolution image with ffmpeg?
If you want to use ImageMagick, it is included in most Linux distros and is available for macOS and Windows.
Your command becomes:
convert test.png -resize 320x result.png
If you are running v7 or newer, use:
magick test.png -resize 320x result.png
If you have lots to do and want all the resized images written to a directory called thumbs, you can use:
mkdir thumbs
magick mogrify -path thumbs -resize 320x *.png
Alternatively, you may find vips is a lighter-weight installation and does a faster conversion using less memory:
mkdir thumbs
vipsthumbnail -s 320 -o "thumbs/%s.png" image.png

ImageMagick parallel conversion

I want to get a screenshot of each page of a PDF as a JPG. To do this I am using ImageMagick's convert command on the command line.
I have to achieve the following:
Get a screenshot of each page of the PDF file.
Resize each screenshot into 3 different sizes (small, med, and preview).
Store the different sizes in different folders (small, med, and preview).
I am using the following command, which works; however, it is slow. How can I improve its execution time or execute the commands in parallel?
convert -density 400 -quality 100 /input/test.pdf -resize '170x117>' -scene 1 /small/test_%d_small.jpg & convert -density 400 -quality 100 /input/test.pdf -resize '230x160>' -scene 1 /med/test_%d_med.jpg & convert -density 400 -quality 100 /input/test.pdf -resize '1310x650>' -scene 1 /preview/test_%d_preview.jpg
Splitting the command for readability:
convert -density 400 -quality 100 /input/test.pdf -resize '170x117>' -scene 1 /small/test_%d_small.jpg
convert -density 400 -quality 100 /input/test.pdf -resize '230x160>' -scene 1 /med/test_%d_med.jpg
convert -density 400 -quality 100 /input/test.pdf -resize '1310x650>' -scene 1 /preview/test_%d_preview.jpg
(Note the quotes around the geometry arguments: an unquoted > would be interpreted by the shell as output redirection.)
Updated Answer
I see you have long, multi-page documents, and while my original answer is good for making multiple sizes of a single page quickly, it doesn't address doing pages in parallel. So here is a way of doing it using GNU Parallel, which is available for free for OS X (via homebrew), comes installed on most Linux distros, and is also available for Windows, if you really must.
The code looks like this:
#!/bin/bash
shopt -s nullglob
shopt -s nocaseglob

doPage(){
   # Expecting filename as first parameter and page number as second
   # echo DEBUG: File: $1 Page: $2
   noexten=${1%%.*}
   convert -density 400 -quality 100 "$1[$2]"             \
      -resize 1310x650 -write "${noexten}-p-$2-large.jpg" \
      -resize 230x160  -write "${noexten}-p-$2-med.jpg"   \
      -resize 170x117  "${noexten}-p-$2-small.jpg"
}
export -f doPage

# First, get list of all PDF documents
for d in *.pdf; do
   # Now get number of pages in this document - "pdfinfo" is probably quicker
   p=$(identify "$d" | wc -l)
   for ((i=0;i<$p;i++)); do
      echo "$d:$i"
   done
done | parallel --eta --colsep ':' doPage {1} {2}
If you want to see how it works, remove the | parallel ... from the last line and you will see that the preceding loop just echoes a list of filenames and page numbers into GNU Parallel. GNU Parallel then runs one process per CPU core by default; specify -j 8 if you want, say, 8 processes to run in parallel. Remove the --eta if you don't want updates on when the command is likely to finish.
In the comment in the script I allude to pdfinfo being faster than identify. If you have it available (it's part of the poppler package under homebrew on OS X), you can use this to get the number of pages in a PDF:
pdfinfo SomeDocument.pdf | awk '/^Pages:/ {print $2}'
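For completeness, here is the page-listing loop from the script above with identify swapped out for pdfinfo; the behavior is otherwise unchanged:
for d in *.pdf; do
   p=$(pdfinfo "$d" | awk '/^Pages:/ {print $2}')
   for ((i=0;i<$p;i++)); do
      echo "$d:$i"
   done
done | parallel --eta --colsep ':' doPage {1} {2}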
Original Answer
Something along these lines so you only read it in once and then generate successively smaller images from the largest one:
convert -density 400 -quality 100 x.pdf \
   -resize 1310x650 -write large.jpg    \
   -resize 230x160  -write medium.jpg   \
   -resize 170x117  small.jpg
Unless you mean you have, say, a 50-page PDF and you want all 50 pages done in parallel. If so, say so, and I'll show you how with GNU Parallel when I get up in 10 hours...

Convert .CDR to .SVG using ImageMagick

I am on CentOS 6.4, trying to convert a .CDR file to .SVG with ImageMagick via an SSH command.
My 1.cdr file is at /var/www/vhosts/website.com/httpdocs/test/1.cdr.
Once converted to SVG, the file should be created in the same folder.
I tried the following command:
convert /var/www/vhosts/website.com/httpdocs/test/1.cdr image.svg
The error I am getting is:
sh: mplayer: command not found
convert: Delegate failed `"mplayer" "%i" -really-quiet -ao null -vo png:z=3' @ delegate.c/InvokeDelegate/1032.
convert: missing an image filename `image.svg' @ convert.c/ConvertImageCommand/2800.
I'm not sure what that means.
In order to convert CDR files, you need to install UniConvertor, which ImageMagick uses as its CDR delegate.
List of all delegates:
convert -list delegate
By default, the cdr entry looks like this:
cdr => "uniconvertor" "%i" "%o.svg"; mv "%o.svg" "%o"
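To check just the CDR entry on your system, you can filter the delegate list:
convert -list delegate | grep -i cdr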
Install uniconvertor. For example, on Ubuntu it’s:
sudo apt-get install python-uniconvertor
Then run:
convert image.cdr -flatten -thumbnail '512x512' image.png
Or, with zoom cropping:
convert image.cdr -flatten -thumbnail '512x512^' -gravity center -crop 512x512+0+0 +repage image.png
And you’re done.
I convert to PNG here but you may use your own output format.
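As an aside, since the delegate line above shows that ImageMagick simply shells out to UniConvertor to produce an SVG, you could in principle call it directly for the CDR-to-SVG step; an untested sketch based on that delegate line:
uniconvertor /var/www/vhosts/website.com/httpdocs/test/1.cdr /var/www/vhosts/website.com/httpdocs/test/1.svg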
python-uniconvertor is part of Inkscape; it does not exist as a package by itself any more.
Ubuntu/Mint recently removed all the old Python stuff. For CorelDRAW files I have to fire up the WinXP VM and Corel and export something Linux understands, usually PNG, a favourite.
CDR and WMF files are pretty much dead to Linux, though ImageMagick can still handle WMF.

Cannot create thumbnail in ImageMagick: "convert: no decode delegate for this image format"

I've been at this all day. I'm trying to upload images to a MediaWiki instance, and this is the error I get when ImageMagick tries to create the thumbnail:
Error creating thumbnail: convert: no decode delegate for this image format `/tmp/magick-11924QG1rRXzT948I' @ error/constitute.c/ReadImage/552.
convert: no images defined `/tmp/s3thumb-cripEh' @ error/convert.c/ConvertImageCommand/3127.
I set up a debug log for MediaWiki, and this is what I see:
BitmapHandler::doTransform: creating 112x120 thumbnail at /tmp/s3thumb-cripEh using scaler im
BitmapHandler::doTransform: called wfMkdirParents(/tmp)
BitmapHandler::getMagickVersion: Running convert -version
wfShellExec: /bin/bash '/var/www/mediawiki-1.21.2/includes/limit.sh' ''\''/usr/local/bin/convert'\'' -version' 'MW_CPU_LIMIT=180; MW_CGROUP='\'''\''; MW_MEM_LIMIT=202400; MW_FILE_SIZE_LIMIT=102400; MW_WALL_CLOCK_LIMIT=180'
BitmapHandler::transformImageMagick: running ImageMagick: '/usr/local/bin/convert' -quality 80 -background white -define jpeg:size=112x120 '' -thumbnail '112x120!' -depth 8 -sharpen '0x0.4' -rotate -0 '/tmp/s3thumb-cripEh' 2>&1
wfShellExec: /bin/bash '/var/www/mediawiki-1.21.2/includes/limit.sh' 'OMP_NUM_THREADS='\''1'\'' '\''/usr/local/bin/convert'\'' -quality 80 -background white -define jpeg:size=112x120 '\'''\'' -thumbnail '\''112x120!'\'' -depth 8 -sharpen '\''0x0.4'\'' -rotate -0 '\''/tmp/s3thumb-cripEh'\'' 2>&1' 'MW_CPU_LIMIT=180; MW_CGROUP='\'''\''; MW_MEM_LIMIT=202400; MW_FILE_SIZE_LIMIT=102400; MW_WALL_CLOCK_LIMIT=180'
[thumbnail] thumbnail failed on ip-10-168-26-167: error 1 "convert: no decode delegate for this image format `/tmp/magick-11924QG1rRXzT948I' @ error/constitute.c/ReadImage/552.
convert: no images defined `/tmp/s3thumb-cripEh' @ error/convert.c/ConvertImageCommand/3127." from "'/usr/local/bin/convert' -quality 80 -background white -define jpeg:size=112x120 '' -thumbnail '112x120!' -depth 8 -sharpen '0x0.4' -rotate -0 '/tmp/s3thumb-cripEh' 2>&1"
LocalS3File::transform thumb:
LocalS3File::transform thumbTempPath: /tmp/s3thumb-cripEh, dest: wiki-images/thumb/1/19/5ovrDaU.jpg/112px-5ovrDaU.jpg
info:1
LocalS3File::transform return thumb: MediaTransformError Object
(
    [htmlMsg] => Error creating thumbnail: convert: no decode delegate for this image format `/tmp/magick-11924QG1rRXzT948I' @ error/constitute.c/ReadImage/552.<br />
convert: no images defined `/tmp/s3thumb-cripEh' @ error/convert.c/ConvertImageCommand/3127.<br />
    [textMsg] => Error creating thumbnail: convert: no decode delegate for this image format `/tmp/magick-11924QG1rRXzT948I' @ error/constitute.c/ReadImage/552.<br />
convert: no images defined `/tmp/s3thumb-cripEh' @ error/convert.c/ConvertImageCommand/3127.<br />
    [width] => 112
    [height] => 120
    [url] =>
    [path] =>
    [file] =>
    [page] =>
    [responsiveUrls] => Array
        (
        )
    [storagePath:protected] =>
)
I tried this from the command line, copy-pasting the command from the log (but using a test file):
convert -quality 80 -background white -define jpeg:size=112x120 '' -thumbnail '112x120!' -depth 8 -sharpen '0x0.4' -rotate -0 'logo.jpg'
but the process hangs. If I run:
sudo convert logo.png -quality 80 -background white -define jpeg:size=112x120 -thumbnail '112x120!' -depth 8 -sharpen '0x0.4' -rotate -0 logo.jpg
It works.
If I check DELEGATES, I have:
DELEGATES jng jp2 jpeg png ps tiff xml zlib
I tried increasing the default shell memory for MediaWiki:
$wgMaxShellMemory = 202400;
I feel like I've tried everything. Any ideas?
EDIT:
This is what I've discovered so far:
I'm pretty sure the shell wasn't executing the ImageMagick command because of the escaped quotes in:
wfShellExec: /bin/bash '/var/www/mediawiki-1.21.2/includes/limit.sh' ''\''/usr/local/bin/convert'\'' -version' 'MW_CPU_LIMIT=180; MW_CGROUP='\'''\''; MW_MEM_LIMIT=202400; MW_FILE_SIZE_LIMIT=102400; MW_WALL_CLOCK_LIMIT=180'
Those ''\'' sequences are causing the command not to run, and hence the "no decode" error: it can't decode because the file is not there. I've traced wfShellExec to GlobalFunctions.php; the wfShellExec function is at around line 2778 in my copy.
In the if ( php_uname( 's' ) == 'Linux' ) block there is:
escapeshellarg( $cmd )
I removed the escapeshellarg() function and just left the $cmd on its own.
I tried uploading again; the error is gone and the files are created, but now the thumbnail files are 0 bytes.
Any ideas?
In the following, the empty string parameter '' represents what is supposed to be the input image:
convert -quality 80 -background white -define jpeg:size=112x120 '' -thumbnail '112x120!' -depth 8 -sharpen '0x0.4' -rotate -0 'logo.jpg'
When you ran sudo convert logo.png ... from the command line, it worked because you had an input image (logo.png), but inside MediaWiki the source image parameter was missing. So the problem here is not with convert; ImageMagick naturally cannot convert an image that doesn't exist. The problem is that MediaWiki fails to supply the source image file name.
If your case is like mine, the empty '' image-source parameter could trace back to permissions on the images directory. Make sure this directory and all its subdirectories are readable, writable, and executable (rwx) by the web-server process. Once I opened up everything in this directory to the server, the errors went away and the images and thumbnails appeared perfectly.
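A minimal sketch of opening up that directory, assuming a www-data server user and the MediaWiki path from the logs above (your user may be apache or httpd, and your images path may differ):
# Give the web-server user ownership and rwx access to the whole images tree
sudo chown -R www-data:www-data /var/www/mediawiki-1.21.2/images
sudo chmod -R u+rwX /var/www/mediawiki-1.21.2/images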
