My Delphi app has created a sequence of images named frame_001.png to frame_100.png.
I need them to be compiled into a movie clip. I think the easiest approach is to call ffmpeg from the command line. According to the ffmpeg documentation:
For creating a video from many images:
ffmpeg -f image2 -i foo-%03d.jpeg -r 12 -s WxH foo.avi
The syntax foo-%03d.jpeg specifies to use a decimal number composed of three digits padded with zeroes to express the sequence number. It is the same syntax supported by the C printf function, but only formats accepting a normal integer are suitable.
From: http://ffmpeg.org/ffmpeg-doc.html#SEC5
However, my files are in (lossless) PNG format, so I convert them to JPEG with ImageMagick first.
My command line is now:
ffmpeg.exe -f image2 -i c:\temp\wentelreader\frame_%05d.jpg -r 12 foo.avi
But then I get the error:
[image2 @ 0x133a7d0]Could not find codec parameters (Video: mjpeg)
c:\temp\wentelreader\Frame_C:\VID2EVA\Tools\Mencoder\wentel.bat5d.jpg: could not find codec parameters
What am I doing wrong?
Alternatively can this be done easily with Delphi?
Not sure if you are interested, but there are Delphi headers for this at http://ultrastardx.svn.sourceforge.net/viewvc/ultrastardx/trunk/src/lib/ffmpeg/
so you can use the DLL instead of the command line.
-Brad
Look at the file name in the error message. That can't possibly be right. The percent sign needs to get all the way to the program you're running, but it's being expanded by the batch file instead, where %0 expands to the full name and path of the batch file itself. Double the percent sign in the batch file:
ffmpeg.exe -f image2 -i c:\temp\wentelreader\frame_%%05d.jpg -r 12 foo.avi
Also, why do you want five digits when you've already said your files are named like frame_001.png, which has only three digits?
ffmpeg can create a movie from PNG images directly; why do you think you have to convert them to JPEG?
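For example, something like this should work directly on the PNG sequence (a sketch, assuming three-digit numbering as described; double the percent sign as above if the command lives in a batch file):
ffmpeg.exe -f image2 -i c:\temp\wentelreader\frame_%03d.png -r 12 foo.avi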
The people at DelphiFFMpeg have produced a component wrapper for FFmpeg. It is quite expensive, but it may be worth testing. However, what you want to do is very simple, and the command line is more than enough for you.
I am trying to convert multiple .eps files into .jpg ones. By looking at answers here in SO, I was able to do it for single separate files.
The problem is that, when I try to do it for all the files, no .jpg files are produced.
I am currently using Imagemagick with the command
convert -density 300 outputs-000.eps -flatten outputs-000.jpg
I believe the problem is that my files are named
outputs-000.eps
outputs-001.eps
outputs-002.eps
outputs-003.eps
...
outputs-145.eps
...
and so on. I tried putting %d (as in outputs-%d.eps and outputs-%d.jpg), but with no success.
Apart from that, I intend to take all those files and "convert" them into an .mkv or .gif or similar type (they are images of the time configuration of a particle collision system, so each image is a frame, and the goal is to make a 10-second movie). If there is a way to do that directly from the .eps files, even better. Any help is welcome, since I've been trying to do this for several hours now. Thank you.
You should be able to make an animated GIF in one go like this:
convert -density 300 -delay 200 outputs-*.eps animated.gif
Failing that, you should be able to convert all your eps files to, say PNG with:
mogrify -density 300 -format png outputs-*eps
Be careful with mogrify - it overwrites your input files unless you specify -path for an output directory, or you change format - like I just did to PNG.
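For example, a sketch combining both safeguards, so the converted files change format and land in a separate directory (the png subdirectory is an assumption; create it first):
mkdir png
mogrify -density 300 -format png -path png outputs-*.eps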
For anyone who lands here trying to figure out how to work around ImageMagick's convert: not authorized error without reverting the change that was made to the system-wide security policy to close a vulnerability, here's how you'd use Ghostscript to do a batch EPS-to-JPEG conversion directly, without bringing ImageMagick into the mix:
gs -dSAFER -dEPSCrop -r300 -sDEVICE=jpeg -o outputs-%03d.jpg outputs-*.eps
-dSAFER puts Ghostscript in a sandboxed mode where PostScript code can only interact with the files you specified on the command line. (Yes, the parts of EPS, PS, and PDF files that define the page contents are written in a Turing-complete programming language.)
-dEPSCrop asks for the rendered output to be cropped to the bounding box of the drawing rather than padded out to whatever size page Ghostscript expects you to be printing to. (See the manual for details.)
The -r300 requests 300 DPI (-r600 for 600 DPI, etc.)
The -sDEVICE specifies the output format (See the Devices section of the manual for other choices.)
-o is a shorthand for -dBATCH -dNOPAUSE -sOutputFile=
This section of the Ghostscript manual gives some example formats for multi-file filename output but, for the actual syntax definition, it points you at the documentation for the C printf(3) function.
Once you've got your JPEGs, you can follow the instructions in this answer over on the Video Production Stack Exchange to combine them into an MKV file.
The TL;DR is this command:
ffmpeg -framerate 30 -i outputs-%03d.jpg -codec copy output.mkv
Check out the other answers if you want something that performs inter-frame compression rather than aiming to avoid transcoding the JPEGs again.
(If you want the best compromise, have Ghostscript output PNGs and then let ffmpeg handle switching to lossy compression.)
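A sketch of that compromise, assuming the same outputs-NNN.eps naming and that your ffmpeg build includes libx264; adjust -framerate to hit your 10-second target:
gs -dSAFER -dEPSCrop -r300 -sDEVICE=png16m -o outputs-%03d.png outputs-*.eps
ffmpeg -framerate 15 -i outputs-%03d.png -c:v libx264 -pix_fmt yuv420p output.mkv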
I am relatively new to machine learning/python/ubuntu.
I have a set of images in .jpg format where half contain a feature I want Caffe to learn and half don't. I'm having trouble finding a way to convert them to the required lmdb format.
I have the necessary text input files.
My question is can anyone provide a step by step guide on how to use convert_imageset.cpp in the ubuntu terminal?
Thanks
A quick guide to Caffe's convert_imageset
Build
The first thing you must do is build Caffe and Caffe's tools (convert_imageset is one of these tools).
After installing Caffe and building it with make, make sure you also ran make tools.
Verify that a binary file convert_imageset is created in $CAFFE_ROOT/build/tools.
Prepare your data
Images: put all images in a folder (I'll call it here /path/to/jpegs/).
Labels: create a text file (e.g., /path/to/labels/train.txt) with one line per input image. For example:
img_0000.jpeg 1
img_0001.jpeg 0
img_0002.jpeg 0
In this example the first image is labeled 1 while the other two are labeled 0.
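If you still need to generate such a file, here is a minimal shell sketch (the pos/ and neg/ subfolders under /path/to/jpegs/ are hypothetical; adapt the paths and labels to your own layout):
(
  cd /path/to/jpegs/
  for f in pos/*.jpeg; do echo "$f 1"; done
  for f in neg/*.jpeg; do echo "$f 0"; done
) > /path/to/labels/train.txt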
Convert the dataset
Run the binary in a shell:
~$ GLOG_logtostderr=1 $CAFFE_ROOT/build/tools/convert_imageset \
--resize_height=200 --resize_width=200 --shuffle \
/path/to/jpegs/ \
/path/to/labels/train.txt \
/path/to/lmdb/train_lmdb
Command line explained:
Setting GLOG_logtostderr=1 before calling convert_imageset tells the logging mechanism to redirect log messages to stderr.
--resize_height and --resize_width resize all input images to the same size, 200x200.
--shuffle randomly changes the order of the images rather than preserving the order in the /path/to/labels/train.txt file.
These are followed by the path to the images folder, the labels text file, and the output name. Note that the output name should not exist prior to calling convert_imageset, otherwise you'll get a scary error message.
Other flags that might be useful:
--backend - lets you choose between an lmdb dataset and levelDB.
--gray - converts all images to grayscale.
--encoded and --encoded_type - keep image data in encoded (jpg/png) compressed form in the database.
--help - shows some help, see all relevant flags under Flags from tools/convert_imageset.cpp
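For instance, a variant of the command above that stores grayscale images in a LevelDB instead (a sketch based on the flag descriptions above; the output path is hypothetical):
~$ GLOG_logtostderr=1 $CAFFE_ROOT/build/tools/convert_imageset \
    --resize_height=200 --resize_width=200 --shuffle \
    --gray --backend=leveldb \
    /path/to/jpegs/ \
    /path/to/labels/train.txt \
    /path/to/leveldb/train_leveldb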
You can check out $CAFFE_ROOT/examples/imagenet/convert_imagenet.sh for an example of how to use convert_imageset.
This is a followup to Apple's Automator: compression settings for jpg?
It works. However I am failing at modifying it to make it more flexible.
I am incorporating sips into Automator to try to create a droplet that changes an image file to a JPEG of a particular quality and dimensions. The Automator app asks for the compression level and pixel width, then spits out the requested file. Except... mine doesn't. The scripting (given my lack of programming knowledge) is my weak link.
This is what I've done that's not working... Please see:
There are two mistakes in the code that the original poster made.
When referencing a variable in the shell, you must prefix it with "$".
The places where this was missed are what stop the code from working as it should.
The lines missing the $ are: compressionLevel=file
and
sips -s format jpeg -s formatOptions compressionLevel $file --out ${filename%.*}.jpg
The corrected code should be:
compressionLevel=$file
and
sips -s format jpeg -s formatOptions $compressionLevel $file --out ${filename%.*}.jpg
UPDATED ANSWER: I noticed you also have a pixel width,
so I have changed the code to accommodate it.
I have also added a "_" to the end of the output file name, which you can remove if you want.
The reason I put it there is so the originals are not overwritten; in effect, copies are created.
compressionLevel=$1
pixelWidth=$2
i=1   # index of the current positional argument
for item   # a "for" with no list iterates over the positional parameters: $1, $2, ...
do
    if [ $i -gt 2 ]; then   # skip the first two arguments: they are the sips options; the rest are file paths, and "i" tracks which argument we are on
        echo "Processing $item"
        sips -s format jpeg -s formatOptions "$compressionLevel" --resampleWidth "$pixelWidth" "$item" --out "${item%.*}_.jpg"
    fi
    ((i++))
done
osascript -e 'tell app "Automator" to display dialog "Done." buttons {"OK"}'
I would suggest you do some reading on shell scripting to get some basics down.
There are plenty of references on the web, and Apple has its own documentation.
I am sure that if you ask, others can give you some good starting points; first search this site for similar questions, as I am sure this has been asked a thousand times.
I'm using ImageMagick (with Wand in Python) to convert images and to get thumbnails from them. However, I noticed that I need to verify whether a file is an image or not ahead of time. Should I do this with Identify?
I would assume that checking the integrity of a file requires reading the whole file into memory. Is it better to just try to convert the file and, if there is an error, conclude that the file wasn't good?
It seems like you answered your own question:
$ ls -l *.png
-rw-r--r-- 1 jsp jsp 526254 Jul 20 12:10 image.png
-rw-r--r-- 1 jsp jsp 10000 Jul 20 12:12 image_with_error.png
$ identify image.png &> /dev/null; echo $?
0
$ identify image_with_error.png &> /dev/null; echo $?
0
$ convert image.png /dev/null &> /dev/null ; echo $?
0
$ convert image_with_error.png /dev/null &> /dev/null ; echo $?
1
If you specify the -regard-warnings flag with the ImageMagick identify tool:
magick identify -regard-warnings myimage.jpg
it will throw an error if there are any warnings about the file. This is good for checking images, and it seems to be a lot faster than using -verbose.
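For example, a quick way to scan a whole folder and report the files that trigger warnings (a sketch, assuming the IM7 magick front end shown above):
for f in *.jpg; do
    magick identify -regard-warnings "$f" > /dev/null 2>&1 || echo "suspect: $f"
done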
In case you use Python, you can also consider the Pillow module.
In my experiments, I have used both the Python Pillow module (PIL) and the ImageMagick wrapper Wand (for the psd and xcf formats) in order to detect broken images; the original answer with code snippets is here.
Update:
I also implemented this solution in my Python script here on GitHub.
I also verified that damaged (jpg) files frequently are not 'broken' images; i.e., a damaged picture file sometimes remains a legitimate picture file: the original image is lost or altered, but you are still able to load it.
End Update
I quote the full answer for completeness:
You can use Python Pillow(PIL) module, with most image formats, to check if a file is a valid and intact image file.
In case you aim at detecting broken images as well, @Nadia Alramli correctly suggests the im.verify() method, but this does not detect all possible image defects; e.g., im.verify does not detect truncated images (which most viewers load with a greyed area).
Pillow is able to detect these types of defects too, but you have to apply an image manipulation or an image decode/recode in order to trigger the check. Finally, I suggest using this code:
from PIL import Image

try:
    im = Image.open(filename)
    im.verify()   # verify() catches some defects, but not all (e.g. truncation)
    im.close()    # reopening is necessary in my case: verify() leaves the image unusable
    im = Image.open(filename)
    im.transpose(Image.FLIP_LEFT_RIGHT)   # force a full decode to catch truncated data
    im.close()
except Exception:
    # manage exceptions here
    pass
In case of image defects this code will raise an exception.
Please consider that im.verify is about 100 times faster than performing the image manipulation (and I think that flip is one of the cheaper transformations).
With this code you are going to verify a set of images at about 10 MB/s (using a single thread of a modern 2.5 GHz x86_64 CPU).
For the other formats (psd, xcf, ...) you can use the ImageMagick wrapper Wand; the code is as follows:
import wand.image

im = wand.image.Image(filename=filename)
im.flip()   # force the pixel data to be decoded
im.close()
But, from my experiments, Wand does not detect truncated images; I think it loads the missing parts as a greyed area without complaining.
I read that ImageMagick has an external command, identify, that could do the job, but I have not found a way to invoke it programmatically and I have not tested this route.
I suggest always performing a preliminary check: verifying that the file size is not zero (or very small) is a very cheap idea:
import os

statfile = os.stat(filename)
filesize = statfile.st_size
if filesize == 0:
    # manage the 'faulty image' case here
    pass
Here's another solution using identify, but without convert:
identify -verbose *.png 2>&1 | grep "corrupt image"
identify: corrupt image 'image_with_error.png' # error/png.c/ReadPNGImage/4051.
I use identify:
$ identify image.tif
00000005.tif TIFF 4741x6981 4741x6981+0+0 8-bit DirectClass 4.471MB 0.000u 0:00.010
$ echo $?
I am using the ImageMagick convert utility right now. I have a PostScript file that takes about 90 seconds to convert to GIF.
I am looking for a faster way to do this, preferably by modifying the options to "convert".
When I say "fast", ideally a few seconds but I'll take any significant speed up. Something suitable for an interactive GUI.
I only need this in black and white or greyscale (specifically, it is an image of seismic data "wiggle traces", so B&W is fine).
Other acceptable formats are BMP, GIF, JPEG, JPG, PCX, PGM, PNG, PNM, PPM, RAS, TGA, TIF, or TIFF.
Trying to stick with ImageMagick as that is already installed and trying to avoid selling my boss on anything new. Still happy to hear other suggestions.
My suggestion is: Use Ghostscript.
Since you have a working ImageMagick already installed, that means Ghostscript is also there, because ImageMagick cannot convert PDF or PostScript to raster images all on its own; it has to call Ghostscript as its delegate to do this anyway.
Ghostscript can directly convert PDF/PostScript input to TIFF/TIF/TIFFg4, JPEG, PBM, PCX, PNG, PNM, PPM, BMP raster image output.
The advantages are: you don't need to have ImageMagick involved. So it's faster and also gives you more direct control over the conversion parameters. If you run Ghostscript via ImageMagick that's a level of indirection which isn't always required. (Sometimes it may be required to add some fine-tuning and post-processing manipulations to the raster image data that Ghostscript generated -- but that doesn't seem to be the case for you.)
The only disadvantage is: Ghostscript cannot produce GIF. If you required GIF (which you don't seem to), you need ImageMagick for post-processing the raster output of Ghostscript to GIF.
You can see how ImageMagick calls Ghostscript (and which parameters it uses for the call -- look for a printed line on stderr containing gs, gsx or gswin32c or gswin64c) by running for example:
convert -verbose some.pdf[0] some.gif
Update
I ran a very, very unscientific 'benchmark', running the following two commands 100 times each, which convert the randomly picked page 333 of the official PDF specification (the ISO version of PDF-1.7) to GIF, measuring the time consumed. I ran these commands concurrently, so both had to deal with the same overall system load, making the results more comparable:
'Comfortably' using ImageMagick's convert to directly produce GIF:
time for i in $(seq -w 1 100); do
convert \
PDF32000_2008.pdf[333] \
p333-im-no_${i}.gif ;
done
Using Ghostscript to create from the same page grayscale PNGs, piping Ghostscript's output to ImageMagick's convert in order to get GIFs:
time for i in $(seq -w 1 100); do
gs \
-q \
-o - \
-dFirstPage=333 \
-dLastPage=333 \
-sDEVICE=pnggray \
PDF32000_2008.pdf \
| \
convert \
- \
p333-gs-no_${i}.gif ;
done
Timing results for the first command (running the 'comfortable' convert to achieve the PDF->GIF transformation, which uses Ghostscript only 'behind our backs'):
real 2m29.282s
user 2m22.526s
sys 0m5.647s
Timing results for the second command (running gs directly and openly, piping its output to convert):
real 1m27.370s
user 1m23.447s
sys 0m3.435s
One more thing:
The total size of the 100 'Ghostscript' GIFs was 1.6 MB, but they were 8-bit grayscale.
The total size of the 100 'ImageMagick-direct' GIFs was 1.2 MB, but they were 2-bit black+white.
I don't have the motivation currently to tweak the test commandline parameters more for even closer comparability of the resulting files.
This result (149 seconds vs. 87 seconds) gives me enough confidence in my guess that you can gain significant performance improvements by following my recommendation. :-)
You can start with Ghostscript:
gs -dSAFER -dBATCH -dNOPAUSE \
-sDEVICE=pnggray -r300 -sOutputFile=seismic.png seismic.pdf
A much longer but more interesting approach would be to analyze exactly what is in those PDFs.
I had to do something similar with the PDF output of an EKG workflow. The original data were unavailable, we only had the PDF, but I discovered that the PDF was vector based and not raster. After a little hacking it was very easy to decode the labels, the legend and the single elementary lines making up the EKG diagram, and I came up with an option to recolor the tracks starting from what appeared a grayscale image. It did take several days, though.
It is possible that your PDF is generated in a similar way, and the data could be decoded (at first I had to use pdftk to get me a non-compressed PDF, then I found a library that I could use - it implemented the Deflate algorithm). It would be really cool to have output in SVG format :-)