Task: I am trying to create multipage TIFF files at high speed: 4 pages, each a 4096 x 4096 8-bit bitmap. Each page needs to be compressed with a minimal-loss codec before being written.
What I am doing: I am using libtiff (standard version, no IPP or GPU) to create these multipage TIFF files and compressing each page with JPEG (quality 95). I tried TIFFWriteEncodedStrip(), TIFFWriteEncodedTile() and TIFFWriteScanline(), and the CPU choked. To optimize the process, I am using my own JPEG compression method, which is able to compress fast.
Problem: I am not able to write a TIFF file with pre-compressed data. None of the WriteEncoded...() or WriteRaw...() methods in libtiff work for me.
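For context, the per-page loop I am timing looks roughly like the sketch below (simplified, error handling trimmed; the page buffers come from my own code):

    #include <tiffio.h>

    /* Rough sketch of the baseline: one 4096 x 4096 8-bit page per
       directory, letting libtiff's built-in JPEG codec do the encoding. */
    int write_multipage(const char *path, unsigned char **pages, int npages)
    {
        TIFF *tif = TIFFOpen(path, "w");
        if (!tif) return -1;

        for (int p = 0; p < npages; ++p) {
            TIFFSetField(tif, TIFFTAG_IMAGEWIDTH, 4096);
            TIFFSetField(tif, TIFFTAG_IMAGELENGTH, 4096);
            TIFFSetField(tif, TIFFTAG_BITSPERSAMPLE, 8);
            TIFFSetField(tif, TIFFTAG_SAMPLESPERPIXEL, 1);
            TIFFSetField(tif, TIFFTAG_PHOTOMETRIC, PHOTOMETRIC_MINISBLACK);
            TIFFSetField(tif, TIFFTAG_PLANARCONFIG, PLANARCONFIG_CONTIG);
            TIFFSetField(tif, TIFFTAG_COMPRESSION, COMPRESSION_JPEG);
            TIFFSetField(tif, TIFFTAG_JPEGQUALITY, 95);
            TIFFSetField(tif, TIFFTAG_ROWSPERSTRIP, 4096);
            TIFFSetField(tif, TIFFTAG_SUBFILETYPE, FILETYPE_PAGE);
            TIFFSetField(tif, TIFFTAG_PAGENUMBER, p, npages);

            /* this is the call that is too slow for my use case */
            if (TIFFWriteEncodedStrip(tif, 0, pages[p], 4096L * 4096L) < 0) {
                TIFFClose(tif);
                return -1;
            }
            TIFFWriteDirectory(tif); /* start the next page */
        }
        TIFFClose(tif);
        return 0;
    }

What I would like to do instead is hand libtiff the already-compressed buffer produced by my own encoder for each page.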
Questions:
Is there a method in libtiff, or a third-party extension, that lets you write pre-encoded data to a multipage TIFF file?
Is there an IPP- or GPU-based version of libtiff I could use that would speed up the process?
Or are there other libraries for writing multipage TIFF files that would solve these problems?
I am open to other compression codecs as well, as long as they save space with minimal loss.
Thanks in advance!
Related
The title says it all. What is the difference between the dcm2pnm (http://support.dcmtk.org/docs/dcm2pnm.html), dcmj2pnm (http://support.dcmtk.org/docs/dcmj2pnm.html) and dcml2pnm (http://support.dcmtk.org/docs/dcml2pnm.html) commands of the DCMTK toolkit (http://support.dcmtk.org/docs/pages.html)? They all seem to convert DICOM images to other formats. Are there any special situations where one should be preferred over the others?
Edit: It seems dcml2pnm supports more formats. Why not use that for all purposes? What are the advantages of the other commands?
I am the DCMTK developer.
The difference between the three DCMTK command line tools lies in their support for compressed DICOM images and for output formats.
dcm2pnm was the original tool, developed more than 20 years ago, and it originally only supported the PNM/PGM image format for output (that's also why the tool is called "dcm2pnm" and not "dcm2img" or the like). And, because at that time DCMTK did not support any encapsulated transfer syntaxes (i.e. compression), it can only read uncompressed DICOM images.
dcmj2pnm is located in DCMTK's submodule "dcmjpeg" and adds support for JPEG-compressed DICOM images (based on the IJG library) as well as the JPEG image format for output.
dcml2pnm is located in DCMTK's submodule "dcmjpls" and adds support for JPEG-LS-compressed DICOM images (based on the CharLS library). It does not support conventional JPEG.
All this is probably more obvious from the source code package than from the binary package but it is also mentioned in the above referenced documentation (see "Description" section).
If you'd ask why there are three different tools (in fact, there is also a fourth one for JPEG 2000 support, but that is not part of the public DCMTK), my answer would be: this is mainly for historical reasons, but also to keep the dependencies between the various DCMTK modules as simple as possible.
Furthermore, we consider the command line tools a kind of sample application for the underlying C++ class library. So, if you need a tool that supports all image compression schemes available in DCMTK, it should be easy to write such a tool.
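As a rough, untested sketch (it only registers the JPEG and JPEG-LS decoders and writes the image to a BMP file; a real tool would add proper option handling and more output formats), such a converter could look like this:

    // Minimal converter sketch: decode JPEG- or JPEG-LS-compressed DICOM
    // images and write the result as a BMP file.
    #include "dcmtk/dcmjpeg/djdecode.h"   // DJDecoderRegistration (JPEG)
    #include "dcmtk/dcmjpls/djdecode.h"   // DJLSDecoderRegistration (JPEG-LS)
    #include "dcmtk/dcmimage/diregist.h"  // register color image support
    #include "dcmtk/dcmimgle/dcmimage.h"  // DicomImage

    int main(int argc, char *argv[])
    {
        if (argc < 3) return 1;                     // usage: tool input.dcm output.bmp
        DJDecoderRegistration::registerCodecs();    // JPEG transfer syntaxes
        DJLSDecoderRegistration::registerCodecs();  // JPEG-LS transfer syntaxes

        int result = 1;
        {
            DicomImage image(argv[1]);
            if (image.getStatus() == EIS_Normal)
                result = image.writeBMP(argv[2]) ? 0 : 1;
        }

        DJLSDecoderRegistration::cleanup();
        DJDecoderRegistration::cleanup();
        return result;
    }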
dcmj2pnm adds JPEG codecs to the dcm2pnm functionality. Thus, it is capable of handling JPEG-compressed DICOM data and of producing JPEG output images, so it is a superset of dcm2pnm's functionality.
I think this is because DCMTK offers different compile options that allow you to include or exclude libjpeg; the separate tools simply reflect the toolkit's build options in the accompanying command line tools. This is confirmed by the list of file formats shown when you start a tool with the -h option.
For dcml2pnm I am not sure, but this is a good guess: the same as for JPEG, but it includes JPEG-LS support, which is based on another optional third-party library for DCMTK.
I implemented a specialized utility for my team to batch convert AI files to other formats. It uses Imagick and works well with the one AI file I tested with. We need to officially support Illustrator v9 and upward.
Is it sufficient to write a test for the utility using an AI file saved for Illustrator 9, and to be confident that the embedded PDF data is similar enough that I don't have to add test AI files saved in other versions of AI?
In other words, if it can properly convert the PDF (in the AI file) saved with Illustrator v9, will files saved with all other versions convert 100% the same?
Or should I add test fixtures (AI files) for each other version of Illustrator, because the natively supported PDF format changed significantly between versions?
Or ... does ImageMagick already account for these differences?
I tried the same thing before. The simple answer is that there is always something different. After trying for almost 3 months, I decided to use the Illustrator JavaScript API to convert the files, and it works well.
It really depends on the version of Ghostscript installed, because ImageMagick uses Ghostscript for converting from PDF.
If the version of AI used is 9 or above, the .ai file is saved with PDF compatibility, and your Ghostscript is up to date, then Ghostscript should have no issue converting it regardless of which version of Illustrator (9 or above) it was produced with. Illustrator-specific information is saved at the end of the PDF file; that data is tied to the specific Illustrator version, but it is of no consequence to Ghostscript.
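For example, a quick command line check (file names and the 300 dpi value are just placeholders) would be:

    convert -density 300 input.ai output.png

ImageMagick hands the PDF data inside the .ai file to Ghostscript behind the scenes, so if this works from the shell, the same file should convert through the Imagick bindings as well.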
I need to convert JPG, GIF and EPS files to PDF and vice versa. Will ImageMagick be the best tool for this? I have configured ImageMagick on Ubuntu 11.04 and am using the CLI to try to convert images to PDF, but the quality is too bad. So what would be the best approach to this conversion?
Thanks in Advance :)
You want to support two types of files: raster (JPG/GIF) and vector (EPS). Conversion from one set to the other is never lossless.
When converting from vector to raster or vice versa, one crucial parameter is -density, which sets the resolution (and therefore the image size) at which the file is rendered before it gets converted.
ImageMagick is probably the right tool for vector to raster; it's just a matter of getting the parameters right. To transform raster to vector, better tools would probably be autotrace or potrace, but be aware that these tools cannot do a perfect conversion.
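For example, to render an EPS at 300 dpi before it is written to PDF (file names are placeholders; raise or lower -density to trade quality against file size):

    convert -density 300 input.eps output.pdf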
I have apps that produce PDFs using a component called PDF-in-the-box. The PDFs are not compressed, and there appears to be no way of achieving that with this component. Rave components implement a compression event handler, and I have used this successfully in other apps that were designed from the ground up with Rave. I'd rather not recode the PDF-in-the-box stuff. Is there any way with Delphi that I can compress a PDF after I have produced it?
(This is not image compression; this is lossless compression of the PDF contents. It typically results in a 3-fold decrease in PDF size.)
I can simulate the effect I want by opening the PDFs produced with PDF-in-the-box in Adobe Reader and saving them again. The re-saved PDFs are much smaller. I just want to do the same thing in code.
If using external tools from commandline isn't a problem, you can try:
Ghostscript's -dPDFSETTINGS option (which unfortunately isn't well documented)
QPDF's --stream-data=compress option.
QPDF seems to work faster for me.
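Typical invocations look like this (file names are placeholders; with Ghostscript, the -dPDFSETTINGS preset, e.g. /screen, /ebook or /prepress, controls the size/quality trade-off):

    gs -sDEVICE=pdfwrite -dPDFSETTINGS=/ebook -dNOPAUSE -dBATCH -sOutputFile=out.pdf in.pdf
    qpdf --stream-data=compress in.pdf out.pdf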
I'd like to know of alternatives to the command line program texturetool, which comes with Xcode, for converting PNG images to PowerVR-compressed images.
For some reason texturetool takes about 50 seconds to convert some of the images I am working with. With about 1.3 million images to be compressed, that would take several months.
Now I am looking for other tools running on either Linux or OS X, most preferably an in-memory C++ library, as the images are being generated procedurally.
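For reference, the kind of invocation I am currently timing follows the example in Apple's texturetool documentation (file names are placeholders):

    texturetool -e PVRTC --channel-weighting-perceptual --bits-per-pixel-4 -o Image.pvrtc Image.png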
Would love to get an answer.
Thanks.
Imagination Technologies have just released an updated version of their texture compression library that may be worth trying. (I know the page only says Windows and Linux, but there appears to be a Mac version there as well.)