I just discovered Apple's sips tool (Scriptable Image Processing System), and saw one person mention it is about 5x faster than ImageMagick for resizing images.
We have been using ImageMagick for years on our production CentOS servers and are happy with it, but performance is an issue. A large batch image resize can bring our beefy (48GB RAM, etc.) server to its knees, and we're often forced to throttle/slow down the batch resize to keep other services running acceptably fast.
Is it possible to run sips on CentOS? Or is there some other similarly fast tool for resizing/recompressing jpeg images from within a PHP script?
sips (Scriptable Image Processing System) is a command-line interface to Apple's Core Image framework, which is not open source, and you can't run its Mach-O binaries on Linux, which is a different platform. So you have to look for some alternatives.
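For what it's worth, one commonly suggested alternative on Linux is GraphicsMagick, which tends to be lighter than ImageMagick for batch work; whether it is actually faster for your images is something you'd have to benchmark. A minimal sketch (file names are placeholders; you could call this from a PHP script via exec()):

gm convert input.jpg -resize 800x600 -quality 85 output.jpg
# -resize fits the image inside 800x600 while keeping the aspect ratio,
# -quality sets the JPEG recompression quality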
Related
I am using Ghostscript to convert pdf1.3 to pdf/a-1b using this command:
gs -dPDFA -dBATCH -dNOPAUSE -dNOOUTERSAVE -sColorConversionStrategy=sRGB -sDEVICE=pdfwrite -sOutputFile=output.pdf PDFA_def.ps input.pdf
The PDFA_def.ps is customized to use the sRGB ICC profile. Apart from that change it is the standard def file which comes with GS 9.26.
Now comes the tricky part:
1. Running this conversion locally on Ubuntu 18.10 with GS 9.26, it works fine and I get a valid PDF/A.
2. Running the same command in a Docker container (Ubuntu 18.10, GS 9.26) creates a PDF/A as well, which is also considered valid.
However, in the first scenario I can process the file using mustang (https://github.com/ZUGFeRD/mustangproject) to create a valid electronic invoice. In the second scenario (Docker container) this fails, since the file is not considered a valid PDF by mustang.
Checking both PDF files, I would have expected them to be identical since I am running the same conversion on the same input. However, they are not: the PDF created in the Docker container is 10 bytes smaller and shows some different meta-information in the file itself.
I suspect that there must be some "hidden dependencies" that make GS act differently on my host system compared to the Docker container, but that feels entirely wrong and I am running out of means to debug further.
Does anyone know whether GS has some more dependencies that might cause the same command to produce different results?
The answer is 'maybe'. It depends on how Ghostscript was built for starters.
I assume you are using a package, and not building from source yourself. In that case there are a number of dependencies, including: FreeType, LibJPEG, JBIG2dec, LibTIFF, JPEG-XR, LCMS2, LibPNG, OpenJPEG, Expat, zlib, and potentially IJS, CUPS and X-Windows, depending on what devices were built in.
Any or all of these could be system shared libraries instead of being built using the specific version shipped by Artifex. They could also be different versions on the two systems.
That said, I think it's 'unlikely' that this is your problem. However, without seeing the PDF files I can't tell you why there is a difference. Differences in the metadata are to be expected, since that includes a date/time stamp.
I'd really need to see examples of the original and the two output PDF files to be able to comment further.
[Edit]
Looking at the files they have been produced compressed (unsurprisingly) which can obviously lead to differences in size if there are small differences in the input streams. So the first task was to decompress the files.
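If you want to do the same comparison yourself, one way to decompress the streams so the two files can be diffed as text is qpdf's QDF mode (just an example; any tool that expands the streams will do):

qpdf --qdf --object-streams=disable output.pdf expanded.pdf
# writes an uncompressed, normalized copy that is much easier to compare with diff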
That done I see there are, essentially, no differences between them. One of the operating systems is using a time zone 7 hours behind UTC, the other is in UTC so where one of the systems is time stamping with (eg)
2019-04-26T19:49:01Z
The other is using
2019-04-26T12:51:35-07:00
So instead of Z (for UTC) you get -07:00 which is where the extra 10 bytes are coming from. Other than that, the Unique IDs are (as you would imagine) different, the Length values for the streams are different for the streams containing dates, and the startxref is different because the streams are different lengths.
Both files claim to be compatible with PDF/A-1b. In short I can see no real differences between them. Since you are using a tool to further process the files, I'd suggest you try taking the PDF file from your working system and processing it on the non-working system, and vice versa; it seems to me that the problem may be the later processing rather than the PDF file itself. Perhaps you have different versions of that tool on the two systems.
For what it may be worth, Ghostscript can be induced into creating a ZUGFeRD file directly, see this bug report and this commit to the repository.
I am trying to learn Docker and am referring to online materials. I came to know that there is an official hub of images which we can pull and use to run containers.
The repos are available at https://hub.docker.com/ and include the official images of ubuntu, httpd, mysql (and so on).
My question is:
Do all these images have a "minimal OS" on which they run? For example, if we consider the httpd image, does it include the OS it needs to run on?
From my understanding, images are built in a layered architecture from a parent image. So we have a parent image, and the changes for this image form one more layer above the parent image. If you look at the Dockerfile for an image you will see something like this:
FROM node:6.11.5
This node:6.11.5 is the parent image for our current image.
If you check the Dockerfiles of parent images, you will find that somewhere up the hierarchy they follow from a base image.
This base image is basically an OS without a kernel: it contains only the userland software of a Linux distribution (e.g. CentOS, Debian). So all images use the host OS kernel, which is why you cannot run a Windows container on a Linux host or vice versa.
So basically all images are layered changes on top of a base image, which is an OS without a kernel.
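As an illustration, here is a hypothetical Dockerfile (not taken from any real project); each instruction adds a layer on top of the parent image's layers:

# parent image; node:6.11.5 is itself built on a Debian base image
FROM node:6.11.5
# install an extra package -> one new layer containing the changes
RUN apt-get update && apt-get install -y curl
# copy the application in -> another layer
COPY app.js /app/app.js
# default command; metadata only, no filesystem layer
CMD ["node", "/app/app.js"]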
Please find below links for further information:
https://serverfault.com/questions/755607/why-do-we-use-a-os-base-image-with-docker-if-containers-have-no-guest-os
https://blog.risingstack.com/operating-system-containers-vs-application-containers/
If you need to create a base image you can see the steps here.
https://docs.docker.com/develop/develop-images/baseimages/
Please correct me if I am wrong.
Here's the answer: "Containers," in all their many forms, are an illusion!
Every process that is "running in a container" is, in fact, running directly on the host operating system. But it is "wearing rose-colored glasses." It thinks that it knows what its user-id is ... but it doesn't. 🤷♂️ It thinks it knows what the network looks like. What the filesystem looks like. ... ... ...
And, it can afford to think that way, because nothing that it is able to see would tell it otherwise.
... But, nothing that it sees is actually the physical truth.
"Containerization" allows us to achieve the essential technical requirement – total isolation – at very-considerably less cost than "virtualization." The running processes see what they need to see, and so "they need be none the wiser." Meanwhile, the host operating system, which is aware of the actual truth, can very-efficiently support them: it just has to maintain the illusion. The very-expensive "virtual machine" software layer is completely gone. The only "OS" that actually exists is the host.
Most images are based on a distribution, as you can see in their Dockerfiles. The exceptions are the distribution images themselves; they have a different base image, which is called scratch.
You can review the images they are based on when you visit the project's page on DockerHub, for example https://hub.docker.com/_/httpd/
Their Dockerfiles are referenced and you can review them by clicking on them, e.g. the first tag "2.2" refers to this file. The first line in the Dockerfile is FROM debian:jessie and shows, that it is based on a Debian image.
It is common to have a separate tag with the suffix -alpine to indicate that Alpine Linux is used, which is a much smaller base image than the Debian one. This leads to a smaller httpd image, because the base image is much smaller.
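If you want to see the difference yourself, pull both variants and compare their sizes (the tags are examples; check the Docker Hub page for the currently supported ones):

docker pull httpd:2.4
docker pull httpd:2.4-alpine
docker images httpd
# the -alpine variant is much smaller because its base image is Alpine rather than Debian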
The whole idea is that the image is completely stand-alone, running on the hardware/virtualization layer, and thus (the upside) cannot be influenced by anything that is not part of the image.
Every image contains a complete OS. OSs made especially for Docker can be just a few megabytes: for example Alpine Linux, an OS of only about 8 megabytes!
But bigger OSs like Ubuntu/Windows can be a few gigabytes. Both have their advantages, because Docker cuts an image up into layers: when you use a base image twice (via the FROM command, see the other answers), you only download that layer once.
A smaller OS has the advantage of only needing to download a few megabytes, but every (Linux) library you want to use you have to download and include yourself. That custom layer is only used in your own image, is not re-used by other images, and therefore adds an extra layer and extra megabytes that people have to download to run your image.
If you want to make an image from nothing you can start your dockerfile with:
FROM scratch
But this is not advised, unless you really know what you are doing and/or are just hobbying around.
I think a lot of these answers miss the point. Explaining what you can or may do does not answer the question: do all docker images need an OS?
After a bit of digging, the answer is no.
https://docs.docker.com/develop/develop-images/baseimages/
FROM scratch
ADD hello /
CMD ["/hello"]
There's no OS defined in this Dockerfile, only a precompiled hello-world binary.
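As a sketch of how such a self-contained binary could be produced (an assumption for illustration, not necessarily how the docs built theirs): statically link a trivial program so it needs no libraries from an OS userland, then build the image from the three-line Dockerfile above.

cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { puts("hello from scratch"); return 0; }
EOF
gcc -static -o hello hello.c
# build and run the image defined by the Dockerfile above
docker build -t hello-scratch .
docker run --rm hello-scratch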
Also here
https://hub.docker.com/_/scratch
Also in this question:
https://serverfault.com/questions/755607/why-do-we-use-a-os-base-image-with-docker-if-containers-have-no-guest-os
An answerer makes this statement:
why do we base the container on an OS image? Because you'd want to use some commands like (apt, ls, cd, pwd).
So often the OS is just included because you might want to use some bundled low level tools or SSH into it to do some things.
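For example, starting a shell in a Debian-based image gives you all of that bundled userland (a quick check, assuming Docker is available):

docker run --rm -it debian:stable-slim bash
# inside the container: apt, ls, cd, pwd etc. work because the Debian userland
# is part of the image, even though no separate kernel is running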
I have seen the Caffe installation for Mac. But I have a question: if my Mac does not have a GPU, do I have no chance to use a GPU and have to use CPU-only?
Or do I have the chance of using a (virtual!) GPU via the NVIDIA web driver?
Moreover, can I have DIGITS on my Mac? When I try to download it, there are no options for a Mac download, just for Ubuntu!
I am very confused about this. Can you please clarify these points for me?
The difference in architectures between CPU and GPU does not allow simple transformation of code written for one architecture to the other. The GPU drivers are specifically written for the GPU architecture and cannot be easily virtualized. On the other hand, some software supports both; this includes OpenGL instructions and Caffe (http://caffe.berkeleyvision.org/). NVIDIA DIGITS is based on Caffe and therefore can work without a dedicated GPU (here is the thread on how to install it on Macs: https://github.com/NVIDIA/DIGITS/issues/88).
According to https://www.github.com/NVIDIA/DIGITS/issues/251 CUDA cannot be run on computers that do not have a dedicated NVidia GPU, but according to How to run my CUDA application on ATI or Intel card in software mode? there is a program gpuocelot that receives CUDA instructions and can work on NVidia GPU, AMD GPU and x86.
In scientific shared computing they wrote separate programs for different devices, e.g. Einstein at Home has four separate programs to find gravitational waves: CPU, NVidia GPU (CUDA), AMD GPU and ARM.
To make DIGITS work you need to build Caffe with CPU_ONLY and tell DIGITS not to use any GPUs by running digits-devserver with the --config flag (https://github.com/NVIDIA/caffe/blob/v0.13.2/Makefile.config.example#L9-L10, https://github.com/NVIDIA/DIGITS/issues/251).
Other possibility:
you can still use the --config flag with the web installer. Try this:
./runme.sh --config. Choose "N" to select none.
Also a possibility:
I am trying to answer how you can choose CPU or GPU. Within the caffe folder there is a Makefile.config.example file. Copy the contents of this file into a new file and rename it "Makefile.config". If you want to use the CPU, then:
1. comment out the "USE_CUDNN := 1" line in the "Makefile.config" file,
2. uncomment "CPU_ONLY := 1",
3. issue the make all command again within the caffe folder.
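Put together, the CPU-only route looks roughly like this (a sketch; paths and the exact Makefile.config lines may differ between Caffe/DIGITS versions):

# inside the caffe checkout:
cp Makefile.config.example Makefile.config
# keep "USE_CUDNN := 1" commented out and uncomment "CPU_ONLY := 1",
# either by editing the file or, for example:
sed -i 's/^# *CPU_ONLY := 1/CPU_ONLY := 1/' Makefile.config
make all

# then, from the DIGITS checkout, tell DIGITS not to use any GPUs:
./digits-devserver --config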
And if nothing helps, you can run through the procedure twice, because that helped someone at the end of the thread.
I'd like to know of alternatives to the command line program texturetool which comes with Xcode for converting PNG images to PowerVR compressed images.
For some reason texturetool takes about 50 seconds to convert some of the images I am working with. With about 1.3 million images to be compressed, that would take several months.
Now I am looking for other tools running on either Linux or OS X, most preferably an in-memory C++ library, as the images are being generated procedurally.
Would love to get an answer.
Thanks.
Imagination Technologies have just released an updated version of their texture compression library that may be worth trying. (I know the page only says Windows and Linux, but there appears to be a Mac version there as well.)
iOS uses a variant of PNG processed by a tool called pngcrush. If I want to optimize the images on a Linux server, how can I do it?
Ignore the iOS-specific variant; Apple's changes don't improve performance. Use regular pngcrush, PNGOUT, or AdvPNG.
Although ImageOptim doesn't work on Linux, there's a very similar script for Linux: a tool named pincrush.
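For reference, a typical lossless pass with regular pngcrush on Linux might look like this (the options are just illustrative; see pngcrush's help output for the full list):

pngcrush -rem allb -reduce -brute input.png optimized.png
# -rem allb strips most ancillary metadata chunks, -reduce attempts lossless
# colour-type/bit-depth reduction, and -brute tries many filter/compression
# combinations (slow, but usually finds the smallest result)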