In Carrierwave, how to compress images for Google PageSpeed

When I use Google PageSpeed, I'm being told I need to compress my images. Example:
Compressing https://xxx.s3.amazonaws.com/xxxx.jpg could save 33.2KiB (66% reduction).
I'm not sure how to make Google happy here. In Carrierwave, I have the following setting:
version :thumb do
  process resize_to_fill: [340, 260]
  process quality: 86
end
If I set the quality to anything lower than 86, the image doesn't look good. Is there some other setting or trick I'm missing to compress images in a way that will make Google PageSpeed happy and help my site load faster?

Have you tried the resize_to_limit helper? It may help you:
process resize_to_limit: [340, 260]
It will resize the image to fit within the specified dimensions while
retaining the original aspect ratio. Will only resize the image if it
is larger than the specified dimensions.
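For example, the thumb version from the question could combine this with the existing quality setting (just a sketch, assuming the same MiniMagick-based uploader):
version :thumb do
  process resize_to_limit: [340, 260]
  process quality: 86
end
Since resize_to_limit never enlarges, small originals pass through untouched, which saves a few bytes as well.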
There are a couple of ways you can optimize images: on the desktop and online. For the desktop, I would suggest the jpegoptim utility to optimize JPEG files.
It provides lossless optimization (based on optimizing the Huffman tables) and "lossy" optimization based on setting a maximum quality factor.
If you are on Linux, install it from your terminal:
sudo apt-get install jpegoptim
Then go to the folder containing your image and first check its size:
du -sh photo.jpg
After that, run the command below to optimize it:
jpegoptim photo.jpg
The output shows the image dimensions, the old and new file sizes, and the percentage saved.
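To use the "lossy" mode mentioned above, cap the maximum quality factor with the --max flag (85 here is only an illustrative value; tune it by eye):
jpegoptim --max=85 photo.jpg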
You can also compress a given image down to a specific target size, but that disables the lossless optimization.
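For example, to target roughly 30 kilobytes (an arbitrary figure for illustration):
jpegoptim --size=30 photo.jpg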
You can also optimize your images in batch with this command:
jpegoptim *.JPG
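To tie this back to the original Carrierwave question, you could shell out to jpegoptim from a custom processor after resizing. A rough sketch (the optimize method name is my own, and it assumes jpegoptim is installed on the server):
class ImageUploader < CarrierWave::Uploader::Base
  include CarrierWave::MiniMagick

  version :thumb do
    process resize_to_fill: [340, 260]
    process quality: 86
    process :optimize  # calls the method below on the generated version
  end

  def optimize
    # current_path is the temporary file Carrierwave is processing;
    # --strip-all removes comments and metadata markers
    system("jpegoptim", "--strip-all", current_path)
  end
end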
Another desktop option is to do basic optimization manually with Photoshop or GIMP: cropping unnecessary space, reducing color depth to the lowest acceptable level, removing image comments, and using the "Save for Web" option.
You can use online solutions too. There are plenty of them; I suggest these, for example:
https://tinypng.com
https://kraken.io
There is also the WebP format (developed by Google). Chrome and Opera support it, but Firefox does not, so images basically need to be served conditionally, based on the HTTP Accept header sent by browsers capable of displaying the format. Check this blog if you opt for the WebP format; there is a gem you can use (Rails 4).
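For illustration only, that content negotiation could look roughly like this in a Rails controller (a sketch: webp_variant_path and jpeg_variant_path are hypothetical helpers, and in practice this is usually done at the web-server or CDN level):
def show
  if request.headers["Accept"].to_s.include?("image/webp")
    send_file webp_variant_path, type: "image/webp", disposition: "inline"
  else
    send_file jpeg_variant_path, type: "image/jpeg", disposition: "inline"
  end
end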
I hope this helps.

Related

Export multiple image versions in GIMP (different resolution)

I want to export my image with multiple sizes (192x192, 144x144, 96x96, 72x72, 44x44).
Is there an easy and effective way to do that or do I have to scale it manually each time?
See the ofn-export-sizes script. The ZIP contains an HTML doc. Installation instructions at the bottom of the download page.
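If you'd rather script it than install a plugin, a shell loop over ImageMagick's convert does the same job (icon.png stands in for your source file):
for s in 192 144 96 72 44; do convert icon.png -resize ${s}x${s} icon-${s}x${s}.png; done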

Change pixel order of a .tiff file from rgbrgb to rrggbb (interleaved to non-interleaved)

I have been trying to figure out a way to create non-interleaved .tiff files, as described here: https://questionsomething.wordpress.com/2012/07/26/databending-using-audacity-effects/ (under the heading of "The photographic base").
It seems like it's a trivial thing using Photoshop, but I'm on Linux and would hate to get myself a copy just for this one option. If anyone knows of a way, be it via ImageMagick, hacking GIMP, or some obscure program, I'd be glad for any suggestions.
In TIFF parlance, you have a file in the contiguous planar configuration and want the separate planar configuration.
The tiffcp utility that comes with LibTIFF can do this for you. Use the -p separate option:
tiffcp -p separate src.tif dest.tif
See the man page.
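To verify the result, LibTIFF's tiffinfo utility reports the planar configuration of the output file:
tiffinfo dest.tif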

ImageMagick: splitting a large PDF into PNGs

I have a PDF I'd like to split into individual pictures, one picture per page. I am using the following ImageMagick command to do so:
convert -density 400 mypdf.pdf out.png
and it works fine. However, I have tested it on the first 5 pages of my PDF and it took 10 seconds; at this rate it would take about half an hour to split the whole PDF, which seems strange to me considering that I'm not really doing anything fancy: I'm not rotating the images or modifying them in any way. I'd like to know if there is a faster way to do this. Thanks.
Also, I'd like to preserve the quality. I was doing it before without the density flag, but the quality dropped dramatically.
PDF rendering is a bit of a mess.
The best system is probably Ghostscript, plus MuPDF, its library form. It's extremely fast and scales well to large documents. Unfortunately the library licensing (AGPL) is difficult, and you can't really link to it directly.
ImageMagick gets around this restriction by shelling out to the ghostscript command-line tool, but of course that means rendering a page of a PDF is now a many-stage process: the PDF is copied to /tmp, ghostscript is executed with a set of command-line flags to render the document to an image file in /tmp, that temporary image file is read back in, a page is extracted, and finally the image is written to the output PNG.
On my laptop I see:
$ time convert -density 400 nipguide.pdf[8] x.png
real 0m2.598s
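If you want to avoid that round trip, you can invoke Ghostscript directly; something along these lines (the output pattern is just an example):
gs -dNOPAUSE -dBATCH -sDEVICE=png16m -r400 -sOutputFile=out-%03d.png mypdf.pdf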
The other popular PDF renderer is poppler. This came out of the xpdf document previewer project, so it's fast, but is only really happy rendering to RGB. It can struggle on large documents too, and it's GPL, so you can't link to it without also becoming GPL.
libvips links directly to poppler-glib for PDF rendering, so you save some copies. I see:
$ time vips copy nipguide.pdf[page=8,dpi=400] x.png
real 0m0.904s
Finally, there's PDFium. This is the PDF rendering library from Chrome: it's the old Foxit PDF previewer, rather crudely cut out and made into a library. It's a little slower than poppler, but it has a very generous license, which means you can use it in situations where poppler would just not work.
There's an experimental libvips branch which uses PDFium for PDF rendering. With that, I see:
$ time vips copy nipguide.pdf[page=8,dpi=400] x.png
real 0m1.152s
If you have Python installed, you should try PyMuPDF. It is a Python binding for MuPDF, extremely easy to use and extremely fast (3 times faster than xpdf).
Rendering PDF pages is bread-and-butter business for this package. Use a script like this:
import sys
import fitz  # PyMuPDF

fname = sys.argv[1]      # get filename from the command line
doc = fitz.open(fname)   # open the file
mat = fitz.Matrix(2, 2)  # controls resolution: scale factor in x and y direction
for page in doc:
    pix = page.getPixmap(matrix=mat, alpha=False)
    pix.writePNG("p-%i.png" % page.number)  # write the page's image
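Save it as, say, pdf2png.py (any name will do) and run it with python pdf2png.py mypdf.pdf; it writes one PNG per page into the current directory.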
More on Matrix:
This form scales each direction by a factor of 2, so the resulting PNG becomes about 4 times larger than the default rendering at 100% size. Both dimensions can be scaled independently. Rotating, or rendering only part of a page, is also possible.
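For instance, to match the 400 dpi the asker requested with -density 400, scale by 400/72, since PDF's default rendering resolution is 72 dpi:
mat = fitz.Matrix(400/72, 400/72)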
More on PyMuPDF:
It is available as a binary wheel for Windows, OSX and all Linux versions from PyPI, so installation is a matter of a few seconds. The license is GNU GPL 3 for the Python part and GNU AGPL 3 for the MuPDF part, so it's open source and free. Creating commercial products is excluded, but you can freely distribute under the same licenses.

Optimize Images for Google Page Speed

I'm trying to optimize the images on my webpage to pass the Google PageSpeed test, but I haven't figured out how to compress the files to the size Google wants with the tools Google suggests.
So I use jpegoptim and jpegtran for JPEGs, with these commands:
jpegoptim.exe FILENAME
jpegtran.exe -copy none -debug -optimize -outfile FILENAME FILENAME
where FILENAME is the full path to the image file. In most cases the files come out a bit smaller, but not as small as the versions I can download from Google (via the PageSpeed Insights tool). Can anyone help me find the right parameters, or another tool (working on Windows) that gives perfect results (or at least results that Google accepts)?
Thanks in advance,
J. Doe ;)
At the bottom of the Google PageSpeed Insights page there is a link where you can download optimized resources for your website.
The link is called:
Download optimized image, JavaScript, and CSS resources for this page.

ImageMagick removing watermark

Is it possible to remove a watermark that was placed with the ImageMagick library in the past?
Thanks ;)
Update
I mean, I need to remove my logo from images. I can't find anything in the official documentation about how to remove a watermark from an image.
Yes, if you restore the original files from a backup. I'm presuming you've rendered a single-layered file, where IM composited/overlayed the watermark onto the image. There is no reliable and practical way to remove such a mark manually in general, let alone via a batch process. Exceptions might include watermarks always rendered over a flat color, etc.
The logo can be removed easily using ffmpeg's delogo filter. All you need to supply is the coordinates and dimensions of the logo on the video.
This works very swiftly on videos. You can convert your image to a video and apply the filter, or even compile a group of images into a video and later break it into frames to obtain clean images. All of this can be done with ffmpeg alone.
Example of the filter syntax: ffmpeg -i (your video url) -vf "delogo=x=0:y=0:w=100:h=77:band=10" (output file url)
Find the complete documentation here.
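Since ffmpeg treats a single image as a one-frame video, you may also be able to apply the filter to a still directly; a sketch with placeholder file names and coordinates:
ffmpeg -i watermarked.jpg -vf "delogo=x=0:y=0:w=100:h=77" cleaned.jpg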
