I'm looking for a free, preferably open source, HTTP image processing server, i.e. I would send it a request like this:
http://myimageserver/rotate?url=http%3A%2F%2Fstackoverflow.com%2FContent%2FImg%2Fstackoverflow-logo-250.png&angle=90
and it would return that image rotated. Features wanted:
Server-side caching
Several operations/effects (like scaling, watermarking, etc). The more the merrier.
POST support to supply the image (instead of the server GETting it).
Different output formats (PNG, JPEG, etc).
Batch operations
It would be something like this, but free and less SOAPy. Is there anything like this or am I asking too much?
The ImageResizing.Net library is both a .NET library and an IIS module; use it as an image server or as an image library, whichever you prefer.
It's open source under an MIT-style license, and it is extensible through plugins.
It has excellent performance and supports 3 pipelines: GDI+, Windows Imaging Component (WIC), and FreeImage. WIC is the fastest and can do some operations in under 15 ms. It supports disk caching (for up to 1 million files) and is CDN-compatible (Amazon CloudFront is ideal).
It has a very human-friendly URL syntax, e.g. image.jpg?width=100&height=100&mode=crop.
It supports resizing, cropping, padding, rotation, PNG/GIF/JPG output, borders, watermarking, remote URLs, Amazon S3, MS SQL, Amazon CloudFront, batch operations, image filters, disk caching, and lots of other cool stuff, like seam carving.
It doesn't support POST delivery of images, but that's easy to add with a plugin. And don't you typically want to store images that are delivered via POST, instead of just replying to the POST request with the result?
[Disclosure: I'm the author of ImageResizer]
Apache::ImageMagick: you install that, along with Apache and mod_perl. This is the standard setup (check the docs; there are alternatives), and it is probably as turn-key as it gets.
Sample conf:
<Location /img>
PerlFixupHandler Apache::ImageMagick
PerlSetVar AIMCacheDir /tmp/your/cache/directory
</Location>
Your requests could look like:
http://domain/img/test.gif/Frame?color=red
More docs are available in the Apache::ImageMagick documentation on CPAN.
While not an out-of-the-box solution, check out ImageMagick. There is a Perl interface for it (PerlMagick), so combine that with some fairly simple CGI scripts, or mod_perl, and it should do the trick.
You can use LibGD or ImageMagick to build a service like that fairly easily. They both have many language bindings.
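For example, here is a minimal sketch of such a service in Node/TypeScript with Express, shelling out to ImageMagick's convert tool. The endpoint and parameter names simply mirror the ones in the question and are not any particular product's API; error handling, caching, and the other requested operations are left out.
// rotate-server.ts - sketch only, assuming ImageMagick's convert is on PATH
import express from "express";
import { spawn } from "child_process";

const app = express();

// GET /rotate?url=<remote image URL>&angle=90
app.get("/rotate", async (req, res) => {
  const { url, angle = "90" } = req.query as Record<string, string>;
  if (!url) {
    res.status(400).send("url parameter required");
    return;
  }
  const upstream = await fetch(url);                      // Node 18+ global fetch
  const input = Buffer.from(await upstream.arrayBuffer());

  // Pipe the bytes through ImageMagick: read from stdin ("-"), write PNG to stdout ("png:-")
  const convert = spawn("convert", ["-", "-rotate", angle, "png:-"]);
  convert.stdin.end(input);

  res.type("image/png");
  convert.stdout.pipe(res);
  convert.on("error", () => res.status(500).end());
});

app.listen(8080);
The same pattern covers the other requested operations (resizing, watermarking, format conversion) by swapping the convert arguments, and a disk cache keyed on the request URL would cover the server-side caching requirement.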
You could make this with Google App Engine -- they provide image processing routines and will host for free within some bounds.
Here are some examples of people doing things like this already
http://appgallery.appspot.com/results?q=image
I found this product; it seems to match my requirements.
Try an Nginx image processing server built with OpenResty and Lua; it uses the ImageMagick C API. OpenResty comes with LuaJIT and has amazing performance in terms of speed. Check out some of the benchmarks for LuaJIT and OpenResty.
I see that ImageMogr2 is some kind of tool used by qiniu.com (a Chinese hosting provider). Could someone help me understand what it is, and what similar technology is available from other hosting providers?
Yes.
You may notice that Tencent Cloud provides a very similar service with exactly the same name.
It's an image processing utility that can scale, crop, and rotate images on the fly using "URI programming": you define the processing command and its parameters in the request URI (for example, appending something like imageMogr2/rotate/90 to the image URL), and you get back the transformed version of the original image you uploaded before.
You can easily find their documentation and some simple examples on their website, e.g.
https://developer.qiniu.com/dora/api/1270/the-advanced-treatment-of-images-imagemogr2
though I'm not sure whether you can read Chinese.
There are similar solutions provided by a US company, e.g.
https://cloudinary.com/
We are trying to increase the Google page score for our website. One of the options for doing this is image optimization.
As we have a huge number of images in the DAM, how can we compress/optimize them? Does AEM have any such tool to achieve this?
ImageMagick is one tool that can achieve this. Do we need to integrate it with AEM, or will we have to re-upload all the images after compressing them with the tool?
Any suggestions?
In contrast to CSS, JS, and HTML files, which can be gzipped by the dispatcher, images can only be compressed by reducing their quality or by resizing them.
This is a quite common requirement for AEM projects, and there are a couple of options, some of which come out of the box and do not even require programming:
You can extend the DAM Update Asset workflow with the CreateWebEnabledImageProcess workflow process step. It allows you to generate a new image rendition with parameters like size, quality, and MIME type. Depending on the workflow launcher configuration, this rendition can be generated when assets are created or modified. You can also trigger the workflow to run on chosen assets, or on all of them.
If the CreateWebEnabledImageProcess configuration is not sufficient for your requirements, you can implement your own workflow process step and generate the proper rendition programmatically, using for example ImageHelper or a Java image-transformation framework. That might also be needed if you want to generate the compressed images on the fly: instead of generating a rendition for each uploaded image, you can implement a servlet bound to the proper selectors and image extensions (e.g. imageName.mobile.png) which returns the compressed image.
Finally, integration with ImageMagick is possible; the Adobe documentation describes how it can be achieved using the CommandLineProcess workflow process step. However, you need to be aware of the security vulnerabilities related to this that are mentioned in the documentation.
It is also worth mentioning that if your client needs more advanced image transformation in the future, integration with Dynamic Media can be considered as well; however, this is the most costly solution.
There are many ways to optimise images in AEM. Here I will go through three of them.
1) Using DAM Update Asset Workflow.
This is an out-of-the-box workflow in AEM: when images are uploaded, renditions are created. You can use those rendition paths in the img src attribute.
2) Using the ACS Commons Image Transformer
Install the ACS AEM Commons package and use the Image Transformer servlet configuration to generate optimised or transformed images according to your requirements. For more info on this, check the ACS AEM Commons documentation.
3) Using Google PageSpeed at the dispatcher level
If you want to reduce image sizes, Google PageSpeed is an option to consider. Install PageSpeed at the dispatcher level and add image optimisation rules to meet your requirements.
The corresponding PageSpeed Insights rule detects images on the page that can be optimised to reduce their file size without significantly impacting their visual quality.
For more info, see Google's documentation on optimising images.
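As a rough sketch (the directives are mod_pagespeed's; the exact filter list is an assumption you should adjust for your site), the dispatcher's Apache configuration could enable something like:
ModPagespeed on
ModPagespeedEnableFilters rewrite_images,recompress_images,convert_jpeg_to_progressive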
AEM offers options for "image optimisation", but this is a broad topic, so there is no magic switch you can flip to optimise your images. It all boils down to the number of kilo- or megabytes that are transferred from AEM to the user's browser.
The size of an asset is influenced by two things:
Asset dimension (width and height).
Compression.
The biggest gains can be achieved by simply reducing the asset's dimensions. AEM does that already: if you have a look at your asset's renditions, you will notice that there is not just the so-called original rendition but several other renditions with different dimensions.
MyImage.jpg
└── jcr:content
└── renditions/
├── cq5dam.thumbnail.140.100.png
├── cq5dam.thumbnail.319.319.png
├── cq5dam.thumbnail.48.48.png
└── original
The numbers in the rendition's name are the width and height of the rendition. So there is a version of MyImage.jpg that has a width of 140px and a height of 100px, and so on.
This is all done by the DAM Update Asset workflow when the image is uploaded and can be modified to generate more renditions with different dimensions.
But generating images with different dimensions is only half of the story. AEM has to select the rendition with the right dimensions at the right moment. This is commonly referred to as "responsive images". The AEM image component does not support responsive images out of the box, and there are several ways to implement this feature.
The gist of it is that your image component has to contain a list of URLs for the different-sized renditions. When the page is rendered, client-side JavaScript determines which rendition is best for the current screen size and adds its URL to the img tag's src attribute.
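As a minimal TypeScript sketch of that client-side selection (the data attribute, its format, and the breakpoint logic are made-up examples, not part of AEM):
// Assumed markup: <img data-renditions="140:/content/dam/MyImage.140.png,319:/content/dam/MyImage.319.png,1280:/content/dam/MyImage.1280.png">
document.querySelectorAll<HTMLImageElement>("img[data-renditions]").forEach((img) => {
  const candidates = (img.dataset.renditions ?? "")
    .split(",")
    .map((entry) => {
      const [width, url] = entry.split(":");
      return { width: Number(width), url };
    })
    .sort((a, b) => a.width - b.width);
  // Pick the smallest rendition at least as wide as the viewport, else the largest one available.
  const fit = candidates.find((c) => c.width >= window.innerWidth) ?? candidates[candidates.length - 1];
  if (fit) {
    img.src = fit.url;
  }
});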
I would recommend that you have a look at the fairly new AEM Core Components, which are distributed separately from AEM itself. Those Core Components contain an image component that supports responsive images. You can read more about them here:
AEM Core Components Image Component (GitHub)
AEM Core Components Documentation
Usually, components like that will not use the "static" renditions that were already generated by the DAM Update Asset workflow, but will instead rely on an Adaptive Image Servlet. This servlet basically takes the asset path and the target width and returns the asset at the requested width. To avoid doing this work over and over, you should allow the Dispatcher to cache the resulting image.
Those are just the basic things you can do. There are a lot of other things that can be done, but all of them bring smaller and smaller gains in terms of "optimisation".
I had the same need; I looked at ImageMagick too and researched various options. Ultimately I customized the workflows we use to create our image renditions to integrate with another tool. I modified them to use the Kraken.io API to automatically send the rendition images AEM produced to Kraken, where they would be fully web-optimized (using the default Kraken settings). I used their Java integration library to get the basic code for this integration. So eventually I ended up with properly web-optimized images for all the generated renditions (and the same could be done for the original), optimized automatically during a workflow without authors having to manually re-upload images. This API usage required a Kraken license.
So I believe the answer is that at this time AEM does not provide a feature to achieve this, and your best bet is to integrate with another tool that does it (custom code).
TinyPng.com was another image optimization service that looked like it would be good for this need and that also had an API.
And for the record, I also submitted this as a feature request to our AEM rep. It seems like a glaring product gap to me, and something I am surprised hasn't been built into the product yet to allow customers to make images fully web-optimized.
For developing a video-content-heavy website like YouTube, which language/framework might be the better option from the point of view of performance and support for video conversion/compression plugins? Some points worth considering:
CPU vs I/O time
Support for compression/conversion plugin (existing mods/gems/libs)
Ease of learning is not very important, though input on that is welcome.
I know the question sounds a bit subjective; however, my intention is to understand the technicalities involved from someone who has experience developing similar kinds of sites.
Unfortunately there aren't just one or two APIs/libraries/frameworks you can knit together to produce a video-serving website.
Invariably this will require heavy involvement on all levels of the stack:
The server back-end will require the following problems to be solved:
Video Encoding
Experience with FFmpeg or MPlayer for encoding any number of video formats to either FLV or, more recently, H.264 for HTML5-supported playback
A reliable mechanism for transcoding video in a background process; initially on one server, but eventually on multiple servers as your service scales
Video resizing
Bandwidth Management to throttle connection just enough so that the video trickles down to the user
Storing video files and a file sharding and naming mechanism
An API server - something like Rails, Django, or Node.js Express - to serve as a JSON service layer between web clients and the video encoding/serving service (see the sketch right after this list)
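To make the API-layer idea concrete, here is a minimal Node.js/Express sketch in TypeScript; the route names and job fields are illustrative assumptions, and a real system would back this with a queue and a database rather than an in-memory map.
import express from "express";

const app = express();
app.use(express.json());

// Hypothetical in-memory job store, standing in for a queue + database.
const jobs = new Map<string, { source: string; status: "queued" | "done" }>();

// A client asks for an uploaded video to be transcoded.
app.post("/videos", (req, res) => {
  const id = Date.now().toString(36);
  jobs.set(id, { source: req.body.sourceUrl, status: "queued" });
  // ...hand the job off to the background encoding workers here...
  res.status(202).json({ id });
});

// The client polls for transcoding status.
app.get("/videos/:id", (req, res) => {
  const job = jobs.get(req.params.id);
  job ? res.json(job) : res.status(404).end();
});

app.listen(3000);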
The front end will require the following issues to be solved:
Playing back the video reliably across multiple OSes and devices (Windows, OS X, Linux, tablets, mobile) and browsers (IE, Chrome/Safari, Firefox, Opera), with fallback support for older browsers
DRM - are your videos free or commercial? If the latter, this is another issue that needs to be addressed
I'd strongly recommend an event-driven system for your back-end, as it makes it much easier to write code that handles concurrency. Node.js would be a good pick, and the node-fluent-ffmpeg module for Node.js is worth looking at as a starting point.
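For instance, a rough transcoding sketch with node-fluent-ffmpeg (written here in TypeScript; the codec and quality settings are assumptions rather than recommendations):
import ffmpeg from "fluent-ffmpeg";

// Transcode an input file to an H.264/AAC MP4 suitable for HTML5 playback.
function transcode(input: string, output: string): Promise<void> {
  return new Promise((resolve, reject) => {
    ffmpeg(input)
      .videoCodec("libx264")
      .audioCodec("aac")
      .outputOptions(["-crf 23", "-preset veryfast", "-movflags +faststart"])
      .on("end", () => resolve())
      .on("error", reject)
      .save(output);
  });
}

transcode("upload.mov", "upload.mp4").then(() => console.log("transcode finished"));
In a real deployment this call would run inside the background workers mentioned above rather than in the request path.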
As for your front end, I'd recommend frameworks such as Backbone.js or AngularJS to develop your web app.
It was a fun and challenging journey when I attempted something similar a few years ago. I wish you good fortune in your journey.
For a site like that, I guess you will need to choose several tools to do the job.
For the web you could use any framework, so Rails would be OK; to deal with videos you'll need something like FFmpeg to transcode and convert them.
For streaming, if you can use HTML5, check this question; otherwise you'll need a player with a Flash fallback.
Remember that the heavy part in terms of storage and CPU is video compression/conversion.
I'm using Google's Custom Search API to dynamically provide web search results. I searched the API's docs very thoroughly and could not find anything that says it grants you access to Google's site image previews, which happen to be stored as base64-encoded images.
I want to be able to provide image previews for sites for each of the urls that the Google web search API returns. Keep in mind that I do not want these images to be thumbnails, but rather large images. My question is what is the best way to go about doing this, in terms of both efficiency and cost, in both the short and long term.
One option would be to crawl the web and generate and store the images myself. However, this is way beyond my technical ability, and storing all of these images would be too expensive.
The other option would be to dynamically fetch the images right after Google's API returns the search results. However where/how I fetch the images is another question.
Would there be a low cost way of me generating the images myself? Or would the best solution be to use some sort of site thumbnailing service that does this for me? Would this be fast enough? Would it be too expensive? Would the service provide the image in the correct size for me? If not, how could I change the size of the image?
I'd really appreciate answers that are comprehensive and for any code examples to be in ruby using rails.
So as you pointed out in your question, there are two approaches that I can see to your issue:
Use an external service to render and host the images.
Render and host the images yourself.
I'm no expert in the field, but my Googling has so far only returned services that let you generate thumbnails, not full-size screenshots (like the few mentioned here). If there are hosted services out there that will do this for you, I wasn't able to find them easily.
So, that leaves #2. For this, my first instinct was to look for a Ruby library that could generate an image from a webpage, which quickly led me to IMGKit (there may be others, but this one looked clean and simple). With this library, you can easily pass in a URL and it will use the WebKit engine to generate a screenshot of the page for you.
From there, I would save it to wherever your assets are stored (like Amazon S3) using a file attachment gem like Paperclip or CarrierWave (railscast). Store your attachment with a field recording the original URL you passed to IMGKit from WSAPI (Web Search API) so that you can compare against it on subsequent searches and use the cached version instead of re-rendering the preview. You can also use the created_at field of your attachment model to throw in some "if older than x days, refresh the image" type logic.
Lastly, I'd put this all in a background job using something like resque (railscast) so that the user isn't blocked while waiting for screenshots to render. Pass the array of URLs returned from WSAPI to background workers in resque, which will generate the images via IMGKit and save them to S3 via Paperclip/CarrierWave.
All of these projects are well documented, and the Railscasts will walk you through the basics of the resque and carrierwave gems.
I haven't crunched the numbers, but you can weigh the cost of hosting the images yourself on S3 against any external provider of web thumbnail generation. Of course, doing it yourself gives you full control over how the image looks (quality, format, etc.), whereas most of the services I've come across only offer a small thumbnail, so there's something to be said for that. If you don't cache the images from previous searches, your storage costs shrink even further, since you'll always be rendering the images on the fly. However, I suspect that this won't scale very well, as you may end up paying a lot more for server power (for IMGKit and image processing) and bandwidth (for external requests to fetch the source HTML for IMGKit). I'd be sure to include some metrics in your project to attach exact numbers to the kinds of requests you're dealing with, to help determine what the subsequent costs would be.
Anywho, that would be my high-level approach. I hope it helps some.
Screenshotting web pages reliably is extremely hard to pull off. The main problem is that the current solutions (khtml2png, CutyCapt, PhantomJS, etc.) are all based around Qt, which provides access to an embedded WebKit library. However, that WebKit build is quite old, and with HTML5 and CSS3 most of the effects either don't show up or render incorrectly.
One of my colleagues has used most, if not all, of the current technologies for generating screenshots of web pages for one of his personal projects. He has written an informative post about how he now uses a SaaS solution instead of trying to maintain a solution himself.
The TL;DR version: he now uses URL2PNG for all his thumbnail and full-size screenshots. It isn't free, but he says it does the job for him. If you don't want to use them, they have a list of their competitors.
I have an Erlang app which makes a large number of HTTP client calls using inets. I'd like to reduce my bandwidth bill by accepting gzipped data from servers that provide it. Is there an inets option that would handle this? [I can't find one.] Is there a zip library anyone could recommend? [I've looked at the stdlib zip library, but it seems only to unzip archives rather than decompress individual streams.]
Thanks!
Look at the zlib module; zlib:gunzip/1 will decompress a gzip-encoded binary. Also look at the compressed option to file:open/2 for possible future use. Note that zip and zlib (a.k.a. gzip) are not the same thing, but I think you have already noted that.
Look at http://blog.gebhardtcomputing.com/2007/09/grab-webpage-in-erlang-which-is-gzipped.html for some inspiration, but you will probably need streaming, which is well described in the zlib manual page.
I'd suggest doing compression/decompression at an HTTP front end (nginx, Apache, etc.). That will be more efficient.