Manage Image Submissions in WordPress

I have a client WordPress website that allows users to upload custom artwork from the front end.
I use TDO Mini Forms to create the submission form, but it doesn't seem to have options for file manipulation on upload. As a result, the images are often very large, sometimes in CMYK, and have various other issues.
I've managed the size issue, to some extent, using WP's media settings, but there are two issues that have vexed me:
It doesn't address the CMYK issue (which admittedly happens very rarely, but still prompts a call from the client).
WP doesn't discard the original image, which creates huge backup files.
Is there an extension out there that better manages submitted images? Even if it involves replacing the submission form (TDO Mini Forms works well, but has been unsupported for some time), I'm looking for any solution that meets this need.
IMO, the gold standard would be a WP equivalent to ExpressionEngine's Safecracker + DevDemon's ChannelImages.
Is there anything out there? I can't be the only one looking for this.
As always, any help is greatly appreciated.
Thank you.

Don't know if it's exactly what you need, but this may help you out:
http://www.verot.net/php_class_upload.htm
It's integrated into this plugin I used some months ago for a client:
http://wordpress.org/extend/plugins/photosmash-galleries/

The solution is a plug-in called Imsanity (http://wordpress.org/extend/plugins/imsanity/).
It still doesn't address the CMYK issue, but it does manage the file sizes, which is the more pressing issue.
It also works retroactively, so if you already have large images on your site, you can bulk resize them.
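For the CMYK problem specifically, any server-side image library can normalize the color space at upload time; in WordPress this would live in a PHP upload hook, but the conversion step itself is the same everywhere. Here is a minimal sketch in Python using Pillow — the file path and function name are invented for illustration, and naive CMYK-to-RGB conversion can shift colors on Adobe-flavored JPEGs, so treat this as a starting point:

```python
from PIL import Image

def normalize_to_rgb(path):
    """Convert a CMYK (or other non-RGB) upload to RGB in place.

    Hypothetical helper: in a real WordPress setup the equivalent
    logic would run from a PHP upload hook; this only illustrates
    the conversion step itself.
    """
    img = Image.open(path)
    if img.mode == "CMYK":
        img = img.convert("RGB")   # JPEG output requires RGB
        img.save(path, quality=85)

normalize_to_rgb("uploads/artwork-submission.jpg")
```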

Related

What are some options for handling image uploading/compression in ASP?

Please bear with me, as I'm not trying to frustrate anyone with inane questions; I did search Google for this but couldn't really find anything recent or helpful.
I am a novice programmer working on a classic ASP web application. I just enabled users to upload and download images, but I'm quickly regretting it, as it's eating up all of the router bandwidth. My current solution is inadequate, so I want to start over.
My desire is threefold with this functionality:
Compression. I understand that this is impossible to do BEFORE uploading without some kind of Java/Silverlight/Flash portion of the application to handle uploads, correct? What is the common way most places go about this? Just allow regular file uploads and compress once they are on the server?
Resizing. I want all images resized to a reasonable size, rather than just telling users who try to upload huge camera photos that they can't. I figure I should let them upload and have the images resized for them on the server. Does this functionality already exist?
Changing filetype. I want to allow users to upload any image file type but convert them to .jpg on the server after the upload.
With these three requirements, how hard would it be to implement something like this in pure code and libraries? Would it be better to just use a third-party component, such as ASPjpeg or ASPupload? Have you encountered something similar, and what was your solution?
Thanks.
Take a look at ASPJpeg and ASPUpload from Persits. We use these components to upload a full-size image (it can even be a PNG, despite the name "ASPJpeg"), resize it to the several different sizes we need on our site, then store the resized images on the server in a variety of folders. The ASPUpload component is a little tricky, but if you follow their sample code you'll be fine.
I never found a good component for decompressing uploaded zip files and had to write my own, which I've since abandoned. In the end with upload speeds increasing and storage getting so cheap, it started to matter less and less that the files were compressed before being uploaded.
EDIT: Just noticed you mentioned these components in your question. Consider this an endorsement of your idea to use them. :-)
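For reference, the resize-and-convert-to-JPEG workflow described above is language-agnostic; ASPJpeg would do this in classic ASP, but the same idea sketched in Python with Pillow looks like this. The target widths, paths, and function name are made up for illustration:

```python
import os
from PIL import Image

# Hypothetical target widths; pick whatever your site needs.
SIZES = {"thumb": 150, "medium": 600, "large": 1200}

def process_upload(src_path, out_dir):
    """Resize an uploaded image to several widths and save each as JPEG."""
    img = Image.open(src_path)
    if img.mode != "RGB":          # e.g. PNG with alpha, CMYK scans
        img = img.convert("RGB")   # JPEG requires RGB
    for name, width in SIZES.items():
        ratio = width / img.width
        resized = img.resize((width, int(img.height * ratio)))
        resized.save(os.path.join(out_dir, f"{name}.jpg"), quality=85)
```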

iOS App Backend Provider

I've recently submitted my iOS quiz app to Apple, but I noticed that the file size is pretty big (about 150 MB), so users would need to be connected to Wi-Fi to download it, per Apple's rules. My quiz app shows users an image and four choices, and they must guess the correct answer from the image. How can I minimize the file size so the app isn't so large? Is there a way I can host the images on a server without losing the functionality of my app? I've heard of something like backend services but know nothing about them. If anyone can guide me in the right direction, that would be awesome. Thanks!
You can check out a free backend service like Parse; it could do the trick for you, especially because you don't have a lot (besides images, I guess) that will live on the server side.
This also helped me get started with it.
Good luck :)
I'm assuming you have all the quiz data (questions and images) within your app bundle?
You can shrink it to next to nothing if you move all your questions and images to a backend server and serve the questions and image links using a simple JSON structure.
You can build your own backend (Java/PHP/etc..) or look into using Parse.
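To make the "simple JSON structure" mentioned above concrete, here is a hedged sketch of what a backend might return for one question; every field name and URL here is invented for illustration:

```python
import json

# Hypothetical payload for a single quiz question: the app downloads
# the image from `image_url` on demand instead of bundling it.
question = {
    "id": 42,
    "image_url": "https://example.com/quiz/images/42.jpg",
    "choices": ["Paris", "London", "Rome", "Berlin"],
    "answer_index": 0,
}

print(json.dumps(question, indent=2))
```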
Use JPEG images whenever possible; PNGs cost more space. Do not place JPEGs in xcassets, since they will be converted to PNGs. If your pictures need transparency, it is better to use the WebP or JPNG format.
You may use CloudKit to host your data in a public database; you won't need any backend knowledge to do that. This tutorial will help you understand the basics, and the WWDC videos cover some more; I suggest you look at WWDC 2014's "Introducing CloudKit" and WWDC 2015's "CloudKit Tips and Tricks."

Handling very large image files in web browsers

First post on SO; hopefully I am doing it right :-)
Have a situation where users need to upload and view very high-resolution files (they need to pan, tilt, zoom, and annotate images). A single file sometimes exceeds 1 GB, so loading the complete file on the client side is not an option.
We are thinking about letting users upload files to the server (as everyone does), then running server-side processing to create multiple, relatively small, lower-resolution renditions at varying sizes. We then give users thumbnails with a canvas-size option on the webpage so they can pick one and start their work.
Let's assume a user opens a low-grade image at a 1280 x 1028 canvas size. The image will be broken into tiles before display, and when the user clicks on a tile it will be like zooming in to that specific tile: the client sends a request to the server asking for a higher-resolution image of that tile. The server sends the image, which is broken into tiles again for the user to click and get yet another higher-resolution image from the server, and so on. Having multiple renditions at varying resolutions lets us break images into tiles and serve users who 'keep zooming in' or out.
Has anyone dealt with humongous image files? Is there a preferred technical design you can suggest? How to handle areas that have been split across tiles bothers me a lot, so I'm not sure how the above approach should be modified to address this.
We need to plan for 100 to 200 users connected to the website simultaneously, and ours is a .NET environment, if that matters.
Thanks!
The question is a little vague. I assume you are looking for hints, so here are a few:
I see uploading the images as a problem in the first place. Where I come from, upload speeds are way slower than download speeds. (But there is little you can do if you need your users to upload gigabytes...) Perhaps offer a more stable upload channel than the web; FTP if you must.
Converting into smaller pieces should be no big problem. Use one of the available tools, perhaps ImageMagick; I see there is a .NET wrapper out: https://magick.codeplex.com/
More important than the conversion itself: don't do it on the fly every time (you would need a really big machine), but only once, when the image is uploaded. If you need to scale, you can offload this to another box on the network.
For the viewer: this is the interesting part. There are some ready-to-use ones. Google has one; it's called 'Maps' :). But there is a free alternative: OpenLayers from the OpenStreetMap project: http://wiki.openstreetmap.org/wiki/OpenLayers All you have to do is name your generated files the right way and do a little configuration.
Even if you must, for some reason, create the tiles on the fly, or can't use something like OpenLayers, I would try to stick to its naming scheme. Having something working to start with is never a bad idea.
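To illustrate the "convert once, name the tiles the way the viewer expects" idea, here is a rough sketch of cutting fixed-size tiles per zoom level with Python's Pillow. The z/x/y.png layout mirrors the slippy-map convention that OpenLayers understands; the level count is a placeholder, edge tiles come out black-padded, and a real system handling gigabyte sources would stream with something like libvips rather than load the whole image into memory:

```python
import os
from PIL import Image

TILE = 256  # standard map tile size

def build_pyramid(src_path, out_dir, levels=4):
    """Cut an image into z/x/y.png tiles, halving resolution per level."""
    Image.MAX_IMAGE_PIXELS = None  # lift Pillow's size guard for big sources
    base = Image.open(src_path)
    for z in range(levels):
        # z = 0 is the most zoomed-out (smallest) rendition
        scale = 2 ** (levels - 1 - z)
        level_img = base.resize((base.width // scale, base.height // scale))
        for x in range(0, level_img.width, TILE):
            for y in range(0, level_img.height, TILE):
                # crop pads past-the-edge regions with black; fine for a sketch
                tile = level_img.crop((x, y, x + TILE, y + TILE))
                path = os.path.join(out_dir, str(z), str(x // TILE))
                os.makedirs(path, exist_ok=True)
                tile.save(os.path.join(path, f"{y // TILE}.png"))
```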

NicEdit and Imageshack : does Imageshack keep our images without account?

I'm using NicEdit for uploading pictures, but I have heard that Imageshack doesn't keep images for long if you aren't registered. Is that true with NicEdit?
Thanks!
Alex
Not familiar with NicEdit, but I've seen imageshack images become unavailable after they got hotlinked too much. It's rare though, and I've used imageshack for quite some years with very few problems. Never used an account.
But since they have been adding more restrictions over the past few months (trying to get people to create an account) I found myself using imgur.com more and more. I like it better than imageshack (the interface is easier, can upload multiple images etc). No account there either.
A site I can NOT recommend is tinypic.com. I'd been using that for a while as well, but images appear to have randomly changed (i.e., the same link suddenly resulted in a completely different picture). It happened quite a few times, so I never use it anymore.
So, altogether I'd say imageshack is fine, but imgur remains my #1 pick to date.

How should I go about providing image previews of sites while using Google's Web Search API?

I'm using Google's Custom Search API to dynamically provide web search results. I searched the API's docs very thoroughly and could not find anything stating that it grants you access to Google's site image previews, which happen to be stored as base64-encoded strings.
I want to be able to provide image previews for sites for each of the urls that the Google web search API returns. Keep in mind that I do not want these images to be thumbnails, but rather large images. My question is what is the best way to go about doing this, in terms of both efficiency and cost, in both the short and long term.
One option would be to crawl the web and generate and store the images myself. However, this is way beyond my technical ability, and storing all of these images would be too expensive.
The other option would be to dynamically fetch the images right after Google's API returns the search results. However where/how I fetch the images is another question.
Would there be a low cost way of me generating the images myself? Or would the best solution be to use some sort of site thumbnailing service that does this for me? Would this be fast enough? Would it be too expensive? Would the service provide the image in the correct size for me? If not, how could I change the size of the image?
I'd really appreciate answers that are comprehensive and for any code examples to be in ruby using rails.
So as you pointed out in your question, there are two approaches that I can see to your issue:
Use an external service to render and host the images.
Render and host the images yourself.
I'm no expert in the field, but my Googling has so far only turned up services that generate thumbnails, not full-size screenshots (like the few mentioned here). If there are hosted services out there that will do this for you, I wasn't able to find them easily.
So, that leaves #2. For this, my first instinct was to look for a Ruby library that could generate an image from a webpage, which quickly led me to IMGKit (there may be others, but this one looked clean and simple). With this library, you can easily pass in a URL and it will use the WebKit engine to generate a screenshot of the page for you. From there, I would save it to wherever your assets are stored (like Amazon S3) using a file-attachment gem like Paperclip or CarrierWave (railscast).
Store your attachment with a field recording the original URL you passed to IMGKit from WSAPI (Web Search API), so that on subsequent searches you can compare against it and use the cached version instead of re-rendering the preview. You can also use the created_at field on your attachment model to add some "if older than x days, refresh the image" logic.
Lastly, I'd put all of this in a background job using something like resque (railscast) so that the user isn't blocked while waiting for screenshots to render: pass the array of URLs returned from WSAPI to background workers in resque, which generate the images via IMGKit and save them to S3 via Paperclip/CarrierWave. All of these projects are well documented, and the Railscasts will walk you through the basics of the resque and CarrierWave gems.
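The flow above is Ruby-specific, but the caching decision at its core is worth sketching on its own. Here is a hedged Python outline of the "serve the cached screenshot unless it's missing or stale, otherwise re-render" idea; `render_screenshot`, the in-memory cache, and the one-week threshold all stand in for IMGKit, S3, and your own policy, and are invented for illustration:

```python
import time

CACHE = {}                 # url -> (created_at, image_path); stands in for S3 + DB
MAX_AGE = 7 * 24 * 3600    # hypothetical: refresh previews older than a week

def render_screenshot(url):
    """Placeholder for the IMGKit/WebKit rendering step; returns a stored path."""
    raise NotImplementedError

def preview_for(url):
    """Return a cached preview path, re-rendering when missing or stale."""
    entry = CACHE.get(url)
    if entry and time.time() - entry[0] < MAX_AGE:
        return entry[1]                    # fresh cached screenshot
    path = render_screenshot(url)          # in production: enqueue a resque-style job
    CACHE[url] = (time.time(), path)
    return path
```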
I haven't crunched the numbers, but you can weigh hosting the images yourself on S3 against using an external provider of web thumbnail generation. Of course, doing it yourself gives you full control over how the image looks (quality, format, etc.), whereas most of the services I've come across only offer a small thumbnail, so there's something to be said for that. If you don't cache the images from previous searches, your storage costs drop even further, since you'll always be rendering the images on the fly; however, I suspect that won't scale very well, as you may end up paying a lot more for server power (for IMGKit and image processing) and bandwidth (for the external requests that fetch the source HTML for IMGKit). I'd be sure to include some metrics in your project to attach exact numbers to the kinds of requests you're dealing with, to help determine the subsequent costs.
Anywho, that would be my high-level approach. I hope it helps some.
Screenshotting web pages reliably is extremely hard to pull off. The main problem is that all the current solutions (khtml2png, CutyCapt, PhantomJS, etc.) are based around Qt, which provides access to an embedded WebKit library. However, that WebKit build is quite old, and with HTML5 and CSS3, most effects either don't show or render incorrectly.
One of my colleagues has used most, if not all, of the current technologies for generating screenshots of web pages for one of his personal projects. He has written an informative post here about how he now uses a SaaS solution instead of trying to maintain a solution himself.
The TL;DR version: he now uses URL2PNG for all his thumbnail and full-size screenshots. It isn't free, but he says it does the job for him. If you don't want to use them, they have a list of their competitors here.
