Cloud/RESTful interface for Image/GraphicsMagick [closed] - imagemagick

I'm looking for a cloud service where I can upload images and have them cropped / resized.
Basically I'm looking for Zencoder, but for images instead of video.
I know about Cloudinary, but it forces you to store the images on their system so that they can jack up the fees. Is there any good, reputable alternative to it?

Blitline can do that for you.
http://www.blitline.com/docs/quickstart
Here is an example of how to do this with curl:
curl "http://api.blitline.com/job" \
  -d json='{ "application_id": "sgOob0A3b3RdYaqwTEJCpA",
             "src": "http://www.google.com/logos/2011/yokoyama11-hp.jpg",
             "functions": [ { "name": "blur",
                              "params": { "radius": 0.0, "sigma": 2.0 },
                              "save": { "image_identifier": "some_id" } } ] }'
Here is the list of image functions that can be used.
Note:
The source image should be public, or on your Amazon S3 account.
The target image will be stored on Blitline's S3 account, or on your own S3 account.
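If you prefer to submit the job from application code rather than curl, here is a minimal Python sketch of the same request; it simply re-posts the JSON shown above (the requests library is assumed to be installed, and the application_id is the placeholder from the example):
# Minimal sketch: submit the same Blitline job from Python instead of curl.
# The endpoint, application_id, and job JSON are copied from the example above.
import json
import requests

job = {
    "application_id": "sgOob0A3b3RdYaqwTEJCpA",
    "src": "http://www.google.com/logos/2011/yokoyama11-hp.jpg",
    "functions": [
        {
            "name": "blur",
            "params": {"radius": 0.0, "sigma": 2.0},
            "save": {"image_identifier": "some_id"},
        }
    ],
}

# Blitline expects the job as a form field named "json", exactly like curl's -d json=...
response = requests.post("http://api.blitline.com/job", data={"json": json.dumps(job)})
print(response.json())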

You mean something like
Magick Studio
provided by the developers of ImageMagick themselves? (This is free of charge.)
Or a paid 'software as a service':
Cloudinary
Blitline
AFAIU, ...
Cloudinary offers a RESTful as well as a JSON interface, Blitline only a JSON one.
Cloudinary probably offers a few more image manipulation features than Blitline.
Cropping and resizing work with both (see the example URL after this list).
Blitline is probably cheaper than Cloudinary.
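For example, with Cloudinary a crop/resize is expressed directly in the delivery URL. This sketch uses Cloudinary's public demo account and sample image; the w_300,h_200,c_fill segment requests a 300x200 fill crop:
https://res.cloudinary.com/demo/image/upload/w_300,h_200,c_fill/sample.jpg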
Update: sorry, I had forgotten you had mentioned Cloudinary already...

Alternatively, imgix could also be worthwhile to investigate. It is not a plain ImageMagick wrapper, as they mention themselves, but rather a broad offering of image manipulation commands.
Their documentation page lists examples of commands supplied via the URL, and the full API is documented extensively.
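As an illustration (not copied from imgix's docs), a resize-and-crop request is expressed as URL parameters roughly like this; your-source and photo.jpg are placeholders for your own imgix source and image, while w, h, and fit are imgix rendering parameters:
https://your-source.imgix.net/photo.jpg?w=600&h=400&fit=crop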

Related

Detect the number of physical objects in an image (Image processing) [closed]

I am developing a Ruby on Rails application where I want to detect the number of physical objects (bottles and food packets) in an image.
I just explored the Google Vision API (https://cloud.google.com/vision/) to check whether this is possible or not. I uploaded a photo containing some cool drink bottles and got the response below.
{
  "responses": [
    {
      "labelAnnotations": [
        {
          "mid": "/m/01jwgf",
          "score": 0.77698487,
          "description": "product"
        },
        {
          "mid": "/m/0271t",
          "score": 0.72027034,
          "description": "drink"
        },
        {
          "mid": "/m/02jnhm",
          "score": 0.51373237,
          "description": "tin can"
        }
      ]
    }
  ]
}
My concern here is that it does not give the number of cool drink bottles in the image; rather, it returns the types of objects in the photo.
Is this possible with the Google Vision API, or is there any other solution available for this?
Any help would be much appreciated.
Unfortunately, this is not a fully solved problem. You can go with object detection algorithms like Faster R-CNN and YOLO. They can give you the objects down to a bounding box if the classes are included in the dataset they were trained on (e.g. ImageNet or COCO); of course, you can also train your own detector with them. I recommend YOLO, which is really easy to use and nicely documented.
Also, you can deploy a DIGITS object detection server, which includes Faster R-CNN. It gives you a really nice user interface for using those models.
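To make the counting idea concrete, here is a rough Python sketch using a modern, COCO-pretrained YOLO model via the ultralytics package. The package, weight file, and image path are assumptions, not part of the original answer; any detector that returns per-class boxes would work the same way:
# Rough sketch: count bottles by counting YOLO detections of the "bottle" class.
# Assumes: pip install ultralytics; "shelf_photo.jpg" is a placeholder image path.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")            # small COCO-pretrained model; COCO includes "bottle"
result = model("shelf_photo.jpg")[0]  # run detection on one image

# result.names maps class ids to names; result.boxes.cls holds the class id per detection
bottle_ids = {i for i, name in result.names.items() if name == "bottle"}
count = sum(1 for c in result.boxes.cls.tolist() if int(c) in bottle_ids)
print(f"Detected {count} bottle(s)")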
I've made a simple command line program that detects faces and replaces them with emojis using OpenCV through JRuby. It's an absolute pain to set up, but once it is done it is a beauty to write in. I also made a small script to create OpenCV JRuby projects that can be executed with the required command line arguments in a shell script, which alleviates most, if not all, of the pain when setting up.
Later on when I'm at my computer I'll upload both the project and the script to GitHub and link them here if you want me to, but for now I can direct you to this project as an example.
EDIT
Here are the links to the JRuby OpenCV project and script:
JRuby OpenCV Project
Project Creation Script
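For reference, the face-detection part of a project like the one above boils down to a few OpenCV calls. Here is a rough Python equivalent (the original project uses JRuby; the image path is a placeholder, and a blur stands in for the emoji overlay):
# Rough Python/OpenCV sketch of the face-detection step described above.
# The Haar cascade file ships with OpenCV; "people.jpg" is a placeholder.
import cv2

img = cv2.imread("people.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    # Stand-in for the emoji overlay: blur each detected face region.
    img[y:y+h, x:x+w] = cv2.GaussianBlur(img[y:y+h, x:x+w], (51, 51), 0)

cv2.imwrite("people_out.jpg", img)
print(f"Found {len(faces)} face(s)")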

Core Data cloud sync [closed]

I am working on a Core Data app which needs to sync data to various platforms, including the web. Initially I started integrating StackMob, which seemed a fine candidate to handle this task. Now that StackMob is apparently shutting down, I'm looking for another BaaS framework/service as a replacement. Everyone suggests using Parse.com, but Parse is an 'always online' service which does not support offline sync. The users of my app need to be able to use the app offline and sync the cached data as soon as the device has an internet connection.
Building my own syncing backend is not an option at this moment, since I'm a small developer who has neither the time nor the resources to do this. What are my options? Are there any similar services which support offline sync for Core Data?
Note: I can't use iCloud since I want to sync to the web.
Update:
I stumbled upon Simperium, which seems to do offline Core Data syncing. Does anyone have any experience with this service?
I would suggest the Dropbox Datastore API. You can use the ParcelKit wrapper, which allows you to use Core Data. This gives you everything you need: offline use, Core Data, and a JavaScript API for your web component.
There is also Wasabi Sync, which is Core Data-native, and has a REST API for web use.
If you can drop the requirement for web, there are solutions like Ensembles and TICDS, which work with multiple backends (e.g. iCloud, Dropbox). (Disclosure: I develop Ensembles)
There is an open source package called FTASync that syncs Parse with Core Data. I looked at using it, but it was too simplistic for my app. Although I am a lone developer like you, I took on the task of modifying FTASync into what I need. I have it pretty much finished now, and it is very different from FTASync. If FTASync is not sufficient for you, contact me privately and perhaps we can work something out.
-Bob

Will Google Fiber reach the Northeast, USA? Still using DSL, and FiOS is not available. What should I do for uploading large amounts of data? [closed]

Google Fiber? I'm hoping that it will reach the Northeast, USA. I'm still using DSL for internet in my area; FiOS is not available, and the town has no plans for future installations.
I am trying to make use of a cloud server, and currently, with Verizon DSL, my upload speeds are terrible. It's not much use; it takes all day to upload, and I have multiple storage drives' worth of data to upload. I heard about Google Fiber but haven't heard much about it. It seems promising, and since Google is behind it, maybe it will reach us. I was wondering if anyone had any unpublished news concerning these areas.
Just uploading takes too long to make use of my server; what should I do?
Make sure you read the guidelines thoroughly before posting questions.
Concerning the availability of Google Fiber: Kansas City, KS and Kansas City, MO are the only areas where Google Fiber is currently available. They soon plan to expand to more areas. Information can be obtained from Google Fiber's official site, located here: Google Fiber
https://fiber.google.com/cities/#header=check
Concerning upload and transfer speeds for a large amount of data: the ultimate solution would be to use an upload station. One example is what a company called Aframe (aframe.com) offers through its Upload Partners, as it has upload partners in its cloud infrastructure; or you can send them your data. I'm not sure about your cloud server, but the best-case scenario would be if they had those services in place.
There are multiple useful upload managers that are standalone or integrate into Windows Explorer; they will help keep your uploads from dropping and also have additional speed settings you can apply for significantly better performance compared to standard uploading through a web browser.
Here are some that might help you. Most of these are essentially FTP clients.
FileZilla comes highly recommended. Great support information and integration. http://filezilla-project.org/
FTPGetter allows you to automate FTP and SFTP uploads. http://www.ftpgetter.com/
WebCEO FTP Upload Manager. http://www.websiteceo.com/ftp_upload_manager.htm
Well, good luck, and I hope FiOS comes to your area soon.

smush.it vs OptiPNG / pngcrush [closed]

I'd like to see some comparison numbers for online vs. offline image optimizers, namely Yahoo! Smush.it vs. OptiPNG or pngcrush.
How do those tools differ in speed and resulting image size, and what is the best choice?
Very detailed and comprehensive comparison — with lots of tools and results on many different types of PNGs and optimizations:
http://css-ig.net/png-tools-overview
I think it's a much better source than PunyPNG's small comparison showing that their tool is best [partly by converting image formats rather than optimizing the existing format] :)
I really don't know how reliable the information on this site is, because they have their own compression service, but take a look at the comparison at this URL: http://punypng.com/about/comparison
I copied the following image:
And installed two of the tools you mentioned offline:
brew install optipng pngcrush
And compared image sizes using default settings with an online tool called reSmush.it:
879K feat-social-awareness.original.png
712K feat-social-awareness.optipng.png
700K feat-social-awareness.pngcrush.png
205K feat-social-awareness.resmushit.png
Speed of each tool was not measured for the above test. Subjectively they all felt about the same.
Comparing the images visually I was unable to see the difference between the original and the optimized versions created using the offline tools. In the case of reSmush.it, however, there was a noticeable loss in image fidelity which can be easily reproduced using their API (see example).
As a result, the above sizes are not an apples-to-apples comparison. More like apples-to-gorillas. So I went back and increased the reSmush.it quality to 100 by setting qlty=100 as specified in their API docs and got back the same lossy PNG as with the default settings.
So what's the best choice? Well, it depends…
If compute resources are a major constraint consider using reSmush.it.
If image fidelity is a concern don't use reSmush.it.
If you use OptiPNG you're likely going to lose your original files (it overwrites by default).
If you use pngcrush you're getting slightly better compression than OptiPNG without a noticeable loss in image fidelity.
If you want lossy optimization similar to reSmush.it in an offline tool try pngquant.
And if serving images over the wire under heavy bandwidth constraints consider a different image format altogether, such as Fabrice Bellard's BPG Image format.
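If you want to reproduce a size comparison like the one above on your own images, here is a minimal Python sketch; it assumes optipng and pngcrush are installed (e.g. via the brew command earlier) and uses placeholder file names:
# Minimal sketch: run optipng and pngcrush on a PNG and compare output sizes.
# Assumes both tools are on PATH; "input.png" is a placeholder file name.
import os
import subprocess

src = "input.png"

# Use -out so optipng writes a copy instead of overwriting the original.
subprocess.run(["optipng", "-out", "input.optipng.png", src], check=True)
subprocess.run(["pngcrush", src, "input.pngcrush.png"], check=True)

for path in (src, "input.optipng.png", "input.pngcrush.png"):
    print(f"{os.path.getsize(path) // 1024}K  {path}")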

Best 3rd Party Resume Parser Tool [closed]

We are working on a hiring application and need the ability to easily parse resumes. Before trying to build one, I was wondering what resume parsing tools are available out there and which is the best one, in your opinion. We need to be able to parse both Word and TXT files.
I suggest looking at some AI tools. Three that I'm aware of are
ALEX
Sovren
Resume Mirror
I think all of these products handle Word, txt, and pdf along with a bunch of other document types. Although I've never used it, I've heard unfavorable things about Resume Mirror's accuracy and customer support. I'm a contract recruiter and have used both Sovren's and Hireability's parsers in different ATSs. From my view, I thought Hireability did a better job; with Sovren it seemed like I was always fixing errors. And when there was a goof with Hireability's, I gave it to my ATS vendor and it seemed like it was fixed pretty quickly. Good luck.
Don't try to build one unless you want to dedicate your life to it. Don't re-invent wheels!
We build and sell a recruitment system. I did a long evaluation a few years ago and went for Daxtra - the other one in the frame was Burning Glass but I got the impression that Daxtra did non-US resumes better.
Anyway, we're re-evaluating it. Some parts it does brilliantly (name, address, phone numbers, work history), as long as the resume is culturally conventional; if it's not, then it fails. What do I mean? Well, if the resume has as its first line:
Name: Sun Yat Sen
then Daxtra is smart enough to figure out that Sun Yat Sen is the guy's name. (Girl's?)
But if it has as the first line:
Sun Yat Sen
It can't figure it out.
On the other hand if the first line is
Johnny Rotten
then Daxtra works out his name.
Also, it works really well on UK addresses, fairly well on Australian addresses, crashes and burns on Indonesian addresses. That said, we've just parsed 35,000 Indonesian resumes relatively well - CERTAINLY far better than not doing it at all, or doing it manually!
On Skilling: I reckon if someone really tried to make the Skills section work then it would take 3 man-months or so and it would work really well.
Summary: Don't write it yourself; do some really good research on real resumes that you want parsed, and dive in.
The key thing is: Don't expect any tool to be anywhere near 100% accurate - but it's a lot better than not having it.
Neil
FWIW I just ran 650 international resumes through Rchilli and found the accuracy to be very poor. Names & addresses were mangled and the detail fields were hit and miss.
This was a mix of pdfs & Word docs, primarily from Europe & Asia.
I have seen a lot of resumes in PDF format. Are you sure you don't care about them?
I'd recommend something simple:
1. Download Google Desktop Search or a similar tool (e.g. Copernic).
2. Drop the files in a directory.
3. Point the index tool to that directory, and punch in your search terms.
You may want to have a look at eGrabber and RChilli; these are two of the best tools out in the market.
I was wondering if anyone could update this list. It seems all of these are from 2010, almost 3 years old.
We integrated RChilli and found no flaws; their support is the best, and the product is easy to use.
We tested RChilli, Hireability, and Daxtra. Sovren never responded to our emails.
Integration was smooth, and their support is the best.
