Retrieving a cropped photo from the Facebook Graph API - iOS

I have developed part of an iOS application that uses Facebook's Graph API to access a user's photos and lets the user crop an image into a square at a desired zoom. The images must be squares. Is there a way to pass a rect parameter to the Graph API so that it returns a URL for the photo cropped to that rect? From my research it seems there isn't, but I was hoping for another set of eyes on that.
Assuming there isn't, which sounds like the better idea?
1. Upload the cropped photo to my own servers for future access.
2. Use my own SQL database to store the crop rect and the URL of the photo (hosted by Facebook), then load the full Facebook photo and crop it to that rect myself.
Option 1 offers efficiency when loading data over the network, but it means storing more data on my own servers (this could get expensive in the future).
Option 2 means I use less space on my own servers, but it also means the entire photo has to be downloaded, even the parts that won't be used.
I'm leaning towards option 2, but I don't do much web/database work, so I was hoping for some advice. Thanks.

Facebook already applies some transformations to the pictures. It is not exactly what you want, but it might be worth a look:
https://developers.facebook.com/docs/graph-api/reference/photo
https://developers.facebook.com/docs/graph-api/reference/platform-image-source/
CBroe has a good point about not storing the URLs obtained from Facebook directly. I would try to save the cropped picture on your servers.
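If you do end up going with option 2 instead, the client-side crop itself is tiny. Here is a minimal Swift sketch, assuming the rect you store is expressed in the image's pixel coordinates (the helper name is just for illustration):

```swift
import UIKit

// Crops `image` to `cropRect`, where `cropRect` is the square you stored,
// expressed in the image's pixel coordinates.
func croppedSquare(from image: UIImage, cropRect: CGRect) -> UIImage? {
    guard let cgImage = image.cgImage,
          let croppedCG = cgImage.cropping(to: cropRect) else { return nil }
    return UIImage(cgImage: croppedCG,
                   scale: image.scale,
                   orientation: image.imageOrientation)
}
```

The trade-off remains that the full-resolution photo still has to come down from Facebook before the crop is applied.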

Related

Sharing ARKit experiences over the web

We're looking to share AR experiences (ARWorldMap) over the web (not necessarily to devices nearby; I'm referring to data that can be stored on some server and then retrieved by another user).
Right now we're looking into ARWorldMap, which is pretty awesome, but I think it only works on the same device AND with devices nearby. We want to remove those constraints (and therefore save the experience on some server over the web) so that everyone else can automatically see the 3D objects with their devices (not necessarily at the same time) exactly where they were placed.
Do you know if it's possible to send the archived data (ARWorldMap) to some server in some kind of format so that another user can later retrieve that data and load the AR experience on their device?
The ARWorldMap contains the feature points in the environment around the user: for example, the exact position and size of a table, along with all the other points found by the camera. You can visualize those with debugOptions.
It does not make sense to share those with a user who is not in the same room.
What you want to do is share the interactions between the users, e.g. when an object was placed or rotated.
This is something you would need to implement yourself anyway since ARWorldMap only contains anchors and features.
I can recommend Inside SwiftShot from last year's WWDC as a starting point.
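To make that concrete, a shared interaction can be as small as a Codable event that each client posts to your server; the PlacementEvent type below is hypothetical and only illustrates the shape of the data:

```swift
import Foundation

// Hypothetical event describing "user placed object X at transform T".
// Sharing these is much cheaper than sharing a whole ARWorldMap.
struct PlacementEvent: Codable {
    let objectID: String      // which 3D asset was placed
    let transform: [Float]    // 4x4 world transform, flattened column-major
    let timestamp: Date
}

// Encode to JSON so any backend can store and relay it.
func jsonBody(for event: PlacementEvent) throws -> Data {
    try JSONEncoder().encode(event)
}
```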
Yep, technically it's possible according to the docs here. You will have to use the NSSecureCoding protocol, and you might have to do some extra work if you aren't using a Swift backend. It's still possible, since ARKit anchors are defined by a mix of point maps, rotational data, and image algorithms. Just implement portions of Codable to JSON and voilà, it's viable for most backends. Of course, this is easier said than done.
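As a rough sketch of the archiving step (the upload endpoint is a placeholder and error handling is minimal):

```swift
import ARKit

// Serialize an ARWorldMap with secure coding and POST the bytes to a server.
func upload(worldMap: ARWorldMap, to url: URL) throws {
    let data = try NSKeyedArchiver.archivedData(withRootObject: worldMap,
                                                requiringSecureCoding: true)
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    URLSession.shared.uploadTask(with: request, from: data) { _, _, error in
        if let error = error { print("Upload failed: \(error)") }
    }.resume()
}

// Later, on another device, turn the downloaded bytes back into a map.
func worldMap(from data: Data) throws -> ARWorldMap? {
    try NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self, from: data)
}
```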

Google Cloud Vision API - Partial matching image

I am using the Google Cloud Vision API to search for similar images (web detection) and it works pretty well. Google detects full matching images and partial matching images (cropped versions).
I am looking for a way to detect more varied versions. For example, when I search for a logo, I would like to detect large, small, square, rectangular ... versions of this logo. For now, I only detect images that exactly match the one I upload, plus cropped versions.
Do you know if this is possible and how can I do that?
The web detection feature supports detecting exact and partial matches of the image, as well as returning the URLs of visually similar images, as mentioned here; however, keep in mind that matches need a certain degree of similarity to the original image, which means that objects that don't satisfy these parameters cannot be detected by the model.
In case this feature doesn't cover your current needs, you can use the Send Feedback button, located in the lower-left and upper-right corners of the service's public documentation, or take a look at the Issue Tracker tool in order to raise a Vision API feature request and notify Google about this desired functionality.
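For reference, web detection is exposed through the images:annotate REST endpoint, so it can be called with a plain HTTP request. The sketch below (in Swift, with a placeholder API key and no response parsing) only shows the shape of such a request:

```swift
import Foundation

// Rough sketch of a WEB_DETECTION request against the Vision REST API.
// `apiKey` and `imageData` are placeholders you would supply yourself.
func requestWebDetection(imageData: Data, apiKey: String) {
    let url = URL(string: "https://vision.googleapis.com/v1/images:annotate?key=\(apiKey)")!
    let body: [String: Any] = [
        "requests": [[
            "image": ["content": imageData.base64EncodedString()],
            "features": [["type": "WEB_DETECTION", "maxResults": 10]]
        ]]
    ]
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try? JSONSerialization.data(withJSONObject: body)
    URLSession.shared.dataTask(with: request) { data, _, _ in
        // The webDetection field of the response lists fullMatchingImages,
        // partialMatchingImages and visuallySimilarImages.
        if let data = data { print(String(data: data, encoding: .utf8) ?? "") }
    }.resume()
}
```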

Generating Thumbnails on Client

My team and I are building an iOS application. We allow technicians in the field to upload images for certain issues they are resolving on technical equipment. It will be important to zoom in (so keep quality relatively high) when these images are uploaded to S3.
Recently we decided to add thumbnails, because browsing in the iOS app will be much faster than downloading a 1.5–2.5 MB image each time.
My co-worker decided the best way to handle this is to generate a 200–500 KB thumbnail on iOS and then upload both the image and the thumbnail to S3.
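For context, the client-side step he has in mind is roughly this (a minimal Swift sketch using ImageIO downsampling; the 512 px target and JPEG quality are my own placeholders, not our actual spec):

```swift
import UIKit
import ImageIO

// Downsample the full image into a small thumbnail before upload.
func makeThumbnail(from imageData: Data, maxPixelSize: Int = 512) -> Data? {
    let options: [CFString: Any] = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceThumbnailMaxPixelSize: maxPixelSize,
        kCGImageSourceCreateThumbnailWithTransform: true
    ]
    guard let source = CGImageSourceCreateWithData(imageData as CFData, nil),
          let cgThumb = CGImageSourceCreateThumbnailAtIndex(source, 0, options as CFDictionary)
    else { return nil }
    // JPEG at ~70% quality usually lands well under a few hundred KB.
    return UIImage(cgImage: cgThumb).jpegData(compressionQuality: 0.7)
}
```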
I voiced my concern that some of our technicians may be in parts of the world where internet is slow and data usage is limited, so doing all this additional work on the device and uploading the extra data makes no sense to me. However, the team considers this a good solution and will move forward. I've shown them simple examples of how to generate thumbnails automatically on the server with S3 and Lambda, allowing us to either upload higher-fidelity images with the extra bandwidth or just speed up the app by uploading much less. Sometimes a user may upload as many as 100 images, meaning an additional 20–50 MB...
Anyways I wanted to hear some answers about how you guys think the best way to handle this is, mainly for my own sanity check.
I don't completely comprehend the intricacies of your project, but from experience, I have one word for you: Cloudinary. As opposed to S3, which is a general-purpose cloud storage solution, Cloudinary is designed to handle images.
We have an online classifieds app with 200,000 hits a day that handles tens of thousands of photos daily, and Cloudinary provides an extremely capable solution for all our needs. We have uploads by users from their mobile and desktop devices, bookmarking of those images, CDN-based serving, and thumbnail generation.
Did I mention they have thumbnail generation built in? They have lots of other features as well, including
Resize and Crop
Optimized JPEG
Custom Crop
Face Thumbnail
Rotated Circular Thumbnail
Zoom Effects and Zoom Image Overlay
Watermark Image
Optimized WebP
Overlay, Border, Shadow
Text Overlay, Border, Shadow, etc.
The admin console is pretty kickass too, with all of the above features available for you to configure over the cloud. And it fits well with pretty much any application (we use it in our internal Ruby, Go, NodeJS services, our Web Application and our iOS and Android apps as well).
I'm not paid to sell Cloudinary to you, but I can vouch that if it's image-based services I need, I would go for Cloudinary over S3 any day. Major players like eBay and TED use it for their image requirements.
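To give a sense of what those transformations look like in practice: Cloudinary applies them through URL parameters, so a thumbnail is just a different URL for the same uploaded asset. A small illustrative sketch (the cloud name and public ID below are placeholders):

```swift
import Foundation

// "demo" and "sample" stand in for your own cloud name and asset public ID.
let cloudName = "demo"
let publicID  = "sample"

// w_200,h_200,c_fill crops to a 200x200 square; g_face centers on a detected face.
let thumbnailURL = URL(string:
    "https://res.cloudinary.com/\(cloudName)/image/upload/w_200,h_200,c_fill,g_face/\(publicID).jpg")!

// The original, untransformed upload stays available at:
let fullSizeURL = URL(string:
    "https://res.cloudinary.com/\(cloudName)/image/upload/\(publicID).jpg")!
```

As I understand it, variants are generated on the first request and then served from the CDN, so the client never has to resize anything itself.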

Handling very large image files in web browsers

First post on SO; hopefully I am doing it right :-)
We have a situation where users need to upload and view very high-resolution files (they need to pan, tilt, zoom, and annotate images). A single file sometimes exceeds 1 GB, so loading the complete file on the client side is not an option.
We are thinking about letting users upload files to the server (like everyone does), then doing some processing on the server side to create multiple, relatively small, lower-resolution images of varying sizes. We then give users thumbnails with a canvas-size option on the webpage so they can pick one and start their work.
Let's assume a user opens a low-grade image at a 1280 x 1028 canvas size. The image will be broken into tiles before display, and when the user clicks on a tile it will be like zooming in on that specific tile. The client will send a request to the server asking for a higher-resolution image for that tile. The server will send the image, which will be broken into tiles again for the user to click and get another, higher-resolution image from the server, and so on... Having multiple images at varying resolutions will help us break images into tiles and serve the user's needs ('keep zooming in' or out using tiles).
Has anyone dealt with humongous image files? Is there a preferred technical design you can suggest? How to handle areas that are split across tiles is bothering me a lot, so I'm not sure how the above approach could be modified to address this issue.
We need to plan for 100 to 200 users connected to the website simultaneously, and ours is a .NET environment, if it matters.
Thanks!
The question is a little vague. I assume you are looking for hints, so here are a few:
I see uploading the images as a problem in the first place. Where I come from, upload speeds are way slower than download speeds. (But there is little you can do if you need your users to upload gigabytes...) Perhaps offer a more stable upload channel than the web; FTP if you must.
Converting into smaller pieces should be no big problem. Use one of the available tools, for example ImageMagick. I see there is a .NET wrapper out: https://magick.codeplex.com/
More important than the conversion itself: don't do it every time on the fly (you would need a really big machine), but only once, when the image is uploaded. If you want to scale, you can outsource this to another box on the network.
For the viewer: this is the interesting part. There are some ready-to-use ones. Google has one. It's called 'Maps' :). But there is a free alternative: OpenLayers from the OpenStreetMap project: http://wiki.openstreetmap.org/wiki/OpenLayers All you have to do is name your generated files in the right way and do a little configuration.
Even if you must create the tiles on the fly for some reason, or can't use something like OpenLayers, I would try to stick to its naming scheme. Having something that works to start with is never a bad idea.
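To illustrate that naming scheme: OpenLayers-style tiles are addressed as zoom/column/row. Applied to a plain image pyramid rather than geographic coordinates, the lookup is only a few lines. A rough sketch (written in Swift purely for illustration), assuming 256-pixel tiles and that each zoom level halves the resolution:

```swift
import Foundation

// Assumes zoom level `maxZoom` is the full-resolution image and each lower
// level is scaled down by a factor of 2. Tiles are 256 px on a side.
let tileSize = 256.0

func tilePath(forPixelX px: Double, pixelY py: Double,
              zoom: Int, maxZoom: Int) -> String {
    let scale = pow(2.0, Double(maxZoom - zoom))  // downscale factor at this zoom
    let column = Int((px / scale) / tileSize)     // tile column (x)
    let row    = Int((py / scale) / tileSize)     // tile row (y)
    return "\(zoom)/\(column)/\(row).jpg"         // e.g. "3/5/2.jpg"
}
```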

How should I go about providing image previews of sites while using Google's Web Search API?

I'm using Google's Custom Search API to dynamically provide web search results. I searched the API's docs very thoroughly and could not find anything stating that it grants you access to Google's site image previews, which happen to be stored as base64-encoded images.
I want to be able to provide image previews for sites for each of the urls that the Google web search API returns. Keep in mind that I do not want these images to be thumbnails, but rather large images. My question is what is the best way to go about doing this, in terms of both efficiency and cost, in both the short and long term.
One option would be to crawl the web and generate and store the images myself. However this is way beyond my technical ability, and plus storing all of these images would be too expensive.
The other option would be to dynamically fetch the images right after Google's API returns the search results. However where/how I fetch the images is another question.
Would there be a low cost way of me generating the images myself? Or would the best solution be to use some sort of site thumbnailing service that does this for me? Would this be fast enough? Would it be too expensive? Would the service provide the image in the correct size for me? If not, how could I change the size of the image?
I'd really appreciate answers that are comprehensive and for any code examples to be in ruby using rails.
So as you pointed out in your question, there are two approaches that I can see to your issue:
Use an external service to render and host the images.
Render and host the images yourself.
I'm no expert in this field, but my Googling has so far only returned services that let you generate thumbnails rather than full-size screenshots (like the few mentioned here). If there are hosted services out there that will do this for you, I wasn't able to find them easily.
So, that leaves #2. For this, my first instinct was to look for a Ruby library that could generate an image from a webpage, which quickly led me to IMGKit (there may be others, but this one looked clean and simple). With this library, you can easily pass in a URL and it will use the WebKit engine to generate a screenshot of the page for you.
From there, I would save it to wherever your assets are stored (like Amazon S3) using a file attachment gem like Paperclip or CarrierWave (railscast). Store your attachment with a field recording the original URL you passed to IMGKit from WSAPI (Web Search API) so that you can compare against it on subsequent searches and use the cached version instead of re-rendering the preview. You can also use the created_at field on your attachment model to throw in some "if older than x days, refresh the image" type logic.
Lastly, I'd put this all in a background job using something like resque (railscast) so that the user isn't blocked while waiting for screenshots to render. Pass the array of URLs returned from WSAPI to background workers in resque that will generate the images via IMGKit, saving them to S3 via Paperclip/CarrierWave. All of these projects are well-documented, and the Railscasts will walk you through the basics of the resque and CarrierWave gems.
I haven't crunched the numbers, but you could weigh hosting the images yourself on S3 against any external provider of web thumbnail generation. Of course, doing it yourself gives you full control over how the image looks (quality, format, etc.), whereas most of the services I've come across only offer a small thumbnail, so there's something to be said for that. If you don't cache the images from previous searches, your storage costs drop even further, since you'll always be rendering the images on the fly. However, I suspect that this won't scale very well, as you may end up paying a lot more for server power (for IMGKit and image processing) and bandwidth (for external requests to fetch the source HTML for IMGKit). I'd be sure to include some metrics in your project to attach exact numbers to the kinds of requests you're dealing with, to help determine what the subsequent costs would be.
Anywho, that would be my high-level approach. I hope it helps some.
Screenshotting web pages reliably is extremely hard to pull off. The main problem is that all the current solutions (khtml2png, CutyCapt, PhantomJS, etc.) are based around Qt, which provides access to an embedded WebKit library. However, that WebKit build is quite old, and with HTML5 and CSS3, most of the effects either don't show up or render incorrectly.
One of my colleagues has used most, if not all, of the current technologies for generating screenshots of web pages for one of his personal projects. He has written an informative post here about how he now uses a SaaS solution instead of trying to maintain a solution himself.
The TL;DR version: he now uses URL2PNG to do all his thumbnail and full-size screenshots. It isn't free, but he says it does the job for him. If you don't want to use them, they have a list of their competitors here.
