I want to save thumbnails of websites by just entering their URLs. For example, if I enter http://www.google.com, it should generate a thumbnail of the Google search page.
One such API that I was using until now is http://counter2.goingup.com/thumboo/image.php.
A sample URL for that:
http://counter2.goingup.com/thumboo/image.php?i=1f899e4e1abf9473ccae69de4f3ec1ca|||www.google.com|||80x50
But of late it has been showing the error "URL not found". Does anybody know what exactly has gone wrong with this API?
Is there any other convenient third-party API out there that could help me? By convenient, I mean it should not show a lame "Screenshot queued up" message every time it fails to find a pre-existing snapshot of that website in its database.
Use PhantomJS to create screenshots. PhantomJS comes with an example called rasterize.js, which does exactly this. Example:
phantomjs rasterize.js http://raphaeljs.com/polar-clock.html clock.png
Docs here. Related projects including web services here.
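If you want small thumbnails rather than full-page captures, you can wrap rasterize.js in a tiny script that renders each URL and then downscales the result. A rough Ruby sketch, assuming phantomjs and rasterize.js are on the path and ImageMagick's convert is installed for the resize step:

    # thumbs.rb -- hypothetical wrapper around PhantomJS's rasterize.js
    # Usage: ruby thumbs.rb http://www.google.com http://raphaeljs.com
    require 'uri'

    ARGV.each do |url|
      name = URI.parse(url).host.gsub('.', '_')
      # render the full page to a PNG
      system('phantomjs', 'rasterize.js', url, "#{name}.png")
      # downscale to an 80x50 thumbnail with ImageMagick (assumed installed)
      system('convert', "#{name}.png", '-resize', '80x50', "#{name}_thumb.png")
    end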
http://snapit.io works well and gives you historical caching on a CDN. For example, for this question the URL would look like:
http://www.snapit.io/snaps?url=https://stackoverflow.com/questions/7907170/get-thumbnails-of-a-website-from-their-urls
If you wanted a thumbnail of 200x200 pixels (keeping the aspect ratio), you could do:
http://www.snapit.io/snaps?url=https://stackoverflow.com/questions/7907170/get-thumbnails-of-a-website-from-their-urls&max_width=200&max_height=200
There are a lot of other services just like this out there, though most (including snapit.io) require a subscription for any substantial amount of use: http://url2png.com, http://www.shrinktheweb.com, http://www.thumbalizr.com.
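Since these services are driven entirely by query parameters, fetching and saving a thumbnail is just a matter of building the URL and downloading the response. A minimal Ruby sketch against the snapit.io endpoint above (the url, max_width, and max_height parameters come from the examples; the assumption that the response body is the raw image is mine):

    require 'net/http'
    require 'uri'
    require 'cgi'

    def snapit_thumbnail(page_url, max_width: 200, max_height: 200)
      endpoint = "http://www.snapit.io/snaps" \
                 "?url=#{CGI.escape(page_url)}" \
                 "&max_width=#{max_width}&max_height=#{max_height}"
      Net::HTTP.get_response(URI(endpoint)).body  # assumed to be the raw image bytes
    end

    File.binwrite('thumb.png', snapit_thumbnail('https://stackoverflow.com'))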
I am working on an iOS app where users can add a description/text when uploading images, like Snapchat.
Do they render the image and add the text to it so that it becomes part of the image itself, or is it shown as a UILabel over the image?
For the second option, the text would have to be sent to the server separately.
P.S. I'm just having an argument with the server-side programmer, and I'm suggesting the second option.
If we check /ph/upload in the Snapchat API (last updated 23-12-2013), we can see that you can upload either a photo or a video.
Of course, this is not the latest version (although it is the last documentation I could find), but I am assuming nothing has changed in that regard.
That means the text is inserted into the photo in the mobile client app, not on the server.
In my opinion, you shouldn't base any decisions about your API architecture on Snapchat because it's unlikely you have the same use cases. In general:
Sending data separately is more flexible and makes client implementation simpler.
Rendering data on the client is better for user experience (everything is faster and the user can see the final result) and also it saves a lot of server resources (the more users you have the more this will be visible).
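If you do go with sending the text separately, the upload request simply carries the caption alongside the image, and the client decides how to draw it. A minimal sketch of what such a payload might look like from the uploading side; the endpoint and field names are made up for illustration:

    require 'net/http'
    require 'uri'
    require 'json'

    uri = URI('https://api.example.com/photos')            # hypothetical upload endpoint
    payload = {
      image_base64: [File.binread('photo.jpg')].pack('m0'),  # Base64-encode the image bytes
      caption: 'Hello from the beach!',                      # the text travels as data, not pixels
      caption_position: { x: 0.5, y: 0.8 }                   # where the client should draw it
    }
    response = Net::HTTP.post(uri, payload.to_json, 'Content-Type' => 'application/json')
    puts response.code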
I share website content on Facebook through Facebook Open Graph.
But some of the URLs give the above error (though the URLs of the posts and their images are correct).
For example,
this URL works:
http://dehmazang.org/post?id=00061&cat=articles
If I change its category (articles) to (roznigar),
it also works:
But if I just change its category (articles) to (goftago),
it doesn't work,
and changing to some other categories doesn't work either.
I don't really know whether the fault is with the website or with Facebook.
Stack Overflow is a place for people writing their own code; webapps.stackexchange.com or superuser.stackexchange.com might be a better place to ask.
Facebook's scraper works the same way with all websites; it's about the setup of each individual website. Put the links into http://developers.facebook.com/tools/debug
I have previously used the following URL to access my tweets and embed them on a website:
http://twitter.com/statuses/user_timeline/my_username.rss
It seems that, just this morning, I started getting the following error:
Sorry, that page does not exist
Does anyone know what might have happened to this service, and what an alternative might be?
Try http://api.twitter.com/1/statuses/user_timeline.rss?screen_name=USERNAME
It appears that this is a permanent change made by Twitter (see the Twitter API's tweet).
Instead, they're moving to a versioned system like the one below:
https://api.twitter.com/1/statuses/user_timeline.rss?screen_name=twitter_username
By changing the user_timeline file extension you're able to receive the feed in different formats, e.g.:
https://api.twitter.com/1/statuses/user_timeline.xml?screen_name=twitter_username
https://api.twitter.com/1/statuses/user_timeline.json?screen_name=twitter_username
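Whichever format you pick, consuming the feed is a plain HTTP GET. A rough Ruby sketch against the JSON variant above (note that this unauthenticated v1 endpoint was later retired by Twitter, so treat it as illustrative):

    require 'net/http'
    require 'uri'
    require 'json'

    # Grab the most recent tweet texts for a user from the v1 JSON endpoint above.
    def latest_tweets(screen_name, count = 5)
      uri = URI("https://api.twitter.com/1/statuses/user_timeline.json" \
                "?screen_name=#{screen_name}&count=#{count}")
      JSON.parse(Net::HTTP.get(uri)).map { |tweet| tweet['text'] }
    end

    puts latest_tweets('twitter_username')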
This may fall under the list of possible duplicates, but I still could not find an answer matching my requirements. In my Rails application I have some web pages with a print link.
What I need is that when I click this print link, I should get a screenshot of the current web page and a popup box to save the image. Is there any plugin available for this?
Any code suggestions are also welcome.
Thanks.
You could go for PhantomJS, where you can alter the page HTML, manipulate CSS, and insert and call JavaScript snippets the way you need.
Good documentation is available, and you can also find a great deal of help under the phantomjs tag on Stack Overflow.
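One way to wire this into a Rails app is to have the print link hit an action that shells out to PhantomJS and streams the resulting PNG back as a download. A rough sketch, with the controller, route, and file handling made up for illustration and rasterize.js assumed to be available on the server:

    # app/controllers/screenshots_controller.rb (names are hypothetical)
    class ScreenshotsController < ApplicationController
      def create
        url  = params.require(:url)  # the page the user wants to "print"
        path = Rails.root.join('tmp', "shot-#{SecureRandom.hex(8)}.png").to_s

        # Render the page to a PNG with PhantomJS's bundled rasterize.js
        system('phantomjs', 'rasterize.js', url, path)

        # send_file prompts the browser's save dialog, giving the "popup to save the image" behaviour
        send_file path, type: 'image/png', disposition: 'attachment', filename: 'screenshot.png'
      end
    end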
May I suggest that you use http://www.w3.org/TR/css-print/?
If you want to make a screenshot, you have to work with the user's environment, so I suggest you look at this topic: How do I take screenshots of web pages using ruby and a unix server?
Similar to Facebook's UI, I am attempting to generate a preview image from an externally linked website, so that when a user types in a URL they are linking to, the UI will, by default, scan that site for an image and scrape a preview thumbnail.
Is there a specific name for this technique? Or can anyone point me in the direction of learning this?
Thanks so much!
It's called scraping. There is a library called scrAPI.
Here is a code example: http://crunchlife.com/articles/2007/08/13/code-snippet-ruby-image-scraper
There are a couple of different options when it comes to page scraping. Another one to check out would be Nokogiri, http://nokogiri.org/. You can find tutorials on how to use it at http://nokogiri.org/tutorials.
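A rough Nokogiri sketch of that scraping step, preferring an Open Graph image and falling back to the first <img> tag (error handling and attribute edge cases are omitted):

    require 'open-uri'
    require 'nokogiri'

    # Return a candidate preview image URL for a page, or nil if none is found.
    def preview_image(url)
      doc = Nokogiri::HTML(URI.open(url))

      # Many sites declare their preferred thumbnail via Open Graph metadata.
      og = doc.at('meta[property="og:image"]')
      return og['content'] if og

      # Otherwise fall back to the first image on the page.
      img = doc.at('img')
      img && URI.join(url, img['src']).to_s  # resolve relative src paths
    end

    puts preview_image('https://stackoverflow.com')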
Instead of grabbing an image from the site, why not grab the image of the entire page? You could make use of a free screenshot service like http://www.websnapr.com/ or http://www.thumbshots.com/ among others. In one application, I use that for my preview image, and use nokogiri to scrape the page title and description. Just an idea.