Combine 2 URLs into 1

I would like to combine 2 urls into 1 clickable link. The first link is a link that sets a cookie and the second is a query.
1st link: http://www.homeaway.com/?CID=a_cj_7123410&utm_source=cj&utm_medium=affiliates&utm_content=7123410&utm_campaign=10938928
2nd: http://www.homeaway.com/search/keywords:boston
Thanks
Marc

You can't 'combine' the two URLs, but you can make a request to both of them using JavaScript.
Unfortunately, because the URL is on another domain, you can't simply make an AJAX request. However, you can use a web service like Yahoo's YQL to do this for you.
This approach is explained here (see the accepted answer):
Cross-domain requests with jQuery using YQL
HTML:
<a href="http://www.homeaway.com/search/keywords:boston" class="my_link">Linky</a>
Javascript:
// User clicks your link:
$('.my_link').click(function () {
    // The link to fire silently before redirecting:
    var first_url = "http://www.homeaway.com/?CID=a_cj_7123410&utm_source=cj&utm_medium=affiliates&utm_content=7123410&utm_campaign=10938928";
    // YQL query URL that wraps the first link:
    var yql_url = "http://query.yahooapis.com/v1/public/yql?" +
        "q=select%20*%20from%20html%20where%20url%3D%22" +
        encodeURIComponent(first_url) +
        "%22&format=xml&callback=?";
    // Fetch the YQL URL before returning true:
    return $.getJSON(yql_url, function (data) {
        return true; // Go to the homeaway.com address.
    });
});

Related

How to retrieve Medium stories for a user from the API?

I'm trying to integrate Medium blogging into an app by showing some cards with posts images and links to the original Medium publication.
From the Medium API docs I can see how to retrieve publications and create posts, but they don't mention retrieving posts. Is retrieving posts/stories for a user currently possible using Medium's API?
The API is write-only and is not intended to retrieve posts (Medium staff told me)
You can simply use the RSS feed as such:
https://medium.com/feed/@your_profile
You can simply get the RSS feed via GET; if you need it in JSON format, just use an npm module like rss-to-json and you're good to go.
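For instance, a minimal Node sketch (assuming the promise-based parse() export of recent rss-to-json versions; older releases exposed a callback-style API instead):

// Minimal sketch, assuming rss-to-json's promise-based parse() export.
const { parse } = require('rss-to-json');

parse('https://medium.com/feed/@your_profile')
    .then((rss) => {
        // rss.items is expected to hold the latest posts from the feed
        rss.items.forEach((item) => console.log(item.title, item.link));
    })
    .catch(console.error);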
Edit:
It is possible to make a request to the following URL and get a response. Unfortunately, the response is in RSS format, which requires some parsing if you need JSON.
https://medium.com/feed/@yourhandle
⚠️ The following approach is not applicable anymore as it is behind Cloudflare's DDoS protection.
If you are planning to get it from the client side using JavaScript, jQuery, Angular, etc., then you need to build an API gateway or web service that serves your feed. With PHP, RoR, or any other server-side stack, that is not necessary.
You can get it directly in JSON format as given below:
https://medium.com/@yourhandle/latest?format=json
In my case, I made a simple web service with Express and hosted it on Heroku. The React app hits the API exposed on Heroku and gets the data.
// Assumes Express and the (now deprecated) request package are installed:
const express = require("express");
const request = require("request");
const router = express.Router();

const MEDIUM_URL = "https://medium.com/@yourhandle/latest?format=json";

router.get("/posts", (req, res, next) => {
  request.get(MEDIUM_URL, (err, apiRes, body) => {
    if (!err && apiRes.statusCode === 200) {
      // Medium prefixes the JSON with an anti-hijacking string, so strip everything before the first "{":
      let i = body.indexOf("{");
      const data = body.substr(i);
      res.send(data);
    } else {
      res.status(500).json(err);
    }
  });
});
Nowadays this URL:
https://medium.com/@username/latest?format=json
sits behind Cloudflare's DDoS protection service, so instead of consistently being served your feed in JSON format, you will usually receive HTML that asks you to complete a reCAPTCHA, leaving you with no data from the API request.
And the following:
https://medium.com/feed/@username
is limited to the latest 10 posts.
I'd suggest this free Cloudflare Worker that I made for this purpose. It works as a facade, so you don't have to worry about how the posts are obtained from the source, reCAPTCHAs, or pagination.
Full article about it.
Live example. To fetch the following items, add the query param ?next= with the value of the next field from the JSON the API returns.
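As an illustration, here is a client-side sketch of paging through such a worker. The endpoint URL and the posts field are placeholders; only the next cursor behaviour is taken from the description above:

// Hypothetical worker URL; substitute the live example endpoint linked above.
const WORKER_URL = 'https://your-worker.example.workers.dev/@username';

async function fetchAllPosts() {
    let posts = [];
    let next = null;
    do {
        const url = next ? `${WORKER_URL}?next=${encodeURIComponent(next)}` : WORKER_URL;
        const page = await fetch(url).then((res) => res.json());
        posts = posts.concat(page.posts || []); // the 'posts' field is assumed; check the API's actual response
        next = page.next; // the API provides this cursor for the following items
    } while (next);
    return posts;
}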
const MdFetch = async (name) => {
  const res = await fetch(
    `https://api.rss2json.com/v1/api.json?rss_url=https://medium.com/feed/${name}`
  );
  return await res.json();
};

const data = await MdFetch('@chawki726');
To get your posts as JSON objects, replace @USERNAME with your username:
https://api.rss2json.com/v1/api.json?rss_url=https://medium.com/feed/@USERNAME
With that REST method you would do this: GET https://api.medium.com/v1/users/{{userId}}/publications and this would return the title, image, and the item's URL.
Further details: https://github.com/Medium/medium-api-docs#32-publications .
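For reference, calling that endpoint needs a self-issued integration token sent as a Bearer header. A minimal sketch (MEDIUM_TOKEN and userId are placeholders you get from your Medium settings and the /v1/me call):

// Sketch of the publications call; MEDIUM_TOKEN and userId are placeholders.
const MEDIUM_TOKEN = process.env.MEDIUM_TOKEN;

async function getPublications(userId) {
    const res = await fetch(`https://api.medium.com/v1/users/${userId}/publications`, {
        headers: {
            Authorization: `Bearer ${MEDIUM_TOKEN}`,
            'Content-Type': 'application/json',
            Accept: 'application/json'
        }
    });
    const { data } = await res.json();
    return data; // array of publications: id, name, description, url, imageUrl
}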
You can also add "?format=json" to the end of any URL on Medium and get useful data back.
Use this URL; it will return the posts in JSON format. Replace studytact with your feed name:
https://api.rss2json.com/v1/api.json?rss_url=https://medium.com/feed/studytact
I have built a basic function using AWS Lambda and AWS API Gateway if anyone is interested. A detailed explanation is found in this blog post here, and the repository for the Lambda function built with Node.js is found here on GitHub. Hopefully someone here finds it useful.
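I won't repeat the blog post here, but the core of such a Lambda is small. A rough sketch, not the author's exact code, assuming a Node.js runtime with a global fetch and the rss2json conversion mentioned elsewhere in this thread:

// Rough sketch of a Lambda handler behind API Gateway (proxy integration).
exports.handler = async (event) => {
    const username = (event.queryStringParameters || {}).username || '@yourhandle';
    const feedUrl = `https://api.rss2json.com/v1/api.json?rss_url=https://medium.com/feed/${username}`;

    const response = await fetch(feedUrl);
    const body = await response.json();

    return {
        statusCode: 200,
        headers: { 'Content-Type': 'application/json', 'Access-Control-Allow-Origin': '*' },
        body: JSON.stringify(body.items || [])
    };
};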
(Updating the JS Fiddle and the Clay function that explains it as we updated the function syntax to be cleaner)
I wrapped the Github package @mark-fasel was mentioning below into a Clay microservice that enables you to do exactly this:
Simplified Return Format: https://www.clay.run/services/nicoslepicos/medium-get-user-posts-new/code
I put together a little fiddle, since a user was asking how to use the endpoint in HTML to get the titles for their last 3 posts:
https://jsfiddle.net/h405m3ma/3/
You can call the API as:
curl -i -H "Content-Type: application/json" -X POST -d '{"username":"nicolaerusan"}' https://clay.run/services/nicoslepicos/medium-get-users-posts-simple
You can also use it easily in your node code using the clay-client npm package and just write:
Clay.run('nicoslepicos/medium-get-user-posts-new', { "profile": "profileValue" })
    .then((result) => {
        // Do what you want with returned result
        console.log(result);
    })
    .catch((error) => {
        console.log(error);
    });
Hope that's helpful!
Check this one; you will get all the info about your own posts.
// 'parser' is assumed to be a callback-style RSS parsing helper
// (e.g. an rss-to-json style npm module) that yields an array of feed items.
mediumController.getBlogs = (req, res) => {
  parser('https://medium.com/feed/@profileName', function (err, rss) {
    if (err) {
      console.log(err);
      return res.status(500).json({ error: 'Could not fetch the feed' });
    }
    var stories = [];
    for (var i = rss.length - 1; i >= 0; i--) {
      var new_story = {};
      new_story.title = rss[i].title;
      new_story.description = rss[i].description;
      new_story.date = rss[i].date;
      new_story.link = rss[i].link;
      new_story.author = rss[i].author;
      new_story.comments = rss[i].comments;
      stories.push(new_story);
    }
    console.log('stories:');
    console.dir(stories);
    res.status(200).json({
      Data: stories
    });
  });
};
I have created a custom REST API to retrieve the stats of a given post on Medium. All you need is to send a GET request to my custom API, and you will retrieve the stats as a JSON object as follows:
Request :
curl https://endpoint/api/stats?story_url=THE_URL_OF_THE_MEDIUM_STORY
Response:
{
"claps": 78,
"comments": 1
}
The API responds within a reasonable time (< 2 sec). You can find more about it in the following Medium article.
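If it helps, consuming such an endpoint from the browser is a couple of lines (the host below is the placeholder from the request above):

// Placeholder endpoint from the answer above; replace with the real host.
const storyUrl = 'THE_URL_OF_THE_MEDIUM_STORY';
fetch('https://endpoint/api/stats?story_url=' + encodeURIComponent(storyUrl))
    .then((res) => res.json())
    .then(({ claps, comments }) => console.log(claps, comments));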

Twitter API 1.1 - render twitter's t.co links

I want to display the tweets of an account on my website. The problem is that the tweets always appear in the format http://t.co/..., instead of the full link as I want.
For instance, I obtain:
the rules of the game are all implemented - local players can play together in this link: http://t.co/Nf7j4TaB
if you are very curious... then, here is the link to the xodul's section under development: http://t.co/6Zbti36T
etc...
and I want that these tweets appear like this:
the rules of the game are all implemented - local players can play together in this link: http://xodul.com/tests/js/
if you are very curious... then, here is the link to the xodul's section under development: http://xodul.com/tests
etc...
To make my application I've followed the instructions from:
Simplest PHP example for retrieving user_timeline with Twitter API version 1.1 (from here we can get the text of each tweet, with the links coming in the format: http://t.co/...)
Rendering links in tweet when using Get Statuses API 1.1 (the code of the highest scored answer, in this link replaces, for instance, the text "http://t.co/Nf7j4TaB" with the hyperlink "<a target='_blank' href='http://t.co/Nf7j4TaB'>http://t.co/Nf7j4TaB</a>")
Any help on how to render Twitter's links is much appreciated!
With the tutorial you followed, you can use these attributes to show the actual link (a small sketch follows the entity list below).
Note: In API v1.1, entities will always be included unless you set include_entities to False or 0.
The urls entity
An array of URLs extracted from the Tweet text. Each URL entity comes with the following attributes:
url: The URL that was extracted
display_url: (only for t.co links) Not a URL but a string to display instead of the URL
expanded_url: (only for t.co links) The fully resolved URL
indices: The character positions the URL was extracted from
https://dev.twitter.com/docs/tweet-entities
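For example, here is a small client-side sketch that swaps each t.co link in the tweet text for its expanded_url using those entities (the same idea as the PHP answer further down):

// Sketch: expand t.co links in a tweet object returned by the 1.1 API.
function expandTweetLinks(tweet) {
    var text = tweet.text;
    (tweet.entities.urls || []).forEach(function (entity) {
        // entity.url is the t.co link, entity.expanded_url the fully resolved one
        text = text.replace(entity.url, entity.expanded_url);
    });
    return text;
}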
A JavaScript-only solution for now to get Twitter posts on your site without using the new 1.1 API, and it actually returns the full URL in posts, not the Twitter-shortened version :-) http://goo.gl/JinwJ
Thank you for your answers.
After analyzing the JSON in the suggested link (https://dev.twitter.com/docs/tweet-entities), I wrote a solution to the problem described above:
// ...
$twitter_data = json_decode($json); // last line of the code in: http://stackoverflow.com/questions/12916539

// Print the tweets, with the full URLs:
foreach ($twitter_data as $item) {
    $text = $item->text;
    foreach ($item->entities->urls as $url) {
        $text = str_replace($url->url, $url->expanded_url, $text);
    }
    echo $text . '<br /><br />';
    // Optionally, the code from: http://stackoverflow.com/questions/15610968/
    // can be added here, too.
}

Counting clicks to external links with rails

I have Entry model with url field, which contains link to external site.
In the view I list these links, and now I'd like to count clicks on each one and keep this info in the database. What's the best way of doing it?
You can easily use google analytics to track outbound links: http://support.google.com/analytics/bin/answer.py?hl=en&answer=1136920
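If you go the Google Analytics route, the documented pattern is to send an event for the outbound click and navigate once the hit has been sent. A rough sketch, assuming the standard analytics.js ga() snippet is already on the page and your outbound links carry a class such as external:

// Track the outbound click, then follow the link.
$('a.external').click(function (e) {
    e.preventDefault();
    var url = $(this).attr('href');
    ga('send', 'event', 'outbound', 'click', url, {
        transport: 'beacon',                                // let the hit survive page unload where supported
        hitCallback: function () { window.location = url; } // navigate once the event has been sent
    });
});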
If that is not an option, you will need to add some JavaScript to your links that makes an AJAX request to the server to increment the count before transferring the user to the new URL. Something similar to this jQuery code:
$('a').click(function () {
    var stored_url = $(this).attr('href');
    $.ajax({
        url: 'YOUR_SERVER_URL',        // your server URL to increment the count
        data: { url: stored_url },     // data you need to send
        success: function () { window.location = stored_url; }
    });
    return false;
});
The above code is just a general outline. You will have to fill in the blanks and make it work for your needs.

Modify URL before loading page in firefox

I want to prefix URLs which match my patterns. When I open a new tab in Firefox and enter a matching URL the page should not be loaded normally, the URL should first be modified and then loading the page should start.
Is it possible to modify an URL through a Mozilla Firefox Addon before the page starts loading?
Browsing the HTTPS Everywhere add-on suggests the following steps:
Register an observer for the "http-on-modify-request" observer topic with nsIObserverService
Proceed if the subject of your observer notification is an instance of nsIHttpChannel and subject.URI.spec (the URL) matches your criteria
Create a new nsIStandardURL
Create a new nsIHttpChannel
Replace the old channel with the new. The code for doing this in HTTPS Everywhere is quite dense and probably much more than you need. I'd suggest starting with chrome/content/IOUtils.js.
Note that you should register a single "http-on-modify-request" observer for your entire application, which means you should put it in an XPCOM component (see HTTPS Everywhere for an example).
The following articles do not solve your problem directly, but they do contain a lot of sample code that you might find helpful:
https://developer.mozilla.org/en/Setting_HTTP_request_headers
https://developer.mozilla.org/en/XUL_School/Intercepting_Page_Loads
Thanks to Iwburk, I have been able to do this.
We can do this by overriding the nsIHttpChannel with a new one. Doing this is slightly complicated, but luckily the HTTPS Everywhere add-on implements this to force an HTTPS connection.
https-everywhere's source code is available here
Most of the code needed for this is in the files
IOUtil.js
ChannelReplacement.js
We can work with the above files alone, provided we have the basic variables like Cc and Ci set up and the function xpcom_generateQI defined.
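For reference, those shorthands are just the usual aliases; xpcom_generateQI itself should be copied from the HTTPS Everywhere sources mentioned above:

// Conventional shorthands used throughout Mozilla add-on code:
const Cc = Components.classes;
const Ci = Components.interfaces;
// xpcom_generateQI comes from the HTTPS Everywhere sources above; it builds a
// QueryInterface implementation for the listed interfaces.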
var httpRequestObserver = {
    observe: function (subject, topic, data) {
        if (topic == "http-on-modify-request") {
            var httpChannel = subject.QueryInterface(Components.interfaces.nsIHttpChannel);
            var requestURL = subject.URI.spec;
            // isToBeReplaced() and getURL() are your own helpers that decide whether a
            // URL matches your patterns and compute the rewritten URL.
            if (isToBeReplaced(requestURL)) {
                var newURL = getURL(requestURL);
                ChannelReplacement.runWhenPending(subject, function () {
                    var cr = new ChannelReplacement(subject, newURL);
                    cr.replace(true, null);
                    cr.open();
                });
            }
        }
    },

    get observerService() {
        return Components.classes["@mozilla.org/observer-service;1"]
            .getService(Components.interfaces.nsIObserverService);
    },

    register: function () {
        this.observerService.addObserver(this, "http-on-modify-request", false);
    },

    unregister: function () {
        this.observerService.removeObserver(this, "http-on-modify-request");
    }
};

httpRequestObserver.register();
The code will replace the request, not redirect it.
While I have tested the above code well enough, I am not sure about its implementation. As far as I can make out, it copies all the attributes of the requested channel and sets them on the channel that overrides it; after that, the output requested by the original request is supplied using the new channel.
P.S. I had seen a SO post in which this approach was suggested.
You could listen for the page load event or maybe the DOMContentLoaded event instead. Or you can make an nsIURIContentListener but that's probably more complicated.
Is it possible to modify an URL through a Mozilla Firefox Addon before the page starts loading?
YES it is possible.
Use page-mod of the Add-on SDK by setting contentScriptWhen: "start".
Then, after completely preventing the document from getting parsed, you can either
fetch a different document from the same domain and inject it into the page, or
after some document.URL processing, do a location.replace() call (a rough sketch of this option follows below).
Here is an example of doing 1. https://stackoverflow.com/a/36097573/6085033
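As a rough illustration of option 2, a minimal page-mod sketch (assuming the old Add-on SDK require() environment; the match pattern and target URL are placeholders):

// Minimal Add-on SDK sketch: rewrite matching URLs before the page is parsed.
var pageMod = require("sdk/page-mod");

pageMod.PageMod({
    include: "*.example.com",   // placeholder pattern for the URLs you want to modify
    contentScriptWhen: "start", // run before the document is parsed
    contentScript: 'document.location.replace("https://other.example.org" + document.location.pathname);'
});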

Is there a way to get the twitter share count for a specific URL?

I looked through the API documentation but couldn't find it. It would be nice to grab that number to see how popular a url is. Engadget uses the twitter share button on articles if you're looking for an example. I'm attempting to do this through javascript. Any help is appreciated.
You can use the following API endpoint,
http://cdn.api.twitter.com/1/urls/count.json?url=http://stackoverflow.com
(Note that the http://urls.api.twitter.com/ endpoint is not public.)
The endpoint will return a JSON string similar to,
{"count":27438,"url":"http:\/\/stackoverflow.com\/"}
On the client, if you are making a request to get the URL share count for your own domain (the one the script is running from), then an AJAX request will work (e.g. jQuery.getJSON). Otherwise, issue a JSONP request by appending callback=?:
jQuery.getJSON('https://cdn.api.twitter.com/1/urls/count.json?url=http://stackoverflow.com/&callback=?', function (data) {
jQuery('#so-url-shares').text(data.count);
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div id="so-url-shares">Calculating...</div>
Update:
As of 21st November 2015, this way of getting the Twitter share count does not work anymore. Read more at: https://blog.twitter.com/2015/hard-decisions-for-a-sustainable-platform
This is not possible anymore as of today; you can read more here:
https://twitter.com/twitterdev/status/667836799897591808
There are no plans to bring it back, unfortunately.
Upvote so users do not lose time trying it out.
Update:
It is, however, possible via http://opensharecount.com; they provide a drop-in replacement for the old private JSON URL, based on searches made via the API (so you don't need to do all that work).
It's based on the REST API Search endpoints. It's still a new system, so we should see how it goes. In the future we can expect more similar systems, because there is huge demand.
This is the URL for https (for Brodie):
https://cdn.api.twitter.com/1/urls/count.json?url=YOUR_URL
No.
How do I access the count API to find out how many Tweets my URL has had?
In this early stage of the Tweet Button the count API is private. This means you need to use either our javascript or iframe Tweet Button to be able to render the count. As our systems scale we will look to make the count API public for developers to use.
http://dev.twitter.com/pages/tweet_button_faq#custom-shortener-count
Yes,
https://share.yandex.ru/gpp.xml?url=http://www.web-technology-experts-notes.in
Replace "http://www.web-technology-experts-notes.in" with "your full web page URL".
Check the Sharing count of Facebook, Twitter, LinkedIn and Pinterest
http://www.web-technology-experts-notes.in/2015/04/share-count-and-share-url-of-facebook-twitter-linkedin-and-pininterest.html
Update:
As of 21st November 2015, Twitter has removed the "Tweet count endpoint" API.
Read More: https://twitter.com/twitterdev/status/667836799897591808
The accepted answer is the right one. There are other versions of the same endpoint, used internally by Twitter.
For example, the official share button with count uses this one:
https://cdn.syndication.twitter.com/widgets/tweetbutton/count.json?url=[URL]
JSONP is supported by adding &callback=func.
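A sketch of consuming that endpoint via JSONP with jQuery (bear in mind it is an internal endpoint, so it may change or disappear at any time):

// JSONP request to the internal syndication endpoint mentioned above.
$.getJSON(
    'https://cdn.syndication.twitter.com/widgets/tweetbutton/count.json?url=' +
        encodeURIComponent('http://stackoverflow.com/') + '&callback=?',
    function (data) {
        console.log(data.count); // number of shares reported for the URL
    }
);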
I know this is an old question, but for me the URL http://cdn.api.twitter.com/1/urls/count.json?url=http://stackoverflow.com did not work in AJAX calls due to cross-origin issues.
I solved it using PHP cURL: I made a custom route and called it through AJAX.
/* Other Code */
$options = array(
    CURLOPT_RETURNTRANSFER => true,   // return web page
    CURLOPT_HEADER         => false,  // don't return headers
    CURLOPT_FOLLOWLOCATION => true,   // follow redirects
    CURLOPT_MAXREDIRS      => 10,     // stop after 10 redirects
    CURLOPT_ENCODING       => "",     // handle compressed
    CURLOPT_USERAGENT      => "test", // name of client
    CURLOPT_AUTOREFERER    => true,   // set referrer on redirect
    CURLOPT_CONNECTTIMEOUT => 120,    // time-out on connect
    CURLOPT_TIMEOUT        => 120,    // time-out on response
);

$url = $_POST["url"]; // whatever you need
if ($url !== "") {
    $curl = curl_init("http://urls.api.twitter.com/1/urls/count.json?url=" . $url);
    curl_setopt_array($curl, $options);
    $result = curl_exec($curl);
    curl_close($curl);
    echo json_encode(json_decode($result)); // whatever response you need
}
It is important to use a POST because passing the URL in a GET request causes issues.
Hope it helped.
This answer https://stackoverflow.com/a/8641185/1118419 proposes using the Topsy API. I am not sure that API is accurate:
Twitter response for www.e-conomic.dk:
http://urls.api.twitter.com/1/urls/count.json?url=http://www.e-conomic.dk
shows a count of 10.
Topsy response for www.e-conomic.dk:
http://otter.topsy.com/stats.json?url=http://www.e-conomic.dk
shows a count of 18.
This way you can get it with jQuery. The div with id="twitterCount" will be populated automatically when the page is loaded.
function getTwitterCount(url) {
    $.getJSON('http://urls.api.twitter.com/1/urls/count.json?url=' + url + '&callback=?', function (data) {
        var tweets = data.count;
        $('#twitterCount').html(tweets);
    });
}

var urlBase = 'http://stackoverflow.com';
getTwitterCount(urlBase);
Cheers!
Yes, there is. As long as you do the following:
Issue a JSONP request to one of the urls:
http://cdn.api.twitter.com/1/urls/count.json?url=[URL_IN_REQUEST]&callback=[YOUR_CALLBACK]
http://urls.api.twitter.com/1/urls/count.json?url=[URL_IN_REQUEST]&callback=[YOUR_CALLBACK]
Make sure that the request you are making is from the same domain as the [URL_IN_REQUEST]. Otherwise, it will not work.
Example:
Making requests from example.com to request the count of example.com/page/1. Should work.
Making requests from another-example.com to request the count of example.com/page/1. Will NOT work.
I just read the contents into a JSON object via PHP, then parse it out.
<script>
<?php
$tweet_count_url = 'http://urls.api.twitter.com/1/urls/count.json?url='.$post_link;
$tweet_count_open = fopen($tweet_count_url,"r");
$tweet_count_read = fread($tweet_count_open,2048);
fclose($tweet_count_open);
?>
var obj = jQuery.parseJSON('<?=$tweet_count_read;?>');
jQuery("#tweet-count").html("("+obj.count+") ");
</script>
Simple enough, and it serves my purposes perfectly.
This Javascript class will let you fetch share information from Facebook, Twitter and LinkedIn.
Example of usage
<p>Facebook count: <span id="facebook_count"></span>.</p>
<p>Twitter count: <span id="twitter_count"></span>.</p>
<p>LinkedIn count: <span id="linkedin_count"></span>.</p>
<script type="text/javascript">
var smStats=new SocialMediaStats('https://google.com/'); // Replace with your desired URL
smStats.facebookCount('facebook_count'); // 'facebook_count' refers to the ID of the HTML tag where the result will be placed.
smStats.twitterCount('twitter_count');
smStats.linkedinCount('linkedin_count');
</script>
Download
https://404it.no/js/blog/SocialMediaStats.js
More examples and documentation
Javascript Class For Getting URL Shares On Facebook, Twitter And LinkedIn
