Twitter streaming API not tracking URLs

Have gone through https://dev.twitter.com/docs/streaming-apis/parameters
Per the documentation it should be possible to track URLs such as example.com/foobarbaz, but I can't seem to get it to track such URLs. The Streaming API just doesn't return any results when I tweet this URL and track it. Am I missing something?

Pretty late, but I found this by Google so this might help someone...
There are a few answers to this. The main one is that Twitter treats URLs differently from everything else.
First, make sure you do NOT include the "www".
Twitter currently canonicalizes the domain “www.example.com” to “example.com” before the match is performed, so omit the “www” from URL track terms.
For me, sending the track parameter as "example.com/foobarz" and then tweeting "a test, please ignore: http://example.com/foobarz" worked perfectly.
You can NOT, in general, ask for substrings of URLs:
URLs are considered words for the purposes of matches which means that the entire domain and path must be included in the track query for a Tweet containing an URL to match.
But if you are willing to take every tweet from the whole domain (and a few edge cases besides), Twitter will accommodate:
Finally, to address a common use case where you may want to track all mentions of a particular domain name (i.e., regardless of subdomain or path), you should use “example com” as the track parameter for “example.com” (notice the lack of period between “example” and “com” in the track parameter).
All quotes are from the Twitter docs: https://dev.twitter.com/streaming/overview/request-parameters#track
They have more information, including examples.
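As a concrete illustration, here is a minimal sketch in Python of tracking that exact term on the v1.1 statuses/filter streaming endpoint. The credentials are placeholders, and the v1.1 streaming API has since been retired, so treat this as illustrative only:
import json
import requests
from requests_oauthlib import OAuth1

# Placeholder credentials; substitute your own Twitter app keys.
auth = OAuth1("CONSUMER_KEY", "CONSUMER_SECRET",
              "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

# Note: no "www", and the full domain plus path, per the docs quoted above.
params = {"track": "example.com/foobarz"}

with requests.post("https://stream.twitter.com/1.1/statuses/filter.json",
                   auth=auth, data=params, stream=True) as resp:
    for line in resp.iter_lines():
        if line:  # skip keep-alive newlines
            print(json.loads(line).get("text"))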
Good luck!

Related

Does twitter stream API allow filtering by urls?

I am trying to filter by URLs but no results are being returned.
The following doc shows it is possible: https://developer.twitter.com/en/docs/tweets/rules-and-filtering/overview/premium-operators
but I think it's a premium feature. Is this true? If so, is there any other way to filter by URLs without using the premium feature?
The standard Twitter streaming API provides the 'track' parameter (see the doc). It matches Tweets both by phrases and by URLs. A common use case, according to the doc:
a common use case where you may want to track all mentions of a particular domain name (i.e., regardless of subdomain or path), you should use “example com” as the track parameter for “example.com”
This parameter value will match:
example.com
www.example.com
foo.example.com
foo.example.com/bar
I hope my startup isn’t merely another example of a dot com boom!
I tested the option by means of the twitter-hbc library for Java. It works as expected!
To avoid confusion, please take note:
The text of the Tweet and some entity fields are considered for matches. Specifically, the text attribute of the Tweet, expanded_url and display_url for links and media, text for hashtags, and screen_name for user mentions are checked for matches.
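If you want to verify on the client side why a Tweet matched, you can inspect those same fields yourself. A hypothetical Python helper, using the field names from the v1.1 Tweet format:
def match_fields(tweet):
    # Collect the fields Twitter checks for track matches, per the quote above:
    # the text, expanded/display URLs, hashtag text, and mention screen names.
    fields = [tweet.get("text", "")]
    entities = tweet.get("entities", {})
    for url in entities.get("urls", []) + entities.get("media", []):
        fields.append(url.get("expanded_url", ""))
        fields.append(url.get("display_url", ""))
    for tag in entities.get("hashtags", []):
        fields.append(tag.get("text", ""))
    for mention in entities.get("user_mentions", []):
        fields.append(mention.get("screen_name", ""))
    return fields

def matches(tweet, term):
    return any(term.lower() in field.lower() for field in match_fields(tweet))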

Does the Twitter API auto shorten URLs

I know this question has been asked before, however the answers I found were a little old, and I know that Twitter has recently made the t.co/xyz service mandatory on all posts. I'm just wondering: when using the OAuth API, will it automatically shorten any URL? Does anyone know what the current process is? Do I still need to shorten the URL before posting?
Thanks very much
Yes. Even when using OAuth, all links will be wrapped in t.co short URLs. Even if the link is shorter than 20 characters!
See the post in the discussion groups.
There is also some documentation about recent and upcoming changes to t.co shortening (2012-04-12).
You can shorten the URL yourself if you want, but it will still get wrapped.
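In practice this means you should budget a fixed t.co length for every link rather than its real length. A rough sketch (23 characters is assumed here; the live value used to be available from the help/configuration endpoint):
import re

TCO_LENGTH = 23  # assumed; Twitter historically exposed this via help/configuration
URL_RE = re.compile(r"https?://\S+")

def effective_length(text):
    # Every URL counts as TCO_LENGTH once wrapped, even if it is shorter.
    return len(URL_RE.sub("x" * TCO_LENGTH, text))

print(effective_length("check this out http://example.com/a-very-long-path"))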

Any short URL service that you can POST variables on?

I work for a small SMS marketing company, where we're sending out text messages that each contain a unique code for the user (as a variable). My URL is rather long, and I want to attach a unique variable to each one.
For example, the full URL might be:
http://www.mybigwebsiteurlishuge.com/more/more/?code={variable}
but I want it to be something like:
http://bit.ly/2398h?code={variable}
Anybody know any services that can do this? Otherwise I'll need to purchase a short domain name just for this.
Thanks so much!
Most shortening services have APIs that you can use to shorten your URLs, including bit.ly. You will have to use their API to get the shortened URL.
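For instance, with bit.ly's current v4 API you would shorten each full URL, unique code included, so every recipient gets their own short link. A minimal sketch, assuming you have a bit.ly access token:
import requests

BITLY_TOKEN = "YOUR_TOKEN"  # assumption: your bit.ly access token

def shorten(long_url):
    resp = requests.post(
        "https://api-ssl.bitly.com/v4/shorten",
        headers={"Authorization": "Bearer " + BITLY_TOKEN},
        json={"long_url": long_url},
    )
    resp.raise_for_status()
    return resp.json()["link"]

# One short link per recipient, with the unique code baked in.
print(shorten("http://www.mybigwebsiteurlishuge.com/more/more/?code=ABC123"))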
I kept on looking and still couldn't find anything suitable, so I got a new 3-character domain name and also made a redirecting script that changed miniaturized variable names to the full ones. This works just as well, really.
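A hypothetical version of that redirecting script in Python (Flask), expanding a miniaturized parameter name back to the full one:
from flask import Flask, redirect, request

app = Flask(__name__)

# The long destination URL the short domain forwards to.
LONG_BASE = "http://www.mybigwebsiteurlishuge.com/more/more/"

@app.route("/")
def forward():
    # "c" is the miniaturized parameter name in the SMS link;
    # expand it back to "code" and redirect.
    code = request.args.get("c", "")
    return redirect(LONG_BASE + "?code=" + code, code=302)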

How would I find all the short urls that link to a particular long url?

Basically I want to know how many people have tweeted a link to a URL, but since there are dozens of link shorteners out there, I don't see any way to do this without having access to all of their URL maps. I found a previous question here but it was over a year old and didn't have any new answers.
So #1, does anyone know of a service/API for doing this?
And #2, can anyone think of a way to accomplish this task other than submitting the long url in question to all the popular link shortening sites?
ps- I'm also open to comments about why this is impossible or impractical.
You could perform a Google search (or the equivalent via API) for any pages that link to your page. This is done with the link: keyword. So if you're trying to figure out how many people link to www.example.com (regardless of whether it's through a link shortner URL), then you would just do a Google search for link:www.example.com.
e.g.: http://www.google.com/search?q=link:www.example.com
Note that this will only find pages that have been indexed, so pages that haven't been crawled, or pages that get crawled infrequently, will not show up in the results until a later date (if at all).
Since all sites have different algorithms for shortening the URLs, and these are different sites that most likely do not share their data with each other, how can you hope to find all of them in a single or small number of queries?
All you can do is brute-force it, and even then this might not be any good if a site is content to create a new value for the same long-form URL (especially if you send a different long-form URL that maps to the same place, like http://www.stackoverflow.com/ rather than http://stackoverflow.com/).
For this to really work, there would have to be a site that already collects all of this information automatically, and that every URL-shortening site voluntarily reports to. And even if you wrote such a site, that doesn't account for the data the existing URL-shortening sites already hold!
In short, I do not see how this is remotely possible, unless I'm wrong about there being such a database somewhere out there.
So months after asking this question, I came across a solution to a similar question: how to tell how many times a link has been shared on Facebook. The solution, via a simple new API call:
http://graph.facebook.com/http://stackoverflow.com
returns the following json data:
{
"id": "http://stackoverflow.com",
"shares": 1627
}
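The same call from Python looks like this (note that this was the old unversioned Graph API endpoint; newer Graph API versions have changed how share counts are exposed):
import requests

resp = requests.get("http://graph.facebook.com/http://stackoverflow.com")
data = resp.json()
print(data.get("id"), data.get("shares"))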

Can an "SEO Friendly" url contain a unique ID?

I'd like to start using "SEO Friendly Urls" but the notion of generating and looking up large, unique text "ids" seems to be a significant performance challenge relative to simply looking up by an integer. Now, I know this isn't as "human friendly", but if I switched from
http://mysite.com/products/details?id=1000
to
http://mysite.com/products/spacelysprokets/sproket/id
I could still use the ID alone to quickly lookup the details, but the URL itself contains keywords that will display in that detail. Is that friendly enough for Google? I hope so as it seems a much easier process than generating something at the end that is both unique and meaningful.
Thanks!
James
Be careful about allowing a page to render using the same method as Stack Overflow.
http://stackoverflow.com/questions/820493/random-text-can-cause-problems
Black hats can use this to cause a duplicate-content penalty for long-tail competitors (trust me).
Here are two things you can do to protect yourself from this.
HTTP 301 redirect any inbound URL whose ID matches but whose text doesn't to the URL with the correct text (a sketch follows after these examples).
Example:
http://stackoverflow.com/questions/820493/random-text-can-cause-problems
301 ->
http://stackoverflow.com/questions/820493/can-an-seo-friendly-url-contain-a-unique-id
Use canonical URLs.
<link rel="canonical"
href="http://stackoverflow.com/questions/820493/can-an-seo-friendly-url-contain-a-unique-id"
/>
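Here is the promised sketch of the first protection, assuming a Flask app and a hypothetical get_slug(qid) lookup that returns the canonical slug for an ID:
from flask import Flask, redirect

app = Flask(__name__)

def get_slug(qid):
    # assumption: fetch the canonical slug for this ID from your database
    return "can-an-seo-friendly-url-contain-a-unique-id"

@app.route("/questions/<int:qid>/<slug>")
def question(qid, slug):
    canonical = get_slug(qid)
    if slug != canonical:
        # 301 so search engines index only the canonical URL
        return redirect("/questions/%d/%s" % (qid, canonical), code=301)
    return "rendering question %d" % qid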
I'd say you're fine.
Have a look at the URLs that StackOverflow uses. They have a unique id, then they have the SEO-friendly stuff. You can omit the SEO-friendly stuff and the URL still works.
You are making a devil's bargain here: you are trading away business goals for technology goals.
If you were to ask "From a purely business and SEO perspective, is it better to include unique IDs in the URL or not?", the answer would clearly be not to use them.
The question then becomes: if you do use them, how much does it hurt you in the search engines? The answer is that it definitely has some negative impact. How much is yet to be determined.
In terms of "user friendly", no, they are definitely not user friendly.
In terms of Google, they state "Whenever possible, shorten URLs by trimming unnecessary parameters." See their URL structure document.
I'm not aware of any problems caused by adding an ID to a URL. In fact it can be extremely useful, as it allows the human/search engine friendly part of the URL to be changed without causing a broken link to a page that a search engine has already indexed. Using SO as an example, here's a link to your question:
https://stackoverflow.com/questions/820493/you-can-put-any-text-you-want-here
Nothing wrong with that. An increasing number of services have started to use a hybrid solution as Paul Tomblin already pointed out. In addition to SO, Tumblr uses this pattern too (maybe it was the first).
Furthermore, in certain services—like Google News—the URL must contain a unique numeric ID.
Getting rid of the parameterized URL will definitely help. From my experience, including the ID does not hurt or help, as long as there are no '?key=value' pairs in the url.
I have two seemingly contradictory points to make here:
Nobody looks at URLs! Experience has "trained" browser users to treat the "Address" box contents as invisible; they know the contents will be any two of 'unreadable', 'meaningless' and 'confusing', hence they just ignore it completely.
Using a string which can be easily converted to an integer may offer a slight performance advantage over using a longer string which is slightly harder (hash() vs. to_int()) to convert into an integer. However, in the context of the average web application, any performance difference would be negligible.
My advice would be to stick with whatever you're comfortable with.
Use something like mod_rewrite to parse URLs before they reach your application. So you could convert a slug like http://oorl.com/99942/My-Friendly-Text-For-Search-Engines/ into http://oorl.com/lookup.php?id=99942. This will also let you change the slug and keywords used to optimize certain links without breaking functionality.
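A hypothetical .htaccess rule for that scheme (the oorl.com domain and lookup.php script are just the examples from above):
# /99942/My-Friendly-Text-For-Search-Engines/ -> lookup.php?id=99942
RewriteEngine On
RewriteRule ^([0-9]+)(/.*)?$ lookup.php?id=$1 [L,QSA]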
Duplicate content causes more negative impact than a friendly URL provides benefit. Be careful about using fake text with an ID; your competitors could misuse this.
Yes, and in fact it's more SEO-friendly to include a number in your URL, as it implies to Google that you are consistently updating your content.
I am fairly sure that it makes it much more difficult to get indexed in Google News if you don't have an incrementing number attached in some way to your URLs.
