What does the "uo=4" parameter in an iTunes link specify?

Similar to this question (What is "mt=8" in iTunes links for the appstore?), I'm curious what the uo parameter does. On iDevices and in browsers, I haven't been able to see any difference...

UO – Unique Origin: This parameter identifies the tool or source used to generate the link itself (e.g. RSS Feed Generator, Search API, Enterprise Partner Feed, etc.). It helps identify where the link came from for your own benefit, but doesn't actually affect the end-user experience. It can be removed if necessary to tidy up the link.
Taken from here: http://blog.georiot.com/2013/12/06/parameter-cheat-sheet-for-itunes-and-app-store-links/
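Since the parameter doesn't affect the end-user experience, stripping it is safe. A minimal Ruby sketch of tidying such a link (the example URL is hypothetical):

require "uri"
require "cgi"

link = "https://itunes.apple.com/us/app/example/id123456789?mt=8&uo=4"  # hypothetical link

uri = URI.parse(link)
params = CGI.parse(uri.query)     # => {"mt"=>["8"], "uo"=>["4"]}
params.delete("uo")               # drop the origin marker; nothing else changes
uri.query = URI.encode_www_form(params.map { |k, v| [k, v.first] })
puts uri                          # => ...id123456789?mt=8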

Related

How to loop through modules, link topics, and LTI links

When we import content from our system to D2L, we create an LTI Link, a Quick Link given the LTI Link, and a Link Topic given the public URL from the Quick Link. This is all good, a nice traceable chain and relationship.
Now I need to pull this same information back out.
I can see a GET to /orgId/content/root will give me modules.
I can see a GET to /orgId/content/modules/moduleId/structure will give an array which includes Link Topics and Modules (and, recursively, more of the same).
However, I am stuck on obtaining the LTI Links for the Link Topics. These are the two key abstractions for us.
I am further stuck on what exactly the Quick Link does for us. There is no way to GET a Quick Link.
Now, going the other way, I can see a GET to /lti/link/orgId will get me all the LTI Links in the course. But, there's no way to tell which Link Topic it's associated with.
Ditto for the Quick Link in this approach; I just don't know where this abstraction fits in.
Please advise. Thanks dearly.
A response has been posted in the Developer Community of Practice: https://community.brightspace.com/devcop/f_technical/how_to_loop_through_modules_link_topics_and_lti_links
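In the meantime, here is a minimal Ruby sketch of the loop the question describes, using the routes quoted above. The host, the auth header, and the Type value used to spot nested modules are assumptions for illustration, not confirmed Valence behaviour:

require "net/http"
require "json"
require "uri"

HOST   = "https://lms.example.com"   # hypothetical host
ORG_ID = 12345                       # hypothetical org unit id

def get(path)
  uri = URI("#{HOST}#{path}")
  req = Net::HTTP::Get.new(uri)
  req["Authorization"] = "Bearer <token>"  # placeholder; real Valence auth differs
  res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
  JSON.parse(res.body)
end

# Walk a module's structure, recursing into nested modules and collecting topics.
def collect_topics(module_id, topics = [])
  get("/#{ORG_ID}/content/modules/#{module_id}/structure").each do |item|
    if item["Type"] == 0               # assumption: 0 marks a nested module
      collect_topics(item["Id"], topics)
    else
      topics << item                   # a Link Topic (or other topic type)
    end
  end
  topics
end

root_modules = get("/#{ORG_ID}/content/root")
all_topics = root_modules.flat_map { |m| collect_topics(m["Id"]) }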

Google+ Authorship: #REL, GET Parameters and Redirects

I recently decided to start taking advantage of rich snippets to improve my personal website's content for the search engines and, IMHO most importantly, the site readers – hi, Mam! ;-). One of these is Google Authorship. Personally, I think the idea behind Google Authorship is a sound one: it helps to bring a sense of identity, personality and – arguably, most importantly – credibility to what is still largely an anonymous web.
Normally, I would link my article to Google Authorship using the following line of HTML:
<A REL="author" HREF="https://plus.google.com/112431363835029530079?rel=author">Jordan Clark</A>
However, for a website that publishes articles written by multiple authors, manually entering each author's Google+ UID string starts to become a tiresome process.
Is it valid to do the following:
(a) Link to the author like so, using the script "author.php" (or another type of server-side script):
<A REL="author" HREF="/author.php?by=Alice&rel=author/[UID]?rel=author">Alice</A>
(b) The file "author.php" scripts simply do a quick check for Alice's (or whoever) User ID string provided by Google, and then uses a simple HTTP redirect header to pass this data to Google.
What I would like to know is:
Is it okay to use a local script to redirect to your Google+ user profile? (i.e. will it affect the PageRank of already-indexed pages, or have any other unforeseen negative effects on new and indexed pages?)
Why do I not see more people linking with Google’s “prettified” version:
http://profiles.google.com/clarky.y2k?rel=author
Are there any drawbacks to using the “prettified” version of this method?
Ideally, I would like to use the intermediate PHP script, as I have already described above (see part 1). However, any tips, suggestions or other approaches you may have implemented on your own websites are very welcome!
For item (1), you can maintain your own app's profiles (author.php in your case) for your authors. On your own app's profile page (author.php), you would add a link from that page to Google and specify the rel="me" attribute on that link. So Alice's profile page might say something like "Find Alice on Google+".
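Following the markup style used earlier in the question, Alice's profile page might carry something like this (the UID placeholder is hypothetical):
<A REL="me" HREF="https://plus.google.com/[UID]">Find Alice on Google+</A>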
This indirect authorship linking is supported. You will also need the link from Alice's Google+ profile that lists her as a contributor to your site. Once the linking is set up in both directions, authorship can start to show up. Authorship won't always display in all cases, and it can take some time to start appearing, as Google needs to reindex your pages.
For item (2), I don't think the profiles URL will enable authorship. Some people use that URL as a vanity URL, but as far as I know it isn't supported for use with things like authorship, badges, etc.
You should test if your redirects are followed using the Rich Snippets Testing Tool: http://www.google.com/webmasters/tools/richsnippets
rel="author" is no longer supported.

Find the number of times a tweet has been viewed

There have been quite a few start-ups pertaining to analyzing Twitter data. There is CrowdBooster, and then there is Klout, which uses Twitter data to tell users their true reach.
I have got the following two questions:
1) Is there a way to find out who has viewed one's tweet, or the number of people that have viewed a tweet? Crowdbooster claims to tell you how many impressions one received per tweet. How do they do it?
2) Thousands and thousands of links are shared each day on Twitter. Can we find out which user has clicked the link in a tweet?
I have looked through the Twitter API and some of the companies that have licensed Twitter's firehose, but have not found anything that meets my needs.
Also, to give you a short answer to your 2nd question: now that we've established that view analysis is impossible, can you find out which user has clicked on that link? Absolutely. It depends on what you mean by "user": the user who has clicked on the link, or the user who has the link in their Twitter stream. Both are possible.
In the case of A, you would get the referring user's IP address. Methods vary depending on language.
But what I think you're asking for is scenario B, finding out which user has the link in their Twitter stream. This can be done by querying the link; the API response you get can include tweet entities, which will list all this information out for you and more. Open up a firehose with your link and watch what comes in.
https://dev.twitter.com/docs/streaming-api/methods
1) Is there a way to find out who has viewed one's tweet, or the number of people that have viewed a tweet? Crowdbooster claims to tell you how many impressions one received per tweet. How do they do it?
No, in the case of a view this would be impossible. The tweet impression can happen in multiple silos: on the website, in a widget, in a mobile app. You can imagine that it's simply not possible to count views of a tweet for this reason, and because, unlike a click, there is no "I viewed this tweet" identifier sent when a view occurs. I spent a great deal of time researching a way to get tweet impressions, even based on a similar clicked link, and that is not possible either. (Edit: it is possible; see the last paragraph.) This brings us to question 2.
2) Thousands and thousands of links are shared each day on Twitter. Can we find out which user has clicked the link in a tweet?
Yes. What these websites are mainly doing is analyzing links that you process through their website. If you can put a unique hash marker on a link, then analysis becomes possible. Without a unique hash marker, Twitter will treat two identical links in exactly the same way, even when it shortens your link with its custom t.co wrapper.
This means the only reliable way to do tweet analysis is to include a unique link marker code in your tweet and record the fact that somebody who hit your server clicked on that link.
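A minimal sketch of that marker idea in Ruby; the domain, redirect path, and in-memory store are all hypothetical stand-ins:

require "securerandom"

MARKERS = {}   # token => destination URL; use a real datastore in practice

def tracked_link(url)
  token = SecureRandom.hex(4)          # unique marker for this one share
  MARKERS[token] = url
  "https://example.com/r/#{token}"     # hypothetical redirect endpoint on your server
end

# When a request hits /r/:token, log the click and redirect to MARKERS[token].
puts tracked_link("http://example.com/article")   # => https://example.com/r/ab12cd34 (token varies)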
There is a somewhat hidden Twitter API feature that helps you understand how popular a particular link is: the link count API, http://urls.api.twitter.com/1/urls/count.json?url=
Something really outside of the box you can do, if you're set on analyzing multiple versions of exactly the same link without using markers, and if you're also using the Streaming API (firehose), would be to analyze the tweet views (using the link count API) on similar links that hit your server. The link that got the +1 boost in views is the one that hit your server. But that's about the extent of creative analysis you can get with your tweets, and more specifically the links; as mentioned, links are the only thing you're really able to analyze when it comes to Twitter.
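For reference, a minimal Ruby sketch against that count endpoint as it worked at the time (the endpoint has since been retired, so treat this as historical):

require "net/http"
require "json"
require "uri"

def tweet_link_count(url)
  api = URI("http://urls.api.twitter.com/1/urls/count.json?url=" +
            URI.encode_www_form_component(url))
  JSON.parse(Net::HTTP.get(api))["count"]   # payload was {"count": N, "url": "..."}
end

puts tweet_link_count("http://example.com/article")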
1) Is there a way to find out who has viewed one's tweet, or the number of people that have viewed a tweet? Crowdbooster claims to tell you how many impressions one received per tweet. How do they do it?
Yes. Sign up for Twitter Analytics at https://analytics.twitter.com (a free service provided by Twitter) and you can see how many impressions each tweet received, plus totals for specific dates or a date range.
2) Thousands and thousands of links are shared each day on Twitter. Can we find out which user has clicked the link in a tweet?
Yes, you can do this. Using a URL-shortening service like Bitly.com, you can track how many clicks you had from Twitter (only give out that Bitly link on Twitter for this to work). But if you want more in-depth information, you may need to create tracking software, as I don't know of any available. To do that, you would have the tracking software track the link, read the referrer header, and check whether it's from Twitter (or better yet, just give out a unique URL for your tweets); then you would use the Twitter API to find out the handle (username) of the visitor who clicked your link. Lastly, store this information in a database so you can review who clicked what link.
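A rough sketch of that referrer check in Ruby using WEBrick; the endpoint path and redirect target are hypothetical, and the handle lookup is left out:

require "webrick"

server = WEBrick::HTTPServer.new(Port: 8080)

# Hypothetical click-through endpoint: note Twitter referrers, then redirect.
server.mount_proc "/r" do |req, res|
  ref = req["Referer"].to_s
  if ref.include?("twitter.com") || ref.include?("t.co")
    # record req.path, ref, and a timestamp in your database here
  end
  res.set_redirect(WEBrick::HTTPStatus::Found, "http://example.com/article")  # hypothetical target
end

server.start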

How do search engines see dynamic profiles?

Recently, search engines have been able to index dynamic content on social networking sites. I would like to understand how this is done. Are there static pages created by a site like Facebook that update semi-frequently? Does Google attempt to store every possible user name?
As I understand it, a page like www.facebook.com/username is not an actual file stored on disk but shorthand for a query like "select username from users", with the result displayed on the page. How does Google know about every user? This gets even more complicated when things like tweets are involved.
EDIT: I guess I didn't really ask what I wanted to know about. Do I need to be as big as Twitter or Facebook for Google to build special ways to crawl my site? Will Google automatically find my users' profiles if I allow anyone to view them? If not, what do I have to do to make that work?
In the case of tweets in particular, Google isn't 'crawling' for them in the traditional sense; they've integrated with Twitter to provide the search results in real-time.
In the more general case of your question, dynamic content is not new to Facebook or Twitter, though it may seem to be. Google crawls a URL; the URL provides HTML data; Google indexes it. Whether it's a dynamic query that's rendering the page, or whether it's a cache of static HTML, makes little difference to the indexing process in theory. In practice, there's a lot more to it (see Michael B's comment below.)
And see Vartec's succinct post on how Google might find all those public Facebook profiles without actually logging in and poking around FB.
OK, that was vastly oversimplified, but let's see what else people have to say...
As far as I know Google isn't able to read and store the actual contents of profiles, because the Google bot doesn't have a Facebook account, and it would be a huge privacy breach.
The bot works by hitting facebook.com and then following every link it can find. Whatever content it sees on the page it hits, it stores. So even if it follows a dynamic url like www.facebook.com/username, it will just remember whatever it saw when it went there. Hopefully in that particular case, it isn't all the private data of said user.
Additionally, facebook can and does provide special instructions that search bots can follow, so that google results don't include a bunch of login pages.
Profiles can be linked from outside.
The site may provide a sitemap.
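For example, a hypothetical robots.txt can steer crawlers away from login pages and point them at a profile sitemap (paths invented for illustration):

User-agent: *
Disallow: /login
Sitemap: https://www.example.com/sitemap-profiles.xml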

How to get product information from amazon, just based on the URL?

I just have a link to a product page at Amazon. How do I get all the information (photo, price, etc.) in my Ruby program, just using this link?
Here's the list of supported URLs as disclosed by Amazon for their oEmbed endpoint; the Product Advertising API comes into the picture only after parsing these URLs and getting the ASINs (see the sketch after the list):
http://*amazon.*/gp/product/*
http://*amazon.*/*/dp/*
http://*amazon.*/dp/*
http://*amazon.*/o/ASIN/*
http://*amazon.*/gp/offer-listing/*
http://*amazon.*/*/ASIN/*
http://*amazon.*/gp/product/images/*
http://*amazon.*/gp/aw/d/*
http://www.amzn.com/*
http://amzn.com/*
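As referenced above, a rough Ruby sketch that pulls the ASIN out of the common /dp/, /gp/product/ and /ASIN/ forms (the amzn.com short links would need a redirect-follow first, and the less common forms aren't covered):

# Pull the 10-character ASIN out of an Amazon product URL, or nil if absent.
def extract_asin(url)
  url[%r{/(?:dp|gp/product|gp/aw/d|(?:o/)?ASIN)/([A-Z0-9]{10})}i, 1]
end

puts extract_asin("http://www.amazon.com/Kindle-Amazons-Wireless-Reading-Generation/dp/B00154JDAI/ref=amb_link_84372271_1")
# => "B00154JDAI"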
I found this library (I'm using Rails)
amazon-ecs
I'm experimenting with it. Still, I'd require some kind of ID (product ID?) to get details of a particular product. For example, consider this link to the Kindle:
http://www.amazon.com/Kindle-Amazons-Wireless-Reading-Generation/dp/B00154JDAI/ref=amb_link_84372271_1?pf_rd_m=ATVPDKIKX0DER&pf_rd_s=center-1&pf_rd_r=06JJGQP9J3BHKPE38SXP&pf_rd_t=101&pf_rd_p=478184871&pf_rd_i=507846
In that link, I noticed the ASIN, which is B00154JDAI.
Looks like I can use this ID to get product information (using amazon-ecs). I just need to parse the URL to get the ASIN.
Is there any other way to do it?
No, I am not going to do screen scraping; that is never a good idea.
If you want to do this, the Nokogiri or hpricot libraries both allow HTML parsing and searching. However, this kind of screen-scraping is notoriously unreliable (as it may break any time Amazon decides to reorganize their HTML), so if you're planning to do this sort of thing for any length of time I'd recommend leveraging the Amazon Product Advertising API instead.
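For completeness, a minimal Nokogiri sketch; the CSS selector is hypothetical and will break whenever Amazon reworks its markup, which is exactly the fragility mentioned above:

require "nokogiri"
require "open-uri"

# Fetch the product page and try a (hypothetical) selector for the title.
doc   = Nokogiri::HTML(URI.open("http://www.amazon.com/dp/B00154JDAI"))
title = doc.at_css("#productTitle")&.text&.strip
puts title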
In your program: fetch the page and parse the HTML, then filter out the required information. There may be some Ruby libraries (that I am unaware of) which parse HTML.
hpricot seems to do what you want.
You should use the Ruby/AWS library (Google for it; my karma is not high enough to allow external links...). It has been written exactly for that.
You might need to use the built-in Search to find the item you're looking for. After that, the API gives access to pictures, links and all usable information.
