"Share this on..." URLs - hyperlink

I have to create a bunch of these "Share this on" links for Technorati, Digg, Facebook, Reddit, del.icio.us, StumbleUpon, MySpace and so on. It is very easy to find icons online for this task, but it is a little more difficult to find which URLs I should link to.
Is there an up-to-date list of all these services? Of course I could copy them from other sites that have them, but I am not sure they are up to date, and moreover there may be some GET parameter I want to set differently.
EDIT: I do not really care about services that do this for me; I just need the addresses. Among the reasons I dislike the external services is that I can't customize the buttons.

Try AddThis - it is a free service that allows you to embed a ton of these things pretty easily.

You should not reinvent the wheel. There are plenty of tools that already do this, as well as keeping the information up-to-date for you.
http://sharethis.com/ is the first one I found on Google.

AddThis and ShareThis offer over 350 ways of sharing links to your site.
If you don't want to depend on them, I'm afraid you do have to reinvent the wheel.
Check out the APIs of Facebook and Twitter to start with.
Then continue with the rest, or surrender to AddThis.
On Facebook's developer pages (link above), it's very easy to find this example (which I assume is what you want, but for all social networks):
<html>
<head>
<title>My Great Web page</title>
</head>
<body>
<iframe src="http://www.facebook.com/plugins/like.php?href=YOUR_URL"
scrolling="no" frameborder="0"
style="border:none; width:450px; height:80px"></iframe>
</body>
</html>
Have you inspected the JavaScript from AddThis and ShareThis yet?
Good luck!
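If you do go the manual route, the pattern is the same for every service: a fixed endpoint plus URL-encoded GET parameters. Here is a minimal sketch. The Facebook, Twitter and Reddit endpoints shown are the commonly documented ones, but these patterns change over time, so verify each one against the service's current docs before relying on it:

```javascript
// Build "share this" links by hand: one endpoint per service, with the
// page URL (and optionally a title) passed as URL-encoded GET parameters.
// Treat the endpoint list as illustrative, not authoritative.
const SHARE_ENDPOINTS = {
  facebook: function (url) {
    return "https://www.facebook.com/sharer/sharer.php?u=" + encodeURIComponent(url);
  },
  twitter: function (url, title) {
    return "https://twitter.com/intent/tweet?url=" + encodeURIComponent(url) +
           "&text=" + encodeURIComponent(title || "");
  },
  reddit: function (url, title) {
    return "https://www.reddit.com/submit?url=" + encodeURIComponent(url) +
           "&title=" + encodeURIComponent(title || "");
  }
};

function shareLink(service, url, title) {
  return SHARE_ENDPOINTS[service](url, title);
}
```

Since every builder just concatenates encoded parameters, adding another service (or tweaking a GET parameter, as the question asks) is a one-line change.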


Show disclaimer site when opening external link

I'm running TYPO3 v7.6.4.
I already looked into existing plugins and even into writing my own... but I can't find a solution.
My goal is pretty simple:
Show a simple disclaimer page whenever the user clicks a link to any external page.
Is there any easy ways to accomplish this?
The easiest way would in fact be to add an on('click') event handler to all links. This would be additional JavaScript and would work with all existing content. Figuring out whether a link refers to an external site should be easy (exclude relative URLs and match absolute URLs against your base URL).
However, if this is a legal requirement, you should decide whether JavaScript works for you, because with JS disabled the disclaimer would not be triggered.
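The external-link check described above can be sketched as a small helper plus a delegated click handler. This is a generic browser-side sketch, not TYPO3 API; the `/disclaimer` path is a placeholder for whatever disclaimer page you set up:

```javascript
// Decide whether a link target is external to the current site.
// Relative URLs resolve against the base host, so they count as internal.
function isExternalLink(href, baseHost) {
  const url = new URL(href, "https://" + baseHost + "/");
  return url.host !== baseHost;
}

// In the browser this would back a single delegated handler on the
// document, so it covers all existing and future links:
//
// document.addEventListener("click", function (e) {
//   const a = e.target.closest("a[href]");
//   if (a && isExternalLink(a.getAttribute("href"), location.host)) {
//     e.preventDefault();
//     location.href = "/disclaimer?target=" + encodeURIComponent(a.href);
//   }
// });
```

Using one delegated handler avoids having to re-attach listeners when TYPO3 renders new content into the page.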

Linking web page domains

So I have bought two domain names for my new business: one is a .org through Google and the other is a .co from Pop! Is there any way I can link the two, so that if someone goes to the .org page they are automatically redirected to the .co URL? Thanks :)
Yes, there are a couple of ways, but the easiest would be to configure domain forwarding. I assume you're using Google Domains? I wish I could help more, but I don't have an invite to Google Domains, so I can't provide links to their docs... there seems to be a link right off their splash page, though.
You could add a meta tag to your page for redirection (example: <meta http-equiv="refresh" content="0; url=http://yourdomain.com/">), but since you mentioned using Google Domains, try this out: https://support.google.com/domains/answer/4522141?hl=en
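Domain forwarding (or a server-side 301) is the proper fix, since it works without any page loading at all. If you ever need a page-level fallback instead of the meta refresh, the rewrite it performs looks like this. The domain names here are placeholders, not your actual domains:

```javascript
// Map a URL on the old .org domain to the same path on the new .co
// domain, preserving path and query string. In the browser you would
// call location.replace(redirectTarget(location.href)).
function redirectTarget(currentUrl) {
  const url = new URL(currentUrl);
  url.host = url.host.replace(/\.org$/, ".co");
  return url.toString();
}
```

Note that client-side redirects only help visitors who load the page; DNS-level forwarding or an HTTP 301 also tells search engines which domain is canonical.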

Why do sites still use <meta name="keywords">? [duplicate]

This question already has answers here:
Keywords meta tag: Useful or time waster?
(2 answers)
Closed 8 years ago.
The keywords meta tag seems like a staggering dinosaur from the early days of failing to trick Google. Search engines naturally prioritize actually readable words, as users don't want information they can't see, right?
So why do Tumblr and YouTube automatically insert keywords meta tags?
A YouTube watch page:
<meta name="keywords" content="sonic the hedgehog, final fantasy, mega man x, i swear i am not abusing tags, newgrounds, flash">
Tumblr's official staff blog:
<meta name="keywords" content="features,HQ Update,Tumblr Tuesday,Esther Day" />
In both cases, the keywords are taken from explicitly user-entered tags. YouTube takes them from any tags the uploader specified, and Tumblr takes the first 5 post tags on the page. (Tumblr even automatically inserts these tags on every blog page, with no way to opt out.)
There must be some reason they're going through this trouble, right? Are they for older/smaller search engines? Internal analytics? I can't imagine it's an enormous strain on their servers, but their existence shows they prioritize something highly enough to accept the small additional load.
Firstly, it's not much trouble: the tags are already defined. Secondly, just because Google won't use the metadata exclusively doesn't mean that Google or other sites can't use it at all. It's provided in an easy-to-read place for programs that need it. Parsing HTML can be hard, especially when your site is constantly changing, so providing a constant place for tags with little to no effort is just something that they do.
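The "easy to read place" point is concrete: a consumer can pull the tag list without rendering the page or knowing anything about the site's markup. A rough sketch (regex-based extraction is brittle and only suits this simple, fixed pattern; a real consumer would use an HTML parser):

```javascript
// Extract the comma-separated keyword list from a page's
// <meta name="keywords"> tag, if present.
function extractKeywords(html) {
  const m = html.match(/<meta\s+name=["']keywords["']\s+content=["']([^"']*)["']/i);
  if (!m) return [];
  return m[1].split(",").map(function (s) { return s.trim(); });
}
```

Run against the Tumblr example above, this yields ["features", "HQ Update", "Tumblr Tuesday", "Esther Day"] with no knowledge of Tumblr's page structure at all, which is exactly the convenience the answer describes.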

Search engines ignoring meta description content and showing footer

I have a site that is very simple and has mostly images, a login form, and a link to sign up. No actual text exists in the body except for the footer, which links to the usage terms and copyright notice.
My site is currently showing up on search engine results with the footer content showing instead of what I put in the <meta name="description"...> tag. Why is this?
How can I stop the search engines from indexing my site with the footer content showing? Or at least show the meta description first? Do I need to put some text in a title attribute or alt attribute somewhere?
As +Filburt pointed out, you could add your site to Webmaster Tools, which will offer you valuable information about your site's presence on the web and in the Google search results. It may also provide hints about what we think of your meta descriptions :)
Generally, you will want to
write the meta description to be as natural as possible, don't stuff keywords in it,
describe the page's content accurately in this tag,
and to have a unique meta description for each page.
While we can't guarantee that the meta description you provided will be used as the search result snippet, following the above tips will greatly increase the chances.
Here is some more information about the meta description tag: http://www.google.com/support/webmasters/bin/answer.py?answer=35264
It works to some extent to use <meta name="description" />, but Google will complain (and choose to ignore it) if every page has the same description.
If you are interested in how Google deals with your site you could sign up for their Webmaster Tools - they offer a good starting point for SEO-ing your site.
You could add content invisible to your visitor but Google checks this and considers hidden content as cheating for page rank because this used to be a common SEO technique.
Meta tags were a failure and have been broadly ignored ever since the Google era began.
The problem was, humans would put stale, inaccurate, or irrelevant information in the meta tags. This was fifteen years ago when cataloging the Internet still seemed feasible.
Google came along and decided that what a web page actually says was more useful. Everybody else followed suit shortly after.
Now people are trying human-authored metadata again; they're calling it the "semantic web". My hopes are not high.

Google sees something that it shouldn't see. Why?

For some mysterious reason, Google has indexed both of these addresses, which lead to the same page:
/something/some-text-1055.html
and
/index.php?pg=something&id=1055
(A short note: the site has had friendly URLs since its launch. I have no idea how Google found the "index.php?" URL; "unfriendly" URLs appear only in the content management system, which is password-restricted.)
What can I do to solve the situation? (I have around 1000 pages that are double-indexed.) Somebody told me to use "Disallow: /index.php?" in the robots.txt file.
Right or wrong? Any other suggestions?
You'd be surprised at how pervasive and quick the Google bots are at indexing site content. That, combined with the fact that many CMSes create unintended pages/links, makes it likely that those links were exposed at some point; that is the most likely culprit. It's also possible your administration area isn't as secure as you think and the Google bot got through that way.
The well-behaved, and Google-recommended, things to do here are:
If possible, create 301 redirects from your query-string-style URLs to your canonical-style URLs. That's you saying "hey there, web bot/browser, the content that used to be at this URL is now at this other URL."
Block the query-string content in your robots.txt. That's like asking the spiders or other automated programs "Hey, please don't look at this stuff. These aren't the URLs you're looking for."
Google now allows you to specify a canonical URL via a <link /> tag in the top of your page. Consider adding these in.
As to whether doing the well-behaved things is the "right" thing to do re: Google rankings... who knows. Only "Google" knows how their algorithms work now and will work in the future, and by "Google" I mean a bunch of engineers and executives with conflicting goals for how search should work.
Google now offers a way to specify a page's canonical URL. You can use the following code in your HTML to tell Google your canonical URL:
<link rel="canonical" href="http://www.example.com/product.php?item=swedish-fish" />
You can read more about canonical URLs on Google on their blog post on the subject, here: http://googlewebmastercentral.blogspot.com/2009/02/specify-your-canonical.html
According to the blog post, Ask.com, Microsoft Live Search and Yahoo! all support the canonical tag.
If you use sitemap generators to submit to search engines, you'll want to disallow those URLs there as well; sitemaps are likely where Google got your links, beyond crawling your folders and checking your logs.
Better to check which URI was requested ($_SERVER['REQUEST_URI']) and redirect if it was /index.php.
Changing robots.txt will not help, since the pages are already indexed.
Best is to use a permanent redirect (301).
If you want to remove a page once it has been indexed by Google, the only way, more or less, is to make it return a 404 Not Found status.
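The check-and-301 logic described above is small. Here it is sketched in JavaScript rather than PHP; the slug lookup is deliberately left as a function parameter, because mapping an id like 1055 back to its friendly slug is application-specific (typically a database lookup), and the path pattern mirrors the question's example URLs:

```javascript
// Given the requested URI, return the canonical friendly URL to 301 to,
// or null if no redirect is needed. slugFor(pg, id) stands in for the
// app's own slug lookup (e.g. from the CMS database).
function canonicalRedirect(requestUri, slugFor) {
  const url = new URL(requestUri, "http://example.com");
  if (url.pathname !== "/index.php") return null;
  const pg = url.searchParams.get("pg");
  const id = url.searchParams.get("id");
  if (!pg || !id) return null;
  return "/" + pg + "/" + slugFor(pg, id) + "-" + id + ".html";
}
```

The server would send a 301 status with this value in the Location header, so both browsers and search engines learn that the query-string URL has permanently moved to the friendly one.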
Is it possible you're posting a form to a similar URL and Google is simply picking it up from the source?