I'm running TYPO3 v7.6.4.
I already looked into existing plugins and even into writing my own... but I can't find a solution.
My goal is pretty simple:
Show a simple disclaimer page whenever the user clicks a link to any external page.
Is there an easy way to accomplish this?
The easiest way would in fact be to add an on('click') event handler to all links. That is just additional JavaScript and works with all existing content. Figuring out whether a link refers to an external site should be easy: exclude relative URLs and match absolute URLs against your base URL.
However, if this is a legal requirement, you should decide whether JavaScript is acceptable for you, because with JS disabled the disclaimer would never be triggered.
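A minimal sketch of that idea in plain JavaScript (the /disclaimer path and the target parameter are assumptions for illustration, not anything TYPO3 provides):

// Intercept clicks on links that point to another host and route them
// through a hypothetical disclaimer page first.
document.addEventListener('click', function (event) {
  var link = event.target.closest('a[href]');
  if (!link) return;
  // Relative URLs resolve to the current host, so comparing hostnames
  // covers both relative and absolute links.
  if (link.hostname === window.location.hostname) return;
  event.preventDefault();
  window.location.href = '/disclaimer?target=' + encodeURIComponent(link.href);
});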
I'm developing a SPA web app and it will support various languages. It is built with AngularJS and I am using angular-translate to provide i18n.
But I am struggling a little bit with what the URL structure should be. I do not plan on using either gTLDs or ccTLDs, so that leaves me with three options.
Use query params: ?locale=en-us
Use url paths: /en-us/page
Store the chosen locale in localStorage or a cookie
The first option is a no-go according to Google's SEO guidelines for web apps. So that leaves me with the last two options.
I have a hard time deciding which is more beneficial, though I am inclined to believe that using URL paths would probably be more crawler-friendly.
P.S.: I'm not sure if this is the best place to ask such a question either.
The second option is your safest bet: according to https://webmasters.stackexchange.com/questions/59652/what-happens-if-i-try-to-set-a-cookie-on-a-bot, cookies are ignored by bots. You can test this yourself by going to Google Search Console and fetching your website.
As of now most crawlers ignore cookies and DO NOT execute JavaScript. This means that they usually just download the HTML and make their judgements from there.
Some developers get around the no-JavaScript problem by pre-rendering parts of their content. I haven't done it personally, but you might want to check out https://prerender.io/
Edit
As rolandjitsu mentioned, Google crawls and executes JavaScript content.
You should go with the second option: provide the language tag (and, optionally, region subtags) in the URL path as the first segment.
For the simple reason that it allows you, visitors, and bots to link to specific translations.
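A rough sketch of how that might look in an AngularJS app with angular-translate (the route, template, and controller names are illustrative assumptions, and the translation tables/loader setup is omitted for brevity):

var app = angular.module('app', ['ngRoute', 'pascalprecht.translate']);

app.config(function ($routeProvider, $translateProvider) {
  // Fallback when the URL carries no locale segment.
  $translateProvider.preferredLanguage('en-us');
  // The locale is the first path segment, e.g. /en-us/page.
  $routeProvider.when('/:locale/page', {
    templateUrl: 'page.html',
    controller: 'PageCtrl'
  });
});

app.controller('PageCtrl', function ($routeParams, $translate) {
  // Switch angular-translate to whatever locale the URL names.
  $translate.use($routeParams.locale);
});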
I have a link that I would typically give href="#". I would like the URL to stay the same when it is clicked, but it seems like Backbone copies the link's href into the URL no matter what it is. I even put
<a href="javascript:alert('true')">Link</a>
and the browser's URL was "localhost:5000/javascript:alert('true')".
How can I get Backbone to refrain from copying the link into the browser's URL?
This is one of those "it's a feature not a bug" type of things. Backbone does that on purpose, for (at least) two reasons:
it gives users a URL they can copy/paste, email to each other, etc. that still takes them to the correct place in your site; without such URL manipulation that's impossible
it allows for browser-based back/forward functionality (in browsers that don't yet support the history API)
There are probably other reasons too, but that's all I can think of at the moment. The point is, this is what the Backbone router is supposed to do. Using it, and then wondering how to make it not manipulate the URL, is somewhat akin to using a <span> on the page and asking how to let the user edit its text.
If you don't want that functionality, don't use the Router at all; just have your views invoke each other.
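A minimal sketch of that approach (the view names, the a.no-url selector, and the #content container are made up for illustration; assumes Backbone, Underscore, and jQuery are loaded):

var DetailView = Backbone.View.extend({
  render: function () {
    this.$el.text('Detail content');
    return this;
  }
});

var NavView = Backbone.View.extend({
  events: {
    // preventDefault keeps the href out of the address bar.
    'click a.no-url': 'openDetail'
  },
  openDetail: function (e) {
    e.preventDefault();
    // Invoke the next view directly instead of going through a router,
    // so the browser URL never changes.
    var detail = new DetailView();
    $('#content').html(detail.render().el);
  }
});

Attach it with new NavView({ el: 'nav' }) (or whatever element wraps your links) so the click handler covers them.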
Could anyone please tell me how the site http://www.outsharked.com/imagemapster/default.aspx?what.html works this way, modifying the URL without loading/reloading the page? I don't think this is done with HTML5, because it works in IE6, which doesn't support HTML5.
I created that site. The commenter is correct, it uses Javascript to change the URL. There's nothing about how that navigation works that is different for IE6 - that browser supports the necessary client-side functionality to do this kind of thing. The basic functionality involves:
capturing click events on the nav, and loading the inner content via AJAX
updating the URL to reflect a working direct URL for the target.
The links are also valid anchor links that, in the absence of JavaScript, would go to the same page (but load the whole thing). This is your basic AJAX web site setup with one minor difference. It's common practice to use URLs like this in AJAX/single-page web sites:
http://mysite.com/home#somepage
or even just
http://mysite.com/#somepage
Where the hashtag part represents the actual page a user has navigated to. If someone accessed that url directly, e.g. from outside the site, the site would use Javascript to load the correct content based on the hashtag, after the page had loaded. This means that there might be a little delay for the inner content to reflect the correct page, since it has to run another request after the initial page has loaded from the browser to get the inner content via AJAX.
I was trying to avoid that by creating a setup that worked completely with and without Javascript. If you go directly to a URL within the site such as http://www.outsharked.com/imagemapster/default.aspx?faq.html you will notice it loads the content directly. This URL will work even if Javascript is disabled. You can't actually do this using hashtags, since hashtag content is not sent to the server. Only the client knows what's after the hashtag in a URL. That's why I was using query strings to represent inner pages.
This site architecture was sort of an experiment at the time. It works pretty well but the code isn't fantastic, I didn't really do anything else with it, and I'm sure there are other better-fleshed-out/tested/full-featured frameworks out there to do much the same thing.
But it might not be a bad example of the nuts and bolts of creating a basic AJAX navigation setup, as a learning tool, since it's pretty concise, and also does HTML5 history navigation (e.g. so the back button works on modern browsers).
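A bare-bones modern sketch of that pattern, using fetch and history.pushState rather than the query-string fallback the site itself uses (the #content container and the nav selector are illustrative assumptions):

// Load a page fragment into the content area, optionally recording it
// in the browser history so the URL reflects the inner page.
function loadPage(url, push) {
  fetch(url)
    .then(function (res) { return res.text(); })
    .then(function (html) {
      document.getElementById('content').innerHTML = html;
      if (push) history.pushState({ url: url }, '', url);
    });
}

// Intercept nav clicks; without JavaScript the links still work as
// plain anchors that load the whole page.
document.addEventListener('click', function (e) {
  var link = e.target.closest('nav a[href]');
  if (!link) return;
  e.preventDefault();
  loadPage(link.getAttribute('href'), true);
});

// Make the back/forward buttons restore the right content.
window.addEventListener('popstate', function (e) {
  if (e.state && e.state.url) loadPage(e.state.url, false);
});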
It would be great if you guys could shed some light on this; it has baffled me:
I was asked by a client if I could try and make the search term for his comedy night "sketchercise" put his website top of the Google ranking. I simply changed the title tag of the header for the whole site from "Allnutt and Simpson" to "Allnutt and Simpson - Sketchercise # Ginglik - Sketch Duo". It did the trick and now the site comes up top of the Google listing when typing in "sketchercise". However, it gives off this very strange link:
http://www.allnuttandsimpson.com/index.php/videos/
This is the link to the Google search result too:
http://www.google.co.uk/search?sourceid=chrome&ie=UTF-8&q=sketchercise
This link is invalid; it doesn't make any sense. I guess it has something to do with the use of hash tags and the AJAX-driven site, but before I changed the title tag, it linked to the site fine using the # tags. What is the deal with this slash?
The strangest part is that the valid URL for the videos page on that site is /index.php#vidspics; I have never used the word "videos" in a URL!
If anyone can explain the cause of this or just help me stop it from happening, I'd be very grateful. I realise that this is an SEO question and I hate that stuff generally, but I hope you can see this is a bit of a strange case!
Just to compare: if you google "allnutt and simpson", it works just fine, linking to the site and all of its pages absolutely fine as .php pages (and then my JS converts them to hash tags to keep things clean).
It's because there must be a folder called 'videos' under your hosted files; use an FTP client and check this.
Google crawls every folder and file unless you tell it not to; look into robots.txt files to learn how to avoid indexing.
Also, ask Google to remove that result once you solve this.
Finally, that behaviour is not related to hash tags; those are just references used by JavaScript to display the appropriate content in your webpage.
Not sure why it's posted like this, but the only way to stop that page from appearing is to use a Google webmaster account for this website and make sure the crawlers can't find this link anymore. The alternative is to have the site admin put this tag, <META NAME="ROBOTS" CONTENT="NOINDEX, NOFOLLOW">, in the header when isset($_REQUEST['videos']) is true.
The slash in the address is the parsed form of www.allnuttandsimpson.com/index.php?=videos. You can have the web server change all the PHP parameters into slashes to make the links look pretty.
The best option for correct results is to create a sitemap and submit it to https://www.google.com/webmasters/tools/ for that site. You will need access.
Oh, I forgot: the sitemap will make Google see all the pages you want it to index; use it for the major pages, like those in the main menu. Removing links you don't want requires a robots.txt in the main directory of the site.
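For example, a minimal robots.txt in the site root that keeps crawlers out of that folder might look like this (the /videos/ path is assumed from the question; note this only blocks crawling, it does not by itself remove an already-indexed URL, which is what the webmaster tools removal request and the noindex meta tag above are for):

User-agent: *
Disallow: /videos/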
Normally, a Web Intent is used as a pop-up. According to Twitter, it also provides embedding functionality.
"Some sites may prefer to embed the unobtrusive Web Intents pop-up Javascript inline or without a dependency to platform.twitter.com. The snippet below will offer the equivalent functionality without the external dependency."
The snippet can be found at https://gist.github.com/894540#file_intents.html
See: http://dev.twitter.com/pages/intents
However, I can't get this snippet to work. I copied the snippet (JavaScript) code into an HTML file and opened it in a browser. Nothing happened! What should I do to make it work?
I had the same problem during NYC Startup Weekend.
The snippet they provide does open up the Twitter popup, as required, but the ability of the Twitter popup window to pass a message back to your web page is a little more complicated. You will need to understand how their widgets.js code works and reproduce what's necessary to set up the RPC framework. My short-term workaround was to include a slightly modified (un-obfuscated) version of widgets.js that would not replace my button with theirs.
I will be tackling this in a week or two, if you can wait.
... or you can just include their widgets.js directly :)
Welcome to the vague world of the Twitter API documentation... I think you're misunderstanding, thanks in no small part to the incredibly poor wording on the API page:
Embedding that JavaScript code allows you to open the Twitter web intent pages in a "Twitter style" popup without requiring a dependency on platform.twitter.com. It does not allow you to embed a Twitter intent.
You can see it at work by adding that JavaScript and then adding the following to your page:
<a href="https://twitter.com/intent/tweet">New</a>
Clicking "New" will open it in a popup. Remove the JavaScript, and clicking "New" will navigate the page away to the tweet box.
It's incredibly frustrating to me that they don't, at least to my knowledge, allow for proper embedding. I know why they opted for this method, but it's frustrating nonetheless.