How to disallow certain pages from being viewed, e.g. success.xhtml - jsf-2

I would like to know how to control which contexts / pages are viewable by users.
For example, to allow /register.xhtml, but not /success.xhtml.
Thanks.

I found one possible way: placing such pages inside the WEB-INF folder, which the servlet container never serves directly.
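If moving the files under /WEB-INF is not an option, another commonly used approach is a security constraint in web.xml with an empty auth-constraint, which makes the container refuse direct client requests for the page (internal forwards, such as JSF navigation, are not checked by these constraints). A minimal sketch, using the page from the question:

<security-constraint>
    <web-resource-collection>
        <web-resource-name>Internal pages</web-resource-name>
        <url-pattern>/success.xhtml</url-pattern>
    </web-resource-collection>
    <!-- Empty auth-constraint: no role is granted access, so direct requests are refused -->
    <auth-constraint />
</security-constraint>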

Related

Ember search engine

I am creating an Ember app that has a search engine built into it, say for houses. My results change a lot as houses are added or removed / sold, so my search results change all the time.
I also have a page for each house with a "similar houses" section that shows an ever-changing list of houses similar to that one.
I am trying to find the best way to make this app crawlable by search engines.
I could, like Discourse, use noscript tags for each page, but since my house pages can hold different information and structure depending on the agent/seller, this would be a lot more work, basically duplicating what the client is doing!
I could go down the PhantomJS route, cache all my pages, and serve them via the _escaped_fragment_ method, but I am thinking this would be a resource-intensive approach with content changing so much. Also, with my house pages having similar-houses sections that can change depending on the user / location etc., I am not sure how to cache those sections.
Another method I am toying with is converting my page / section templates into server-side templates so I can render them on the server. For example, when a customer creates a house page via my Ember app in the format they require, they click publish and I convert the rendered HTML into a server-side template with placeholders etc. for the data.
Can anyone help with this? Any ideas / suggestions / advice would be great!
I think you've kind of answered your own question. This is all about trade-offs and finding the solution that is best for your particular case. There is no silver bullet.
Personally I go with something close to the noscript route, but instead of putting things inside noscript tags, I put them in regular divs with a class of no-ember, which are visible by default. Then, when the document is ready, I test whether the client supports pushState. If so, I initialize my Ember app and hide the no-ember divs; if not, all of the no-ember divs stay visible so the client can see/use the content like normal.
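A rough sketch of that check, assuming jQuery and a classic globals-mode Ember app are loaded (the no-ember class and the App variable are just this answer's examples):

$(function () {
    if (window.history && window.history.pushState) {
        // Capable client: boot the Ember app and hide the static fallback markup.
        window.App = Ember.Application.create();
        $('.no-ember').hide();
    }
    // Otherwise the .no-ember divs stay visible, so crawlers and older
    // browsers get the plain server-rendered content.
});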

MOSS: have certain pages available in all sites

I have a MOSS Publishing Site, say it's http://dev. It's basically a magazine site, with an issue for every month, so it's dev/2011-01, dev/2011-02, and so on.
There are some general pages like About.aspx and ContactUs.aspx which should be available in all issues. I don't want to create these pages in every issue/site. I know we can put the pages in the TEMPLATE\LAYOUTS folder.
But I don't really like that, because I want the pages to reside in the dev/Pages folder, so everything is in one repository instead of scattered here and there.
Is there any other way to achieve this? For example, a custom handler that would direct a request for dev/2011-01/Pages/About.aspx to dev/Pages/About.aspx.
You can always write a HttpHandler which will redirect the request for these pages.
However, the best way would be to:
- Put these pages in a feature
- Deploy the pages through a Module element in the feature, with Type="GhostableInLibrary" on each File
- Activate the feature on all magazine sites
This way your pages will reside on the file system in the feature folder, but will be visible on all sites as ghosted files. You can upgrade the feature to upgrade the files.
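A minimal sketch of such a Module element (the feature name, file names and paths are this answer's examples, not the asker's actual pages):

<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <Module Name="SharedPages" Url="Pages">
    <!-- Type="GhostableInLibrary" lists the file in the Pages library while
         the content stays ghosted on the file system -->
    <File Path="SharedPages\About.aspx" Url="About.aspx" Type="GhostableInLibrary" />
    <File Path="SharedPages\ContactUs.aspx" Url="ContactUs.aspx" Type="GhostableInLibrary" />
  </Module>
</Elements>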
I'm not sure why you would need a custom handler. Either you are leveraging the navigation, which means these pages exist on every subsite, or you are manually entering the URL. If it is the latter, why not point to the URL on the root site?
When I've done something like this in the past, I created a custom master page and added the URLs of the root-site pages to either the left nav or the footer (or both). You should be able to use a URL in the style of ~sitecollection/Pages/About.aspx.
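For example, in the master page the link could use the publishing infrastructure's $SPUrl expression so the ~sitecollection token is resolved at runtime (a sketch, not the asker's actual markup):

<asp:HyperLink runat="server"
    NavigateUrl="<% $SPUrl:~sitecollection/Pages/About.aspx %>"
    Text="About" />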

Rails: How to separate static content and the application while maintaining a connection between the two?

OK, this question might sound a bit weird; let me try to explain what I am trying to achieve here.
I need:
- some mostly static pages: home page, about us, etc. the usual suspects
- a full complex rails web app
The web app being the heart of the system will have a lot of stuff, including user authentication (with devise by the way). The application will have a standard navigation menu with possible actions changing depending on user status (login or not, admin or not, etc).
Until now, nothing out of the ordinary.
However, for unrelated reasons, I MUST have the entry point of the whole system be the home page, which will be hosted on another server (ergh).
So now, since my home page and other static pages will be on server A and the whole application will be on server B, how can I maintain a connection between the two?
Meaning: keep my navigation menu dynamic even on my static pages, and have a sign-in / sign-up form on my static server that registers the account on the "real" application server?
They can share the same database, no problem there.
Any pointers on how to do this? I would really prefer not to put iframes on the static site...
Thanks!
Alex
For the sign-in/sign-up stuff, you can have your form's action point to B and then redirect back to A.
To display the right stuff in the menus, you can make a JSONP call (as Chris said) to fetch either the entire header or the specific parts of the header that are dynamic.
If you are just looking to include the user's name, you can also simply store their name in a cookie and then use JavaScript to display it in the header.
If there's no cookie, display a link to log in / sign up.
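A tiny sketch of that cookie check in plain JavaScript (the cookie name user_name and the greeting element are hypothetical):

var match = document.cookie.match(/(?:^|;\s*)user_name=([^;]+)/);
document.getElementById('greeting').innerHTML = match
    ? 'Hello, ' + decodeURIComponent(match[1])
    : '<a href="/sign_in">Sign in</a>';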
Edit: for the JSONP calls, take a look at a JavaScript framework to make the call client-side. I personally use jQuery: http://api.jquery.com/jQuery.ajax (and look at the jsonp options).
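A rough sketch of such a call from a static page on server A to the Rails app on server B (the /header.json endpoint and the #site-nav element are made-up examples; the Rails side would need to render a JSONP response for them):

$.ajax({
    url: 'http://app.example.com/header.json',
    dataType: 'jsonp',  // JSONP works cross-domain without CORS
    success: function (data) {
        // Inject the server-rendered menu markup into the static page's nav
        $('#site-nav').html(data.menu_html);
    }
});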
Thinking out loud...
Can you dynamically build the menus using javascript/AJAX in the static code? Perhaps that could query server B (via jsonp) to determine the options...
It's going to have to do some "funky" (tm) stuff to track whether there is a user session or not... and to link them...

Disable link if user is not allowed to access target

I have created an ASP.NET MVC application and created different kinds of roles for my users. I have then created different AuthorizeAttributes for allowing/disallowing access to different actions in my controllers.
However, I have a lot of links that point to several of these actions that are restricted to certain roles. Is there some way to have these links disabled automatically? I could of course add a lot of User.IsInRole(...) checks in my code, but I would really prefer not to if there is a better way.
Do you have any suggestions?
Are they in a list or menu? Is this something your controller could pass to your view? You could pass down a list of all allowed (or forbidden, whichever is more appropriate) links and check it before you display a link:
if (allowedLinks.Contains(myLink))
{
    // render the enabled link
}
else
{
    // render the disabled version
}
The other good way would be to override the HtmlHelper for ActionLinks and make it do the permission check for you. Then, if the user does not have permission, your HTML helper would render the link disabled.
For examples, see this link http://www.asp.net/learn/mvc/tutorial-09-cs.aspx
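A rough sketch of such a helper, assuming ASP.NET MVC 2 or later (the SecureActionLink name and the explicit requiredRole parameter are this answer's inventions, not a built-in API):

using System.Web.Mvc;
using System.Web.Mvc.Html;

public static class SecureLinkExtensions
{
    // Renders a normal action link when the current user is in the required
    // role; otherwise renders the text as a non-clickable, "disabled" span.
    public static MvcHtmlString SecureActionLink(this HtmlHelper html,
        string linkText, string actionName, string controllerName, string requiredRole)
    {
        if (html.ViewContext.HttpContext.User.IsInRole(requiredRole))
        {
            return html.ActionLink(linkText, actionName, controllerName);
        }
        return MvcHtmlString.Create("<span class=\"disabled\">" +
            html.Encode(linkText) + "</span>");
    }
}

In the views you would then call Html.SecureActionLink(...) instead of Html.ActionLink(...).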
Off the top of my head...
• You can set the onClick action for each link to do nothing.
• You can set the URL for the link to "#", which does nothing.

Prevent bot from crawling certain areas of site

I don't know much about SEO and how web spiders work, so forgive my ignorance here. I'm creating a site (using ASP.NET MVC) which has areas that display information retrieved from the database. The data is unique to the user, so there's no real server-side output caching going on. However, since the data can contain things the user may not wish to have displayed in search engine results, I'd like to prevent any spiders from accessing the search results page. Are there any special actions I should take to ensure that the search results directory isn't crawled? Also, would a spider even crawl a page that's dynamically generated, and would any actions preventing certain directories from being searched mess up my search engine rankings?
Edit: I should add that I'm reading up on the robots.txt protocol, but it relies on cooperation from the web crawler. However, I'd also like to prevent data-mining users who will ignore the robots.txt file.
I appreciate any help!
You can prevent some malicious clients from hitting your server too heavily by implementing throttling on the server. "Sorry, your IP has made too many requests to this server in the past few minutes. Please try again later." In practice, though, assume that you can't stop a truly malicious user from bypassing any throttling mechanisms that you put in place.
Given that, here's the more important question:
Are you comfortable with the information that you're making available for all the world to see? Are your users comfortable with this?
If the answer to those questions is no, then you should be ensuring that only authorized users are able to see the sensitive information. If the information isn't particularly sensitive but you don't want clients crawling it, throttling is probably a good alternative. Is it even likely that you're going to be crawled anyway? If not, robots.txt should be just fine.
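If throttling is the route taken, a rough sketch of per-IP throttling as an MVC action filter might look like this (the attribute name and limits are illustrative, not from the question; a production version would want a sturdier counter store than the ASP.NET cache):

using System;
using System.Web;
using System.Web.Caching;
using System.Web.Mvc;

public class ThrottleAttribute : ActionFilterAttribute
{
    public int RequestLimit { get; set; }    // e.g. 60 requests
    public int WindowSeconds { get; set; }   // e.g. per 60 seconds

    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        string key = "throttle_" + filterContext.HttpContext.Request.UserHostAddress;
        int count = (HttpRuntime.Cache[key] as int?) ?? 0;

        if (count >= RequestLimit)
        {
            // Over the limit: short-circuit the action with a friendly message.
            filterContext.HttpContext.Response.StatusCode = 429;
            filterContext.Result = new ContentResult
            {
                Content = "Sorry, your IP has made too many requests to this server " +
                          "in the past few minutes. Please try again later."
            };
            return;
        }

        // Simplified counter: the expiry window restarts on every request.
        HttpRuntime.Cache.Insert(key, count + 1, null,
            DateTime.UtcNow.AddSeconds(WindowSeconds), Cache.NoSlidingExpiration);
    }
}

It could then be applied as [Throttle(RequestLimit = 60, WindowSeconds = 60)] on the search results action.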
It seems like you have two issues.
The first is a concern about certain data appearing in search results; the second is about malicious or unscrupulous users harvesting user-related data.
The first issue will be covered by appropriate use of a robots.txt file as all the big search engines honour this.
The second issue seems more to do with data privacy. The first question which immediately springs to mind is: If there is user information which people may not want displayed, why are you making it available at all?
What is the privacy policy for such data?
Do users have the ability to control what information is made available?
If the information is potentially sensitive but important to the system could it be restricted so it is only available to logged in users?
Check out the Robots exclusion standard. It's a text file that you put on your site that tells a bot what it can and can't index. You will also want to address what happens if a bot doesn't honour the robots.txt file.
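For example, a robots.txt at the site root that keeps well-behaved crawlers out of the search results area could look like this (the /Search/ path is just an example for this question):

User-agent: *
Disallow: /Search/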
robots.txt file as mentioned. If that is not enough then you can:
Block unknown user agents - hard to maintain, and easy for a bot to forge a browser's user agent (although most legitimate bots won't)
Block unknown IP addresses - not useful for a public site
Require logins
Throttle user connections - tricky to tune, you will still be disclosing information.
Perhaps use a combination. Either way it is a trade-off: if the public can browse to it, so can a bot. Be sure you don't block and alienate people in your attempts to block bots.
a few options:
force the user to login to view the content
add a CAPTCHA page before the content
embed content in Flash
load dynamically with JavaScript
