So my web APIs depend a lot on the user's current time for authorization. The problem is that the user's current time is almost always different from the server's datetime. I was wondering if anybody can give me a suggestion as to how to deal with this properly.
My first solution was to convert every time to GMT 0400. But it seems like I'd have to store the user's location or something like that, and I'm not really sure how to do it.
Thanks!
Welcome to the wonderful horrible world of localization. There's no easy way to handle this type of stuff in a reliable way. It takes work and lots of testing. First off, you should dump .NET's built-in DateTime localization immediately. Use something like NodaTime. It will make your life much easier (especially when it comes to testing that your localization code actually works).
The chief problem you're going to have is that there's no reliable way to get the user's timezone server-side. You have two options:
Just have the user explicitly choose their timezone and store it in their profile for future use. This is obviously the most reliable method, but it means you'll either need to force your users to enter this information or resort to some default plan if it's missing.
Use JavaScript (you can see the methodology here, and a sketch below). Essentially, you can use JS to set the value of a hidden field or send the info with AJAX. Obviously the user's client will need to have JS support and have it enabled (a pretty safe bet in 99.99% of cases, but there are still screen readers and such that don't have JS support, and some users prefer to disable JS out of security concerns).
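Here's a minimal sketch of the JavaScript approach. The hidden-field id ("tz") and the "/api/profile/timezone" endpoint are made-up placeholders, not anything from your app:

    // Grab the user's timezone client-side and hand it to the server.
    // The hidden-field id ("tz") and "/api/profile/timezone" are placeholders.
    const tzName: string = Intl.DateTimeFormat().resolvedOptions().timeZone; // e.g. "America/New_York"
    const tzOffsetMinutes: number = new Date().getTimezoneOffset();          // minutes behind UTC

    // Option A: stash it in a hidden form field so it rides along with the next POST.
    const hidden = document.getElementById("tz") as HTMLInputElement | null;
    if (hidden) {
      hidden.value = tzName;
    }

    // Option B: send it up immediately via AJAX.
    fetch("/api/profile/timezone", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ tzName, tzOffsetMinutes }),
    });

Once the server knows the user's timezone, store it against the profile (option 1 above) so you don't have to re-detect it on every request.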
However, when it comes to using timestamps in authorization, typically only the server time matters anyway. The only use I know of is creating digests to prevent replay attacks, but there the timestamp is created based on server time and then validated against server time. How is your use case different?
I have a new feature in my Rails project. I need to insert a "New!" flag in its menu, so users will notice that a new feature is available. Once the new feature page is visited, this "flag" must disappear.
How can I do this with Ruby on Rails?
The absolute simplest way is to look for a sawFeatureX cookie and set it when the page is rendered or the user dismisses the notification.
A more robust solution would be to store the info on the user model in the db, but that ends up giving you a lot of one-off boolean fields which may or may not be what you want.
There are MANY variations. You could use something like HelloBar to point out the new content without inlining it into the menu. So. Many. UX. Variations.
But for a one-time thing, a cookie or db-backed solution seems simple and easy.
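If you go the cookie route, here's a rough client-side sketch (you could just as easily read and set the same cookie from your Rails controller and view). The sawFeatureX name comes from above; the element ids are made up:

    // Hide the "New!" flag once the user has seen the feature.
    // sawFeatureX is the cookie name from above; #new-flag and #feature-x-link
    // are hypothetical element ids.
    function hasSeenFeatureX(): boolean {
      return document.cookie.split("; ").some((c) => c.startsWith("sawFeatureX="));
    }

    function markFeatureXSeen(): void {
      // Keep it for roughly a year; adjust path/expiry to taste.
      document.cookie = "sawFeatureX=1; path=/; max-age=" + 60 * 60 * 24 * 365;
    }

    document.addEventListener("DOMContentLoaded", () => {
      if (hasSeenFeatureX()) {
        document.getElementById("new-flag")?.remove();
      }
      // Set the cookie once the user actually clicks through to the feature.
      document.getElementById("feature-x-link")?.addEventListener("click", markFeatureXSeen);
    });

The same check can of course live server-side in the Rails view, so the flag is never rendered at all for users who have the cookie.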
I hate this problem.
A cookie is easy, but gross and doesn't scale. You really don't want to pay the price of sending this data back and forth on every request until the end of time.
Saving on the user record seems like a sin against database design.
A separate DB table with all these "I saw feature X" rows seems like such overkill, and I hate having something that's just going to grow without bound sitting in my main DB.
You can put it in Redis or memcached, but do you really need to store it in RAM? That's the most expensive place to keep this.
I think the ideal solution is something like https://www.prefab.cloud/documentation/once_and_only_once, which is a service (I wrote it) that stores this little "bob saw X" off in a database I don't need to manage or care about. It handles caching etc. so that it's as fast as having it in Redis, but durable and doesn't get expired.
For a Rails 3 app I am building, a user gets to share a post which has numerous different parameters. Some parameters are optional, others are required. While the user is filling out the parameters, I want to generate a preview of how the post will look on the fly. Some parameters are URLs which need to be sent back to the server to process, so basically, the preview cannot be 100% generated client-side.
I was wondering what is the best way to go about this. Since it could be a lot of data, I don't want to send all of it back to the server every time something changes just to regenerate the preview. I would rather send only the data that has changed. But in that case, where is the rest of the data stored? In a session, perhaps? Also, I would prefer not to rebuild the model object with all the data every time. Is there a way to persist the model object that represents the post as it is being created?
Thanks.
How big is that "a lot of data"? If you send it all, does it have a noticeable impact on performance, or are you just imagining that it would?
As you haven't provided much information, here's the basic approach I would take:
Process as much as possible client-side.
Send the data that can't be processed on the client to the server (only that part, not the rest of it), receive the result of the processing, and incorporate it into what you already built (see the sketch after this list).
No sessions, partially built models, or any other state on the server. Stateless protocols are simple, and simplicity is a prerequisite for reliability.
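Here's a minimal sketch of that flow, assuming a hypothetical /posts/preview_urls endpoint and made-up element ids: everything that can be rendered locally is, and only the URL parameters take a round trip.

    // Render everything we can locally; only the URL parameters (which need
    // server-side processing) take a round trip. The /posts/preview_urls
    // endpoint and the element ids are hypothetical.
    interface PostDraft {
      title: string;
      body: string;
      urls: string[];
    }

    async function renderPreview(draft: PostDraft): Promise<void> {
      // 1. Everything that can be rendered client-side is rendered client-side.
      document.getElementById("preview-title")!.textContent = draft.title;
      document.getElementById("preview-body")!.textContent = draft.body;

      // 2. Only the part the client can't process goes to the server.
      if (draft.urls.length > 0) {
        const response = await fetch("/posts/preview_urls", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ urls: draft.urls }),
        });
        // 3. Incorporate the server's result into the locally built preview.
        const { html } = (await response.json()) as { html: string };
        document.getElementById("preview-embeds")!.innerHTML = html;
      }
    }

Note that nothing here is stored on the server between requests, which is the point of the third item above.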
I don't know much about SEO and how web spiders work, so forgive my ignorance here. I'm creating a site (using ASP.NET MVC) which has areas that display information retrieved from the database. The data is unique to the user, so there's no real server-side output caching going on. However, since the data can contain things the user may not wish to have displayed in search engine results, I'd like to prevent any spiders from accessing the search results page. Are there any special actions I should take to ensure that the search results directory isn't crawled? Also, would a spider even crawl a page that's dynamically generated, and would any actions preventing certain directories from being crawled mess up my search engine rankings?
Edit: I should add that I'm reading up on the robots.txt protocol, but it relies on cooperation from the web crawler. However, I'd also like to block any data-mining users who will ignore the robots.txt file.
I appreciate any help!
You can prevent some malicious clients from hitting your server too heavily by implementing throttling on the server. "Sorry, your IP has made too many requests to this server in the past few minutes. Please try again later." In practice, though, assume that you can't stop a truly malicious user from bypassing any throttling mechanisms that you put in place.
Given that, here's the more important question:
Are you comfortable with the information that you're making available for all the world to see? Are your users comfortable with this?
If the answer to those questions is no, then you should be ensuring that only authorized users are able to see the sensitive information. If the information isn't particularly sensitive but you don't want clients crawling it, throttling is probably a good alternative. Is it even likely that you're going to be crawled anyway? If not, robots.txt should be just fine.
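For illustration, a naive in-memory throttle might look like the sketch below. The limits are arbitrary, and as noted above it won't stop a determined attacker:

    // Naive in-memory throttle: at most MAX_REQUESTS per IP per window.
    // The numbers are arbitrary, and this won't stop a determined attacker.
    const WINDOW_MS = 60_000;
    const MAX_REQUESTS = 100;

    const hits = new Map<string, number[]>();

    function isThrottled(ip: string, now: number = Date.now()): boolean {
      const recent = (hits.get(ip) ?? []).filter((t) => now - t < WINDOW_MS);
      recent.push(now);
      hits.set(ip, recent);
      return recent.length > MAX_REQUESTS;
    }

    // In whatever request handler you use:
    //   if (isThrottled(clientIp)) respond with 429 "Too many requests; try again later."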
It seems like you have two issues.
The first is a concern about certain data appearing in search results. The second is about malicious or unscrupulous users harvesting user-related data.
The first issue will be covered by appropriate use of a robots.txt file as all the big search engines honour this.
The second issue seems more to do with data privacy. The first question which immediately springs to mind is: If there is user information which people may not want displayed, why are you making it available at all?
What is the privacy policy for such data?
Do users have the ability to control what information is made available?
If the information is potentially sensitive but important to the system could it be restricted so it is only available to logged in users?
Check out the Robots exclusion standard. It's a text file that you put on your site that tells a bot what it can and can't index. You will also want to address what happens if a bot doesn't honour the robots.txt file.
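For example, a robots.txt at the site root that asks well-behaved crawlers to stay out of a hypothetical /search/ directory would look something like this:

    # robots.txt at the site root -- /search/ is just an example path
    User-agent: *
    Disallow: /search/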
Use a robots.txt file, as mentioned. If that is not enough, then you can:
Block unknown user agents - hard to maintain, and easy for a bot to forge a browser's user agent (although most legitimate bots won't)
Block unknown IP addresses - not useful for a public site
Require logins
Throttle user connections - tricky to tune, and you will still be disclosing some information.
Perhaps use a combination. Either way it is a trade-off: if the public can browse to it, so can a bot. Be sure you don't block and alienate people in your attempts to block bots.
A few options:
force the user to log in to view the content
add a CAPTCHA page before the content
embed content in Flash
load the content dynamically with JavaScript (see the sketch below)
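For that last option, a rough sketch: the sensitive markup never appears in the initial HTML, so a crawler that doesn't execute JavaScript won't see it. The endpoint and element id are made up, and this only deters crawlers that don't run JS, not determined scrapers:

    // Fetch the sensitive results only after the page runs in a real browser,
    // so they never appear in the initial HTML that simple crawlers fetch.
    // "/private/results" and "#results" are hypothetical.
    document.addEventListener("DOMContentLoaded", async () => {
      const response = await fetch("/private/results", { credentials: "same-origin" });
      document.getElementById("results")!.innerHTML = await response.text();
    });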
Yesterday morning I noticed Google Search was using hash parameters:
http://www.google.com/#q=Client-side+URL+parameters
which seems to be the same as the more usual search (with search?q=Client-side+URL+parameters). (It seems they are no longer using it by default when doing a search using their form.)
Why would they do that?
More generally, I see hash parameters cropping up on a lot of web sites. Is it a good thing? Is it a hack? Is it a departure from REST principles? I'm wondering if I should use this technique in web applications, and when.
There's a discussion by the W3C of different use cases, but I don't see which one would apply to the example above. They also seem undecided about recommendations.
Google has many live experimental features that are turned on/off based on your preferences, location and other factors (probably random selection as well.) I'm pretty sure the one you mention is one of those as well.
What happens in the background when a hash is used instead of a query string parameter is that it queries the "real" URL (http://www.google.com/search?q=hello) using JavaScript, then modifies the existing page with the content. This appears much more responsive to the user since the page does not have to reload entirely. The reason for the hash is so that browser history and state are maintained. If you go to http://www.google.com/#q=hello you'll find that you actually get the search results for "hello" (even if your browser is really only requesting http://www.google.com/). With JavaScript turned off, however, it wouldn't work, and you'd just get the Google front page.
Hashes are appearing more and more as dynamic web sites are becoming the norm. Hashes are maintained entirely on the client and therefore do not incur a server request when changed. This makes them excellent candidates for maintaining unique addresses to different states of the web application, while still being on the exact same page.
I have been using them myself more and more lately, and you can find one example here: http://blixt.org/js -- If you have a look at the "Hash" library on that page, you'll see my implementation of supporting hashes across browsers.
Here's a little guide for using hashes for storing state:
How?
Maintaining state in hashes implies that your application (I'll call it application since you generally only use hashes for state in more advanced web solutions) relies on JavaScript. Without JavaScript, the only function of hashes would be to tell the browser to find content somewhere on the page.
Once you have implemented some JavaScript to detect changes to the hash, the next step would be to parse the hash into meaningful data (just as you would with query string parameters.)
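For example, here's a minimal sketch of that detect-and-parse step, treating the hash as query-string-style key/value pairs (one common convention). It uses the hashchange event, which modern browsers support and which is essentially what cross-browser hash libraries like the one linked above wrap:

    // Parse "#q=hello&tab=2" into { q: "hello", tab: "2" }.
    function parseHash(hash: string): Record<string, string> {
      const state: Record<string, string> = {};
      for (const pair of hash.replace(/^#\??/, "").split("&")) {
        if (!pair) continue;
        const [key, value = ""] = pair.split("=");
        state[decodeURIComponent(key)] = decodeURIComponent(value);
      }
      return state;
    }

    // React whenever the hash changes, and once on initial load.
    function applyState(): void {
      const state = parseHash(window.location.hash);
      console.log("current state:", state); // replace with your own rendering logic
    }

    window.addEventListener("hashchange", applyState);
    applyState();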
Why?
Once you've got the state in the hash, it can be modified by your code (or your user) to represent the current state in your application. There are many reasons for why you would want to do this.
One common case is when only a small part of a page changes based on a variable, and it would be inefficient to reload the entire page to reflect that change (Example: You've got a box with tabs. The active tab can be identified in the hash.)
Other cases are when you load content dynamically in JavaScript, and you want to tell the client what content to load (Example: http://beta.multifarce.com/#?state=7001, will take you to a specific point in the text adventure.)
When?
If you had a look at my "JavaScript realm" you'll see a borderline-overkill case. I did it simply because I wanted to cram as much JavaScript dynamics into that page as possible. In a normal project I would be conservative about when to do this, and only do it when you will see positive changes in one or more of the following areas:
User interactivity
Usually the user won't see much difference, but the URLs can be confusing
Remember loading indicators! Loading content dynamically can be frustrating to the user if it takes time.
Responsiveness (time from one state to another)
Performance (bandwidth, server CPU)
No JavaScript?
Here comes a big deterrent. While you can safely rely on 99% of your users to have a browser capable of using your page with hashes for state, there are still many cases where you simply can't rely on this. Search engine crawlers, for example. While Google is constantly working to make their crawler work with the latest web technologies (did you know that they index Flash applications?), it still isn't a person and can't make sense of some things.
Basically, you're at a crossroads between compatibility and user experience.
But you can always build a road in between, which of course requires more work. In less metaphorical terms: implement both solutions, so that there is a server-side URL for every client-side URL that outputs relevant content. For compatible clients, it would redirect them to the hash URL. This way, Google can index the "hard" URLs, and when users click them, they get the dynamic state stuff!
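A hedged sketch of that road in between, with a purely illustrative URL scheme: the server-rendered /search?q=... page is what crawlers index, and JS-capable clients get bounced to the hash version.

    // On the server-rendered page /search?q=hello (which crawlers index),
    // bounce JS-capable browsers to the dynamic equivalent /#q=hello.
    // The URL scheme is purely illustrative.
    const params = new URLSearchParams(window.location.search);
    const q = params.get("q");
    if (window.location.pathname === "/search" && q !== null) {
      window.location.replace("/#q=" + encodeURIComponent(q));
    }
    // Clients without JavaScript simply stay on the server-rendered page,
    // so both people and crawlers still get the content.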
Recently Google also stopped serving direct links in search results, offering redirects instead.
I believe both have to do with gathering usage statistics: which searches were performed by the same user, in what sequence, which of the search results the user followed, etc.
P.S. Now, that's interesting - direct links are back. I absolutely remember seeing only redirects there in the last couple of weeks. They are definitely experimenting with something.
I have this idea for a project. For any web page, I want to create notes that will be saved locally in a database; the notes will be reloaded automatically from that database the next time I visit the same page.
Creating the note is easy, but I'm looking for how to link the notes to the web page URL and how to keep track of the active web page. Any ideas?
(Note: I came across this while searching the internet: http://webkit.org/demos/sticky-notes/ - it's part of the WebKit open source project - and it's about what I'm looking for.)
Thanks.
This is probably browser-dependent; you'll have to have a plugin for every browser type.
IE might be doable via the COM interface, but that would probably require starting IE in a way you control, so that will probably have to be a plugin too.
For browser independence, there are quite a few challenges in this one. One way would be to implement a proxy server and watch for text/html content... this will work for most of the general cases, but not every case. Handling frames, for instance: which resource is the "parent" and which is the "child"? Which one contains the sticky note? I think you would have to inject some client-side JavaScript to keep track of things, and that might break some websites.
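If you do end up injecting client-side script (say, from a browser extension), a rough sketch that keys notes on the page URL - localStorage is used purely for illustration; a real extension would more likely use its own storage API:

    // Notes keyed by the exact page URL, stored in localStorage for illustration.
    // A real browser extension would more likely use its own storage API,
    // since a page can clear its own localStorage.
    const NOTES_KEY = "sticky-notes:" + window.location.href;

    function loadNotes(): string[] {
      const raw = window.localStorage.getItem(NOTES_KEY);
      return raw ? (JSON.parse(raw) as string[]) : [];
    }

    function saveNote(text: string): void {
      const notes = loadNotes();
      notes.push(text);
      window.localStorage.setItem(NOTES_KEY, JSON.stringify(notes));
    }

    // On load, re-attach whatever was saved for this exact URL.
    document.addEventListener("DOMContentLoaded", () => {
      for (const note of loadNotes()) {
        console.log("restore note:", note); // replace with your own note rendering
      }
    });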
protonotes.com is a web service version of this. Not sure how they do it though.
Actually, Daniel H hit the nail on the head mate: http://www.protonotes.com
It does exactly what you want. In fact, it gives you two options to store your data: the first is hosted, the second is your own MySQL db - protonotes pipes the data from the tack-on style notes to your own db if you prefer. This means that you're not the only person who can see the notes - access is granted by a unique 'group' key.
I've just deployed protonotes as our main online review tool for two reasons: we can save our own data, and it lacks some features which I generally label "dubious" anyway.
Its simplicity is great; the only thing I'm aware of that could cause a problem is that it dumps a bunch of stuff in the global namespace - if that's a potential issue for you.