I have a controller dealing with sending SMS Messages, and it's set up as a default resource. Now, business requirements have changed and we want to offer the user three different ways of sending the message:
To everybody on their contact list
To a segmented portion of their contact list (predefined)
To individual contacts that they can choose
In addition, there are two ways they can send an SMS Message: Premium (via an sms gateway) or Standard (via SMTP). So there are essentially six different ways to send a message (three for premium, three for standard).
The requirements state that the three options above need to be presented in a "wizard-like" format, as three radio button choices and then a submit button which displays the appropriate form and list:
If option 1 (send to everyone) then just display a textbox for sending the SMS
If option 2 (send to a segment) then display a list of segments as radio buttons
If option 3 (send to specific contacts) then display a searchable/sortable list of all contacts with checkboxes next to their names, and on submit send to everyone who is checked.
The issue I'm running into is how to make this fit the RESTful conventions imposed by the resource. Each of these use cases is technically only one action (well, two, since it would correspond to new/create), but it looks like the action would then need an inordinate amount of logic, and rather messy logic at that (a switch statement or similar).
Is there a better way to do this?
I would consider the "simplest thing that works" approach first. Processing different params and setting up several delivery methods looks like something that cannot easily be wrapped into a single pair of "new" and "create" actions.
Without knowing all the details, my first recommendation would be to implement actions like "choose_contacts"/"send_to_contacts" and "choose_segments"/"send_to_segments", which either prepare the data for these use cases and render their own templates, or process the incoming data and handle possible errors. The "new" action then just dispatches to the appropriate action based on the selected option, and the "create" action is no longer needed (or you can re-use it for sending a single message). I bet the code for sending a single message (or a set of messages) lives in the model anyway, so you'll just have to call it with the correct arguments once you've processed your form data in a reasonable way.
Benefits of this approach:
It reads more naturally to someone who will maintain your code later, as it clearly differentiates the available options;
You'll have more flexibility processing input params and handling errors from the different forms, since you'll know which error state to render as a response without carrying extra data along from the form. Segments and contacts probably require different processing, don't they?
If you use some kind of performance-monitoring application (such as NewRelic), you'll spot performance problems in any one of the delivery-processing actions much faster than you would with a single general SMSDelivery#create action;
I had a similar situation with an "invite/find friends via uploading/searching contacts" flow, and this approach worked really well for me. Overall, REST is a great convention, but trying to squeeze a lot of business logic into CRUD is sometimes awkward and error-prone. YMMV; I would just suggest keeping the logic clear and easy to read, bending the existing conventions when they don't feel flexible enough.
I have a form in which a user can select two players from a list (two separate select fields). I managed to do it using the options_for_select helper, but
a user shouldn't be allowed to select the same player twice - it obviously can't be something like player1 vs player1. I was experimenting with the 'disabled' option, but without success, because the list of available players should change dynamically after selecting the first player, which probably can't be done in Rails?
This is a very broad question with few details, so my answer will be generic enough to cover the majority of cases.
You'll have to use JavaScript to hide the selected option from the other list. Rails works server-side, so you would have to reload the page if you wanted Rails to re-render the list without the selected option, and that is a terrible user experience.
However, you should also perform a server-side check. Even with the JS in place, it will still be possible to send a crafted request where the two players are the same. This must ultimately be verified at the server-side level, in your Rails controller or wherever you have the logic that handles the comparison.
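As a minimal client-side sketch (the select ids player1 and player2 are made up for illustration; adapt them to whatever your options_for_select markup produces), you could disable the already-chosen option in the other list:

```javascript
// Disable whatever is picked in one select inside the other select, and vice versa.
// The ids "player1" and "player2" are assumptions for illustration.
function syncPlayerSelects(sourceId, targetId) {
  var source = document.getElementById(sourceId);
  var target = document.getElementById(targetId);
  source.addEventListener('change', function () {
    for (var i = 0; i < target.options.length; i++) {
      var option = target.options[i];
      option.disabled = option.value !== '' && option.value === source.value;
    }
    // If the other select already holds the now-forbidden value, reset it.
    if (target.value === source.value) target.value = '';
  });
}

syncPlayerSelects('player1', 'player2');
syncPlayerSelects('player2', 'player1');
```

The server-side counterpart is then just a comparison of the two submitted ids (for example in a model validation) that rejects the record when they match.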
I've built a Rails app, basically a CRUD app for memos/notes.
A note's title must be unique. If a user enters a name that is already taken, a warning message is shown prompting them to choose another.
My question is how to make the latency of this feedback as close to zero as possible. When creating a note, little UX speed bumps like this will quickly get annoying for the user.
Of course the main bottleneck is the network. Inspired by Meteor (and mini-mongo) I was thinking some kind of local storage could be a solution?
I.e. when the app first loads, send the client a JSON payload with ALL of the note titles. The app (the front end is AngularJS) could check LocalStorage (or App Cache, Web SQL?) instead of incurring a network round trip. The feedback would be instant.
I've used LocalStorage in the past to augment an app, but in this scenario the app would really, seriously depend on it. I'm not sure how confident I'd be building on something the user might not have. Also, as the number of user Notes/Memos grows, I have doubts about how feasible it is to send a JSON object down the wire with ALL the note titles. That might get pretty big. On the other hand, MeteorJS seems to do this with no problems.
Has anyone done something similar or have any pointers? Thanks!
I don't know how Meteor works here, but you're right that storing all note titles in localStorage is not a good idea. Actually, you don't need localStorage here; you can just put them in a JS array, because you need this data only once (when checking a new note title).
I think there are a few possible approaches:
You can change your business requirements and allow non-unique titles. Is it really necessary for titles to be unique?
You can verify the note title when the user submits the form. In this case you can offer suggestions, so users don't spend time guessing which titles are free.
Or, if titles only have to be unique per user (two users can have the same title for their notes), you really can load all of that user's note titles into a JS array and check uniqueness while the user types in a title.
Or you can send an AJAX request checking title uniqueness as soon as the user has finished typing the title (a rough sketch follows this list). In this case you can win back a few seconds.
Or you can send an AJAX request as soon as the user has typed in 3 characters. The request returns all titles that begin with those 3 characters, so you don't need to load all the titles.
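Here is a rough sketch of the "check after typing" idea (the GET /notes/check_title endpoint, the response shape, and the input id are all assumptions; wire it up to whatever route your Rails app actually exposes):

```javascript
// Debounced uniqueness check: fire one request shortly after the user stops typing.
// GET /notes/check_title?title=... returning {"taken": true|false} is a hypothetical endpoint.
var titleInput = document.getElementById('note_title'); // the input id is also an assumption
var timer = null;

titleInput.addEventListener('input', function () {
  clearTimeout(timer);
  timer = setTimeout(function () {
    fetch('/notes/check_title?title=' + encodeURIComponent(titleInput.value))
      .then(function (response) { return response.json(); })
      .then(function (result) {
        // e.g. toggle a "title already taken" warning style on the field
        titleInput.classList.toggle('title-taken', result.taken);
      });
  }, 300); // wait ~300 ms after the last keystroke
});
```

The round trip still happens, but because it overlaps with the user typing, the warning usually appears before they reach the submit button.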
Is it possible to embed multiple GMail schemas in a single email? I'd like to provide users the ability to retry or cancel an action (the cancel operation would perform some cleaning stuff in the server app). However, if I try to embed more than one script, only the first one appears in the inbox (each script is correct and shows up properly when it is the only one).
Only one action is currently supported; if you include multiple actions, only the first one will be used. The user experience for exposing multiple actions would be quite different, so whether and how to handle them is still being discussed.
Say you have a Recipe Manager application that you're building with a Web Api project. Do you send the list of recipes along with their ingredient names in JSON? Or do you send the recipes, ingredient names, and ingredient details? What's the process in determining how big the initial payload should be for a SPA?
These are the determining factors in how much to send to the client on the initial page:
Data that will be displayed for that first page
Lookup list data for any drop downs on that page
Data that is required for any presentation rules (it might not be displayed, but it is used)
On a recipe page that shows a list of recipes, I would get the recipes and some key fields (like the recipe name, the dish, and other key info) that can be displayed in a list: enough for the user to decide what to pick. Then, when the user dives into a recipe, go get that one recipe's details.
The general rule is: get what your user will almost certainly need up front, then fetch other data as they request it.
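A minimal sketch of that two-phase pattern (the /api/recipes and /api/recipes/{id} routes are hypothetical, just to illustrate the shape of the calls):

```javascript
// Phase 1: load just enough to render the list (id, name, dish).
// Phase 2: load a single recipe's full details only when the user opens it.
// The /api/recipes routes below are assumptions for illustration.
function loadRecipeList() {
  return fetch('/api/recipes')
    .then(function (response) { return response.json(); }); // e.g. [{id, name, dish}]
}

function loadRecipeDetails(id) {
  return fetch('/api/recipes/' + id)
    .then(function (response) { return response.json(); }); // ingredients, steps, etc.
}

loadRecipeList().then(function (recipes) {
  // render the summary list; call loadRecipeDetails(recipe.id) when one is clicked
});
```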
The process by which you determine how much data to send depends solely on the experience you want to provide your users, but it is as simple as this: if my experience demands that I readily display all of the recipes with a brief description and then allow users to drill into a recipe for more information, then I'm only going to send enough information to produce that display and to navigate further into the entity.
If, after navigating into the recipe, the experience requires displaying the ingredient names and measures, then send down that plus enough information to navigate further into any single ingredient.
And as you can see it just goes on and on.
It depends whether your application is just a simple HTTP API backing your web page, or whether your goal is something more akin to Platform as a Service. One driver for the adoption of SPAs is that they make the browser just another client, like an iOS or Android app, or a 3rd party.
If you want to support multiple clients, then it's likely that you want to design your APIs around the resources you are trying to expose, such that you can use the uniform interface of GET/POST/PUT etc. against those resources. This makes it much more likely that you are not coding in a client-specific style and that your API will be usable by a wide range of clients.
A resource is anything you would want to have its own URN.
I would suggest that in this case you'd likely want a Recipe Book resource which links to individual Recipe resources, and those probably contain all the information necessary for a Recipe. Ingredients would only be a separate resource if you had more depth on what an Ingredient contains and they warranted their own resource.
At Huddle we use a Documentation Driven Design approach. That is, we write the documentation for our API up front so that we can understand how usable it would be. You can measure API quality in WTFs. http://code.google.com/p/huddle-apis/
Now, this logical division might not be optimal in terms of performance. You're dealing with a classic trade-off here (ultimately, architecture is all about balancing design trade-offs) between the usability of your API and its performance. Usually, don't favour performance until you know it is an issue, because you will pay a penalty in usability or maintainability for early optimization.
Another possibility is to implement OData query support for Web API. http://www.asp.net/web-api/overview/odata-support-in-aspnet-web-api
That way, your clients can perform their own queries and get back only the data they need.
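For example, a client view that only needs recipe names might issue something like the following ($select and $top are standard OData query options; the /api/recipes route is an assumption):

```javascript
// Ask the server for only the fields and rows this view needs.
fetch('/api/recipes?$select=Name,Dish&$top=20')
  .then(function (response) { return response.json(); })
  .then(function (recipes) {
    // render only what came back; no over-fetching of ingredient details
  });
```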
Yesterday morning I noticed Google Search was using hash parameters:
http://www.google.com/#q=Client-side+URL+parameters
which seems to be the same as the more usual search (with search?q=Client-side+URL+parameters). (It seems they are no longer using it by default when doing a search using their form.)
Why would they do that?
More generally, I see hash parameters cropping up on a lot of web sites. Is it a good thing? Is it a hack? Is it a departure from REST principles? I'm wondering if I should use this technique in web applications, and when.
There's a discussion by the W3C of different use cases, but I don't see which one would apply to the example above. They also seem undecided about recommendations.
Google has many live experimental features that are turned on/off based on your preferences, location and other factors (probably random selection as well). I'm pretty sure the one you mention is one of those.
What happens in the background when a hash is used instead of a query string parameter is that the page queries the "real" URL (http://www.google.com/search?q=hello) using JavaScript, then modifies the existing page with the content. This appears much more responsive to the user since the page does not have to reload entirely. The reason for the hash is that browser history and state are maintained. If you go to http://www.google.com/#q=hello you'll find that you actually get the search results for "hello" (even though your browser is really only requesting http://www.google.com/). With JavaScript turned off, however, it wouldn't work, and you'd just get the Google front page.
Hashes are appearing more and more as dynamic web sites are becoming the norm. Hashes are maintained entirely on the client and therefore do not incur a server request when changed. This makes them excellent candidates for maintaining unique addresses to different states of the web application, while still being on the exact same page.
I have been using them myself more and more lately, and you can find one example here: http://blixt.org/js -- If you have a look at the "Hash" library on that page, you'll see my implementation of supporting hashes across browsers.
Here's a little guide for using hashes for storing state:
How?
Maintaining state in hashes implies that your application (I'll call it an application, since you generally only use hashes for state in more advanced web solutions) relies on JavaScript. Without JavaScript, the only function of a hash would be to tell the browser to find content somewhere on the page.
Once you have implemented some JavaScript to detect changes to the hash, the next step would be to parse the hash into meaningful data (just as you would with query string parameters.)
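A minimal sketch of that detect-and-parse step (the parameter names are made up; a library like the Hash one linked above also covers older browsers that lack the hashchange event):

```javascript
// Turn "#q=hello&tab=2" into {q: "hello", tab: "2"} and react whenever it changes.
function parseHash() {
  var params = {};
  var hash = window.location.hash.replace(/^#\??/, '');
  hash.split('&').forEach(function (pair) {
    if (!pair) return;
    var parts = pair.split('=');
    params[decodeURIComponent(parts[0])] = decodeURIComponent(parts[1] || '');
  });
  return params;
}

window.addEventListener('hashchange', function () {
  var state = parseHash();
  // update only the parts of the page that depend on `state`, e.g. state.q
});
```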
Why?
Once you've got the state in the hash, it can be modified by your code (or your user) to represent the current state of your application. There are many reasons why you would want to do this.
One common case is when only a small part of a page changes based on a variable, and it would be inefficient to reload the entire page to reflect that change (Example: You've got a box with tabs. The active tab can be identified in the hash.)
Other cases are when you load content dynamically in JavaScript, and you want to tell the client what content to load (Example: http://beta.multifarce.com/#?state=7001, will take you to a specific point in the text adventure.)
When?
If you have a look at my "JavaScript realm" you'll see a borderline-overkill case. I did it simply because I wanted to cram as much JavaScript dynamics into that page as possible. In a normal project I would be conservative about when to do this, and only do it when you will see positive changes in one or more of the following areas:
User interactivity
Usually the user won't see much difference, but the URLs can be confusing
Remember loading indicators! Loading content dynamically can be frustrating to the user if it takes time.
Responsiveness (time from one state to another)
Performance (bandwidth, server CPU)
No JavaScript?
Here comes a big deterrent. While you can safely rely on 99% of your users to have a browser capable of using your page with hashes for state, there are still many cases where you simply can't rely on this. Search engine crawlers, for example. While Google is constantly working to make their crawler work with the latest web technologies (did you know that they index Flash applications?), it still isn't a person and can't make sense of some things.
Basically, you're at a crossroads between compatibility and user experience.
But you can always build a road in between, which of course requires more work. In less metaphorical terms: implement both solutions, so that there is a server-side URL for every client-side URL that outputs relevant content. Compatible clients get redirected to the hash URL. This way, Google can index the "hard" URLs, and when users click them they get the dynamic state stuff!
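One hedged sketch of that bridge, assuming the server-rendered results live at /search?q=... and the dynamic app reads #q=... (both conventions are assumptions here):

```javascript
// On a server-rendered "hard" URL, hand capable clients over to the hash-based state.
(function () {
  var match = window.location.search.match(/[?&]q=([^&]*)/);
  if (match && 'onhashchange' in window) {
    window.location.replace('/#q=' + match[1]);
  }
})();
```

Crawlers and no-JS clients keep the fully rendered page; everyone else ends up in the dynamic version with the same state in the hash.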
Recently Google also stopped serving direct links in search results, offering redirects instead.
I believe both changes have to do with gathering usage statistics: what searches were performed by the same user, in what sequence, which of the search results the user followed, etc.
P.S. Now, that's interesting: direct links are back. I distinctly remember seeing only redirects there over the last couple of weeks. They are definitely experimenting with something.