Custom replacement strings from third party app - desire2learn

Is there any ability to populate a learning module's content using data passed from a third-party application? For example:
Third party data:
userid = 12, username = Sally, user_q1_answer = 101, user_q2_answer = Jim
Module Content setup:
[[username]], since you are in room [[user_q1_answer]], you should
contact [[user_q2_answer]] in the event of the fire alarm going off.
Module Content Delivered:
Sally, since you are in room 101, you should contact Jim in the event of the fire alarm going off.
Thanks for any help

Currently, no facility in the LMS exists to do this kind of dynamic substitution at render time. A number of other questions here have covered this ground. As of Spring 2013, this kind of functionality is on the development roadmap but there is not yet a committed release vehicle for it.
It might be possible to use a client-side browser extension to detect specially formatted strings in page content and make Valence Learning Framework API calls to find values to replace those strings with. However, this technique would in practice probably only be able to replace values that are known about the current user and their relationship to the LMS. Through URL and page-content examination, it might also be possible to gather knowledge about the user's current browsing context (i.e. what course or course section they're looking at), but we never recommend screen-scraping because you can't depend on meaningful tokens or data appearing reliably going forward (whereas you can depend on the Learning Framework APIs to be able to get you information about the current operating user).
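As a rough illustration of that browser-extension idea, here is a minimal TypeScript sketch. It assumes the [[token]] convention from the question (not an LMS feature) and a Valence whoami-style call to resolve values for the current user; treat the route and field names as assumptions to verify against the Learning Framework docs.

```typescript
// Sketch only: a content script that swaps [[token]] placeholders for per-user
// values. The /d2l/api/lp/1.0/users/whoami route and the FirstName field are
// assumptions to check against the Valence documentation.
type TokenValues = Record<string, string>;

async function fetchUserValues(): Promise<TokenValues> {
  const resp = await fetch("/d2l/api/lp/1.0/users/whoami", { credentials: "include" });
  const who = await resp.json();
  return { username: who.FirstName };
}

async function replaceTokens(root: Node): Promise<void> {
  const values = await fetchUserValues();
  const walker = document.createTreeWalker(root, NodeFilter.SHOW_TEXT);
  for (let node = walker.nextNode(); node; node = walker.nextNode()) {
    node.textContent = (node.textContent ?? "").replace(
      /\[\[(\w+)\]\]/g,
      (match, name) => values[name] ?? match // leave unknown tokens untouched
    );
  }
}

replaceTokens(document.body);
```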

Related

CQRS, event sourcing and a translated application

I am working on an application (CQRS + event sourcing) that should support multiple user languages. The user will have the ability to translate some of his input into different languages. E.g. some labels or descriptions can be given in Dutch and/or English. Depending on the language preferences of the user, the application should show the correct translation.
I suspect the read model plays a big role in this process.
I was thinking of creating events like ItemDescriptionTranslated, telling 'The description of item X was translated to language Y as Z'.
I would think that the aggregate can safely ignore this kind of event, and that only the read models should do something with this information.
Does this make sense? Does any of you have experience with CQRS/ES in a translated application? Any hints are greatly appreciated.
Of course you can use event sourcing. You can code the aggregate's build functions to ignore the ItemDescriptionTranslated event.
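To make that concrete, here is a minimal sketch, assuming an ItemDescriptionTranslated event and a simple in-memory read model keyed by item id and language (the names and the fallback-language rule are my own, not tied to any framework):

```typescript
// The aggregate's event-replay code simply has no handler for this event,
// so it is ignored on the write side; only the projection below uses it.
interface ItemDescriptionTranslated {
  type: "ItemDescriptionTranslated";
  itemId: string;
  language: string;    // e.g. "nl" or "en"
  description: string; // the translated text
}

// Read model: translated descriptions keyed by item + language.
const descriptions = new Map<string, string>();

function project(event: ItemDescriptionTranslated): void {
  descriptions.set(`${event.itemId}:${event.language}`, event.description);
}

// Query side: prefer the user's language, fall back to a default.
function descriptionFor(itemId: string, language: string, fallback = "en"): string | undefined {
  return descriptions.get(`${itemId}:${language}`) ?? descriptions.get(`${itemId}:${fallback}`);
}
```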
The main question is whether you really need event sourcing in this part of the application. For example, you can build authorization either way, with ES or without it. If you want to log all users' logins and auth attempts, ES is preferable. But if you only need the login itself, without any analysis, I suggest not using ES.
So, do you want to collect some additional info about the translating? When it happened, who did it, maybe statistics about corrections by different authors, and so on.

How do you decide how much data to push to the user in Single Page Applications?

Say you have a Recipe Manager application that you're building with a Web Api project. Do you send the list of recipes along with their ingredient names in JSON? Or do you send the recipes, ingredient names, and ingredient details? What's the process in determining how big the initial payload should be for a SPA?
These are the determining factors in how much to send to the client for an initial page:
Data that will be displayed for that first page
Lookup list data for any drop downs on that page
Data that is required for any presentation rules (it might not be displayed but is used)
On a recipe page that shows a list of recipes, I would get the recipes and some key fields to display in that list (recipe name, the dish, and other key info): enough for the user to decide what to pick. Then, when the user dives into a recipe, go and get that one recipe's details.
The general rule is: get what your user will almost certainly need up front, then get other data as they request it.
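For example (a sketch; the RecipeSummary/RecipeDetail shapes and the /api/recipes routes are illustrative, not from the question):

```typescript
// Initial page: lightweight summaries only, enough to pick from a list.
interface RecipeSummary {
  id: number;
  name: string;
  dish: string;
}

// Drill-down: the full details for a single recipe.
interface RecipeDetail extends RecipeSummary {
  ingredients: { name: string; quantity: string }[];
  instructions: string;
}

async function loadRecipeList(): Promise<RecipeSummary[]> {
  return (await fetch("/api/recipes")).json();
}

async function loadRecipe(id: number): Promise<RecipeDetail> {
  return (await fetch(`/api/recipes/${id}`)).json();
}
```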
The process by which you determine how much data to send depends solely on the experience you want to provide your users; beyond that, it's as simple as this. If my experience demands that I readily display all of the recipes with a brief description and then allow the user to drill into a recipe to get more information, then I'm only going to send enough information to produce that display and navigate further into the entity.
If, after navigating into the recipe, you then need to display the ingredient names and measures, send down those plus enough information to navigate further into any single ingredient.
And as you can see it just goes on and on.
It depends on whether your application is just a simple HTTP API backing your web page, or whether your goal is something more akin to Platform as a Service. One driver for the adoption of SPAs is that they make the browser just another client, like an iOS or Android app or a 3rd party.
If you want to support multiple clients, then it's likely that you want to design your APIs around the resources you are trying to expose, such that you can use the uniform interface of GET/POST/PUT etc. against those resources. This means it is much more likely that you are not coding in a client-specific style and that your API will be usable by a wide range of clients.
A resource is anything you would want to have its own URN.
I would suggest it is likely that in this case you would want a Recipe Book resource which has links to individual Recipe resources, each of which probably contains all the information necessary for that Recipe. Ingredients would only be a separate resource if there were more depth to what an Ingredient contained and it warranted its own URN.
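A rough sketch of what those representations might look like (URIs and field names are assumptions, not a prescribed format):

```typescript
// The Recipe Book resource links to individual Recipe resources.
const recipeBook = {
  name: "My Recipe Book",
  recipes: [
    { name: "Pancakes", href: "/recipes/1" },
    { name: "Omelette", href: "/recipes/2" },
  ],
};

// A Recipe carries everything it needs; ingredients stay inline because
// they have no depth of their own here.
const recipe = {
  href: "/recipes/1",
  name: "Pancakes",
  ingredients: [
    { name: "Flour", quantity: "200 g" },
    { name: "Milk", quantity: "300 ml" },
  ],
};
```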
At Huddle we use a Documentation Driven Design approach. That is, we write the documentation for our API up front so that we can understand how usable our API would be. You can measure API quality in WTFs. http://code.google.com/p/huddle-apis/
Now, this logical division might not be optimal in terms of performance. You're dealing with a classic trade-off here (ultimately, architecture is all about balancing design trade-offs) between the usability of your API and its performance. Usually, don't favour performance until you know it is an issue, because you will pay a penalty in usability or maintainability for early optimization.
Another possibility is to implement the OData query support for WebAPI. http://www.asp.net/web-api/overview/odata-support-in-aspnet-web-api
That way, your clients can perform their own queries to return only the data they need.
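For instance, with OData query options enabled, a client could shape its own response (the /api/recipes route is illustrative; $select, $orderby and $top are standard OData query options):

```typescript
// Ask the server for just the names of the first 20 recipes.
async function loadRecipeNames(): Promise<{ Name: string }[]> {
  const query = "$select=Name&$orderby=Name&$top=20";
  return (await fetch(`/api/recipes?${query}`)).json();
}
```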

How to design a good QuickBooks integration solution

I have now played with the QBO and QBD APIs and feel I have a fair understanding of how it thinks and how to interact with it. So now it is time to design the actual integration solution.
Inside my application you can create new customers, quote services, perform services, and, soon, pass invoices to QuickBooks. Sounds easy.
But what if the customer is not in QB yet? No problem - for each invoice I will look up the customer (need the id anyway) and if it doesn’t exist, add it. But if I have to look up the customer for each invoice it seems like it might be slow. I will likely have 30,000 customers and have 500-3000 invoices per day.
So my question is this; what are others doing?
a) Are you storing the QB id for each customer in your data?
b) How do you detect address changes (changed in your app and changed in QB)?
c) Is the batch submission interface so much faster I should use that?
Thanks for your help!
We often do store the QB id in our database for later use. If we post an invoice into QB, we'll then store the QB id for future use in case we need to modify it.
As far as detecting changes on the customer record and other info, there are a couple of ways to handle the conflict resolution. One is to keep a timestamp on your side for when changes are made. You can then compare this with the timestamp of the last change on the QB record and decide which one gets updated.
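A minimal sketch of that timestamp comparison (the qbId and lastModified fields are assumptions about your own schema, not QuickBooks fields):

```typescript
interface LocalCustomer {
  id: number;
  qbId?: string;      // QuickBooks id stored after the first sync
  name: string;
  address: string;
  lastModified: Date; // bumped whenever your app edits the record
}

// Decide whose edit wins before pushing an update either way.
function winner(local: LocalCustomer, qbLastUpdated: Date): "local" | "quickbooks" {
  return local.lastModified > qbLastUpdated ? "local" : "quickbooks";
}
```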
FreddyMac,
To detect changes on the Intuit side you can construct a query with a CDCAsOf filter (ChangeDataCapture as of), which will return only the data that has changed since a date you provide.
https://ipp.developer.intuit.com/0010_Intuit_Partner_Platform/0050_Data_Services/0500_QuickBooks_Windows/0100_Calling_Data_Services/0015_Retrieving_Objects
You need to keep track of data changes on your side.
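As an illustration only, a change-data-capture style poll might look like the sketch below. The exact endpoint and filter syntax differ between the IPP data services linked above and the newer QBO v3 REST API, so treat the URL, parameters and auth header as assumptions to verify against the Intuit docs.

```typescript
// Hypothetical poll: fetch customers changed since the last sync time.
async function customersChangedSince(realmId: string, since: Date, token: string) {
  const url = `https://quickbooks.api.intuit.com/v3/company/${realmId}/cdc` +
    `?entities=Customer&changedSince=${encodeURIComponent(since.toISOString())}`;
  const resp = await fetch(url, {
    headers: { Authorization: `Bearer ${token}`, Accept: "application/json" },
  });
  return resp.json();
}
```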
The batch submission is not faster, it's just easier for you to write the code.
The IPP SDK can queue the API calls for you and aggregate the responses.
regards,
Jarred

MongoDB and embedded documents, good use cases

I am using embedded documents in MongoDB for a Rails 3 app. I like that I can use embedded documents so the values are all returned with one query and there is less load on the database server. But what happens if I want my users to be able to update properties that really should be shared across documents? Is this sort of operation feasible with MongoDB, or would I be better off using normal id-based relations? If id-based relations are the way to go, would it affect performance to a great degree?
If you need to know anything else about the application or data I would be happy to let you know what I am working with.
Document that has many properties that all documents share:
Person
  name: string
  description: string
Document that wants to use these properties:
Post (references many people)
  body: string
This all depends on what you are going to do with your Person model later. I know of at least one working example (a blog using MongoDB) where the developer keeps user data inside the comments they make and uses one collection for the entire blog. Well, OK, he uses a second one for his "tag cloud" :) He just doesn't need to keep a centralized list of all commenters; he doesn't care. His blog contains consolidated data from all his previous sites/blogs, almost 6000 posts in total. Posts contain comments, comments contain users, users have emails; there is a "subscribe to comments" option for every user who comments on a post; authorization is handled by an external OpenID service aggregator (Loginza), and he keeps the user email from the Loginza response and their "login token" in their cookies. So the functionality is pretty good.
So, the real question is: what are you going to do with your users later? If you really feel like you need a separate collection (you're going to give users centralized control panels, have site-based registration, build user-centric features and so on), make it separate. If not, keep it simple and have fun :)
It depends on what user info you want to share across documents. Let's say you have a user and the user has emails. It does not make sense to move the emails into a separate collection, since there will be no more than 10, 20, or 100 emails per user. But if the user has some large, ever-growing set of related data, like blog posts, then it does make sense to move it into a separate collection.
So the answer depends on the user document structure. If you show your user document structure and what you are planning to move into a separate collection, I will help you make a decision.
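For reference, the two shapes under discussion look roughly like this (using the question's Person/Post example; field names beyond that are assumed):

```typescript
// Embedded: one query returns everything, but shared Person data is
// duplicated into every post that mentions them.
const postEmbedded = {
  body: "Post text",
  people: [{ name: "Sally", description: "A person we reference" }],
};

// Referenced: Person documents live in their own collection and can be
// updated in one place, at the cost of a second query to resolve the ids.
const person = { _id: "person-12", name: "Sally", description: "A person we reference" };
const postReferenced = {
  body: "Post text",
  people: ["person-12"], // ids only
};
```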

Why would Google Search use client-side URL parameters?

Yesterday morning I noticed Google Search was using hash parameters:
http://www.google.com/#q=Client-side+URL+parameters
which seems to be the same as the more usual search (with search?q=Client-side+URL+parameters). (It seems they are no longer using it by default when doing a search using their form.)
Why would they do that?
More generally, I see hash parameters cropping up on a lot of web sites. Is it a good thing? Is it a hack? Is it a departure from REST principles? I'm wondering if I should use this technique in web applications, and when.
There's a discussion by the W3C of different use cases, but I don't see which one would apply to the example above. They also seem undecided about recommendations.
Google has many live experimental features that are turned on and off based on your preferences, location and other factors (probably random selection as well). I'm pretty sure the one you mention is one of those.
What happens in the background when a hash is used instead of a query string parameter is that it queries the "real" URL (http://www.google.com/search?q=hello) using JavaScript, then modifies the existing page with the content. This will appear much more responsive to the user since the page does not have to reload entirely. The reason for the hash is so that browser history and state are maintained. If you go to http://www.google.com/#q=hello you'll find that you actually get the search results for "hello" (even though your browser is really only requesting http://www.google.com/). With JavaScript turned off, however, it wouldn't work, and you'd just get the Google front page.
Hashes are appearing more and more as dynamic web sites are becoming the norm. Hashes are maintained entirely on the client and therefore do not incur a server request when changed. This makes them excellent candidates for maintaining unique addresses to different states of the web application, while still being on the exact same page.
I have been using them myself more and more lately, and you can find one example here: http://blixt.org/js -- If you have a look at the "Hash" library on that page, you'll see my implementation of supporting hashes across browsers.
Here's a little guide for using hashes for storing state:
How?
Maintaining state in hashes implies that your application (I'll call it application since you generally only use hashes for state in more advanced web solutions) relies on JavaScript. Without JavaScript, the only function of hashes would be to tell the browser to find content somewhere on the page.
Once you have implemented some JavaScript to detect changes to the hash, the next step would be to parse the hash into meaningful data (just as you would with query string parameters.)
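A minimal sketch of those two steps, assuming a simple key=value convention after the # (the convention itself is up to you):

```typescript
// Turn "#?state=7001&tab=2" into { state: "7001", tab: "2" }.
function parseHash(hash: string): Record<string, string> {
  const state: Record<string, string> = {};
  for (const pair of hash.replace(/^#\??/, "").split("&")) {
    const [key, value] = pair.split("=");
    if (key) state[key] = decodeURIComponent(value ?? "");
  }
  return state;
}

window.addEventListener("hashchange", () => {
  const state = parseHash(location.hash);
  // Update only the part of the page that depends on this state,
  // e.g. switch the active tab or load content with fetch/XMLHttpRequest.
  console.log(state);
});
```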
Why?
Once you've got the state in the hash, it can be modified by your code (or your user) to represent the current state in your application. There are many reasons for why you would want to do this.
One common case is when only a small part of a page changes based on a variable, and it would be inefficient to reload the entire page to reflect that change (Example: You've got a box with tabs. The active tab can be identified in the hash.)
Other cases are when you load content dynamically in JavaScript, and you want to tell the client what content to load (Example: http://beta.multifarce.com/#?state=7001, will take you to a specific point in the text adventure.)
When?
If you have a look at my "JavaScript realm" you'll see a borderline-overkill case. I did it simply because I wanted to cram as much JavaScript dynamics into that page as possible. In a normal project I would be conservative about when to do this, and only do it when you will see positive changes in one or more of the following areas:
User interactivity
Usually the user won't see much difference, but the URLs can be confusing
Remember loading indicators! Loading content dynamically can be frustrating to the user if it takes time.
Responsiveness (time from one state to another)
Performance (bandwidth, server CPU)
No JavaScript?
Here comes a big deterrent. While you can safely rely on 99% of your users to have a browser capable of using your page with hashes for state, there are still many cases where you simply can't rely on this. Search engine crawlers, for example. While Google is constantly working to make their crawler work with the latest web technologies (did you know that they index Flash applications?), it still isn't a person and can't make sense of some things.
Basically, you're at a crossroads between compatibility and user experience.
But you can always build a road in between, which of course requires more work. In less metaphorical terms: implement both solutions so that there is a server-side URL for every client-side URL that outputs relevant content. For compatible clients, it would redirect them to the hash URL. This way, Google can index the "hard" URLs, and when users click them, they get the dynamic state stuff!
Recently, Google also stopped serving direct links in search results, offering redirects instead.
I believe both have to do with gathering usage statistics: which searches were performed by the same user, in what sequence, which of the search results the user followed, etc.
P.S. Now, that's interesting: direct links are back. I distinctly remember seeing only redirects there in the last couple of weeks. They are definitely experimenting with something.
