Accessing and Modifying the Course Overview via the API - desire2learn

Looking at the methods available for the Learning Environment (http://docs.valence.desire2learn.com/http-routingtable.html#cap-Learning%20Environment), I didn't initially see anything that would allow the API to retrieve and/or modify the Course Overview within the Content module.
Edit: Of course, after posting I immediately discovered the following methods:
/d2l/api/le/(version)/(orgUnitId)/overview [GET]
/d2l/api/le/(version)/(orgUnitId)/overview/attachment [GET]
The issue remains that I don't see any methods to MODIFY this information.
Am I missing something or was this functionality overlooked / excluded for some reason?
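For reference, calling the first of those GET routes from script might look roughly like this. This is only a sketch: the version segment is a placeholder, and Valence requires every request URL to be signed with your app/user keys, which the hypothetical signUrl() helper below is assumed to handle.

    // signUrl() is a made-up helper standing in for the Valence URL-signing step
    var url = signUrl("/d2l/api/le/1.0/" + orgUnitId + "/overview");
    fetch(url)
        .then(function (response) { return response.json(); })
        .then(function (overview) {
            console.log(overview);   // whatever course overview data the route returns
        });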

Doesn't look like it is included. I played around with this call:
GET /d2l/api/le/(version)/(orgUnitId)/content/root/
I thought it might be part of the content as a whole, but it doesn't seem to be. When viewing the course through the browser, it loads the Overview in an AJAXy way, so looking at the parameters of that call I tried this:
GET /d2l/api/le/(version)/(orgUnitId)/content/modules/(moduleId)
I put in Overview as the module ID, since that is what was being passed along in the AJAX call as the identifier, but that didn't work either. It looks like the Overview is completely separate from the content and not accessible through those API endpoints, and looking through the docs I can see no other endpoint that would address your needs.

Related

Rendering different partials depending on user window size

I would like to test whether the user's browser window.width is >= 800px; if so, I would like to render partial A, otherwise (window.width < 800px) render a different partial.
I have little experience, so please explain my options for implementation:
I am expecting either a JavaScript method on the page or jQuery.
I have tried
http://scottwb.com/blog/2012/02/23/a-better-way-to-add-mobile-pages-to-a-rails-site/
but:
1. It doesn't work for me.
2. Even if it did, I expect it works based on the device being used, not the pixel count.
Thank you in advance!
You could use Ahoy. The current_visit method contains the following information.
When someone visits your website, Ahoy creates a visit with lots of useful information:
traffic source - referrer, referring domain, landing page, search keyword
location - country, region, and city
technology - browser, OS, and device type
utm parameters - source, medium, term, content, campaign
A request won't contain data about screen size, so ordinarily the server has no way to know the screen width or which response to serve, as PinnyM also mentioned in a comment.
A common practice is to use the User-Agent header to detect mobile devices on the server; the User-Agent is part of the request. It's not 100% accurate, but it's something you can depend on for most cases.
However, there are still solutions for your actual question: serving a page based on screen size.
The workaround is to use JavaScript to detect the screen size first, then drop a cookie. The server can read the cookie and decide which template to render.
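A minimal sketch of the client-side half, with a made-up cookie name:

    // drop a cookie with the viewport width, then reload so the server
    // can pick a template on the very next request
    if (document.cookie.indexOf("screen_width=") === -1) {
        document.cookie = "screen_width=" + window.innerWidth + "; path=/";
        window.location.reload();
    }

On the Rails side you would read cookies["screen_width"] (or whatever name you chose) and branch on it when choosing which partial to render.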
The basic repo is here: https://github.com/mattstauffer/Simple-RESS It's for PHP, but you can get the idea from the source code.
There is also Rails implementation: https://github.com/matthewrobertson/ress, and the introduction: http://matthewrobertson.org/blog/2013/02/15/introducing-ress/
My opinion: I don't like this solution even though it is viable. It's a lot of work and a lot of things to take care of. I would rather use User-Agent detection instead.
Sounds like using a responsive front-end framework might be something to look into. I'm a big fan of Foundation; it's super easy to use with Rails apps. And the new version of Foundation just launched today! Check it out: http://foundation5.zurb.com/

SignalR dynamic endpoints

I'm working on a collaborative document editing application where clients can open up a document, post edits via a webservice, and subscribe to updates made to the document using SignalR. I'm experimenting with my SignalR setup and can't quite get what I want.
My gut tells me that I should shoot for a setup where each document has an endpoint with a name like "subscribe", so the full path would be "/documents/1/subscribe" for document 1 and "/documents/2/subscribe" for document 2. However, as far as I can tell, SignalR wants me to have a single endpoint, and then manage which clients get updates either by using Groups or by managing the list of subscribers for a document in code myself and send out individual messages.
As a result I have two questions.
Is there a way to do what I want with SignalR?
Is there a reason that what I want to do is totally wrong-headed and silly?
Aside from "dedicated", friendly looking URLs I don't really see any value to this vs. just using groups. In fact, the only thing I could see it doing is adding more overhead because of the way the message bus internals of SignalR work with respect to scale.
If you did want to try this, the main thing you'd need to figure out would be registering routes on the fly, per document. Phil Haack's RouteMagic has done this for MVC routes, so I suppose something similar might be possible for SignalR route configurations as well.
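For what it's worth, the group-per-document approach looks roughly like this from the JavaScript client's point of view. The hub name, method names and applyEdit() are all made up for the sketch; the server-side subscribe method would simply add the connection to a per-document group.

    // jQuery SignalR client sketch; "documentHub", "subscribe" and "updated" are hypothetical names
    var connection = $.hubConnection();
    var docHub = connection.createHubProxy("documentHub");

    // the server broadcasts each edit to the group for this document
    docHub.on("updated", function (edit) {
        applyEdit(edit);   // applyEdit is assumed to exist on the page
    });

    connection.start().done(function () {
        // documentId would come from the page that opened the document
        docHub.invoke("subscribe", documentId);
    });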

Making a website without hyperlinks

I am making a simple CMS to use in my own blog and I have been using the following code to display articles.
var xmlhttp = new XMLHttpRequest();
xmlhttp.onreadystatechange = function () {
    if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
        // write the returned article body into the main viewing area
        document.getElementById("maincontent").innerHTML = xmlhttp.responseText;
    }
};
// "getcontent.php?q=..." is the endpoint mentioned below; articleTitle is the title that was clicked
xmlhttp.open("GET", "getcontent.php?q=" + encodeURIComponent(articleTitle), true);
xmlhttp.send();
What it does is send a request, get the content associated with the article that was clicked from the database, and write it to the main viewing area with ".innerHTML".
Thus I don't have actual links to other articles. I know that I can use PHP to output HTML so that it forms a link like :
<a href="getcontent.php?q=article+title">Article Title</a>
But being slightly OCD I wanted my output to be as neat as possible. Although search engine visibility is not a concern for my personal blog I intend to adapt this to a few other sites which have search engine optimization as a priority.
From what I understand, basically search engine robots follow links to index the web sites.
My question is:
Does this practice have any negative implications for search engine visibility? Also, are there other reasons for preferring one approach over the other? I see that almost every site uses the 'link' method.
The link you've written will cause a page reload. In order to leverage the standard AJAX stuff you've got at the top, you need to write the links as something along the lines of
<a href="#" onclick="ajaxGet('article+title'); return false;">Article Title</a>
This assumes you have a JavaScript function called ajaxGet that takes the identifier of the article you're looking for as its argument.
If you were to write your entire site that way, search engines wouldn't be able to crawl you at all since they don't execute javascript. Therefore they can't get to anything off the front page. Also, even if they could follow the links, they'd have no way of referencing the page they got to since it doesn't have a unique URL. This is also annoying for users, since they can't get a link to an exact story to bookmark, send to a friend etc.
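One common compromise (just a sketch, reusing the getcontent.php endpoint from the question) is to keep a real, crawlable href and only hijack the click when JavaScript is available:

    <a href="getcontent.php?q=article+title"
       onclick="ajaxGet('article+title'); return false;">Article Title</a>

Crawlers and users without JavaScript follow the real URL, everyone else gets the AJAX behaviour, and each article still has a unique address that can be bookmarked or shared.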

how do I block my rails app from being hit by bots?

I'm not even sure I'm using the right terminology, whether this is actually bots or not. I didn't want to use the word 'spam' because it's not like I have comments or posts that are being created/spammed. It looks more like something is making the same repeated request to my domain, which is what made me think it was some kind of bot.
I've opened my first Rails app to the 'public', which is really a small group of users, <50 currently. That was last Friday. I started having performance issues today, so I looked at the log and I see tons of these RoutingErrors:
ActionController::RoutingError (No route matches "/portalApp/APF/pages/business/util/whichServer.jsp" with {:method=>:get}):
They are filling up the log, and I'm assuming this is causing the slowdown. Note the .jsp on the end; this is a Rails app, so I've got no URLs remotely like this in my app. I don't even have /portalApp, so I don't know where this is coming from.
This is hosted at DreamHost, and I chatted with one of their support people, who suggested a couple of sites that detail using .htaccess to block things. But it looks like you need to know the IP or domain that the requests are coming from, which I don't.
How can I block this? How can I find the IP or domain from the request? Any other suggestions?
Follow up info:
After looking at the access logs, it looks like it's not a bot. Maybe I'm not reading the logs right, but there are valid URL requests (generated from within my Flex app) coming from the same IP. So now I'm wondering if it's some kind of plugin generating the requests, but I really don't know. I'm also wondering if it's possible to block a certain URL request based on a pattern, but I suppose that's a separate question.
Old question, but for people who are still looking for alternatives I suggest checking out Kickstarter's rack-attack gem. Allows not only blacklisting and whitelisting, but also throttling.
This page seems to offer some good advice:
Here
The section on blocking by user agent may be something you could look at implementing. Is there any way you can get the user agent of the bot from your logs? If so, look for the unique part of the user agent that probably identifies the bot and add the following to .htaccess, replacing the relevant bits:
BrowserMatchNoCase SpammerRobot bad_bot
Order Deny,Allow
Deny from env=bad_bot
It's covered in more detail at that link, and of course, if you can't get the user agent from your logs then this will be of no use to you!
You can also update your public/robots.txt file to allow/disallow robots.
http://www.robotstxt.org/wc/robots.html
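For example, a robots.txt that asks all well-behaved crawlers to stay out entirely looks like this (note that abusive bots usually ignore robots.txt, so it won't help against the kind of traffic described above):

    User-agent: *
    Disallow: /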

Why would I put ?src= in a link?

I feel dumb for not knowing this, but I see a lot of links in web pages and instead of this:
<a href="http://foo.com/">
...they use this:
<a href="http://foo.com/?src=bar.com">
Now I understand that the ?src= is telling something that this referral is coming from bar.com, but I don't understand why this needs to be called out explicitly. Can anyone shed some light on this for me? Is this something I need to include in my program-generated links?
EDIT: Ok, sorry, I'm not being clear enough. I understand the GET syntax with a question mark and parameters separated by ampersands. I'm wondering what's this special src parameter? Why would one site link to another and tack an src parameter on the end even though there's no indication that the destination site uses this normally.
For example, on this page hover your mouse over the screenshot. The link URL is http://moms4mom.com/?src=stackexchangesites
But moms4mom.com is our site. Passing the src parameter does nothing, so why include it?
There are a few reasons that the src is being used explicitly, but in general it is easier and more reliable to trust a query string to determine the referrer than it is to trust the Referer header, since the latter is often broken or stripped, deliberately or not. Browsers, on the other hand, almost never mangle the query string in a URL, since this, unlike referers, is pretty important for pages to function. Besides, a referer is often sent without any deliberate action on the part of the site doing the referring, which some users dislike.
The reason I do it is that popular analytics tools sometimes make it easier to filter on query strings than on referrers.
There is no standard to the src parameter. Each site has its own and it's usually up to the site that gets the link to define how it wants to read it (as usually it's that site that's going to pay for the click).
The second is a dynamic link: a URL that a server-side language (like ASP or PHP) interprets as something to do, like in those Google URLs. But I've never used this site (foo.com), so I don't know much about this particular parameter.
Depending on how the site processes its URL, you may or may not need to include the ?... information.
This is passed to the website, and the server can process it just like form input. Some sites require this - and build their navigation off a single page, using nothing but the "extra" stuff passed afterwards. If you're generating a link to a site like that, it will be required.
In other cases, this is just used to pass extra, unrequired info (such as advertising, tracking info, etc)... In those cases, you can leave it off.
Unfortunately, there's no way to know without trying whether you can remove the "extra" bits from the URL.
After reading some of your comments - I'll also say:
There is nothing special about the "src" field in a query string. The server is free to use it any way it wishes. Unless you know specific info about the server, you cannot assume it can be left out.
The part after the ? is the query string. Different sites use it for different things, and it is usually used for passing information to the server side code for that URL, but can also be used in javascript.
For more info see Query String
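For instance, reading the src value on the client side is a one-liner with the standard URLSearchParams browser API (a sketch):

    // grab the value of ?src=... from the current page's URL
    var params = new URLSearchParams(window.location.search);
    var src = params.get("src");   // "stackexchangesites" for ?src=stackexchangesites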
