Saving html to 'manage files' - desire2learn

I have an external editable HTML file (using blocks of contentEditable="true") linked in the table of contents. Using Valence, could I save the edited contents of that file to the 'Manage Files' section of that course? I started reading through the Valence docs, but thought I'd ask here before spending too much time there. Thanks, S

I'm not quite sure exactly what you mean. Currently, the only direct access you have to course content through the Valence Learning Framework APIs is via the course content module-topic structure. The URL property associated with an uploaded topic file situates it within the "manage files" tree, and you can use this property to fetch the contents of a topic from the structure.
There are no API calls to update the contents of a topic file in place; to update it, you must fetch it, delete the topic, and then re-upload the updated file content as a new topic node. This may leave behind the previous topic file, unlinked from the content structure.
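To make that cycle concrete, here is a rough Python sketch using plain HTTP calls. Everything in it is illustrative: the host, ids, and LE version are placeholders, signed() is a stand-in for Valence ID/key authentication (the SDK's user context produces these signed URLs), and the final upload is really a multipart/mixed POST whose exact field set varies by LE version, so treat it as a starting point rather than working code.

    import json
    import requests

    HOST = "https://myschool.desire2learn.com"        # placeholder LMS host
    LE = "1.4"                                        # placeholder LE API version
    ORG_UNIT, TOPIC_ID, MODULE_ID = 6606, 12345, 678  # example ids

    def signed(url):
        # Hypothetical helper: apply Valence ID/key signing here
        # (the Valence SDK's user context can generate these URLs).
        return url

    # 1. Fetch the current topic file so nothing is lost.
    resp = requests.get(signed(
        f"{HOST}/d2l/api/le/{LE}/{ORG_UNIT}/content/topics/{TOPIC_ID}/file"))
    original_html = resp.text

    # 2. Delete the topic node (the old file may stay behind in Manage Files).
    requests.delete(signed(
        f"{HOST}/d2l/api/le/{LE}/{ORG_UNIT}/content/topics/{TOPIC_ID}"))

    # 3. Re-upload the edited HTML as a new file topic under a parent module.
    edited_html = "<html>...markup from the contentEditable page...</html>"
    topic_data = {"Title": "Edited page", "ShortTitle": "Edited page",
                  "Type": 1,        # topic
                  "TopicType": 1,   # file
                  "Url": f"/content/enforced/{ORG_UNIT}/edited-page.html",
                  "StartDate": None, "EndDate": None, "DueDate": None,
                  "IsHidden": False, "IsLocked": False}
    requests.post(
        signed(f"{HOST}/d2l/api/le/{LE}/{ORG_UNIT}/content/modules/{MODULE_ID}/structure/"),
        files={"RequestData": (None, json.dumps(topic_data), "application/json"),
               "file": ("edited-page.html", edited_html, "text/html")})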
The current course content API design proceeds from the user scenario of roughing out the framework for a new course's content, and not necessarily maintaining or changing that course content over time. Enhancement to the course content APIs to more comfortably accommodate additional use cases is on the roadmap, but is not currently fully scheduled and fleshed out.


Is there a way to edit the source code of the JIRA Issue Collector?

I am trying to allow users to create issues from a webpage, just like the Issue Collector. The problem is, there are only three templates provided for the collector and none of them are quite right.
What I want is to have three required fields that then combine to become the description. (Similar to how the first template has "what do you like" and "what do you not like", which both go into the description.)
The problem is there's no obvious way to edit the popup's contents.
Is there any way I can get at the source code of the collector to create my own modified version? Alternatively, if I just copy the HTML of the popup using Inspect Element, could I create a working clone?
EDIT: Well, I've managed to get at the source code using a Java decompiler, but now I haven't got a clue how to put it back together again...
Do you have a paid license for JIRA? If so, Atlassian will give you a copy of the source code.
From their FAQs:
After an order has been placed, how and when can the license key and source be accessed?
Access to your license key(s) and any applicable source code is provided only after the successful receipt and processing of your payment. Once payment is received, the Billing and Technical contact specified on the order can log into their My Atlassian account, and view all corresponding license keys.
And instructions on how to "put it all back together" :)
Then you are free to customize to your heart's content.
Of course, you'll need to re-customize every time there's an update from Atlassian ...
See also this post on Atlassian's wiki

Attaching/uploading files to not-yet-saved Note - what is best strategy for this?

In my application, I have a textarea input where users can type a note.
When they click Save, there is an AJAX call to Web Api that saves the note to the database.
I would like users to be able to attach multiple files to this note (Gmail style) before saving the note. It would be nice if the upload could start as soon as a file is attached, before the note is saved.
What is the best strategy for this?
P.S. I can't use the jQuery Fine Uploader plugin or anything like that, because I need to give the files unique names on the server before uploading them to Azure.
Is what I'm trying to do possible, or do I have to make the whole 'Note' a normal form post instead of an API call?
Thanks!
This approach is file-based, but you can apply the same logic to Azure Blob Storage containers if you wish.
What I normally do is give the user a unique GUID when they GET the AddNote page. I create a folder called:
C:\TemporaryUploads\UNIQUE-USER-GUID\
Then any files the user uploads at this stage get assigned to this folder:
C:\TemporaryUploads\UNIQUE-USER-GUID\file1.txt
C:\TemporaryUploads\UNIQUE-USER-GUID\file2.txt
C:\TemporaryUploads\UNIQUE-USER-GUID\file3.txt
When the user does a POST and I have confirmed that all validation has passed, I simply copy the files to the completed folder, with the newly generated note ID:
C:\NodeUploads\Note-100001\file1.txt
Then delete the C:\TemporaryUploads\UNIQUE-USER-GUID folder
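A minimal sketch of that flow, written in Python purely to illustrate (the original answer assumes a Web API stack; the folder names follow the example above):

    import shutil
    import uuid
    from pathlib import Path

    TEMP_ROOT = Path(r"C:\TemporaryUploads")
    FINAL_ROOT = Path(r"C:\NodeUploads")

    def new_upload_session():
        # Called when the AddNote page is served: one folder per visitor.
        session_guid = str(uuid.uuid4())
        (TEMP_ROOT / session_guid).mkdir(parents=True, exist_ok=True)
        return session_guid

    def save_uploaded_file(session_guid, filename, data):
        # Each AJAX upload lands in the visitor's temporary folder.
        (TEMP_ROOT / session_guid / filename).write_bytes(data)

    def commit_note(session_guid, note_id):
        # On a validated POST: move the files under the new note id,
        # then drop the temporary folder.
        target = FINAL_ROOT / f"Note-{note_id}"
        target.mkdir(parents=True, exist_ok=True)
        temp_dir = TEMP_ROOT / session_guid
        for f in temp_dir.iterdir():
            shutil.move(str(f), str(target / f.name))
        shutil.rmtree(temp_dir)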
Cleaning Up
Now, that's all well and good for users who actually go ahead and save a note, but what about the ones who upload a file and then close the browser? There are two options at this stage:
Have a background service clean up these files on a schedule (daily, weekly, etc.). This is a good fit for Azure WebJobs.
Clean up the old files via the web app each time a new note is saved. Not a great approach, as you're doing file I/O when there are potentially no files to delete.
Building on RGraham's answer, here's another approach you could take:
Create a blob container for storing note attachments. Let's call it note-attachments.
When the user comes to the screen of creating a note, assign a GUID to the note.
When the user uploads a file, you just prefix the file name with this note id. So if a user uploads a file, say file1.txt, it gets saved into blob storage as note-attachments/{note id}/file1.txt.
Depending on your requirements, once you save the note you may move this blob to another blob container or leave it where it is. Since the blob has the note id in its name, finding the attachments for a note is easy.
For uploading files, I would recommend doing it directly from the browser to blob storage, making use of AJAX, CORS, and a Shared Access Signature. This way you avoid routing the data through your servers. You may find these blog posts useful:
Revisiting Windows Azure Shared Access Signature
Windows Azure Storage and Cross-Origin Resource Sharing (CORS) – Lets Have Some Fun
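As a rough illustration of the Shared Access Signature step with today's azure-storage-blob Python SDK (the posts above cover the original Windows Azure APIs): the account name and key below are placeholders, and the blob name follows the note-attachments/{note id}/ convention described above.

    from datetime import datetime, timedelta, timezone
    from azure.storage.blob import BlobSasPermissions, generate_blob_sas

    ACCOUNT = "mystorageaccount"    # placeholder storage account name
    ACCOUNT_KEY = "<account key>"   # placeholder; keep this server-side only
    CONTAINER = "note-attachments"

    def upload_url_for(note_id, filename):
        # Short-lived, write-only SAS URL the browser can PUT the file to,
        # so the upload bypasses your web servers entirely.
        blob_name = f"{note_id}/{filename}"
        sas = generate_blob_sas(
            account_name=ACCOUNT,
            container_name=CONTAINER,
            blob_name=blob_name,
            account_key=ACCOUNT_KEY,
            permission=BlobSasPermissions(create=True, write=True),
            expiry=datetime.now(timezone.utc) + timedelta(minutes=15),
        )
        return f"https://{ACCOUNT}.blob.core.windows.net/{CONTAINER}/{blob_name}?{sas}"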

Copy course content

We have set up "master" templates for each of our courses. These templates contain both structure and content of each course.
I want to automate the creation of the courses at the commencement of each semester, based on our timetable information.
I have got Valence to the point of creating a course from a template. From what I can see in the documentation, it looks like I will have to parse the content of the template and copy individual items across to the unique courses.
Is this correct, or is there a simple way to copy the entire content from the template across to the actual course instance?
Content assigned to a course template does not get copied into a newly created course offering that lists the template as its CourseTemplate. If you want to store content in a course template and then copy it into a new course associated with that template, you can use the course content APIs to inquire about the template's content structure, and replicate it in the newly created course: the Content.ContentObjectData JSON blocks you use to create new content structure are a superset of the Content.ContentObject JSON blocks you see when you ask about the content structure.
Unfortunately, because of the rules around an org unit's file content store, we really don't recommend that you put actual file data into a course template's content store, because there's no easy way to refer to them from child course offerings, or copy them remotely into the child course offering's content space.
If you do store file data in the template's content space and want to put it into child course offerings, you need to fetch it from the LMS to the client and re-upload it into the new course offering.
You may get more leverage out of storing common course data objects in Desire2Learn's Learning Object Repository where what you put into the course template/offering's content structure are links, not files.
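A rough sketch of that inquire-and-replicate approach in Python, with placeholder host and version values and a stand-in signed() helper for Valence authentication; it copies only the module skeleton, since file topics would need the fetch-and-re-upload treatment described above.

    import requests

    HOST = "https://myschool.desire2learn.com"   # placeholder LMS host
    LE = "1.4"                                   # placeholder LE API version

    def signed(url):
        # Hypothetical helper: apply Valence ID/key signing here.
        return url

    def copy_structure(template_ou, course_ou):
        # Read the template's root modules (Content.ContentObject blocks)...
        template_root = requests.get(signed(
            f"{HOST}/d2l/api/le/{LE}/{template_ou}/content/root/")).json()
        # ...and re-create each one in the new offering using
        # Content.ContentObjectData. Required fields vary by LE version.
        for module in template_root:
            new_module = {"Title": module["Title"],
                          "ShortTitle": module["ShortTitle"],
                          "Type": 0,   # module
                          "ModuleStartDate": None, "ModuleEndDate": None,
                          "IsHidden": module["IsHidden"],
                          "IsLocked": module["IsLocked"]}
            requests.post(signed(
                f"{HOST}/d2l/api/le/{LE}/{course_ou}/content/root/"),
                json=new_module)
            # The create call may not return the new module's id, so to add
            # nested modules/topics you generally have to re-query the
            # course's content structure and match the module by title.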
The answer seems to be that there is no simple way of bulk copying all the content from a template to a course offering using the Valence API.
I had a go at doing it by traversing the content structure by accessing the TOC object from the template then copying each individual module and topic in the structure.
Unfortunately, this is made all the more difficult by the fact that the API doesn't return the id of the module or topic created. So, when it comes to adding the nested content objects, you have to re-query the current course modules to find the object you just added.
At that point it became all too hard, so we're going to automate the creation of course offerings from the template but advise teachers to use the built-in Import/Export/Copy Components feature to copy the content from the template into the course offering.

Differentiating between file content changes and metadata changes in changes API feed

My app caches Google Docs files locally and needs to update them when they change. When I request a changes feed, the results include all items that have changed, regardless of the type of change. I only need to re-download those items whose actual content has changed; I don't need to download documents that have merely been shared with somebody new or otherwise had their metadata changed. I know that you can request that expanded ACL data be included in the changes feed, but that may not be sufficient, since it would only help me detect permission changes, not other changes to metadata.
Is there a way to do this? The files that are being downloaded are quite large at times (5-10MB), and the accounts that I'm tracking frequently have thousands of files, so imagine my users' consternation if they're on a slow connection and my app suddenly has to re-download hundreds of files due to a simple change like a folder being shared with a new user.
Thanks!
How about the revision feed?
You can find exactly what you need.
Okay. I overlooked the simple answer hidden in the XML: there is a checksum element included in the docs feed for all documents.
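For what it's worth, the same check maps onto the current Drive v3 API: metadata-only changes (sharing, renaming) don't create a new revision, so comparing the latest revision id against the one you last downloaded tells you whether the content itself changed. A rough sketch, assuming service is an authorized google-api-python-client Drive client and cache maps file ids to the last revision id you fetched:

    def needs_redownload(service, cache, file_id):
        # List the file's revisions (oldest first) and compare the newest
        # revision id with the cached one; only a content change adds one.
        revisions = service.revisions().list(
            fileId=file_id, fields="revisions(id,modifiedTime)").execute()
        latest = revisions["revisions"][-1]["id"]
        return cache.get(file_id) != latest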

How to provide cid email attachments to embedded browser

I'm using the embedded web browser component from Bsalsa to write an email client in Delphi.
I have a problem with cid embedded attachments such as:
<IMG src="cid:5D4219C71EAE43B1864AE9CB27C224A8#somehost">
I store the attachments in the database but can't figure out how to provide them to the browser. It seems a custom moniker might need to be implemented, but the documentation is scarce.
Any help would be appreciated.
I've implemented it using a "pluggable protocol" handler and it's easier than it looks. Start here: http://msdn.microsoft.com/en-us/library/aa767916(VS.85).aspx and here: http://www.bsalsa.com/protocols.html
I'm sorry I can't share the code I wrote, but it was written for the company I work for and I have restrictions on it. Basically, you need a COM object that implements the proper interface to fetch the data and allow the web browser control to read it.
That's IMHO the correct way to do it - altering the mail and storing temporary data may bring issues in the long run.
The simplest solution is to extract your "attachments" as requested into a temporary folder, then change the references in the source to point to these temporary files prior to display. In the past I have used diHTMLParser to do just this with great success.
If I remember correctly, the message contains these MIME attachments along with an optional filename, which doesn't always exist, but each part will have a MIME type, so you might need a translation table to get a default file extension for an attachment. Also, keep track of the files you place in your temp directory and clean up once your message window is closed. If you allow multiple messages to be open at once, allow for name collisions and generate unique filenames... it is common for signatures to have the same name but be from different people... it can be confusing if your message from John is signed Mary. :)
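The answers here are Delphi-specific, but the extract-and-rewrite step itself is language-agnostic. A rough sketch of the idea using Python's standard email module, purely to illustrate mapping Content-IDs to temporary files and rewriting the cid: references (naming, collisions, and cleanup are simplified):

    import email
    import tempfile
    from email import policy
    from pathlib import Path

    def inline_cid_attachments(raw_message):
        # Extract cid-referenced parts to a temp folder and rewrite the HTML
        # body so <img src="cid:..."> points at the extracted files.
        msg = email.message_from_bytes(raw_message, policy=policy.default)
        html = msg.get_body(preferencelist=("html",)).get_content()
        temp_dir = Path(tempfile.mkdtemp(prefix="msg_"))

        for part in msg.walk():
            cid = part.get("Content-ID")
            if not cid:
                continue
            cid = cid.strip("<>")
            # Parts often lack a filename; fall back to one derived from
            # the MIME subtype so the browser still gets an extension.
            name = part.get_filename() or f"{cid}.{part.get_content_subtype()}"
            target = temp_dir / name
            target.write_bytes(part.get_payload(decode=True))
            html = html.replace(f"cid:{cid}", target.as_uri())

        # Remember to delete temp_dir when the message window closes.
        return html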
