We're converting from ANGEL to D2L. We've created JavaScript-based widgets that use the $SECTION_ID$ token to tell the script what course it is running from. In Desire2Learn the equivalent would be the replacement string {OrgUnitCode}. We'd like to just use that replacement string, but it doesn't work properly in course pages (where most of my widget usage is). Is there some other way to find out what course a JavaScript widget is running in?
Here is an example of the code for one of our widgets that a user would paste into a page in their course:
<script type="text/javascript" data-id="NotablePAD540" section="{OrgUnitCode}">
var DAT = DAT || {};
if (!DAT.n) { (function (d) {
  // Asynchronously inject the widget loader before the first <script> tag.
  var f = d.getElementsByTagName('SCRIPT')[0], s = d.createElement('SCRIPT');
  s.async = true; s.type = 'text/javascript'; s.charset = 'utf-8';
  s.src = '//dev.notable.vudat.msu.edu/n.js';
  f.parentNode.insertBefore(s, f);
}(document)); }
DAT.n = 1;
</script>
Each notepad has a unique ID; this code snippet example is for a notepad with the ID 540. The dynamic bit, {OrgUnitCode}, is what would separate out the data, allowing me to use the same snippet in multiple course offerings and have the data stored separately.
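For reference, the loader script finds its own tag and reads those attributes back out, along these lines (simplified sketch):
// Simplified sketch: the loader locates its own <script> tag and
// reads the notepad ID and section back out of the attributes.
var tag = document.querySelector('script[data-id="NotablePAD540"]');
var notepadId = tag.getAttribute('data-id'); // "NotablePAD540"
var sectionId = tag.getAttribute('section'); // the course, once the token is replaced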
If the goal is purely to have a widget on the course home page that contains dynamic information, the most effective solution is probably to use a "Remote Plugin Widget".
These can be set up by administrators using the "Manage Remote Plugins" tool.
Remote Plugins effectively wrap LTI launches in iframes and make them available as widgets when you are configuring home pages (or in other areas of the system). There are examples posted of how to use this for richer server-side applications.
The LTI launch itself passes along information about where it is being launched from (i.e. context-related information).
In your case it looks like just hosting a static page containing JavaScript would work. You could then use the JavaScript to inspect the query string. (It is a bit tricky to verify the LTI signatures from JavaScript safely, if you care about trust at that point...)
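For example, a hosted page could pull the LTI context fields out of its query string, assuming your endpoint reflects the POSTed launch parameters onto the URL (a simplified sketch; parameter names are the standard LTI 1.x ones):
// Sketch: read LTI context parameters from the query string,
// assuming the launch parameters were reflected onto the URL.
var params = new URLSearchParams(window.location.search);
var contextId = params.get('context_id');       // LTI course identifier
var contextLabel = params.get('context_label'); // e.g. the course code
console.log('Running in course:', contextLabel || contextId);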
Also, the Valence APIs (the GET calls) can be used from JavaScript if you need to supplement with other information that is available.
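For instance, a sketch of calling the Valence WhoAmI route from the page (this assumes the user already has an authenticated D2L session, and the version segment may differ on your instance):
// Sketch: look up the current user via the Valence WhoAmI route.
// Assumes an authenticated D2L session; adjust the version segment
// ('1.0') to whatever your instance supports.
fetch('/d2l/api/lp/1.0/users/whoami', { credentials: 'include' })
  .then(function (response) { return response.json(); })
  .then(function (user) {
    console.log('Current user:', user.UniqueName);
  });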
tl;dr. My Service Worker is caching HTML pages and CSS files in different versions. Going offline: since I have to limit the number of files I’m caching, how can I make sure that, for each HTML page in the cache, the versioned CSS files it needs are also in the cache? I need to delete old CSS files at some point, and they have no direct relation with the HTML files.
I’m trying to turn a traditional website into a PWA (sort of), implementing caching strategies with a Service Worker (I’m using Workbox, but the question is meant to be more general).
I’m caching pages as I navigate through the website (network-first strategy), in order to make them available offline.
I’m also caching a limited number of CSS and JS assets with a cache-first strategy. The URLs pointing to them are already "cachebusted" using a timestamp embedded in the filename: front.320472502.css for instance. Because of the cachebusting technique already in place, I only need/want to keep a small number of assets in this cache.
Now here’s the issue I’m having. Let’s suppose I cached page /contact which referenced /front.123.css (hence was also cached). As I navigate to other pages, CSS has changed several times in the meantime, and my CSS cache now might contain only /front.455.css and /front.456.css. If I’m going offline now, trying to load /contact will successfully retrieve the contents of the page, but the CSS will fail to load because it’s not in the cache anymore, and it will render an unstyled content page.
Either I keep versions of my CSS in cache for a long time, which is not ideal, or I try to purge cached CSS only if it is not required by any of the cached pages. But how would you go about that? Looping through the cached pages, looking for the front.123.css string?
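In other words, something along these lines (an untested sketch; the cache names are made up):
// Naive sketch: decide whether a cached CSS file is still referenced
// by any cached HTML page before deleting it.
async function isCssStillReferenced(cssFilename) {
  const pagesCache = await caches.open('pages');
  const requests = await pagesCache.keys();
  for (const request of requests) {
    const response = await pagesCache.match(request);
    const html = await response.text();
    if (html.includes(cssFilename)) {
      return true; // some cached page still links to this stylesheet
    }
  }
  return false;
}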
Another solution might be to give back an offline page rather than an unstyled content page, but I’m not sure if it is doable, since the worker responds with the HTML before knowing what assets it will need.
The "best" solution here is to use precaching (either via Workbox, or via some other way of generating a build-time manifest), and making sure that all of your HTML and subresources are cached and expired atomically. You don't have to worry about version mismatches or cache misses if you can precache everything.
That being said, precaching everything isn't always a viable option, if your site relies on a lot of dynamic, server-rendered content, or if you have a lot of distinct HTML pages, or if you have a larger variety of subresources, many of which are only required on a subset of pages.
If you want to go with the runtime caching approach, I'd recommend a technique along the lines of what's described in "Smarter runtime caching of hashed assets". That uses a custom Workbox plugin to handle cache expiration and finding a "best-effort" cache match for a given subresource when the network is unavailable. The main difficulty in generalizing that code is that you need to use a consistent naming scheme for your hashes, and write some utility functions to programmatically translate a hashed URL into the "base" URL.
In the interest of providing some code along with this answer, here's a version of the plugin that I currently use. You'll need to customize it as described above for your hashing scheme, though.
import {WorkboxPlugin} from 'workbox-core';
// HASH_CHARS is the number of characters in the hash portion of a filename.
import {HASH_CHARS} from './path/to/constants';

// Strip the leading hash and separator from a hashed filename, leaving
// the "base" filename. This assumes a hash-first naming scheme; adapt
// it to your own convention.
function getOriginalFilename(hashedFilename: string): string {
  return hashedFilename.substring(HASH_CHARS + 1);
}

// Extract just the filename portion of a URL's pathname.
function parseFilenameFromURL(url: string): string {
  const urlObject = new URL(url);
  return urlObject.pathname.split('/').pop() ?? '';
}

// Two URLs "match" if they refer to the same base filename, regardless
// of which hashed revision they point at.
function filterPredicate(
  hashedURL: string,
  potentialMatchURL: string,
): boolean {
  const hashedFilename = parseFilenameFromURL(hashedURL);
  const hashedFilenameOfPotentialMatch =
    parseFilenameFromURL(potentialMatchURL);

  return (
    getOriginalFilename(hashedFilename) ===
    getOriginalFilename(hashedFilenameOfPotentialMatch)
  );
}

export const revisionedAssetsPlugin: WorkboxPlugin = {
  // Stash the cache name in the shared per-request state, so that
  // handlerDidError can look in the right cache later.
  cachedResponseWillBeUsed: async ({cacheName, cachedResponse, state}) => {
    state.cacheName = cacheName;
    return cachedResponse;
  },

  // After a new revision is cached, expire any older revisions of the
  // same base file.
  cacheDidUpdate: async ({cacheName, request}) => {
    const cache = await caches.open(cacheName);
    const keys = await cache.keys();
    for (const key of keys) {
      if (filterPredicate(request.url, key.url) && request.url !== key.url) {
        await cache.delete(key);
      }
    }
  },

  // If the network fails and there's no exact cache match, fall back to
  // any cached revision of the same base file.
  handlerDidError: async ({request, state}) => {
    if (state.cacheName) {
      const cache = await caches.open(state.cacheName);
      const keys = await cache.keys();
      for (const key of keys) {
        if (filterPredicate(request.url, key.url)) {
          return cache.match(key);
        }
      }
    }
  },
};
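For completeness, here's roughly how that plugin could be wired into a runtime caching route (a sketch; the cache name and URL matcher are placeholders for your own setup):
import {registerRoute} from 'workbox-routing';
import {CacheFirst} from 'workbox-strategies';
import {revisionedAssetsPlugin} from './revisioned-assets-plugin';

// Cache-first for hashed CSS/JS, with a best-effort fallback to an
// older revision when offline. Names here are placeholders.
registerRoute(
  ({url}) => url.pathname.endsWith('.css') || url.pathname.endsWith('.js'),
  new CacheFirst({
    cacheName: 'hashed-assets',
    plugins: [revisionedAssetsPlugin],
  }),
);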
I am trying to create a Storybook for my react-relay app, but I don't know how to set up mock data for a component. For a simple component it is fine, because I can use the dummy UI component vs. container approach, but I can't use this for nested Relay components. For example, there is a UserList component that I want to add to Storybook: I can split the Relay fragment part into a container and the UI part into a component, but what if UserList's children are also Relay components? I can't split them when they are part of the composition of UserList.
Is there a solution for adding Relay components to Storybook?
I created an NPM package called use-relay-mock-environment, which is based on relay-test-utils and allows you to make Storybook stories out of your Relay components.
It allows nesting of Relay components, so you can actually make stories for full pages made out of Relay components. Here's an example:
// MyComponent.stories.(js | jsx | ts | tsx)
import React from 'react';
import { RelayEnvironmentProvider } from 'react-relay';
import createRelayMockEnvironmentHook from 'use-relay-mock-environment';
import MyComponent from './MyComponentQuery';

const useRelayMockEnvironment = createRelayMockEnvironmentHook({
  // ...Add global options here (optional)
});

export default {
  title: 'MyComponent',
  component: MyComponent,
};

export const Default = () => {
  const environment = useRelayMockEnvironment({
    // ...Add story specific options here (optional)
  });
  return (
    <RelayEnvironmentProvider environment={environment}>
      <MyComponent />
    </RelayEnvironmentProvider>
  );
};

export const Loading = () => {
  const environment = useRelayMockEnvironment({
    forceLoading: true,
  });
  return (
    <RelayEnvironmentProvider environment={environment}>
      <MyComponent />
    </RelayEnvironmentProvider>
  );
};
You can also add <RelayEnvironmentProvider /> as a decorator, but I recommend not doing that if you want to create multiple stories for different states/mock data. In the above example I show two stories: the Default one and a Loading one.
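For reference, the decorator variant would look roughly like this (a sketch; it pins every story in the file to a single environment):
// Sketch: a file-level decorator wrapping every story in one mock
// environment (less flexible than the per-story approach above).
export default {
  title: 'MyComponent',
  component: MyComponent,
  decorators: [
    (Story) => (
      <RelayEnvironmentProvider environment={useRelayMockEnvironment()}>
        <Story />
      </RelayEnvironmentProvider>
    ),
  ],
};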
Not only that, it requires minimal coding: you don't need to add the @relay_test_operation directive to your query, and the mocked data is automatically generated for you using faker.js, allowing you to focus on what matters, which is building great UI.
Feel free to review the source code here if you want to implement something similar: https://github.com/richardguerre/use-relay-mock-environment.
Note: it's still in its early days, so some things might change, but would love some feedback!
I also created relay-butler, a CLI that takes in GraphQL fragments and outputs Relay components, including an auto-generated query component that wraps the fragment component, and Storybook stories (the Default and Loading ones by default) that wrap that query component. Literally within minutes, I can create beautiful Relay components that are "documented" within Storybook.
Would also love some feedback for it!
I have a Progressive Web App that is added to my home screen. I've chosen the standalone display mode, and I have a service worker running in it.
Everything works perfectly, with one doubt: if I update my site (along with its service worker), I can see the updates if I load it directly in the browser, but if I launch it from the home screen link I always see the old site.
Is there a way to request updates when launching my site in standalone mode?
I think you are asking "is there a way to dynamically update my precached assets without updating my service worker?"
Yes!
I have been working on an upgrade to the JSON cache strategy at https://serviceworke.rs/json-cache.html and we will publish this soon!
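As a smaller, complementary step, you can also explicitly ask the browser to re-check the service worker script on every launch (a minimal sketch using the standard registration API):
// Minimal sketch: on each launch, nudge the browser to check for an
// updated service worker instead of waiting for its own schedule.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.ready.then(function (registration) {
    registration.update();
  });
}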
After digging, I discovered a simple solution; I'm reporting it here for others.
iOS does not support service workers, so the service worker is not the problem.
iOS keeps many resources in its cache, so the solution is to add a version parameter to the various imports.
The most robust option is to use a hash of the imported file's contents as the parameter, so the URL only changes when the file does.
Alternatively, we can use a timestamp to ensure that resources are ALWAYS re-fetched.
To attach this parameter, we can inject these resources with JavaScript, like so:
/**
 * Injects a script into the page.
 */
function appendScript(url) {
  var script = document.createElement('script');
  script.type = 'text/javascript';
  script.src = url;
  script.async = false; // async false so scripts execute in insertion order
  head.appendChild(script);
}

// Create a cache-busting parameter from the current date
var currVersion = '?v=' + new Date().getTime();

// Get the <head> element
var head = document.getElementsByTagName('head')[0];

// Append the script with the version parameter
appendScript('app.js' + currVersion);

// Append the CSS with the version parameter
head.insertAdjacentHTML('beforeend', '<link rel="stylesheet" type="text/css" href="style.css' + currVersion + '">');
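If you prefer the content-hash variant mentioned above, a hypothetical build-time step (Node.js here, purely as an illustration) could compute the parameter so that URLs only change when a file actually changes:
// Hypothetical build-time helper: derive a short content hash to use
// as the version parameter instead of a timestamp.
const crypto = require('crypto');
const fs = require('fs');

function contentHash(filePath) {
  const data = fs.readFileSync(filePath);
  return crypto.createHash('md5').update(data).digest('hex').slice(0, 8);
}

console.log('app.js?v=' + contentHash('app.js'));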
I'm trying to override a JS function named replaceMe in the web page from my add-on's content script, but I see that the original function implementation always gets executed.
Original HTML contains the following function definition:
function replaceMe()
{
alert('original');
}
I'm trying to override it in my add-on (main.js) like this:
tabs.activeTab.attach({
contentScriptFile: self.data.url("replacerContent.js")
});
Here's what my replacerContent.js looks like:
this.replaceMe = function()
{
alert('overridden');
}
However, when I run my add-on, I always see the text original alerted, meaning the redefinition in replacerContent.js never took effect. Can you let me know why? Since replaceMe is not a privileged method, I should be allowed to override it, eh?
This is because there is an intentional security boundary between web content and content scripts. If you want to communicate with web content and you have control over the web page as well, you should use postMessage.
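A sketch of that approach (assuming you control the page; the message shape here is invented):
// In the page script: listen for messages posted to the window.
window.addEventListener('message', function (event) {
  if (event.source !== window) return; // same-window messages only
  if (event.data && event.data.type === 'replace-me') {
    alert('overridden');
  }
});

// In the content script: post a message to the page.
window.postMessage({ type: 'replace-me' }, '*');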
If you don't have control over the web page, there is a hacky workaround. In your content script you can access the window object of the page directly via the global variable unsafeWindow:
var aliased = unsafeWindow.somefunction;
unsafeWindow.somefunction = function(args) {
// do stuff
aliased(args);
}
There are two main caveats to this:
It is unsafe, so you should never trust data that comes from the page.
We have never considered the unsafeWindow hack a supported API, and we have plans to remove it and replace it with a safer one.
Rather than relying on unsafeWindow hack, consider using the DOM.
You can create a page script from a content script:
// Page-script source: replace the page's rwt function with a no-op.
var script = 'rwt = function() {};';
document.addEventListener('DOMContentLoaded', function() {
  var scriptEl = document.createElement('script');
  scriptEl.textContent = script;
  document.head.appendChild(scriptEl);
});
The benefit of this approach is that you can use it in environments without unsafeWindow, e.g. Chrome extensions.
You can then use postMessage or DOM events to communicate between the page script and the content script.
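For example, the injected page script and the content script could talk over a DOM CustomEvent (a sketch; the event name is arbitrary):
// In the injected page script: announce something to the content script.
document.dispatchEvent(new CustomEvent('my-addon-event', {
  detail: { status: 'rwt replaced' },
}));

// In the content script: listen for the event.
document.addEventListener('my-addon-event', function (event) {
  console.log('Page script says:', event.detail.status);
});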
I am using the following code in the fusion tables 'customize info window' to create hyperlinked URL's from a column in my table:
"{URL}"
This works fine in returning the clickable hyperlink in the info box, except Fusion Maps by default tacks on https: instead of http: when the link is clicked. This causes problems when the user clicks the hyperlink and it tries to take them to a secure site when in fact it is not secure. The browsers throw up all sorts of warnings that will scare the pants off a lot of users who don't know what is happening.
Does anybody know how to remedy this and have the default be http and not the current https?
Thanks, Shep
You may want to abandon the "{URL}" approach and display the hyperlink with some simple HTML. This example from Google shows how to modify the info window's HTML in JavaScript:
google.maps.event.addListener(layer, 'click', function(e) {
  // Change the content of the InfoWindow
  e.infoWindowHtml = e.row['Store Name'].value + "<br>";
  // If delivery == yes, add content to the window
  if (e.row['delivery'].value == 'yes') {
    e.infoWindowHtml += "Delivers!";
  }
});
Changing e.row['Store Name'] to your URL column name (maybe e.row['URL']) and surrounding it with a pair of hyperlink tags <a> should do the trick:
e.infoWindowHtml = "<a href='" + e.row['URL'].value + "'>Click here!</a>";
There are three ways:
Specify a full URL including protocol in your data
Use an explicit link such as <a href="{URL}">Link here</a> in a custom info window layout
Completely override the content as in the answer above
I would recommend #1 because you can choose the right protocol for every link. However, #2 is probably easier in that you can leave your data the way it is.