Is it possible to use ActiveStorage without a file input? - ruby-on-rails

I haven't seen any documentation on the matter, but to elaborate:
I want to use Active Storage to upload files in my Rails app without having to use a browser's file input element. Whether via drag-and-drop or a custom file picker, it'd be nice to tell ActiveStorage to upload a file and save it without a file input element.
Also: AFAIK, browsers don't allow you to hide a file input and set its file contents programmatically (as a sort of workaround).
Is this possible? Does anyone have an example of how this is done without a file input element?

You can use the DirectUpload class for this purpose. Upon receiving a file from your library of choice, instantiate a DirectUpload and call its create method. create takes a callback to invoke when the upload completes:
import { DirectUpload } from "activestorage"

// On file selection or drop:
const url = element.dataset.directUploadUrl
const upload = new DirectUpload(file, url)

upload.create((error, blob) => {
  if (error) {
    // Handle the error
  } else {
    // Add an appropriately-named hidden input to the form
    // with a value of blob.signed_id
  }
})
This class is the rare exception to the rule that undocumented Rails APIs are internal. We just haven’t gotten around to documenting it yet.
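For example, here is a minimal sketch of wiring a drop zone to DirectUpload. The element ids, form markup, and input name ("post[image]") are assumptions for illustration; the URL used is the default Rails direct-upload endpoint, which is otherwise exposed via the data-direct-upload-url attribute on a file input:
import { DirectUpload } from "activestorage"

// Assumed markup: a form and a drop target somewhere on the page
const form = document.getElementById("upload-form")
const dropZone = document.getElementById("drop-zone")

// Default ActiveStorage direct-upload endpoint in Rails
const url = "/rails/active_storage/direct_uploads"

dropZone.addEventListener("dragover", event => event.preventDefault())
dropZone.addEventListener("drop", event => {
  event.preventDefault()
  Array.from(event.dataTransfer.files).forEach(file => {
    const upload = new DirectUpload(file, url)
    upload.create((error, blob) => {
      if (error) {
        console.error(error)
      } else {
        // Attach the blob to the record on form submit via a hidden input
        const input = document.createElement("input")
        input.type = "hidden"
        input.name = "post[image]" // assumed attribute name
        input.value = blob.signed_id
        form.appendChild(input)
      }
    })
  })
})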

Related

How to make sure a cached page has its corresponding assets cached?

tl;dr. My Service Worker is caching HTML pages and CSS files in different versions. Going offline: since I have to limit the number of files I’m caching, how can I make sure that, for each HTML page in the cache, the versioned CSS files it needs are also in the cache? I need to delete old CSS files at some point, and they have no direct relation with the HTML files.
I’m trying to turn a traditional website into a PWA (sort of), implementing caching strategies with a Service Worker (I’m using Workbox but the question is supposed to be more generalist).
I’m caching pages as I navigate through the website (network-first strategy), in order to make them available offline.
I’m also caching a limited number of CSS and JS assets with a cache-first strategy. The URLs pointing to them are already "cachebusted" using a timestamp embedded in the filename: front.320472502.css for instance. Because of the cachebusting technique already in place, I only need/want to keep a small number of assets in this cache.
Now here’s the issue I’m having. Let’s suppose I cached page /contact which referenced /front.123.css (hence was also cached). As I navigate to other pages, CSS has changed several times in the meantime, and my CSS cache now might contain only /front.455.css and /front.456.css. If I’m going offline now, trying to load /contact will successfully retrieve the contents of the page, but the CSS will fail to load because it’s not in the cache anymore, and it will render an unstyled content page.
Either I keep versions of my CSS in cache for a long time, which is not ideal, or I try to purge cached CSS only if it is not required by any of the cached pages. But how would you go about that? Looping through the cached pages, looking for the front.123.css string?
Another solution might be to give back an offline page rather than an unstyled content page, but I’m not sure if it is doable, since the worker responds with the HTML before knowing what assets it will need.
The "best" solution here is to use precaching (either via Workbox or some other way of generating a build-time manifest) and to make sure that all of your HTML and subresources are cached and expired atomically. You don't have to worry about version mismatches or cache misses if you can precache everything.
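For instance, with Workbox the whole precache manifest is injected at build time and served atomically (a minimal sketch; the manifest injection assumes workbox-build or the Workbox webpack plugin):
import { precacheAndRoute } from 'workbox-precaching';

// self.__WB_MANIFEST is replaced at build time with a list of
// {url, revision} entries covering your HTML and subresources,
// so everything is cached and expired together.
precacheAndRoute(self.__WB_MANIFEST);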
That being said, precaching everything isn't always a viable option, if your site relies on a lot of dynamic, server-rendered content, or if you have a lot of distinct HTML pages, or if you have a larger variety of subresources, many of which are only required on a subset of pages.
If you want to go with the runtime caching approach, I'd recommend a technique along the lines of what's described in "Smarter runtime caching of hashed assets". That uses a custom Workbox plugin to handle cache expiration and finding a "best-effort" cache match for a given subresource when the network is unavailable. The main difficulty in generalizing that code is that you need to use a consistent naming scheme for your hashes, and write some utility functions to programmatically translate a hashed URL into the "base" URL.
In the interest of providing some code along with this answer, here's a version of the plugin that I currently use. You'll need to customize it as described above for your hashing scheme, though.
import {WorkboxPlugin} from 'workbox-core';

import {HASH_CHARS} from './path/to/constants';

// Strip the leading hash (plus separator) to recover the "base" filename
function getOriginalFilename(hashedFilename: string): string {
  return hashedFilename.substring(HASH_CHARS + 1);
}

function parseFilenameFromURL(url: string): string {
  const urlObject = new URL(url);
  return urlObject.pathname.split('/').pop();
}

// Two URLs "match" if they refer to revisions of the same underlying asset
function filterPredicate(
  hashedURL: string,
  potentialMatchURL: string,
): boolean {
  const hashedFilename = parseFilenameFromURL(hashedURL);
  const hashedFilenameOfPotentialMatch =
    parseFilenameFromURL(potentialMatchURL);

  return (
    getOriginalFilename(hashedFilename) ===
    getOriginalFilename(hashedFilenameOfPotentialMatch)
  );
}

export const revisionedAssetsPlugin: WorkboxPlugin = {
  cachedResponseWillBeUsed: async ({cacheName, cachedResponse, state}) => {
    // Remember which cache was consulted, for use in handlerDidError
    state.cacheName = cacheName;
    return cachedResponse;
  },

  cacheDidUpdate: async ({cacheName, request}) => {
    // A new revision was just cached; evict older revisions of the same asset
    const cache = await caches.open(cacheName);
    const keys = await cache.keys();
    for (const key of keys) {
      if (filterPredicate(request.url, key.url) && request.url !== key.url) {
        await cache.delete(key);
      }
    }
  },

  handlerDidError: async ({request, state}) => {
    // The network failed and there was no exact cache match; fall back
    // to any cached revision of the same underlying asset
    if (state.cacheName) {
      const cache = await caches.open(state.cacheName);
      const keys = await cache.keys();
      for (const key of keys) {
        if (filterPredicate(request.url, key.url)) {
          return cache.match(key);
        }
      }
    }
  },
};
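To actually use it, the plugin gets attached to a runtime caching strategy, along these lines (a sketch; the cache name and match callback are assumptions):
import { registerRoute } from 'workbox-routing';
import { CacheFirst } from 'workbox-strategies';

// Cache-first for hashed CSS/JS, with the plugin handling expiration of
// stale revisions and best-effort offline fallbacks
registerRoute(
  ({ request }) => request.destination === 'style' || request.destination === 'script',
  new CacheFirst({
    cacheName: 'revisioned-assets',
    plugins: [revisionedAssetsPlugin],
  })
);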

HTMLIFrameElement.contentWindow.print() from GoogleSheets is not opening print preview

I am using Google Sheets to print a png/image file using HtmlService. I create a temporary iframe element with an img tag in the modal dialog and call the iframe's contentWindow.print() function after the iframe and its image have loaded. (I have not set the iframe's visibility:hidden attribute, so I can check that the image loads.)
However, I only see the printer dialog without any print preview. I am testing on Firefox. Am I missing anything?
[Updated] - I am using Google Apps Script. performPrint() is in printJsSource.html and openUrl() is in Code.gs.
Inside printJsSource.html
function performPrint(iframeElement, params) {
  try {
    iframeElement.focus()

    // If Edge or IE, try execCommand, falling back to print()
    if (Browser.isEdge() || Browser.isIE()) {
      try {
        iframeElement.contentWindow.document.execCommand('print', false, null)
      } catch (e) {
        iframeElement.contentWindow.print()
      }
    } else {
      // Other browsers; as I am using Firefox, it comes here
      iframeElement.contentWindow.print()
    }
  } catch (error) {
    params.onError(error)
  } finally {
    //cleanUp(params)
  }
}
Inside Code.gs
function openUrl() {
  var html = HtmlService.createHtmlOutputFromFile("printJsSource");
  html.setWidth(500).setHeight(500);
  var ui = SpreadsheetApp.getUi().showModalDialog(html, "Opening ...");
}
I think there is some general confusion about the concept.
First of all, function performPrint() is a client-side JavaScript function, while function openUrl() is a server-side Apps Script function.
You did not specify whether you are using Google Apps Script, but if so, function openUrl() belongs in the Code.gs file and function performPrint() in the printJsSource.html file.
function openUrl() allows you to open a modal dialog which can show some data on the UI, e.g. your image.
Do not confuse this behavior with actual printing (preview)!
It is NOT possible to trigger the opening of the Google Sheets print preview programmatically!
The JavaScript method you are using, iframeElement.contentWindow.print(), might trigger printing of the whole content of a browser window (different from the Google Sheets printing dialog; the behavior also depends on the browser), but if you try to incorporate it into the client-side code of an Apps Script project, you will most likely run into restrictions due to the scopes of modal dialogs and the usage of iframes.
While from your code it is hard to say whether you implemented the functions in the correct files of the Apps Script project, keep in mind that to work with iframes you need to specify in function openUrl():
html.setXFrameOptionsMode(HtmlService.XFrameOptionsMode.ALLOWALL);
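For reference, a minimal sketch of what that could look like in Code.gs (untested; based on the snippet above and the standard HtmlService API, whose setters can be chained):
function openUrl() {
  var html = HtmlService.createHtmlOutputFromFile("printJsSource")
    .setXFrameOptionsMode(HtmlService.XFrameOptionsMode.ALLOWALL)
    .setWidth(500)
    .setHeight(500);
  SpreadsheetApp.getUi().showModalDialog(html, "Opening ...");
}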

How to access url params in sapper outside of preload function?

In Sapper, AFAIK from the documentation, the only way to access URL params is through the preload() function, where they are available inside the params object.
The thing is that I want to access these params outside of preload(). From a bird's-eye view of the documentation, I can't see a solution to my problem.
I have tried setting a property for the URL param inside data(), but it seems preload() has no access to data, whether getting or setting; it is not meant for those things.
<script>
  import { stores } from "@sapper/app";

  const { page } = stores();
  const { slug } = $page.params;
</script>
https://sapper.svelte.dev/docs/#Stores
If you are using Svelte v3 and the latest alpha of Sapper, import page, which is now provided as a store:
import { page } from '@sapper/app';

const { slug } = $page.params;
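One caveat: destructuring $page.params with const captures the value once, at component initialization. If the same component can be reached again via client-side navigation with different params, a reactive statement keeps it in sync (a sketch based on the store answer above):
<script>
  import { stores } from "@sapper/app";

  const { page } = stores();

  // Re-runs whenever the page store changes, e.g. on client-side navigation
  $: ({ slug } = $page.params);
</script>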

NetSuite/Suitescript/Workflow: How do I open a URL from a field after clicking button?

I have a workflow that adds a button "Open Link" and a field on the record called "URL" that contains a hyperlink to an attachment in NetSuite. I want to add a workflow action script that opens this url in a different page. I have added the script and the workflow action to the workflow. My script:
function openURL() {
  var url = nlapiGetFieldValue('custbody_url');
  window.open(url);
}
I get this script error after clicking the button: "TypeError: Cannot find function open in object [object Object]."
How can I change my script so it opens the URL in the field?
(This function works when I try it in the console)
Thanks!
Do you want it to work when the record is being viewed or edited? They have slightly different scripts. I'm going to assume you want the button to work when the record is being viewed, but I'll write it so it works even when the document is being edited as well.
The hard part about the way NetSuite has set it up is that it requires two scripts: a user event script and a client script. The way #michoel suggests may work too... I've never inserted the script by text before personally though.
I'll try that sometime today perhaps.
Here's a user event you could use (haven't tested it myself though, so you should run it through a test before deploying it to everyone).
function userEvent_beforeLoad(type, form, request) {
  /*
    Add the specified client script to the document that is being shown.
    It looks it up by id, so you'll want to make sure the id is correct.
  */
  form.setScript("customscript_my_client_script");

  /*
    Add a button to the page which calls the openURL() method
    from the client script.
  */
  form.addButton("custpage_open_url", "Open URL", "openURL()");
}
Use this as the Suitescript file for a User Event script. Set the Before Load function in the Script Page to userEvent_beforeLoad. Make sure to deploy it to the record you want it to run on.
Here's the client script to go with it.
function openURL() {
  /*
    nlapiGetFieldValue() gets the url client side in a changeable field,
    which nlapiLookupField() (which looks it up server side) can't do.
    If your url is hidden/unchanging, or you only care about view mode,
    you can get rid of the below and use nlapiLookupField() instead.
  */
  var url = nlapiGetFieldValue('custbody_url');

  /*
    nlapiGetFieldValue() doesn't work in view mode (it returns null),
    so we need to use nlapiLookupField() instead.
    If you only care about edit mode, you don't need nlapiLookupField(),
    so you can ignore this.
  */
  if (url == null) {
    var myType = nlapiGetRecordType();
    var myId = nlapiGetRecordId();
    url = nlapiLookupField(myType, myId, 'custbody_url');
  }

  // Opening up the url
  window.open(url);
}
Add it as a Client Script, but don't make any deployments (the User Event Script will attach it to the form for you). Make sure this script has the id customscript_my_client_script (or whatever script id you used in the user event script in form.setScript()) or else this won't work.
Another thing to keep in mind is that each record can only have one script appended to it using form.setScript() (I think?) so you may want to title the user event script and client script something related to the form you are deploying it on. Using form.setScript is equivalent to setting the script value when you are in the Customize Form menu.
If you can get #michoel's answer working, that may end up being better because you're keeping the logic all in one script which (from my point of view) makes it easier to manage your Suitescripts.
The problem you are running into is that Workflow Action Scripts execute on the server side, so you are not able to perform client side actions like opening up a new tab. I would suggest using a User Event Script which can "inject" client code into the button onclick function.
function beforeLoad(type, form) {
  var script = "window.open(nlapiGetFieldValue('custbody_url'))";
  form.addButton('custpage_custom_button', 'Open URL', script);
}
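If the button also needs to work in view mode, where nlapiGetFieldValue() can return null (as the other answer notes), the same fallback idea can be folded into the injected string. A sketch, untested:
function beforeLoad(type, form) {
  // Fall back to a server-side lookup when the client-side value is empty
  var script = "window.open(nlapiGetFieldValue('custbody_url') || " +
    "nlapiLookupField(nlapiGetRecordType(), nlapiGetRecordId(), 'custbody_url'))";
  form.addButton('custpage_custom_button', 'Open URL', script);
}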

How do I preload existing files and display them in the blueimp upload table?

I am using the jquery-ui version of Blueimp upload, and I like how I can format a table and display files that were just uploaded. But I'd like to use it as a file manager as well, so I want to preload existing files and display them as if they were just uploaded. How can I do that? A sample link to where someone else has addressed this would suffice. BTW, I am uploading several different file types, not just images.
Thanks!
Or without an ajax call:
Prepare an array containing details of the existing files, e.g.:
var files = [
  {
    "name": "fileName.jpg",
    "size": 775702,
    "type": "image/jpeg",
    "url": "http://mydomain.com/files/fileName.jpg",
    "deleteUrl": "http://mydomain.com/files/fileName.jpg",
    "deleteType": "DELETE"
  },
  {
    "name": "file2.jpg",
    "size": 68222,
    "type": "image/jpeg",
    "url": "http://mydomain.com/files/file2.jpg",
    "deleteUrl": "http://mydomain.com/files/file2.jpg",
    "deleteType": "DELETE"
  }
];
Call done callback
var $form = $('#fileupload');

// Init the fileuploader if not already initialized
// $form.fileupload();

$form.fileupload('option', 'done').call($form, $.Event('done'), {result: {files: files}});
I also had the same problem. It is not magic how it works. I recommend examining the UploadHandler.php file; then you will be able to modify this plugin according to your needs.
The code above in your second post is just an ajax call to the uploader script (by default index.php in the server/php/ folder). The call method is set to "get" by default in the $.ajax object.
Open the UploadHandler.php file and go to the class method initialize(). You will see how the call with "get" is handled: UploadHandler calls the class method $this->get() to prepare and send the list of existing files. If you use a different upload directory, you need to pass a parameter to the UploadHandler. Simply change the url property in the $.ajax object, like:
url: $('#fileupload').fileupload('option', 'url') + '?otherDir=' + myDir,
Then you should initialize the options property of the UploadHandler before you create a new UploadHandler object, like this:
$otherDir = trim($_REQUEST['otherDir']);
// So that the files can be downloaded by clicking on the link
$otherDir_url = [anyURL] . '/' . $otherDir;

$options = array(
  'upload_dir' => $otherDir,
  'upload_url' => $otherDir_url,
);

$upload_handler = new UploadHandler($options);
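For completeness, the matching client-side load call just adds the extra query parameter (a sketch; myDir is whatever subdirectory you want listed):
$.ajax({
  url: $('#fileupload').fileupload('option', 'url') + '?otherDir=' + encodeURIComponent(myDir),
  dataType: 'json',
  context: $('#fileupload')[0]
}).done(function (result) {
  // Hand the server's file list to the plugin's done handler, as above
  $(this).fileupload('option', 'done').call(this, null, {result: result});
});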
Found the code in the main js file... It wasn't obvious how it worked. Got it working just fine.
// Load existing files:
$.ajax({
  url: $('#fileupload').fileupload('option', 'url'),
  dataType: 'json',
  context: $('#fileupload')[0]
}).done(function (result) {
  $(this).fileupload('option', 'done').call(this, null, {result: result});
});
If any of you looking at this are doing it in .NET, find this (for me it is in application.js). For a fairly recent version, there is a function:
// Load existing files:
$.getJSON($('#fileupload form').prop('action'), function (files) {
  files = somethingelse;
  var fu = $('#fileupload').data('fileupload');
  fu._adjustMaxNumberOfFiles(-files.length);
  fu._renderDownload(files)
    .appendTo($('#fileupload .files'))
    .fadeIn(function () {
      // Fix for IE7 and lower:
      $(this).show();
    });
});
This lives inside application.js. I'm doing it for .NET, and actually needed this gone.
Set somethingelse to either your files or "" depending on what you want to show. If you remove the line files = somethingelse, it will preload all files from the folder.
