I am currently working on a google sheet.
There are two tabs: one is a master sheet, and the other is a table where the user enters a product name and the sales price shows up.
However, I don't want the user to see the master sheet. Even though I hide the master sheet, the user can unhide it if they download the file.
Is it possible to
Disable the download option for the editor?
OR
Reference the master list without including it in the same file?
OR
Hide the master list even if the user downloads the file?
AFAIK, an editor's access to print the file cannot be disabled (tried and tested using this guide). I would suggest keeping a master file and creating a dependent file that users can download. You can also create a script that copies the content of the master file to the dependent file, and then limit access to the master file to yourself.
Here is a code snippet that copies the first sheet of the master file into your public file:
function copyMasterToPublic() {
  // Copy the first sheet of the master spreadsheet into the public (dependent) file
  var source = SpreadsheetApp.getActiveSpreadsheet();
  var sheet = source.getSheets()[0];
  var destination = SpreadsheetApp.openById("ID_GOES_HERE");
  sheet.copyTo(destination);
}
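If you want the dependent file to stay up to date without running the script by hand, you could also install a time-driven trigger that calls the copy function on a schedule. A minimal sketch, assuming the snippet above is wrapped in a function named copyMasterToPublic (the schedule is just an example):
function createSyncTrigger() {
  // Run the copy roughly once a day, around 2am in the script's time zone
  ScriptApp.newTrigger('copyMasterToPublic')
      .timeBased()
      .everyDays(1)
      .atHour(2)
      .create();
}
Note that copyTo() adds the sheet to the destination as "Copy of ...", so for repeated runs you would also want to delete or rename the previous copy.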
Hope this helps.
Within the Jenkins UI, on the project page, you can use the Description Setter plugin to set a description AFTER a build. Is there any way to set this dynamically before a build? I want to pull information from a file in the workspace that describes the files that will be changed when the user builds the project.
Edit: While I haven't found a way to dynamically set the project description, I did find that I can create an Active Choices Reactive Reference Parameter, give it a descriptive name that serves as a title, and then read the contents of a file containing HTML with a Groovy script like so:
// The contents of the file should be HTML
String contents = new File('/tmp/some_file.html').text
return contents
So I made this my last parameter, and it shows the information I need before the end user clicks the build button. That solves my problem.
I'm going to leave this question open though, in case someone has a better idea.
While I haven't found a way to dynamically set the project description, I did find that I can create an Active Choices Reactive Reference Parameter (this requires the Active Choices plugin), give it a descriptive name that serves as a title, and then read the contents of a file containing HTML with a Groovy script like so:
// The contents of the file should be HTML
String contents = new File('/tmp/some_file.html').text
return contents
You will need to select a Choice Type of: Formatted HTML.
So I made this my last parameter, and it shows the information I need before the end user clicks the build button. That solves my problem.
Given:
A DSL of some kind, parsed with the Xtext parser and then edited by the user in a TMF-based editor.
When the user opens a file for editing, I first want to access the parse tree of the just-opened file, modify the loaded file content in some way, and then present the modified source to the user for editing.
When the user wants to save the file, I again want to preprocess the text representation based on the current parse tree and save that altered version.
Is there any Xtext/EMF API to implement such pre-/post-processing?
The goal is to add some content that is not present in the physical file, let the user edit it, and remove it before saving to the file. This extra content should be stored separately from the DSL source file.
If I understand your question correctly, you want to display the additional information in the text editor itself, and not just add it to the EMF model without showing it in the text (the latter is what IDerivedStateComputer could be used for).
If the user is not supposed to edit the additional text, the "Code Mining" feature might be useful: https://www.eclipse.org/Xtext/documentation/310_eclipse_support.html#code-mining and https://blogs.itemis.com/en/code-mining-support-in-xtext
To answer the question itself:
Is there any Xtext/EMF API to implement such pre-/post-processing?
No, I am pretty sure there is no such Xtext API for pre-/post-processing files based on their own parse tree (EMF is irrelevant as you want to change the physical content). You could try to mess around with the XtextDocumentProvider (i.e. create your own subclass and register it in the UI module), but this is very likely to break the UI because the line numbers and offsets won't match.
You might have more luck implementing a custom Eclipse action that is executed on the original file, creates a temporary modified copy based on the parsed original, and then opens an editor for the temporary file. You could then implement an IXtextBuilderParticipant that writes the result back to the original file on save (you have to register it using the org.eclipse.xtext.builder.participant extension point).
Another idea would be not to use an Eclipse action but a tabbed editor using MultiPageEditorPart, with the original as one of three tabs (the composite file and the 'additional info' file being the other two).
The goal is to add some content that is not present in the physical file, let the user edit it, and remove it before saving to the file. This extra content should be stored separately from the DSL source file.
Couldn't you present this information in another view, similar to the 'Properties' view of EMF? E.g., the user opens a file and the Xtext editor opens along with the 'Properties' view, which provides a way to edit this "extra" information. Upon save of either view, the Xtext save is called and your extra properties are serialized into their own model.
We use Google Drive (GAFE) to prepare and present teaching/training materials. We'd like to maintain archived versions of past iterations, and then work on a new copy for each consecutive training session.
I've succeeded in making a copy of our training folder (using ericyd's gdrive-copy), and we're happily working away on that, BUT... the files are fairly heavily cross-linked. The Slides, for instance, will have links to the Docs handouts and PDF assignments associated with that lesson. When I made a copy of the whole folder structure, the files copied over, but the links are still all linked to the original files, when in fact what we want is for them to be linked to their respective copies.
This makes sense - obviously, when you make a copy of a file, you usually don't want to change its contents at the same time. However, when you're making an archive of a whole folder, ideally you'd like the links within the files to update as well.
I can compile a spreadsheet with the file IDs for each "original and copy" pair. Is there any way to iterate through all Google Docs/Sheets/Slides in a folder, and substitute the original URLs from the spreadsheet file with their respective copy URLs?
I'm practically a beginner when it comes to Google Apps Scripts, so while I have found Get All Links in a Document and am guessing it would be part of the answer, I have no clue where to go beyond that.
(Btw, if there's a different way of going about all three, automating fixing the links in Slides would be the most helpful, as that's where the bulk of them are)
I know this is a rather old topic, but I recently ran into a similar situation that I needed to solve. In my searching, this is the only reference I could find about cross-linking as a result of duplication. Unfortunately, I was not able to come up with a purely automated solution, but with a bit of ingenuity I was able to reduce the number of steps required to update my hyperlinks to reference the duplicated files rather than the originals.
First, I borrowed some script code I found online to generate a list of the files within a Google Drive folder and their URLs. I'll post the code below. It generates a new Google Sheet named "URL LIST" (you can change the name in the script); once it is generated, you'll need to find it in your Recent list in Google Drive and move it to the folder containing the copied documents and sheets.
Next, in the Google Sheet that holds my hyperlinks to the documents, I created an additional tab, also called URL LIST, and in A1 added an IMPORTRANGE() to import the URL LIST contents. Once this is set up, you only have to update this one reference with each copy you make, which dramatically reduces the number of updates needed: IMPORTRANGE() points at a specific URL, so each newly generated URL LIST has a new URL that the copied document containing your hyperlinks and IMPORTRANGE() needs to point to. Hopefully that makes sense.
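For reference, the import formula would look something like this (the file ID and range are placeholders; the spreadsheet generated by the script below has a single default sheet named Sheet1):
=IMPORTRANGE("https://docs.google.com/spreadsheets/d/URL_LIST_FILE_ID", "Sheet1!A1:B100")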
Next, your hyperlinks will need a formula along the lines of =HYPERLINK(VLOOKUP(A1,'URL LIST'!$A$1:$B$10,2,FALSE)) to grab the imported URLs. It's important to indicate that the lookup range is not sorted (FALSE), because the order in which the script lists the documents and URLs may change depending on how the folder is sorted when the script runs; this also means you don't need to keep the list sorted. You can then copy the formula to each cell that needs a hyperlink.
Equally important: your VLOOKUP() search key must match exactly how the name appears in your URL LIST.
This method reduced updating the hyperlinks from nine steps down to one: updating the IMPORTRANGE() each time I make copies.
I hope this helps you or someone else!
Copy and paste the following script into your script editor:
// Replace 'your-folder' below with the name of the folder you want to list
function listFolderContents() {
  var foldername = 'your-folder';
  var folderlisting = 'URL LIST';
  // Take the first folder with that name and iterate over its files
  var folders = DriveApp.getFoldersByName(foldername);
  var folder = folders.next();
  var contents = folder.getFiles();
  // Write each file's name and URL into a new spreadsheet
  var ss = SpreadsheetApp.create(folderlisting);
  var sheet = ss.getActiveSheet();
  sheet.appendRow(['name', 'link']);
  while (contents.hasNext()) {
    var file = contents.next();
    sheet.appendRow([file.getName(), file.getUrl()]);
  }
}
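If you want to experiment with automating the Slides links themselves (which the approach above deliberately avoids), a rough, untested sketch along these lines might be a starting point. It assumes a mapping spreadsheet whose first sheet holds the original URL in column A and the corresponding copy's URL in column B; the IDs are placeholders, links are matched by exact URL, and grouped elements and tables are not handled:
function relinkSlidesInFolder() {
  // [[originalUrl, copyUrl], ...] read from the mapping spreadsheet
  var map = SpreadsheetApp.openById('MAPPING_SPREADSHEET_ID')
      .getSheets()[0].getDataRange().getValues();
  var files = DriveApp.getFolderById('COPIED_FOLDER_ID')
      .getFilesByType(MimeType.GOOGLE_SLIDES);
  while (files.hasNext()) {
    var deck = SlidesApp.openById(files.next().getId());
    deck.getSlides().forEach(function (slide) {
      slide.getPageElements().forEach(function (el) {
        if (el.getPageElementType() !== SlidesApp.PageElementType.SHAPE) return;
        // Walk the text runs of each shape and swap any matching link URLs
        el.asShape().getText().getRuns().forEach(function (run) {
          var link = run.getTextStyle().getLink();
          if (!link || !link.getUrl()) return;
          map.forEach(function (pair) {
            if (link.getUrl() === pair[0]) run.getTextStyle().setLinkUrl(pair[1]);
          });
        });
      });
    });
    deck.saveAndClose();
  }
}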
I'm trying to select a few csv files that are in my Google drive and convert them to Google Sheets files. I know that I can do this one by one using the Open option, but since I have hundreds of files, I'm looking for a way to do this using multi-select and convert.
I know two ways to convert multiple files, and "multi-select and convert" is not one of them.
Reupload
Download the files, then upload again, having first enabled "convert uploads" in Drive settings.
Script
Using an Apps Script, one can convert CSV files to Google Spreadsheet format automatically. First, move the files to be converted into a folder and take note of its ID (the part of the shareable link after ?id=, or after /folders/ in the URL). Use the folder ID in the following script.
function convert() {
  var folder = DriveApp.getFolderById('folder id here');
  var files = folder.getFiles();
  while (files.hasNext()) {
    var file = files.next();
    // Uses the Advanced Drive Service; {convert: true} creates a Google Sheets copy of the CSV
    Drive.Files.copy({}, file.getId(), {convert: true});
  }
}
Follow the instructions to enable Advanced Drive Service, which is used by the script. Finally, run it. It will create converted copies of all files in the given folder.
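As an aside (not part of the original answer), if you would rather not enable the advanced service, a plain Apps Script sketch along these lines should also work; it is untested and assumes the CSV files are rectangular and small enough to read into memory:
function convertWithoutAdvancedService() {
  var folder = DriveApp.getFolderById('folder id here');
  var files = folder.getFilesByType(MimeType.CSV);
  while (files.hasNext()) {
    var file = files.next();
    // Parse the CSV text into a 2D array and write it into a new spreadsheet
    var data = Utilities.parseCsv(file.getBlob().getDataAsString());
    if (data.length === 0) continue;
    var ss = SpreadsheetApp.create(file.getName().replace(/\.csv$/i, ''));
    ss.getActiveSheet().getRange(1, 1, data.length, data[0].length).setValues(data);
    // Move the converted copy into the same folder as the originals
    DriveApp.getFileById(ss.getId()).moveTo(folder);
  }
}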
I have the following problem.
I have two files:
Source file - https://docs.google.com/spreadsheets/d/15zIdIeYFlca-SQ0ryl89oX_tbGjO_6cipqHkkxog7ho/edit#gid=0
Target file - https://docs.google.com/spreadsheets/d/1gGExeO2x8pqNzTPRvel8p-wwe-BDkdF5c6BFA8j_Py0/edit#gid=0
In the source file there is a script (a function named onEdit, triggered by the onEdit event). When you change the value of cell R3 (source file) to a different "Advisor", the whole row should be copied to the target file, but sometimes it works and sometimes it doesn't. If you change the advisor field once and it works, try a couple more times; before long there will be a permission problem.
When it's not working, I get a message saying there is a permission problem executing the function getFileById, which is used in the following line:
var file = DriveApp.getFileById('1gGExeO2x8pqNzTPRvel8p-wwe-BDkdF5c6BFA8j_Py0');
Any ideas how to solve the problem, and why it sometimes works fine?
Scripts using a 'simple' trigger can modify the file they are bound to, but cannot access other files because that would require authorization.
See here to learn more about the restrictions on simple triggers.
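A common workaround (not mentioned in the guide above, so treat it as a suggestion): rename the function and install an edit trigger for it; installable triggers run with full authorization and can therefore open other files. A minimal sketch:
// Run this once manually to install the trigger (it will prompt for authorization)
function installEditTrigger() {
  ScriptApp.newTrigger('onEditInstalled')   // 'onEditInstalled' is a hypothetical handler name
      .forSpreadsheet(SpreadsheetApp.getActive())
      .onEdit()
      .create();
}
function onEditInstalled(e) {
  // Same logic as the original onEdit, but DriveApp.getFileById is now allowed
  var file = DriveApp.getFileById('1gGExeO2x8pqNzTPRvel8p-wwe-BDkdF5c6BFA8j_Py0');
  // ...
}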
You can make sure you have all the permissions following the next steps:
Open the script project. At the left, click Project Settings
Select the Show "appsscript.json" manifest file in editor checkbox.
At the left, click Editor <>.
At the left, click the appsscript.json file.
Locate the top-level field labeled oauthScopes. If it's not present, you can add it.
The oauthScopes field specifies an array of strings. To set the scopes your project uses, replace the contents of this array with the scopes you want it to use. For example:
{
  "oauthScopes": [
    "https://www.googleapis.com/auth/spreadsheets.readonly",
    "https://www.googleapis.com/auth/userinfo.email"
  ]
}
Retrieved from: https://developers.google.com/apps-script/concepts/scopes