Parse XML Feed via Google Apps Script ("Cannot read property 'getChildren' of undefined") - parsing

I need to parse a Google Alert RSS Feed with Google Apps Script.
Google Alerts RSS-Feed
I found a script that should do the job, but I can't get it working with Google's RSS feed.
The feed looks like this:
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:idx="urn:atom-extension:indexing">
<id>tag:google.com,2005:reader/user/06807031914929345698/state/com.google/alerts/10604166159629661594</id>
<title>Google Alert – garbe industrial real estate</title>
<link href="https://www.google.com/alerts/feeds/06807031914929345698/10604166159629661594" rel="self"/>
<updated>2022-03-17T19:34:28Z</updated>
<entry>
<id>tag:google.com,2013:googlealerts/feed:10523743457612307958</id>
<title type="html"><b>Garbe Industrial</b> plant Multi-User-Immobilie in Ludwigsfelde - <b>Property</b> Magazine</title>
<link href="https://www.google.com/url?rct=j&sa=t&url=https://www.property-magazine.de/garbe-industrial-plant-multi-user-immobilie-in-ludwigsfelde-117551.html&ct=ga&cd=CAIyGWRmNjU0ZGNkMzJiZTRkOWY6ZGU6ZGU6REU&usg=AFQjCNENveXYlfrPc7pZTltgXY8lEAPe4A"/>
<published>2022-03-17T19:34:28Z</published>
<updated>2022-03-17T19:34:28Z</updated>
<content type="html">Die <b>Garbe Industrial Real Estate</b> GmbH startet ihr drittes Neubauprojekt in der Metropolregion Berlin/Brandenburg. Der Projektentwickler hat sich ...</content>
<author>
...
</feed>
I want to extract entry -> id, title, link, updated, content.
I used this script:
function ImportFeed(url, n) {
  var res = UrlFetchApp.fetch(url).getContentText();
  var xml = XmlService.parse(res);
  //var item = xml.getRootElement().getChild("channel").getChildren("item")[n - 1].getChildren();
  var item = xml.getRootElement().getChildren("entry")[n - 1].getChildren();
  var values = item.reduce(function(obj, e) {
    obj[e.getName()] = e.getValue();
    return obj;
  }, {});
  return [[values.id, values.title, values.link, values.updated, values.content]];
}
I modified this part, but all I got was "TypeError: Cannot read property 'getChildren' of undefined":
//var item = xml.getRootElement().getChild("channel").getChildren("item")[n - 1].getChildren();
var item = xml.getRootElement().getChildren("entry")[n - 1].getChildren();
Any idea is welcome!

In your situation, how about the following modified script?
Modified script:
function SAMPLE(url, n = 1) {
  var res = UrlFetchApp.fetch(url).getContentText();
  var root = XmlService.parse(res.replace(/&/g, "&amp;")).getRootElement();
  var ns = root.getNamespace();
  var entries = root.getChildren("entry", ns);
  if (!entries || entries.length == 0) return "No values";
  var header = ["id", "title", "link", "updated", "content"];
  var values = header.map(f => f == "link" ? entries[n - 1].getChild(f, ns).getAttribute("href").getValue().trim() : entries[n - 1].getChild(f, ns).getValue().trim());
  return [values];
}
In this case, when you use getChild and getChildren, please use the namespace. I thought that this might be the reason for your issue.
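For example, a quick way to see why the namespace matters (the Atom namespace URI is the xmlns value shown in the feed above):

var root = XmlService.parse(res).getRootElement();
var atom = XmlService.getNamespace("http://www.w3.org/2005/Atom");
var none = root.getChildren("entry");          // [] - without the namespace nothing is found, so [n - 1] is undefined
var entries = root.getChildren("entry", atom); // the <entry> elements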
From your script, I guessed that you might be using it as a custom function. In that case, please rename the function from ImportFeed to something else, because IMPORTFEED is a built-in function of Google Sheets. In this sample, SAMPLE is used.
If you want to change the columns, please modify header.
In this sample, the default value of n is 1. In this case, the 1st entry is retrieved.
With this script, for example, you can put =SAMPLE("URL", 1) into a cell as a custom function, and the resulting values are returned.
Note:
If the modified script above is not the direct solution to your issue, can you provide a sample value of res? With that, I would like to modify the script.
As additional information, when you want to put all the values into the sheet by executing the script from the script editor, you can also use the following script.
function myFunction() {
  var url = "###"; // Please set URL.
  var res = UrlFetchApp.fetch(url).getContentText();
  var root = XmlService.parse(res.replace(/&/g, "&amp;")).getRootElement();
  var ns = root.getNamespace();
  var entries = root.getChildren("entry", ns);
  if (!entries || entries.length == 0) return "No values";
  var header = ["id", "title", "link", "updated", "content"];
  var values = entries.map(e => header.map(f => f == "link" ? e.getChild(f, ns).getAttribute("href").getValue().trim() : e.getChild(f, ns).getValue().trim()));
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Sheet1"); // Please set the sheet name.
  sheet.getRange(sheet.getLastRow() + 1, 1, values.length, values[0].length).setValues(values);
}
References:
XML Service
map()

Related

Trying to send email from Google Sheet to Gmail. Tab name is Analysis

Need your help and expert guidance, as I need my Google Sheet to send an email every time a condition becomes true in column "K", which has "Subject" as its header, in the tab named "Analysis". Whenever I run the code below, I get the errors shown in the last three lines. Please explain in the simplest possible way and not too technically.
function sendEmail(){
var ss = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Analysis");
var range = ss.getRange("J2:J35");
range.clear();
var n = ss.getLastRow();
for (var i = 2;i<n+1; i++){
var emailRequired = ss.getRange(i,9).getValue();
var subject = ss.getRange(i,11).getvalue();
var message = ss.getRange(i,12).getvalue();
if (emailRequired=="YES"){
MailApp.sendEmail("ksm272364#gmail.com",subject,message);
ss.getRange(i,10).setvalue("YES");
}
}
}
1:21:51 PM Error
TypeError: ss.getRange(...).getvalue is not a function
sendEmail # Code.gs:8
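For reference, the error points at getvalue: Apps Script method names are case-sensitive, so getvalue and setvalue need to be getValue and setValue. A minimal corrected sketch, keeping the same columns and the obfuscated address from the question, might look like this:

function sendEmail() {
  var ss = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Analysis");
  ss.getRange("J2:J35").clear();
  var n = ss.getLastRow();
  for (var i = 2; i <= n; i++) {
    var emailRequired = ss.getRange(i, 9).getValue();  // column I
    var subject = ss.getRange(i, 11).getValue();       // column K ("Subject")
    var message = ss.getRange(i, 12).getValue();       // column L
    if (emailRequired == "YES") {
      MailApp.sendEmail("ksm272364#gmail.com", subject, message); // address as obfuscated in the question
      ss.getRange(i, 10).setValue("YES");              // column J
    }
  }
}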

Import JSON API into Google Sheets

I need to import some information from a JSON API URL into Google Sheets.
This is one example:
https://api-apollo.pegaxy.io/v1/game-api/race/details/69357391
I've been successful in importing basic information using IMPORTJSON available on Github:
https://github.com/bradjasper/ImportJSON/
But now I am faced with a type of information (is it an object? an array?) which seems to be different from the usual and I find myself unable to import this.
Here is a piece of it:
{
"id": 969228010,
"raceId": 69357391,
"pegaId": 20042,
"gate": 8,
"pegaAttributes": "{\"id\":20042,\"name\":\"Bajaj\",\"ownerId\":623299,\"raceClass\":1,\"races\":1369,\"win\":504,\"lose\":865,\"energy\":18,\"gender\":\"Male\",\"bloodLine\":\"Campona\",\"breedType\":\"Legendary\",\"speed\":4.95,\"strength\":0.33,\"wind\":3.36,\"water\":1.84,\"fire\":8.83,\"lighting\":6.93,\"position\":4000,\"finished\":true,\"raceTime\":35.855,\"result\":8,\"gate\":8,\"lastSpeed\":22.721521955555556,\"stage\":4,\"isTopSpeedReached\":false,\"bonusStage\":false,\"topSpeed\":22.721521955555556,\"s0\":0,\"j0\":-0.02,\"a0\":0.4982185622222222,\"v0\":20.127527583333332,\"t0\":179.60000000000002,\"gears\":{},\"pb\":0}"**,
"position": 11,
"raceTime": 35.855,
"reward": 0
},
So using IMPORTJSON if I wanted to simply import the "raceId" element I'd go about doing this:
=ImportJSON("https://api-apollo.pegaxy.io/v1/game-api/race/details/69357391", "/race/registers/raceId", "noHeaders")
But when trying to import any information from within pegaAttributes, IMPORTJSON is unable to recognize it as separate fields. The best I can do is import the whole block like so:
=ImportJSON("https://api-apollo.pegaxy.io/v1/game-api/race/details/69357391", "/race/registers/pegaAttributes", "noHeaders")
So I need to import some of the information that comes after "pegaAttributes" and inside the brackets { }. For example, how can I import attributes such as raceTime, topSpeed, lastSpeed and so on into Google Sheets?
Could anyone provide any pointers on how to do this? Thank you.
Try this (you will have to apply JSON.parse on the pegaAttributes element, which is itself JSON):
=importDataJSON(url,"id|position|raceTime","name|raceTime|topSpeed|lastSpeed")
with
function importDataJSON(url, items1, items2) {
  let result = []
  result = [[items1.split('|'), items2.split('|')].flat()]
  const obj = JSON.parse(UrlFetchApp.fetch(url).getContentText())
  obj.race.registers.forEach(o => {
    let prov = []
    items1.split('|').forEach(item1 => prov.push(o[item1]))
    var pegaAttributes = JSON.parse(o.pegaAttributes)
    items2.split('|').forEach(item2 => prov.push(pegaAttributes[item2]))
    result.push(prov)
  })
  return result
}
with as parameters:
url
items1 (level 1) separated by |
items2 (level 2, under pegaAttributes) separated by |
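For example, with the formula above the first returned row is the header id, position, raceTime, name, raceTime, topSpeed, lastSpeed, and each following row holds those values for one element of race.registers, the last four coming from the parsed pegaAttributes string.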
new edit
=importDataJSON(url,"totalReward|length","id|position|raceTime","name|raceTime|topSpeed|lastSpeed")
with
function importDataJSON(url, items0, items1, items2) {
  let result = []
  result = [[items0.split('|'), items1.split('|'), items2.split('|')].flat()]
  const obj = JSON.parse(UrlFetchApp.fetch(url).getContentText())
  let prov = []
  items0.split('|').forEach(item0 => prov.push(obj.race[item0]))
  result.push(prov)
  obj.race.registers.forEach(o => {
    let prov = []
    items0.split('|').forEach(item0 => prov.push(''))
    items1.split('|').forEach(item1 => prov.push(o[item1]))
    var pegaAttributes = JSON.parse(o.pegaAttributes)
    items2.split('|').forEach(item2 => prov.push(pegaAttributes[item2]))
    result.push(prov)
  })
  return result
}
You have to parse it twice, as that attribute is an object stored as text. I think using a custom formula might not be the easiest route, since Google Apps Script can do this for you pretty cleanly. Consider using the standard JSON.parse() function.
The function below got me the values you were looking for (checked in the debugger).
function getJSONData() {
  const zURL = 'https://api-apollo.pegaxy.io/v1/game-api/race/details/69357391';
  var response = UrlFetchApp.fetch(zURL);
  var cleanedResponse = JSON.parse(response.getContentText());
  var theRace = cleanedResponse['race'];
  var theRegisters = theRace['registers'];
  var aRegister = theRegisters[0];
  var oneID = aRegister.id;
  var aGate = aRegister.gate;
  var aPega = aRegister.pegaAttributes;
  var cleanedPega = JSON.parse(aPega); // pegaAttributes is a JSON string nested inside the JSON
  var zTopSpeed = cleanedPega.topSpeed;
}
If you debug this function and check the variables on the right, you should be able to get everything you need. You'll have to find a way to get it back into Sheets, but the values are available.
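If you do want the script itself to push those values into the sheet, a minimal sketch (the sheet name 'Sheet1' and the chosen columns are only assumptions) could be:

function writeRaceData() {
  const url = 'https://api-apollo.pegaxy.io/v1/game-api/race/details/69357391';
  const data = JSON.parse(UrlFetchApp.fetch(url).getContentText());
  // One row per register: id, gate, and topSpeed from the nested JSON string.
  const rows = data.race.registers.map(r => {
    const pega = JSON.parse(r.pegaAttributes); // pegaAttributes is itself a JSON string
    return [r.id, r.gate, pega.topSpeed];
  });
  const sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Sheet1'); // assumed sheet name
  sheet.getRange(sheet.getLastRow() + 1, 1, rows.length, rows[0].length).setValues(rows);
}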
Updated
A request was made to figure out how this could be run as a Sheets function. Leveraging Mike Steelson's approach, and presuming what is needed as far as races go, here's a function that could be used. Just paste the URL into the formula.
function getDataMyJSON(theURL) {
  const data = JSON.parse(UrlFetchApp.fetch(theURL).getContentText())
  const items = ['raceTime','topSpeed','lastSpeed']
  let result = []
  data.race.registers.forEach(x => {
    let prov = []
    prov.push(x.raceId)
    var p = JSON.parse(x.pegaAttributes)
    items.forEach(i => prov.push(p[i]))
    result.push(prov)
  })
  return result;
}
So then put the URL in the formula and the resulting rows are returned to the sheet.

Extract visual text from Google Classic Site page using Apps Script in Google Sheets

I have about 5,000 Classic Google Sites pages that I need to have a Google Apps script under Google Sheets examine one by one, extract the data, and enter that data into the Google Sheet row by row.
I wrote an Apps Script that runs down one of the sheets, called "Pages", which contains the exact URL of each page row by row, while doing the extraction.
That in turn would get the HTML contents, and I would then use regex to extract the data I want, which is the values to the right of each of the following...
Job name
Domain owner
Urgency/Impact
ISOC instructions
It would then write that data under the proper columns in the Google Sheet.
This worked except for one big problem: the HTML is not consistent. Also, IDs and tags were not used, so it really makes trying to do this through SitesApp.getPageByUrl not possible.
Here is the code I came up with for that attempt.
function startCollection () {
var masterList = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Pages");
var startRow = 1;
var lastRow = masterList.getLastRow();
for(var i = startRow; i <= lastRow; i++) {
var target = masterList.getRange("A"+i).getValue();
sniff(target)
};
}
function sniff (target) {
var pageURL = target;
var pageContent = SitesApp.getPageByUrl(pageURL).getHtmlContent();
Logger.log("Scraping: ", target);
// Extract the job name
var JobNameRegExp = new RegExp(/(Job name:<\/b><\/td><td style='text-align:left;width:738px'>)(.*?)(\<\/td>)/m);
var JobNameValue = JobNameRegExp.exec(pageContent);
var JobMatch = JobNameValue[2];
if (JobMatch == null){
JobMatch = "NOTE FOUND: " + pageURL;
}
// Extract domain owner
var DomainRegExp = new RegExp(/(Domain owner:<\/b><\/td><td style='text-align:left;width:738px'><span style='font-family:arial,sans,sans-serif;font-size:13px'>)(.*?)(<\/span>)/m);
var DomainValue = DomainRegExp.exec(pageContent);
Logger.log("DUMP1:",SitesApp.getPageByUrl(pageURL).getHtmlContent());
var DomainMatch = DomainValue[2];
if (JobMatch == null){
DomainMatch = "N/A";
}
// Extract Urgency & Impact
var UrgRegExp = new RegExp(/(Urgency\/Impact:<\/b><\/td><td style='text-align:left;width:738px'>)(.*?)(<\/td>)/m);
var UrgValue = UrgRegExp.exec(pageContent);
var UrgMatch = UrgValue[2];
if (JobMatch == null){
UrgMatch = "N/A";
}
// Extract ISOC Instructions
var ISOCRegExp = new RegExp(/(ISOC instructions:<\/b><\/td><td style='text-align:left;width:738px'>)(.*?)(<\/td>)/m);
var ISOCValue = ISOCRegExp.exec(pageContent);
var ISOCMatch = ISOCValue[2];
if (JobMatch == null){
ISOCMatch = "N/A";
}
// Add record to sheet
var row_data = {
Job_Name:JobMatch,
Domain_Owner:DomainMatch,
Urgency_Impact:UrgMatch,
ISOC_Instructions:ISOCMatch,
};
insertRowInTracker(row_data)
}
function insertRowInTracker(rowData) {
var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Jobs");
var rowValues = [];
var columnHeaders = sheet.getDataRange().offset(0, 0, 1).getValues()[0];
Logger.log("Writing to the sheet: ", sheet.getName());
Logger.log("Writing Row Data: ", rowData);
columnHeaders.forEach((header) => {
rowValues.push(rowData[header]);
});
sheet.appendRow(rowValues);
}
So for my next idea, I have thought about using UrlFetchApp.fetch. The one problem I have, though, is that these pages on that Classic Google Site sit behind a domain that is not shared with the public. While SitesApp.getPageByUrl has the script ask for authorization and works, UrlFetchApp.fetch does not, meaning that when it tries to call the page directly, it just gets the Google login page back.
I might be able to work around this and turn them public, but I am still working on that.
I am running out of ideas fast on this one and hoping there is another way I have not thought of or seen. What I would really like to do is not even mess with the HTML content. I would like to use apps script under the Google Sheet to just look at the actual data presented on the page and then match a text and capture the value to the right of it.
For example have it go down the list of URLS on sheet called "Pages" and do the following for each page:
Find the following values:
Find the text "Job name:", capture the text to the right of it.
Find the text "Domain owner:", capture the text to the right of it.
Find the text "Urgency/Impact:", capture the text to the right of it.
Find the text "ISOC instructions:", capture the text to the right of it.
Write those values to a new row in the sheet called "Jobs".
Then move on to the next URL in the sheet called "Pages" and repeat until all rows in "Pages" have been completed.
Example of the data I want to capture
I have created an exact copy of one of the pages for testing and is public.
https://sites.google.com/site/2020dump/test
An inspect example
The raw HTML of the table which contains all the data I am after.
<tr>
<td style="width:190px"><b>Domain owner:</b></td>
<td style="text-align:left;width:738px">IT.FinanceHRCore </td>
</tr>
<tr>
<td style="width:190px"> <b>Urgency/Impact:</b></td>
<td style="text-align:left;width:738px">Medium (3 - Urgency, 3 - Impact) </td>
</tr>
<tr>
<td style="width:190px"><b>ISOC instructions:</b></td>
<td style="text-align:left;width:738px">None </td>
</tr>
<tr>
<td style="width:190px"></td>
<td style="text-align:left;width:738px"> </td>
</tr>
</tbody>
</table>
Any examples of how I can accomplish this? I am not sure how from an apps script perspective to go about not looking at HTML and only looking at the actual data displayed on the page. For example looking for the text "Job name:" and then grabbing the text to the right of it.
The goal at the end of the day is to transfer the data from each page into one big Google Sheet so we can kill off the Google Classic Site.
I have been scraping data with Apps Script using regular expressions for a while, but I will say that the formatting of this page does make it difficult.
A lot of the pages that I scrape have tables in them, so I made a helper script that will go through, clean them up, and turn them into arrays. Copy and paste the script below into a new Google Script file:
function scrapetables(html,startingtable,extractlinksTF) {
var totaltables = /<table.*?>/g
var total = html.match(totaltables)
var tableregex = /<table[\s\S]*?<\/table>/g;
var tables = html.match(tableregex);
var arrays = []
var i = startingtable || 0;
while (tables[i]) {
var thistable = []
var rows = tables[i].match(/<tr[\s\S]*?<\/tr>/g);
if(rows) {
var j = 0;
while (rows[j]) {
var thisrow = tablerow(rows[j])
if(thisrow.length > 2) {
thistable.push(tablerow(rows[j]))
} else {thistable.push(thisrow)}
j++
}
arrays.push(thistable);
}
i++
}
return arrays;
}
function removespaces(string) {
  // Strip newlines/tabs and decode the non-breaking spaces left over from the HTML.
  var newstring = string.trim().replace(/[\r\n\t]/g,'').replace(/&nbsp;/g,' ');
  return newstring
}
function tablerow(row,extractlinksTF) {
var cells = row.match(/<t[dh][\s\S]*?<\/t[dh]>/g);
var i = 0;
var thisrow = [];
while (cells[i]) {
thisrow.push(removehtmlmarkup(cells[i],extractlinksTF))
i++
}
return thisrow
}
function removehtmlmarkup(string,extractlinksTF) {
var string2 = removespaces(string.replace(/<\/?[A-Za-z].*?>/g,''))
var obj = {string: string2}
//check for link
if(/<a href=.*?<\/a>/.test(string)) {
obj['link'] = /<a href="(.*?)"/.exec(string)[1]
}
if(extractlinksTF) {
return obj;
} else {return string2}
}
Running this got close, but at the moment, this doesn't handle nested tables well so I cleaned up the input by sending only the table that we want by isolating it with a regular expression:
var tablehtml = /(<table[\s\S]{200,1000}Job Name[\s\S]*?<\/table>)/im.exec(html)[1]
Your parent function will then look like this:
function sniff(pageURL) {
var html= SitesApp.getPageByUrl(pageURL).getHtmlContent();
var tablehtml = /(<table[\s\S]{200,1000}Job Name[\s\S]*?<\/table>)/im.exec(html)[1]
var table = scrapetables(tablehtml);
var row_data =
{
Job_Name: na(table[0][3][1]), //indicates the 1st table in the html, row 4, cell 2
Domain_Owner: na(table[0][4][1]), // indicates 1st table in the html, row 5, cell 2 etc...
Urgency_Impact: na(table[0][5][1]),
ISOC_Instructions: na(table[0][6][1])
}
insertRowInTracker(row_data)
}
function na(string) {
if(string) {
return string
} else { return 'N/A'}
}

Google script: search and replace the url of linked text in a Doc (not sheet)

I am trying to search-and-replace linked text from an old url to a new url.
It is not working and I have spent hours and hours. If I remove the "if (found)" it gives me "TypeError: Cannot read property 'getElement' of null" even though my files have text that is linked to this old_url.
Please, help me.
function myFunction() {
var old_url ="http://hurlx1.com";
var new_url ="http://urlxa.com";
var files = DriveApp.getFolderById("my folder id").getFilesByType(MimeType.GOOGLE_DOCS);
while (files.hasNext()) {
var file = files.next();
var doc = DocumentApp.openById(file.getId());
found=doc.getBody().findText(old_url);
if (found) {
var link_element = found.getElement().asText();
var start = found.getStartOffset();
var end = found.getEndOffsetInclusive();
var correct_link = link_element.getText().slice(start, end);
link_element.setLinkUrl(start, end, correct_link);
}
}
}
I believe your situation and goal are as follows.
In your Google Document, either:
The text and the hyperlink are both old_url.
The hyperlink is old_url, but the text is different from old_url.
You want to update old_url to new_url using Google Apps Script.
For this, how about this answer?
Modification points:
About your error message: when the text of old_url is not found in the Google Document by found=doc.getBody().findText(old_url);, found becomes null even when old_url is set as the hyperlink. This is because findText searches the text of the document body and cannot search the hyperlinks set on the text. I think that this is the reason for your issue.
In your script, var new_url ="http://urlxa.com"; is declared, but when the link is set, correct_link is used, as in link_element.setLinkUrl(start, end, correct_link);. Because of this, new_url is never set.
When you want to update the text http://hurlx1.com to the new_url of var new_url ="http://urlxa.com";, it is necessary to also modify the text itself.
In your script, only the 1st occurrence of old_url is updated. If there are several occurrences of old_url in the Document, it is necessary to update them in a loop.
Specification of the modified script:
This modified script can be used for the following patterns.
The text and the hyperlink are both old_url.
In this case, the text value of old_url is also updated to new_url.
The hyperlink is old_url, but the text is different from old_url.
In this case, only the hyperlink is updated to new_url.
There are several texts with the hyperlink of old_url in the Google Document.
Modified script:
function myFunction() {
var old_url ="http://hurlx1.com";
var new_url ="http://urlxa.com";
var files = DriveApp.getFolderById("my folder id").getFilesByType(MimeType.GOOGLE_DOCS);
while (files.hasNext()) {
var file = files.next();
var doc = DocumentApp.openById(file.getId());
var body = doc.getBody();
// The following script is used for the situation that the text and hyperlink are the same with `old_url`.
var found = body.findText(old_url);
while (found) {
var link_element = found.getElement().asText();
var start = found.getStartOffset();
var end = found.getEndOffsetInclusive();
var correct_link = link_element.getText().slice(start, end);
link_element.setLinkUrl(start, end, new_url).replaceText(old_url, new_url);
found = body.findText(old_url, found);
}
// The following script is used for the situation that although the hyperlink is `old_url`, the text is different from `old_url`.
var text = body.editAsText();
for (var i = 0; i < text.getText().length; i++) {
if (text.getLinkUrl(i) == old_url) {
text.setLinkUrl(i, i + 1, new_url);
}
}
}
}
References:
replaceText(searchPattern, replacement)
findText(searchPattern, from)
editAsText()

How to Print sheet/range using .gs script in Google Sheets?

I am trying to create a script in Google Sheets that selects a range and prints it. I am trying to print some information based on some parameters. I have the following script that sets the desired range, but I do not see a way to print it from a script.
function printInvoice() {
var ss = SpreadsheetApp.getActiveSpreadsheet();
var sheet = ss.getActiveSheet();
var range = sheet.getRange("A1:H46");
range.activate();
}
Any suggestions? Thanks!
You can use the following script:
var PRINT_OPTIONS = {
'size': 7, // paper size. 0=letter, 1=tabloid, 2=Legal, 3=statement, 4=executive, 5=folio, 6=A3, 7=A4, 8=A5, 9=B4, 10=B
'fzr': false, // repeat row headers
'portrait': true, // false=landscape
'fitw': true, // fit window or actual size
'gridlines': false, // show gridlines
'printtitle': false,
'sheetnames': false,
'pagenum': 'UNDEFINED', // CENTER = show page numbers / UNDEFINED = do not show
'attachment': false
}
var PDF_OPTS = objectToQueryString(PRINT_OPTIONS);
function onOpen(e) {
SpreadsheetApp.getUi().createMenu('Print...').addItem('Print selected range', 'printSelectedRange').addToUi();
}
function printSelectedRange() {
SpreadsheetApp.flush();
var ss = SpreadsheetApp.getActiveSpreadsheet();
var sheet = ss.getActiveSheet();
var range = sheet.getActiveRange();
var gid = sheet.getSheetId();
var printRange = objectToQueryString({
'c1': range.getColumn() - 1,
'r1': range.getRow() - 1,
'c2': range.getColumn() + range.getWidth() - 1,
'r2': range.getRow() + range.getHeight() - 1
});
var url = ss.getUrl().replace(/edit$/, '') + 'export?format=pdf' + PDF_OPTS + printRange + "&gid=" + gid;
var htmlTemplate = HtmlService.createTemplateFromFile('js');
htmlTemplate.url = url;
SpreadsheetApp.getUi().showModalDialog(htmlTemplate.evaluate().setHeight(10).setWidth(100), 'Print range');
}
function objectToQueryString(obj) {
return Object.keys(obj).map(function(key) {
return Utilities.formatString('&%s=%s', key, obj[key]);
}).join('');
}
You will also need to create an html file in your project (File>New>HTML File) with the name js, and paste in the following code:
<script>
window.open('<?=url?>', '_blank', 'width=800, height=600');
google.script.host.close();
</script>
This will create a button in your Sheets menu that will open a PDF with the selected range. You can modify some settings, such as the print orientation, the paper size, or whether to show the gridlines, at the top of the script. If you still want to automatically print the ranges without having to manually go through the print dialog, you can either:
Send the document to your printer using the GmailApp class, if your printer supports such functionality (see the sketch after this list).
Use Google Cloud Print. The following blog post may help you with that: https://ctrlq.org/code/20061-google-cloud-print-with-apps-script
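For the first option, a rough sketch (the printer's email address is a placeholder you would replace) is to fetch the same export URL as a PDF blob and mail it:

function emailRangeAsPdf(url) {
  // url is the export URL built in printSelectedRange above; the OAuth token authorizes the fetch.
  var blob = UrlFetchApp.fetch(url, {
    headers: { Authorization: 'Bearer ' + ScriptApp.getOAuthToken() }
  }).getBlob().setName('range.pdf');
  // Many network printers accept jobs sent to a dedicated email address.
  GmailApp.sendEmail('printer@example.com', 'Print job', 'Range attached.', {
    attachments: [blob]
  });
}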
I stumbled on your code quite by chance from an "unallowed" question on Stack Overflow, and it actually seems to be exactly what I want - I could not get any detail on how to print from Apps Script for Sheets.
I have been trying it out, but it falls over at this line in your sample:
"var htmlTemplate = HtmlService.createTemplateFromFile('js');"
where the service cannot find 'js'. I'm afraid I do not understand what an HTML template is anyway - are you able to explain?
