How do I program Exponential Backoff into a Google Sheets Script?

How would one program Exponential Backoff into this script so that when I run into an error it will attempt to run again?
Errors being encountered:
There are too many scripts running simultaneously for this Google user account.
Too many simultaneous invocations: Spreadsheets
Exceeded maximum execution time
Exception: Too many simultaneous invocations: Spreadsheets at onEdit(Code:3:9)
Service Spreadsheets failed while accessing document with id __________.
Script:
function onEdit() {
  var s = SpreadsheetApp.getActiveSheet();
  if (s.getName() == "FORD") {
    var r = s.getActiveCell();
    if (r.getColumn() == 3) {
      var nextCell = r.offset(0, 19);
      if (nextCell.getValue() === '')
        nextCell.setValue(new Date()).setNumberFormat('MM/dd/yyyy HH:mm:ss');
    }
    if (r.getColumn() == 7) {
      var nextCell = r.offset(0, 16);
      if (nextCell.getValue() === '')
        nextCell.setValue(new Date()).setNumberFormat('MM/dd/yyyy HH:mm:ss');
    }
  }
}
Any other advice to improve my script is always appreciated! =)

For the errors you mention, exponential backoff would not help: they are quota errors rather than transient failures, so retrying only adds more load.
Check out the Apps Script quota limitations.
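That said, if you do run into genuinely transient errors and still want a retry wrapper, here is a minimal sketch in plain JavaScript. The function name withBackoff and the recorded-delays return shape are mine for illustration; in Apps Script you would call Utilities.sleep(delayMs) where the comment indicates instead of collecting the delays.

```javascript
// Minimal exponential-backoff wrapper (sketch, not Apps Script-specific).
// `fn` is retried up to `maxRetries` times; the delay doubles per failure.
function withBackoff(fn, maxRetries, baseDelayMs) {
  var delays = []; // recorded for illustration instead of actually sleeping
  for (var attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return { value: fn(), delays: delays };
    } catch (err) {
      if (attempt === maxRetries) throw err; // out of retries: rethrow
      var delayMs = baseDelayMs * Math.pow(2, attempt);
      delays.push(delayMs); // in Apps Script: Utilities.sleep(delayMs)
    }
  }
}

// Example: a function that fails twice, then succeeds on the third call.
var calls = 0;
var result = withBackoff(function () {
  calls++;
  if (calls < 3) throw new Error('Too many simultaneous invocations');
  return 'ok';
}, 5, 100);
```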

Related

Stream of millions of objects takes too much memory

I'm generating a large number of coordinates (each made of 3 numbers) within a geographical area. However, using Streams (which should be much more memory-efficient than Lists) fills up the app's memory very quickly, as can be seen in this screenshot from Observatory.
I need a structure where events can go in, be read out one by one, and be removed from the structure as they are read. As far as I understand, that is what a Stream is: when you add a value, the old one is removed.
Unfortunately, this doesn't appear to be happening. Instead, the stream just grows larger and larger - or at least something reading it does, even though I only run the .length method on the returned stream.
Here's the function that starts the Isolate that returns the stream of coordinate tiles. I'll omit the actual generator, as it's not important: it just sends a Coord to the SendPort.
static Stream<Coords<num>> _generateTilesComputer(
  DownloadableRegion region,
) async* {
  List<List<double>> serialiseOutline(l) => (l as List)
      .cast<LatLng>()
      .map((e) => [e.latitude, e.latitude])
      .toList();

  final port = ReceivePort();

  final tilesCalc = await Isolate.spawn(
    region.type == RegionType.rectangle
        ? rectangleTiles
        : region.type == RegionType.circle
            ? circleTiles
            : lineTiles,
    {
      'port': port.sendPort,
      'rectOutline': region.type != RegionType.rectangle
          ? null
          : serialiseOutline(region.points),
      'circleOutline': region.type != RegionType.circle
          ? null
          : serialiseOutline(region.points),
      'lineOutline': region.type != RegionType.line
          ? null
          : (region.points as List<List<LatLng>>)
              .chunked(4)
              .map((e) => e.map(serialiseOutline)),
      'minZoom': region.minZoom,
      'maxZoom': region.maxZoom,
      'crs': region.crs,
      'tileSize': region.options.tileSize,
    },
  );

  await for (final Coords<num>? coord in port
      .skip(region.start)
      .take(((region.end ?? double.maxFinite) - region.start).toInt())
      .cast()) {
    if (coord == null) {
      port.close();
      tilesCalc.kill();
      return;
    }
    yield coord;
  }
}
How can I prevent this memory leak? Happy to add more info if needed, but the full source code can be found at https://github.com/JaffaKetchup/flutter_map_tile_caching.
Does this help a bit? It replaces the bottom part of your function. The .batch method (not part of dart:async, so it has to come from a package) reads values in batches of 500, which can be changed to a different value if needed. The count variable tracks the number of values processed, and when it reaches the limit, the port is closed and the isolate is killed.
int count = 0;
final limit = ((region.end ?? double.maxFinite) - region.start).toInt();
// Each event is now a List of up to 500 coords rather than a single coord.
await for (final dynamic batch in port
    .skip(region.start)
    .batch(500)) {
  if (count >= limit) {
    port.close();
    tilesCalc.kill();
    return;
  }
  final coords = (batch as List).cast<Coords<num>>();
  count += coords.length;
  // Re-emit the batched values individually to keep the return type.
  yield* Stream.fromIterable(coords);
}
}
A StreamController has no way to remove values that have already been added to it, so you cannot trim the buffer after the fact. What you can do instead is apply backpressure: listen to the port yourself, forward each value through a StreamController, and pause the subscription every time a fixed number of values has been forwarded, giving the consumer a chance to drain before more arrive. This keeps memory usage under control.
Here's an example implementation:
static Stream<Coords<num>> _generateTilesComputer(
  DownloadableRegion region,
) async* {
  List<List<double>> serialiseOutline(l) => (l as List)
      .cast<LatLng>()
      .map((e) => [e.latitude, e.latitude])
      .toList();

  final port = ReceivePort();
  // StreamController and StreamSubscription come from dart:async.
  final controller = StreamController<Coords<num>>();

  final tilesCalc = await Isolate.spawn(
    region.type == RegionType.rectangle
        ? rectangleTiles
        : region.type == RegionType.circle
            ? circleTiles
            : lineTiles,
    {
      'port': port.sendPort,
      'rectOutline': region.type != RegionType.rectangle
          ? null
          : serialiseOutline(region.points),
      'circleOutline': region.type != RegionType.circle
          ? null
          : serialiseOutline(region.points),
      'lineOutline': region.type != RegionType.line
          ? null
          : (region.points as List<List<LatLng>>)
              .chunked(4)
              .map((e) => e.map(serialiseOutline)),
      'minZoom': region.minZoom,
      'maxZoom': region.maxZoom,
      'crs': region.crs,
      'tileSize': region.options.tileSize,
    },
  );

  final bufferSize = 1000;
  int buffered = 0;
  late final StreamSubscription<Coords<num>?> sub;
  sub = port
      .skip(region.start)
      .take(((region.end ?? double.maxFinite) - region.start).toInt())
      .cast<Coords<num>?>()
      .listen((coord) {
    if (coord == null) {
      controller.close();
      port.close();
      tilesCalc.kill();
      return;
    }
    controller.add(coord);
    // A StreamController cannot drop events it has already buffered, so
    // apply backpressure instead: pause the source every bufferSize events
    // and resume on the next event-loop turn, after the consumer has had a
    // chance to drain.
    if (++buffered % bufferSize == 0) {
      sub.pause(Future<void>.delayed(Duration.zero));
    }
  });

  yield* controller.stream;
}

Google Sheets Script Error - Cannot read property '1' of null (line 7)

I'm using the following script to pull data from bulk json files:
function importRegex(url, regexInput) {
  var output = '';
  var fetchedUrl = UrlFetchApp.fetch(url, {muteHttpExceptions: true});
  if (fetchedUrl) {
    var html = fetchedUrl.getContentText();
    if (html.length && regexInput.length) {
      output = html.match(new RegExp(regexInput, 'i'))[1];
    }
  }
  // Grace period to not overload
  Utilities.sleep(1000);
  return output;
}
Then this formula with the desired URL in E3:
=IMPORTREGEX(E3,"(.*')")
It worked completely fine to begin with, but now I'm suddenly getting the error, seemingly without having made any changes. Any tips?
This error happens because of a missing null check.
You are currently using the return value of html.match() whether it is null or not.
You should check that the return value is not null and that it has enough elements,
like this:
if (html.length && regexInput.length) {
  let match = html.match(new RegExp(regexInput, 'i'));
  if (match != null && match.length > 1) {
    output = match[1];
  }
}
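To see why the original line threw, note that String.prototype.match returns null when the pattern finds nothing, so indexing [1] straight onto it fails. A small self-contained demonstration (the sample strings are made up):

```javascript
// match() returns null on no match, so chaining [1] directly onto it
// throws "Cannot read property '1' of null".
var html = '<p>no quotes here</p>'; // page content with no apostrophe
var match = html.match(new RegExp("(.*')", 'i'));
// match is null here; guard before indexing:
var output = (match !== null && match.length > 1) ? match[1] : '';

// With an apostrophe present, the guard passes and group 1 is captured.
var match2 = "it's data".match(new RegExp("(.*')", 'i'));
```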

Rate limit exceeded

I ran my code that gets tweets with number = 50000, but after fetching some of them I got the error below. I reviewed the links in the error message but couldn't find anything helpful.
[WARNING]
429:Returned in API v1.1 when a request cannot be served due to the application's rate limit having been exhausted for the resource. See Rate Limiting in API v1.1.(https://dev.twitter.com/docs/rate-limiting/1.1)
message - Rate limit exceeded
code - 88
Relevant discussions can be found on the Internet at:
http://www.google.co.jp/search?q=d35baff5 or
http://www.google.co.jp/search?q=12c94134
TwitterException{exceptionCode=[d35baff5-12c94134], statusCode=429,
message=Rate limit exceeded, code=88, retryAfter=-1,
rateLimitStatus=RateLimitStatusJSONImpl{remaining=0, limit=180,
resetTimeInSeconds=1497756414, secondsUntilReset=148}, version=3.0.3}
at
twitter4j.internal.http.HttpClientImpl.request(HttpClientImpl.java:177)
part 2 of error
[ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-
plugin:1.6.0:java (default-cli) An exception occured while executing the Java class. 429:Returned in
API v1.1 when a request cannot be served due to the application's rate
limit having been exhausted for the resource. See Rate Limiting in API
v1.1.(https://dev.twitter.com/docs/rate-limiting/1.1)
[ERROR] message - Rate limit exceeded
I found some similar posts without solutions, except one that I didn't fully understand, and I'm not able to write a comment due to my reputation!
In Twitter4J there is a RateLimitStatus object. You can access it after some API calls. For example:
User user = twitter.showUser(userId);
user.getRateLimitStatus();
//OR
IDs followerIDs = twitter.getFollowersIDs(user.getScreenName(), -1);
followerIDs.getRateLimitStatus();
//OR
QueryResult result = twitter.search(query);
result.getRateLimitStatus();
And maybe you can use a function to handle the rate limit, like this:
private void handleRateLimit(RateLimitStatus rateLimitStatus) {
    // This sometimes threw an NPE, so rateLimitStatus can apparently be
    // null; guard against it.
    if (rateLimitStatus != null) {
        int remaining = rateLimitStatus.getRemaining();
        int resetTime = rateLimitStatus.getSecondsUntilReset();
        int sleep = 0;
        if (remaining == 0) {
            sleep = resetTime + 1; // add 1 more second
        } else {
            sleep = (resetTime / remaining) + 1; // add 1 more second
        }
        try {
            Thread.sleep(sleep * 1000 > 0 ? sleep * 1000 : 0);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
or maybe this:
private void handleRateLimit(RateLimitStatus rateLimitStatus) {
    int remaining = rateLimitStatus.getRemaining();
    if (remaining == 0) {
        int resetTime = rateLimitStatus.getSecondsUntilReset() + 5;
        int sleep = resetTime * 1000;
        try {
            Thread.sleep(sleep > 0 ? sleep : 0);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
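Both helpers boil down to the same pacing arithmetic, which can be checked in isolation. A JavaScript sketch (the function name is mine): with no calls remaining, wait out the reset window; otherwise spread the remaining calls evenly over the time left.

```javascript
// Sketch of the pacing arithmetic used by the handlers above: returns the
// number of seconds to sleep, given the remaining call budget and the
// seconds until the rate-limit window resets.
function sleepSeconds(remaining, secondsUntilReset) {
  if (remaining === 0) {
    return secondsUntilReset + 1; // wait out the whole window (+1s margin)
  }
  // Pace the remaining calls evenly across the rest of the window
  // (integer division, matching the Java code above).
  return Math.floor(secondsUntilReset / remaining) + 1;
}
```

For example, with the quota exhausted and 148 seconds until reset (the numbers in the error above), the helper waits 149 seconds.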
Hope this helps.
Any other/better methods would also be appreciated.

Epicor 10 Linking a BAQ in BPM Workflow Designer (Avoiding Custom Code)

I have been tasked with reviewing a BPM created by Epicor that is not functioning as expected. Based on the code below, its purpose is to reference orders in the system so that, when it is time to ship an order and there has been a price change, the order/part reflects the new price. It seems the code is causing price lists to be retrieved from unexpected customers. For example, a price list is attached to customer #1242, but the price is updated based on customer #1269. (Guessing they share a common part # and the code retrieves the latest value.)
Now, my problem is that I don't have experience writing code; I have reviewed code before, but only to a small extent, and the code listed above was provided to me. What I thought might be an easier practice for me to understand is to create a BAQ, reference it in the BPM, and use the BAQ as the reference the BPM uses to update prices.
After researching a few forums and Epicor's training material, I haven't found a definitive answer on how to link a BAQ in a BPM.
(Also, if my description makes sense and the code reflects the issue, feel free to take a guess.)
BPM Code:
var ttShipHead_xRow = (from ttShipHead_Row in ttShipHead
                       where ttShipHead_Row.ReadyToInvoice == true
                       select ttShipHead_Row).FirstOrDefault();
if (ttShipHead_xRow != null)
{
    foreach (var ShipDtl_iterator in (from ShipDtl_Row in Db.ShipDtl
                                      where ttShipHead_xRow.PackNum == ShipDtl_Row.PackNum
                                         && ttShipHead_xRow.Company == ShipDtl_Row.Company
                                      select ShipDtl_Row))
    {
        var ShipDtl_xRow = ShipDtl_iterator;
        //ShipDtl_xRow.UnitPrice = 1;
        var today = DateTime.Today;
        var PriceList_xRow = (from PriceLst_Row in Db.PriceLst
                              from PriceLstParts_Row in Db.PriceLstParts
                              where ShipDtl_xRow.PartNum == PriceLstParts_Row.PartNum
                                 && PriceLst_Row.ListCode == PriceLstParts_Row.ListCode
                                 && PriceLst_Row.Company == PriceLstParts_Row.Company
                                 && PriceLst_Row.Company == ShipDtl_xRow.Company
                                 && PriceLst_Row.EndDate >= today
                              select PriceLstParts_Row).FirstOrDefault();
        if (PriceList_xRow != null)
        {
            var OrderDtl_xRow = (from OrderDtl_Row in Db.OrderDtl
                                 where ShipDtl_xRow.OrderLine == OrderDtl_Row.OrderLine
                                    && ShipDtl_xRow.PartNum == OrderDtl_Row.PartNum
                                    && ShipDtl_xRow.OrderNum == OrderDtl_Row.OrderNum
                                    && ShipDtl_xRow.Company == OrderDtl_Row.Company
                                 select OrderDtl_Row).FirstOrDefault();
            if (OrderDtl_xRow != null)
            {
                if (ShipDtl_xRow.UnitPrice != PriceList_xRow.BasePrice)
                {
                    ShipDtl_xRow.UnitPrice = PriceList_xRow.BasePrice;
                }
                if (ShipDtl_xRow.UnitPrice != OrderDtl_xRow.UnitPrice)
                {
                    OrderDtl_xRow.DocUnitPrice = PriceList_xRow.BasePrice;
                    OrderDtl_xRow.UnitPrice = PriceList_xRow.BasePrice;
                }
            }
        }
    }
}
I fixed the code, but still could not determine a valid method to link a BAQ in the BPM.
The problem was that the following condition was missing from the query in the first foreach statement:
&& ttShipHead_xRow.CustNum == ShipDtl_Row.CustNum
Without it, shipment detail rows were matched on pack number and company alone, so rows belonging to other customers could be picked up.
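The effect of the missing join key can be reproduced outside Epicor. In this JavaScript sketch (the sample rows are made up), matching shipment detail rows on PackNum and Company alone also returns a row for another customer, while adding CustNum to the condition filters it out:

```javascript
// Made-up sample data: two detail rows share a pack number and part, but
// belong to different customers.
var shipHead = { PackNum: 100, Company: 'C1', CustNum: 1242 };
var shipDtl = [
  { PackNum: 100, Company: 'C1', CustNum: 1242, PartNum: 'P-1' },
  { PackNum: 100, Company: 'C1', CustNum: 1269, PartNum: 'P-1' },
];

// Join on PackNum + Company only (the original, buggy condition):
var withoutKey = shipDtl.filter(function (r) {
  return r.PackNum === shipHead.PackNum && r.Company === shipHead.Company;
});

// Join with CustNum added (the fix):
var withKey = shipDtl.filter(function (r) {
  return r.PackNum === shipHead.PackNum &&
         r.Company === shipHead.Company &&
         r.CustNum === shipHead.CustNum;
});
```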

Why is this iteration of an Outlook Folder only processing a maximum of half the number of items in the folder?

Outlook rules put all Facebook-originating mail into a Facebook folder, and an external process, running as detailed here, separates the contents of that folder in a way that was not feasible through Outlook's rules. Originally I had this process running in VBA inside Outlook, but it was a pig, choking Outlook's resources, so I decided to move it out externally - and, as I want to improve my C# skill set, make it a conversion at the same time. The mail processing works as it should: items go to the correct sub-folders. But for some reason the temporary constraint to exit after i iterations is not doing as it should. If there are 800 mails in the Facebook folder (I am a member of many groups), it only runs through 400 iterations; if there are 30, it only processes 15, etc.
I can't for the life of me see why - can anyone put me right?
Thanks
private void PassFBMail()
{
    //do something
    // result = MsgBox("Are you sure you wish to run the 'Allocate to FB Recipient' process", vbOKCancel, "Hold up")
    //If result = Cancel Then Exit Sub
    var result = MessageBox.Show("Are you sure you wish to run the 'Allocate to SubFolders' process", "Sure?", MessageBoxButtons.OKCancel, MessageBoxIcon.Question, MessageBoxDefaultButton.Button2);
    if (result == DialogResult.Cancel)
    {
        return;
    }
    try
    {
        OutLook._Application outlookObj = new OutLook.Application();
        OutLook.MAPIFolder inbox = (OutLook.MAPIFolder)
            outlookObj.Session.GetDefaultFolder(OutLook.OlDefaultFolders.olFolderInbox);
        OutLook.MAPIFolder fdr = inbox.Folders["facebook"];
        OutLook.MAPIFolder fdrForDeletion = inbox.Folders["_ForDeletion"];
        // foreach (OutLook.MAPIFolder fdr in inbox.Folders)
        // {
        //     if (fdr.Name == "facebook")
        //     {
        //         break;
        //     }
        // }
        //openFacebookFolder Loop through mail
        //LOOPING THROUGH MAIL ITEMS IN THAT FOLDER.
        Redemption.SafeMailItem sMailItem = new Redemption.SafeMailItem();
        int i = 0;
        foreach (Microsoft.Office.Interop.Outlook._MailItem mailItem in fdr.Items.Restrict("[MessageClass] = 'IPM.Note'"))
        {
            //temp only process 500 mails
            i++;
            if (i == 501)
            {
                break;
            }
            // eml.Item = em
            // If eml.To <> "" And eml.ReceivedByName <> "" Then
            //     strNewFolder = DeriveMailFolder(eml.To, eml.ReceivedByName)
            // End If
            sMailItem.Item = mailItem;
            string strTgtFdr = null;
            if (sMailItem.To != null && sMailItem.ReceivedByName != null)
            {
                strTgtFdr = GetTargetFolder(sMailItem.To, sMailItem.ReceivedByName);
            }
            // If fdr.Name <> strNewFolder Then
            //     If dDebug Then DebugPrint "c", "fdr.Name <> strNewFolder"
            //     eml.Move myInbox.Folders(strNewFolder)
            //     If dDebug Then DebugPrint "w", "myInbox.Folders(strNewFolder) = " & myInbox.Folders(strNewFolder)
            // Else
            //     eml.Move myInbox.Folders("_ForDeletion")
            // End If
            if (fdr.Name != strTgtFdr)
            {
                OutLook.MAPIFolder destFolder = inbox.Folders[strTgtFdr];
                mailItem.Move(destFolder);
            }
            else
            {
                mailItem.Move(fdrForDeletion);
            }
        }
        //allocate to subfolders
        //Process othersGroups
        //Likes Max 3 per day per user, max 20% of group posts
        //Comments one per day per user, max 10% of group posts
        //Shares one per day per user, max 10% of group posts
    }
    catch (System.Exception crap)
    {
        OutCrap(crap);
        MessageBox.Show("MailCamp experienced an issue while processing the run request and aborted - please review the error log", "Errors during the process", MessageBoxButtons.OK, MessageBoxIcon.Error, MessageBoxDefaultButton.Button1);
    }
}
Do not use a foreach loop if you are modifying the number of items in the collection: every Move removes an item from the folder and shifts the rest, so the enumerator skips entries.
Instead, loop with an index from MAPIFolder.Items.Count down to 1.
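The halving symptom is the classic modify-while-iterating bug and is easy to reproduce outside Outlook. A JavaScript sketch (a plain array stands in for the Items collection): removing every item while iterating forward leaves half of them behind, while counting down from the end removes them all.

```javascript
// Moving (removing) items while iterating forward skips the element that
// slides into the freed slot, so half the items survive.
function forwardRemove(items) {
  for (var i = 0; i < items.length; i++) {
    items.splice(i, 1); // stands in for mailItem.Move(...)
  }
  return items.length; // items skipped and left behind
}

// Iterating from the last index down to 0 removes every item, because
// deletions never shift the not-yet-visited elements.
function backwardRemove(items) {
  for (var i = items.length - 1; i >= 0; i--) {
    items.splice(i, 1);
  }
  return items.length;
}
```

With 8 items, the forward loop leaves 4 behind - the same 800-to-400 ratio described in the question - while the backward loop leaves 0.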
