I'm using the Google Sheets API to update a spreadsheet every day.
For two months it worked perfectly, but suddenly the update stopped working.
I tried to run it on my machine and received this error:
The JSON returned: "This document is too large to continue editing."
I tried a few things to fix it, without success. The error claims the document is too large, but the document is the same size as always. I even tried updating the document with only ONE row and got the same error...
I also tried creating another spreadsheet to receive the complete update (the one that is supposedly "too large"), and it worked perfectly... so my code works against a second spreadsheet but not against the first one...
I also tried to edit the first spreadsheet manually, but I can't: the spreadsheet stays in the 'saving' or 'reconnecting' status and never saves.
Yes, I could just use the second spreadsheet that is working, but the problem will probably come back, so I'd like to resolve this permanently.
Can somebody help me?
I have filled a Google spreadsheet with around 10 URLs and XPaths. After discovering that IMPORTXML has some drawbacks (it gets perpetual loading errors, even with only one or so functions running), I am looking for another way to populate the sheet.
My first solution was to delete the IMPORTXML calls and implement a write-and-repeat macro.
However, after about two months, all the sheets suddenly stopped working.
Is there a way to solve this with a script?
Here is a sample sheet that has stopped working:
https://docs.google.com/spreadsheets/d/10Ljo7SESFdGj1Xc7U_Tg3gljHhfqbuyVV5Whulezyv4/edit?usp=sharing
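For reference, the write-and-repeat macro is essentially a time-driven Apps Script function that fetches each URL itself and writes the results into the sheet, bypassing IMPORTXML entirely. The sketch below is only an illustration (not my actual macro) and makes several assumptions: URLs in column A, a regular expression with one capture group in column B as a stand-in for the XPath (Apps Script has no built-in XPath evaluator for HTML), results written to column C, and a sheet named 'Sheet1'.

// Hypothetical sketch of a write-and-repeat macro, meant to be run from a
// time-driven trigger instead of IMPORTXML.
// Assumes: column A = URL, column B = regex with one capture group, column C = result.
function refreshScrapedValues() {
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Sheet1');
  var rows = sheet.getRange(2, 1, sheet.getLastRow() - 1, 2).getValues();
  var output = rows.map(function (row) {
    var url = row[0];
    var pattern = row[1];
    try {
      var html = UrlFetchApp.fetch(url, { muteHttpExceptions: true }).getContentText();
      var match = html.match(new RegExp(pattern));
      return [match ? match[1] : 'no match'];
    } catch (e) {
      return ['fetch failed: ' + e.message];
    }
  });
  sheet.getRange(2, 3, output.length, 1).setValues(output);
}

A daily time-driven trigger on refreshScrapedValues() (set up in the Apps Script editor, or with ScriptApp.newTrigger('refreshScrapedValues').timeBased().everyDays(1).create()) would then replace the per-cell IMPORTXML calls.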
I use GOOGLEFINANCE() to query the historical USD/GBP exchange rate at a set of fixed dates.
This works fine, except that GOOGLEFINANCE sometimes returns #N/A, for whatever temporary upstream reason. When this happens, every cell that depends on these exchange rates fills up with #REF! errors, and the sheet is unreadable until the upstream data source is fixed. This can sometimes take hours.
This happens frequently and is especially annoying since I'm not using GOOGLEFINANCE to retrieve time-varying data. I'm just using it as a static dataset of historical exchange rates, so theoretically I have no reason to refresh the data at all.
Is there a way to locally cache the historical exchange rates in my sheet, and to fall back on those values if GOOGLEFINANCE returns #N/A?
(Bonus points for automatically updating the cached values if GOOGLEFINANCE changes its mind about the historical exchange rates.)
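(For reference, a single historical rate in this setup looks roughly like the formula below; the currency pair and date are just examples, and the INDEX(..., 2, 2) is there because GOOGLEFINANCE returns a small header-plus-value table when given a date.)
=INDEX(GOOGLEFINANCE("CURRENCY:USDGBP", "price", DATE(2023, 1, 15)), 2, 2)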
I know this is an old post and you probably don't care anymore, but I was having issues with my triggers updating my assets page every night - the totals would fail if any one stock price had an error.
I created a custom function which caches the GOOGLEFINANCE results, so it reverts to the last valid data point if GOOGLEFINANCE() fails.
However, this led to the custom function's Achilles heel, 'Loading...', which also came up occasionally. So I then modified it to use triggers for the updates, using my new custom function code - which never fails.
I made it an open-source project, with one file you need to add to your Apps Script project.
Used as a custom function, it would be something like:
=CACHEFINANCE(symbol, attribute, defaultValue)
For example:
=CACHEFINANCE("TSE:ZTL", "price", GOOGLEFINANCE("TSE:ZTL", "price"))
However, if you follow the instructions to create a trigger, it is far more reliable. It also has a built-in web scraper to track down info on stocks that GOOGLEFINANCE refuses to collect data for.
GitHub: cachefinance
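The trigger-based version of this idea is simple enough to sketch on its own. The snippet below is only an illustration of the approach, not the cachefinance project's actual code: it assumes a sheet named 'Rates' where column B holds the live GOOGLEFINANCE formulas and column C keeps the cached copy, and it only overwrites a cached value when the live one is a real number.

// Hypothetical sketch of the caching idea - not the cachefinance project's code.
// Meant to run from a time-driven trigger. Column B holds live GOOGLEFINANCE
// formulas; column C keeps the last good value and is only overwritten when
// the corresponding live cell contains a real number.
function cacheFinanceValues() {
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Rates');
  var numRows = sheet.getLastRow() - 1;                  // skip the header row
  var live = sheet.getRange(2, 2, numRows, 1).getValues();
  var cached = sheet.getRange(2, 3, numRows, 1).getValues();
  for (var i = 0; i < numRows; i++) {
    var value = live[i][0];
    if (typeof value === 'number' && !isNaN(value)) {
      cached[i][0] = value;                              // live value is good: refresh the cache
    }                                                    // otherwise keep the previous cached value
  }
  sheet.getRange(2, 3, numRows, 1).setValues(cached);
}

Installed as a time-driven trigger (for example ScriptApp.newTrigger('cacheFinanceValues').timeBased().everyHours(1).create()), the rest of the sheet can then reference column C instead of column B and never sees the #N/A blackouts.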
Well, you are working with historical data, i.e. data that won't change no matter what, so you could fetch the values you need once, hardcode them, and get rid of GOOGLEFINANCE for good.
Another way would be to wrap anything that can produce #REF! in IFERROR, so when the blackout occurs you get nice blank cells instead of a sea of #REF! errors.
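For example, wrapping the lookup in IFERROR (ticker and date are just placeholders) leaves the cell blank instead of erroring while GOOGLEFINANCE is down:
=IFERROR(INDEX(GOOGLEFINANCE("CURRENCY:USDGBP", "price", DATE(2023, 1, 15)), 2, 2), "")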
I've been working on a project involving the Watson Retrieve and Rank service and it was acting normally until now. I managed to upload a number of documents and created roughly 50 questions to start off. Normally, I was able to upload the questions just fine, but now I keep getting an error saying "Questions upload Upload failed".
I have tried different browsers and incognito mode, yet nothing seems to solve the issue. I either get the error or the upload-questions animation plays endlessly.
This is what it looks like as I try to upload the questions
If anyone could give some insight on how to approach this problem, it would be great.
Can you provide the entire error log?
Are you sure the Solr cluster and collection were created correctly? The Standard plan for this service only allows 7 rankers on the free tier.
You can try it with a new instance of the service.
Are you sure your training data meets the requirements?
Training data requirements:
https://www.ibm.com/watson/developercloud/doc/retrieve-rank/training_data.html
Retrieve and Rank wasn't working correctly on Wednesday and Thursday, but today it's up and running properly.
I have been developing an iOS app that uses the CloudKit feature available to Apple developers. I've found it to be a wonderful resource, especially since the very day I started designing my backend, the service I was intending to use (Parse) announced it was shutting down. It's very appealing due to its small learning curve, but I'm starting to notice some annoying little issues here and there, so I'm seeking out some experts for advice and help. I posted another CloudKit question a couple of days ago, which is still occurring: CloudKit Delete Self Option Not Working. But I want to limit this to a different issue that may be related.
Problem ~ Ever since I started using CloudKit, I have noticed that whenever I manually try to edit a record (delete an entry, remove or add part of a list, or even add a DeleteSelf option to a CKReference after creation) and then save the change, I get an error message and cannot proceed. Here is a screenshot of the error window that appears:
It's frustrating because any time I want to manipulate a record to perform some sort of test, I either have to do it through my app or delete the record entirely and create a new one (which I can do without issue). I have been working around this issue for over a month now because it wasn't fatal to my progress. However, I am starting to think it could be related to my other CloudKit issues, and maybe if I could get some advice on how to fix it I could also solve my other problems. I have filed numerous bug reports with Apple, but haven't received a response or seen any changes.
I'd also like to mention that for a very long time now (at least a few days), I've noticed in the bottom left-hand corner of my Dashboard that it consistently says it's "Reindexing Development Data". At first that wasn't an issue: I would get that notification after making a change, but it would go away once the operation completed. Now it seems to be stuck somewhere in the process, and it's chronic - it says this all the time, even right when I log into my dashboard.
Here is what I'm talking about:
As time goes on I find more small issues with CloudKit, and I'm concerned that once I go into production more problems could start manifesting and then I could have a serious issue. I'd love to stick with CloudKit and avoid the learning curve of a different service like Amazon Web Services, but I also don't want to set myself up for failure.
Can anyone help me with this issue, or has anyone else experienced it on a regular basis? Thanks for the advice and help!
Pierce,
I found myself in a similar situation; the issue seemed to be linked to Assets - I had an Asset in my record definition. Several others and I reported the re-indexing issue on the Apple support website, and after about a month it eventually disappeared.
Have you tried resetting your database schema completely? Snapshot the definition first, since the reset zaps it completely; see the inset.
Ultimately I simply created a new project, linked it to CloudKit, and used the new container in my original app.
I am having issues with Kimono Labs. Every scrape I run runs indefinitely without throwing an error or completing. Occasionally the scrapes randomly start working again a few days later without any changes on my part, only to fail again a few days after that. I love Kimono because it is so easy to integrate with Google Sheets so friends can alter the data, but this has become problematic. There doesn't seem to be anything in the Kimono help docs about an issue like this.
One of my scrapes is not behind a paywall and the other is. One is set to run daily and the one behind the paywall is set to run hourly.
What steps can I take to troubleshoot this error and get the ball rolling again?
I had a very simple API doing the exact same thing for weeks!
I'm only on a free account so I didn't have any support, but I ended up sending a bug report at https://www.kimonolabs.com/support .
Strangely enough, the very next day, the API started working normally again (and has ever since). I assume they looked into it and fixed whatever was stopping my crawl from completing.