Since Monday my Android app users have been getting a 500 return code from Fusion Tables whenever they try to save their data.
Within the app we convert all their geo data into CSV format and POST it to Fusion Tables using an insert command. Before Sunday this appears to have been working fine. Starting on Monday we have been seeing problems like the following logcat on 100% of our saves:
10-29 12:18:34.083: W/System.err(3650): java.io.FileNotFoundException: https://www.googleapis.com/upload/fusiontables/v1/tables/[valid table ID redacted]/import?access_token=[valid access token redacted]
Despite the error message, manually checking the Fusion Tables shows no error and all seems fine.
Given the other problems with Drive on Monday, I'm guessing that the team rolled out some changes over the weekend. Perhaps there was a change that means I need to terminate the stream with a special character or something. Is anyone else experiencing a similar problem, or does anyone have any idea what is going on?
I'm up to about 30 complaints from users now and it's getting a bit old.
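For reference, the call we make boils down to something like this (sketched here with Ruby's Net::HTTP purely to show the shape of the request; the app builds the equivalent POST on Android, and the sample CSV body and octet-stream content type are assumptions):

require 'net/http'
require 'uri'

# Placeholder table ID and token; the real values are redacted above
uri = URI('https://www.googleapis.com/upload/fusiontables/v1/tables/TABLE_ID/import?access_token=ACCESS_TOKEN')

# A couple of made-up CSV rows standing in for the geo data
csv = "52.37,4.89,\"point A\"\n52.38,4.90,\"point B\"\n"

request = Net::HTTP::Post.new(uri)
request['Content-Type'] = 'application/octet-stream'
request.body = csv

response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
puts response.code  # this is what has started coming back as 500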
We believe this is resolved now -- are you still getting the reports? If it's the problem we identified, the imports were in fact eventually completing even though an error status was returned.
I have a database that has been in use for many years. It was upgraded to .accdb back in 2010. It is currently being accessed using Access for Office 365 on a Windows 10 Pro computer. The database is stored on a network drive.
Everything had been working fine until today. Today I'm getting an error that the query [query name] is corrupt. I created a new query that does the same thing as the old one and am getting the same error message on the new query. I've added the database location to the trusted locations, but that hasn't fixed the error.
The query in question is a simple update query: it looks at one value in one table and sets it to 0 if the current value is null (essentially UPDATE [table] SET [field] = 0 WHERE [field] Is Null).
Any ideas where I should look next?
Thank you for any and all help.
I have an entity with a date field that is inserted/updated via a SAPUI5 dialog. After some consecutive updates (changes to the date via the datepicker) on the same record, I experience this weird error. I say weird because I don't have this kind of check inside my update entity method. In fact, the update entity method does nothing more than perform a couple of simple checks on other fields and eventually insert the record; there are no calls to BAPIs, standard SAP function modules or anything else.

Also weird is the fact that it doesn't happen every time. I spent half the day repeating the same routine, just consecutive updates with the debugger: sometimes it happens, sometimes it doesn't. No exception occurs in my method; the error comes from standard SAP after the end of get_entityset, which follows the update. If helpful, I attach a snapshot of the batch operations involved.
[Screenshot: batch operations table]
And it gets better! After the application crashes it never opens again: during the load of the initial worklist (the get_entityset method) it produces a variation of the aforementioned error (Message E FK 080 cannot be processed in plugin mode HTTPS). The application stops crashing as soon as I delete the record from the database.
The record looks just fine in SE16N and the gateway client works fine when I test the get_entityset method. I cleared all caches in the system (using this: https://blogs.sap.com/2016/03/02/cache-maintenance-in-fiori/) and even cleared the browser cache (yeah, I know that doesn't make sense), but the problem persists. I use model.submitChanges to update; no change groups are involved.
Does this ring any bells to anyone?
Best regards
Greg
I've been using Fusion Tables on and off for about a year. Today, on attempting to upload a 156 KB file, it comes up with 'Unable to complete import'.
I tried reducing the size, then realised most of the other files I'd uploaded were larger, so out of curiosity I attempted to upload a previously uploaded file (already stored as a Fusion Table).
This also fails. I've tried uploading on another account and this also fails with the same message.
Is there any way to check whether there is an issue with Fusion Tables? I've read previous questions, and those unrelated to file size mention an issue on the Fusion Tables side. Alternatively, is there a way of getting better error reporting from the attempted upload?
Resolved by Google. No changes were made/no reboots today on our side, but Fusion Tables are working again.
So here's the issue: we have data that the users want displayed. We've optimized and indexed the query to be as fast as I think it's going to get. We might shave off a second or two, but with the amount of data there's not much more we can do.
Anyway, the query runs great when limited to a day or two of data; however, the users are running it for a week or two of data. The query for two or three weeks of data takes about 40 seconds, and with a Heroku timeout of 30 seconds, that doesn't work. So we need a solution.
Searching here and on Google, I see comments that webhooks or Ajax would work as our solution. However, I've been unable to find a real concrete example. I also saw a comment where someone said we could send some kind of response that would "reset the clock." That sounded intriguing too, but again I couldn't find an example.
We're kind of under the gun and the users are unhappy, so we need a solution that is fast and simple. Thanks in advance for your help!
I faced a similar problem. I have what is essentially a 'bulk download' page in my Sinatra app, which my client app calls to import data into a local webSQL db in the browser.
The download page (let's call it "GET /download") queries a CouchDB database to get all of the documents in a given view, and it's this part of the process (much like your query) that takes a long time.
I was able to work around the issue by using Sinatra's streaming API. As mentioned above, you can 'reset' the clock in Heroku by sending at least one byte in the response before the 30s timeout is up. This resets the clock for another 55s (and each additional byte you send keeps resetting it again), so while you're still sending data, the connection can be kept open indefinitely.
In Sinatra, I was able to replace this:
get '/download' do
  body = db.view 'data/all'
end
...with something like this:
get '/download' do
  # stream is a Sinatra helper that effectively does 'chunked transfer encoding'
  stream do |out|
    # Long query starts here
    db.view 'data/all' do |row|
      # As each row comes back from the db, stream it to the front-end
      out << row
    end
  end
end
This works well for me, because the 'time to first byte' (i.e. the time taken for the db query to return the first row) is well under the 30s limit.
The only downside is that previously I was getting all of the results back from the database into my Sinatra app, then calculating an MD5 sum of the entire result to use as the ETag header. My client app could use this to do a conditional HTTP GET (i.e. if it tried to download again and no data had been modified, I could just send back a 304 Not Modified); plus it could be compared against the client's own checksum of the data it received (to ensure it hadn't been corrupted/modified in transit).
Once you start streaming data in the response, you lose the ability to calculate an MD5 sum of the content to send as an HTTP header (because you don't have all of the data yet, and you can't send headers after you've started sending body content).
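For reference, the pre-streaming version with the checksum behaviour described above looked roughly like this (a sketch only; the JSON serialisation and the exact return value of db.view are assumptions, and db is the same CouchDB handle as in the snippets above):

require 'sinatra'
require 'json'
require 'digest/md5'

get '/download' do
  # Buffer the entire result set in memory first (this is the slow part)
  rows = db.view 'data/all'
  payload = rows.to_json

  # Sinatra's etag helper sets the ETag header and, when the client sends a
  # matching If-None-Match, halts the request with 304 Not Modified
  etag Digest::MD5.hexdigest(payload)

  content_type :json
  payload
end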
I'm thinking about changing this to some sort of paginated, multiple-AJAX-call solution, as Zenph suggested above. That, or use some sort of worker process (e.g. Resque or DelayedJob) to offload the query to; but then I'm not sure how I'd notify the client when the data is ready to be fetched.
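If I do go the paginated route, the rough idea would be something like the following (again just a sketch; the limit/skip options depend on the CouchDB client library, and the parameter names are made up):

require 'sinatra'
require 'json'

get '/download' do
  # Each call returns one page, so every individual request finishes well
  # inside Heroku's 30s window; the client keeps calling until it gets an
  # empty page.
  limit  = (params['limit']  || 500).to_i
  offset = (params['offset'] || 0).to_i

  rows = db.view 'data/all', limit: limit, skip: offset

  content_type :json
  rows.to_json
end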
Hope this helps.
I have a couple users that are getting this CookieStore::CookieOverflow error.
I'm suspicious of nginx/passenger because I just switched to that last week (from nginx/thin) and now these are happening.
It's always a particular action, but it doesn't happen for all users. I checked what I'm storing in the session and I'm not saving any large objects, just a couple of ids and a couple of boolean values.
If I were storing big objects in the session, I'd expect all users to have this error.
Suggestions on how to troubleshoot this would be helpful.
Tracking and debugging a CookieStore::CookieOverflow error is not simple. You should try to replicate exactly the same user activity on the site.
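One way to narrow it down is to log an approximate serialized session size on every request, so you can see which action (and which user) pushes it towards the ~4KB cookie limit. A rough sketch (the filter name and threshold are arbitrary):

class ApplicationController < ActionController::Base
  after_filter :log_session_size

  private

  # Marshal is what the cookie store uses under the hood, so this gives a
  # reasonable approximation of the session's size before signing/encoding
  def log_session_size
    size = Marshal.dump(session.to_hash).bytesize
    logger.warn "Session on #{request.path} is ~#{size} bytes" if size > 3_000
  end
end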
A couple of suggestions to fix the error:
* switch to a more scalable session store such as ActiveRecord or Memcached (see the sketch below)
* try to reduce the number of elements stored in the session
Also note that flash messages are stored in the session. If you send back a really long flash message, you can hit a CookieOverflow error.
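For the first suggestion, switching the session store is usually a one-line change. A sketch for a Rails 3-style app (the application name is a placeholder, and depending on your Rails version the ActiveRecord store may live in the activerecord-session_store gem and need its sessions table migration):

# config/initializers/session_store.rb

# Store session data in the database; the cookie then only carries the
# session id, so there is no ~4KB payload left to overflow.
MyApp::Application.config.session_store :active_record_store

# Or back the session with memcached instead (needs a memcached client gem):
# MyApp::Application.config.session_store :mem_cache_store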
Three Date objects stored in the session were causing this. Removing them from the session stopped the error from happening.