Fusion Tables: Unable to complete import - google-fusion-tables

I've been using Fusion Tables for about a year, off and on. Today, when attempting to upload a 156 KB file, it comes up with 'Unable to complete import'.
I tried reducing the size, then realised most of the other files I'd uploaded were larger, so out of curiosity I attempted to re-upload a previously uploaded file (already stored as a Fusion Table).
This also fails. I've tried uploading on another account and this also fails with the same message.
Is there any way to check whether there is a known issue with Fusion Tables? I've read previous questions, and those not related to file size mention an issue with Fusion Tables. Alternatively, is there a way of getting better error reporting from the attempted upload?

Resolved by Google. No changes were made and nothing was rebooted on my side today, but Fusion Tables is working again.

Related

Firebird Trace SP's / Steps

I need a simple and free program to monitor all transactions on a Firebird Server.
I'm creating a web API that needs to mimic a user creating a new order in a custom-made program.
Since I haven't got access to the source code, I guess my best chance to properly insert all the data is to use the existing stored procedures. I can see all the SPs on the Firebird server, and their details, but I haven't got a clue which ones to use and in what order.
So the plan would be to monitor the activities while creating an order.
Many thanks for the help!
Thank you Mark & Arioch for contributing.
After hours of failed experimenting with the fbtrace.exe included in the Firebird 2.5 installation, I've decided to use the trial version of "FB TraceManager".
Found here: https://www.upscene.com/downloads/fbtm
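For anyone who still wants to try the built-in trace facility, a minimal trace configuration for Firebird 2.5 that logs stored procedure calls might look roughly like the sketch below. The database mask is a placeholder, and the exact option names should be double-checked against the fbtrace.conf shipped with your installation.

# fbtrace.conf-style sketch (Firebird 2.5 syntax; options assumed from the stock file)
<database mydatabase.fdb>
    enabled              true
    log_procedure_start  true
    log_procedure_finish true
    log_statement_finish true
    time_threshold       0
</database>

With something like this loaded into a trace session, each stored procedure call, and the order in which they fire, should show up in the trace log while you create an order in the original application.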

Use case for offline data storage keeping in sync with Parse backend

I am currently working on Parse integration for one of my iOS applications, wherein I need to pull some records (customer feedback from an existing table) from Parse and show them in the mobile app. With Parse iOS SDK 1.6.1 I realized that I could also use the local datastore to provide offline support; however, going through the appCoda and raywenderlich articles and the Parse documentation, I could not figure out a solution for the use case I am dealing with.
Step 1: Show all records pulled from the server (initial sync - but I also realized that I need to enable [Parse enableLocalDatastore];, which is now interfering with the initial data pull).
Step 2: Allow the user to perform certain modifications and sync this data back to the server.
Step 3: Keep the local datastore in sync with the online data at all times (provided I have internet, as and when needed).
I was able to implement the all-online version of the app and achieve all the features as needed, but I would also like to include offline support. A few questions that raise doubts are:
Does the local datastore only support offline usage of the application, which then has to be manually synced with the Parse backend?
The data fetch from the Parse local datastore via [query fromLocalDatastore]; doesn't return anything on the first call (I know, because there isn't anything on the device yet). Do I need to write the logic to pull down data from the backend every time and keep the local datastore in sync?
Can someone correct me if I am using it the wrong way, or give me some pointers on correct usage? That would be really helpful.
Yes, you have to query the data online first (without [query fromLocalDatastore];) and "pin" it for local usage.
A useful hint: use updatedAt to fetch only the new records.
Once that is done, you can get the data both online and offline. The sync should be automatic.
Red flag: don't forget to update to SDK 1.6.2, as it fixes a lot of big bugs related to the local datastore.
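For what it's worth, a minimal sketch of that flow with the Parse iOS SDK in Objective-C follows; the Feedback class name, the "feedback" pin label, and the lastSync bookkeeping are assumptions for illustration, not part of the original question.

// In application:didFinishLaunchingWithOptions: - enable the local datastore
// BEFORE initializing Parse, otherwise pinning will not work.
[Parse enableLocalDatastore];
[Parse setApplicationId:@"YOUR_APP_ID" clientKey:@"YOUR_CLIENT_KEY"];

// Initial sync: query the server (no fromLocalDatastore) and pin the results.
PFQuery *serverQuery = [PFQuery queryWithClassName:@"Feedback"]; // placeholder class name
[serverQuery findObjectsInBackgroundWithBlock:^(NSArray *objects, NSError *error) {
    if (!error) {
        [PFObject pinAllInBackground:objects withName:@"feedback"]; // label the pin so it can be refreshed later
    }
}];

// Later reads can come from the local datastore, even offline.
PFQuery *localQuery = [PFQuery queryWithClassName:@"Feedback"];
[localQuery fromLocalDatastore];
[localQuery findObjectsInBackgroundWithBlock:^(NSArray *objects, NSError *error) {
    // objects contains whatever was pinned above (empty until the first online sync)
}];

// Incremental sync using the updatedAt hint: only fetch records changed since the last sync.
NSDate *lastSync = [[NSUserDefaults standardUserDefaults] objectForKey:@"lastSync"]; // persisted by the app (assumption)
PFQuery *deltaQuery = [PFQuery queryWithClassName:@"Feedback"];
if (lastSync) {
    [deltaQuery whereKey:@"updatedAt" greaterThan:lastSync];
}
[deltaQuery findObjectsInBackgroundWithBlock:^(NSArray *objects, NSError *error) {
    if (!error) {
        [PFObject pinAllInBackground:objects withName:@"feedback"];
        [[NSUserDefaults standardUserDefaults] setObject:[NSDate date] forKey:@"lastSync"];
    }
}];

For Step 2, saving a modified object with saveInBackground (or saveEventually when the network may be down) pushes the change back to the server while the pinned copy stays available locally.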

Fusion Tables 500 response on csv import to private table

Since Monday my Android app users have been getting a 500 return code from Fusion Tables whenever they try to save their data.
Within the app we convert all their geo data into CSV format and POST it to Fusion Tables using an insert command. Before Sunday this appears to have been working fine. Starting on Monday we are seeing problems like the following logcat entry with 100% of our saves:
10-29 12:18:34.083: W/System.err(3650): java.io.FileNotFoundException: https://www.googleapis.com/upload/fusiontables/v1/tables/[valid table ID redacted]/import?access_token=[valid access token redacted]
Despite the error message, manually checking the Fusion Tables shows no error and all seems fine. (The FileNotFoundException itself isn't very informative: on Android, HttpURLConnection typically throws it from getInputStream() when the server returns an error status such as 500; the actual response body is only available via getErrorStream().)
Given the other problems with Drive on Monday, I'm guessing that the team rolled out some changes over the weekend. Perhaps there was a change that means I need to terminate the stream with a special character or something. Is anyone else experiencing a similar problem, or does anyone have any idea what is going on?
I'm up to about 30 complaints from users now and it's getting a bit old.
We believe this is resolved now -- are you still getting the reports? If it's the problem we identified, the imports were in fact eventually completing even though an error status was returned.

iOS: how to manage the DB across iPhone app versions?

My question covers two cases:
1. For example, I have an app on the App Store that contains an SQLite database. After some time I want to update the app version without changing the database schema. What happens when the app is updated on a user's device? Is all the data in the old database removed, or does it remain with the same database and data?
2. For example, I have an app on the App Store that contains an SQLite database. After some time I want to update the app version with a changed database schema. What happens when the app is updated on a user's device? The DB file must change, but how can we preserve the old data entries that are in the old DB version? I have read many posts but am still confused about which approach I should use.
Thanks in advance for helping.
It is quite simple. When the application is updated, the Documents folder remains intact, so you can assume that the user data continues to be available.
For case 2, make sure you do not compromise the data in your update routines at the first start after the update. The app should detect that it is running a new version and modify the schema (e.g. via SQL scripts) while taking care not to delete user data.
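As an illustration of that "detect the version and migrate" step, here is a minimal sketch in Objective-C using the SQLite C API and PRAGMA user_version; the table and column names are placeholders and not from the original question.

#import <sqlite3.h>

// Call once at launch, before the app touches the database.
// The DB file lives in Documents, so it survives an App Store update.
static void migrateDatabaseAtPath(NSString *path) {
    sqlite3 *db = NULL;
    if (sqlite3_open([path UTF8String], &db) != SQLITE_OK) return;

    // PRAGMA user_version is an integer stored inside the SQLite file,
    // convenient for tracking the schema version across app versions.
    int version = 0;
    sqlite3_stmt *stmt = NULL;
    if (sqlite3_prepare_v2(db, "PRAGMA user_version;", -1, &stmt, NULL) == SQLITE_OK) {
        if (sqlite3_step(stmt) == SQLITE_ROW) {
            version = sqlite3_column_int(stmt, 0);
        }
        sqlite3_finalize(stmt);
    }

    if (version < 1) {
        // Example migration: adding a column keeps all existing rows intact.
        sqlite3_exec(db, "ALTER TABLE items ADD COLUMN notes TEXT;", NULL, NULL, NULL);
        sqlite3_exec(db, "PRAGMA user_version = 1;", NULL, NULL, NULL);
    }

    sqlite3_close(db);
}

One caveat: this only applies if the database is created in, or copied to, the Documents folder. A database shipped inside the app bundle is replaced with each update, so it should be copied out to Documents on first launch.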

WMS layer is empty although mxd shows data

I use ArcGIS Server to serve a map of points from a database. When I create and publish the mxd as a WMS service in AGS, everything looks fine. But after a while - the day after, or thereabouts - the map shows nothing. Every request to the WMS server for that layer comes back empty. Opening the mxd in ArcMap shows the correct data as expected; only the WMS calls are faulty.
What could be the problem?
Details:
I create an mxd file, and add data to it from a non-spatial database. To create the layers I right-click on the data source and select "Display XY data..." and select the X and Y columns from the data.
In AGS Manager I select "Add new service" and point to that mxd file, using all the default settings. I have also tried the simpler "Publish GIS resource" and got the same results.
It appears that the way I set up the data connections in the mxd file caused the problem. ArcGIS Server uses a system account to run all services ("ArcGISWS" in our instance), and that account didn't have access to all the data I referenced in the mxd. After changing to an mxd that was set up using the ArcGISWS account, everything works as expected. I guess the solution for anyone doing this is to log in to the ArcGIS Server machine with the intended account (ArcGISWS) and create the mxd there; that way, any problems with data access will already be obvious in ArcMap, and the user can resolve them before publishing the service.
At least, that is what I'll recommend. :-)
The reason the map worked at first was probably a connection cache or something similar: when AGS recycled the connections or pools during the night, that cached connection was removed, leaving the ArcGISWS account to make the connection itself, which it couldn't do due to lack of permissions.
Hope this attempt at a solution helps someone.
