I am using this Google Apps Script to transfer data from a Google web form to a Fusion Table:
http://fusion-tables-api-samples.googlecode.com/svn/trunk/FusionTablesFormSync/src/formsync.js
This works perfectly well, except when any field of my web form includes a backslash (\) character; the script then fails with "Exception: Invalid query: Parse error near...."
Example of web form input field data:
10^{23}\;atoms
Is it possible to fix this in the script (and if so, where)? It probably builds a malformed query for the Fusion Table.
I just committed a new version of the script that fixes the unescaped backslashes, as well as correcting the timestamp format so Fusion Tables knows what to do with it.
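For anyone patching an older copy of the script by hand, the gist of the fix is to escape each backslash in a form value before splicing it into the SQL-style INSERT sent to Fusion Tables. A minimal sketch of the idea, in Python for illustration (the real fix lives in formsync.js; the table id and column name below are made up):

def escape_for_fusion(value):
    # Double every backslash so the Fusion Tables parser sees a literal one;
    # without this, inputs like 10^{23}\;atoms break the generated query.
    return value.replace("\\", "\\\\")

query = "INSERT INTO 1abcTableId (Formula) VALUES ('%s')" % escape_for_fusion(r"10^{23}\;atoms")
print(query)  # INSERT INTO 1abcTableId (Formula) VALUES ('10^{23}\\;atoms')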
When I use Hive SQL with Chinese comments to create a table on the web UI (HUE), it pops up 'ascii codec can't encode characters in position'. I then tried to change the Python default encoding to UTF-8 to fix it, but the exception changed into a SemanticException.
I double-checked on Hive over a Beeline connection, and it can execute DDL and DQL containing Chinese successfully, such as:
select *, 'chinese中文' as test from tb_name
create table tb_name (id string comment 'chinese中文') comment 'chinese中文' row format ....
desc formatted tb_name
I also changed the corresponding column to UTF-8 in the desktop_document2 table in Hue's metastore to make sure Chinese displays on the UI.
So far my guess is that the encoding goes wrong somewhere between typing on the UI and the execute stage. Need help, thank you so much.
BTW, Hue is 4.10 and Hive is 2.3.7. I had the same problem when querying a MySQL table, but fixed it there by adding '?characterEncoding=UTF-8' to the JDBC URL; hope that provides some inspiration.
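For what it's worth, that error text matches a classic Python 2 pitfall (Hue deployments of that era typically run on Python 2.7): implicitly encoding a unicode string with the default ascii codec. A minimal, hypothetical reproduction, to be run under Python 2 (it reproduces the message, but not necessarily HUE's exact code path):

comment = u"chinese中文"
try:
    str(comment)  # implicit encode via the default 'ascii' codec
except UnicodeEncodeError as e:
    print(e)  # 'ascii' codec can't encode characters in position 7-8: ordinal not in range(128)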
I have some code which gets details of lists in a SharePoint site then later wants to find out if a list with the same name still exists. This works fine except for list names that contain a colon - I find Graph misinterprets the colon and 'corrupts' the URL.
For instance, in Graph Explorer when I give it the following query:
https://graph.microsoft.com/v1.0/sites('mysite.sharepoint.com,aa-aa-aa,bb-bb-bb')/lists('19:abcdef#thread.tacv2_wiki')
The error response contains the following in the 'message' property:
The expression \"sites('mysite.sharepoint.com,aa-aa-aa,bb-bb-bb')/lists('19')/abcdef#thread.tacv2_wiki\" is not valid.
Note that it's split the original URL, thinking the colon is the start of a new segment in the path, even though it's inside a quote.
I've tried all sorts of quoting of the colon (%3A and %253A and %25253A) and different styles of quote characters, but they all either return the same error or give a parsing error.
More information: I specifically want to search by name, not by the original id (which would be much easier). I'm actually using the Graph Managed API in code, but it generates the same error (you'd think it would internally know how to quote). The list is actually a hidden one created in a Teams site to manage channel information.
I was also able to reproduce your issue, but as a workaround you can use the $filter query parameter to get the list, using the query below.
https://graph.microsoft.com/v1.0/sites/soaadteam.sharepoint.com,c1178396-d845-46fa-bc0c-453d2951dad5,19ee9a1e-001d-48f1-9ee8-b0adfde54e45/lists?$filter=displayName eq '19:abcdef#thread.tacv2_wiki'
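The same workaround over raw REST, sketched in Python with the requests library (the site id and list name are the placeholders from this thread, and the token line is a stand-in for real auth; requests percent-encodes the colon and the '#' in the filter value):

import requests

SITE_ID = "mysite.sharepoint.com,aa-aa-aa,bb-bb-bb"  # placeholder composite site id
LIST_NAME = "19:abcdef#thread.tacv2_wiki"

resp = requests.get(
    "https://graph.microsoft.com/v1.0/sites/%s/lists" % SITE_ID,
    params={"$filter": "displayName eq '%s'" % LIST_NAME},  # encoded safely by requests
    headers={"Authorization": "Bearer <access-token>"},     # token acquisition not shown
)
resp.raise_for_status()
matches = resp.json().get("value", [])
print(matches[0]["id"] if matches else "no list with that displayName")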
We used to have three different JIRA projects and merged them all into one new project, resulting in new IssueKeys for all issues involved.
Unfortunately, our test automation uses the IssueKey to update the issue with test results (via an SQL INSERT statement), and I would like to avoid updating the list of IssueKeys in the suite.
I can think of two ways:
Addressing the issues by the old IssueKey. This seems to work in JIRA JQL search (issuekey=ISSUE-OLD finds the same issue as issuekey=ISSUE-NEW), but not for the SQL INSERT statement.
Getting a list of old-new IssueKey pairs. For example, in JIRA under "Activity" and "All", I can see entries that log the changes. Exporting those worklogs might be a great help, but there might be other ways.
Thanks in advance,
Florian
I assume you're updating a test-management DB, not the Jira DB (which would not be good).
You might also be interested in the moved_issue_key table, which is where Jira stores the previous issue keys (e.g. ABC-123) and maps them to an issue id (the id in the jiraissue table).
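If you have read access to the Jira database, here is a hedged sketch of pulling the whole old-to-new mapping in one go, assuming the standard schema (moved_issue_key holds the old key plus issue id; jiraissue and project supply the current key) and a MySQL backend:

import mysql.connector  # assumption: the Jira DB runs on MySQL

conn = mysql.connector.connect(host="jira-db", user="ro_user",
                               password="...", database="jiradb")
cur = conn.cursor()
cur.execute("""
    SELECT m.old_issue_key,
           CONCAT(p.pkey, '-', i.issuenum) AS new_key
    FROM moved_issue_key m
    JOIN jiraissue i ON i.id = m.issue_id
    JOIN project   p ON p.id = i.project
""")
mapping = dict(cur.fetchall())  # {'OLD-123': 'NEW-456', ...}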
You can use the REST API with curl:
curl -D- -u user:password -X GET -H "Content-Type: application/json" https://url.com/rest/api/2/issue/ISSUE-OLD
The response will contain the key value as ISSUE-NEW with other details.
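If you'd rather not touch the database, here is a sketch of bulk-building the same mapping over REST: Jira follows moved issues, so requesting the old key returns the issue with its current key.

import requests

JIRA_URL = "https://url.com"      # base URL from the curl example above
AUTH = ("user", "password")
old_keys = ["ISSUE-OLD"]          # your pre-merge keys

mapping = {}
for old in old_keys:
    r = requests.get("%s/rest/api/2/issue/%s" % (JIRA_URL, old),
                     auth=AUTH, params={"fields": "summary"})
    r.raise_for_status()
    mapping[old] = r.json()["key"]  # current (post-move) issue key
print(mapping)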
I have recently opened a new spreadsheet:
https://docs.google.com/spreadsheets/d/1yapaaaFn0mtJF0CMiIil4Y1uYCqS97jIWL4iBZMPAKo/pubhtml
I want to find the 'title' whose url = http://www.ettoday.net/news/20140327/339912.htm
I read the Google API docs and tried this:
spreadsheets.google.com/feeds/list/1yapaaaFn0mtJF0CMiIil4Y1uYCqS97jIWL4iBZMPAKo/0/private/full?sq=url%3D%27http%3A%2F%2Fwww.peoplenews.tw%2Fnews%2F29813808-befa-45b6-9123-8dcef851af45%27
but it didn't work.
I also tried:
docs.google.com/spreadsheets/d/1yapaaaFn0mtJF0CMiIil4Y1uYCqS97jIWL4iBZMPAKo/gviz/tq?tq=SELECT%20topic%20WHERE%20url%3D'http%3A%2F%2Fwww.peoplenews.tw%2Fnews%2F29813808-befa-45b6-9123-8dcef851af45'
but it didn't work either.
Is there any way to do this kind of query?
I know this is old, but I just worked through a similar issue.
Querying a Google spreadsheet via URL params requires the use of their data visualization query language (nearly identical to SQL).
Your query must be encoded and then added as a parameter to the end of your URL (Google's documentation on this includes an encoder).
Using your example url (notice no "/pubhtml"):
https://docs.google.com/spreadsheets/d/1yapaaaFn0mtJF0CMiIil4Y1uYCqS97jIWL4iBZMPAKo
To query this sheet, you must append this URL with /gviz/tq?tq=YOUR_ENCODED_QUERY_STRING
YOUR_ENCODED_QUERY_STRING for your case would be:
SELECT * where B contains "ettoday"
Note #1 - I used "B" and not "url". This is because you must query based on the spreadsheet cell identifier (A-Z), not the label/contents.
Note #2 - I could not get it to work when I queried with a fully qualified URL, so I used contains instead.
After encoding that string we get:
SELECT%20*%20where%20B%20contains%20%22ettoday%22
Slap that onto your URL (with /gviz/tq?tq=) and you have:
https://docs.google.com/spreadsheets/d/1yapaaaFn0mtJF0CMiIil4Y1uYCqS97jIWL4iBZMPAKo/gviz/tq?tq=SELECT%20*%20where%20B%20contains%20%22ettoday%22
Which works for me :)
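If you want the result somewhere other than a browser, here is a small sketch of fetching the same query from Python: asking gviz for CSV output (tqx=out:csv) avoids having to strip the JSONP wrapper it returns by default.

import csv, io, requests

base = ("https://docs.google.com/spreadsheets/d/"
        "1yapaaaFn0mtJF0CMiIil4Y1uYCqS97jIWL4iBZMPAKo/gviz/tq")
params = {"tq": 'SELECT * where B contains "ettoday"', "tqx": "out:csv"}

resp = requests.get(base, params=params)
resp.raise_for_status()
for row in csv.reader(io.StringIO(resp.text)):
    print(row)  # each matching spreadsheet row as a list of strings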
The spreadsheets.google.com query is the old method of accessing the google spreadsheets.
The new method involves the docs.google.com query.
Here is a working one:
https://docs.google.com/spreadsheets/d/1chFDkz5Fqus1ODgtdEGNt4Mq2nxnkKnuqbEB4LaZF6o/gviz/tq
That was retrieved from:
query to new google spreadsheets
Some of the old query parameters still work, such as "?range=A1:B"; however, not all of them do. Unfortunately, I have not yet found a good reference for the new API. Google claims that all the features of v1 and v2 of the API are available in this new one, but it sure doesn't feel like it to me.
Note: the old query method still works with the old version of google spreadsheets and you should use it if you haven't converted the sheet you are using. The new method is just for sheets that have been converted.
Note2: Google forms no longer seems to work consistently with the old spreadsheets, so you will probably be forced to delete the old sheet and have the form generate a new one which will be the new version and require the new url to query it.
Try this:
https://docs.google.com/spreadsheets/u/0/d/1yapaaaFn0mtJF0CMiIil4Y1uYCqS97jIWL4iBZMPAKo/gviz/tq?tqx=out:html&tq=SELECT+*+where+B+contains+"http://www.ettoday.net/news/20140327/339912.htm"
tq=SELECT+*+where+B+contains+"http://www.ettoday.net/news/20140327/339912.htm"
SELECT * where B contains "http://www.ettoday.net/news/20140327/339912.htm"
More info here.
I had a project in Redmine with more than 600 issues. I moved all the issues to a different project. I had no idea that the move deletes all the data for the custom fields!
So all the custom field values are now lost. I did not back up the database before this action, as I really did not think I was going to do any harm by moving issues; moving is a native function in the UI.
What I noticed, though, is that production.log contains events for all creations and updates. All my 600 issues are in order in the production log. How can I use these log statements to repeat the actions? If I can import all the logged actions, I can migrate the custom field values they contain back into the original Redmine instance and restore my data.
Entries look like this:
Processing IssuesController#update (for XX.XX.XX.X at 2013-02-07 11:19:54) [PUT]
Parameters: {"_method"=>"put", "authenticity_token"=>"nWNSSRYjHhN0BGb+Ya8M4pYWPPgsfdM=", "issue"=>{"assigned_to_id"=>"", "custom_field_values"=>{"10"=>"", "5"=>"Not translated", "1"=>"fi", "8"=>"http://screencast.com/t/ODknR8K", "9"=>"", "3"=>"", "4"=>""}, "done_ratio"=>"0", "due_date"=>"", "priority_id"=>"4", "estimated_hours"=>"", "start_date"=>"2013-02-07", "subject"=>"1\tInstallation in English", "tracker_id"=>"1", "lock_version"=>"0", "description"=>"Steps:\r\nOpen Nitro\r\n\r\nProblem:\r\nNot localized"}, "controller"=>"issues", "time_entry"=>{"hours"=>"", "activity_id"=>"", "comments"=>""}, "attachments"=>{"1"=>{"description"=>""}}, "id"=>"3876", "action"=>"update", "commit"=>"Submit", "notes"=>""}
I am really hoping that there is a way; any help will be greatly appreciated.
You could use a decent text editor and/or spreadsheet application: do a massive find-and-replace to construct a series of SQL UPDATE commands, then run them directly on the database (TEST FIRST!!). A sketch of the kind of statements to aim for follows the steps below.
Extract from log
Remove unnecessary information
Copy into spreadsheet
Split text into columns
Add columns with the necessary SQL fragments ("UPDATE ... SET ..." etc.) and copy them into all rows of those columns
Join columns to make one text command per row
Export joined data to a text file
Run against test database as sql
If all goes well run against production database as sql
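Hedged example of the kind of statements the spreadsheet should end up producing, assuming Redmine's standard custom_values table (customized_type, customized_id, custom_field_id, value); the ids and values here are lifted from the log entry above:

custom_values = {"5": "Not translated", "1": "fi", "8": "http://screencast.com/t/ODknR8K"}
issue_id = 3876

for field_id, value in custom_values.items():
    print("UPDATE custom_values SET value = '%s' "
          "WHERE customized_type = 'Issue' AND customized_id = %d "
          "AND custom_field_id = %s;" % (value.replace("'", "''"), issue_id, field_id))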
The log entry, following "Parameters:", looks like a regular Ruby hash definition. I'd parse that out and eval it back into a hash variable.
From there you will need to peel off elements and insert them into a database. I'd do that using Sequel, but use what works for you.
Talk to the Redmine support people and get the schema for their tables so you can figure out what data goes where and which database driver is needed.
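A rough sketch of that parsing step (in Python rather than Ruby, for illustration): the text after "Parameters:" becomes a valid Python dict literal once "=>" is rewritten to ":", assuming no value itself contains "=>", and ast.literal_eval parses it without executing anything.

import ast

def parse_parameters(line):
    # The log line is: 'Parameters: {"_method"=>"put", ...}' (a Ruby hash literal).
    ruby_hash = line.split("Parameters:", 1)[1].strip()
    return ast.literal_eval(ruby_hash.replace("=>", ": "))

with open("production.log") as log:
    for line in log:
        if "Parameters:" not in line or "custom_field_values" not in line:
            continue
        params = parse_parameters(line)
        print(params["id"], params["issue"]["custom_field_values"])
        # e.g. 3876 {'10': '', '5': 'Not translated', '1': 'fi', ...}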