Synapse Serverless - other than STRING_AGG or FOR XML, how to convert rows to columns in serverless?

I'm using Synapse Serverless and I want to convert rows to columns. I used STRING_AGG, but due to the nvarchar(8000) limitation I was getting the error "STRING_AGG aggregation result exceeded the limit of 8000 bytes. Use LOB types to avoid result truncation". Because of that I tried to recreate the query with FOR XML PATH and STUFF, but Serverless won't support it. Is there any workaround?

The error "STRING_AGG aggregation result exceeded the limit of 8000 bytes. Use LOB types to avoid result truncation" has a workaround. STRING_AGG has a limit of 8000 bytes by default, but when the result exceeds this limit you can switch the result type to nvarchar(max) or varchar(max) by applying CONVERT inside STRING_AGG.
Refer to the following link to see how to do the above conversion and for more information about STRING_AGG with CONVERT:
https://www.mssqltips.com/sqlservertutorial/9371/sql-string-agg-function/
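For example, a minimal sketch of that conversion (the table and column names here are illustrative, not taken from your query):
SELECT STRING_AGG(CONVERT(NVARCHAR(MAX), ColumnName), ',') AS ConcatenatedValues
FROM dbo.SourceTable;
Because the value passed to STRING_AGG is already nvarchar(max), the aggregated result is no longer capped at 8000 bytes.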
There is a relational operator called PIVOT which transforms row data into columns (an UNPIVOT operator is also available and does the exact opposite). The following is the syntax of PIVOT:
SELECT (ColumnNames)
FROM (TableName)
PIVOT
(
AggregateFunction(ColumnToBeAggregated)
FOR PivotColumn IN (PivotColumnValues)
) AS (Alias)
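For example, a minimal hedged sketch of a PIVOT that turns one row per year into one column per year (the table, columns, and year values are illustrative):
-- Pivot yearly sales rows into one column per year
SELECT Product, [2021], [2022]
FROM (SELECT Product, SalesYear, Amount FROM dbo.Sales) AS src
PIVOT
(
    SUM(Amount)
    FOR SalesYear IN ([2021], [2022])
) AS pvt;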
Refer to the first link below for a complete explanation of PIVOT, and check the second link to see whether any of the methods there helps you achieve the requirement:
https://www.appsloveworld.com/sql-server-simple-way-to-transpose-rows-into-columns/
Efficiently convert rows to columns in sql server

Related

Google Sheets: API To Get Crypto Prices and History?

As GOOGLEFINANCE() seems very limited in the cryptocurrencies it supports, are there any (free?) APIs that I can use to get data from?
Although I use GF() for ETH and BTC, I'm specifically looking for Price and Historical Closing Prices on ADA (Cardano).
I've searched the forum for suggestions, there aren't many and most are old. Binance's API seemed OK, but it gives prices in USDT instead of USD.
If anyone is interested, I found an API that offers a free key, although limited in the number of daily calls you can make: CoinAPI.
It seems very powerful, with quotes available in most currencies. So far, I've managed to get a current price:
(1) shows the raw data returned: two rows of semicolon-delimited text.
(2) wraps a QUERY() around IMPORTDATA(), using offset 1 plus parameter 0 so the header row is not returned, then wraps all of that in SPLIT() to separate the delimited text into columns.
(3) wraps (2) with INDEX() so I can get just the Price in the 4th col.
As this value will not automatically update like GOOGLEFINANCE(), I think I'll need to set a Trigger to do that.
I've also retrieved historical data, but I've yet to figure out how to split multiple rows of delimited text from the IMPORTDATA() function.
[Edit] See the solution to splitting multiple rows by #player0 at https://stackoverflow.com/a/69055990/190925.

Dataflow SQL - unsupported column type NUMERIC

I'm trying to set up a Dataflow SQL job to run a query in BigQuery and publish the results to a PubSub topic. I'm not using a Dataflow template; I'm using GCP's Dataflow SQL UI to write a query and configure the output, i.e. the PubSub topic.
The table I'm querying contains String, Date, Timestamp, and Numeric types.
Even if I don't select the column with 'Numeric' data type, I still get a validation error in the editor - unsupported column type NUMERIC.
Is there a way to get around this in Dataflow SQL? Or the source table just can't have columns of Numeric Type?
The numeric types supported in Dataflow SQL are INT64 and FLOAT64 (8 bytes), but not NUMERIC (16 bytes).
I reproduced your issue on my end and it certainly looks like the table cannot be loaded in the first place, even if you are not selecting the NUMERIC column.
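If the NUMERIC column itself is not needed, one possible workaround (untested here; dataset, table, and column names are illustrative) is to expose the data through a BigQuery view or copy that casts NUMERIC to FLOAT64, and point the Dataflow SQL query at that instead of the original table:
-- Illustrative BigQuery view that removes the unsupported NUMERIC type
CREATE OR REPLACE VIEW my_dataset.my_table_for_dataflow AS
SELECT
  string_col,
  date_col,
  timestamp_col,
  CAST(numeric_col AS FLOAT64) AS numeric_col_as_float
FROM my_dataset.my_table;
Whether the Dataflow SQL UI accepts a view as a source may need verification; a materialized copy of the table with the cast applied should also work.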

How do you INSERT into influxDB using the SQL-like interface?

Is it possible to INSERT data into series / measurements using the SQL-like interface on InfluxDB?
Yes, you can simply INSERT a Line Protocol string.
An example from Getting Started:
INSERT cpu,host=serverA,region=us_west value=0.64
A point with the measurement name of cpu and tags host and region has now been written to the database, with the measured value of 0.64.
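The same Line Protocol pattern extends to multiple tags and fields; a sketch with illustrative measurement, tag, and field names:
INSERT temperature,machine=unit42,type=assembly external=25,internal=37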

Why does my importrange query fail when I "wrap" with arrayformula

I have the following formula which is currently returning the expected results -
=join(",",query(importrange(vlookup(mid(G4,1,find(",",G4)-1),xref,2,false),vlookup(mid(G4,1,find(",",G4)-1),xref,3,false)),"Select Col3,Col6,Col9 where Col1 = '"&mid(G4,find(",",G4)+1,20)&"' "))
However, I naturally want to make this as dynamic and flexible as possible so I would like to "wrap" it in an arrayformula which ends up like this -
=arrayformula(join(",",query(importrange(vlookup(mid(G4:G,1,find(",",G4:G)-1),xref,2,false),vlookup(mid(G4:G,1,find(",",G4:G)-1),xref,3,false)),"Select Col3,Col6,Col9 where Col1 = '"&mid(G4:G,find(",",G4:G)+1,20)&"' ")))
This formula gives me "Unable to parse query string for Function QUERY parameter 2: NO_COLUMNCol3" error.
I tried to include an iferror to try to trap some error but this made no difference.
I tried various angles to debug and basically focused on the importrange not providing the data to the query once it was wrapped by the arrayformula. I tried to explicitly reference the external sheet key and range in the importrange function, instead of using the lookups, and this did give me a result - but only in the first cell. There should also have been a result returned about 4 rows down.
If I copy the formula down the column, I do get the expected result 4 rows down, but this obviously defeats the purpose of the arrayformula.
In my research in the Google forums there were some suggestions that arrayformula and importrange may not play well together, but no hard and fast facts.
I noticed in this forum that the combination of the two functions has been mentioned, but with no indication that they do not work together, so I am wondering if there is just some little thing I am missing in my syntax that is causing my ideal scenario not to work?
I don't think this will work for a couple of reasons.
Firstly, not all functions in Google Sheets can be iterated with ARRAYFORMULA, and QUERY is one of those that cannot. As far as I know this is because the output of QUERY can itself be an array, so it is not possible to iterate an array output across another array (i.e. your results range).
Secondly, JOIN works across either a single row or a single column, whereas your query outputs 3 columns. The arrayformula result would therefore consist of an array with multiple rows and columns, which JOIN cannot use.
I think the best solution is to use the IFERROR as you've described, and copy the single-row formula down the entire column - that way the blank records will not show as errors, but you will be able to add new values to column G and they will be picked up automatically.
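As a sketch, that is just your original single-cell formula wrapped in IFERROR with a blank fallback (everything else is unchanged from your working version):
=iferror(join(",",query(importrange(vlookup(mid(G4,1,find(",",G4)-1),xref,2,false),vlookup(mid(G4,1,find(",",G4)-1),xref,3,false)),"Select Col3,Col6,Col9 where Col1 = '"&mid(G4,find(",",G4)+1,20)&"' ")),)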

division not working with continuous query in influxdb

I am trying to generate a continuous query in InfluxDB. The query is to fetch the hits per second by doing (1/response time) on the value which I am already getting for another series (say series1).
Here is the query:
select (1000/value) as value from series1 group by time(1s) into api.HPS;
My problem is that the query "select (1000/value) as value from series1 group by time(1s)" works fine and provides results, but as soon as I store the result into a continuous query, it starts to give me a parse error.
Please help.
Hard to give any concrete advice without the actual parse error returned and perhaps the relevant log lines. Try providing those to the mailing list at influxdb@googlegroups.com or email them to support@influxdb.com.
There's an email on the Google Group that might be relevant, too. https://groups.google.com/d/msgid/influxdb/c99217b3-fdab-4684-b656-a5f5509ed070%40googlegroups.com
Have you tried using whitespace between the values and the operator? E.g. select (1000 / value) AS value....
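In other words, the continuous query from above rewritten with whitespace around the operator (untested, but worth trying first):
select (1000 / value) as value from series1 group by time(1s) into api.HPS;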
