I'm trying to link a custom query, but since I'm new to this, I'm getting stuck when Sheets tries to read timestamps.
I'm getting the following error:
Something's wrong. Please try again later: Error while parsing the query: Syntax error: Table name contains '-' character. It needs to be quoted: 'xxxxxxxxxx' [at 3:10]
SELECT COUNT(DISTINCT sequence) as aaaaaaaa,bbbbbbb,
EXTRACT (date from CreationDateBR) as data,
FROM xxxxxxxxxxxx
WHERE CreationDateBR BETWEEN '2021-01-01' AND '2022-01-01'
AND loja IN ('Marketplace C')
AND parceiros NOT IN ('partner A')
GROUP BY parceiros,data
ORDER BY data,parceiros asc
The error message states that the issue is in the table name. You can try putting backticks (`) around it:
FROM `xxxxxxxxxxxx`
A good trick to get the formatting right is to open your table in the BigQuery UI and click "Query this table": a correctly formatted query will open in a new query editor.
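For example, if the table lives in a project whose id contains a hyphen, the backticked, fully-qualified reference would look like the sketch below (my-project, my_dataset and my_table are placeholders for your own names):
-- placeholder project/dataset/table names; the backticks are what matters
SELECT COUNT(DISTINCT sequence) AS total
FROM `my-project.my_dataset.my_table`
WHERE CreationDateBR BETWEEN '2021-01-01' AND '2022-01-01'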
A fragment of SQL in the Informix dialect
SELECT INSUREDNAME
FROM sc5100car3gdb#idp_5100_cb:PRPCINSURED P
WHERE P.PROPOSALNO = A.PROPOSALNO
What does this syntax mean?
The SQL fragment is:
SELECT INSUREDNAME
FROM sc5100car3gdb#idp_5100_cb:PRPCINSURED P
WHERE P.PROPOSALNO = A.PROPOSALNO
This means that there is a table PRPCINSURED in database sc5100car3gdb hosted on Informix server idp_5100_cb; inside the query, the table will be referred to by the alias P. It has columns INSUREDNAME and PROPOSALNO. Further, this must be a fragment of an SQL statement. The WHERE clause uses the alias P, but also references another table with the alias (or perhaps name) A. However, the context defining A is not shown; as it stands, the A will trigger an error. (When I ran an analogous query, I got the error SQL -217: Column (a) not found in any table in the query (or SLV is undefined).)
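To illustrate, the fragment only becomes valid once something else in the statement defines the alias A. A minimal sketch, where prpcapplication is a hypothetical second table that also has a PROPOSALNO column:
SELECT A.PROPOSALNO, P.INSUREDNAME
  FROM prpcapplication A,                        -- hypothetical table providing the alias A
       sc5100car3gdb#idp_5100_cb:PRPCINSURED P   -- the remote table from the fragment
 WHERE P.PROPOSALNO = A.PROPOSALNO;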
See the Informix Guide to SQL: Syntax manual on database object names for more information about the notation used for the table name.
I am joining a KSQL stream and a KSQL table. Both are mapped to the same key.
But no data is arriving in the resulting stream.
create stream kz_yp_loan_join_by_bandid WITH (KAFKA_TOPIC='kz_yp_loan_join_by_bandid',VALUE_FORMAT='AVRO') AS
select ypl.loan_id, ypl.userid ,ypk.name as user_band_id_name
FROM kz_yp_loan_stream_partition_by_bandid ypl
INNER JOIN kz_yp_key_table ypk
ON ypl.user_band_id = ypk.id;
No data appears in the stream kz_yp_loan_join_by_bandid.
But if I simply run:
select ypl.loan_id, ypl.userid ,ypk.name as user_band_id_name
FROM kz_yp_loan_stream_partition_by_bandid ypl
INNER JOIN kz_yp_key_table ypk
ON ypl.user_band_id = ypk.id;
There is data present.
This shows that the stream is not being written to, but why is that?
I have tried redoing the entire setup.
A few things to check:
If you want to process all the existing data as well as new data, make sure that before you run your CREATE STREAM … AS SELECT ("CSAS") you have run SET 'auto.offset.reset' = 'earliest'; (a full sketch follows below)
If the join is returning data when run outside of the CSAS then this may not be relevant, but it's always good to check that your join satisfies all of the join requirements
Check the KSQL server log in case there's an issue with writing to the target topic, creating the schema on the Schema Registry, etc.
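For the first point, a minimal sketch that reuses the statements from the question (topic, stream and column names exactly as given there):
SET 'auto.offset.reset' = 'earliest';

CREATE STREAM kz_yp_loan_join_by_bandid WITH (KAFKA_TOPIC='kz_yp_loan_join_by_bandid', VALUE_FORMAT='AVRO') AS
  SELECT ypl.loan_id, ypl.userid, ypk.name AS user_band_id_name
  FROM kz_yp_loan_stream_partition_by_bandid ypl
  INNER JOIN kz_yp_key_table ypk
    ON ypl.user_band_id = ypk.id;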
These references will be useful:
https://www.confluent.io/blog/troubleshooting-ksql-part-1
https://www.confluent.io/blog/troubleshooting-ksql-part-2
I'm trying to sort documents of type 'Case' by the 'Name' of the 'Contact' they belong to in Solr. But cases have no 'ContactName' field or similar, only 'ContactId'.
The only examples I could find are variations of the example at this link: https://wiki.apache.org/solr/Join
But I couldn't apply it to my situation because of the sorting afterwards. The following gives me the cases I want, but I can't sort them by contact name because the query only returns the fields of the cases.
{!join from=Id to=ContactId}*:*
The SQL equivalent of what I want would be something like:
SELECT Case.Id, Contact.Name
FROM Case
LEFT JOIN Contact
ON Case.ContactId = Contact.Id
ORDER BY Contact.Name ASC;
So to answer my own question after some digging and a Solr training:
It is not best practice to use joins in a NoSQL database like Solr. If you need joins, your database is structured wrong. You should index everything you need in the document itself, even if it is redundant. So in my case, I should index the 'Contact.Name' field in my 'Case' documents.
Still, it is apparently possible to use SQL queries in Solr when absolutely needed, provided you're using SolrCloud. Solr's SQL support does not include joins, but it is possible to work around that as follows:
SELECT s1.Id
FROM salesforce s1, salesforce s2
WHERE s1._type_ = 'Case' and s2._type_ = 'Contact' AND s1.ContactId = s2.Id
ORDER BY s2.Name ASC
It should be noted that the fields after the '.', like the 'Id' in 's1.Id', must have docValues enabled in the schema. See the Solr Reference Guide section on docValues for more information.
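For comparison, once the contact name is indexed directly on the Case documents as suggested above (ContactName here is a hypothetical denormalized field), no join is needed and the statement in the same SQL dialect becomes much simpler:
SELECT Id, ContactName
FROM salesforce
WHERE _type_ = 'Case'
ORDER BY ContactName ASC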
My experience is with SQL, but I am working on learning Parse Server data management. In the example below I show how I would use SQL to represent the data I currently have stored in my Parse Server classes. I am trying to present all the users, a count of how many images they have uploaded, and a count of how many images they have liked, for an app where users can upload images and can also scroll through and like other people's images. I store the id of the user who uploads an image on the image table, and I store an array column on the image table of all the user ids that have liked it.
Using SQL I would have normalized this into 3 tables (user, image, user_x_image), joined the tables, and then aggregated the result. But I am trying to learn the right way to do this using Parse Server, where my understanding is that the best practice is to structure the data the way I have below. What I want to do is produce a "leader board" that shows which users have uploaded the most images or liked the most images, to inspire engagement. Even just links to examples of how to join/aggregate Parse data sets would be very helpful. If I wasn't clear about what I am trying to achieve, please let me know in the comments and I will add updates.
-- SQL approximation of data structured in parse
create volatile table users
( user_id char(10)
, user_name char(50)
) on commit preserve rows;
insert into users values('1a','Tom');
insert into users values('2b','Dick');
insert into users values('3c','Harry');
insert into users values('4d','Simon');
insert into users values('5e','Garfunkel');
insert into users values('6f','Jerry');
create volatile table images
( image_id char(10)
, user_id_owner char(10) -- The object Id for the parse user that uploaded
, UsersWhoLiked varchar(100) -- in Parse class this is array of user ids that clicked like
) on commit preserve rows;
insert into images values('img01','1a','["4d","5e"]');
insert into images values('img02','6f','["1a","2b","3c"]');
insert into images values('img03','6f','["1a","6f"]');
-----------------------------
-- DESIRED RESULTS
-- Tom has 1 uploads and 2 likes
-- Dick has 0 uploads and 1 likes
-- Harry has 0 uploads and 1 likes
-- Simon has 0 uploads and 1 likes
-- Garfunkel has 0 uploads and 1 likes
-- Jerry has 2 uploads and 1 likes
-- How to do with normalized data structure
create volatile table user_x_image
( user_id char(10)
, image_id char(10)
, relationship char(10)
) on commit preserve rows;
insert into user_x_image values('4d','img01','liker');
insert into user_x_image values('5e','img01','liker');
insert into user_x_image values('1a','img02','liker');
insert into user_x_image values('2b','img02','liker');
insert into user_x_image values('3c','img02','liker');
insert into user_x_image values('1a','img03','liker');
insert into user_x_image values('6f','img03','liker');
-- Return the image likers/owners
sel
a.user_name
, a.user_id
, coalesce(c.cnt_owned,0) cnt_owned
, sum(case when b.relationship='liker' then 1 else 0 end) cnt_liked
from
users A
left join
user_x_image B
on a.user_id = b.user_id
left join (
sel user_id_owner, count(*) as cnt_owned
from images
group by 1) C
on a.user_id = c.user_id_owner
group by 1,2,3 order by 2
-- Returns desired results
First, I am assuming you are running Parse Server with a MongoDB database (Parse Server also supports Postgres, which can make relational queries a little easier). Because of this, it is important to note that, although Parse Server exposes relational capabilities in its API, behind the scenes we are dealing with a NoSQL database. So, let's go through the options.
Option 1 - Denormalized Data
Since it is a NoSQL database, I'd prefer to have a third collection called LeaderBoard. You could add an afterSave trigger to the UserImage class and keep LeaderBoard always up to date. When you need the data, you can run a very simple and fast query. I know it sounds a little strange to an experienced SQL developer to keep denormalized data, but it is the best option in terms of performance if this collection has more reads than writes.
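Sticking with the SQL approximation used in the question (this is only a sketch of the data shape, not actual Parse code), the denormalized LeaderBoard collection would look something like this:
create volatile table leader_board
( user_id char(10)
, user_name char(50)
, cnt_uploaded integer
, cnt_liked integer
) on commit preserve rows;
-- rows kept up to date by the afterSave trigger, e.g.:
insert into leader_board values('1a','Tom',1,2);
insert into leader_board values('6f','Jerry',2,1);
-- reading the leader board is then a single trivial query:
select user_name, cnt_uploaded, cnt_liked
from leader_board
order by cnt_uploaded desc, cnt_liked desc;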
Option 2 - Aggregate
MongoDB supports aggregates (https://docs.mongodb.com/manual/aggregation/) and has a pipeline stage called $lookup (https://docs.mongodb.com/manual/reference/operator/aggregation/lookup/) that you can use to perform your query in a single API call/database operation. Parse Server supports aggregates as well in its API and JS SDK (https://docs.parseplatform.org/js/guide/#aggregate), but unfortunately not directly from client code in Swift, because this operation requires the master key in Parse Server. Therefore, you will need to write a Cloud Code function that performs the aggregate query for you and then call that cloud function from your Swift client code.