Is it possible to return a custom string as a column in the output of a SELECT query in InfluxQL?
> select time, uuid1, uuid2, id from mydb."autogen".data limit 1;
name: measurement1
time                uuid1 uuid2                                id
----                ----- ------------------------------------ -----
1555321822616000000 ###   00000000-0000-0000-0000-000000000001 45337
I would like to get "hai" instead of ### in my query output.
I tried this one:
> select time, uuid1 as "hai", uuid2, id from mydb."autogen".data limit 1;
name: measurement1
time uuid1 uuid2 id
---- ------------ ----------- ---------------
1555321822616000000 00000000-0000-0000-0000-000000000001 45337
My expected result is as follows:
time                uuid1 uuid2                                id
----                ----- ------------------------------------ -----
1555321822616000000 "hai" 00000000-0000-0000-0000-000000000001 45337
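As far as I can tell, InfluxQL (1.x) cannot project a string literal as a column; `AS` only renames an existing field, which is why the alias attempt above changes nothing. The usual workaround is to substitute the constant client-side after the query returns. A minimal sketch of that idea in Python (the row shape and column names mirror the question; `replace_column` is a hypothetical helper, not part of any InfluxDB client):

```python
# Client-side workaround: InfluxQL's AS only renames a column, so a
# constant value has to be injected after the query result comes back.
def replace_column(rows, column, constant):
    """Return a copy of `rows` with every value in `column` replaced."""
    return [{**row, column: constant} for row in rows]

# Shaped like the question's result set ("###" is the placeholder value):
rows = [{"time": 1555321822616000000,
         "uuid1": "###",
         "uuid2": "00000000-0000-0000-0000-000000000001",
         "id": 45337}]

masked = replace_column(rows, "uuid1", "hai")
# masked[0]["uuid1"] -> "hai"; the other columns are untouched
```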
I currently have 2 tables in KSQLDB:
CREATE STREAM "source-mysql-person" (
"_uid" STRING,
"_created" TIMESTAMP,
"_updated" TIMESTAMP,
"_disabled" TIMESTAMP,
"name" STRING,
"age" INT
) WITH (
KAFKA_TOPIC = 'source-mysql-person',
VALUE_FORMAT = 'AVRO'
);
/*
Field | Type
--------------------------------------------
_uid | VARCHAR(STRING) (primary key)
_created | TIMESTAMP
_updated | TIMESTAMP
_disabled | TIMESTAMP
name | VARCHAR(STRING)
age | INTEGER
--------------------------------------------
*/
CREATE TABLE "table-mysql-enriched-person_contact" WITH (
KAFKA_TOPIC = 'table-mysql-enriched-person_contact',
VALUE_FORMAT = 'AVRO'
) AS SELECT
"pc"."_uid" AS "_uid",
"pc"."_created" AS "_created",
"pc"."_updated" AS "_updated",
"pc"."_disabled" AS "_disabled",
"pc"."is_default" AS "is_default",
"pc"."value" AS "value",
"pc"."person_uid" AS "person_uid",
AS_MAP(
ARRAY['_uid', 'value'],
ARRAY["ct"."_uid", "ct"."value"]
) AS "contact_type"
FROM "table-mysql-person_contact" "pc"
INNER JOIN "table-mysql-contact_type" "ct" ON
"ct"."_uid" = "pc"."contact_type_uid"
EMIT CHANGES;
/*
Field | Type
-----------------------------------------------
_uid | VARCHAR(STRING) (primary key)
_created | TIMESTAMP
_updated | TIMESTAMP
_disabled | TIMESTAMP
is_default | INTEGER
value | VARCHAR(STRING)
person_uid | VARCHAR(STRING)
contact_type | MAP<STRING, VARCHAR(STRING)>
-----------------------------------------------
*/
I want to create a table table-mysql-enriched-person that has the data of table-mysql-person plus, for each "person", a list of the "person_contact" rows related to that "person". For this I am trying to use the following query:
CREATE TABLE "table-mysql-enriched-person" WITH (
KAFKA_TOPIC = 'table-mysql-enriched-person',
VALUE_FORMAT = 'AVRO'
) AS SELECT
"p"."_uid" AS "_uid",
"p"."_created" AS "_created",
"p"."_updated" AS "_updated",
"p"."_disabled" AS "_disabled",
"p"."name" AS "name",
"p"."age" AS "age",
AS_MAP(
ARRAY[
'_uid',
'_created',
'_updated',
'_disabled',
'is_default',
'value',
'contact_type'
],
ARRAY[
"e"."_uid",
"e"."_created",
"e"."_updated",
"e"."_disabled",
"e"."is_default",
"e"."value",
"e"."contact_type"
]
) AS "list_person_contact"
FROM "table-mysql-enriched-person_contact" "e"
INNER JOIN "table-mysql-person" "p" ON
"p"."_uid" = "e"."person_uid"
GROUP BY
"p"."_uid",
"p"."_created",
"p"."_updated",
"p"."_disabled",
"p"."name",
"p"."age"
EMIT CHANGES;
In theory the query should work: both source tables have primary keys, "person_contact" is a child table of "person", and person._uid is represented as person_contact.person_uid. Still, ksqldb returns the following message:
Could not determine output schema for query due to error: GROUP BY requires aggregate functions
in either the SELECT or HAVING clause.
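The error means every selected column must either appear in the GROUP BY or be wrapped in an aggregate function; the bare AS_MAP(...) projection is neither. ksqlDB's COLLECT_LIST is the usual aggregate for building a per-key list, so a sketch of the query rewritten that way follows (table and column names are from the question; this is hedged, not verified: AS_MAP needs uniform value types, so the TIMESTAMP entries are dropped here, and collecting MAP values inside COLLECT_LIST requires a ksqlDB version that supports complex aggregate types):

```sql
CREATE TABLE "table-mysql-enriched-person" WITH (
    KAFKA_TOPIC = 'table-mysql-enriched-person',
    VALUE_FORMAT = 'AVRO'
) AS SELECT
    "p"."_uid" AS "_uid",
    "p"."name" AS "name",
    "p"."age" AS "age",
    -- wrap the per-contact map in an aggregate so GROUP BY is satisfied
    COLLECT_LIST(
        AS_MAP(
            ARRAY['_uid', 'value', 'person_uid'],
            ARRAY["e"."_uid", "e"."value", "e"."person_uid"]
        )
    ) AS "list_person_contact"
FROM "table-mysql-enriched-person_contact" "e"
    INNER JOIN "table-mysql-person" "p" ON
        "p"."_uid" = "e"."person_uid"
GROUP BY "p"."_uid", "p"."name", "p"."age"
EMIT CHANGES;
```

If your ksqlDB version rejects COLLECT_LIST over MAP values, collecting a flat string (e.g. the contact "_uid" alone) is a fallback.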
I am using InfluxDB and I want to divide two fields.
The query that selects the two fields works fine:
SELECT "payload-length", "in-data" FROM "test50"."autogen"."SYS-LOG"
But when I try to divide the two fields I get an error:
SELECT "payload-length" / "in-data" FROM "test50"."autogen"."SYS-LOG"
The error I am getting is:
unable to construct transform iterator from *influxql.stringChanIterator and *influxql.stringChanIterator
Not sure what I am missing.
What is the datatype stored in your fields? What should the result represent?
I get the same error if I try a division on two string fields:
> insert mymeasurement,tag1=tag1,tag2=tag2 fieldA="aaa",fieldB="bbb"
> insert mymeasurement,tag1=tag1,tag2=tag2 field1=500,field2=20
> select * from mymeasurement;
name: mymeasurement
time                field1 field2 fieldA fieldB tag1 tag2
----                ------ ------ ------ ------ ---- ----
1505944438559106045               aaa    bbb    tag1 tag2
1505944483558339332 500    20                   tag1 tag2
> show field keys from "mymeasurement"
name: mymeasurement
fieldKey fieldType
-------- ---------
field1 float
field2 float
fieldA string
fieldB string
> select field1 / field2 from mymeasurement
name: mymeasurement
time field1_field2
---- -------------
1505944483558339332 25
> select fieldA , fieldB from mymeasurement
name: mymeasurement
time fieldA fieldB
---- ------ ------
1505944438559106045 aaa bbb
> select fieldA / fieldB from mymeasurement
ERR: unable to construct transform iterator from *influxql.stringChanIterator and *influxql.stringChanIterator
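The error is InfluxQL reporting that both operands are string iterators: arithmetic is only defined for numeric field types, and these fields were evidently written as strings. The clean fix is to rewrite the points with numeric values. If rewriting is not an option, you can convert and divide client-side; a minimal Python sketch of that fallback (column names taken from the question, `divide_fields` is a hypothetical helper):

```python
def divide_fields(rows, numerator, denominator):
    """Convert two string-typed fields to float and divide, skipping
    rows where either field is missing, empty, non-numeric, or zero."""
    out = []
    for row in rows:
        try:
            out.append(float(row[numerator]) / float(row[denominator]))
        except (KeyError, TypeError, ValueError, ZeroDivisionError):
            continue
    return out

rows = [{"payload-length": "500", "in-data": "20"},
        {"payload-length": "abc", "in-data": "2"}]  # non-numeric row skipped
ratios = divide_fields(rows, "payload-length", "in-data")
# ratios -> [25.0]
```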
I have a page which consists of a table with two columns.
header | value
----------------
field1 | 1
field2 |
field3 | 1
field4 |
field5 | 1
When I select the values I need to get as many items as there are fields. I get the right count of nodes with:
>s = scrapy.Selector(response)
>values = s.xpath('//tr/td[@class="tdMainBottom"][2]').extract() # get the second column
>len(values)
5
But:
>s = scrapy.Selector(response)
>values = s.xpath('//tr/td[@class="tdMainBottom"][2]/text()').extract() # get the values
>len(values)
3
I can clean the first list up afterwards, but is there a one-shot way of doing this in XPath/Scrapy?
This works but is kind of ugly:
values = [v.xpath('text()').extract()
          for v in s.xpath('//tr/td[@class="tdMainBottom"][2]')]
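The `/text()` step silently drops cells that contain no text node, which is why the two counts differ. In Scrapy/parsel you can usually chain a string-returning XPath function over the node list, e.g. `s.xpath('//tr/td[@class="tdMainBottom"][2]').xpath('normalize-space(.)').extract()` (hedged: exact method naming depends on your Scrapy version). The underlying pitfall can be shown with the standard library alone; a sketch using ElementTree's limited XPath and a made-up five-row table shaped like the question's:

```python
import xml.etree.ElementTree as ET

TABLE = """<table>
  <tr><td class="h">field1</td><td class="tdMainBottom">1</td></tr>
  <tr><td class="h">field2</td><td class="tdMainBottom"></td></tr>
  <tr><td class="h">field3</td><td class="tdMainBottom">1</td></tr>
  <tr><td class="h">field4</td><td class="tdMainBottom"></td></tr>
  <tr><td class="h">field5</td><td class="tdMainBottom">1</td></tr>
</table>"""

root = ET.fromstring(TABLE)
# Select the cells first, then read text per cell with a fallback, so
# empty cells contribute "" instead of disappearing from the result.
cells = root.findall(".//tr/td[2]")
values = [(td.text or "").strip() for td in cells]
# values -> ['1', '', '1', '', '1'], so len(values) == 5
```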
I have columns stored as strings, e.g. start_age ("20") and end_age ("40").
Now I want the users whose age (an integer, e.g. 24) is between start_age and end_age. How can I get these users?
I have tried with this query
User.where("start_age < (?) AND end_age > (?)", age, age)
but getting this error:
PG::UndefinedFunction: ERROR: operator does not exist: character varying < integer
Thanks in advance.
Write:
User.where("start_age::integer <= :age AND end_age::integer >= :age", age: age)
# or equivalent
User.where(":age BETWEEN start_age::integer AND end_age::integer", age: age)
Here is a demo:
test_dev=# select 20 between '1'::integer and '10'::integer as result;
result
--------
f
(1 row)
test_dev=# select 5 between '1'::integer and '10'::integer as result;
result
--------
t
(1 row)
test_dev=# select '12'::integer as number;
number
--------
12
(1 row)
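The same cast-then-compare mechanics can be exercised with SQLite's CAST (a hypothetical users table; in Postgres you would keep the `::integer` casts from the answer above). Note that in Postgres the cast raises an error on empty or non-numeric strings, so migrating the columns to integer types is the sturdier long-term fix:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, start_age TEXT, end_age TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?, ?)", [
    ("alice", "20", "40"),
    ("bob",   "30", "50"),
    ("carol", "10", "15"),
])

age = 24
# Cast the string columns to integers before comparing, mirroring
# start_age::integer / end_age::integer in the Postgres query.
rows = conn.execute(
    "SELECT name FROM users "
    "WHERE CAST(start_age AS INTEGER) <= ? AND CAST(end_age AS INTEGER) >= ?",
    (age, age),
).fetchall()
# rows -> [('alice',)]
```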
Table name: document_archive_doc_perms
Table description:
location char(1), document_id number(5), permission_id number(5),
permission_type char(1), date_time_from date, date_time_to date,
ecode number(5), ecode_s number(2), approved_by number(5),
approved_by_ecode number(2), approved_on date,
primary key (location, document_id, permission_id)
My query:
select permission_id,document_id,date_time_from,date_time_to,approved_on,permission_type from document_archive_doc_perms
where document_id=3 and ecode=1695 and approved_on is not null and (sysdate between date_time_from and date_time_to);
My output is:
PERMISSION_ID DOCUMENT_ID DATE_TIME DATE_TIME APPROVED_ P
------------- ----------- --------- --------- --------- -
5 3 01-DEC-14 31-DEC-14 08-DEC-14 V
7 3 09-DEC-14 31-DEC-14 09-DEC-14 P
What I need is the latest permission from these records (i.e. the max of permission_id).
How can I do this?
You could use an analytic RANK() call:
SELECT permission_id,
document_id,
date_time_from,
date_time_to,
approved_on,
permission_type
FROM (SELECT permission_id,
document_id,
date_time_from,
date_time_to,
approved_on,
permission_type,
RANK() OVER (ORDER BY permission_id DESC) AS rk
FROM document_archive_doc_perms
WHERE document_id = 3 AND
ecode = 1695 AND
approved_on IS NOT NULL AND
SYSDATE BETWEEN date_time_from AND date_time_to
)
WHERE rk = 1
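The pattern can be sanity-checked outside Oracle, since SQLite (3.25+) supports RANK() with the same semantics. A small sketch with the two rows from the question's output; note the ORDER BY must be DESC so that rk = 1 picks the highest permission_id:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE perms "
             "(permission_id INT, document_id INT, permission_type TEXT)")
conn.executemany("INSERT INTO perms VALUES (?, ?, ?)",
                 [(5, 3, 'V'), (7, 3, 'P')])

# Rank rows by permission_id descending, then keep only rank 1,
# i.e. the row with the max permission_id.
row = conn.execute("""
    SELECT permission_id, permission_type
    FROM (SELECT permission_id, permission_type,
                 RANK() OVER (ORDER BY permission_id DESC) AS rk
          FROM perms
          WHERE document_id = 3)
    WHERE rk = 1
""").fetchone()
# row -> (7, 'P'): the latest (max) permission_id
```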