grep: extract content between prefix and suffix

I have a file whose content looks like this:
Listening for transport dt_socket at address: 8000
------------------------------------------------------------
🔥 ^[[1m HAPI FHIR^[[22m 5.4.0 - Command Line Tool
------------------------------------------------------------
Process ID : 21719#psgd
Max configured JVM memory (Xmx) : 3.2GB
Detected Java version : 11.0.7
------------------------------------------------------------
^[[32m2021-07-01^[[0;39m ^[[1;32m12:27:40.79^[[0;39m ^[[37m[main]^[[0;39m ^[[37mWARN ^[[0;39m ^[[1;34mo.f.c.i.s.c.ClassPathScanner^[[0;39m ^[[1;37mUnable to resolve location classpath:db/migration. Note this warning will become an error in Flyway 7.
^[[0;39m^[[32m2021-07-01^[[0;39m ^[[1;32m12:27:42.641^[[0;39m ^[[37m[main]^[[0;39m ^[[37mWARN ^[[0;39m ^[[1;34mo.f.c.i.s.c.ClassPathScanner^[[0;39m ^[[1;37mUnable to resolve location classpath:db/migration. Note this warning will become an error in Flyway 7.
^[[0;39m^[[32m2021-07-01^[[0;39m ^[[1;32m12:27:44.693^[[0;39m ^[[37m[main]^[[0;39m ^[[37mINFO ^[[0;39m ^[[1;34mc.u.f.j.m.t.InitializeSchemaTask^[[0;39m ^[[1;37m3_3_0.20180115.0: Initializing ORACLE_12C schema for HAPI FHIR
^[[0;39m^[[32m2021-07-01^[[0;39m ^[[1;32m12:27:44.848^[[0;39m ^[[37m[main]^[[0;39m ^[[37mINFO ^[[0;39m ^[[1;34mc.u.f.j.m.t.BaseTask^[[0;39m ^[[1;37m3_3_0.20180115.0: SQL "create sequence SEQ_BLKEXCOL_PID start with 1 increment by 50" returned 0
^[[0;39m^[[32m2021-07-01^[[0;39m ^[[1;32m12:27:44.918^[[0;39m ^[[37m[main]^[[0;39m ^[[37mINFO ^[[0;39m ^[[1;34mc.u.f.j.m.t.BaseTask^[[0;39m ^[[1;37m3_3_0.20180115.0: SQL "
create sequence SEQ_BLKEXCOLFILE_PID start with 1 increment by 50" returned 0
^[[0;39m^[[32m2021-07-01^[[0;39m ^[[1;32m12:27:47.573^[[0;39m ^[[37m[main]^[[0;39m ^[[37mINFO ^[[0;39m ^[[1;34mc.u.f.j.m.t.BaseTask^[[0;39m ^[[1;37m3_3_0.20180115.0: SQL "
create table HFJ_BINARY_STORAGE_BLOB (
BLOB_ID varchar2(200 char) not null,
BLOB_DATA blob not null,
CONTENT_TYPE varchar2(100 char) not null,
BLOB_HASH varchar2(128 char),
PUBLISHED_DATE timestamp not null,
RESOURCE_ID varchar2(100 char) not null,
BLOB_SIZE number(10,0),
primary key (BLOB_ID)
)" returned 0
I need to extract only the content between SQL " and " returned 0, trimming the surrounding whitespace.
Any ideas?
I've tried to reduce the problem to a simpler case:
$ echo 'sdf SQL" sdf sdf" returned 0' | grep 's/SQL"\(.*\)" returned 0/\1/' -
But it outputs nothing.
My expected output is:
create sequence SEQ_BLKEXCOL_PID start with 1 increment by 50;
create sequence SEQ_BLKEXCOLFILE_PID start with 1 increment by 50;
create table HFJ_BINARY_STORAGE_BLOB (
BLOB_ID varchar2(200 char) not null,
BLOB_DATA blob not null,
CONTENT_TYPE varchar2(100 char) not null,
BLOB_HASH varchar2(128 char),
PUBLISHED_DATE timestamp not null,
RESOURCE_ID varchar2(100 char) not null,
BLOB_SIZE number(10,0),
primary key (BLOB_ID)
);
I've also tried:
cat test.log | sed -E 's/.* SQL"(.*)" returned 0/\1/'
It returns the whole file content...
Using awk, I get empty output:
$ awk -v RS='SQL "[[:space:]]+?\n\n+.*returned 0' '
RT{
gsub(/^SQL "\n+|\n+$/,"",RT)
sub(/" returned 0[[:space:]]+?\n*$/,"",RT)
print RT";"
}
' test.log

This can be done using a custom RS in gnu-awk that splits the data on each SQL "..." block; the action block then strips everything outside the quotes (including the leading whitespace) from the captured separator RT.
awk -v RS=' SQL "[^"]+"' 'RT {
gsub(/^[^"]*"[[:space:]]*|"[^"]*$/, "", RT); print RT ";"}' file.sql
create sequence SEQ_BLKEXCOL_PID start with 1 increment by 50;
create sequence SEQ_BLKEXCOLFILE_PID start with 1 increment by 50;
create table HFJ_BINARY_STORAGE_BLOB (
BLOB_ID varchar2(200 char) not null,
BLOB_DATA blob not null,
CONTENT_TYPE varchar2(100 char) not null,
BLOB_HASH varchar2(128 char),
PUBLISHED_DATE timestamp not null,
RESOURCE_ID varchar2(100 char) not null,
BLOB_SIZE number(10,0),
primary key (BLOB_ID)
);

With GNU awk, based on your shown samples, please try the following code.
awk -v RS='SQL "[[:space:]]*\n\n+.*returned 0' '
RT{
gsub(/^SQL "\n+|\n+$/,"",RT)
sub(/" returned 0[[:space:]]*\n*$/,"",RT)
print RT";"
}
' Input_file
Explanation: in short, we set RS to SQL "[[:space:]]*\n\n+.*returned 0 so that each separator gawk matches (available in RT) covers a whole SQL "..." returned 0 block; before printing we strip the leading SQL " together with its newlines and the trailing " returned 0 from RT.
Explanation of the regex: match SQL followed by a space and ", then optional whitespace, then one or more newlines, then everything up to returned 0.
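To see the RS/RT mechanism in isolation, here is a minimal sketch with a simplified record separator and made-up one-line input (GNU awk only, since RT is a gawk extension):
$ printf 'x SQL "select 1" returned 0\ny SQL "select 2" returned 0\n' |
    awk -v RS='SQL "[^"]*" returned 0' 'RT{ gsub(/^SQL "|" returned 0$/, "", RT); print RT ";" }'
select 1;
select 2;
Each match of the record separator lands in RT; the gsub then removes the SQL " prefix and the " returned 0 suffix, leaving only the statement.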

If you have GNU grep then you can use this PCRE regex:
grep -oPz ' SQL "\K[^"]+' file.sql
create sequence SEQ_BLKEXCOL_PID start with 1 increment by 50
create sequence SEQ_BLKEXCOLFILE_PID start with 1 increment by 50
create table HFJ_BINARY_STORAGE_BLOB (
BLOB_ID varchar2(200 char) not null,
BLOB_DATA blob not null,
CONTENT_TYPE varchar2(100 char) not null,
BLOB_HASH varchar2(128 char),
PUBLISHED_DATE timestamp not null,
RESOURCE_ID varchar2(100 char) not null,
BLOB_SIZE number(10,0),
primary key (BLOB_ID)
)
Explanation:
' SQL "': match the literal SQL " text (including the leading space)
\K: reset the reported match, so everything matched so far is excluded from the output
[^"]+: match 1 or more characters that are not "
To get the formatting desired (trailing semicolons, no leading blanks), use this grep + sed (GNU) solution:
grep -oPzZ ' SQL "\K[^"]+' file.sql |
sed -E '$s/$/\n/; s/\x0/;/; s/^[[:blank:]]+//'
create sequence SEQ_BLKEXCOL_PID start with 1 increment by 50;
create sequence SEQ_BLKEXCOLFILE_PID start with 1 increment by 50;
create table HFJ_BINARY_STORAGE_BLOB (
BLOB_ID varchar2(200 char) not null,
BLOB_DATA blob not null,
CONTENT_TYPE varchar2(100 char) not null,
BLOB_HASH varchar2(128 char),
PUBLISHED_DATE timestamp not null,
RESOURCE_ID varchar2(100 char) not null,
BLOB_SIZE number(10,0),
primary key (BLOB_ID)
);

Related

Error while reading data, error message: JSON table encountered too many errors, giving up. Rows

I have two files and I am doing an inner join using CoGroupByKey in apache-beam.
When I write the rows to BigQuery, it gives me the following error.
RuntimeError: BigQuery job beam_bq_job_LOAD_AUTOMATIC_JOB_NAME_LOAD_STEP_614_c4a563c648634e9dbbf7be3a56578b6d_2f196decc8984a0d83dee92e19054ffb failed. Error Result: <ErrorProto
location: 'gs://dataflow4bigquery/temp/bq_load/06bfafaa9dbb47338ad4f3a9914279fe/dotted-transit-351803.test_dataflow.inner_join/f714c1ac-c234-4a37-bf51-c725a969347a'
message: 'Error while reading data, error message: JSON table encountered too many errors, giving up. Rows: 1; errors: 1. Please look into the errors[] collection for more details.'
reason: 'invalid'> [while running 'WriteToBigQuery/BigQueryBatchFileLoads/WaitForDestinationLoadJobs']
-----------------code-----------------------
from apache_beam.io.gcp.internal.clients import bigquery
import apache_beam as beam

def retTuple(element):
    thisTuple = element.split(',')
    return (thisTuple[0], thisTuple[1:])

def jstr(cstr):
    import datetime
    left_dict = cstr[1]['dep_data']
    right_dict = cstr[1]['loc_data']
    for i in left_dict:
        for j in right_dict:
            id, name, rank, dept, dob, loc, city = ([cstr[0]] + i + j)
            json_str = {"id": id, "name": name, "rank": rank, "dept": dept,
                        "dob": datetime.datetime.strptime(dob, "%d-%m-%Y").strftime("%Y-%m-%d").strip("'"),
                        "loc": loc, "city": city}
            return json_str

table_spec = 'dotted-transit-351803:test_dataflow.inner_join'
table_schema = 'id:INTEGER,name:STRING,rank:INTEGER,dept:STRING,dob:STRING,loc:INTEGER,city:STRING'
gcs = 'gs://dataflow4bigquery/temp/'

p1 = beam.Pipeline()

# Apply a ParDo to the PCollection "words" to compute lengths for each word.
dep_rows = (
    p1
    | "Reading File 1" >> beam.io.ReadFromText('dept_data.txt')
    | 'Pair each employee with key' >> beam.Map(retTuple)  # {149633CM : [Marco,10,Accounts,1-01-2019]}
)

loc_rows = (
    p1
    | "Reading File 2" >> beam.io.ReadFromText('location.txt')
    | 'Pair each loc with key' >> beam.Map(retTuple)  # {149633CM : [9876843261,New York]}
)

results = (
    {'dep_data': dep_rows, 'loc_data': loc_rows}
    | beam.CoGroupByKey()
    | beam.Map(jstr)
    | beam.io.WriteToBigQuery(
        custom_gcs_temp_location=gcs,
        table=table_spec,
        schema=table_schema,
        write_disposition=beam.io.BigQueryDisposition.WRITE_TRUNCATE,
        create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
        additional_bq_parameters={'timePartitioning': {'type': 'DAY'}}
    )
)

p1.run().wait_until_finish()
I am running it on GCP using the Dataflow runner.
When I print the json_str string, the output is valid JSON.
Eg:
{'id': '149633CM', 'name': 'Marco', 'rank': '10', 'dept': 'Accounts', 'dob': '2019-01-31', 'loc': '9204232778', 'city': 'New York'}
{'id': '212539MU', 'name': 'Rebekah', 'rank': '10', 'dept': 'Accounts', 'dob': '2019-01-31', 'loc': '9995440673', 'city': 'Denver'}
The schema I have defined is also correct.
But I am getting that error when loading the data into BigQuery.
After doing some research, I finally solved it.
It was a schema error.
The Id column has values like 149633CM.
I had given the Id data type as INTEGER, but when I tried to load the JSON with bq using --autodetect, bq marked the Id data type as STRING.
After that, I changed the Id column data type to STRING in the schema in my code.
And it worked: the table was created and loaded.
But I don't get one thing: if the first 6 characters of the Id values are digits, why does INTEGER not work while STRING does?
Because the data type is applied to the whole field value, not only the first 6 characters. If you drop the last 2 characters, you can use INTEGER.
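For reference, the fix described above amounts to changing only the id type in the schema string; everything else in the pipeline stays as posted:
# id values such as '149633CM' contain letters, so the column must be STRING
table_schema = 'id:STRING,name:STRING,rank:INTEGER,dept:STRING,dob:STRING,loc:INTEGER,city:STRING'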

unexpected token error in plsql procedure

CREATE PROCEDURE EPS.PROCEDURE_OTE_LTE_BIDDER_REPORT
(
IN P_USERID INTEGER,
IN P_AUCTIONID INTEGER,
IN P_REPORT_FLAG VARCHAR(3),
OUT O_ERROR_CODE INTEGER,
OUT OUTPUT_MESSAGE VARCHAR(100),
IN P_LOG_USERID INTEGER
)
LANGUAGE SQL
P1:BEGIN ATOMIC DECLARE SQLCODE INTEGER DEFAULT 0;
DECLARE V_USERID INTEGER;
DECLARE V_AUCTIONID INTEGER;
DECLARE V_REPORT_FLAG_TECHNOCOMMERCIALQUALIFIEDCUSTOMER VARCHAR(3);
DECLARE V_COUNT_TECHNOCOMMERCIALQUALIFIEDCUSTOMER INTEGER;
DECLARE V_REPORT_FLAG_SELECTIVEUSERWISETENDERREPORT VARCHAR(3);
DECLARE V_COUNT_SELECTIVEUSERWISETENDERREPORT INTEGER;
SELECT COUNT(*) INTO V_COUNT_TECHNOCOMMERCIALQUALIFIEDCUSTOMER FROM EPS.TECHNOCOMMERCIALQUALIFIEDCUSTOMER A
WHERE A.AUCTIONID=P_AUCTIONID AND A.USERID=P_USERID;
SELECT COUNT(*) INTO V_COUNT_SELECTIVEUSERWISETENDERREPORT FROM EPS.SELECTIVEUSERWISETENDERREPORT B
WHERE B.AUCTIONID=P_AUCTIONID AND B.USERID=P_USERID;
IF P_REPORT_FLAG = 'Y' THEN
IF V_COUNT_TECHNOCOMMERCIALQUALIFIEDCUSTOMER < 1 THEN
INSERT INTO EPS.TECHNOCOMMERCIALQUALIFIEDCUSTOMER (A.AUCTIONID,A.USERID,A.QUALIFIED) VALUES (P_AUCTIONID,P_USERID,'Y');
ELSE
SET OUTPUT_MESSEGE = 'DATA ALREADY PRESENT';
END IF;
ELSE
IF V_COUNT_TECHNOCOMMERCIALQUALIFIEDCUSTOMER > 0 THEN
DELETE FROM EPS.TECHNOCOMMERCIALQUALIFIEDCUSTOMER C WHERE C.AUCTIONID=P_AUCTIONID AND C.USERID=P_USERID;
ELSE
SET OUTPUT_MESSAGE = 'NO DATA FOUND';
END IF;
END IF;
IF P_REPORT_FLAG = 'Y' THEN
IF V_COUNT_SELECTIVEUSERWISETENDERREPORT < 1 THEN
INSERT INTO EPS.SELECTIVEUSERWISETENDERREPORT AA
( AA.AUCTIONID,
AA.USERID,
AA.TENDERREPORTTYPEID,
AA.STATUS,
AA.CREATEID,
AA.CREATEDATE,
AA.UPDATEID,
AA.UPDATEDATE
)
VALUES
(
P_AUCTIONID,
P_USERID,
103.
'A',
P_LOG_USERID,
CURRENT TIMESTAMP,
NULL,
NULL
);
ELSE
SET OUTPUT_MESSEGE = 'DATA ALREADY PRESENT';
END IF;
ELSE
IF V_COUNT_SELECTIVEUSERWISETENDERREPORT > 0 THEN
DELETE FROM EPS.SELECTIVEUSERWISETENDERREPORT CC WHERE CC.AUCTIONID=P_AUCTIONID AND CC.USERID=P_USERID;
ELSE
SET OUTPUT_MESSAGE = 'NO DATA FOUND';
END IF;
END IF;
END P1
I am getting this error
SQL Error [42601]: An unexpected token "END-OF-STATEMENT" was found following "Y PRESENT'". Expected tokens may include: "
END IF".. SQLCODE=-104, SQLSTATE=42601, DRIVER=4.7.85
It helps to properly use a syntax editor that understands SQL, and take more care with checking your code. A good SQL editor may highlight your mistakes before you try to compile, as would any code review.
On a separate note, you should understand the difference between ANSI SQL PL and Oracle PL/SQL. Your code seems to use ANSI SQL PL syntax, although your mistakes may be mistakes for any flavour of SQL.
Here are some of the obvious syntax mistakes in your code (there may be others):
On the line INSERT INTO EPS.SELECTIVEUSERWISETENDERREPORT AA , the AA should be omitted.
For the same insert statement you have 103., when you might mean 103,.
For the line INSERT INTO EPS.TECHNOCOMMERCIALQUALIFIEDCUSTOMER (A.AUCTIONID,A.USERID,A.QUALIFIED) you probably mean instead
INSERT INTO EPS.TECHNOCOMMERCIALQUALIFIEDCUSTOMER (AUCTIONID,USERID,QUALIFIED)
The same mistake is present for the line with INSERT INTO EPS.SELECTIVEUSERWISETENDERREPORT (do not qualify the column names).
For the line beginning SET OUTPUT_MESSEGE = you probably mean SET OUTPUT_MESSAGE =, and this typo is present on other lines.
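Putting those corrections together, the affected statements would look roughly like this (a sketch only; the surrounding control flow of the procedure is unchanged):
INSERT INTO EPS.TECHNOCOMMERCIALQUALIFIEDCUSTOMER (AUCTIONID, USERID, QUALIFIED)
VALUES (P_AUCTIONID, P_USERID, 'Y');

INSERT INTO EPS.SELECTIVEUSERWISETENDERREPORT
(AUCTIONID, USERID, TENDERREPORTTYPEID, STATUS, CREATEID, CREATEDATE, UPDATEID, UPDATEDATE)
VALUES
(P_AUCTIONID, P_USERID, 103, 'A', P_LOG_USERID, CURRENT TIMESTAMP, NULL, NULL);

SET OUTPUT_MESSAGE = 'DATA ALREADY PRESENT';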

Save last record according to a composite key. ksqlDB 0.6.0

I have a Kafka topic with the following data flow ( ksqldb_topic_01 ):
% Reached end of topic ksqldb_topic_01 [0] at offset 213
{"city":"Sevilla","temperature":20,"sensorId":"sensor03"}
% Reached end of topic ksqldb_topic_01 [0] at offset 214
{"city":"Madrid","temperature":5,"sensorId":"sensor03"}
% Reached end of topic ksqldb_topic_01 [0] at offset 215
{"city":"Sevilla","temperature":10,"sensorId":"sensor01"}
% Reached end of topic ksqldb_topic_01 [0] at offset 216
{"city":"Valencia","temperature":15,"sensorId":"sensor03"}
% Reached end of topic ksqldb_topic_01 [0] at offset 217
{"city":"Sevilla","temperature":15,"sensorId":"sensor01"}
% Reached end of topic ksqldb_topic_01 [0] at offset 218
{"city":"Madrid","temperature":20,"sensorId":"sensor03"}
% Reached end of topic ksqldb_topic_01 [0] at offset 219
{"city":"Valencia","temperature":15,"sensorId":"sensor02"}
% Reached end of topic ksqldb_topic_01 [0] at offset 220
{"city":"Sevilla","temperature":5,"sensorId":"sensor02"}
% Reached end of topic ksqldb_topic_01 [0] at offset 221
{"city":"Sevilla","temperature":5,"sensorId":"sensor01"}
% Reached end of topic ksqldb_topic_01 [0] at offset 222
I want to keep in a table the latest value that arrives on the topic for each city and sensorId.
In ksqlDB I create the following table:
CREATE TABLE ultimo_resgistro(city VARCHAR,sensorId VARCHAR,temperature INTEGER) WITH (KAFKA_TOPIC='ksqldb_topic_01', VALUE_FORMAT='json',KEY = 'sensorId,city');
DESCRIBE EXTENDED ULTIMO_RESGISTRO;
Name : ULTIMO_RESGISTRO
Type : TABLE
Key field : SENSORID
Key format : STRING
Timestamp field : Not set - using <ROWTIME>
Value format : JSON
Kafka topic : ksqldb_topic_01 (partitions: 1, replication: 1)
Field | Type
-----------------------------------------
ROWTIME | BIGINT (system)
ROWKEY | VARCHAR(STRING) (system)
CITY | VARCHAR(STRING)
SENSORID | VARCHAR(STRING)
TEMPERATURE | INTEGER
-----------------------------------------
Checking that the data is being processed:
select * from ultimo_resgistro emit changes;
+------------------+------------------+------------------+------------------+------------------+
|ROWTIME |ROWKEY |CITY |SENSORID |TEMPERATURE |
+------------------+------------------+------------------+------------------+------------------+
key cannot be null
Query terminated
The problem is that you need to set the key of the Kafka message correctly. You also cannot specify two fields in the KEY clause. Read more about this here
Here's an example of how to do it.
First up, load test data:
kafkacat -b kafka-1:39092 -P -t ksqldb_topic_01 <<EOF
{"city":"Madrid","temperature":20,"sensorId":"sensor03"}
{"city":"Madrid","temperature":5,"sensorId":"sensor03"}
{"city":"Sevilla","temperature":10,"sensorId":"sensor01"}
{"city":"Sevilla","temperature":15,"sensorId":"sensor01"}
{"city":"Sevilla","temperature":20,"sensorId":"sensor03"}
{"city":"Sevilla","temperature":5,"sensorId":"sensor01"}
{"city":"Sevilla","temperature":5,"sensorId":"sensor02"}
{"city":"Valencia","temperature":15,"sensorId":"sensor02"}
{"city":"Valencia","temperature":15,"sensorId":"sensor03"}
EOF
Now in ksqlDB declare the schema over the topic - as a stream, because we need to repartition the data to add a key. If you control the producer to the topic then maybe you'd do this upstream and save a step.
CREATE STREAM sensor_data_raw (city VARCHAR, temperature DOUBLE, sensorId VARCHAR)
WITH (KAFKA_TOPIC='ksqldb_topic_01', VALUE_FORMAT='JSON');
Repartition the data based on the composite key.
SET 'auto.offset.reset' = 'earliest';
CREATE STREAM sensor_data_repartitioned WITH (VALUE_FORMAT='AVRO') AS
SELECT *
FROM sensor_data_raw
PARTITION BY city+sensorId;
Two things to note:
I'm taking the opportunity to reserialise into Avro - if you'd rather keep JSON throughout then just omit the WITH (VALUE_FORMAT='AVRO') clause.
When the data is repartitioned the ordering guarantees are lost, so in theory you may end up with events out of order after this.
At this point we can inspect the transformed topic:
ksql> PRINT SENSOR_DATA_REPARTITIONED FROM BEGINNING LIMIT 5;
Format:AVRO
1/24/20 9:55:54 AM UTC, Madridsensor03, {"CITY": "Madrid", "TEMPERATURE": 20.0, "SENSORID": "sensor03"}
1/24/20 9:55:54 AM UTC, Madridsensor03, {"CITY": "Madrid", "TEMPERATURE": 5.0, "SENSORID": "sensor03"}
1/24/20 9:55:54 AM UTC, Sevillasensor01, {"CITY": "Sevilla", "TEMPERATURE": 10.0, "SENSORID": "sensor01"}
1/24/20 9:55:54 AM UTC, Sevillasensor01, {"CITY": "Sevilla", "TEMPERATURE": 15.0, "SENSORID": "sensor01"}
1/24/20 9:55:54 AM UTC, Sevillasensor03, {"CITY": "Sevilla", "TEMPERATURE": 20.0, "SENSORID": "sensor03"}
Note that the key in the Kafka message (the second field, after the timestamp) is now set correctly, compared to the original data that had no key:
ksql> PRINT ksqldb_topic_01 FROM BEGINNING LIMIT 5;
Format:JSON
{"ROWTIME":1579859380123,"ROWKEY":"null","city":"Madrid","temperature":20,"sensorId":"sensor03"}
{"ROWTIME":1579859380123,"ROWKEY":"null","city":"Madrid","temperature":5,"sensorId":"sensor03"}
{"ROWTIME":1579859380123,"ROWKEY":"null","city":"Sevilla","temperature":10,"sensorId":"sensor01"}
{"ROWTIME":1579859380123,"ROWKEY":"null","city":"Sevilla","temperature":15,"sensorId":"sensor01"}
{"ROWTIME":1579859380123,"ROWKEY":"null","city":"Sevilla","temperature":20,"sensorId":"sensor03"}
Now we can declare a table over the repartitioned data. Since I'm using Avro now I don't have to reenter the schema. If I was using JSON I would need to enter it again as part of this DDL.
CREATE TABLE ultimo_resgistro WITH (KAFKA_TOPIC='SENSOR_DATA_REPARTITIONED', VALUE_FORMAT='AVRO');
The table's key is implicitly taken from the ROWKEY, which is the key of the Kafka message.
ksql> SELECT ROWKEY, CITY, SENSORID, TEMPERATURE FROM ULTIMO_RESGISTRO EMIT CHANGES;
+------------------+----------+----------+-------------+
|ROWKEY |CITY |SENSORID |TEMPERATURE |
+------------------+----------+----------+-------------+
|Madridsensor03 |Madrid |sensor03 |5.0 |
|Sevillasensor03 |Sevilla |sensor03 |20.0 |
|Sevillasensor01 |Sevilla |sensor01 |5.0 |
|Sevillasensor02 |Sevilla |sensor02 |5.0 |
|Valenciasensor02 |Valencia |sensor02 |15.0 |
|Valenciasensor03 |Valencia |sensor03 |15.0 |
If you want to take advantage of pull queries (in order to get the latest value) then you need to go and upvote (or contribute a PR 😁) this issue.
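For context, once that is supported, fetching the latest value for a single key would be a pull query along these lines (illustrative only; against this table on ksqlDB 0.6.0 it will not run until the issue above is addressed):
-- hypothetical pull query against the materialised table
SELECT ROWKEY, CITY, SENSORID, TEMPERATURE
  FROM ULTIMO_RESGISTRO
  WHERE ROWKEY = 'Sevillasensor01';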

What does the `type` word do?

Given the following function (borrowed from Rosetta Code):
: (echo) ( sock buf -- sock buf )
begin
cr ." waiting..."
2dup 2dup size read-socket nip
dup 0>
while
." got: " 2dup type ( <-- HERE )
rot write-socket
repeat
drop drop drop ;
What does type do in,
." got: " 2dup type
type is a word. You can find the list of words here.
type c-addr u – core “type”
If u>0, display u characters from a string starting with the character stored at c-addr.
In this case you have
128 constant size
create buf size allot
Then read-socket fills buf, and type prints that many characters from it as a string.
For comparison, s" leaves the memory address of a string and its length on the stack:
cr s" foo bar " .s
Output:
<2> 94085808947584 8 ok
Here we pass that memory address and size to type and get "foo bar":
cr 94085808947584 8 type
Output:
foo bar ok
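Since type just consumes an address/length pair, the string left by s" can also be fed to it directly:
cr s" foo bar " type
Output:
foo bar ok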

mysql stored procedure: using declared vars in a limit statement returns an error

I have the following code:
delimiter ;
DROP PROCEDURE IF EXISTS ufk_test;
delimiter //
CREATE PROCEDURE ufk_test(IN highscoreChallengeId INT UNSIGNED)
BEGIN
DECLARE vLoopOrder INT UNSIGNED DEFAULT 5;
DECLARE vLoopLimit INT UNSIGNED DEFAULT 10;
select * from fb_user LIMIT vLoopOrder,vLoopLimit;
END//
delimiter ;
MySQL returns the following error:
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'vLoopOrder,vLoopLimit;
END' at line 11
It seems that I cannot use declared variables in a LIMIT clause. Is there any other way to overcome this?
Of course this is a simple example; here I could just use static numbers, but I need to know whether it is possible to use any kind of variable with LIMIT.
Thanks
I use something like:
SET @s = CONCAT('SELECT * FROM fb_user LIMIT ', vLoopOrder, ',', vLoopLimit);
PREPARE stmt1 FROM @s;
EXECUTE stmt1;
DEALLOCATE PREPARE stmt1;
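Folded back into the procedure from the question, a sketch of the whole thing (same table and variables as above; the prepared-statement text has to go through a session variable such as @s):
delimiter ;
DROP PROCEDURE IF EXISTS ufk_test;
delimiter //
CREATE PROCEDURE ufk_test(IN highscoreChallengeId INT UNSIGNED)
BEGIN
  DECLARE vLoopOrder INT UNSIGNED DEFAULT 5;
  DECLARE vLoopLimit INT UNSIGNED DEFAULT 10;
  -- build the statement text, then prepare and execute it
  SET @s = CONCAT('SELECT * FROM fb_user LIMIT ', vLoopOrder, ',', vLoopLimit);
  PREPARE stmt1 FROM @s;
  EXECUTE stmt1;
  DEALLOCATE PREPARE stmt1;
END//
delimiter ;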
