Is retention period not supported for tumbling windows in ksqldb? - ksqldb
I can't seem to figure out why my create table statement fails:
ksql> create table rst_wind_2 as select id, avg(intensity), min(rowtime) as `from` from rst2 WINDOW TUMBLING (SIZE 5 SECONDS, RETENTION 7 DAYS) group by id emit changes;
line 1:119: mismatched input ',' expecting ')'
Statement: create table rst_wind_2 as select id, avg(intensity), min(rowtime) as `from` from rst2 WINDOW TUMBLING (SIZE 5 SECONDS, RETENTION 7 DAYS) group by id emit changes;
Caused by: line 1:119: mismatched input ',' expecting ')'
Caused by: org.antlr.v4.runtime.InputMismatchException
I've gone and had a look at the grammar, but it looks like it should work.
Removing the RETENTION part makes it work, so somehow it just can't parse that clause.
This is version 5.5 (from typing version in the ksql command line), so the latest.
The version of ksqlDB that ships with Confluent Platform v5.5 does not support the RETENTION syntax. See the 5.5 grammar.
It looks like this was introduced in ksqlDB version 0.8. The upcoming Confluent Platform release, v6.0.0, will ship with ksqlDB 0.10 and will support the RETENTION syntax.
If you require RETENTION, either wait for CP 6.0.0 or use one of the community releases of ksqlDB v0.8 or later.
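For reference, a sketch of the two variants, based on the statement from the question (only the WINDOW clause differs):

    -- works on ksqlDB 5.5, which has no RETENTION support
    CREATE TABLE rst_wind_2 AS
      SELECT id, AVG(intensity), MIN(ROWTIME) AS `from`
      FROM rst2
      WINDOW TUMBLING (SIZE 5 SECONDS)
      GROUP BY id EMIT CHANGES;

    -- should parse on ksqlDB 0.8 or later (e.g. the ksqlDB shipped with CP 6.0.0)
    CREATE TABLE rst_wind_2 AS
      SELECT id, AVG(intensity), MIN(ROWTIME) AS `from`
      FROM rst2
      WINDOW TUMBLING (SIZE 5 SECONDS, RETENTION 7 DAYS)
      GROUP BY id EMIT CHANGES;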
Related
Pipe character ignored in SPSS syntax
I am trying to use the pipe character "|" in SPSS syntax, with strange results. In the syntax window the pipe appears as expected, but when I copy the line from the syntax window to here, this is what I get:

    SELECT IF(SEX = 1 SEX = 2).

The pipe just disappears! If I run this line, this is the output:

    SELECT IF(SEX = 1 SEX = 2).
    Error # 4007 in column 20. Text: SEX
    The expression is incomplete. Check for missing operands, invalid operators,
    unmatched parentheses or excessive string length.
    Execution of this command stops.

So the pipe is invisible to the program too! When I save this syntax and reopen it, the pipe is gone.

The only way I found to get SPSS to work with the pipe was to edit the syntax (adding the pipe) and save it in an alternative editor (Notepad++ in this case). Then, without opening the syntax, I ran it from another syntax file using the INSERT command, and it worked.

EDIT: some background info: I have SPSS version 23 (+ service pack 3), 64 bit. The same thing happens whether I use my locale (encoding: windows-1255) or Unicode (encoding: UTF-8). Suspecting my Hebrew keyboard, I tried copying syntax from the web, with the same results.

Can anyone shed any light on this subject?
Turns out (according to SPSS support) that this is a version-specific (ver. 21) bug that was fixed in later versions.
GNUCobol compiled program counts one more record than expected
I'm learning COBOL programming and using GnuCOBOL (on Linux) to compile and test some simple programs. In one of those programs I have found an unexpected behavior that I don't understand: when reading a sequential file of records, I always get one extra record and, when writing these records to a report, the last record is duplicated.

I have made a very simple program to reproduce this behavior. In this case, I have a text file with a single line of text: "0123456789". The program should count the characters in the file (or 1-character-long records) and I expect it to display "10" as a result, but instead I get "11". Also, when displaying the records as they are read, I get the following output: 0 1 2 3 4 5 6 7 8 9  11 (there are two blank spaces between 9 and 11).

This is the relevant part of the program:

       FD SIMPLE.
       01 SIMPLE-RECORD.
          05 SMP-NUMBER PIC 9(1).
       [...]
       PROCEDURE DIVISION.
       000-COUNT-RECORDS.
           OPEN INPUT SIMPLE.
           PERFORM UNTIL SIMPLE-EOF
               READ SIMPLE
                   AT END
                       SET SIMPLE-EOF TO TRUE
                   NOT AT END
                       DISPLAY SMP-NUMBER
                       ADD 1 TO RECORD-COUNT
               END-READ
           END-PERFORM
           DISPLAY RECORD-COUNT.
           CLOSE SIMPLE.
           STOP RUN.

I'm using the default options for the compiler, and I have tried using WITH TEST {BEFORE|AFTER}, but the result is the same. What can be the cause of this behavior, or how can I get the expected result?

Edit: I tried using an "empty" file as the data source, expecting a record count of 0, using two different methods to empty the file:

    $ echo "" > SIMPLE

This way the record count is 1 (ls -l gives a size of 1 byte for the file).

    $ rm SIMPLE
    $ touch SIMPLE

This way the record count is 0 (ls -l gives a size of 0 bytes for the file).

So I guess that somehow the compiled program is detecting an extra character, but I don't know how to avoid this.
I found out that the cause of this behavior is the automatic newline character that vim seems to append when saving the data file. After disabling this in vim like so:

    :set binary
    :set noeol

the program works as expected.

Edit: A more elegant way to prevent this problem, when working with data files created from a text editor, is to use ORGANIZATION IS LINE SEQUENTIAL in the SELECT clause. Since the problem was caused by the data format, should I delete this question?
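A minimal sketch of such a FILE-CONTROL entry, assuming the file is assigned to the external name "SIMPLE" as in the question (the assignment name is only illustrative):

       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
      *> LINE SEQUENTIAL makes the newline a record delimiter,
      *> so a trailing newline does not become an extra record.
           SELECT SIMPLE ASSIGN TO "SIMPLE"
               ORGANIZATION IS LINE SEQUENTIAL.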
Git diff: How to ignore starting space of an empty line?
I'm an iOS developer. When I press enter, Xcode automatically indents the new line with 4 spaces, which is convenient while developing. But when it comes to using git diff, every empty line gets marked in red. This can be annoying in team development. So how do I deal with it? Thanks in advance!
Use this when using diff:

    git diff -w   (--ignore-all-space)

You can create an alias for this so you will not have to type it every time:

    git config --global alias.NAME 'diff --ignore-space-change'

From the git diff documentation:

    -b / --ignore-space-change
        Ignore changes in amount of whitespace. This ignores whitespace at line end,
        and considers all other sequences of one or more whitespace characters to be
        equivalent.

    -w / --ignore-all-space
        Ignore whitespace when comparing lines. This ignores differences even if one
        line has whitespace where the other line has none.

    --ignore-blank-lines
        Ignore changes whose lines are all blank.
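For example, a possible alias (the name dw here is arbitrary, not part of the answer):

    git config --global alias.dw 'diff --ignore-all-space'
    git dw    # now equivalent to: git diff -w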
With Git 2.25 (Q1 2020), three-plus years later, the "diff" machinery learned not to lose added/removed blank lines in the context when --ignore-blank-lines and --function-context are used at the same time. So your intermediate 4 empty lines won't show up; but in the context of functions, they might.

See commit 0bb313a (05 Dec 2019) by René Scharfe (rscharfe). (Merged by Junio C Hamano -- gitster -- in commit f0070a7, 16 Dec 2019.)

    xdiff: unignore changes in function context

    Signed-off-by: René Scharfe

    Changes involving only blank lines are hidden with --ignore-blank-lines,
    unless they appear in the context lines of other changes. This is handled
    by xdl_get_hunk() for context added by --inter-hunk-context, -u and -U.

    Function context for -W and --function-context added by xdl_emit_diff()
    doesn't pay attention to such ignored changes; it relies fully on
    xdl_get_hunk() and shows just the post-image of ignored changes appearing
    in function context. That's inconsistent and confusing.

    Improve the result of using --ignore-blank-lines and --function-context
    together by fully showing ignored changes if they happen to fall within
    function context.
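A usage sketch of that combination (Git 2.25 or later):

    # show whole-function context while still ignoring blank-line-only changes
    git diff --ignore-blank-lines --function-context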
AWK Avoid Reformatting of date-like values
This is the issue: I gave AWK a comma-delimited table as input (specifying FS=","), to take the average of the 2nd and 3rd columns, do the same for the 4th and 5th columns, and print the first column value \t average1 \t average2 \n. BUT the first column holds gene names, and some of them look like dates, AND when I print, these names change; for example, "Sept15" changed to "15-Sep", and I want to avoid this.

    awk 'BEGIN{FS=",";OFS="\t"}{if(NR==1){next}{print $1,($2+$3)/2,($4+$5)/2}}' DESeqResults.csv | grep Sep

Even when using printf(%s):

    awk 'BEGIN{FS=",";OFS="\t"}{if(NR==1){next}{printf("%s\t%d\t%d\n",$1,($2+$3)/2,($4+$5)/2) }}' DESeqResults.csv | grep Sep

I thought that using printf instead of just print could work, but it didn't. And I'm pretty sure something is going on when reading the value and preprocessing it, because I printed out only that column and nothing else (using both print and printf) and the value is already changed.

    awk 'BEGIN{FS=",";OFS="\t"}{if(NR==1){next}{print $1}}' DESeqResults.csv | grep Sep

AWK version: GNU Awk 3.1.7, Copyright (C) 1989, 1991-2009 Free Software Foundation.

Here is a sample of the table:

genes,Tet2/3_Ctrtl_A,Tet2/3_Ctrtl_B,Tet2/3_DKO_A,Tet2/3_DKO_B,baseMean,baseMean_Tet2/3_Ctrtl,baseMean_Tet2/3_DKO,foldChange(Tet2/3_DKO/Tet2/3_Ctrtl),log2FoldChange,pval,padj
Sep15,187.0874494,213.5411848,289.6434172,338.0423229,1376.196203,926.4220733,1825.970332,1.970991824,0.978921792,5.88E-05,0.003018514
Psmb2,399.4650982,355.9642309,557.3871013,632.1236546,1462.399465,983.7201408,1941.078789,1.973202244,0.980538833,6.00E-05,0.003071175
Sept1,144.2402924,114.9623101,52.39183843,18.11079498,386.2494712,579.8722584,192.6266841,0.332188135,-1.58992755,0.000418681,0.014756367
Psmd8,101.3085151,68.51270408,140.650979,154.2588735,627.727588,396.4360624,859.0191136,2.166854116,1.115602027,0.000421417,0.014825295
Sepw1,388.2193716,337.7605508,209.8232326,155.9087497,639.6596557,787.1262578,492.1930536,0.625303817,-0.677370771,0.004039946,0.080871288
Cks1b,265.8259249,287.954538,337.1108392,408.0547432,865.5821999,642.8510296,1088.31337,1.692948008,0.759537668,0.004049464,0.0809765
Sept2,358.4252141,302.9219723,393.3509343,394.2208442,4218.71214,3392.272118,5045.152161,1.48724866,0.572645878,0.004380269,0.085547008
Tuba1a,19.47153869,11.1692256,40.09945086,28.7539846,142.1610148,75.37000403,208.9520256,2.772349933,1.47110937,0.004381599,0.085547008
Sepx1,14.5941944,15.37680483,53.70015607,105.5523799,157.8475412,40.73526884,274.9598136,6.749920191,2.754870444,0.010199249,0.153896056
Apc,10.90608004,13.56070852,6.445046152,4.536589807,363.4471652,466.2312058,260.6631245,0.559085538,-0.838859068,0.010251083,0.154555416
Sephs2,38.20092337,29.90249614,41.38713976,60.29027195,328.8398211,228.5362706,429.1433717,1.877791086,0.909036565,0.088470061,0.590328676
2310008H04Rik,12.72162335,13.98659226,17.77340283,16.88409867,175.2157133,133.5326829,216.8987437,1.624312033,0.699828804,0.088572283,0.590803249
Sepn1,16.26472482,11.00430796,7.219301889,7.109776037,119.8773488,144.9435253,94.81117235,0.654124923,-0.612361911,0.129473781,0.719395557
Fancc,6.590254663,5.520421849,8.969058939,8.394987722,111.479883,79.97866541,142.9811007,1.787740518,0.838137351,0.129516654,0.719423355
Sept7,170.6589676,187.3808346,185.8091089,158.0134115,1444.411676,1313.631233,1575.192119,1.199112871,0.261967464,0.189661613,0.852792911
Obsl1,1.400612677,0.51329399,0.299847728,0.105245908,10.77805777,17.15978377,4.396331776,0.256199719,-1.964659203,0.189677412,0.852792911
Sepp1,136.2725767,142.7392758,137.5079558,135.5576156,1055.39992,948.5532274,1162.246613,1.225283494,0.293115585,0.193768055,0.862790863
Tom1l2,6.079259794,5.972711213,4.188234003,1.879086398,93.62018078,115.620636,71.61972551,0.619437221,-0.690970019,0.193795263,0.862790863
Sept10,5.07506603,4.240574236,7.415271602,7.245735277,56.38191446,38.04292126,74.72090766,1.964121187,0.973883947,0.202050794,0.874641256
Jag2,0.531592511,1.753353521,0.106692242,0.099863326,7.812876603,14.01922398,1.606529221,0.114594732,-3.125387366,0.202074037,0.874641256
Sept9,25.71885843,9.170659969,29.98187141,23.5519093,333.6707351,231.1780024,436.1634678,1.8866997,0.915864812,0.227916377,0.920255208
Mad2l2,22.00853798,17.42180189,30.74357865,21.99530555,98.71951578,74.31522721,123.1238044,1.656777608,0.72837996,0.227920237,0.920255208
Sept8,3.128945597,4.413675869,1.658838722,1.197769008,38.73123291,52.59586062,24.8666052,0.472786354,-1.080739698,0.237101573,0.929055595
BC018465,1.974718423,2.171073663,0.264221349,0.123654833,5.802858162,10.40514412,1.200572199,0.115382563,-3.115502877,0.237135522,0.929055595
Sept11,51.69299305,57.36531814,51.69117677,51.61623861,915.6234052,837.2625097,993.9843007,1.187183576,0.247543039,0.259718041,0.949870478
Ccnc,11.42168015,13.32308428,14.76060133,12.19352385,173.0536821,146.6301746,199.4771895,1.36041023,0.444041759,0.259794956,0.949870478
Sept12,0,5.10639021,0,0.158638685,5.07217061,9.738384198,0.405957022,0.041686281,-4.584283515,0.388933297,1
Gclc,24.79641294,20.9904856,13.36470176,15.92090715,146.8502169,163.0012707,130.6991632,0.801829106,-0.318633307,0.3890016,1
Sept14,0.15949349,1.753526538,0,0,2.425489894,4.850979788,0,0,#NAME?,0.396160673,1
Slc17a1,0.131471208,1.445439884,0,0,2.425489894,4.850979788,0,0,#NAME?,0.396160673,1
Sept6,34.11050622,30.16102302,28.2562382,14.56889172,602.5658704,661.8163161,543.3154247,0.820945951,-0.284640854,0.416246976,1
Unc119,6.098478253,9.710512531,4.558282355,1.738214353,23.04654843,30.90026472,15.19283214,0.491673203,-1.024228366,0.416259755,1
Sept4,2.305246374,2.534467513,1.18972284,0.618652085,8.87244411,12.13933481,5.605553408,0.461767757,-1.114760654,0.560252893,1
Ddb2,11.25366078,17.32172888,10.50269513,6.025122118,71.81085298,83.53254996,60.089156,0.719350194,-0.475233821,0.560482212,1
Sephs1,20.92060935,15.48240612,15.94132159,11.57137656,288.7538099,298.3521103,279.1555094,0.935657902,-0.095946952,0.568672243,1
BC021785,0.135120133,0.891334456,0.108476095,0.101533002,5.825443635,9.241093439,2.409793832,0.260769339,-1.939153843,0.568713405,1
Sepsecs,7.276880132,6.154194955,5.055549522,3.680417498,35.9322246,39.77711194,32.08733726,0.806678406,-0.309934458,0.673968316,1
Osbpl7,10.51628336,5.69720028,7.157857243,5.382675661,86.65916873,88.67338952,84.64494794,0.954569893,-0.06707726,0.674000752,1
Sept3,0.113880577,0.250408482,0.228561799,0.042786507,2.505996654,2.619498342,2.392494966,0.913340897,-0.13077466,1,1
Sept5,0.126649979,0,0.203352303,0,0.609528347,0.424441516,0.794615178,1.872142914,0.904690571,1,1
Serpina11,0,0,0.14524189,0,0.198653794,0,0.397307589,Inf,Inf,1,1
Transferring extensive comments into an answer.

No; awk does not convert strings such as Sep15 to a date 15-Sep by default, even on a Mac. At least, not with the standard awk on Mac OS X 10.10.2 Yosemite, which I tested with, nor would I expect it to do so with any other variant of awk I've ever seen on a Mac.

[…time passed…]

Somewhat to my surprise, I have gawk installed and it is GNU Awk 3.1.7, Copyright (C) 1989, 1991-2009 Free Software Foundation. (I knew I had gawk installed, but I was expecting it to be a 4.x version.) Given your data on my Mac, the output of your first awk (gawk) command on the data you gave does no mapping whatsoever on the first column.

If you subsequently import the data into a spreadsheet, the spreadsheet could do all sorts of transformations, but that isn't awk's fault. You mentioned Mac in one of your early comments; are you using Mac OS X? I am not expecting to find any problem outside the spreadsheet. If the data is imported to a spreadsheet, then I won't be surprised to find the 'date-like' values in column 1 are reformatted.

I tried importing the CSV from the data in the question into LibreOffice (4.4.1.2, or 4.4.1002, depending on where you look for the version number), and no transformation occurred on the data in column 1. Similarly, Numbers 3.5.2 and OpenOffice 4.1.1 both leave the keys starting 'Sep' alone. Unfortunately, MS Excel (for Mac 2011, version 14.4.8, build 150116) translates such column values to a date (so Sep15 becomes 15-Sep, for example). Even embedding the column in double quotes does not help.

I don't have a good solution other than "do not use MS Excel". There probably is a way to suppress the behaviour, but you need to ask a question tagged excel and csv rather than awk and printing and printf. Incidentally, a Google search on 'excel csv import force text' turns up Stop Excel from automatically converting certain text values to dates? Some of the techniques outlined there (notably the "rename the file from .csv to .txt" technique) work.
Entity framework with Firebird throws dynamic SQL error
I've got stuck with FbException "SQL error code = -104 Token unknown - line 2, column 4" when trying to run this code:

    var result = from x in _context.Bunts select x;

I've checked the query which was produced by EF:

    SELECT "A"."BUNTCODE" AS "BUNTCODE", "A"."BUNTNAME" AS "BUNTNAME", "A"."BUNTDIAM" AS "BUNTDIAM"
    FROM "BUNTS" AS "A"

So the server thinks that something is wrong with the dot after "A". But this query runs just fine in IBExpert on the same machine. How can I fix this problem?

I'm using:
Firebird server v2.1.6.18547
EntityFramework v6.0.0.0
EntityFramework.Firebird v4.5.2.0
FirebirdSql.Data.FirebirdClient 4.5.2.0
The error suggests you are connecting using dialect 1. Dialect 1 is the old dialect of Interbase 5 and earlier and should be considered deprecated (although unfortunately 15 years on it is still supported by Firebird...). In dialect 1 it is not possible to quote object names, and double quotes are used for strings (instead of single quotes in dialect 3 and the SQL standard). When your query is parsed in dialect 1, Firebird sees "A" as a string constant, and the following dot (.) is not expected by the parser. Switching to dialect 3 should fix this, however if you do that, make sure that your database itself is also dialect 3, otherwise you might get other unexpected behavior like certain datatypes not working, or errors, etc.
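As an illustration of forcing dialect 3 on the client side, a minimal C# sketch using FbConnectionStringBuilder (the server, database path, and credentials below are placeholders, not taken from the question):

    using FirebirdSql.Data.FirebirdClient;

    class Demo
    {
        static void Main()
        {
            var csb = new FbConnectionStringBuilder
            {
                DataSource = "localhost",          // placeholder host
                Database = @"C:\data\BUNTS.FDB",   // placeholder database path
                UserID = "SYSDBA",                 // placeholder credentials
                Password = "masterkey",
                Dialect = 3                        // dialect 1 cannot parse quoted names like "A"."BUNTCODE"
            };

            using (var connection = new FbConnection(csb.ToString()))
            {
                connection.Open();
                // hand the connection (or csb.ToString()) to the EF context as usual
            }
        }
    }

As the answer notes, the database itself should also be a dialect 3 database, not just the connection.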