Jenkins pipeline failing for sed command

I'm running a sed command in a pipeline but it is failing with an error:
sed: file - line 2: unterminated `s' command
Here is my sed code in the pipeline:
sh '''sed '/^[[:blank:]]*$/d;s|^|s/%%|;s|:|%%/|;s|$|/|' key.txt | sed -f - file1.txt > file2.txt'''
If I run only the first sed command, Jenkins gives very strange output. Here it is from the Jenkins logs; it is adding extra lines with stray characters:
+ sed '/^[[:blank:]]*$/d;s|^|s#%%|;s|:|%%#|;s|$|#|' keys.txt
s#%%sql_server_name%%#test_seqserver_1234
#
s#%%
#
s#%%sql_login_name%%#test_login_name
#
s#%%
#
s#%%password%%#test_password
#
s#%%
#
If I run the following command, this is what the Jenkins output looks like:
sh '''sed '/^[[:blank:]]*$/d;/:/!d;s|^|s/%%|;s|:|%%/|;s|$|/|' keys.txt'''
Jenkins output:
+ sed '/^[[:blank:]]*$/d;/:/!d;s|^|s/%%|;s|:|%%/|;s|$|/|' keys.txt
s/%%sql_server_name%%/test_seqserver_1234
/
s/%%sql_login_name%%/test_login_name
/
s/%%password%%/test_password
/
s/%%SID%%/123456
/
I ran the new command:
sh '''sed -e '/:/!d;s|^\\([^:]*\\):\\(.*\\)$|s/%%\\1%%/\\2/|' -e 'N;s|\\n/|/|' keys.txt'''
Here is the output:
Running shell script
+ sed -e '/:/!d;s|^\([^:]*\):\(.*\)$|s/%%\1%%/\2/|' -e 'N;s|\n/|/|' keys.txt
s/%%sql_server_name%%/test_seqserver_1234
/
s/%%sql_login_name%%/test_login_name
/
s/%%password%%/test_password
/
s/%%SID%%/123456
Here is the xxd output for the text file:
Running shell script
+ xxd keys.txt
0000000: 7371 6c5f 7365 7276 6572 5f6e 616d 653a sql_server_name:
0000010: 7465 7374 5f73 6571 7365 7276 6572 5f31 test_seqserver_1
0000020: 3233 340d 0a0d 0a73 716c 5f6c 6f67 696e 234....sql_login
0000030: 5f6e 616d 653a 7465 7374 5f6c 6f67 696e _name:test_login
0000040: 5f6e 616d 650d 0a0d 0a70 6173 7377 6f72 _name....passwor
0000050: 643a 7465 7374 5f70 6173 7377 6f72 6420 d:test_password
0000060: 0d0a 0d0a 5349 443a 3132 3334 3536 200d ....SID:123456 .
0000070: 0a0d 0a64 6566 6175 6c74 5f64 6174 6162 ...default_datab
0000080: 6173 653a 7465 6d70 6462 0d0a 0d0a 6465 ase:tempdb....de
0000090: 6661 756c 745f 6c61 6e67 7561 6765 3a75 fault_language:u
00000a0: 735f 656e 676c 6973 680d 0a0d 0a63 6865 s_english....che
00000b0: 636b 5f65 7870 6972 6174 696f 6e3a 4f46 ck_expiration:OF
00000c0: 460d 0a0d 0a63 6865 636b 5f70 6f6c 6963 F....check_polic
00000d0: 793a 4f46 460d 0a0d 0a64 656c 6976 6572 y:OFF....deliver
00000e0: 7974 7970 653a 7363 6865 6475 6c65 640d ytype:scheduled.
00000f0: 0a0d 0a73 6368 6564 756c 6564 5f64 656c ...scheduled_del
0000100: 6976 6572 7964 6174 653a 3035 2d33 302d iverydate:05-30-
0000110: 3230 3939 0d0a 0d0a 7363 6865 6475 6c65 2099....schedule
0000120: 645f 6465 6c69 7665 7279 5f32 3468 725f d_delivery_24hr_
0000130: 6365 6e74 7261 6c5f 7469 6d65 3a31 3135 central_time:115
0000140: 3920 0d0a 0d0a 0d0a 0d0a 0d0a 0d0a 0d0a 9 ..............

It looks like your key.txt file has incorrect content. Judging from the first sed command:
sed '/^[[:blank:]]*$/d; s|^|s/%%|; s|:|%%/|; s|$|/|' key.txt
it expects each line to contain a colon. From each line it builds an s/// command for the second sed invocation:
sed -f - file1.txt > file2.txt
If your key.txt contains non-empty lines without a colon, you will get the error unterminated `s' command.
Ensure that key.txt is correct, or at least add /:/!d; to your pipeline, like this:
sh '''sed '/^[[:blank:]]*$/d;/:/!d;s|^|s/%%|;s|:|%%/|;s|$|/|' key.txt | sed -f - file1.txt > file2.txt'''
For example, correct key.txt contents:
username:server1
Incorrect key.txt:
username server2
There is no colon in this line, so it will cause an error.
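To see why, here is a quick sketch of what the first sed generates for such a line (assuming GNU sed; the printf input is the hypothetical bad line above):
printf 'username server2\n' | sed '/^[[:blank:]]*$/d;s|^|s/%%|;s|:|%%/|;s|$|/|'
s/%%username server2/
The generated line opens an s command but never closes the replacement, so the second sed fails with unterminated `s' command.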
You might try to replace your first sed command with a simpler one:
sed '/:/!d;s|^\([^:]*\):\(.*\)$|s/%%\1%%/\2/|' key.txt
or better:
sed -e '/:/!d;s|^\([^:]*\):\(.*\)$|s/%%\1%%/\2/|' -e 'N;s|\n/|/|' key.txt
If that doesn't help, run xxd key.txt or hexdump -C key.txt and post the output.
After you added the hex contents of your key.txt file, I could finally replicate the issue on my machine. The problem can be solved by this command:
sed -e '/:/!d;s|^\([^:]*\):\(.*\)\r|s/%%\1%%/\2/|' key.txt
So the trick is to use \r instead of $ in the first sed command, because the xxd dump shows your file has DOS-style \r\n (0d 0a) line endings. If it still doesn't work for you (it might not, if you use macOS), you can simply remove the carriage returns from key.txt with a tool of your choice (such as dos2unix), and then your original code should work.
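A sketch of that cleanup route in plain shell (key_unix.txt is just a scratch name; tr works where dos2unix is not installed, and inside a Jenkins sh ''' block the backslash would need doubling, \\r, as in your own commands above):
tr -d '\r' < key.txt > key_unix.txt   # or: dos2unix key.txt
sed '/^[[:blank:]]*$/d;s|^|s/%%|;s|:|%%/|;s|$|/|' key_unix.txt | sed -f - file1.txt > file2.txt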

Related

different behavior of parallel when input is from STDIN

I am using the GNU parallel tool. I have an input file in.txt that looks like this:
export MY_ENV=$1 && echo hi: $MY_ENV
export MY_ENV=$1 && echo hi: $MY_ENV
export MY_ENV=$1 && echo hi: $MY_ENV
export MY_ENV=$1 && echo hi: $MY_ENV
export MY_ENV=$1 && echo hi: $MY_ENV
export MY_ENV=$1 && echo hi: $MY_ENV
I use this command (case 1) to invoke parallel:
parallel -j 4 -a in.txt --link ::: 11 22 33 44
which (as expected) results in this output:
hi: 11
hi: 22
hi: 33
hi: 44
hi: 11
hi: 22
However, when I try to send the input via STDIN using the command below (case 2), I get different behavior. In other words, this command:
cat in.txt | parallel -j 4 --link ::: 11 22 33 44
results in this error message:
/bin/bash: 11: command not found
/bin/bash: 22: command not found
/bin/bash: 33: command not found
/bin/bash: 44: command not found
Shouldn't the behavior be identical? How can I invoke the parallel program so that when the input is via STDIN I get the same output as in case 1 above?
cat in.txt | parallel -j 4 -a - --link ::: 11 22 33 44
or
cat in.txt | parallel -j 4 --link :::: - ::: 11 22 33 44
or
cat in.txt | parallel -j 4 :::: - :::+ 11 22 33 44
See details on https://doi.org/10.5281/zenodo.1146014 (section 4.2).
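For context, a minimal sketch of why case 2 fails (assuming GNU parallel): when no command is given before :::, the ::: values themselves are run as commands and stdin is ignored, whereas :::: - reads the command lines from stdin as an input source:
$ echo 'echo hello' | parallel ::: 11
/bin/bash: 11: command not found
$ echo 'echo hello' | parallel :::: -
hello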

Why do "docker run -t" outputs include \r in the command output?

I'm using Docker client Version: 18.09.2.
When I start a container interactively and run a date command, then pipe its output to hexdump for inspection, I see a trailing \n as expected:
$ docker run --rm -i -t alpine
/ # date | hexdump -c
0000000 T h u M a r 7 0 0 : 1 5
0000010 : 0 6 U T C 2 0 1 9 \n
000001d
However, when I pass the date command as an entrypoint directly and run the container, I get a \r\n every time there is a newline in the output.
$ docker run --rm -i -t --entrypoint=date alpine | hexdump -c
0000000 T h u M a r 7 0 0 : 1 6
0000010 : 1 9 U T C 2 0 1 9 \r \n
000001e
This is weird.
It doesn't happen at all when I omit -t (not allocating any TTY):
docker run --rm -i --entrypoint=date alpine | hexdump -c
0000000 T h u M a r 7 0 0 : 1 7
0000010 : 3 0 U T C 2 0 1 9 \n
000001d
What's happening here?
This sounds dangerous, as I use the docker run command in my scripts, and if I forget to omit -t, the output I collect from docker run will contain invisible/non-printable \r characters, which can cause all sorts of issues.
tl;dr: this is default TTY behaviour, unrelated to Docker, per the ticket filed on GitHub about your exact issue.
Quoting the relevant comments in that ticket:
Looks like this is indeed the TTY: by default it translates newlines to CRLF
$ docker run -t --rm debian sh -c "echo -n '\n'" | od -c
0000000 \r \n
0000002
disabling "translate newline to carriage return-newline" with stty -onlcr correctly gives:
$ docker run -t --rm debian sh -c "stty -onlcr && echo -n '\n'" | od -c
0000000 \n
0000001
Default TTY options seem to be set by the kernel ... On my Linux host it contains:
/*
* Defaults on "first" open.
*/
#define TTYDEF_IFLAG (BRKINT | ISTRIP | ICRNL | IMAXBEL | IXON | IXANY)
#define TTYDEF_OFLAG (OPOST | ONLCR | XTABS)
#define TTYDEF_LFLAG (ECHO | ICANON | ISIG | IEXTEN | ECHOE|ECHOKE|ECHOCTL)
#define TTYDEF_CFLAG (CREAD | CS7 | PARENB | HUPCL)
#define TTYDEF_SPEED (B9600)
ONLCR is indeed there.
When we go looking at the ONLCR flag documentation, we can see that:
[-]onlcr: translate newline to carriage return-newline
To again quote the github ticket:
Moral of the story, don't use -t unless you want a TTY.
TTY line endings are CRLF, this is not Docker's doing.
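If you cannot simply drop -t from a script, a hedged sketch of working around it by stripping the carriage returns after the fact:
docker run --rm -i --entrypoint=date alpine                  # preferred: no TTY, plain \n endings
docker run --rm -i -t --entrypoint=date alpine | tr -d '\r'  # fallback: delete the CRs the TTY inserted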

docker-compose wurstmeister/kafka failing to parse KAFKA_OPTS

I have a basic docker-compose file for wurstmeister/kafka.
I'm trying to configure it to use SASL_PLAIN with SSL.
However, I keep getting this error no matter how many ways I try to specify my JAAS file.
This is the error I get:
[2018-04-11 10:34:34,545] FATAL [KafkaServer id=1001] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.lang.IllegalArgumentException: Could not find a 'KafkaServer' or 'sasl_ssl.KafkaServer' entry in the JAAS configuration. System property 'java.security.auth.login.config' is not set
These are the vars I have; the last one is where I specify my JAAS file:
environment:
  KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
  KAFKA_HOST_NAME: 10.10.10.1
  KAFKA_PORT: 9092
  KAFKA_ADVERTISED_PORT: 9093
  KAFKA_ADVERTISED_HOST_NAME: 10.10.10.1
  KAFKA_LISTENERS: PLAINTEXT://:9092,SASL_SSL://:9093
  KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://10.10.10.1:9092,SASL_SSL://10.10.10.1:9093
  KAFKA_SECURITY_INTER_BROKER_PROTOCOL: SASL_SSL
  KAFKA_SASL_ENABLED_MECHANISMS: PLAIN
  SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAIN
  KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAIN
  KAFKA_SSL_TRUSTSTORE_LOCATION: /kafka.server.truststore.jks
  KAFKA_SSL_TRUSTSTORE_PASSWORD: password
  KAFKA_SSL_KEYSTORE_LOCATION: /kafka.server.keystore.jks
  KAFKA_SSL_KEYSTORE_PASSWORD: password
  KAFKA_SSL_KEY_PASSWORD: password
  KAFKA_OPTS: '-Djava.security.auth.login.config=/path/kafka_server_jaas.conf'
Also, when I check the docker logs, I see:
/usr/bin/start-kafka.sh: line 96: KAFKA_OPTS=-Djava.security.auth.login.config: bad substitution
Any help is greatly appreciated!
The equals sign '=' inside the last value is causing this issue.
KAFKA_OPTS: '-Djava.security.auth.login.config=/path/kafka_server_jaas.conf'
This is what I got after debugging:
+ for VAR in $(env)
+ [[ KAFKA_OPTS=-Djava.security.auth.login.config=/path/kafka_server_jaas.conf =~ ^KAFKA_ ]]
+ [[ ! KAFKA_OPTS=-Djava.security.auth.login.config=/path/kafka_server_jaas.conf =~ ^KAFKA_HOME ]]
++ echo KAFKA_OPTS=-Djava.security.auth.login.config=/path/kafka_server_jaas.conf
++ sed -r 's/KAFKA_(.*)=.*/\1/g'
++ tr '[:upper:]' '[:lower:]'
++ tr _ .
+ kafka_name=opts=-djava.security.auth.login.config
++ echo KAFKA_OPTS=-Djava.security.auth.login.config=/path/kafka_server_jaas.conf
++ sed -r 's/(.*)=.*/\1/g'
+ env_var=KAFKA_OPTS=-Djava.security.auth.login.config
+ grep -E -q '(^|^#)opts=-djava.security.auth.login.config=' /opt/kafka/config/server.properties
start-kafka.sh: line 96: KAFKA_OPTS=-Djava.security.auth.login.config: bad substitution
And this is the piece of code that performs this operation:
88 for VAR in $(env)
89 do
90 if [[ $VAR =~ ^KAFKA_ && ! $VAR =~ ^KAFKA_HOME ]]; then
91 kafka_name=$(echo "$VAR" | sed -r 's/KAFKA_(.*)=.*/\1/g' | tr '[:upper:]' '[:lower:]' | tr _ .)
92 env_var=$(echo "$VAR" | sed -r 's/(.*)=.*/\1/g')
93 if grep -E -q '(^|^#)'"$kafka_name=" "$KAFKA_HOME/config/server.properties"; then
94 sed -r -i 's#(^|^#)('"$kafka_name"')=(.*)#\2='"${!env_var}"'#g' "$KAFKA_HOME/config/server.properties" #note that no config values may contain an '#' char
95 else
96 echo "$kafka_name=${!env_var}" >> "$KAFKA_HOME/config/server.properties"
97 fi
98 fi
99
100 if [[ $VAR =~ ^LOG4J_ ]]; then
101 log4j_name=$(echo "$VAR" | sed -r 's/(LOG4J_.*)=.*/\1/g' | tr '[:upper:]' '[:lower:]' | tr _ .)
102 log4j_env=$(echo "$VAR" | sed -r 's/(.*)=.*/\1/g')
103 if grep -E -q '(^|^#)'"$log4j_name=" "$KAFKA_HOME/config/log4j.properties"; then
104 sed -r -i 's#(^|^#)('"$log4j_name"')=(.*)#\2='"${!log4j_env}"'#g' "$KAFKA_HOME/config/log4j.properties" #note that no config values may contain an '#' char
105 else
106 echo "$log4j_name=${!log4j_env}" >> "$KAFKA_HOME/config/log4j.properties"
107 fi
108 fi
109 done
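The bad substitution itself comes from the ${!env_var} indirect expansion on line 96: because the value of KAFKA_OPTS contains '=', the extracted name KAFKA_OPTS=-Djava.security.auth.login.config is not a valid variable name. A minimal reproduction in bash:
$ env_var='KAFKA_OPTS=-Djava.security.auth.login.config'
$ echo "${!env_var}"
bash: KAFKA_OPTS=-Djava.security.auth.login.config: bad substitution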
Update: They have fixed it and it is merged now!
https://github.com/wurstmeister/kafka-docker/pull/321
There's a bug open with wurstmeister/kafka, but they have gotten back to me with the following workaround:
I believe this is part of a larger namespace collision problem that affects multiple elements such as Kubernetes deployments etc. (as well as other KAFKA_ service settings).
Given you are referencing an external file /kafka_server_jaas.conf, I'm assuming you're OK adding/mounting extra files; a work-around is to specify a CUSTOM_INIT_SCRIPT environment var, which should be a script similar to:
#!/bin/bash
export KAFKA_OPTS="-Djava.security.auth.login.config=/kafka_server_jaas.conf"
This is executed after the substitution part that is failing.
This could have been done inline; however, there is currently a bug in how we process the environment, where we need to specify the input separator to make this work correctly.
Hopefully this works!

How to find files in ClearCase that are both checked out and have a specific file extension

In ClearCase I can find CHECKEDOUT files (in my view) with:
cleartool lsco -me -short -cview -all | sort -r
but I want to apply a regexp to keep only those that are C/C++ (.c, .h) source files. The filter is:
$targettedFileFilter="\\.\(c[cxp]*\|h[h]{0,1}\|sig\)\$";
I tried these two alternatives
Alternative 1:
find . -type f -regextype posix-awk -regex ".*$targettedFileFilter" && cleartool lsco -me -short -cview -d /vobs/rbs/hw/ru_fpga/txl/sw | sort -r
Pitfall: it takes a long time scanning all files.
Alternative 2:
cleartool lsco -me -short -cview -all | sort -r | grep -E '*.cc'
cleartool lsco -me -short -cview -all | sort -r | grep -E '*.h'
....
Pitfall: too much code, and I need to save all the outputs.
Is there a way to list checked out files and apply a filter?
Considering grep -E (--extended-regexp) can interpret regexps (without needing to escape their special characters), all you need to type is:
cleartool lsco -me -short -cview -all | sort -r | grep -E '\.(cc|h)'
Patterns or wildcards are not mentioned in the cleartool lsco documentation.
As Brian Cowan comments:
cleartool lsco -me -short -cview -all | grep -E '\.(cc|h)$' | sort -r
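If you want the full extension set from your $targettedFileFilter (.c/.cc/.cxx/.cpp, .h/.hh, .sig), the same pipeline works with the pattern written unescaped for grep -E, for example:
cleartool lsco -me -short -cview -all | grep -E '\.(c[cxp]*|hh?|sig)$' | sort -r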

awk version issue - convert hex to decimal

I usually write scripts on my Mac and then, once they are ready, I sftp them to my test box at work. The issue I am facing here is that I have a stream of data that is an IP address in hex format. I am using a mix of sed and awk to parse it and convert it into a more readable format.
$echo $content12
cb5c860100000000000000000000000000
[DoD#MBP-13~] echo $content12 |
sed -e 's/../&./g' -e 's/.$//' | sed 's/[0-9a-z][0-9a-z]/0x&/g' |
awk -F"." '{for (i=1;i<NF;i++) printf ("%d\n", $i)}' |
awk '{if (NR<5) printf $0; printf "."}' | sed 's/\.\.*$//'
203.92.134.1
When I ported this to my test box at work, the script did not work as expected.
$echo $content12 |
sed -e 's/../&./g' -e 's/.$//' | sed 's/[0-9a-z][0-9a-z]/0x&/g' |
awk -F"." '{for (i=1;i<NF;i++) printf ("%d\n", $i)}' |
awk '{if (NR<5) printf $0; printf "."}' | sed 's/\.\.*$//'
0.0.0.0
Version of awk and uname on my Mac:
[DoD#MBP-13~] awk --version
awk version 20070501
[DoD#MBP-13~] uname -a
Darwin MBP-13.local 11.2.0 Darwin Kernel Version 11.2.0: Tue Aug 9 20:54:00 PDT 2011;
root:xnu-1699.24.8~1/RELEASE_X86_64 x86_64
Version of awk and uname on my test box at work -
$ awk --version
GNU Awk 3.1.5
Copyright (C) 1989, 1991-2005 Free Software Foundation
$uname -a
Linux 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010
x86_64 x86_64 x86_64 GNU/Linux
Is this something I can fix with minor changes? I am still very new to the UNIX environment, so my one-liner may seem abnormally long to you. Any suggestions would be greatly appreciated.
You can use the --non-decimal-data option of gawk to cause it to handle octal and hex numbers in the input:
$ echo 0x10 | gawk --non-decimal-data '{ printf "%d", $1 }'
16
versus:
$ echo 0x10 | gawk '{ printf "%d", $1 }'
0
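Applied to the original pipeline, a sketch (assuming the awk on the test box is gawk, as the version output above suggests):
$ echo cb5c860100000000000000000000000000 |
sed -e 's/../&./g' -e 's/.$//' | sed 's/[0-9a-z][0-9a-z]/0x&/g' |
gawk --non-decimal-data -F"." '{for (i=1;i<5;i++) printf "%d%s", $i, (i<4 ? "." : "\n")}'
203.92.134.1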
In essence this problem boils down to feeding printf a string of parameters. printf is a shell builtin, so:
echo "cb5c860100000000000000000000000000" |
sed 's/\(.\{8\}\).*/\1/;s/../"0x&" /g;s/^/printf "%d.%d.%d.%d\n" /'|sh
203.92.134.1
In GNU sed you can evaluate the pattern space, like so:
echo "cb5c860100000000000000000000000000" |
sed 's/\(.\{8\}\).*/\1/;s/../"0x&" /g;s/^/printf "%d.%d.%d.%d" /e'
203.92.134.1
In programming, I've found the hardest thing is not coding but saying what you mean.
Apparently the GNU awk(1) implementation doesn't handle 0x11 as an argument to printf() as you've implemented it:
$ echo cb5c860100000000000000000000000000 | sed -e 's/../&./g' -e 's/.$//' |
sed 's/[0-9a-z][0-9a-z]/0x&/g'
0xcb.0x5c.0x86.0x01.0x00.0x00.0x00.0x00.0x00.0x00.0x00.0x00.0x00.0x00.0x00.0x00.0x00
$ echo cb5c860100000000000000000000000000 | sed -e 's/../&./g' -e 's/.$//' |
sed 's/[0-9a-z][0-9a-z]/0x&/g' |
awk -F"." '{for (i=1;i<NF;i++) printf ("%d\n", $i)}'
0
0
0
...
The mawk(1) installed on my system (by Mike Brennan) -- an alternative to GNU awk(1) that claims to be smaller, faster, and still POSIX 1003.2 (draft 11.3) compliant -- does interpret this as you expected:
$ echo cb5c860100000000000000000000000000 | sed -e 's/../&./g' -e 's/.$//' |
sed 's/[0-9a-z][0-9a-z]/0x&/g' |
mawk -F"." '{for (i=1;i<NF;i++) printf ("%d\n", $i)}' |
mawk '{if (NR<5) printf $0; printf "."}' | sed 's/\.\.*$//'
203.92.134.1$
If you're lucky enough to also have mawk(1) installed and available, this solution may be suitable.
