How to put a condition in Extensible Choice - Jenkins

Hi, I have recently started learning Jenkins. I would like to know how to put a condition on an Extensible Choice parameter.
I have:
Extensible Choice:
  Name: DB
  Description: DB to load for option 1 and 2
  Choice Provider: Textarea Choice Parameter
  Choices:
    File1
    File2
    File3
  Default Choice: File1
When the OS is Linux it should choose File1, and when it is Unix it should choose File2.
Since the default choice is File1, the same File1 is taken for both OS versions. Is there a way to put a condition on this type of scenario?
I want to choose the file that corresponds to that particular OS.

uci - how to revert all unstaged changes

uci documentation says:
All "uci set", "uci add", "uci rename" and "uci delete" commands are staged into a temporary location and written to flash at once with "uci commit".
If I get it right, you first run some commands like the ones mentioned above, and to have the changes written to the configuration files you run uci commit. For example, let's say I have made the following changes...
root@OpenWrt:~# uci changes
network.vlan15.ifname='eth1.15'
network.vlan15.type='bridge'
network.vlan15.proto='static'
network.vlan15.netmask='255.255.255.0'
network.vlan15.ipaddr='192.168.10.0'
...but I don't want to continue and commit them. Is there an easy way to revert all staged changes and avoid doing it one by one?
This should be possible with the following command:
root@firlefanz:~# rm -rf /tmp/.uci/
There is a command to revert all staged changes
revert <config>[.<section>[.<option>]] Revert the given option, section or configuration file.
So, in your case, it should be
uci revert network.vlan15
See https://openwrt.org/docs/guide-user/base-system/uci
This one-liner should do the trick:
uci changes | sed -rn 's%^[+-]?([^=+-]*)([+-]?=.*|)$%\1%p' | xargs -n 1 uci revert
tl;dr The sed command extracts the option names from the staged changes. The xargs command executes the revert command for every extracted option.
Now let's take a deep dive into everything:
uci changes prints the prepared changes, which are then piped to the sed command.
The sed option -r enables extended regular expressions, and -n suppresses the automatic printing of the pattern space; the p flag at the end of the s command prints the result only when the substitution succeeded.
The sed command s does a search and replace, and % is used as the separator character between the search term and the replacement.
The uci changes lines have different formats:
Removed configuration options are prefixed with -.
Added configuration options are prefixed with +.
Changed options don't have a prefix.
To match the prefixes, [+-]? is used. The question mark means that the character class in the square brackets is matched optionally.
The option name is matched with the pattern [^=+-]*. This regex means any number of characters, as long as the character is not one of =, + or -.
It is inside round brackets to mark it as a group so it can be referenced later.
The next pattern ([+-]?=.*|) is also a group. It contains two alternatives split by the pipe.
The second alternative is the easy one: it matches no characters at all. This happens when a uci option is deleted.
The first alternative means that the character = can optionally be preceded by + or -. After the = there can be any number of characters, which is indicated by .*. =<value> appears for added or changed configuration; a preceding + or - indicates that the value is added to or removed from the list, if the option is a list.
In the replacement, the whole line is replaced with the first group via its backreference \1. In other words: only the option name is printed.
All the option names are then sent to xargs. With the option -n 1, xargs executes uci revert <option_name> for every option name sent by sed.
These are some examples of the different formats in the uci changes output:
-a
+b='create new option with this value'
c='change an existing option to this value'
d+='appended to list'
e-='removed from list'
The extracted option names will be the following:
a
b
c
d
e
xargs -n 1 will then execute the following commands:
uci revert a
uci revert b
uci revert c
uci revert d
uci revert e
This is the whole magic of the one-liner.
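A quick way to verify the extraction without touching any real uci state is to pipe the example lines above through the same sed command (values shortened; the printf here is only a stand-in for uci changes):
printf '%s\n' "-a" "+b='new'" "c='changed'" "d+='appended'" "e-='removed'" \
  | sed -rn 's%^[+-]?([^=+-]*)([+-]?=.*|)$%\1%p'
This prints the five option names a through e, one per line, which is exactly what xargs receives.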
I didn't find a uci command to revert all uncommitted changes, but you can probably parse the output of the uci changes command with some shell scripting to achieve the desired result. Here is an example script:
#!/bin/ash
# uci-revert-all.sh
# Revert all uncommitted uci changes
# Iterate over changed settings
# Each line has the form of an equation, e.g. parameter=value
for setting in $(uci changes); do
    # Extract the parameter from the equation
    parameter=$(echo ${setting} | grep -o '^\(\w\|[._-]\)\+')
    # Display a status message
    echo "Reverting: ${parameter}"
    # Revert the setting for the given parameter
    uci revert "${parameter}"
done
A simpler alternative might be to use the uci revert <config> syntax, e.g.:
#!/bin/ash
# uci-revert-all.sh
# Revert all uncommitted uci changes
for config in /etc/config/*; do
    uci revert $(basename ${config})
done
Both of these approaches worked well for me on a router running LEDE 4.
Here's another short one-liner to revert ALL unstaged changes (as per the question):
for i in /etc/config/* ; do uci revert ${i##*/} ; done
(FYI, this uses POSIX parameter expansion's "Remove Largest Prefix Pattern": with i=/etc/config/network, ${i##*/} expands to network.)

How to join 2 files using a pattern

Is it possible to join these files based on a first-column pattern by using awk?
Thanks
file1
qwex-123d-947774-sm-shebha
qwex-123d-947774-sm-shebhb
qwex-123d-947774-sm-shebhd
qwex-23d-947774-sm-shebha
qwex-23d-947774-sm-shebhb
qwex-235d-947774-sm-shebhd
file2
qwex-235d none1
qwex-23d none2
output
qwex-23d none2 qwex-23d-947774-sm-shebha
qwex-23d none2 qwex-23d-947774-sm-shebhb
qwex-235d none1 qwex-235d-947774-sm-shebhd
This awk one-liner should do it:
awk 'NR==FNR{a[$1]=$0;next}{for(x in a)if($0~"^"x){print a[x], $0;break}}' file2 file1
Note that this one-liner is risky if the first column of your file2 contains characters that have special meaning in a regex, like qwex$-23d.
If that is the case, ~ should not be used; instead, we should compare the strings literally, as shown below.
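For example, a literal prefix comparison can be done with index(), which returns 1 exactly when its second argument is a literal prefix of the first; this is a sketch following the same structure as the one-liner above:
awk 'NR==FNR{a[$1]=$0;next}{for(x in a)if(index($0,x)==1){print a[x], $0;break}}' file2 file1
Here no regex is involved, so characters like $ in file2's first column are matched literally.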

Trying to output all possible combinations of joining two files

I have a folder of 24 different files that all have the same tab-separated format:
This is an example:
zinc-n with-iodide-n 8.0430 X
zinc-n with-amount-of-supplement-n 12.7774 X
zinc-n with-value-of-horizon-n 14.5585 X
zirconium-n as-valence-n 11.3255 X
zirconium-n for-form-of-norm-n 15.4607 X
I want to join the files in every possible combination of 2.
For instance, I want to join File 1 and File 2, File 1 and File 3, File 1 and File 4... and so on, until I have an output of 552 files, joining EACH file with EACH other file, considering all the UNIQUE combinations.
I know this can be done for instance in the Terminal with cat.
i.e.
cat File1 File2 > File1File2
cat File1 File3 > File1File3
... and so on.
But doing this for each unique combination would be an extremely laborious process.
Is there a possible way to automate this process and join all of the unique combinations using a command line in Terminal, with grep for instance? Or perhaps another suggestion for a more optimized solution than cat?
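For what it's worth, here is a minimal POSIX shell sketch of the same idea, assuming the input files match the glob file* and that no other files do (the names are placeholders for your real files):
set -- file*                        # expand the input file names once
for a in "$@"; do
    shift                           # drop "$a" so each unordered pair is produced only once
    for b in "$@"; do
        cat "$a" "$b" > "${a}${b}"  # e.g. file1 + file2 -> file1file2
    done
done
Note that the output names would match the same glob on a second run, so run it in a clean directory or adjust the patterns.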
You can try it with Python. I use the combinations() function from the itertools module and join() the contents of each pair of files. Note that combinations() yields unordered pairs, so 24 files produce 276 output files (24×23/2); if you really want both orders of every pair (552 files), use itertools.permutations() instead. Also note that I use a cache to avoid reading each file many times, but you could exhaust your memory, so use the best approach for you:
import sys
import itertools

seen = {}
for files in itertools.combinations(sys.argv[1:], 2):
    outfile = ''.join(files)
    oh = open(outfile, 'w')
    if files[0] in seen:
        f1_data = seen[files[0]]
    else:
        f1_data = open(files[0], 'r').read()
        seen[files[0]] = f1_data
    if files[1] in seen:
        f2_data = seen[files[1]]
    else:
        f2_data = open(files[1], 'r').read()
        seen[files[1]] = f2_data
    print('\n'.join([f1_data, f2_data]), file=oh)
A test, assuming the following content of three files:
==> file1 <==
file1 one
f1 two
==> file2 <==
file2 one
file2 two
==> file3 <==
file3 one
f3 two
f3 three
Run the script like:
python3 script.py file[123]
And it will create three new files with content:
==> file1file2 <==
file1 one
f1 two
file2 one
file2 two
==> file1file3 <==
file1 one
f1 two
file3 one
f3 two
f3 three
==> file2file3 <==
file2 one
file2 two
file3 one
f3 two
f3 three

What tools deal with spaces in columnar data well?

Let's start with an example that I ran into recently:
C:\>net user
User accounts for \\SOMESYSTEM
-------------------------------------------------------------------------------
ASPNET                   user1                    AnotherUser123
Guest                    IUSR_SOMESYSTEM          IWAM_SOMESYSTEM
SUPPORT_12345678         test userrrrrrrrrrrr     test_userrrrrrrrrrrr
The command completed successfully.
In the third row, second column there is a login with a space. This causes many of the tools that separate fields based on white space to treat this field as two fields.
How would you deal with data formatted this way using today's tools?
Here is an example in pure** Windows batch language on the command prompt that I would like to have replicated in other modern cross-platform text processing tool sets:
C:\>cmd /v:on
Microsoft Windows [Version 5.2.3790]
(C) Copyright 1985-2003 Microsoft Corp.
C:\>echo off
for /f "skip=4 tokens=*" %g in ('net user ^| findstr /v /c:"The command completed successfully."') do (
More? set record=%g
More? echo !record:~0,20!
More? echo !record:~25,20!
More? echo !record:~50,20!
More? )
ASPNET
user1
AnotherUser123
Guest
IUSR_SOMESYSTEM
IWAM_SOMESYSTEM
SUPPORT_12345678
test userrrrrrrrrrrr
test_userrrrrrrrrrrr
echo on
C:\>
** Using delayed variable expansion (cmd /v:on or setlocal enabledelayedexpansion in a batch file), the for /f command output parser, and the variable substring syntax... none of which are well documented except at the wonderful website http://ss64.com/nt/syntax.html
Looking into AWK, I didn't see a way to deal with the 'test userrrrrrrrrrrr' login field without using substr() in a way similar to the variable substring syntax above. Is there another language that makes text wrangling easy and is not write-only like sed?
PowerShell:
Native user list example, no text matching needed
Get-WmiObject Win32_UserAccount | Format-Table -Property Caption -HideTableHeaders
Or, if you want to use "NET USER":
$out = net user # Send stdout to $out
$out = $out[4..($out.Length-3)] # Skip header/tail
[regex]::split($out, "\s{2}") | where { $_.Length -ne 0 }
# Split on double-space and skip empty lines
Just do a direct query for user accounts, using VBScript (or PowerShell, if your system supports it):
strComputer = "."
Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\cimv2")
Set colItems = objWMIService.ExecQuery("Select * from Win32_UserAccount",,48)
For Each objItem in colItems
    Wscript.Echo objItem.Name
Next
This will show you a list of users, one per line. If your objective is just to show the user names, there is no need to use other tools to process the data.
Awk isn't so great for this problem, because awk is focused on lines as records with a recognizable field separator, while the example uses fixed-width fields. You could, e.g., try to use a regular expression as the field separator, but that can go wrong. The right way would be to use the fixed width to clean the file up into something easier to work with; awk can do this, but it is inelegant.
Essentially, the example is difficult because it doesn't follow any clear rules. The best approach is a quite general one: write data to files in a well-defined format with a library function, and read files with a complementary library function. The specific language doesn't matter so much with this strategy. Not that that helps when you already have a file like the example.
TEST
printf "
User accounts for \\SOMESYSTEM
-------------------------------------------------------------------------------
ASPNET                   user1                    AnotherUser123
Guest                    IUSR_SOMESYSTEM          IWAM_SOMESYSTEM
SUPPORT_12345678         test userrrrrrrrrrrr     test_userrrrrrrrrrrr
The command completed successfully.
\n" | awk 'BEGIN{
    colWidth=25
}
/-----/ {next}
/^[[:space:]]*$/{next}
/^User accounts/{next}
/^The command completed/{next}
{
    col1=substr($0,1,colWidth)
    col2=substr($0,1+colWidth,colWidth)
    col3=substr($0,1+(colWidth*2),colWidth)
    printf("%s\n%s\n%s\n", col1, col2, col3)
}'
There's probably a better way than the 1+(colWidth*2), but I'm out of time right now.
If you try to execute code as is, you'll have to remove the leading spaces at the front of each line in the printf statement.
I hope this helps.
For this part:
set record=%g
More? echo !record:~0,20!
More? echo !record:~25,20!
More? echo !record:~50,20!
I would use:
for /f "tokens=1-26 delims= " %a in (%g%) do (
if not "%a" = "" echo %a
if not "%b" = "" echo %b
if not "%c" = "" echo %c
rem ... and so on...
if not "%y" = "" echo %y
if not "%z" = "" echo %z
)
That is, if I had to do this using batch. But I wouldn't dare to call it "modern", as per your question.
perl is really the best choice for your case, and millions of others. It is very common, and the web is rife with examples and documentation. Yes, it is cross-platform and extremely stable, and it is nearly perfectly consistent across platforms. I say nearly because nothing is perfect, and I doubt that in your lifetime you would encounter an inconsistency.
It is a language interpreter but supports a rich command-line interface as well.
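For example, fixed-width records like the net user output can be split with perl's unpack: the A25 template code takes a 25-character column and strips trailing whitespace. A minimal sketch, assuming the three 25-character columns from the batch example above and that the header and footer lines have already been removed into a file netuser.txt (a hypothetical name):
perl -ne 'print "$_\n" for grep { /\S/ } unpack("A25 A25 A25", $_)' netuser.txt
This prints one login per line, including the 'test userrrrrrrrrrrr' account with its embedded space intact.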

Addressing a specific occurrence of a character in sed

How do I remove or address a specific occurrence of a character in sed?
I'm editing a CSV file and I want to remove all text between the third and the fifth occurrence of the comma (that is, dropping fields four and five) . Is there any way to achieve this using sed?
E.g:
% cat myfile
one,two,three,dropthis,dropthat,six,...
% sed -i 's/someregex//' myfile
% cat myfile
one,two,three,,six,...
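For the record, it can be done with sed itself by capturing the first three fields and deleting the next two. A sketch using basic regular expressions; like the other delimiter-based answers below, it does not handle quoted fields that contain commas:
sed 's/^\(\([^,]*,\)\{3\}\)[^,]*,[^,]*/\1/' myfile
On the example line this turns one,two,three,dropthis,dropthat,six,... into one,two,three,,six,...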
If it is okay to consider the cut command, then:
$ cut -d, -f1-3,6- file
awk, or any other tool that is able to split strings on delimiters, is better for this job than sed.
$ cat file
1,2,3,4,5,6,7,8,9,10
Ruby(1.9+)
$ ruby -ne 's=$_.split(","); s[2,3]=nil ;puts s.compact.join(",") ' file
1,2,6,7,8,9,10
Using awk:
$ awk 'BEGIN{FS=OFS=","}{$3=$4=$5="";}{gsub(/,,*/,",")}1' file
1,2,6,7,8,9,10
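Note that the gsub(/,,*/,",") squeeze would also collapse any field that was already empty elsewhere in the line. A sketch of a variant that rebuilds the record, skipping the same fields 3 to 5 explicitly and leaving other empty fields intact:
awk 'BEGIN{FS=OFS=","}{out=$1; for(i=2;i<=NF;i++) if(i<3||i>5) out=out OFS $i; print out}' file
On the sample line this also prints 1,2,6,7,8,9,10.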
A real parser in action
#!/usr/bin/python
import csv
import sys

cr = csv.reader(open('my-data.csv', 'rb'))
cw = csv.writer(open('stripped-data.csv', 'wb'))
for row in cr:
    cw.writerow(row[0:3] + row[5:])
But do note the preface to the csv module:
The so-called CSV (Comma Separated Values) format is the most common import and export format for spreadsheets and databases. There is no “CSV standard”, so the format is operationally defined by the many applications which read and write it. The lack of a standard means that subtle differences often exist in the data produced and consumed by different applications. These differences can make it annoying to process CSV files from multiple sources. Still, while the delimiters and quoting characters vary, the overall format is similar enough that it is possible to write a single module which can efficiently manipulate such data, hiding the details of reading and writing the data from the programmer.
$ cat my-data.csv
1
1,2
1,2,3
1,2,3,4,
1,2,3,4,5
1,2,3,4,5,6
1,2,3,4,5,6,
1,2,,4,5,6
1,2,"3,3",4,5,6
1,"2,2",3,4,5,6
,,3,4,5
,,,4,5
,,,,5
$ python csvdrop.py
$ cat stripped-data.csv
1
1,2
1,2,3
1,2,3
1,2,3
1,2,3,6
1,2,3,6,
1,2,,6
1,2,"3,3",6
1,"2,2",3,6
,,3
,,
,,
