I am stuck with the following situation in Jenkins.
A job needs to build multiple make targets. This happens through multiple invocations of make per run, since our setup allows only one target at a time. I want to allow users to select which targets to build on each run.
I tried the Extended Choice Parameter plugin (multi-select), but could not figure out how to parse the multiple values from it, or how to structure my call to make.
Can someone help me with this?
The Extended Choice Parameter plugin will always list the selected values as TARGET=value1,value2. At best, you can enforce that the values are quoted, like this: TARGET="value1,value2".
You have to parse this TARGET value to get it into the format you want.
If you could pass targets to make in sequence, like make value1 value2, all you would need is to change each comma (,) in the value of TARGET to a space. You didn't provide your OS, so I will assume *nix. You can do that quickly with ${TARGET//,/ }.
Finally, since make does not appear to support multiple targets (according to the OP), we need a loop.
So, in your Jenkins Execute Shell build step, type:
for currentTarget in ${TARGET//,/ }; do
make $currentTarget
done
This will be equivalent to:
make value1
make value2
As for ordering: the values will always appear in the same order as they are defined in the job configuration; it doesn't matter in what order the user selects them.
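If any target name could ever contain spaces, or you want the build to stop at the first failing target, here is a variant sketch of the same loop (the shebang, set -e, and the read into an array are my additions; Jenkins' default Execute Shell interpreter may be plain sh, hence the explicit bash shebang):
#!/bin/bash
# Split the comma-separated selection into an array and stop on the first failure.
set -e
IFS=',' read -r -a targets <<< "${TARGET}"
for currentTarget in "${targets[@]}"; do
    make "$currentTarget"
done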
REQUIREMENT:
The parameter $TARGETS has 26 values (values might be added or deleted in the future). The user wants to select which targets to build this time.
$TARGETS=make1,make2,make3.......
TRIAL:
Jenkins "extended choice parameter". Could assign multiple values to a parameter and user gets the option to select multiple values.
During build with make $TARGETS, I got error, could not "make make1 make2".
I needed
make make1
make make2
SOLUTION:
In the build step, use a for loop:
for currentTarget in ${TARGETS//,/ }; do
make $currentTarget
done
Thanks to @Slav.
Hope this helps people who might face similar situation.
Thanks
I need to check the existence of values based on some conditions.
I.e. I have 3 variables: varA, varB and varC. varC should be non-empty only when varA>varB (the condition).
I normally use syntax like the following to check a variable and then run a frequency on the check variable to see if there are errors:
if missing(varC) and (varA>varB) ck_varC=1.
if not(missing(varC)) and not(varA>varB) ck_varC=2.
exe.
fre ck_varC.
exe.
I had some errors when the condition became complex, or when the condition itself contains missing() or other functions, but I could have made a mistake.
Do you think there is an easier way of doing these checks?
Thanks in advance.
EDIT: Here is an example of what I mean. Think of a questionnaire with some routing: you ask everyone their age; if they are between 17 and 44 you ask whether they work; if they work, you ask how many hours.
I have an Excel tool where I list all the variables with all their conditions; it then generates syntax like the example above, with the same structure for every variable, covering both situations: a value that shouldn't be there, and a missing value that should be there.
Is there an easier way of doing that? Is this structure always valid no matter what the condition is?
In SPSS, missing values are not numbers, so you need to program those scenarios explicitly as well. You have varC covered (partially), but no scenario where varA or varB has missing data is covered.
(As good practice, maybe you should initialize your check variable as sysmis or 0, using syntax):
numeric ck_varC (f1.0).
compute ck_varC=0.
if missing(varC) and (varA>varB) ck_varC=1.
if not(missing(varC)) and not(varA>varB) ck_varC=2.
***additional conditional scenarios go here:.
if missing(varA) or missing(varB) ck_varC=3.
...
fre ck_varC.
By the way - you do not need any of the exe. commands if you are going to run your syntax as a whole.
Later Edit, after the poster updated the question:
Your syntax would be something like this. Note the use of the range function, which is not mandatory, but might be useful for you in the future.
I am also assuming that work is a string variable, so its values need to be referenced using quotation marks.
if missing(age) ck_age=1.
if missing(work) and range(age,17,44) ck_work=1.
if missing(hours) and work="yes" ck_hours=1.
if not (missing (age)) and not(1>0) ck_age=2. /*this will never happen because of the not(1>0).
if not(missing(work)) and (not range(age,17,44)) ck_work=2. /*note that if age is missing, this ck_work won't be set here.
if not(missing(hours)) and (not(work="yes")) ck_hours=2.
EXECUTE.
String variables are case sensitive.
There is no missing equivalent for strings; an empty/blank string ("") is still a string, so not(work="yes") is true when work is blank ("").
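If work is a plain string that is simply left blank when a respondent skips it (an assumption on my part), you could add explicit checks for the blank case; the upcase() guard is likewise my addition, to sidestep the case-sensitivity issue:
* Sketch only: SPSS pads strings with blanks when comparing, so work = " " matches an all-blank value.
if work = " " and range(age,17,44) ck_work = 1.
if not(work = " ") and not(range(age,17,44)) ck_work = 2.
if not(missing(hours)) and not(upcase(work) = "YES") ck_hours = 2.
fre ck_work ck_hours.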
I'm using a parameterized build on a Jenkins 2.14 installation, meaning the user can trigger the build with the desired parameters. I have a choice parameter with the name - let's say MYPARAMETER. Now I can of course use this string variable in Post-build action input fields like ${MYPARAMETER}.
In this choice there are lower-case values for the user to choose from. In one specific input field (in a Post-build action) I need the same variable value, but with the first letter in upper case (due to case sensitivity in a path).
Is there a way to manipulate the existing variable? As a dirty fix, I currently have a second choice parameter with the same values, differing only in capitalization.
MYPARAMETER="abc"
echo ${MYPARAMETER^}
Abc
Using ^ converts the first character to upper case; ^^ converts all characters.
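A few more variations, as a sketch (these case-modifying expansions need Bash 4 or later, so in a Jenkins Execute Shell step you may need an explicit bash shebang; the value of MYPARAMETER is just an illustration):
#!/bin/bash
MYPARAMETER="abc"
echo "${MYPARAMETER^}"     # Abc - upper-case the first character only
echo "${MYPARAMETER^^}"    # ABC - upper-case every character
echo "${MYPARAMETER,,}"    # abc - lower-case every character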
I have 48 .sav data sets containing the results of a monthly survey. I need to merge the cases of all common variables from them, in order to come up with a 4-year aggregate. As I'm new to SPSS and not very proficient with syntax (although I can follow it), I would normally do this using Data - Merge files - Add Cases, but most of these common variables have different variable names in each data set, as the questions are not always formulated in the same order and some questions only appear in one or two data sets.
However, the variable labels do not change from one data set to another. It would be great if someone knows a way to merge these data sets by variable label instead of variable name. Swapping variable names and variable labels would also do, as then I could use Data - Merge files - Add Cases without problems.
Many thanks beforehand!
The merge procedures such as ADD FILES (Data > Merge Files > Add Cases) provide a capability to rename variables in the input files before merging. However, if there are a lot of variables to merge, this would get pretty tedious and error prone. Also, the dialog box supports only merging two files, while syntax allows up to 50.
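For illustration, a minimal ADD FILES sketch of that rename-while-merging capability (the file names and variable names are placeholders, not taken from your data):
ADD FILES
  /FILE="month01.sav" /RENAME=(q5_satisfaction = satisfaction)
  /FILE="month02.sav" /RENAME=(q12_satisfaction = satisfaction).
EXECUTE.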
Variable labels are generally not valid as variable names, due to the typical presence of characters such as blanks and punctuation, and due to length restrictions. If you have a rule that could be used to turn labels into valid variable names, that could be automated; or, if the variables are always in the same order and are present in all the files, they could be renamed something like V1, V2, ...
The renaming could be done manually in syntax that you would craft for each file, or this could be done with a short Python program that you run on each file. I can write that for you if you provide details and, preferably, a sample dataset to test with (jkpeck AT gmail.com).
The Python code could loop over all the sav files in a directory and apply the renaming logic to each in one step.
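Just to make that idea concrete, here is a rough sketch of such a program using the SPSS Python integration; the folder path, the crude label-to-name rule, and the handling of edge cases (duplicate names, names starting with a digit) are all assumptions you would need to adapt:
BEGIN PROGRAM.
import glob, re, spss

for sav in glob.glob(r"C:/survey_data/*.sav"):     # assumed folder
    spss.Submit('GET FILE="%s".' % sav)
    renames = []
    for i in range(spss.GetVariableCount()):
        label = spss.GetVariableLabel(i)
        if not label:
            continue
        # Crude rule: replace anything that is not a word character and cap the length.
        newname = re.sub(r"\W+", "_", label).strip("_")[:64]
        if newname and newname != spss.GetVariableName(i):
            renames.append("(%s = %s)" % (spss.GetVariableName(i), newname))
    if renames:
        spss.Submit("RENAME VARIABLES %s." % " ".join(renames))
        spss.Submit('SAVE OUTFILE="%s".' % sav)
END PROGRAM.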
How do I run multiple sets of regressions in SPSS without having to retype the command each time or change the dependent variable manually every single time?
I need to run a lot of regressions with the same independent variables but I need to change the dependent variable. Is there a possibility to make this process easier?
Thank you very much for your help.
Note also that if you have to repeat this process for each country, you can use SPLIT FILE with the country id, and statistical procedures, including REGRESSION, will automatically iterate over each country.
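A minimal sketch of that approach, with country, y, and x1 to x3 standing in for your actual variable names:
SORT CASES BY country.
SPLIT FILE LAYERED BY country.
REGRESSION
  /DEPENDENT y
  /METHOD=ENTER x1 x2 x3.
SPLIT FILE OFF.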
Let's say you have 50 dependent variables, each of which needs to be regressed on the same predictors using the same regression options. Paste your list of dependent variables into Excel as a vertical list (cells B1:B50). Into cells A1:A50, paste the part of your regression syntax that comes before the name of the dependent variable, right up to and including "/dependent ". Into cells C1:C50, paste the part of your syntax that follows the name of the dependent variable. Then in cell D1 type "=CONCATENATE(A1,B1,C1)". Paste that formula down through cells D2:D50 and you'll have your 50 commands to paste into a syntax window. It may show gridlines; SPSS will not have any problem with these.
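For example, one generated row might read like this, with the statistics subcommand and the predictor names standing in for whatever your fixed parts of the syntax contain:
REGRESSION /STATISTICS COEFF R ANOVA /DEPENDENT dep01 /METHOD=ENTER x1 x2 x3.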
By the way, what sort of research context requires these identically configured regressions for a large number of outcomes?
I maintain a 3rd-party Informix driver that's written with ESQL-style (Informix API) calls. I'm working on a bug where, for TEXT fields, INSERTs work fine and UPDATEs fail. Stepping through the code, I've found that we check our sqlda structure to tell us whether and how to bind. After the call to sqli_describe_statement, in the INSERT case the sqlda.sqld variable contains 2, the correct number of bound parameters for that call, and the parameters appear to be set up correctly; in the UPDATE case, the number returned is 0, with no parameter information (it should be 1, for the one parameter in "UPDATE TESTTAB SET COLNAME = ? WHERE OTHERCOLNAME = 1").
Using the sqlda information, we correctly set up the required locator structure for the INSERT, but we can't for the update because the information isn't there. If I fake it out in the debugger and run the set-up-the-locator code for the update, it updates fine.
The statement certainly appears correct, and the same variable is being used for the INSERT as for the UPDATE bind. Moreover, sqli_prep has no problem with the update. For the describe, sqsla.code returns different non-negative numbers, 4 and 6, representing the different types of statements being described, as documented (i.e., not an error code), so there's no obvious problem there.
Is there something else I should be checking in the code ahead of this that might cause this weird behavior (other than special-case handling for the different queries -- there's nothing like that)?
Am I missing something fundamental here about how one does UPDATEs on TEXT fields, such as having to create a locator object, find the row, and click your heels together three times and say "There's no place like IBM"?
So far Google Fu has turned up little in the documentation, but if you know of docs or samples that point the way, that's cool too.
This is one of the murky areas of Informix behaviour. A plain DESCRIBE is supposed to describe the output parameters (it is shorthand for DESCRIBE OUTPUT stmt INTO ...); to describe the input parameters, you would use DESCRIBE INPUT stmt INTO ... instead.
However, for various reasons extending back to the dawn of time (well, 1985, anyway), the INSERT statement got a special case exemption and plain DESCRIBE described its input parameters - unlike UPDATE or DELETE (or, these days, MERGE).
So, your code was probably written before DESCRIBE INPUT and DESCRIBE OUTPUT became feasible (that was circa 2000±3 years). In principle, using the directed DESCRIBE statements should fix the issue. There may be an ONCONFIG parameter to be set to get this behaviour.
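In ESQL/C terms, the directed describe for your UPDATE would look roughly like the sketch below; the statement and descriptor names are placeholders, and a driver that calls the sqli_* functions directly will see the preprocessor-generated equivalents of these statements rather than the statements themselves:
/* Sketch: describe the input placeholders of the UPDATE explicitly. */
EXEC SQL prepare upd_stmt from
    "UPDATE TESTTAB SET COLNAME = ? WHERE OTHERCOLNAME = 1";

struct sqlda *upd_desc = 0;
EXEC SQL describe input upd_stmt into upd_desc;

/* upd_desc->sqld should now be 1, and upd_desc->sqlvar[0] should describe the
   TEXT column, so the locator can be set up the same way as for the INSERT. */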
I remember being grateful that the feature arrived, but also I remember thinking "Damn, I'm not going to be able to use that for a while - until the old versions without it are all retired". I think that has basically happened now - IDS 7.31 in particular is now obsolete, and so indeed are the IDS 9.x versions, so all available versions of IDS support the feature. OnLine 5.20 - a minority interest - still doesn't and won't ever support it. So, I need to review how to update my programs such as SQLCMD to exploit this. The code there includes what I call 'vignettes'; they're complete little programs that illustrate how to work with BYTE and TEXT blobs. You might find UPDBLOB or APPBLOB, for example, of some use.