Error in process execution from Nolio - devops

I'm new to Nolio, and I've configured my first process.
After publishing it successfully I tried running the process, but it failed.
The following error appeared:
"Parameter value check failed: Parameter [tar.gz] with scope [RELEASE] is missing parameter value. Process run will be stopped."
What does this error mean? What should I check?
Thanks

The parameter tar.gz is probably defined with a Release scope (the other options are internal, user-input and environment). This means that unless it is given a default value, it only gets its value in the context of a deployment. If you run the process directly, outside the context of a deployment, the parameter has no value, which can lead to the process getting stuck or failing.
In general, the process is the logic and the deployment gives it context. A deployment is a series of steps (processes) with specific values assigned to their release parameters.
If you want to use this parameter in a process that is not executed within a deployment, define it as internal (or as user-input if you want to set it manually during execution).

Related

Post-build Actions part does not see injected environment variable

A script generates a properties file in the workspace from an Execute shell block in the Build section. The file is available in the workspace after script execution, and in case of a failed build this properties file is injected (via a Conditional steps (multiple) block in the Build section). My Jenkins job sends an e-mail (Editable Email Notification block) in case of a failed build, and the mail should contain the variable from the properties file, but the block does not see this variable. FYI: this block can use other environment variables.
I have cross-checked the properties file and it contains the required variable in every case.
Properties file in the workspace:
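For illustration only, such a properties file might look like the following (MY_FAILURE_VAR stands in for the actual, trickily named variable):
MY_FAILURE_VAR=details collected during the failed build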
Environment variable injection from properties file:
This Steps to run if condition is met block contains several other actions and those work fine, which means execution can reach this block.
Editable Email Notification block in Post-build:
If I check the Environment Variables option in a build, I can see the variable:
But when I get the mail, it doesn't contain the variable:
Any idea how can I solve it or what should I change?
NOTE: The variable is unique and not really related to Gerrit, so I cannot use another variable which comes from Gerrit. Just the name of the variable is a little tricky.
I have found the answer to my question. Jenkins (or the plugin) has a limitation: it cannot handle the Failure state. If the previous Execute shell block fails, execution never reaches the Conditional steps (multiple) block.
However, I have found a workaround for this problem.
Step 1
Exit from the Execute shell block with a specific return code, e.g. 111.
Step 2
Set the Exit code to set build unstable field to your specific exit code. (You can find this field in the advanced options of the Execute shell block.)
Step 3
Configure the Conditional steps (multiple) block to handle the Unstable state. With this, execution is able to reach the Conditional steps (multiple) block.
Step 4
Create an Execute shell block inside the Conditional steps (multiple) block, after everything you want to do in the failure case has been prepared. After this block your job status changes from Unstable to Failed.
With this solution you can handle the failed job and in the end you get a real failed job (not an unstable one); a minimal sketch of steps 1 and 4 is shown below.
Not the most elegant solution, but it works.
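A rough sketch of what the two Execute shell blocks from steps 1 and 4 might contain, assuming 111 is the exit code configured in step 2 (the build command is a placeholder):
# First Execute shell block (Build section): map a failure to the special code
./run_build_steps.sh || exit 111
# Last Execute shell block inside the Conditional steps (multiple) block
# (configured to run on Unstable): once the failure handling is done,
# exit non-zero so the final job status becomes Failed again
exit 1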

Robot Framework: How to do "Distributed Testing"?

For our end-to-end tests, we want to set up a distributed testing environment. That means we want a Docker hub container that distributes the tests of a test suite on a first-in, first-served basis to its Docker container workers.
How can we achieve that in Robot Framework? For a better idea of what we want to implement, here is a short illustration:
Thank you very much!
Following up on A. Kootstra's comment.
Pabot allows us to run parallel execution of suites.
Pabot will split test execution from suite files and not from the individual test level.
In the general case you can't count on tests that haven't been designed to be executed in parallel to work out of the box when run in parallel. For example, if the tests manipulate or use the same data, you might get yourself into trouble (one test suite logs in to the system while another logs the same session out, etc.). PabotLib can help you solve these concurrency problems.
Example:
test.robot
*** Settings ***
Library    pabot.PabotLib

*** Test Cases ***
Testing PabotLib
    Acquire Lock    MyLock
    Log    This part is critical section
    Release Lock    MyLock
    ${valuesetname}=    Acquire Value Set
    ${host}=    Get Value From Set    host
    ${username}=    Get Value From Set    username
    ${password}=    Get Value From Set    password
    Log    Do something with the values (for example access host with username and password)
    Release Value Set
    Log    After value set release others can obtain the variable values
valueset.dat
[Server1]
HOST=123.123.123.123
USERNAME=user1
PASSWORD=password1
[Server2]
HOST=121.121.121.121
USERNAME=user2
PASSWORD=password2
pabot call
pabot --pabotlib --resourcefile valueset.dat test.robot
You can find more info here https://github.com/mkorpela/pabot
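If you also want to control how many parallel workers pick up the suites (the distribution asked about above), pabot lets you set the process count explicitly; for example (the count and the tests directory are placeholders):
pabot --processes 4 --pabotlib --resourcefile valueset.dat tests/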

Dataflow/Beam Templates, Productionization, Initialization, and ValueProviders

I have an Apache Beam job running on Google Cloud Dataflow, and as part of its initialization it needs to run some basic sanity/availability checks on services, pub/sub subscriptions, GCS blobs, etc. It's a streaming pipeline intended to run ad infinitum that processes hundreds of thousands of pub/sub messages.
Currently it needs a whole heap of required, variable parameters: which Google Cloud project it needs to run in, which bucket and directory prefix it's going to be storing files in, which pub/sub subscriptions it needs to read from, and so on. It does some work with these parameters before pipeline.run is called - validation, string splitting, and the like. In its current form, in order to start a job we've been passing these parameters to a PipelineOptionsFactory and issuing a new compile every single time, but it seems like there should be a better way. I've set up the parameters to be ValueProvider objects, but because they're being called outside of pipeline.run, Maven complains at compile time that ValueProvider.get() is being called outside of a runtime context (which, yes, it is.)
I've tried using NestedValueProviders as in the Google "Creating Templates" document, but my IDE complains if I try to use NestedValueProvider.of to return a string as shown in the document. The only way I've been able to get NestedValueProviders to compile is as follows:
NestedValueProvider<String, String> pid = NestedValueProvider.of(
    pipelineOptions.getDataflowProjectId(),
    (SerializableFunction<String, String>) s -> s
);
(String pid = NestedValueProvider.of(...) results in the following error: "incompatible types: no instance(s) of type variable(s) T,X exist so that org.apache.beam.sdk.options.ValueProvider.NestedValueProvider conforms to java.lang.String")
I have the following in my pipelineOptions:
ValueProvider<String> getDataflowProjectId();
void setDataflowProjectId(ValueProvider<String> value);
Because of the volume of messages we're going to be processing, adding these checks at the front of the pipeline for every message that comes through isn't really practical; we'll hit daily account administrative limits on some of these calls pretty quickly.
Are templates the right approach for what I want to do? How do I go about actually productionizing this? Should (can?) I compile with maven into a jar, then just run the jar on a local dev/qa/prod box with my parameters and just not bother with ValueProviders at all? Or is it possible to provide a default to a ValueProvider and override it as part of the options passed to the template?
Any advice on how to proceed would be most appreciated. Thanks!
The way templates are currently implemented, there is no point at which to perform initialization/validation that is "post-template creation" but "pre-pipeline start".
All of the existing validation executes during template creation. If the validation detects that the values aren't available (because they are ValueProviders), the validation is skipped.
In some cases it is possible to approximate validation by adding runtime checks, either as part of the initial splitting of a custom source or as part of the @Setup method of a DoFn. In the latter case, the @Setup method will run once for each instance of the DoFn that is created. If the pipeline is Batch, after 4 failures for a specific instance it will fail the pipeline.
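A minimal sketch of that @Setup check, assuming a hypothetical bucketExists helper and illustrative names that are not from the original job:
import org.apache.beam.sdk.options.ValueProvider;
import org.apache.beam.sdk.transforms.DoFn;

class ValidatingDoFn extends DoFn<String, String> {
  private final ValueProvider<String> bucket;

  ValidatingDoFn(ValueProvider<String> bucket) {
    this.bucket = bucket;
  }

  @Setup
  public void setup() {
    // Runs once per DoFn instance, in a runtime context, so the
    // ValueProvider can safely be read here.
    String bucketName = bucket.get();
    if (!bucketExists(bucketName)) {  // hypothetical availability check
      throw new IllegalStateException("Bucket not available: " + bucketName);
    }
  }

  @ProcessElement
  public void processElement(ProcessContext c) {
    c.output(c.element());
  }
}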
Another option for productionizing pipelines is to build the JAR that runs the pipeline, and have a production process that runs that JAR to initiate the pipeline.
Regarding the compile error you received -- the NestedValueProvider returns a ValueProvider -- it isn't possible to get a String out of that. You could, however, put the validation code into the SerializableFunction that is run within the NestedValueProvider.
Although I believe this will currently re-run the validation every time the value is accessed, it wouldn't be unreasonable to have the NestedValueProvider cache the translated value.
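Along those lines, one way to push the check into the NestedValueProvider itself (a sketch only, mirroring the snippet above and assuming the same pipelineOptions):
ValueProvider<String> projectId = NestedValueProvider.of(
    pipelineOptions.getDataflowProjectId(),
    (SerializableFunction<String, String>) id -> {
      // Evaluated at runtime when the value is accessed; note it may run
      // again on each access unless the provider caches the result.
      if (id == null || id.isEmpty()) {
        throw new IllegalArgumentException("dataflowProjectId must be set");
      }
      return id;
    });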

How do I "map" certain return value of a script to "yellow" status in Jenkins?

In Jenkins there is the possibility to create a free-style project which can contain a script execution. The build fails (becomes red) when the return code of the script is not 0.
Is there a possibility to make it "yellow"?
(Yellow usually indicates successful build with failed tests)
The system runs on Linux.
Give the Log Parser Plugin a try. That should do the trick for you.
One slightly hacky way to do it is to alter the job to publish test results and supply fake results.
I've got a job that publishes the test results from a file called "results.xml". The last step in my build script checks the return value of the build, copies either "results-good.xml" or "results-unstable.xml" to "results.xml" and then returns zero.
Thus if the script fails on one of the early steps, the build is red. But if the build succeeds, it's green or yellow based on the return code it would have returned without this hack.
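A sketch of that last build-script step, assuming the two canned result files already exist in the workspace and $BUILD_RC holds the return code of the earlier steps (names are illustrative):
# Decide which canned results to publish, then always exit 0 so the
# test-result publisher (not the shell step) controls green vs. yellow.
if [ "$BUILD_RC" -eq 0 ]; then
    cp results-good.xml results.xml
else
    cp results-unstable.xml results.xml
fi
exit 0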

How do I specify build arguments when starting a TFS build from the command line with tfsbuild.exe?

For my Team Build process, I have created workflow activities that control deployment. I want to choose at runtime whether to deploy the build.
So, I need to send Deploy=true or false as an input to the workflow runtime initiation.
I can do this by defining a workflow custom metadata value with an internal argument. I can then set the Deploy value at runtime via the Queue Build dialog under the Parameters tab.
My question is: How do I specify my custom variable when starting a TFS build from the command line with tfsbuild.exe start?
The command line parameter is called /msBuildArguments
TfsBuild start teamProjectCollectionUrl teamProject definitionName
[/dropLocation:dropLocation] [/getOption:getOption]
[/priority:priority] [/customGetVersion:versionSpec]
[/requestedFor:userName] [/msBuildArguments:args] [/queue]
[/shelveset:name [/checkin]] [/silent]
You can use: tfsbuild start http://yourserver:8080/tfs/ YourProject "Your Build Definition" /msBuildArguments:"Deploy=true"
