Jenkins is replacing text in a build step with other text

I would not have believed this if I had not seen it with my own eyes. In any sort of build step (we have tried multiple types) the term http(s)://api.DOMAIN.com is replaced with https://licensing.domain.com.
We have verified the behavior on two different computers (one Mac, one Windows 10).
Both licensing.domain.com and api.domain.com are valid domain names. licensing.domain.com is used in other jobs.
It occurs at the time of saving, with any job that we create or edit. If we create a job and add a build step that should print https://api.domain.com, it prints https://licensing.domain.com.
It occurs with any type of build step.
The text is modified in the configuration of the job's build step.
The job does not have to be executed; the substitution happens when the job is saved, at creation or on edit.
https://api.domain.com => https://licensing.domain.com
But when we build the URL by concatenating the protocol (HTTP or HTTPS) and the domain name as separate strings, we do not experience the issue.
'https://' + 'api.domain.com' => https://api.domain.com
Changing the domain extension does not trigger the substitution of the domain name:
http://api.domain.net => https://api.domain.net
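For reference, here is the concatenation workaround written out as a shell build step (a minimal sketch; the variable names are ours):
PROTOCOL='https://'
HOST='api.domain.com'
echo "${PROTOCOL}${HOST}"
This prints https://api.domain.com and survives saving the job.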
I was intimately involved in setting up the system and I have not created anything that to my knowledge would cause this to occur.

Related

Change Jenkins concurrent job workspace naming

Can I change the way Jenkins names the workspace for concurrent jobs? Currently it appends #2, #3, etc. when running concurrent builds. I would like to change the "#" to another character. Is this possible? It is causing problems further down in my jobs.
Workspace created for concurrent job #2:
in workspace /devsrc/jenkins/workspace/CKPT_vw5.2_ubuntu#2
Further down in build script:
The environment variable II_SYSTEM contains characters that are not
allowed. The path should only contain alphabetic, digit, period,
underscore and hyphen characters.
+ [ ! -f /devsrc/jenkins/workspace/CKPT_vw5.2_ubuntu#2/ingres/files/config.dat ]
+ exit 1
I didn't test this before posting, but I've used these types of parameters in the past with no problems. See features controlled by system properties. In there, there's one to change the # to something else:
"hudson.slaves.WorkspaceList" (default value: #)
When concurrent builds is enabled, a unique workspace directory name is required for each concurrent build. To create this name, this token is placed between project name and a unique ID, e.g. "my-project#123".
On Ubuntu, I would edit /etc/default/jenkins and add this to the "JAVA_ARGS" property to use, say, "A" instead of "#". And of course you'll need to restart Jenkins afterwards.
-Dhudson.slaves.WorkspaceList=A
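For example, the resulting line in /etc/default/jenkins might look like this (your existing JAVA_ARGS flags may differ):
JAVA_ARGS="-Djava.awt.headless=true -Dhudson.slaves.WorkspaceList=A"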

Trigger nested Jenkins job from VSTS build

I've configured a service hook in VSTS to connect to Jenkins. I'm able to use the Jenkins extension to trigger a Jenkins job if it's not in a subfolder. E.g. http://myhost/job/Always%20succeed/
In that case, I can simply connect and run my job.
If my job is nested, however, I can't figure out how to connect. Here's an example: http://myhost/view/Production/job/Automation/job/Test/job/My%20Job
I've tried using just the name (e.g. "My Job"), the whole url, and a dot notation (Production.Automation.Test.My Job). How can I make this run and where can I find more documentation?
It's pretty nuanced, and one could argue buggy. First off, I can reach the same job with two URLs:
http://myhost/view/Production/job/Automation/job/Test/job/My%20Job
http://myhost/job/Automation/job/Test/job/My%20Job
Turns out the latter is the way to go.
I tried the following name, and it tried reaching the corresponding endpoint:
Automation/job/Test/job/My%20Job <- name used in VSTS "Job name" field
/job/Automation/job/job/Test/job/My%20Job/build <- url attempted, failed (404)
Note the double job/. Then I tried the following with better results:
Automation/Test/job/My%20Job <- name used
/job/Automation/job/Test/job/My%20Job/build <- url tried, success
It's a bit concerning that the pattern isn't consistent regarding the double "job/" part, but it appears VSTS inserts the first two job/ segments itself (one at the start and one after the first folder), so only deeper folder levels need an explicit job/ in the name you enter.

Dataflow/Beam Templates, Productionization, Initialization, and ValueProviders

I have an Apache Beam job running on Google Cloud Dataflow, and as part of its initialization it needs to run some basic sanity/availability checks on services, pub/sub subscriptions, GCS blobs, etc. It's a streaming pipeline intended to run ad infinitum that processes hundreds of thousands of pub/sub messages.
Currently it needs a whole heap of required, variable parameters: which Google Cloud project it needs to run in, which bucket and directory prefix it's going to be storing files in, which pub/sub subscriptions it needs to read from, and so on. It does some work with these parameters before pipeline.run is called - validation, string splitting, and the like. In its current form, in order to start a job we've been passing these parameters to a PipelineOptionsFactory and issuing a new compile every single time, but it seems like there should be a better way. I've set up the parameters to be ValueProvider objects, but because they're being called outside of pipeline.run, Maven complains at compile time that ValueProvider.get() is being called outside of a runtime context (which, yes, it is).
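For reference, that binding step looks roughly like this (MyOptions stands in for our actual options interface):
MyOptions options = PipelineOptionsFactory.fromArgs(args)
    .withValidation()
    .as(MyOptions.class);
Pipeline pipeline = Pipeline.create(options);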
I've tried using NestedValueProviders as in the Google "Creating Templates" document, but my IDE complains if I try to use NestedValueProvider.of to return a string as shown in the document. The only way I've been able to get NestedValueProviders to compile is as follows:
NestedValueProvider<String, String> pid = NestedValueProvider.of(
    pipelineOptions.getDataflowProjectId(),
    (SerializableFunction<String, String>) s -> s
);
(String pid = NestedValueProvider.of(...) results in the following error: "incompatible types: no instance(s) of type variable(s) T,X exist so that org.apache.beam.sdk.options.ValueProvider.NestedValueProvider conforms to java.lang.String")
I have the following in my pipelineOptions:
ValueProvider<String> getDataflowProjectId();
void setDataflowProjectId(ValueProvider<String> value);
Because of the volume of messages we're going to be processing, adding these checks at the front of the pipeline for every message that comes through isn't really practical; we'll hit daily account administrative limits on some of these calls pretty quickly.
Are templates the right approach for what I want to do? How do I go about actually productionizing this? Should (can?) I compile with Maven into a JAR, then just run the JAR on a local dev/qa/prod box with my parameters and not bother with ValueProviders at all? Or is it possible to provide a default to a ValueProvider and override it as part of the options passed to the template?
Any advice on how to proceed would be most appreciated. Thanks!
The way templates are currently implemented, there is no point at which to perform initialization/validation that happens after template creation but before pipeline start.
All of the existing validation executes during template creation. If the validation detects that the values aren't available (because they are ValueProviders), it is skipped.
In some cases it is possible to approximate validation by adding runtime checks, either as part of the initial splitting of a custom source or in the @Setup method of a DoFn. In the latter case, the @Setup method runs once for each instance of the DoFn that is created. In a batch pipeline, four failures for a specific instance will fail the pipeline.
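A sketch of the @Setup approach (the class name and the specific check are illustrative):
import org.apache.beam.sdk.options.ValueProvider;
import org.apache.beam.sdk.transforms.DoFn;

class ValidatingFn extends DoFn<String, String> {
  private final ValueProvider<String> subscription;

  ValidatingFn(ValueProvider<String> subscription) {
    this.subscription = subscription;
  }

  @Setup
  public void setup() {
    // Runs at execution time, once per DoFn instance, so the
    // ValueProvider can safely be resolved here.
    String s = subscription.get();
    if (s == null || s.isEmpty()) {
      throw new IllegalStateException("pub/sub subscription is not set");
    }
  }

  @ProcessElement
  public void processElement(ProcessContext c) {
    // Pass elements through unchanged; only the setup check matters here.
    c.output(c.element());
  }
}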
Another option for productionizing pipelines is to build the JAR that runs the pipeline, and have a production process that runs that JAR to initiate the pipeline.
Regarding the compile error you received: NestedValueProvider.of returns a ValueProvider, so it isn't possible to assign the result to a String. You could, however, put the validation code into the SerializableFunction that is run within the NestedValueProvider.
Although I believe this will currently re-run the validation every time the value is accessed, it wouldn't be unreasonable to have the NestedValueProvider cache the translated value.
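For example (a sketch; the null/empty check stands in for whatever validation is needed):
NestedValueProvider<String, String> pid = NestedValueProvider.of(
    pipelineOptions.getDataflowProjectId(),
    (SerializableFunction<String, String>) s -> {
      // Runs at execution time, when the value is first accessed.
      if (s == null || s.isEmpty()) {
        throw new IllegalArgumentException("dataflowProjectId is not set");
      }
      return s;
    });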

TFS Build error - "The specified path, file name, or both are too long..."

I'm writing custom activities for a build process template. I get the below error when building the activity.
>XamlBuildTask : error XC1043: Extension 'Microsoft.Activities.Build.BeforeInitializeComponentExtension' threw an exception of type 'System.IO.PathTooLongException' : 'The specified path, file name, or both are too long. The fully qualified file name must be less than 260 characters, and the directory name must be less than 248 characters.'.
Do you have any ideas? Please help!
I found one tip here. Hope it's helpful to you.
Currently there are two workarounds:
Reduce the namespace in the workflow x:Class property. This makes the generated file have a shorter name.
Use the subst or mklink command to create a mapping so that the path the solution is located in becomes a lot shorter (see the example below). In team build, the workspace mapping needs to be modified accordingly.
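For instance, subst can map a short virtual drive onto the deeply nested working folder (the paths here are illustrative):
subst B: "C:\Builds\1\MyTeamProject\MyBuildDefinition"
Checking out and building under B:\ then keeps the generated paths well under the 260-character limit.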
This still happens in TFS 2015.
The best answer I got was this one, about changing the build agent properties:
Properties to save path space
The build agent properties dialog defines the "Working directory" for the build agent, defaulting to "$(SystemDrive)\Builds\$(BuildAgentId)\$(BuildDefinitionPath)". Based on the above link, I'm going to go with "$(SystemDrive)\B\$(BuildDefinitionId)" - that should take the "uilds" off the base directory, the TFS project name (19 characters), a backslash, and the build name (7 characters) out, and replace them with just a 32-bit number (which should be at most 10 digits, but since it starts from 1, it's much more likely to be 3-4 digits), saving me 23 characters minimum.
I may not have been able to shorten $(SourceDir), but it's just "$(BuildDir)\Sources", right? I can just configure the build to pull the code to "$(BuildDir)\S" instead of "$(SourceDir)", and I should save another 6 characters, getting me to 29 characters saved, which should be enough.

One of the configurations from the matrix gets cancelled every time

I have two projects with a dependency between them: when project A is started, it updates files from git and then runs a multi-configuration project B, which:
has three axes: "foo", "bar" and "baz" with 11 x 4 x 2 items
(I'm going to write fooN for item N from axis foo, etc.)
has a configuration filter ruling out the last axis, by running only when baz=="baz1" (maybe in a later phase we'll also want to run tests with baz2)
runs a shell script that only cds and calls the Python interpreter with a script:
cd /path/to/scripts
python test_${bar}.py
So when the project is run, I expect 44 configurations to be tested. But only 43 are.
It's always the same configuration (which happens to be the last one triggered, as Jenkins seems to remember the order(?)) that never runs at all:
in the final matrix it shows as a gray dot with a "Disabled" tooltip
in the Console output, after "Triggering bazN,barN,fooN" is printed for all 44 combinations, "bazN,barN,fooN completed with result SUCCESS" follows for all except the last one, which always seems to be cancelled/aborted:
baz1,bar7,foo3 appears to be cancelled
baz1,bar7,foo3 completed with result ABORTED
the Console output for that single combination is not available; it looks like it has never been built
the Jenkins log does not show anything interesting about "baz1,bar7,foo3"
What does this mean? Any other pointers on how to troubleshoot this?
Edit: I tried adding an "HTTP ping" script to the repo and called it from the above script, just before the python test_${bar}.py part. This proved that for the affected configuration, Jenkins does not even run those lines.
Without knowing how you got here to begin with (probably a bug):
Append configure to the URL of the disabled configuration, and in the resulting form, uncheck Disabled and Save.
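For the combination in question, that URL would look something like this (host and job name are illustrative):
http://myhost/job/ProjectB/baz=baz1,bar=bar7,foo=foo3/configure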
Not really an answer, but as a workaround to the problem, cloning the whole project to a new one helped: with the new project, all configurations ran normally.
This is a solved Jenkins issue:
https://issues.jenkins-ci.org/browse/JENKINS-19179
Fixed by the Matrix Project plugin, version 1.4:
https://wiki.jenkins-ci.org/display/JENKINS/Matrix+Project+Plugin
