SCP Jenkins Plugin does not copy selectively

I want to copy only specific files in a directory to remote server using Jenkins SCP Plugin.
I have the folder structure /X/Y/... Under Y, I need only the files a, b, c among a, b, c, d, e, f. Is this possible?
Of course, to copy all files, all you need is X/Y/**. But what about copying selectively?
I was reading somewhere that this is a kind of bug in the plugin.
I have a string parameter, $FILES=x,y,z, highlighted in "Build with Parameters".
SCP Configuration:
Source: some/path/$FILES (relative to $WORKSPACE)
Destination: /var/lib/some/path

You should be able to say X/Y/a; X/Y/b; X/Y/c
Also remember that these files have to be under the job's ${WORKSPACE}
Alternatively, you can have another build step in between that copies only the files that you want into a staging folder, and then supply the staging folder to the SCP plugin, as sketched below.
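For example, an Execute Shell build step along these lines would do the trick (the staging/ folder name and the file list are placeholders; adjust them to your layout):
# copy only the wanted files into a staging folder inside the workspace
rm -rf staging
mkdir -p staging
cp X/Y/a X/Y/b X/Y/c staging/
Then point the SCP Source at staging/** so only those files get transferred.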
Edit after OP clarification:
Your $FILES variable contains x,y,z. When you supply this as the Source to the SCP plugin, it becomes:
some/path/x,y,z
Or if we break this one item per line:
some/path/x
y
z
The first item is valid; the next two are not complete paths and therefore are not found.
There are several ways to fix it (choose any one):
Full path in parameter variable.
Under your FILES string parameter, list the full path, like:
some/path/x, some/path/y, some/path/z
Under SCP Source, use only $FILES
pros: quick and stable.
cons: looks ugly with long paths.
Wildcard path in parameter variables.
Under your FILES string parameter, list the global wildcard path (files will be found under any directory), like:
**/x, **/y, **/z
Under SCP Source, use only $FILES
pros: quick and looks better than long paths.
cons: only works if files x, y and z are unique in your whole workspace. If there is $WORKSPACE/x and $WORKSPACE/some/path/x, one will end up overwriting the other.
Prepare MYFILES variable and inject it.
You need to add an Execute Shell build step. In there, write the following:
# prefix every comma-separated entry of $FILES with the path and write the result to a properties file
mypath=some/path/
echo "MYFILES=${mypath}${FILES//,/,$mypath}" > myfiles.props
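To sanity-check the substitution, you can run the same expansion in a Bash shell with a sample value (x,y,z stands in for whatever the build parameter holds; note that the ${VAR//...} substitution is a Bash feature, so make sure the Execute Shell step runs Bash):
FILES=x,y,z
mypath=some/path/
echo "MYFILES=${mypath}${FILES//,/,$mypath}"
# prints: MYFILES=some/path/x,some/path/y,some/path/z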
Then add an Inject environment variables build step (from the EnvInject plugin). Under Properties File Path, specify myfiles.props.
Under SCP Source, use only $MYFILES (note you are reading modified and injected variable, not the original $FILES)
pros: looks good in UI, proper and further customizable.
cons: more build steps to maintain in configuration.
p.s.
In all these cases, a multi select Extended Choice Parameter will probably look better than a string parameter.

Related

Can I get the arguments passed to bazel itself?

I want to create some "build traceability" functionality, and include the actual bazel command that was run to produce one of my build artifacts. So if the user did this:
bazel run //foo/bar:baz --config=blah
I want to actually get the string "bazel run //foo/bar:baz --config=blah" and write that to a file during the build. Is this possible?
Stamping is the "correct" way to get information like that into a Bazel build. Note the implications around caching though. You could also just write a wrapper script that puts the command line into a file or environment variable. Details of each approach below.
I can think of three ways to get the information you want via stamping, each with differing tradeoffs.
First way: Hijack --embed_label. This shows up in the BUILD_EMBED_LABEL stamping key. You'd add a line like build:blah --embed_label blah in your .bazelrc. This is easy, but that label is often used for things like release_50, which you might want to preserve.
Second way: hijack the hostname or username. These show up in the BUILD_HOST and BUILD_USER stamping keys. On Linux, you can write a shell script at tools/bazel which will automatically be used to wrap bazel invocations. In that shell script, you can use unshare --uts --map-root-user, which will work if the machine is set up to enable bazel's sandboxing. Inside that new namespace, you can easily change the hostname and then exec the real bazel binary, like the default /usr/bin/bazel shell script does. That shell script has full access to the command line, so it can encode any information you want.
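A rough sketch of such a tools/bazel wrapper (illustration only: it assumes a Linux host with unprivileged user namespaces enabled, and that the stock /usr/bin/bazel wrapper sets $BAZEL_REAL to the real binary before delegating to tools/bazel):
#!/bin/bash
set -euo pipefail
if [ -z "${BAZEL_WRAPPER_REENTERED:-}" ]; then
  # re-exec this script inside fresh UTS + user namespaces so we are allowed to change the hostname
  BAZEL_WRAPPER_REENTERED=1 exec unshare --uts --map-root-user "$0" "$@"
fi
# encode the command line into the hostname; stamped builds will then see it as BUILD_HOST
hostname "bazel-$(echo -n "$*" | md5sum | cut -c1-12)"
exec "${BAZEL_REAL:-/usr/bin/bazel}" "$@"
Here the whole command line is hashed into the hostname; you could just as well encode a config name or any other marker you care about.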
Third way: put it into an environment variable and have a custom --workspace_status_command that extracts it into a stamping key. Add a line like build:blah --action_env=MY_BUILD_STYLE=blah to your .bazelrc, and then do echo STABLE_MY_BUILD_STYLE ${MY_BUILD_STYLE} in your workspace status script. If you want the full command line, you could have a tools/bazel wrapper script put that into an environment variable, and then use build --action_env=MY_BUILD_STYLE to preserve the value and pass it to all the actions.
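As a sketch, the workspace-status side can be a couple of lines (MY_BUILD_STYLE is a made-up variable name; hook the script up with --workspace_status_command=tools/status.sh or similar):
#!/bin/bash
# tools/status.sh: print KEY VALUE pairs; keys prefixed with STABLE_ end up in stable-status.txt
echo "STABLE_MY_BUILD_STYLE ${MY_BUILD_STYLE:-unset}"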
Once you pick a stamping key to use, src/test/shell/integration/stamping_test.sh in the Bazel source tree is a good example of writing stamp information to a file. Something like this:
genrule(
    name = "stamped",
    outs = ["stamped.txt"],
    # BUILD_EMBED_LABEL is written to bazel-out/stable-status.txt; $@ expands to the declared output
    cmd = "grep BUILD_EMBED_LABEL bazel-out/stable-status.txt | cut -d ' ' -f 2 > $@",
    stamp = True,
)
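Assuming that genrule sits in the top-level BUILD file, a quick way to check the result (output paths may differ slightly by platform):
bazel build --stamp --embed_label=release_50 //:stamped
cat bazel-bin/stamped.txt   # prints: release_50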
If you want to do it without stamping, just write the information to a file in the source tree from a tools/bazel wrapper. You'd want to put that file in your .gitignore, of course. echo "$@" > cli_args is all it takes to dump the arguments to a file, and then you can use that file as a normal source file in your build. This approach is simplest, but it interacts the most poorly with Bazel's caching, because everything that depends on that file will be rebuilt every time, with no way to control it.
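A minimal version of that wrapper might look like this (again assuming $BAZEL_REAL is set by the stock /usr/bin/bazel wrapper; cli_args is an arbitrary file name):
#!/bin/bash
# tools/bazel: record the exact command line, then hand off to the real binary
echo bazel "$@" > "$(dirname "$0")/../cli_args"   # lands in the workspace root; add cli_args to .gitignore
exec "${BAZEL_REAL:-/usr/bin/bazel}" "$@"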

how to find and deploy the correct files with Bazel's pkg_tar() in Windows?

Please take a look at the bin-win target in my repository here:
https://github.com/thinlizzy/bazelexample/blob/master/demo/BUILD#L28
It seems to be properly packing the executable inside a file named bin-win.tar.gz, but I still have some questions:
1- On my machine, the file is being generated in this directory:
C:\Users\John\AppData\Local\Temp\_bazel_John\aS4O8v3V\execroot\__main__\bazel-out\x64_windows-fastbuild\bin\demo
which makes finding the tar.gz file a cumbersome task.
The question is how can I make my bin-win target to move the file from there to a "better location"? (perhaps defined by an environment variable or a cmd line parameter/flag)
2- How can I include more files with my executable? My actual use case is that I want to supply data files and some DLLs together with the executable. Should I use a filegroup() rule and refer to its name in the "srcs" attribute as well?
2a- For the DLLs, is there a way to make a filegroup() rule interpret environment variables (e.g. the directories of the DLLs)?
Thanks!
1. Look for the bazel-bin and bazel-genfiles directories in your workspace. These are actually junctions (directory symlinks) that Bazel updates after every build. If you bazel build //:demo, you can access its output as bazel-bin\demo.
(a) You can also set TMP and TEMP in your environment to point to e.g. c:\tmp. Bazel will pick those up instead of C:\Users\John\AppData\Local\Temp, so the full path for the output directory (that bazel-bin points to) will be c:\tmp\_bazel_John\aS4O8v3V\execroot\__main__\bazel-out\x64_windows-fastbuild\bin.
(b) Or you can pass the --output_user_root startup flag, e.g. bazel --output_user_root=c:\tmp build //:demo. That will have the same effect as (a).
There's currently no way to get rid of the _bazel_John\aS4O8v3V\execroot part of the path.
2. Yes, I think you need to put those files in pkg_tar.srcs. Whether you use a filegroup() rule is irrelevant; filegroup just lets you group files together so you can refer to the group by name, which is useful when you need to refer to the same files in multiple rules.
2.a. I don't think so.

Is it possible to get $pwd value from Bazel .bzl rule?

I am trying to get the value of the full current directory path from within a .bzl rule. I have tried the following:
ctx.host_configuration.default_shell_env.PATH returns "/Users/[user_name]/.rbenv/shims:/usr/local/bin:/usr/bin:/bin:...
ctx.bin_dir.path returns bazel-out/local-fastbuild/bin
pwd = ctx.expand_make_variables("cmd", "$$PWD", {}) returns the string $PWD - I don't think this method is helpful for me, but maybe I'm just using it wrong.
What I need is the directory from which the command that runs the Bazel .bzl rule is invoked. For example: /Users/[user_name]/git/workspace/path/to/bazel/rule.bzl, or at least the first part of the path prior to the WORKSPACE directory.
I can't use pwd because I need this value before I call ctx.actions.run_shell()
Are there no attributes in Bazel configurations that hold this value?
The goal is to have hermetic builds, so you shouldn't depend on the absolute path.
Feel free to use pwd inside the command of ctx.actions.run_shell() (for reproducible builds, be careful, avoid putting the absolute path in the generated files).
Edit.
Technically, there are some workarounds. For example, you can pass the path via the --define flag:
bazel build :all --define=path=$(pwd)
Then the value will be available using ctx.var["path"].
Based on your comment below, you want the path to declare an output. Let me repeat: You shouldn't use an absolute path to declare the output file. Declare an output in your package. Then ask the tool you call to use that output.
For example, when you call gcc, you can use -o to specify the output. When a tool writes to stdout, use the shell to redirect it. If the tool is really not flexible, you may want to wrap it with your own script (e.g. call the tool and copy the output file).
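For instance, a tiny wrapper along these lines can bridge a tool that insists on writing to a fixed location (some_tool and result.bin are placeholders):
#!/bin/bash
# usage: wrapper.sh <declared-output-path> [tool args...]
set -euo pipefail
out="$1"; shift
some_tool "$@"          # hypothetical tool that always writes ./result.bin
cp result.bin "$out"    # copy it to the output path the rule declared
The action then calls the wrapper with the declared output's path as its first argument, so it keeps working on a remote executor.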
Using an absolute path here is not the right solution. For example, it should be possible to execute the action on a remote machine (where your absolute path won't make sense).
Zip may be a reasonable solution. It's useful when you cannot know in advance the number or the names of the output files.

Skylark - How to execute a jar from a repository rule

Context
I am writing a repository rule that invokes another Bazel project. My current approach is to build the additional project as a deploy jar. I would like a user to be able to instantiate the rule like:
jar_path = some/relative/path
my_rule(name = "something", p_arg="m_arg", binary=jar_path)
and then given the jar_path and the arguments, I would like the repository rule to execute the following command in the shell:
java -jar $(SOME_JAR) $(ARGUMENTS_PROVIDED_BY_RULE)
Problem
First, it's unclear how best to accomplish the deploy jar approach. So far, I have attempted two different approaches, with varying levels of success. For examples, I have skimmed through the scala_rules, the maven_rules, and the skylark cookbook.
Second, and more importantly, I am not sure whether the deploy jar is the best route to accomplishing my goals. Again, my interest is to invoke a target from an external Bazel project that is currently hosted on GitHub. (So, feasibly, I could try to fetch the project using the http_archive rule.)
Below, I describe the attempts I have made.
Approach 1
My first approach involved trying to execute the command using the command field in ctx.action. I tried various enumerations of
java -jar {computed_absolute_path_of_deploy_jar} {args_passed_from_instantiation}.
My biggest issue here was with determining the absolute path of the deploy jar. The file's root path would contain some additional information. For example, it would look something like this:
/abs/olute/path[ something ]/rela/tive/path
As a side note, I'm not sure if this is a bug/nit, but File.root.path evaluated to None, despite File.root not being None.
My first approach was to try to use skylark [ctx.binary].
Approach 2
Next thing I tried was to mimic the input binary example from the docs. This was also unsuccessful. The issue was that the actual binary could not be found. Here is how I configured it.
First, I relaxed the repository rule into a regular skylark rule.
def _test_binary(ctx):
    ctx.action(
        ....
        arguments = [ctx.attr.p_arg],
        executable = ctx.executable.binary)

test_binary = rule(
    ...
    attrs = {
        "binary": attr.label(mandatory=True, cfg="host", allow_files=True, executable=True),
        ...
    },
)
Then, in my external project, I loaded the skylark rule into the WORKSPACE file. Finally, I called the macro from one of my BUILD files as follows:
load("#something_rule//:something_rule.bzl", "test_binary")
test_binary(name = "hello", p_arg = "hello", binary = "script.sh")
The script is a one line java -jar something_deploy.jar -- -arg:$1, and is in the same directory as the BUILD file.
Bazel complains that src/script.sh does not exist. I presume that is because it is looking for the file in /private/var/tmp/_bazel_username/somehash/relative_path. In response, I tried to pass the absolute path, which is not allowed.
Cheers.
It looks like you're mixing up repository rules with build extensions ("normal" rules). A good rule of thumb is:
Repository rules are for getting sources onto your system or symlinking them to a place Bazel can see them.
Build extensions are for everything else: compiling, copying files, running binaries, etc.
I don't actually think you need to use either, for this. You say that the other project is on GitHub, so you can add the following to your WORKSPACE file:
http_archive(
    name = "other_project",
    ...
)
Then, in your BUILD file:
genrule(
    name = "run-a-jar",
    srcs = ["@other_project//some/relative:path"],
    cmd = "java -jar $(location @other_project//some/relative:path) -- arg1 arg2 > $@",
    outs = ["jar-output"],
)
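If that genrule lives in the top-level BUILD file, running it and inspecting the captured output looks something like:
bazel build //:run-a-jar
cat bazel-bin/jar-output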
You shouldn't need to use the _deploy.jar target, since you're not moving the jar out of its project (_deploy.jar is useful when you need to relocate it).
Other things from your question:
I'm not sure if this is a bug/nit, but the File.root.path, evaluated to None,
Are you sure it didn't evaluate to ""? The path is relative to the execution root, so for sources, it will always be "" (for outputs, it'll be bazel-out/local-fastbuild/bin or similar).
Bazel complains that src/script.sh does not exist.
Passing -s to Bazel can really help debugging Skylark rules. You can see exactly where it is looking.

Require dll using string path in F#

Is there a way to do this in F#:
let fakeToolsPath = "D:\tools\FAKE\tools\FakeLib.dll"
#r fakeToolsPath
The FAKE tools are on a different path depending on the build agent that builds the code, so I need to be able to set the path dynamically, from an environment variable or some config file.
Three ideas, in order of increasing hackiness - you'll be the judge which one makes most sense in your scenario:
In an .fsx script, you can use __SOURCE_DIRECTORY__ to get the directory where the script is located. If your dll is always located in the same directory relative to the script, you can use that as a "hook" to get to it.
There's a command-line --reference argument to fsi.exe that should do what you want. If you're using fake.exe instead, you can use --fsiargs to pass it in (take a look at the link for details).
If everything else fails, create a symlink as a separate build step in your CI job configuration and just hardcode the path in the script.
