Is there a mechanism to select a default config based on the platform or an environment variable? For example:
common --config=$DEFAULT_CONFIG
I'm working on a project which uses bazelrc configurations to specify default build and test flags for separate platforms. It looks something like this:
build:macos --compiler=clang
...
build:linux --compiler=gcc
...
build:windows --compiler=msvc
...
This would be really useful for a multi-platform CI setup and in general would avoid having to type bazel <command> --config=<some config> <target> every time.
Related question: Default, platform specific, Bazel flags in bazel.rc
There is now! If you add
common --enable_platform_specific_config
to your .bazelrc, then --config=windows will automatically apply on Windows hosts, --config=macos on macOS, --config=linux on Linux, and so on.
You can then add lines to your .bazelrc like:
build:windows --windows-flags
build:linux --linux-flags
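Putting it together with the compiler flags from the question, a complete .bazelrc sketch looks like:

common --enable_platform_specific_config

build:linux --compiler=gcc
build:macos --compiler=clang
build:windows --compiler=msvc

With this in place, a plain bazel build //some:target picks the right config with no --config flag on the command line.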
There is one downside, though: this works based on the host rather than the target. So if you're cross-compiling, e.g. to mobile, and want different flags there, you could wrap all the library rules you use in macros, or (probably better) add transitions into your build graph. (See discussion here and here. "Flagless builds" are still under development, but there are usable hacks in the meantime.) You could also use the temporary platform_mappings API, sketched below.
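For reference, a platform_mappings file (placed at the workspace root) pairs platforms with the legacy flags that should imply them, in both directions. A sketch, assuming a //platforms:ios platform you have defined yourself (the exact format may vary by Bazel version):

platforms:
  //platforms:ios
    --cpu=ios_x86_64

flags:
  --cpu=ios_x86_64
    //platforms:ios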
References:
Commit that added this functionality.
Where it appears in the Bazel docs.
Yes, that's what toolchains are for.
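A minimal sketch of declaring one in a BUILD file (all labels here are illustrative):

toolchain(
    name = "linux_gcc",
    exec_compatible_with = ["@platforms//os:linux"],
    target_compatible_with = ["@platforms//os:linux"],
    toolchain = ":gcc_toolchain_impl",
    toolchain_type = "//tools/compiler:toolchain_type",
)

Once registered (e.g. via register_toolchains), Bazel selects it automatically whenever the execution and target platforms satisfy its constraints, so per-platform flags largely take care of themselves.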
Is it possible to integrate Fastlane with Bazel (or vice versa)? The non-mobile part of our org uses Bazel for builds, and I'd like to be consistent on mobile. However, Fastlane provides a lot of mobile-oriented functionality that Bazel does not: Bazel is for build + test, whereas Fastlane also provides solutions for release/deployment.
Is it possible (or advisable) to call Bazel build from within Fastlane? Or perhaps call Fastlane from within Bazel for deployment?
Bazel is like an interpreter for a language which allows you to define rules: functions which take a set of inputs, run an action, and produce a set of outputs.
I am not familiar with Fastlane, but it is certainly possible to write a rule which will produce an artifact for you. The only requirement is that your set of outputs must be clearly defined (hardcoded in the rule); in other words, you cannot write a rule which will "unzip whatever is in this archive to this folder", because you have to declare the set of outputs up front.
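As a hedged sketch (the fastlane invocation and every name below are assumptions for illustration, not a real integration), such a rule could look like:

# fastlane_artifact.bzl, a minimal illustrative sketch
def _fastlane_artifact_impl(ctx):
    # the output must be declared up front
    out = ctx.actions.declare_file(ctx.attr.out_name)
    ctx.actions.run_shell(
        inputs = ctx.files.srcs,
        outputs = [out],
        # hypothetical command; a real integration would invoke fastlane properly
        command = "fastlane build_my_app && cp output.ipa {}".format(out.path),
    )
    return [DefaultInfo(files = depset([out]))]

fastlane_artifact = rule(
    implementation = _fastlane_artifact_impl,
    attrs = {
        "srcs": attr.label_list(allow_files = True),
        "out_name": attr.string(mandatory = True),
    },
)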
The Rules doc page is the best place to start.
When debugging Bazel BUILD files, it can be useful to know what values the system has set by default or has determined from the host/target setup (e.g. when debugging select() rules).
Is there a way, either through a query or directly in the BUILD file, to view platform and other configuration values (e.g. cpu, os, etc.)?
Is it also possible to produce a list of all of these variables? With many different configuration options available, it's easy to lose track of what's there and what they may be set to (e.g. crosstool_top, host_crosstool_top, etc.).
Not a perfect answer, but adding the --toolchain_resolution_debug flag to the command line gives some visibility into a couple of these values.
https://docs.bazel.build/versions/master/command-line-reference.html#flag--toolchain_resolution_debug
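For example (the target label is a placeholder):

bazel build //my:target --toolchain_resolution_debug

This prints which toolchain types were considered and which toolchains were selected or rejected for each. Note that on newer Bazel versions the flag expects a regex restricting output to matching toolchain types, e.g. --toolchain_resolution_debug=.*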
We use Delphi 10 Seattle. Inside our code we use different compiler directives to produce different exe files, such as debug or release versions, or versions with different feature sets (disabling some functions for different customers ...).
Current way of getting the exe files:
change the compile options by hand, typing manually inside the Delphi IDE ...
compile and copy the *.exe to a new location by hand
Is there any way to get this with one button click (a faster approach ...)?
Create a build configuration for each of your different feature sets.
Documentation for build configurations is here: Build Configurations Overview. This is the mechanism that the IDE provides for switching between debug and release builds, and there is no reason at all for you not to use the same mechanism to switch between your own configuration sets.
The option set feature allows you to extract certain sets or groups of options into separate files which can then be applied to configurations. You can apply the same option set multiple times, for instance once on top of a base release configuration, and then again on top of a base debug configuration. The build configuration functionality supports inheritance which makes it possible, with a bit of up-front design, to develop a clean hierarchy of configurations.
This configuration functionality is built on top of msbuild, which means that you can use the same configurations in your command-line builds. In fact, that you mention changing configurations manually in the IDE is a concern: building your product is not something that should require manual intervention. It is critically important that you address this and arrange for your build process to be automated. You can use tools for this, although it is very easy to write your own simple tools that invoke msbuild for all the configurations you need to build, as sketched below. Please don't use the IDE to build for release.
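For example, a one-click (or scheduled) build can be a small batch file; the project and configuration names here are illustrative, and the rsvars.bat path must match your installation (Studio\17.0 is Delphi 10 Seattle):

rem set up the RAD Studio environment for command-line builds
call "C:\Program Files (x86)\Embarcadero\Studio\17.0\bin\rsvars.bat"
rem build one named configuration of the project
msbuild MyModule.dproj /t:Build /p:Config=CustomerA_Release

Run it once per configuration, or loop over a list of configurations, and every exe is produced without touching the IDE.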
As an aside, build configuration is one area of the product where the functionality of RAD Studio surpasses that offered by Visual Studio, in my view.
So, at work, I frequently have to create virtually identical Ant scripts. Basically, the application we provide to our clients is designed to be easily extensible, and we offer a service of designing and creating custom modules for it. Because of the complexity of our application, with lots of cross-dependencies, I tend to develop the module within our core dev environment, compile it using IntelliJ, and then run a basic Ant script that does the following tasks:
1) Clean build directory
2) Create build directory and directory hierarchy based on package paths.
3) Copy class files (and source files to a separate sources directory).
4) Jar it up.
The thing is, to do this I need to go through the script line by line and change a bunch of property names, so it works for the new use case. I also save all the scripts in case I need to go back to them.
This isn't the worst thing in the world, but I'm always looking for a better way to do things. Hence my idea:
For each specific implementation I would provide an Ant script (or other file) of just properties: key-value pairs, which would have specific prefixes for each key based on what it's used for. I would then want my Ant script to run the various tasks, executing each one for the key-value pairs that are appropriate.
For example, copying the class files. I would have a property with a name like "classFile.filePath". I would want the script to call the task for every property it detects that starts with "classFile...".
Honestly, from my current research so far, I'm not confident that this is possible. But... I'm super stubborn, and always looking for new creative options. So, what options do I have? Or are there none?
It's possible to dynamically generate Ant scripts; for example, the following does this using an XML input file:
Use pure Ant to search if list of files exists and take action based on condition
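For the property-prefix idea itself, a hedged sketch using the third-party ant-contrib tasks propertyselector and for (property names and paths are illustrative, and ant-contrib must be on Ant's classpath):

<!-- module.properties (illustrative):
     classFile.core=build/classes/core
     classFile.extras=build/classes/extras -->
<property file="module.properties"/>

<!-- make the ant-contrib tasks available -->
<taskdef resource="net/sf/antcontrib/antlib.xml"/>

<!-- collect the suffix of every property starting with "classFile." -->
<propertyselector property="classfile.keys"
                  match="classFile\.(.*)"
                  select="\1"/>

<!-- run the copy once per matching property -->
<for list="${classfile.keys}" param="key">
  <sequential>
    <copy todir="${build.dir}/@{key}">
      <fileset dir="${classFile.@{key}}"/>
    </copy>
  </sequential>
</for>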
Personally I would always try and avoid this level of complexity. Ant is not a programming language.
Looking at what you're trying to achieve, it does appear you could benefit from packaging your dependencies as jars and using a Maven repository manager like Nexus or Artifactory for storage. This would simplify each sub-project's build. When building projects that depend on these published libraries, you can use a dependency management tool like Apache Ivy to download them.
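If you go that route, each module declares its dependencies in a small ivy.xml; a minimal sketch with illustrative names:

<ivy-module version="2.0">
  <info organisation="com.example" module="custom-module"/>
  <dependencies>
    <dependency org="com.example" name="core-app" rev="1.2.3"/>
  </dependencies>
</ivy-module>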
Hope that helps; your question is fairly broad.
Our team is sharing a Jenkins server with other teams, and this currently means that we are sharing the same OS-level build-user account. The different teams' OS-level build-user settings (Maven settings, bash settings, user-level Ant libraries, etc.) have collided a few times: "fixing" the settings for one team's jobs inadvertently "breaks" another team's jobs. The easiest solution that occurs to me is giving each team its own OS-level build-user account with which to execute its Jenkins jobs, but I cannot find a way to do this.
I have checked with Google, and also here
https://wiki.jenkins-ci.org/display/JENKINS/Use+Jenkins
and here
https://wiki.jenkins-ci.org/display/JENKINS/Plugins
to no avail.
Is there a way to do this? If not, can you recommend any best practices for segregating sets of builds from one another?
Maven Specific
Two options come to mind:
Add additional installations of Maven in your Jenkins global configuration, each using its own home directory and thus its own settings files. This allows you to use totally different versions of Maven, selected based on job requirements (you are given the option to select which "version" of Maven you wish to use on the job itself).
Similar to (1), but specify particular settings files using Maven command-line arguments. It's a little less "obvious" but may be quicker to implement; see the sketch below.
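For example (paths are illustrative), each job can point Maven at its own team's settings file instead of the shared ~/.m2/settings.xml:

mvn -s /opt/build/team-a/settings.xml clean install

The -s / --settings flag is standard Maven, so this needs no extra plugins.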
Multi-slave
You could possibly make use of multiple slaves on each machine. It increases the overhead of the builds quite significantly, and the implementation is such that you'd have multiple user accounts on a machine, each set up as needed, and then one slave instance for each user.
I'm not sure these solutions will totally answer your problem; I'll have a think and see if anything else comes to mind, but they might give you some starting points.
Key builds to a specific team directory that contains that team's settings. For example, provide a parameter 'TEAM' to every build, set its default value to the appropriate team name, and use that parameter as a key to a directory that contains the team's settings (so instead of using ${HOME} as you do now, you'll use something like ${TEAM_SETTINGS}/${TEAM}).
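A sketch of the resulting shell build step, assuming TEAM_SETTINGS is defined globally on the node and TEAM is the job parameter described above:

# use the team's settings directory rather than the shared ${HOME}
mvn -s "${TEAM_SETTINGS}/${TEAM}/settings.xml" clean install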
You can set per-job users (who has access to/can build a particular job).
Under "Manage Jenkins" > "Configure System" >
Click on Enable Security
Check Project-based Matrix Authorization Strategy
However, I do not think there is a "per-build" option for a single job.
If you have the same project that you are sharing between teams, you could (and probably should) create two jobs for this project, and have different libraries/scripts be used in each.
You could also parameterize the build (on the job page, "Configure" > "This build is parameterized") and supply the library versions, etc. via string parameters.
You could also use a parameter to be the team's name, and in your build script change libraries based on the parameter:
For example, have a parameter called "TEAM", with choices: TEAM_A and TEAM_B, and in your script, have
if [ $TEAM == "TEAM_A" ]
then
ANT_HOME=/opt/ant/libA
else
ANT_HOME=/opt/ant/libB
fi
======================================================================
Have you considered sourcing your settings? On Linux, you could do this by saving your OS settings (paths, for example) in a script file and running source /path/to/settings/file; on Windows it would be call /path/to/settings/batch/file.
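For example (file names, paths, and values are illustrative):

# /opt/build/settings/team-a.sh
export ANT_HOME=/opt/ant/libA
export MAVEN_OPTS=-Xmx1024m

and then at the top of the job's shell build step:

source /opt/build/settings/team-a.sh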
Can you give examples of OS-level settings that you would require a per-build user for?
Your problem is a common one.
Whenever something nonstandard is installed on a build server, something will break for someone.
The only solutions I know are
Set up a separate build slave for each team or product. Then they can install whatever they want on the build slave and any mess they create is all their own fault.
Any dependencies required by a job need to come with the job. This is my preferred way of working. For example: if a job needs a library or a tool, the library or tool is not installed on the build server but lives in the source tree, and the build uses it from there.
Sometimes the latter way is more work. You need to set up the tool or library so that it works when installed in the source tree. Some tools have hard-coded paths and will not work; in that case you can put the tool's source in the tree and compile the tool during the build.
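For example, a job that carries its own Ant in its source tree might drive the build with a script like this (the layout is illustrative):

#!/bin/sh
# build.sh: use the Ant checked into the source tree,
# so nothing installed on the build server is assumed
ANT_HOME="$(pwd)/tools/ant"
export ANT_HOME
"$ANT_HOME/bin/ant" -f build.xml dist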
An even better solution is to set up separate Jenkins jobs for all the tools and libraries and the jobs that need a library or tool will download them from the Jenkins jobs.
This way you control all your dependencies, and different jobs do not conflict when, e.g., one needs an older version of a library and another a newer one. And if someone upgrades a library, it is immediately visible in version control who did what.