I need to specify the full path to the compiler executable for building with maven-nar.
The aol.properties file seems to only accept certain predefined values for the compiler name.
How do I tell the nar plugin exactly where my compiler and linker executables are? In this case I am trying to compile for iOS from Mac OS X.
Looks like the only way is to add the compiler to CppTasks and install the 'hacked' version on your build host.
Here is the version I would use as a starting point: http://duns.github.com/maven-nar-plugin/cpptasks.html
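For context, the compiler name in aol.properties has to be one of the predefined CppTasks compiler ids (gcc, msvc, icc, ...), not a path to an executable, which is why pointing it at an arbitrary compiler doesn't work. The entries look roughly like the following sketch; the keys and values here are illustrative, so check the aol.properties bundled with the plugin for the exact pattern:

# Hypothetical aol.properties entries: values are CppTasks compiler ids, not paths
x86_64.MacOSX.gpp.cpp.compiler=g++
x86_64.MacOSX.gpp.linker=g++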
Some effort is starting to be made to merge the NAR branches:
https://github.com/maven-nar/maven-nar-plugin
It would be worthwhile raising issues there.
For Windows 32/64-bit support, different compiler paths were needed in order to build both platforms in a single Maven run without changing environment variables.
There is a work in progress that I've been using for a while but haven't really published. I'm using Windows only and have never tried it on a Mac.
https://github.com/GregDomjan/maven-nar-plugin/tree/multi
I was aiming to merge it with trunk and have not had a chance yet to load the matching cpptasks changes required to allow provision of the path and some other settings.
Unfortunately there were also a bunch of other changes around configuration that may be unnecessary.
Is there a list of tools that are assumed to be always in the PATH when a Bazel target runs a shell command?
This is relevant for creating isolated build environments. AFAIU (see https://github.com/NixOS/nixpkgs/pull/50765#issuecomment-440009735) by default Bazel picks up tools from /bin and /usr/bin when in strict mode.
But what can ultimately be assumed about the minimal content of those? For example, I saw awk being used liberally. But then git as well, which sounds borderline.
I imagine the exact set might correspond to whatever Google-internal Bazel expects to find in Google's build images bin directories. At least for BUILD rules open-sourced by Google.
Is there such a definitive list? Thank you.
As far as I can tell, your assessment of the tool usage is correct, and unfortunately I'm not aware of such a list.
There should be one, and Bazel should treat the shell as a toolchain. Alas nobody is working on that at the moment. See https://github.com/bazelbuild/bazel/issues/5265.
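In the meantime, the closest thing to isolation is the strict action environment flag, which pins PATH for actions to a fixed value (/bin:/usr/bin) instead of inheriting the invoking shell's PATH. A minimal .bazelrc sketch, assuming a reasonably recent Bazel (verify the flag name for your version):

# Pin PATH for build actions instead of inheriting the user's environment
build --incompatible_strict_action_env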
I am just starting to get to grips with Nix, so apologies if I missed the answer to my question in the docs.
I want to use Nix to setup a secure production machine with the minimal set of libraries and executables. I don't want any compilers or other build tools present because these can be security risks.
When I install some packages, it seems that they depend on only the minimum set of runtime dependencies. For example if I install apache-tomcat-8.0.23 then I get a Java runtime (JRE) and the pre-built JAR files comprising Tomcat.
On the other hand, some packages seem to include a full build toolchain as dependencies. Taking another Java-based example, when I install spark-1.4.0 Nix pulls down the Java development kit (JDK) which includes a compiler, and it also pulls the Maven build tool etc.
So, my questions are as follows:
Do Nix packages make any distinction between build and runtime dependencies?
Why do some packages appear to depend on build tools whereas others only need runtime? Is this all down to how the package author wrapped up the application?
If a package contains build dependencies that I don't want, is there anything that I, as the operator, can do about it except design my own alternative packaging for the same application?
Many thanks.
The runtime dependencies are a subset of the build-time dependencies that Nix determines automatically by scanning the generated output for the hash part of each build-time dependency's store path. For example, if you build a package using the compiler /nix/store/abcdef...-foo-1.20, then Nix will scan all files in the generated output for the hash bit abcdef.... If that hash is found, then the output is assumed to reference the compiler in some way, so it's kept as a runtime dependency. If that hash does not occur, however, then the generated output has no reference to the compiler and therefore cannot access it at runtime, so foo-1.20 is treated as a build-time-only dependency.
Some packages record large parts of their build environment for informational/debugging purposes. Perl, for example, stores every little detail about the tools used to compile it, so all those store paths end up being treated as runtime dependencies despite the fact that Perl doesn't actually need them at runtime, but Nix can't know: it just knows that the Perl store path references those tools. Now, Nixpkgs maintainers usually make an effort to clean that up, i.e. by pruning the logfile that contains all those store paths from the installation, etc., but for sure there are plenty of packages in the database still that haven't been optimized to that end yet.
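You can inspect the result of that scan yourself with nix-store (a sketch; the store path is illustrative and will differ on your system):

$ nix-store --query --references /nix/store/<hash>-apache-tomcat-8.0.23    # direct runtime dependencies
$ nix-store --query --requisites /nix/store/<hash>-apache-tomcat-8.0.23    # full runtime closure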
Let's assume that you'd like to compile a version of openssh that does not depend on PAM. Then you can remove the build input from the expression by means of an override, i.e. you replace the pam argument that's normally passed to the openssh build function with null. To do that, save the following as ~/.nixpkgs/config.nix:
{
  packageOverrides = super: let self = super.pkgs; in {
    openssh-without-pam = super.openssh.override {
      pam = null;
    };
  };
}
and now install that package by running:
$ nix-env -f "<nixpkgs>" -iA openssh-without-pam
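To confirm that the override took effect, you can check the new package's closure for PAM (a sketch; adjust the binary path to your profile):

$ nix-store --query --requisites $(readlink -f ~/.nix-profile/bin/ssh) | grep -i pam

An empty result means PAM is no longer referenced at runtime.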
What does specifying tools under "Tool Locations" in Jenkins Node configuration, exactly do?
I wanted to run an mvn command in a freestyle project. I specified the Maven tool and its home under "Tool Locations", but that didn't help: the job failed because it could not find mvn. I followed this solution and modified the PATH variable to include the Maven path. That did the trick.
So how exactly does specifying Tool Locations help? Is it just to help users see the tool paths when they visit Jenkins node configuration site?
In Global Configuration, you can configure different tools, such as Maven, JDK, Ant, etc. When you do that, you also provide an installation method (usually an automatic online installer).
Now, on the Job Configuration, when you configure a buildstep using one of the tools, for example Invoke Ant, you will see a drop down. The first option is Default, the next is whatever tool installations that you've configured in Global Configuration.
If you choose Default it will use whatever is already installed on the system by simply executing a command like java, ant, or mvn. Obviously, if those commands are not in the path, it will fail.
If you choose one of the configured options, Jenkins will install it to the system (if not already done), and use that particular version (as opposed to the one you have under path).
This installation will happen on both the master and the slave nodes. If you do not want to use this automatic installation on a particular node, you can manually specify the location of the particular tool in the screenshot you provided.
The above is only useful if in your Job Configuration, you are selecting a specific version of the tool, other than Default, because as already explained: Default will use whatever is already installed on the system (not by Jenkins).
To answer your queries:
So how exactly does specifying Tool Locations help?
Well, not really, I would say. It's meant to specify a custom installation directory path for any program that's already set up on your global config page in a different location. If you have configured mvn 2.0 in global settings but wish to use mvn 3.0 on this specific node, then you can use Tool Locations. However, it will work only if the path to the executable is already present in the PATH environment variable of the user with which the slave process is running.
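In practice, the simplest workaround is what you already did: make the build step set PATH itself. A minimal Execute shell sketch, assuming a hypothetical Maven location on the node:

# Prepend the node's Maven installation to PATH (path is illustrative)
export PATH=/opt/tools/apache-maven-3.6.3/bin:$PATH
mvn -version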
Is it just to help users see the tool paths when they visit the Jenkins node configuration page?
Well, yes, you can say that. I am saying so because whenever you specify a custom path, it generally isn't in the PATH of the user with which the process is running. That's true in most cases because such binaries don't run independently: they need other dependent modules to work properly, otherwise simply dropping the binary into one of the directories already present in the PATH would have sufficed. So it's safe to say that you explicitly have to add the path to the executable to PATH to make it work.
If you've noticed, even the Help (?) section for Tool Locations doesn't sound very excited about this feature :P It says:
You can specify the location of certain tools on this node, overriding the global configuration. (You may prefer to use automatic tool installers instead, removing the need to configure each node separately.)
I'm using a CMake build in a Jenkins environment and want to build the protobuf compiler from source.
This all works, but in the next step I'm trying to use the compiler to translate my proto files, which doesn't work because it cannot find its own shared objects. I've tried defining the search path in the CMakeLists.txt file, but it won't detect the shared object location in my repository tree $PRJ_MIDDLEWARE/protobuf/lib. I've tried telling CMake or the system where to search by defining:
set(CMAKE_LIBRARY_PATH ${CMAKE_LIBRARY_PATH} "$ENV{PRJ_MIDDLEWARE}/protobuf/lib")
set(ENV{LD_LIBRARY_PATH} "$ENV{PRJ_MIDDLEWARE}/protobuf/lib:$ENV{LD_LIBRARY_PATH}")
But it always fails when trying to invoke the protoc compiler I just built. I tried invoking `ldconfig` from CMake, but that doesn't work because the jenkins user doesn't have the rights to do this. Currently my only solution is to log in to the build server and do this manually as root. But that is not how I want to do this: the next release moves to a new directory and it all has to be done again. Do I have other options? Preferably directly from CMake, from Jenkins, or maybe even from Protocol Buffers?
Thank you
Two ideas come to mind:
Build protobuf compiler as a static binary (I don't know if that's possible but it usually is.)
Set the LD_LIBRARY_PATH environment variable before invoking cmake to point to the location of the protoc shared libraries (see the sketch below).
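For the second idea, a minimal sketch of the Jenkins shell step, assuming $PRJ_MIDDLEWARE is set as in the question:

# Make the freshly built protobuf libraries visible before protoc runs
export LD_LIBRARY_PATH="$PRJ_MIDDLEWARE/protobuf/lib:$LD_LIBRARY_PATH"
cmake /path/to/source && make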
We have a very large product that compiles just fine on our 32 bit build controllers and mostly on our 64 bit build controllers with one exception. We have a precompiled web forms front end. When it builds on the 64 bit machine it shows a command line in the log file of
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet_compiler.exe -v /ourwebapp -p ..\ourwebapp\ -u -f "..\..\..\..\Binaries\Mixed Platforms\_PublishedWebsites\ourwebapp\"
This causes an error because of some 32-bit third-party DLLs we have:
ASPNETCOMPILER : error ASPCONFIG: Could not load file or assembly 'Leadtools.Codecs' or one of its dependencies. An attempt was made to load a program with an incorrect format. [C:\Builds\5\OurApp\OurApp Build\Sources\Product\Solutions\ourwebapp_ourwebapp.metaproj]
Does anyone know the correct solution file syntax to force a 32-bit build regardless?
Thank you.
Edit your Build Definition and select this option:
Process tab -> Advanced -> MSBuild Platform -> X86
Save your build definition and trigger the build.
Erik, if you edit your build definition, under the Process tab there is the option to set the MSBuild platform to x86, x64, or Auto; setting this to x86 forces the build to perform the compilation in 32-bit.
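Setting it to x86 makes the build run the 32-bit MSBuild, which in turn invokes the 32-bit aspnet_compiler. If you were to invoke it by hand, the equivalent would be something like this (a sketch; the solution name and configuration are illustrative):

C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe OurApp.sln /p:Configuration=Release

Note the Framework directory rather than Framework64: that is what selects the 32-bit toolchain.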
In the Configurations to Build box, you can specify which platforms and configurations you want to build. (To show this box, expand the Required node, and then expand the Items to Build node.) For example, you can specify that this build should build only the release configuration of the 32-bit version of your C++ project by including Release|x86 in this box. If you leave the Configurations to Build box empty, the default configuration and platform that is defined in each solution or project is built.
See the steps here: http://msdn.microsoft.com/en-us/library/dd647547.aspx