Does Bazel actually require Python?

The Bazel install instructions say that Python is required. However, I used the Linux installer without Python and it seems to work.
Does Bazel actually require Python for non-Python builds, such as C++ and Go?

I believe it doesn't, and your successful build without Python supports that.

Related

Build Beam pipelines using Bazel (with DataflowRunner)

I use Bazel to build my Beam pipeline. The pipeline works well with the DirectRunner; however, I have trouble managing dependencies when I use the DataflowRunner: Python cannot find local dependencies (e.g. those generated by py_library) on the workers. Is there any way to hint Dataflow to use the Python binary (the py_binary zip file) in the worker container to resolve the issue?
Thanks,
Please see here for more details on setting up dependencies for the Python SDK on Dataflow. If you are using a local dependency, you should probably look into building a Python package and using the extra_package option, or into developing a custom container.
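If you go the extra_package route, a minimal sketch might look like the following (the project, bucket, region, and package path are all placeholders; it assumes you first build an sdist tarball from the code behind your py_library):

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# All values below are placeholders -- substitute your own.
options = PipelineOptions(
    runner="DataflowRunner",
    project="my-gcp-project",
    temp_location="gs://my-bucket/tmp",
    region="us-central1",
    # Ship the locally built sdist of the dependency to the workers,
    # where it is pip-installed before the pipeline starts.
    extra_packages=["dist/my_local_dep-0.1.0.tar.gz"],
)

with beam.Pipeline(options=options) as pipeline:
    (pipeline
     | beam.Create([1, 2, 3])
     | beam.Map(lambda x: x * 2))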

Bazel list of tools in-scope for targets running shell commands

Is there a list of tools that are assumed to be always in the PATH when a Bazel target runs a shell command?
This is relevant for creating isolated build environments. AFAIU (see https://github.com/NixOS/nixpkgs/pull/50765#issuecomment-440009735) by default Bazel picks up tools from /bin and /usr/bin when in strict mode.
But what can ultimately be assumed about the minimal content of those directories? For example, I have seen awk used liberally, but also git, which sounds borderline.
I imagine the exact set might correspond to whatever Google-internal Bazel expects to find in the bin directories of Google's build images, at least for BUILD rules open-sourced by Google.
Is there such a definitive list? Thank you.
As far as I can tell, your assessment of the tool usage is correct, and unfortunately I'm not aware of such a list.
There should be one, and Bazel should treat the shell as a toolchain. Alas nobody is working on that at the moment. See https://github.com/bazelbuild/bazel/issues/5265.
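For illustration, here is a minimal genrule sketch (all target and file names are made up) showing the implicit contract in question: the shell command uses awk without declaring it anywhere, so it has to come from the ambient PATH.

# BUILD file sketch; target and file names are hypothetical.
genrule(
    name = "versions_csv",
    srcs = ["versions.txt"],
    outs = ["versions.csv"],
    # awk is not declared as an input or a toolchain; Bazel resolves it
    # from the ambient shell environment (/bin, /usr/bin), which is
    # exactly what breaks under hermetic setups like Nix.
    cmd = "awk '{ print $$1 \",\" $$2 }' $(location versions.txt) > $@",
)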

Is there a platform-independent way to run Perl/Python/Ruby scripts in Jenkins?

See title. Currently, I run the scripts either with CMD on Windows or with a shell on Linux. This seems a bit inflexible to me, since it ties each job to either Windows or Linux.
How would I define a job that simply runs my test scripts written in Perl/Python/Ruby/WhateverOtherScriptingLanguage, regardless of which OS the Jenkins instance is running on?
The correct script interpreter can be assumed to be installed, of course.
The cross-platform shell plugin solves exactly this problem -- however, the last release was in 2014, and there are a couple of open issues that might prevent you from using it in a production environment.
As an alternative, there are plugins that wrap the actual call to the interpreter in plugin code. With such a plugin, you become independent of OS-specific shells. However, each plugin is language-specific, e.g.,
For Python, the ShiningPanda plugin is popular
For Ruby, the Ruby plugin will do that (albeit only for embedded scripts)
For Perl, there is no such thing
In any case, you'd depend on availability and maintenance of an additional plugin. Plus, adding fat plugins to Jenkins just to wrap a system call is somewhat awkward.
I reckon that the most versatile and compact solution is to introduce a Groovy build step, and to start the interpreter with a one-liner from there, e.g.:
"perl myscript.pl".execute();
This will work on any platform, as long as you ensure that Groovy is available, e.g. by using the Groovy plugin.
If you prefer Python, you can probably do the same with a short wrapper that you start via the ShiningPanda plugin.
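For instance, a tiny wrapper along these lines (the script name is a placeholder) would run unchanged on any agent where ShiningPanda provides a Python interpreter:

import subprocess
import sys

# Assumes perl is on the PATH of both the Windows and the Linux agents.
result = subprocess.run(["perl", "myscript.pl"])
# Propagate the exit code so a failing script fails the Jenkins build.
sys.exit(result.returncode)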
Shell scripts in Jenkins provide a cross-platform shebang.
#!perl
print "hi"
will work on both Linux and Windows, as long as perl is in the PATH.
Update: we actually had a shell installed on our Windows hosts, so it might not work without one.

How can I run custom tools from a premake build script?

I'm using protocol buffers for data serialization in my C++ application. I would like to add the invocation of the protoc code generator to my premake build script (thus ensuring the generated classes stay up to date and avoiding the need to store generated source under version control).
Even their FAQ has a question and answer about this, but the answer is very incomplete for me. Having the ability to call any Lua function is great, but where exactly do I put that call? I need to run the protoc compiler before building either the application or the unit tests.
You can certainly call outside code from Premake scripts. But remember: Premake scripts are used to generate build files: Makefiles, C++ projects, etc. The Premake script is run before building the project.
If you want this preprocessing step to run outside of the actual build files (and not by make, VC++, Code::Blocks, etc.), then it's easy: Lua's os.execute will execute a command line.
Premake scripts are still Lua scripts. All of the Premake commands are just Lua calls into functions that Premake defines. Premake executes the scripts, then uses the data from them to generate the build files. So all of your Lua code is run during the execution of the script. Where you put this command in your script is irrelevant; wherever it is, it will execute before your build files are generated.
And if you want to run the protoc step during the build (from VC++, makefile, etc.) you can set up a prebuild command. See http://industriousone.com/prebuildcommands for more info and an example.
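A sketch of both variants in a premake script (paths and proto file names are assumptions, not something from the question):

-- Variant 1: run protoc once, while premake generates the build files.
os.execute("protoc --cpp_out=src/generated protos/messages.proto")

project "MyApp"
    kind "ConsoleApp"
    language "C++"
    -- Variant 2: let the generated build files run protoc before each build.
    prebuildcommands {
        "protoc --cpp_out=../src/generated ../protos/messages.proto"
    }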

Erlang and Toolchains

Has anyone worked out an ultra efficient workflow and toolset for Erlang? Debugging, prototyping, browsing, version control, etc.
I'm extremely impressed with Smalltalk's integrated image system, but was wondering if anything even approaching it exists for Erlang.
Erlang has a very robust development chain, especially if you are an Emacs maven. There is an Erlang-specific build system, there is robust support for packaging your application and its dependencies for deployment, and don't forget OTP.
As for tools, there is Dialyzer, real-time tracing on running systems, hot code loading (you can enable, disable, or add logging to a running system without restarting it, for example), remote code execution... there is so much to learn it is dizzying when you start out.
Editor: you can use whatever you want. I used Emacs for my first year of Erlang, but I'm currently using gedit.
Version Control: I like git. It seems that most of the Erlang community agrees (most projects are hosted on GitHub).
Workflow: I'd recommend getting familiar with rebar.
Here is an example of a rebar-flavored Makefile:
REBAR := ./rebar
.PHONY: all deps doc test clean release
all: deps
	$(REBAR) compile
deps:
	$(REBAR) get-deps
doc:
	$(REBAR) doc skip_deps=true
test:
	$(REBAR) eunit skip_deps=true
clean:
	$(REBAR) clean
release: all test
	dialyzer --src src/*.erl deps/*/src/*.erl
Here are some basic pointers:
Put your unit tests in the same modules as the code they are testing. See the rebar wiki for details.
Add {cover_enabled, true} to your rebar.config file. Every time you run make test you get a coverage report in HTML!
Add your project's dependencies to your rebar.config and you can fetch and build them when you run make deps (see the sketch after this list).
Make sure to comment your code with edoc. If you do, rebar can build all of your docs when you run make doc.
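Putting those pointers together, a minimal sketch (the dependency entry and the test are placeholders, not from the original answer):

%% rebar.config
{cover_enabled, true}.
{deps, [
    %% placeholder dependency; replace with your real ones
    {meck, ".*", {git, "git://github.com/eproxus/meck.git", "master"}}
]}.

%% in the module under test, guarded so tests stay out of production builds
-ifdef(TEST).
-include_lib("eunit/include/eunit.hrl").
%% exercises a hypothetical double/1 function from this module
double_test() -> ?assertEqual(4, double(2)).
-endif.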
I wonder about the difference between Sinan/Faxien and Rebar, too. From my notes, I remember that Sinan/Faxien was more about creating a project template and dependency management, while Rebar was more useful for creating a module template... My notes are here; they are several years old and aimed at bootstrapping Erlang newbies (like me).
-Todd
