Decrease Bazel memory usage

I'm using Bazel on a computer with 4 GB of RAM (to compile the TensorFlow project). However, Bazel does not take into account the amount of memory I have and spawns too many jobs, causing my machine to swap and leading to a longer build time.
I already tried setting the ram_utilization_factor flag through the following lines in my ~/.bazelrc
build --ram_utilization_factor 30
test --ram_utilization_factor 30
but that did not help. How are these factors to be understood anyway? Should I just randomly try out some others?

Some other flags that might help:
--host_jvm_args can be used to set how much memory the JVM should use by setting -Xms and/or -Xmx, e.g., bazel --host_jvm_args=-Xmx4g --host_jvm_args=-Xms512m build //foo:bar (docs).
--local_resources in conjunction with the --ram_utilization_factor flag (docs).
--jobs=10 (or some other low number, it defaults to 200), e.g. bazel build --jobs=2 //foo:bar (docs).
Note that --host_jvm_args is a startup option so it goes before the command (build) and --jobs is a "normal" build option so it goes after the command.
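Putting those together, a ~/.bazelrc for a low-memory machine might look like the sketch below; the numbers are illustrative starting points for a 4 GB box, not tuned values:
# ~/.bazelrc
# Cap the heap of the Bazel server JVM
startup --host_jvm_args=-Xmx2g
# Run at most two concurrent jobs
build --jobs=2
# Tell Bazel to assume only 30% of the RAM is available to it
build --ram_utilization_factor=30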

For me, the --jobs argument from @kristina's answer worked:
bazel build --jobs=1 tensorflow:libtensorflow_all.so
Note: --jobs=1 must follow, not precede build, otherwise bazel will not recognize it. If you were to type bazel --jobs=1 build tensorflow:libtensorflow_all.so, you would get this error message:
Unknown Bazel startup option: '--jobs=1'.

Just wanted to second @sashoalm's comment that the --jobs=1 flag was what made bazel build finally work.
For reference, I'm running bazel on Lubuntu 17.04, running as a VirtualBox guest with about 1.5 GB RAM and two cores of an Intel i3 (I'm running a Thinkpad T460). I was following the O'Reilly tutorial on TensorFlow (https://www.oreilly.com/learning/dive-into-tensorflow-with-linux), and ran into trouble at the following step:
$ bazel build tensorflow/examples/label_image:label_image
Changing this to bazel build --jobs=1 tensorflow/... did the trick.

I ran into quite a bit of instability, with bazel builds failing in my k8s cluster.
Besides --jobs=1, try this:
https://docs.bazel.build/versions/master/command-line-reference.html#flag--local_resources
E.g. --local_resources=4096,2.0,1.0
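In that older syntax the three values are the assumed available RAM in MB, the number of CPU cores, and the I/O capacity, so a low-memory invocation for the build above might look something like this (values illustrative, not tuned):
bazel build --jobs=1 --local_resources=2048,1.0,1.0 tensorflow:libtensorflow_all.so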

Related

How should I persistently save Julia packages in a Docker container

I'm running Julia on the raspberry pi 4. For what I'm doing, I need Julia 1.5 and thankfully there is a docker image of it here: https://github.com/Julia-Embedded/jlcross
My challenge is that, because this is work-in-progress development, I find myself adding packages here and there as I work. What is the best way to persistently save the updated environment?
Here are my problems:
I'm having a hard time wrapping my mind around volumes that will save packages from Julia's package manager and keep them around the next time I run the container
It seems kludgy to commit my docker container somehow every time I install a package.
Is there a consensus on the best way or maybe there's another way to do what I'm trying to do?
You can persist the state of downloaded & precompiled packages by mounting a dedicated volume into /home/your_user/.julia inside the container:
$ docker run --mount source=dot-julia,target=/home/your_user/.julia [OTHER_OPTIONS]
Depending on how (and by which user) julia is run inside the container, you might have to adjust the target path above to point to the first entry in Julia's DEPOT_PATH.
You can control this path by setting it yourself via the JULIA_DEPOT_PATH environment variable. Alternatively, you can check whether it is in a nonstandard location by running the following command in a Julia REPL in the container:
julia> println(first(DEPOT_PATH))
/home/francois/.julia
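Alternatively, as a sketch that goes beyond the answer above, you can make the depot location explicit, so the mount target no longer depends on which user runs julia inside the container, by setting JULIA_DEPOT_PATH yourself (the /opt/julia-depot path is just an example):
$ docker run --mount source=dot-julia,target=/opt/julia-depot -e JULIA_DEPOT_PATH=/opt/julia-depot [OTHER_OPTIONS]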
You can manage the packages and their versions via a Julia Project.toml file.
This file keeps both the list of your dependencies and, through its [compat] section, the versions they are allowed to have.
Here is a sample Julia session:
julia> using Pkg
julia> pkg"generate MyProject"
Generating project MyProject:
MyProject\Project.toml
MyProject\src/MyProject.jl
julia> cd("MyProject")
julia> pkg"activate ."
Activating environment at `C:\Users\pszufe\myp\MyProject\Project.toml`
julia> pkg"add DataFrames"
Now the last step is to provide package version information to your Project.toml file. We start by checking the version number that currently works:
julia> pkg"st DataFrames"
Project MyProject v0.1.0
Status `C:\Users\pszufe\myp\MyProject\Project.toml`
[a93c6f00] DataFrames v0.21.7
Now you want to edit the [compat] section of the Project.toml file to pin that version to v0.21.7:
name = "MyProject"
uuid = "5fe874ab-e862-465c-89f9-b6882972cba7"
authors = ["pszufe <pszufe#******.com>"]
version = "0.1.0"
[deps]
DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
[compat]
DataFrames = "= 0.21.7"
Note that in the last line the equality sign appears twice: once for the TOML key and once inside the string, which pins the exact version number; see also https://julialang.github.io/Pkg.jl/v1/compatibility/.
Now, in order to reuse that structure (e.g. in a different Docker image, when moving between systems, etc.), all you do is:
cd("MyProject")
using Pkg
pkg"activate ."
pkg"instantiate"
Additional note
Also have a look at the JULIA_DEPOT_PATH variable (https://docs.julialang.org/en/v1/manual/environment-variables/).
When moving installations between Docker images it can also be convenient to control where all your packages are actually installed. For example, you might want to copy the JULIA_DEPOT_PATH folder between two images with the same Julia installation to avoid the time spent installing packages, or you might be building the Docker image without an internet connection, etc.
In my Dockerfile I simply install the packages just like you would do with pip:
FROM jupyter/datascience-notebook
RUN julia -e 'using Pkg; Pkg.add.(["CSV", "DataFrames", "DataFramesMeta", "Gadfly"])'
Here I start with a base datascience notebook which includes Julia, and then call Julia from the commandline instructing it to execute the code needed to install the packages. Only downside for now is that package precompilation is triggered each time I load the container in VS Code.
If I need new packages, I simply add them to the list.
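One thing you could try for the precompilation issue, untested here, is to also trigger precompilation while the image is being built, so the compiled caches end up baked into the image:
FROM jupyter/datascience-notebook
# Pkg.precompile() at build time is the only addition to the original RUN line
RUN julia -e 'using Pkg; Pkg.add.(["CSV", "DataFrames", "DataFramesMeta", "Gadfly"]); Pkg.precompile()'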

After Drake Source installation on macOS, how to run a example?

After using "Source installation on macOS" to install drake, "Bazel built//..." and " Bazel test//..." are done. The question is: how I run an example , for examples/acrobot/run_swing_up ? Should I input a command like: Bazel-bin/examples/acrobot/run_swing_up ?
Yup, you can either run it via bazel run or ./bazel-bin (the latter being better for running multiple processes, having stdin access, etc.):
https://drake.mit.edu/bazel.html
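For the acrobot example from the question, and assuming the target keeps that name, that would be something like:
# from the root of the drake source tree
bazel run //examples/acrobot:run_swing_up
# or, using the already-built binary
./bazel-bin/examples/acrobot/run_swing_up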
Some of the examples also have brief READMEs or docs on how to run it; e.g.:
jaco arm
inclined plane

How to run a build in Travis when the build is in an infinite loop

I currently have a build of an application that is set to run infinitely. It is designed to run on a Raspberry Pi as a service, so it will continuously be running.
Whenever I try to test it on Travis-CI, the infinite loop portion draws an error even though the file builds correctly since it is running infinitely. Is there any way to stop this error, or do I have to remove the ability to run the build from the .travis.yml?
language: cpp
compiler:
- clang
- g++
script:
- make
- cd main
- ./jsonWeatherPrediction
I would expect it to error; I'm just not sure of a way to stop it without removing - ./jsonWeatherPrediction.
I don't know if this will help, but the build is located at https://travis-ci.org/DMoore12/json-weather-prediction
Thanks in advance :)
In most any reasonable CI workflow, the job should have well-defined start and finish. Your software you are testing may run forever, but your tests should not. So, first, I suggest re-thinking how you run your build.
Looking at a build such as https://travis-ci.org/DMoore12/json-weather-prediction/jobs/474719832, I see that you are simply running your command (which raises a different question: the command prints the same output forever in a tight loop; is this the desired behavior?).
For testing, you need a different kind of behavior, one that can be tested (e.g., take input from STDIN or a command-line flag, print, and terminate).
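If you nonetheless want to keep exercising the current binary in CI, one stopgap (not part of the advice above) is to bound its runtime in .travis.yml and treat the timeout as success; coreutils' timeout exits with status 124 when it kills the process:
script:
- make
- cd main
- timeout 10 ./jsonWeatherPrediction; test $? -eq 124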

Jenkins + Ant and parallel scp/sshexec

I have a Jenkins build that uses Ant to do the heavy lifting.
First it fetches the code, tars it, scps the archive over, runs sshexec to extract it, and then sshexec again to install it.
There are 2 production servers right now, so I used <for> from ant-contrib to run scp/sshexec in parallel. The <for> param is used to set a property, which is then used in scp/sshexec, to avoid issues with the @{} vs ${} notation.
However that's not working as expected.
I either get:
connection reset
ssh-agent not present (from the production server sshd logs)
Windows sockets not found
scp printing the server it connects to twice (though that transfer succeeds)
The build always fails at the second scp/sshexec, which is strange since the second connection should go to a different server.
Questions:
What am I doing wrong?
Or alternatively how to write that ant script differently, while still achieving parallelism?
This is the root cause:
The <for> param is used to set a property, which is then used in scp/sshexec, to avoid issues with the @{} vs ${} notation.
Ant properties are IMMUTABLE, so if a property is set to X in the first iteration, it stays X for all the iterations of that loop!
So I either had to stick to serial execution and unset each property at the end of the <sequential> block, or use the @{} syntax with a parallel loop where possible. sshexec did accept the @{} syntax.
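For reference, here is a minimal sketch of the parallel loop with the @{} syntax; the hostnames, username, and key path are made up, and it assumes the ant-contrib and jsch jars are on Ant's classpath:
<taskdef resource="net/sf/antcontrib/antlib.xml"/>
<target name="deploy">
  <!-- @{server} is the <for> parameter, so no mutable property is needed -->
  <for list="prod1.example.com,prod2.example.com" param="server" parallel="true">
    <sequential>
      <sshexec host="@{server}" username="deploy"
               keyfile="${user.home}/.ssh/id_rsa" trust="true"
               command="tar -xzf /tmp/app.tar.gz -C /opt/app"/>
    </sequential>
  </for>
</target>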

How do I change the kernel config for a specific machine in Yocto?

I am building core-image-minimal with "beaglebone" as the target machine.
I'd like to edit the kernel config to remove some features to improve boot time. I've learned I can do a bitbake -c menuconfig virtual/kernel to launch the ncurses editor, but I don't really understand what configuration I'm editing. Is it the one for the beaglebone, or just a generic kernel?
How do I take the base beaglebone kernel config, edit it, and then have bitbake use it when I build core-image-minimal?
Thanks.
To find out which kernel the beaglebone uses, you have to look at its machine configuration, for example beaglebone.conf.
In there, you will see PREFERRED_PROVIDER_virtual/kernel = "linux-mainline"
That tells you which kernel recipe to look for within recipes-kernel, for example linux-mainline.
After that, there are two ways to get to the kernel's configuration utility:
bitbake -c menuconfig linux-mainline
or, via a development shell:
bitbake -c devshell linux-mainline
make nconfig
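To answer the second half of the question, a common way to persist the edited config (untested against this particular BSP; the layer paths below are illustrative) is to save it as a defconfig and hook it in with a bbappend in your own layer:
# 1. Edit the config
bitbake -c menuconfig linux-mainline
# 2. Write out a minimal defconfig from the edited .config
bitbake -c savedefconfig linux-mainline
# 3. Copy the generated defconfig into your layer, e.g.
#    meta-mylayer/recipes-kernel/linux/files/defconfig
#    and add a linux-mainline_%.bbappend containing:
#      FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
#      SRC_URI += "file://defconfig"
# 4. Rebuild the image
bitbake core-image-minimal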
There is a tutorial on installing drivers HERE
