rebar dependency without repository - erlang

I have a rebar project with dependencies, so after a clean, when I run rebar compile it downloads the dependencies (for Git dependencies it appears to run git clone), runs configure for them and then compiles everything. Can I somehow make those dependencies local? I mean, skip downloading them and directly run configure there?

Try the rsync option and specify a file path:
{rsync, "file:///foo/bar/baz"}
That is the shape of it, as far as I remember.
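If memory serves, the full dependency entry in rebar.config would then look something like the following sketch; the dependency name my_dep and the path are placeholders, not anything from the question:

%% rebar.config -- hypothetical local dependency fetched via rsync
{deps, [
    {my_dep, ".*", {rsync, "file:///home/user/libs/my_dep"}}
]}.

With something like that in place, rebar get-deps should copy the tree from the local path instead of cloning a remote repository.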

How do I pass a ROS package to another person?

I have made a ROS workspace with a package inside it.
I did catkin_make and everything is working well.
I would like to give this package (or should I give the entire workspace?) to another person.
I am thinking of giving him a zip file of the files and folders (it contains launch files, Python scripts, RViz files etc.), so I am expecting he will unzip it on his machine.
I would like him to be able to run the launch files without problems.
What does he need to do for this? (Of course he will have ROS installed, that is no problem.)
I am thinking perhaps he should do source devel/setup.bash, but is this enough?
When sharing a workspace with somebody, only the source space src has to be shared. It should contain all your packages with their launch files (*.launch), Python (*.py) and C++ nodes (*.cpp, *.hpp), YAML configuration files (*.yaml), RViz configurations (*.rviz) and robot descriptions (*.urdf, *.xacro), and describe how each node should be compiled in a CMakeLists.txt. Additionally you are supposed to keep track of all the Debian packages you install inside the package.xml file of each package.
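For orientation, a typical source space might be laid out like this; all the package and file names below are placeholders rather than anything from the question:

src/
└── my_package/
    ├── CMakeLists.txt        # build rules for the package's nodes
    ├── package.xml           # Debian package dependencies
    ├── launch/demo.launch
    ├── scripts/my_node.py
    ├── config/params.yaml
    └── urdf/robot.urdf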
If for some obscure reason there are things I have to do that can't be accommodated in the standard installation instructions given above, I will actually write a bash script that performs these steps for me and add it either to the package itself or to the workspace. This way I can also automate more complex steps such as installing OpenCV or modifying the .bashrc. Here is a small example of what such a minimal script (I generally name them install_dependencies.sh) might look like:
#!/bin/bash
# Get the absolute path of the workspace (two levels above this script)
WS_DIR="$(dirname "$(dirname "$(readlink -f "$0")")")"
# Check if the script is run as root ('$ sudo ...')
if [ "$EUID" -ne 0 ]; then
  echo "Error: This script has to be run as root: '$ sudo ./install_dependencies.sh'"
  exit 1
fi
echo "Installing dependencies..."
# Modify .bashrc so the workspace is sourced in every new shell
echo "- Modifying '~/.bashrc'..."
echo "source ${WS_DIR}/devel/setup.bash" >> ~/.bashrc
echo ""
echo "Dependencies installed."
If for some reason even that is not possible, I always make sure to document it properly in a Markdown read-me (*.md): either in a /doc folder inside your package, in the README.md inside the base folder of your repository, or inside the root folder of your workspace.
The receiver then only has to do the following (summarized in the sketch after this list):
1. Create a new workspace.
2. Copy or clone the package files into its src folder.
3. Install all the Debian package dependencies listed in the package.xml files with $ rosdep install.
4. (If any: execute the bash scripts I created by hand, $ sudo ./install_dependencies.sh, or perform the steps given in the documentation.)
5. Build the workspace with $ catkin_make, or $ catkin build from catkin-tools.
6. Source the current environment variables with $ source devel/setup.bash.
7. Make sure that the Python nodes are executable, either with $ chmod +x <filename> or by right-clicking the corresponding Python nodes (located in src or scripts of your package), selecting Properties/Permissions and enabling Allow executing file as program.
8. Run the desired Python or C++ nodes ($ rosrun <package_name> <executable_name>) and launch files ($ roslaunch <package_name> <launch_file_name>).
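Put end to end, the receiver's session might look like the sketch below; the workspace path, package name and file names are placeholders:

# Hypothetical end-to-end setup on the receiver's machine
mkdir -p ~/catkin_ws/src && cd ~/catkin_ws/src
cp -r /path/to/shared/my_package .                 # or: git clone <repo_url>
cd ~/catkin_ws
rosdep install --from-paths src --ignore-src -y    # dependencies from package.xml
catkin_make                                        # or: catkin build
source devel/setup.bash
chmod +x src/my_package/scripts/my_node.py         # make the Python node executable
roslaunch my_package demo.launch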
It is up to you whether you share the code as a compressed file, in the form of a Git repository, or in a more advanced way (see below), but I will introduce some best practices in the following paragraphs that will pay off in the long run with larger packages.
Sharing a package or sharing a workspace?
One can either share a single package or an entire workspace. I personally think that most of the time one should share the entire workspace instead of the package alone, even if you only cloned the other packages from a public GitHub repo. This might save the receiver a lot of headaches, e.g. from checking out the wrong branch.
Version control with Git
Arguably the best way to arrange your packages is by using Git. I'd actually make a repository for every package you create (if a couple of packages are virtually inseparable you can also bundle them into a single Git repo, or better, use metapackages). Then create an additional repository for your workspace and include your own packages and packages from other sources as submodules. This allows your code to be modular and re-usable: you can share only a package or the entire workspace!
As a first step I always add a .gitignore file to each package repository which excludes *.pyc files and another one to the workspace repository that ignores the build, devel and install folders.
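As a sketch, the two ignore files might contain no more than this:

# .gitignore inside each package repository
*.pyc

# .gitignore inside the workspace repository
build/
devel/
install/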
You can add a particular repository as submodule to your workspace Git repository by opening a console inside the src folder of your workspace repository and typing
$ git submodule add -b <branch_name> <git_url_to_package> <optional_directory_rename>
Note that you can actually track a particular branch of a repository that you include as a submodule. In case you need to remove a submodule at some point, follow this guide.
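A concrete invocation might look like this; the branch, URL and directory name are placeholders:

$ git submodule add -b melodic-devel https://github.com/someone/some_package.git some_package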
If you share the workspace repository with someone, they will need access to each individual submodule repository, and they will have to not only clone the repository but also fetch the submodules with
$ git clone --recurse-submodules <git_url_to_workspace_repository>
and potentially update them to the latest commit with
$ git submodule update --remote
After these two steps they should have a full version of the repository with submodules and they should be able to progress with the steps listed in the section above.
Unit-testing and continuous integration
Before sharing a repository you will have to verify that everything is working correctly. This can take a decent amount of time, in particular if the code base is large and you are modifying it frequently. In the ideal case you would install it on a brand-new machine or inside a virtual machine to make sure the set-up works, which would take quite some time. This is where unit testing comes into play: for every class and function you program, you also write a test. This way you can simply run these tests and make sure everything is working correctly. Generally these unit tests are performed automatically and the working branches merged continuously. The test routines are typically written with the libraries Boost::Test (C++), GoogleTest (generally used in ROS with C++), unittest (for Python) and QtTest (for GUIs). For ROS launch files there is additionally rostest. How this can be done in ROS is described here and here.
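As a minimal, hedged illustration of the unittest route, a test for a hypothetical helper function might look like this; the module and function names are invented for the example:

#!/usr/bin/env python
# test_clamp.py -- run with 'python -m unittest test_clamp'
import unittest

def clamp(value, low, high):
    """Stand-in for a function that would live in your package."""
    return max(low, min(high, value))

class TestClamp(unittest.TestCase):
    def test_within_range(self):
        self.assertEqual(clamp(0.5, 0.0, 1.0), 0.5)

    def test_above_range(self):
        self.assertEqual(clamp(2.0, 0.0, 1.0), 1.0)

if __name__ == '__main__':
    unittest.main()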
ROSjects
If you do not even want the person you are sending the code to to go through the hassle of setting it up, you might consider sending them a ROSject. A ROSject is an online virtual ROS environment (by the people behind The Construct, the main source of ROS courses and ROS tutorials on YouTube) that can be created and shared very easily from your existing Git repository, as can be seen here. The simulation runs entirely in the cloud on a virtual machine. This way the potential for failure is very low, but it is not an option if your code is supposed to run on hardware and not only in simulation.
Docker
If your installation procedure is complex, you might as well use a container such as Docker.
More information about using Docker in combination with ROS can be found here. The Docker container might introduce a bit of overhead, though, and it is probably not an option for code which should have real-time priority in combination with a real-time patched operating system.
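A hedged sketch of trying this out with one of the official ROS images (the tag ros:noetic is just an example):

# Pull an official ROS image and open an interactive shell in it
docker pull ros:noetic
docker run -it --rm ros:noetic bash
# Inside the container, set up the ROS environment
source /opt/ros/noetic/setup.bash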
Debian or snap package
Another way of sending somebody a ROS package is by packing it into a Debian or snap package. This process takes a while and is particularly worthwhile if you want to give your code to a large number of users who should be able to use it out of the box. Instructions on how this can be done for Debian packages can be found here and here, while a guide for snap can be found here.

Access hash of input files in genrule to pass to command in Bazel

I am looking for a way to run a command in a genrule with the hash of its input files.
I want to start replacing Maven with Bazel in my projects. It is a multi-repo setup, building a selected product from source across different repositories.
ProjectA
- moduleA1
- moduleA2
ProjectB
- moduleB1
- moduleB2
Maven builds can be executed like this:
cd ProjectA
mvn versions:set -DnewVersion=A_HASH
mvn clean install
cd ../ProjectB
mvn versions:set -DnewVersion=B_HASH
mvn clean install -DprojectA-version=A_HASH
I use versions:set so as not to rely on snapshots and to get reliable builds even locally. I could use a hash from Git, but it is not enough, because 1) I want the build to work locally without committing changes, and 2) B_HASH should change when ProjectA changes.
Bazel will let me re-run Maven only when files change, but that is not enough to integrate it with the Maven repository.
Is there a way to implement a genrule calling "mvn versions:set -DnewVersion=HASH" with the hash of the input files? Bazel calculates hashes of input files, but I cannot find a way to expose this hash to a genrule.
With Bazel, you can forget about the hacky hash you used with Maven. Bazel maintains hashes for you and will recompile everything that is needed.
That's the reliable part of {reliable, fast}: choose two.
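To illustrate the point, a plain genrule already re-runs exactly when the content of its inputs changes, with no explicit hash in sight; the target name, sources and command below are invented placeholders:

# BUILD -- hypothetical rule; Bazel re-executes it only when srcs change
genrule(
    name = "build_module",
    srcs = glob(["moduleA1/src/**"]),
    outs = ["moduleA1_manifest.txt"],
    cmd = "ls $(SRCS) > $@",  # placeholder standing in for the real build step
)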

Solutions for installing libraries without a prebuilt file in bower

Some libraries don't have an already-built JavaScript file in their GitHub repository because the authors of these libraries are against keeping build artifacts around (Sinon.JS, for example). Is there a preferred way to deal with this using Bower?
I know that I could fork the repository and register my fork with the prebuilt file with Bower. I'm just not sure if this is the best/correct way to handle this.
If there's no proper Bower package registered, you can install from any Git repo (you can specify versions if there are proper Git tags), and even from .zip or .tar.gz files if you provide a URL.
This is from http://bower.io/
bower install <package>
Where <package> can be any one of the following:
A name that maps to a package registered with Bower, e.g., jquery.
A remote Git endpoint, e.g., git://github.com/someone/some-package.git. Can be public or private.
A local Git endpoint, i.e., a folder that's a Git repository.
A shorthand endpoint, e.g., someone/some-package (defaults to GitHub).
A URL to a file, including zip and tar.gz files. Its contents will be extracted.
Of course you won't get any dependency resolution this way, but you can take care of that manually by adding any dependency explicitly to your bower.json file.
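For example, a bower.json that pins one dependency straight to a Git endpoint and tag might look like this sketch; the name, URL and version are placeholders:

{
  "name": "my-app",
  "dependencies": {
    "some-package": "git://github.com/someone/some-package.git#1.0.0"
  }
}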
Currently that is the best way. You can also keep it locally and reference it in 'dependencies' with the full path. We're working on adding the ability for authors to publish components, like npm.

How to execute sbt tasks inside Play projects cloned from git to a single directory under Jenkins?

Currently the HudsonPluginForPlay doesn't support Play 2.x, and the author hasn't updated the plugin for quite some time. So I'm trying to figure out a way to get the build automated and tested on my own using the sbt-launcher plugin, as highlighted in Play framework 2.0 continuous integration setup.
However, I've run into a problem where my git checkout structure is like this,
project/
project1/ (Play project 1)
project2/ (Play project 2)
Now sbt seems to run on the root under project/ and it doesn't do anything.
Is there a way to get it to, say, run the sbt commands under project1/ and then project2/? I tried using a shell command to cd into the directories, but that doesn't seem to accomplish anything.
You may be quite successful doing the following:
(cd project1; sbt package); (cd project2; sbt package)
to execute the package command inside each project. It has in fact nothing to do with sbt, since I've only leveraged the fact that you build the projects on Unix and simply cd into each project.
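In Jenkins this could become a single Execute shell build step along these lines; the loop is just a sketch over the directory names from the question:

#!/bin/bash
set -e  # fail the Jenkins build if either project fails
for dir in project1 project2; do
  (cd "$dir" && sbt package)
done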
There's not much else you can do with sbt, since defining the projects as submodules of a parent project would disable the project/ part of each Play project's build; i.e. an sbt build can also be defined via Scala code inside the project/ directory of an sbt project, but only for the root project. If you defined the top directory as the root project, the submodules' builds might break.

How can I load a Rebar plugin for 'pre-compile' from a dependency?

I have a number of applications that need a header file to be generated before compilation. This seemed to be a perfect candidate for a Rebar plugin, so I created a plugin with a pre_compile function, put it in a Git repository, and listed it as a dependency in rebar.config in the other applications.
However, the plugin must be compiled before it can be loaded, so when I run rebar compile -v I find that rebar complains about not finding the plugin, then compiles the dependency, and then fails to compile my application because the header file has not been generated.
Is there a way to accomplish what I'm trying to achieve with a Rebar plugin, or do I need to find another way to do it?
The plugin_dir option is your friend:
{plugin_dir, "deps/my_plugin/src"}.
That makes Rebar try to compile the plugin from that source directory if it can't find it in the code path already.
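Putting the pieces together, the consuming application's rebar.config might look like this sketch, where my_plugin and the repository URL are placeholders:

%% rebar.config -- pull the plugin in as a dependency and load it from source
{deps, [
    {my_plugin, ".*", {git, "git://example.com/my_plugin.git", {branch, "master"}}}
]}.
{plugin_dir, "deps/my_plugin/src"}.
{plugins, [my_plugin]}.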
