How do I pass a ROS package to another person?

I have made a ROS workspace with a package inside it.
I ran catkin_make and everything is working well.
I would like to give this package (or should I give the entire workspace?) to another person.
I am thinking of giving him a zip file of the files and folders (it contains launch files, Python scripts, RViz files etc.), which he would unzip on his machine.
I would like him to be able to run the launch files without problems.
What does he need to do for this? (Of course he will have ROS installed, that is no problem.)
I am thinking he should perhaps do source devel/setup.bash, but is this enough?

When sharing a workspace with somebody, only the source space src has to be shared. It should contain all your packages with their launch files (*.launch), Python (*.py) and C++ nodes (*.cpp, *.hpp), YAML configuration files (*.yaml), RViz configurations (*.rviz) and robot descriptions (*.urdf, *.xacro), and describe how each node should be compiled in a CMakeLists.txt. Additionally, you are supposed to keep track of all the Debian packages you install inside the package.xml file of each package.
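If you are unsure whether every system dependency is actually declared, rosdep can check this for you. A small sketch, assuming it is run from the workspace root:
# Report any dependencies from the package.xml files that are not installed
rosdep check --from-paths src --ignore-src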
If for some obscure reason there are things that I have to do that can't be accommodated in the standard installation instructions given above, I will actually write a bash script that performs these steps for me and add it either to the package itself or to the workspace. This way I can also automate more complex steps such as installing OpenCV or modifying the .bashrc. Here is a small example of what such a minimal script (I generally name them install_dependencies.sh) might look like:
#!/bin/bash
# Determine the workspace directory (the parent of this script's folder)
WS_DIR="$(dirname "$(dirname "$(readlink -fm "$0")")")"
# Check if the script is run as root ('$ sudo ...')
if [ "$EUID" -ne 0 ]
then
  echo "Error: This script has to be run as root: '$ sudo ./install_dependencies.sh'"
  exit 1
fi
echo "Installing dependencies..."
# Modify the .bashrc of the invoking user (under sudo, '~' would point to /root)
echo "- Modifying '~/.bashrc'..."
echo "source ${WS_DIR}/devel/setup.bash" >> "/home/${SUDO_USER:-$USER}/.bashrc"
echo ""
echo "Dependencies installed."
If for some reason even that is not possible, I always make sure to document it properly in a Markdown read-me (*.md), either in a /doc folder inside the package, in the README.md in the base folder of the repository, or in the root folder of the workspace.
The receiver then only has to do the following (a condensed shell version of these steps is shown after the list):
Create a new workspace
Copy or clone the package files into its src folder
Install all the Debian package dependencies listed in the package.xml files with $ rosdep install --from-paths src --ignore-src -r -y
(If any: execute the bash scripts I created by hand, $ sudo ./install_dependencies.sh, or perform the steps given in the documentation)
Build the workspace with $ catkin_make or $ catkin build from catkin-tools
Source the current environment variables with $ source devel/setup.bash
Make sure that the Python nodes are executable, either with $ chmod +x <filename> or by right-clicking the corresponding Python nodes (located in src or scripts of your package), selecting Properties/Permissions and enabling Allow executing file as program
Run the desired Python or C++ nodes ($ rosrun <package_name> <executable_name>) and launch files ($ roslaunch <package_name> <launch_file_name>)
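Condensed into shell commands, these steps might look as follows (the workspace name catkin_ws, the package name my_package and the launch file name are assumptions; adapt them to your setup):
# Create a new workspace and place the package in its source space
mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/src
git clone <git_url_to_package>   # or unzip the shared archive here
# Install the Debian dependencies declared in the package.xml files
cd ~/catkin_ws
rosdep install --from-paths src --ignore-src -r -y
# Build and source the workspace
catkin_make
source devel/setup.bash
# Make the Python nodes executable, then launch
chmod +x src/my_package/scripts/*.py
roslaunch my_package my_launch_file.launch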
It is up to you whether you share the code as a compressed file, in the form of a Git repository or in a more advanced way (see below), but I will introduce some best practices in the following paragraphs that will pay off in the long run with larger packages.
Sharing a package or sharing a workspace?
One can either share a single package or an entire workspace. I personally think that most of the time one should share the entire workspace instead of the package alone, even if you only cloned the other packages from a public GitHub repo. This might save the receiver a lot of headache, e.g. from checking out the wrong branch.
Version control with Git
Arguably the best way to arrange your packages is by using Git. I'd actually make a repository for every package you create (if a couple of packages are virtually inseparable you can also bundle them into a single Git repo, or better, use metapackages). Then create an additional repository for your workspace and include your own packages and packages from other sources as submodules. This allows your code to be modular and reusable: you can share only a package or the entire workspace!
As a first step I always add a .gitignore file to each package repository which excludes *.pyc files and another one to the workspace repository that ignores the build, devel and install folders.
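As a sketch, these two ignore files could be created like this (the paths assume a workspace at ~/catkin_ws with a package my_package; I additionally ignore Python's __pycache__ folders):
# Workspace-level .gitignore: exclude the catkin build artifacts
printf 'build/\ndevel/\ninstall/\n' > ~/catkin_ws/.gitignore
# Package-level .gitignore: exclude compiled Python files
printf '*.pyc\n__pycache__/\n' > ~/catkin_ws/src/my_package/.gitignore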
You can add a particular repository as submodule to your workspace Git repository by opening a console inside the src folder of your workspace repository and typing
$ git submodule add -b <branch_name> <git_url_to_package> <optional_directory_rename>
Note that you can actually track a particular branch of a repository that you include as a submodule. If you need to remove a submodule at some point, follow this guide.
If you share the workspace repository with someone, they will have to have access to each individual submodule repository, and they will have to not only clone the repository but also pull in the submodules with
$ git clone --recurse-submodules <git_url_to_workspace_repository>
and potentially update them to the latest commit with
$ git submodule update --remote
After these two steps they should have a full version of the repository including submodules and should be able to proceed with the steps listed in the section above.
Unit testing and continuous integration
Before sharing a repository you will have to verify that everything is working correctly. This can take a decent amount of time, in particular if the code base is large and you are modifying it frequently. In the ideal case you would install it on a brand-new machine or inside a virtual machine to make sure that the set-up works, which would take quite some time. This is where unit testing comes into play: for every class and function you program, you write a test. This way you can simply run these tests and make sure everything is working correctly. Generally these unit tests are performed automatically and the working branches merged continuously (continuous integration). The test routines are typically written with the libraries Boost::Test (C++), GoogleTest (generally used in ROS with C++), unittest (for Python) and QtTest (for GUIs). For ROS launch files there is additionally rostest. How this can be done in ROS is described here and here.
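In a catkin workspace the registered tests can then be run and summarised with, for example:
# Build and run all registered tests, then print a summary of the results
catkin_make run_tests
catkin_test_results build/test_results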
ROSjects
If you do not even want the person you are sending the code to to go through the hassle of setting it up, you might consider sending them a ROSject. A ROSject is an online virtual ROS environment (by the people behind The Construct, a major source of ROS courses and tutorials on YouTube) that can be created and shared very easily from your existing Git repository, as can be seen here. The simulation runs entirely in the cloud on a virtual machine. This way the potential for failure is very low, but it is not an option if your code is supposed to run on hardware and not only in simulation.
Docker
If your installation procedure is complex you might as well use a container such as Docker.
More information about using Docker in combination with ROS can be found here. The Docker container might introduce a bit of overhead, though, and is probably not an option for code which should have real-time priority in combination with a real-time patched operating system.
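As a minimal sketch, the receiver could start from one of the official ROS images on Docker Hub (the melodic-desktop-full tag is an assumption; pick the distro your code targets) and build the workspace inside the container:
# Pull the official ROS image and open an interactive shell in a container
docker run -it --rm osrf/ros:melodic-desktop-full bash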
Debian or snap package
Another way of sending somebody a ROS package is by packing it into a Debian or snap package. This process takes a while and is particularly worthwhile if you want to give your code to a large number of users who should be able to use it out of the box. Instructions on how this can be done for Debian packages can be found here and here, while a guide for snap can be found here.
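For a Debian package the usual route goes through bloom. A rough sketch, assuming a Melodic package on Ubuntu Bionic (adjust the distro names to your case), run from inside the package folder:
# Install the packaging tools
sudo apt install python-bloom fakeroot dpkg-dev
# Generate the debian/ metadata and build the .deb
bloom-generate rosdebian --os-name ubuntu --os-version bionic --ros-distro melodic
fakeroot debian/rules binary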

Related

Build .deb package in one Docker build stage, install in another stage, but without the .deb package itself taking up space in the final image?

I have a multistage Docker build file. The first stage creates an installable package (e.g. a .deb file). The second stage needs only to install the .deb package.
Here's an example:
FROM debian:buster AS build_layer
COPY src/ /src/
WORKDIR /src
RUN ./build_deb.sh
# A lot of stuff happens here and a big huge .deb package pops out.
# Suppose the package is 300MB.
FROM debian:buster AS app_layer
COPY --from=build_layer /src/myapp.deb /
RUN dpkg -i /myapp.deb && rm /myapp.deb
ENTRYPOINT ["/usr/bin/myapp"]
# This image will be well over 600MB, since it contains both the
# installed package as well as the deleted copy of the .deb file.
The problem with this is that the COPY instruction runs in its own layer and drops the large .deb package into the final image. The next step installs the package and removes the .deb file, but since the COPY layer persists unchanged underneath, the .deb package still takes up room in the final image. If it were a small package you might just deal with it, but in my case the package file is hundreds of MB, so its presence in the final layers increases the image size appreciably with no benefit.
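You can confirm this yourself with docker history, which lists the size contributed by each layer (the myapp tag is assumed here for illustration):
# Build the image, then inspect per-layer sizes: the COPY layer
# still carries the full weight of the .deb file
docker build -t myapp .
docker history myapp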
There are related posts on SO, such as this one, which discusses files containing secrets, and this one, which is about copying a large installer from outside the container into it (and the solution for that one is still kinda janky, requiring you to run a temporary local HTTP server). However, neither of these addresses the situation of needing to copy from another build stage without retaining the copied file in the final image.
The only way I could think of to do this would be to extend the idea of a web server and make available an SFTP or similar server so that the build layer can upload the package somewhere. But this also requires extra infrastructure, and now you're also dealing with SSH secrets and such, and this starts to get real complex and is a lot less reproducible on another developer's system or in a CI/CD environment.
Alternatively I could use the --squash option in BuildKit, but this ends up killing the advantages of the layer system. I can't then reuse similar layers across multiple images (e.g. the image now can't take advantage of the fact that the Debian base image might exist on the end user's system already). This would minimize space usage, but wouldn't be ideal for a lot of other reasons.
What's the recommended way to approach this?

How to make a Bazel TypeScript monorepo with individually deployable packages

I've been trying to get a Bazel monorepo with TypeScript to work. I have a couple of requirements in mind.
I should be able to import local packages using #myworkspace/ instead of ../../../ and so on, without needing Bazel. This is mostly so I get autocomplete while I'm writing.
The #myworkspace/ package should be the same during development and build time but only Bazel-managed dependencies should be resolved on imports when running sandboxed. Just so I know if I've messed up the name of the package in the js_library rule.
There should only be one lock file for the whole project. All dependencies should be located at root/node_modules.
It should be possible to individually deploy node packages i.e. #myworkspace/myCloudFunction.
It should be possible to include local dependencies in packages that will be deployed.
I'm new to Bazel and it seems to require some mentality changes when coming from the NPM ecosystem. After googling, I've managed to find something that works for points 1 and 2 (but I might be wrong). I've published the playground repo at https://github.com/vitorelourenco/bazelmono-ts (pretty much a copy of https://github.com/lokshunhung/bazel-ts-monorepo with some ideas I took from https://github.com/angular/angular)
My questions about points 3 and 4:
Say I want the lib Lodash available in the package #myworkspace/cloudFunction that will be deployed to Google Cloud Functions. If I install Lodash in the #myworkspace/cloudFunction folder, then Lodash will be added to its package.json, but I'll have a second node_modules folder and a second yarn.lock file, which I don't want. But if I install it in root/, then Lodash will not be added to the dependencies listed in the package.json located at #myworkspace/cloudFunction, and when I deploy it, it won't install. Is there a smart way to handle this issue?
Point 5 is very similar. Ideally, the final Bazel output would have the local dependencies bundled in and ready to use, but I can't seem to figure out a way to do it yet. I've tried adding a pkg_npm rule to //packages/app in the playground repo but couldn't get it to include //packages/common in it.

ROS setup.bash issue, how to copy ROS workspace correctly

I am new to ROS and am taking over an old ROS workspace which is really disordered. It is almost impossible for me to fix all its compile errors in a short time, so I created a new ROS workspace and copied some related packages (folders) from the old workspace to my new workspace as a baseline. Then I did the steps below:
1. source /opt/ros/$version/setup.bash
2. echo $ROS_PACKAGE_PATH -- so far so good, only the built-in packages under /opt/ros/$version are on ROS_PACKAGE_PATH
3. in my new workspace/devel, run source setup.bash -- now something happens that I am not sure I understand
After step 3, ROS_PACKAGE_PATH includes the ROS built-in packages, the old workspace and my new workspace, and when I type 'catkin build $nodename' in my new workspace, some dependencies from the old workspace get involved and still cause issues. Is my way of copying ROS packages OK or not? What is the proper way to create my subset workspace? Really appreciate it.
It is related to workspace overlaying. Here is an official reference: http://wiki.ros.org/catkin/Tutorials/workspace_overlaying
Why does it happen?
The thing is that, as JWCS mentioned, you might have had the old workspace in your ~/.bashrc file at the moment you compiled the new_workspace. (Check item 3 of the reference: Chaining catkin workspaces.)
How to get rid of it?
Even if you remove it from ~/.bashrc it will keep appearing in $ROS_PACKAGE_PATH, because you compiled your new_workspace with the old_workspace on the $ROS_PACKAGE_PATH.
The compilation process takes the current $ROS_PACKAGE_PATH into account and "attaches" it to the workspace you are compiling.
Solution
Go to your new_workspace: cd ~/new_workspace
Remove build and devel folders: rm -rf build devel
Source only the ROS installation folder: source /opt/ros/<distro>/setup.bash
Check your $ROS_PACKAGE_PATH, it will contain only the ROS distro installation path
Recompile your new workspace: catkin_make
Source your new workspace: source ~/new_workspace/devel/setup.bash
Check your $ROS_PACKAGE_PATH, it will contain /opt/ros/<distro> + new_workspace
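In a single shell session the whole procedure looks like this (keep <distro> as a placeholder for your ROS distribution):
cd ~/new_workspace
rm -rf build devel
source /opt/ros/<distro>/setup.bash
echo $ROS_PACKAGE_PATH    # only the ROS installation path now
catkin_make
source ~/new_workspace/devel/setup.bash
echo $ROS_PACKAGE_PATH    # now /opt/ros/<distro> + new_workspace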
About ~/.bashrc
You can keep the old_workspace source line, since you source the new_workspace just after it; the old one will not be considered anymore. But for the sake of simplicity and organization, I recommend keeping only the workspaces you are working on in your .bashrc file.
Hope it can help you!
Regards
Two things to check. First of all, make sure that you're not sourcing the old workspace in your ~/.bashrc. It's common practice to source /opt/ros/VERSION/setup.bash in the bashrc and, when you only have one workspace, to also source ~/MY_WS/devel/setup.bash.
Second, if you already sourced the old workspace, you should just close the terminal before sourcing the new one, otherwise the old one will show up.
As for your method of cleaning up a workspace, that's a good approach. I would move one package at a time from the old workspace to the new one, building everything each time. If you look at each package's CMakeLists.txt/package.xml, all the dependencies should be listed. Make sure, if they're from the old workspace, that you copy them over. That should reduce your problems. If there is a package that is truly broken, you'll be able to find and isolate it quickly.
As suggested by JWCS, the best thing you can do is to move one package at a time from the old workspace to the new workspace, recompiling every time and verifying that your CMakeLists.txt is the same as the old one.
For the third step I would use catkin_make to compile the packages.

How can I change some software of OpenWrt and rebuild it into a bin file

... $ make menuconfig
select some packages
... $ make
...
There are many bin files in the bin folder.
My question is: I want to change some software source code of OpenWrt and rebuild it again.
I have tried to edit some source code in build_dir, but when I rebuild OpenWrt my code gets overwritten with the newest code from SVN.
Does anyone know how to do that?
Writing your code and synchronizing it:
1) Clone the official linino repository from Arduino on your machine using git (install it using sudo apt-get install git):
git clone https://github.com/linino/linino_distro.git
2) Make your own changes in the relevant code files, Makefiles or whatever else you need to change.
3) Whenever you want to synchronize your work with the latest changes from the remote master branch in the linino repository, you first need to fetch the changes, i.e. retrieve them without merging them yet, then merge them in a second step and resolve any conflicts:
Preliminary: if you created a local branch with your own changes, make sure you get back to the master branch by checking it out:
git checkout master
a) Fetching the latest changes from the remote (normally called origin):
git fetch origin
b) Merging them with your changes in your local repository:
git merge origin/master
Note: alternatively you can do it in one command:
git pull
It essentially does a fetch and a merge at the same time, but it's important to understand the process using fetch first. From experience it can be confusing for beginners; plus it can cause automatic merges if not explicitly told otherwise, causing more work to undo them.
4) Now you're ready to resolve conflicts if any, for that you can use:
git mergetool
This will allow you to resolve conflicts using a graphical tool such as tkdiff (a 2-way merge tool) or meld (a 3-way merge tool that diffs your changes, the changes from the remote master, and the original file).
Compiling your code:
5) Open a terminal in your linino buildroot directory, make sure you update the config if you added any new packages, then recompile the image, i.e.
cd ~/myLininoBuildRoot/trunk
make menuconfig
# Now select your new package, which you added in trunk/package
# Make sure you save the configuration before exiting
make
Note: Alternatively you can recompile packages one by one. Instead of doing a simple make do:
Preliminary step:
Make sure to have compiled the linino toolchain that allows you to compile packages separately:
cd trunk/
make tools/install
make toolchain/install
make target/compile
Then compile your package:
make package/myPackage
Or alternatively, you can be more specific by selecting any target from your package Makefile, for instance the install, compile or build targets:
make package/myPackage/install
make package/myPackage/compile
make package/myPackage/build
Finally, recompile the index target common to all packages. This gives you a bin directory trunk/bin/yourArchitecture/packages containing an up-to-date index of packages, including your freshly compiled one:
make package/index
More info at: http://wiki.openwrt.org/doc/howto/build.a.package
Checking that everything is alright:
Now go have a look at trunk/bin/yourArchitecture/packages/Packages and do a grep to make sure your package is listed in Packages (the actual package index file) and is up to date:
grep myPackage Packages
You probably want to:
Get the latest version of the source code.
Make whatever changes you want.
Use diff to make a patch recording your changes.
Update the source code (in the future)
Use patch to apply the patch
Manually perform any changes that could not be patched.
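A minimal sketch of that diff/patch workflow, with hypothetical directory names (package-1.0 for the sources, package-1.0.orig for the pristine copy):
# Keep a pristine copy, edit the sources, then record the changes
cp -r package-1.0 package-1.0.orig
# ... edit files under package-1.0 ...
diff -ruN package-1.0.orig package-1.0 > my-changes.patch
# Later, after updating the sources, re-apply your changes
patch -p1 -d package-1.0 < my-changes.patch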
This is an example of building motion against a newer version.
edit:
../package/feeds/packages/motion
Original Makefile
PKG_NAME:=motion
PKG_VERSION:=20110826-051001
PKG_RELEASE:=2
PKG_SOURCE:=$(PKG_NAME)-$(PKG_VERSION).tar.gz
PKG_SOURCE_URL:=http://www.lavrsen.dk/sources/motion-daily \
#SF/motion
PKG_MD5SUM:=e703fce57ae2215cb05f25e3027f5818
Edited Makefile
PKG_NAME:=motion
PKG_VERSION:=20120605-224837
PKG_RELEASE:=2
PKG_SOURCE:=$(PKG_NAME)-$(PKG_VERSION).tar.gz
PKG_SOURCE_URL:=http://www.lavrsen.dk/sources/motion-daily \
#SF/motion
PKG_MD5SUM:=145fffcb99aed311a9c1d93b838db66f
You can also change Package Source URL (PKG_SOURCE_URL) if necessary
Rebuild newer motion application with:
make package/feeds/packages/motion/compile

How to execute sbt tasks inside Play projects cloned from git to a single directory under Jenkins?

Currently the HudsonPluginForPlay doesn't support Play 2.x and the author hasn't updated the plugin for quite some time. So I'm trying to figure out a way to get the build automated and tested on my own, using the sbt-launcher plugin as highlighted in Play framework 2.0 continuous integration setup.
However, I've run into a problem where my git checkout structure is like this:
project/
project1/ (Play project 1)
project2/ (Play project 2)
Now sbt seems to run at the root under project/ and it doesn't do anything.
Is there a way to get it to, say, run the sbt commands under project1/ and then project2/? I tried using a shell command to cd into the directories, but that doesn't seem to do anything in particular.
You may be quite successful doing the following:
(cd project1; sbt package); (cd project2; sbt package)
to execute the package command inside the projects. It has in fact nothing to do with sbt, since I've only leveraged the fact that you build the projects on Unix and just cd into each project.
There's not much more you can do with sbt itself: defining the projects as submodules of a parent project would disable the project/ part of each Play project's build. That is, an sbt build can also be defined via Scala code inside the project/ directory of an sbt project, but only for the root project, so if you define the parent directory as the root project, the submodules' builds may be damaged.
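In a Jenkins shell build step the same idea generalises to a loop; a minimal sketch, with the project names taken from the question:
# Build each Play project in turn; fail the Jenkins job if any build fails
for d in project1 project2; do
  (cd "$d" && sbt package) || exit 1
done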
