I have created my release. The output is a fairly complicated directory structure with many files. It includes a script that I can use to run the program ('script start', 'script console', etc.).
How do I distribute this to customers? Do I need to give them the entire directory structure? The bin directory, which contains the script, is not at the top level, which seems to imply that perhaps only it is needed.
You have to release it in production mode and package it as a tarball:
$ rebar3 as prod tar
This will allow them to unpack the tarball and run ./bin/appname start from the top-level directory. You can also choose not to include source files by configuring your rebar.config file accordingly.
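A minimal sketch of the relevant rebar.config section (the relx option names come from the rebar3/relx documentation; adapt the profile to your release):
%% prod profile: no symlinks, bundle the runtime, omit sources
{profiles, [
    {prod, [
        {relx, [
            {dev_mode, false},
            {include_erts, true},
            {include_src, false}
        ]}
    ]}
]}.
With include_erts set to true, customers do not even need Erlang installed on the target machine.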
I have made a ROS workspace and, inside it, a package.
I did catkin_make and everything is working well.
I would like to give this package (or should I give the entire workspace?) to another person.
I am thinking of giving him a zip file of the files and folders (it contains launch files, Python scripts, RViz files, etc.), so I expect he will unzip it on his machine.
I would like him to be able to run the launch files without problems.
What does he need to do for this? (Of course he will have ROS installed; that is no problem.)
I am thinking perhaps he should do source devel/setup.bash, but is this enough?
When sharing a workspace with somebody, only the source space src has to be shared. It should contain all your packages with their launch files (*.launch), Python (*.py) and C++ nodes (*.cpp, *.hpp), YAML configuration files (*.yaml), RViz configurations (*.rviz) and robot descriptions (*.urdf, *.xacro), and describe how each node should be compiled in a CMakeLists.txt. Additionally, you are supposed to keep track of all the Debian packages you install inside the package.xml file of each package.
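For illustration, a minimal package.xml declaring such dependencies might look like this (names and values are placeholders):
<?xml version="1.0"?>
<package format="2">
  <name>my_package</name>
  <version>0.1.0</version>
  <description>Example package</description>
  <maintainer email="you@example.com">You</maintainer>
  <license>MIT</license>
  <buildtool_depend>catkin</buildtool_depend>
  <depend>roscpp</depend>
  <depend>rospy</depend>
  <depend>std_msgs</depend>
</package>
rosdep (see the receiver steps below) resolves these entries to the corresponding Debian packages.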
If for some obscure reason there are things I have to do that cannot be accommodated in the standard installation instructions given above, I will actually write a bash script that performs these steps for me and add it either to the package itself or to the workspace. This way I can also automate more complex steps such as installing OpenCV or modifying the .bashrc. Here is a small example of what such a minimal script (I generally name them install_dependencies.sh) might look like:
#!/bin/bash
# Determine the workspace directory (parent of the folder containing this script)
WS_DIR="$(dirname "$(dirname "$(readlink -f "$0")")")"

# Check if the script is run as root ('$ sudo ...')
if [ "$EUID" -ne 0 ]
then
  echo "Error: This script has to be run as root: '$ sudo ./install_dependencies.sh'"
  exit 1
fi

echo "Installing dependencies..."

# Source the workspace in every new shell by appending to ~/.bashrc
echo "- Modifying '~/.bashrc'..."
echo "source ${WS_DIR}/devel/setup.bash" >> ~/.bashrc

echo ""
echo "Dependencies installed."
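The receiver then only has to make the script executable once and run it with root privileges:
$ chmod +x install_dependencies.sh
$ sudo ./install_dependencies.sh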
If for some reason even that is not possible, I always make sure to document it properly in a Markdown read-me: either in a doc/ folder inside the package, in the README.md in the base folder of the repository, or in the root folder of the workspace.
The receiver then only has to do the following (a condensed command sketch follows this list):
Create a new workspace
Copy or clone the package files to its src folder
Install all the Debian package dependencies listed in the package.xml files with $ rosdep install --from-paths src --ignore-src -y
(If any: execute the bash scripts I created by hand, $ sudo ./install_dependencies.sh, or perform the steps given in the documentation)
Build the workspace with $ catkin_make or $ catkin build from catkin-tools
Source the current environment variables with $ source devel/setup.bash
Make sure that the Python nodes are executable, either with $ chmod +x <filename> or by right-clicking the corresponding Python nodes (located in src or scripts of your package), selecting Properties/Permissions and enabling Allow executing file as program.
Run the desired Python or C++ nodes ($ rosrun <package_name> <executable_name>) and launch files ($ roslaunch <package_name> <launch_file_name>)
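Condensed into shell commands, that workflow might look like this (paths and names are placeholders):
$ mkdir -p ~/catkin_ws/src && cd ~/catkin_ws/src
$ git clone <git_url_to_package>        # or unzip the shared archive here
$ cd ~/catkin_ws
$ rosdep install --from-paths src --ignore-src -y
$ catkin_make                           # or: catkin build
$ source devel/setup.bash
$ roslaunch <package_name> <launch_file_name>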
It is up to you whether to share the code as a compressed file, as a Git repository, or in a more advanced way (see below), but I will introduce some best practices in the following paragraphs that will pay off in the long run with larger packages.
Sharing a package or sharing a workspace?
One can either share a single package or an entire workspace. I personally think that most of the time one should share the entire workspace instead of the package alone, even if you only cloned the other packages from a public GitHub repo. This might save the receiver a lot of headache, e.g. from checking out the wrong branch.
Version control with Git
Arguably the best way to organise your packages is with Git. I would actually make a repository for every package you create (if a couple of packages are virtually inseparable, you can also bundle them into a single Git repo, or better, use metapackages). Then create an additional repository for your workspace and include your own packages and packages from other sources as submodules. This keeps your code modular and re-usable: you can share only a package or the entire workspace!
As a first step I always add a .gitignore file to each package repository which excludes *.pyc files, and another one to the workspace repository that ignores the build, devel and install folders.
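The workspace-level .gitignore can be as simple as:
build/
devel/
install/
while the package-level one only needs:
*.pyc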
You can add a particular repository as submodule to your workspace Git repository by opening a console inside the src folder of your workspace repository and typing
$ git submodule add -b <branch_name> <git_url_to_package> <optional_directory_rename>
Note that you can actually track a particular branch of a repository that you include as a submodule. In case you need to remove a submodule at some point, follow this guide.
If you share the workspace repository with someone, they will need access to each individual submodule repository, and they will have to not only clone the repository but also fetch the submodules with
$ git clone --recurse-submodules <git_url_to_workspace_repository>
and potentially update them to the latest commit with
$ git submodule update --remote
After these two steps they should have a full version of the repository with submodules and they should be able to progress with the steps listed in the section above.
Unit testing and continuous integration
Before sharing a repository you will have to verify that everything is working correctly. This can take a decent amount of time, in particular if the code base is large and you are modifying it frequently. In the ideal case you would install it on a brand-new machine or inside a virtual machine to make sure that the set-up works, which would take quite some time. This is where unit testing comes into play: for every class and function you program, you also write a test. This way you can simply run these tests and make sure everything is working correctly. Generally these unit tests are run automatically and the working branches merged continuously. The test routines are typically written with the libraries Boost::Test (C++), GoogleTest (generally used in ROS with C++), unittest (for Python) and QtTest (for GUIs). For ROS launch files there is additionally rostest. How this can be done in ROS is described here and here.
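For illustration, a minimal Python unittest for a hypothetical helper function (the function and test names are made up, not part of any ROS API):
#!/usr/bin/env python
import unittest

def clamp(value, low, high):
    # Example helper: restrict value to the range [low, high]
    return max(low, min(high, value))

class TestClamp(unittest.TestCase):
    def test_within_range(self):
        self.assertEqual(clamp(0.5, 0.0, 1.0), 0.5)

    def test_above_range(self):
        self.assertEqual(clamp(2.0, 0.0, 1.0), 1.0)

if __name__ == '__main__':
    unittest.main()
Running $ python -m unittest discover in the test folder then executes all such tests at once.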
ROSjects
If you do not even want the person you are sending the code to to go through the hassle of setting it up, you might consider sending them a ROSject. A ROSject is an online virtual ROS environment (by the people behind The Construct, the main source of ROS courses and of ROS tutorials on YouTube) that can be created and shared very easily from your existing Git repository, as can be seen here. The simulation runs entirely in the cloud on a virtual machine. This way the potential for failure is very low, but it is not an option if your code is supposed to run on hardware and not only in simulation.
Docker
If your installation procedure is complex, you might as well use a container such as Docker.
More information about using Docker in combination with ROS can be found here. The Docker container might introduce a bit of overhead, though, and it is probably not an option for code which should have real-time priority in combination with a real-time patched operating system.
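A rough sketch of a Dockerfile for a catkin workspace, assuming the official ros:noetic-ros-base image and the workspace layout described above (adapt names and distro as needed):
# Build a catkin workspace on top of an official ROS image
FROM ros:noetic-ros-base
COPY ./src /catkin_ws/src
WORKDIR /catkin_ws
# Install the dependencies declared in the package.xml files
RUN apt-get update && \
    rosdep update && \
    rosdep install --from-paths src --ignore-src --rosdistro noetic -y && \
    rm -rf /var/lib/apt/lists/*
# Build the workspace
RUN /bin/bash -c "source /opt/ros/noetic/setup.bash && catkin_make"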
Debian or snap package
Another way of sending somebody a ROS package is by packaging it into a Debian or snap package. This process takes a while and is particularly favourable if you want to give your code to a large number of users who should be able to use it out of the box. Instructions on how this can be done for Debian packages can be found here and here, while a guide for snap can be found here.
I have a genrule in Bazel that is supposed to manipulate some files. I think I'm not accessing these files by the correct path, so I want to look at the directory structure that Bazel is creating so I can debug.
I added some echo statements to my genrule and I can see that Bazel is working in the directory /home/lyft/.cache/bazel/_bazel_lyft/8de0a1069de8d166c668173ca21c04ae/sandbox/linux-sandbox/1/execroot/. However, after Bazel finishes running, this directory is gone, so I can't look at the directory structure.
How can I prevent Bazel from deleting its temporary files so that I can debug what's happening?
Since this question is a top result for the "keep sandbox files after build bazel" Google search and the answer wasn't obvious to me from the accepted one, I feel the need to write this answer.
Short answer
Use --sandbox_debug. If this flag is passed, Bazel will not delete the files inside the sandbox folder after the build finishes.
Longer answer
Run bazel build with --sandbox_debug option:
$ bazel build mypackage:mytarget --sandbox_debug
Then you can inspect the contents of the sandbox folder for the project.
To get the location of the sandbox folder for current project, navigate to project and then run:
$ bazel info output_base
/home/johnsmith/.cache/bazel/_bazel_johnsmith/d949417420413f64a0b619cb69f1db69 # output will be something like this
Inside that directory there will be a sandbox folder.
Possible caveat (I'm NOT sure about this): it's possible that some files will be missing from the sandbox folder if you previously ran a build without the --sandbox_debug flag and it partially succeeded. The reason is that Bazel won't rerun the parts of the build that already succeeded, so the files corresponding to those parts might not end up in the sandbox.
If you want to make sure all the sandbox files are there, clean the project first using either bazel clean or bazel clean --expunge.
You can use --spawn_strategy=standalone.
You can also use --sandbox_debug to see which directories are mounted to the sandbox.
You can also set the genrule's cmd to find . > $@ to debug what's available to the genrule.
Important: declare all srcs/outs/tools that the genrule will read/write/use, and use $(location //label/of:target) to look up their path. Example:
genrule(
    name = "x1",
    srcs = ["//foo:input1.txt", "//bar:generated_file"],
    outs = ["x1out.txt", "x1err.txt"],
    tools = ["//util:bin1"],
    cmd = "$(location //util:bin1) --input1=$(location //foo:input1.txt) --input2=$(location //bar:generated_file) --some_flag --other_flag >$(location x1out.txt) 2>$(location x1err.txt)",
)
As part of our efforts to create a Bazel-to-Maven transition interop tool (which creates Maven-sized jars from more granular Bazel jars),
we have written an aspect that runs on a bazel build of the entire repo and writes important information to txt file outputs (e.g. jar file paths, compile-deps targets, runtime-deps targets, etc.).
We ran across an issue where the repo's code was changed such that some of the txt files were not written anymore, but the old txt files from previous runs (before the code change) remained!
Is there a way to know that these txt files are no longer relevant?
You should be able to run with --build_event_json_file=file.json and locate the generated artifacts from it. For example, we use it on ci.bazel.io to locate the actual test.xml files that were generated: https://github.com/bazelbuild/continuous-integration/blob/09975cbb487a84a62ca1e43aa43e7c6fe078f058/jenkins/lib/src/build/bazel/ci/BazelUtils.groovy#L218
The definition of the protocol can be found in build_event_stream.proto
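As a rough, untested sketch: the JSON file is newline-delimited, and artifact URIs are reported in namedSetOfFiles events (field names assumed from build_event_stream.proto), so something like this should list everything the build currently produces:
$ bazel build //... --build_event_json_file=bep.json
$ jq -r 'select(.namedSetOfFiles) | .namedSetOfFiles.files[].uri' bep.json
Any txt file from a previous run that no longer appears in this list is no longer generated and can be treated as stale.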
I seem to be getting a lot of pain with the processing of app.config and token files (we had this working with the older ".11" templates).
It looks like currently (using ReleaseTfvcTemplate.12.xaml) the tokenisation is run after the build.
While I can make the app.config / myapp.exe.config stuff work by deliberately copying the .token file into my output folder (so the recursive search finds it), this feels pretty horrid.
As a fix I'm tempted to move lines 182-230 to just before the RunMSBuild task on line 175 (creating a new sequence at that point).
Is this the correct approach, or have I missed some documentation somewhere (or a later version of the template)?
Thanks guys... Anyway, for future reference, I did make the change.
However, I had misunderstood the exact order in which things happen out of the box, which is as follows:
Get the project out of source control
build with msbuild
Copy the .config.token file over the .config file. This is in the TFS template
As part of the deploy to a server, the token entries in the .config files are replaced. This is in the Release Manager template.
Tests are run in the msbuild binary output folders.
The problem is that this doesn't really work if you're using a project type that uses an app.config file, as the msbuild process renames it to output.exe.config during the build stage. You therefore need to create both an output.exe.config (marked as copy to output) and an output.exe.config.token so that after the post-deploy step the final output gets configured correctly. This is also a problem if you want to tokenise some MSTest dlls, as these typically use an app.config as well. Basically this is a bit of a mess unless you are using web.config.
We worked our way around this by using the modification I suggested above (you need to create a sequence at line 175 and move lines 178-230 up into the sequence; this is the GetBuildDirectory variable bits and the if statement), followed by adding an additional deployment stage which copies the newly tokenised files back onto the build server so that MSTest can run against them.
So our new process looks like this:
Get the project out of source control
Copy the .config.token file over the .config file, i.e. app.config.token copied over app.config
Build with msbuild (this means we end up with tokenised myapp.exe.config and mytests.dll.config)
As part of the deploy to a server, the token entries in the .config files are replaced. This is a Release Management step in the release template.
Deploy the tests back into a folder on the build server (I think this has to be a fixed folder until Update 4 of Release Manager is deployed), where the token entries in the .config files are replaced (so our integration tests can use the newly deployed servers). This is a Release Management step in the release template.
Tests are run in the fixed folder on the build server (rather than the msbuild output directory), so the test wildcard needs to be changed in the TFS build template.
A quick final note: we don't use that build directory variable and it's left blank. I'm not convinced this would work if it were set to a value...
The replacement of variables in config files with Release Management happens at deployment time, not at compile time.
When RM deploys your app, it inserts the correct values.
It sounds like you're hitting one of two issues:
You need to include the .token file in your project and make sure it's set to Copy Always, so that it gets copied to the build output folder (see the project-file sketch after this list).
If you're building a web application, I've seen a bug in the release build process template that doesn't touch the contents of the _PublishedWebsites folder. I don't know if it's been fixed in Update 4 or not, but it was definitely still a problem in earlier versions.
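For reference, such a Copy Always entry in the .csproj might look like this (the file name is an example):
<ItemGroup>
  <None Include="app.config.token">
    <CopyToOutputDirectory>Always</CopyToOutputDirectory>
  </None>
</ItemGroup>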
My file structure contains an src/ folder with the project's source code, and this folder is the one I want to have in the Jenkins workspace.
However, I also have a build folder which is needed for Apache Ant, and it changes with every single "ant" command executed. The problem is this folder weighs over 200 MB. I don't want to end up pushing it to the repo every time I run the "ant" command.
If someone who reads this has experience with this: what's the best way to do it? Is it possible to pull the src/ folder from the repo and the build/ folder from the system? But I guess that would be wrong, because that way only I would be able to execute the ant command...
What's the best way to set this up?
Huh? You put build in your .hgignore file and then you'll not commit that directory or what's in it. That's the usual setup, but maybe I'm missing some nuance of your question.
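For example, a minimal .hgignore at the repository root (using Mercurial's default regexp syntax) could read:
syntax: regexp
^build/
After adding it, hg status will no longer list anything under build/, so the directory never gets committed or pushed.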