roslaunch failed: cannot launch node - beagleboneblack

I have downloaded and compiled some ROS nodes from here (linked just to give more info). I am trying to launch the five ROS nodes with parameters using a launch file taken from that repo.
After executing source catkin_ws/devel_isolated/setup.bash and then roslaunch crab.launch (the launch file from the link above), the following error appears:
root@beaglebone:~# roslaunch crab.launch
... logging to /root/.ros/log/4f6332fe-dbe2-11e3-86a8-7ec70b079d59/roslaunch-beaglebone-2067.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
started roslaunch server http://beaglebone:58881/
SUMMARY
========
PARAMETERS
* /clearance
* /duration_ripple
* /duration_tripod
* /joint_lower_limit
* /joint_upper_limit
* /port_name
* /robot_description
* /rosdistro
* /rosversion
* /trapezoid_h
* /trapezoid_high_radius
* /trapezoid_low_radius
NODES
/
crab_body_kinematics (crab_body_kinematics/body_kinematics)
crab_gait (crab_gait/gait_kinematics)
crab_imu (crab_imu/imu_control)
crab_leg_kinematics (crab_leg_kinematics/leg_ik_service)
crab_maestro_controller (crab_maestro_controller/controller_sub)
ROS_MASTER_URI=http://localhost:11311
core service [/rosout] found
ERROR: cannot launch node of type [crab_leg_kinematics/leg_ik_service]: can't locate node [leg_ik_service] in package [crab_leg_kinematics]
ERROR: cannot launch node of type [crab_maestro_controller/controller_sub]: can't locate node [controller_sub] in package [crab_maestro_controller]
ERROR: cannot launch node of type [crab_body_kinematics/body_kinematics]: can't locate node [body_kinematics] in package [crab_body_kinematics]
ERROR: cannot launch node of type [crab_gait/gait_kinematics]: can't locate node [gait_kinematics] in package [crab_gait]
ERROR: cannot launch node of type [crab_imu/imu_control]: can't locate node [imu_control] in package [crab_imu]
I have reinstalled the packages as suggested in some other threads about similar problems.
I have also noticed that:
1º- If I move all the executables of the nodes to the folder src/<package>/, I'm able to execute roslaunch crab.launch. But I don't want to leave it like that; it's not the proper way to work ;)
Additional info:
2º- If I execute, for example, source devel_isolated/<package>/setup.bash and then roslaunch crab.launch, the package which I have just sourced works and executes... (while the others still don't).
3º- So I sourced all the devel_isolated/<package>/setup.bash files and tried again: this time none of them worked.
This leads me to think that the problem is due to the ROS environment variables: if I run export | grep ROS after 2º, I can see that the package path appears in ROS_PACKAGE_PATH while the others are not there:
root@beaglebone:~# export | grep ROS
declare -x ROS_DISTRO="hydro"
declare -x ROS_ETC_DIR="/opt/ros/hydro/etc/ros"
declare -x ROS_MASTER_URI="http://localhost:11311"
declare -x ROS_PACKAGE_PATH="/root/catkin_ws/src/crab_msgs:/root/catkin_ws/src/joy:/root/catkin_ws
/src/ps3joy:/root/catkin_ws/src/xacro:/root/catkin_ws/src/roslint:/root/catkin_ws/src/kdl_parser:/root/catkin_ws
/src/urdf:/root/catkin_ws/src/urdf_parser_plugin:/root/catkin_ws/src:/opt/ros/hydro/share:/opt/ros/hydro
/stacks:/root/ros_catkin_ws/install_isolated/share:/root/ros_catkin_ws/install_isolated/stacks"
declare -x ROS_ROOT="/opt/ros/hydro/share/ros"
declare -x ROS_TEST_RESULTS_DIR="/root/catkin_ws/build_isolated/crab_msgs/test_results"
root@beaglebone:~# source catkin_ws/devel_isolated/crab_imu/setup.bash
declare -x ROS_PACKAGE_PATH="/root/catkin_ws/src/crab_imu:/root/catkin_ws/src/crab_msgs:/root/catkin_ws
/src/joy:/root/catkin_ws/src/ps3joy:/root/catkin_ws/src/xacro:/root/catkin_ws/src/roslint:/root/catkin_ws
/src/kdl_parser:/root/catkin_ws/src/urdf:/root/catkin_ws/src/urdf_parser_plugin:/root/catkin_ws/src:/opt
/ros/hydro/share:/opt/ros/hydro/stacks:/root/ros_catkin_ws/install_isolated/share:/root/ros_catkin_ws
/install_isolated/stacks"
declare -x ROS_TEST_RESULTS_DIR="/root/catkin_ws/build_isolated/crab_imu/test_results"
It seems that 3º overwrites the source executed before, meaning that ROS_PACKAGE_PATH does not contain all the packages it should.
I have also tried to force ROS_PACKAGE_PATH using the export command, but that didn't work. So I would have to change more environment variables apart from that one, but I don't know which...
So, I don't know whether my diagnosis is correct and, if so, what I should do to correct it... I hope I have gathered enough info.
Thanks in advance!!
Iñigo

Set the executable bit on the files: most probably you need to set executable permissions for the node executables.
chmod +x filename
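For example, a minimal sketch (the paths below are illustrative only; point chmod at wherever your node executables actually live in the workspace):
# illustrative paths -- adjust to the actual location of each node executable
chmod +x /root/catkin_ws/src/crab_leg_kinematics/src/leg_ik_service
chmod +x /root/catkin_ws/src/crab_imu/src/imu_control
# or mark everything in a package's scripts directory executable in one go
chmod +x /root/catkin_ws/src/crab_gait/scripts/*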

Related

Pulling ROS2 packages in via bazel WORKSPACE mech in drake-ros_*

This question is related to drake-ros #142 - I am unable to pull in rviz_2d_overlay_plugins in drake_ros_viz via the Bazel WORKSPACE mechanism. I have tried both the focal and jammy rolling/humble distributions. I see the following error -
ERROR: no such package '@ros2//': Failed to setup @ros2 repository: './run.bash /home/arrowhead/.cache/bazel/_bazel_arrowhead/90cc0dfb7c9efd65bbcc1ee9dd934f8e/external/bazel_ros2_rules/ros2/scrape_distribution.py -i geometry_msgs -i rclcpp -i rclpy -i tf2_ros -i tf2_ros_py -i visualization_msgs -i rviz_2d_overlay_plugins -o distro_metadata.json' exited with 1
--- captured stderr ---
Traceback (most recent call last):
File "/home/arrowhead/.cache/bazel/_bazel_arrowhead/90cc0dfb7c9efd65bbcc1ee9dd934f8e/external/bazel_ros2_rules/ros2/scrape_distribution.py", line 54, in <module>
main()
File "/home/arrowhead/.cache/bazel/_bazel_arrowhead/90cc0dfb7c9efd65bbcc1ee9dd934f8e/external/bazel_ros2_rules/ros2/scrape_distribution.py", line 46, in main
distro = scrape_distribution(
File "/home/arrowhead/.cache/bazel/_bazel_arrowhead/90cc0dfb7c9efd65bbcc1ee9dd934f8e/external/ros2/resources/ros2bzl/scraping/__init__.py", line 90, in scrape_distribution
packages, dependency_graph = build_dependency_graph(
File "/home/arrowhead/.cache/bazel/_bazel_arrowhead/90cc0dfb7c9efd65bbcc1ee9dd934f8e/external/ros2/resources/ros2bzl/scraping/__init__.py", line 50, in build_dependency_graph
raise RuntimeError(msg)
RuntimeError: Cannont find package 'rviz_2d_overlay_plugins'
I can see the package on ROS Index here, and my relevant WORKSPACE Bazel snippet is given below.
ros2_archive(
    name = "ros2",
    include_packages = [
        "geometry_msgs",
        "rclcpp",
        "rclpy",
        "tf2_ros",
        "tf2_ros_py",
        "visualization_msgs",
        "rviz_2d_overlay_plugins",
    ],
    sha256_url = "https://build.ros2.org/view/Hci/job/Hci__nightly-cyclonedds_ubuntu_jammy_amd64/lastSuccessfulBuild/artifact/ros2-humble-linux-jammy-amd64-ci-CHECKSUM",  # noqa
    strip_prefix = "ros2-linux",
    url = "https://build.ros2.org/view/Hci/job/Hci__nightly-cyclonedds_ubuntu_jammy_amd64/lastSuccessfulBuild/artifact/ros2-humble-linux-jammy-amd64-ci.tar.bz2",  # noqa
)
Primary Suggestion: You may want to consider installing ROS Humble directly onto your Jammy OS, and then use ros2_local_repository rather than ros2_archive:
https://github.com/RobotLocomotion/drake-ros/blob/06dc6aa7d0237f4edb74130bcce8e9e78689d50c/bazel_ros2_rules/ros2/defs.bzl#L221-L239
Then you should be able to use a locally installed ROS 2 version, and install the package you want, either to /opt/ros or in a workspace that you're chaining.
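For illustration, a rough sketch of what that might look like in the WORKSPACE (the attribute names below are my assumption; check the defs.bzl linked above for the exact signature):
ros2_local_repository(
    name = "ros2",
    # assumed attribute: path(s) to a locally installed Humble or a chained workspace
    workspaces = ["/opt/ros/humble"],
    include_packages = [
        "geometry_msgs",
        "rclcpp",
        "rclpy",
        "tf2_ros",
        "tf2_ros_py",
        "visualization_msgs",
        "rviz_2d_overlay_plugins",
    ],
)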
FTR, in Anzu at TRI, we use ros2_archive presently for the following reasons (some of which are my opinions):
We wanted to use ROS 2 Humble on Focal, possibly mixing multiple different versions along different checkouts.
I want things to update only when I (we) precisely want them to (for high-confidence reproducibility, and also to denoise apt upgrade).
I want us to maintain decent reproducibility without aggressive containerization.
This means you should consolidate your development solely onto Jammy.
If you are already containerizing, then that should hopefully not be too large of a jump.
Alternative: If you really want to maintain archives, you can build your own extension archive including the packages you want. Here's an example kinda-hack that I had shimmed into Anzu:
https://github.com/EricCousineau-TRI/repro/tree/4079be209bfbc282d4580b5b2979faa148e1f394/ros/drake_ros_example_layered_archive

8th wall web app setup child compilation failed

I am new to 8th Wall. I have cloned 8th Wall Web from git and properly executed the steps below:
# cd <directory_where_you_saved_sample_project_files>
# cd serve
# npm install
# cd ..
# ./serve/bin/serve -d <sample_project_location>
but on executing the last step, which for example is
./serve/bin/serve -n -d gettingstarted/xraframe/ -p 7777
I get the errors below
Failed to compile.
Error: Child compilation failed: Entry module not found: Error:
Can't resolve
'C:\8thWall_Project\web\serve\bin\gettingstarted\xraframe"
\index.html' in 'C:\8thWall_Project\web\serve': Error: Can't resolve
'C:\8thWall_Project\web\serve\bin\gettingstarted\xraframe"
\index.html' in 'C:\8thWall_Project\web\serve'
compiler.js:79 childCompiler.runAsChild
[serve]/[html-webpack-plugin]/lib/compiler.js:79:16
Compiler.js:306 compile
[serve]/[webpack]/lib/Compiler.js:306:11
Compiler.js:631 hooks.afterCompile.callAsync.err
[serve]/[webpack]/lib/Compiler.js:631:15
Hook.js:154 AsyncSeriesHook.lazyCompileHook
[serve]/[tapable]/lib/Hook.js:154:20
Compiler.js:628 compilation.seal.err
[serve]/[webpack]/lib/Compiler.js:628:31
Hook.js:154 AsyncSeriesHook.lazyCompileHook
[serve]/[tapable]/lib/Hook.js:154:20
Compilation.js:1325 hooks.optimizeAssets.callAsync.err
[serve]/[webpack]/lib/Compilation.js:1325:35
Any ideas or pointers on what is missing?
Thanks
I don't know why, but the bat file doesn't want to be launched by path. Just go to the serve\bin directory and launch the bat from there, as in the sketch below.
The -p 7777 is unnecessary. The problem was that it can't find the path to your xraframe project: since you are in another directory, you have to go two directories up in your path to reach xraframe.
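A sketch of what that looks like (assuming the sample project sits two directories up from serve\bin; adjust the relative path to your layout):
cd serve\bin
serve.bat -n -d ..\..\gettingstarted\xraframe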
It seems as if you're attempting this on a Windows computer. The serve process for Windows is slightly different than on macOS.
Instead of the normal serve script, use the serve.bat executable.
serve\bin\serve.bat -n -d gettingstarted\xraframe -p 7777
https://docs.8thwall.com/web/#locally-from-windows

Why can't ld called from MSYS find an (existing static) library when arguments are read from a response @file containing backslashes?

This is basically the same issue as in mingw ld cannot find some library which exists in the search path and MinGW linker can't find MPICH2 libraries - and I'm aware that there are heaps of posts on Stack Overflow regarding static and dynamic linking with MinGW - but I couldn't find anything that explains how I can troubleshoot this.
I am building a project with a huge linker command line (via g++) on MinGW, in an MSYS2 shell (git-bash.exe). The process fails with, among others:
/z/path/to/../../../../i686-w64-mingw32/bin/ld.exe: cannot find -lssl
I add -Wl,--verbose to the g++ linker call (to be passed to ld), and I can see for the -L/z/path/to/libs/openssl/lib/mingw -lssl:
...
attempt to open /z/path/to/libs/openssl/lib/mingw/libssl.a failed
...
/z/path/to/libs/openssl/lib/mingw/ssl.dll failed
attempt to open /z/path/to/libs/openssl/lib/mingw\libssl.a failed
...
But this is weird, because the file exists?
$ file /z/path/to/libs/openssl/lib/mingw/libssl.a
/z/path/to/libs/openssl/lib/mingw/libssl.a: current ar archive
(... and it was built with the same compiler on the same machine)?
Weirdly, it once attempts to open with a forward slash (.../libssl.a) and once with a backslash (...\libssl.a) - but at least the first path checks out in a bash shell, as shown above?
It gets even worse if I try to specify -l:libssl.a -- or if I specify -L/z/path/to/libs/openssl/lib/mingw -Wl,-Bstatic -lssl -- instead; then all attempts to open are with a backslash:
...
attempt to open /z/path/to/scripts/other/build/openssl/build/mingw/lib\libssl.a failed
attempt to open /z/path/to/libs/openssl/lib/mingw\libssl.a failed
...
To top it all off, if I look it up manually through the command line using ld, it is found ?!:
$ ld -L/z/path/to/libs/openssl/lib/mingw -lssl --verbose
attempt to open Z:/path/to/libs/openssl/lib/mingw/libssl.dll.a failed
attempt to open Z:/path/to/libs/openssl/lib/mingw/ssl.dll.a failed
attempt to open Z:/path/to/libs/openssl/lib/mingw/libssl.a succeeded
Does anyone have an idea why this happens, and how can I get ld to finally find these libraries? Or rather - how can I troubleshoot, and understand why these libraries are not found, when they exist at the paths where ld tries to open them?
OK, found something more - not sure if this is a bug; but my problem is that I'm actually reading arguments from a file (otherwise I get g++: Argument list too long). So, to simulate that:
$ echo " -Wl,--verbose -L/z/path/to/libs/openssl/lib/mingw -lssl -lcrypto " > tmcd3
$ g++ #tcmd3 2>&1 | grep succeeded | grep ssl
# nothing
$ g++ `cat tcmd3` 2>&1 | grep succeeded | grep ssl
attempt to open Z:/path/to/libs/openssl/lib/mingw/libssl.a succeeded
attempt to open Z:/path/to/libs/openssl/lib/mingw/libcrypto.a succeeded
... it turns out that if the very same arguments are fed on the command line, the static library lookup succeeds - but if the arguments are read from a file through the @ at-sign, the static library lookup fails?! Unfortunately, I cannot use that on my actual project, since even with cat I'd still get g++: Argument list too long... So how can I fix this?
MSYS has special handling of directory arguments when they are used in the shell. It translates e.g. /<drive_letter>/blabla to the proper Windows-style path. This is to accommodate Unix programs that don't handle Z:-style directory roots.
What you see here is that MSYS isn't performing this translation for strings read from a file. When you think about it, it's very logical, but as you have experienced first-hand, it is also sometimes annoying.
Long story short: don't put Unix-style paths in files with command arguments. Instead, pass them through e.g. cygpath -w, which works in MSYS2 (which should be the MSYS that Git for Windows 2+ comes with), as sketched below.
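A minimal sketch of that, reusing the response-file example from the question (note that cygpath -w yields a backslash Z:\ path, while cygpath -m yields the forward-slash Z:/ form that the follow-up below found to work):
LIBDIR="$(cygpath -w /z/path/to/libs/openssl/lib/mingw)"   # or cygpath -m for the Z:/... form
echo " -Wl,--verbose -L$LIBDIR -lssl -lcrypto " > tcmd3
g++ @tcmd3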
OK, with some more experiments, I noticed that:
-L/z/path/to/libs/openssl/lib/mingw, the Unix path specification, tends to fail - while if we specify the same path starting with a Windows drive letter, that is:
-LZ:/path/to/libs/openssl/lib/mingw, then things work - also from an arguments file with the @ at-sign:
$ echo " -Wl,--verbose -LZ:/path/to/libs/openssl/lib/mingw -lssl -lcrypto " > tcmd3
$ g++ @tcmd3 2>&1 | grep succeeded | grep ssl
attempt to open Z:/path/to/libs/openssl/lib/mingw/libssl.a succeeded
attempt to open Z:/path/to/libs/openssl/lib/mingw/libcrypto.a succeeded
I guess that since the shell is MSYS2/git-bash.exe, entering full POSIX paths on the shell with /z/... is not a problem, because the shell will convert them - but in a file there is nothing to convert them, so we must use the Windows/MinGW convention to specify them...

Fortify, how to start analysis through command

How can we generate a Fortify report from the command line on Linux?
In the command, how can we include only certain folders or files in the analysis, and how can we specify where to store the report, etc.?
Please help....
Thanks,
Karthik
1. Step#1 (clean cache)
You need to plan the scan structure before starting:
scanid = 9999 (can be anything you like)
ProjectRoot = /local/proj/9999/
WorkingDirectory = /local/proj/9999/working
(this dir gets huge; you need to "rm -rf ./working && mkdir ./working" before every scan, or byte code piles up underneath this dir and consumes your hard disk fast)
log = /local/proj/9999/working/sca.log
source='/local/proj/9999/source/src/**.*'
classpath='local/proj/9999/source/WEB-INF/lib/*.jar; /local/proj/9999/source/jars/**.*; /local/proj/9999/source/classes/**.*'
./sourceanalyzer -b 9999 -Dcom.fortify.sca.ProjectRoot=/local/proj/9999/ -Dcom.fortify.WorkingDirectory=/local/proj/9999/working -logfile /local/proj/9999/working/sca.log -clean
It is important to specify ProjectRoot; if you don't override this system default, everything is put under your home directory (~/.fortify).
The sca.log location is very important: if Fortify does not find this file, it cannot find the byte code to scan.
You can alter the ProjectRoot and WorkingDirectory once and for all if you are the only user (FORTIFY_HOME/Core/config/fortify_sca.properties).
In that case, your command line would be ./sourceanalyzer -b 9999 -clean
2. Step#2 (translate source code to byte code)
nohup ./sourceanalyzer -b 9999 -verbose -64 -Xmx8000M -Xss24M -XX:MaxPermSize=128M -XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC -XX:+UseParallelGC -Dcom.fortify.sca.ProjectRoot=/local/proj/9999/ -Dcom.fortify.WorkingDirectory=/local/proj/9999/working -logfile /local/proj/9999/sca.log -source 1.5 -classpath '/local/proj/9999/source/WEB-INF/lib/*.jar:/local/proj/9999/source/jars/**/*.jar:/local/proj/9999/source/classes/**/*.class' -extdirs '/local/proj/9999/source/wars/*.war' '/local/proj/9999/source/src/**/*' &
Always run it as a Unix background job (&); if your session to the server times out, it will keep working.
-classpath: put all your known classpath entries here for Fortify to resolve the function calls. If a function is not found, Fortify skips the source code translation for that part, so it will not be scanned later. You will get poor scan quality but the FPR looks good (few issues reported). It is important to have all dependency jars in place.
-extdirs: put all directories/files you don't want to be scanned here.
The last section, the files between quotes, is your source.
-64 is to use 64-bit Java; if not specified, 32-bit is used and the max heap should be <1.3 GB (-Xmx1200M is safe).
The -XX: options have the same meaning as when launching an application server; use them only to control the class heap and garbage collection, i.e. to tweak performance.
-source is the Java version (1.5 to 1.8).
3. Step#3 (scan with rulepack, custom rules, filters, etc)
nohup ./sourceanalyzer -b 9999 -64 -Xmx8000M -Dcom.fortify.sca.ProjectRoot=/local/proj/9999 -Dcom.fortify.WorkingDirectory=/local/proj/9999/working -logfile /local/proj/9999/working/sca.log -scan -filter '/local/other/filter.txt' -rules '/local/other/custom/*.xml' -f '/local/proj/9999.fpr' &
-filter: the file name must be filter.txt; any rule GUID listed in this file will not be reported.
-rules: these are the custom rules you wrote. The HP rulepacks are in the FORTIFY_HOME/Core/config/rules directory.
-scan: keyword to tell the Fortify engine to scan the existing scan id. You can skip step #2 and only do step #3 if you did not change code and just want to play with different filters/custom rules.
4. Step#4 Generate PDF from the FPR file (if required)
./ReportGenerator -format pdf -f '/local/proj/9999.pdf' -source '/local/proj/9999.fpr'

Monitoring URLs with Nagios

I'm trying to monitor actual URLs, and not only hosts, with Nagios, as I operate a shared server with several websites, and I don't think it's enough just to monitor the basic HTTP service (I'm including, at the very bottom of this question, a small explanation of what I'm envisioning).
(Side note: please note that I have Nagios installed and running inside a chroot on a CentOS system. I built nagios from source, and have used yum to install into this root all dependencies needed, etc...)
I first found check_url, but after installing it into /usr/lib/nagios/libexec, I kept getting a "return code of 255 is out of bounds" error. That's when I decided to start writing this question (but wait! There's another plugin I decided to try first!).
After reviewing This Question, which had practically the same problem I'm having with check_url, I decided to open a new question on the subject because
a) I'm not using NRPE with this check
b) I tried the suggestions made on the earlier question to which I linked, but none of them worked. For example...
./check_url some-domain.com | echo $0
returns "0" (which indicates the check was successful)
I then followed the debugging instructions on Nagios Support to create a temp file called debug_check_url, and put the following in it (to then be called by my command definition):
#!/bin/sh
echo `date` >> /tmp/debug_check_url_plugin
echo $* >> /tmp/debug_check_url_plugin
/usr/local/nagios/libexec/check_url $*
Assuming I'm not in "debugging mode", my command definition for running check_url is as follows (inside command.cfg):
# 'check_url' command definition
define command{
    command_name    check_url
    command_line    $USER1$/check_url $url$
}
(Incidentally, you can also view what I was using in my service config file at the very bottom of this question)
Before publishing this question, however, I decided to give 1 more shot at figuring out a solution. I found the check_url_status plugin, and decided to give that one a shot. To do that, here's what I did:
mkdir /usr/lib/nagios/libexec/check_url_status/
downloaded both check_url_status and utils.pm
Per the user comment / review on the check_url_status plugin page, I changed "lib" to the proper directory of /usr/lib/nagios/libexec/.
Ran the following:
./check_url_status -U some-domain.com
When I ran the above command, I kept getting the following error:
bash-4.1# ./check_url_status -U mydomain.com
Can't locate utils.pm in @INC (@INC contains: /usr/lib/nagios/libexec/ /usr/local/lib/perl5 /usr/local/share/perl5 /usr/lib/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib/perl5 /usr/share/perl5) at ./check_url_status line 34.
BEGIN failed--compilation aborted at ./check_url_status line 34.
So at this point, I give up, and have a couple of questions:
Which of these two plugins would you recommend? check_url or check_url_status?
(After reading the description of check_url_status, I feel that this one might be the better choice. Your thoughts?)
Now, how would I fix my problem with whichever plugin you recommended?
At the beginning of this question, I mentioned I would include a small explanation of what I'm envisioning. I have a file called services.cfg which is where I have all of my service definitions located (imagine that!).
The following is a snippet of my service definition file, which I wrote to use check_url (because at that time, I thought everything worked). I'll build a service for each URL I want to monitor:
###
# Monitoring Individual URLs...
#
###
define service{
    host_name               {my-shared-web-server}
    service_description     URL: somedomain.com
    check_command           check_url!somedomain.com
    max_check_attempts      5
    check_interval          3
    retry_interval          1
    check_period            24x7
    notification_interval   30
    notification_period     workhours
}
I was making things WAY too complicated.
The built-in / installed by default plugin, check_http, can accomplish what I wanted and more. Here's how I have accomplished this:
My Service Definition:
define service{
    host_name               myers
    service_description     URL: my-url.com
    check_command           check_http_url!http://my-url.com
    max_check_attempts      5
    check_interval          3
    retry_interval          1
    check_period            24x7
    notification_interval   30
    notification_period     workhours
}
My Command Definition:
define command{
    command_name    check_http_url
    command_line    $USER1$/check_http -I $HOSTADDRESS$ -u $ARG1$
}
A better way to monitor URLs is WebInject, which can be used with Nagios.
The problem below is because you don't have the Perl utils package; try installing it.
bash-4.1# ./check_url_status -U mydomain.com Can't locate utils.pm in @INC (@INC contains:
You can make a script plugin. It is easy; you only have to check the URL with something like:
`curl -Is $URL -k | grep HTTP | cut -d ' ' -f2`
$URL is what you pass to the script command as a parameter.
Then check the result: if the code is greater than 399 you have a problem, otherwise everything is OK. Then set the right exit code and the message for Nagios, as in the sketch below.
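A minimal sketch of such a plugin (the Nagios exit-code convention is 0=OK, 2=CRITICAL, 3=UNKNOWN; adapt the output text to your needs):
#!/bin/bash
# minimal sketch of a Nagios check built around the curl one-liner above
URL="$1"
CODE=$(curl -Is "$URL" -k | grep HTTP | cut -d ' ' -f2)

if [ -z "$CODE" ]; then
    echo "UNKNOWN - no HTTP response from $URL"
    exit 3
elif [ "$CODE" -gt 399 ]; then
    echo "CRITICAL - $URL returned HTTP $CODE"
    exit 2
else
    echo "OK - $URL returned HTTP $CODE"
    exit 0
fi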
