ERROR: cannot launch node of type [darknet_ros/darknet_ros]

dishita@dishita-VirtualBox:~/catkin_ws/src/darknet_ros$ roslaunch darknet_ros darknet_ros.launch
... logging to /home/dishita/.ros/log/a54fc4ec-3828-11ed-8e10-2d44a183ac97/roslaunch-dishita-VirtualBox-7714.log
Checking log directory for disk usage. This may take a while.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
started roslaunch server http://dishita-VirtualBox:37933/
SUMMARY
PARAMETERS
/darknet_ros/actions/camera_reading/name: /darknet_ros/chec...
/darknet_ros/config_path: /home/dishita/cat...
/darknet_ros/image_view/enable_console_output: True
/darknet_ros/image_view/enable_opencv: True
/darknet_ros/image_view/wait_key_delay: 1
/darknet_ros/publishers/bounding_boxes/latch: False
/darknet_ros/publishers/bounding_boxes/queue_size: 1
/darknet_ros/publishers/bounding_boxes/topic: /darknet_ros/boun...
/darknet_ros/publishers/detection_image/latch: True
/darknet_ros/publishers/detection_image/queue_size: 1
/darknet_ros/publishers/detection_image/topic: /darknet_ros/dete...
/darknet_ros/publishers/object_detector/latch: False
/darknet_ros/publishers/object_detector/queue_size: 1
/darknet_ros/publishers/object_detector/topic: /darknet_ros/foun...
/darknet_ros/subscribers/camera_reading/queue_size: 1
/darknet_ros/subscribers/camera_reading/topic: /webcam/image_raw
/darknet_ros/weights_path: /home/dishita/cat...
/darknet_ros/yolo_model/config_file/name: yolov2-tiny.cfg
/darknet_ros/yolo_model/detection_classes/names: ['person', 'bicyc...
/darknet_ros/yolo_model/threshold/value: 0.3
/darknet_ros/yolo_model/weight_file/name: yolov2-tiny.weights
/rosdistro: noetic
/rosversion: 1.15.14
NODES
/
darknet_ros (darknet_ros/darknet_ros)
auto-starting new master
process[master]: started with pid [7722]
ROS_MASTER_URI=http://localhost:11311
setting /run_id to a54fc4ec-3828-11ed-8e10-2d44a183ac97
process[rosout-1]: started with pid [7732]
started core service [/rosout]
ERROR: cannot launch node of type [darknet_ros/darknet_ros]: Cannot locate node of type [darknet_ros] in package [darknet_ros]. Make sure file exists in package path and permission is set to executable (chmod +x)
I've sourced the bash file and made the file executable using chmod +x ~/catkin_ws/src/darknet_ros.
I am still getting this error; please help me out.

You have to build the darknet_ros package again. Use:
catkin build darknet_ros
Then source your workspace setup file again (e.g. source ~/catkin_ws/devel/setup.bash).
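For context, a minimal sketch of that rebuild-and-source sequence, assuming the default ~/catkin_ws layout from the question:

cd ~/catkin_ws
catkin build darknet_ros   # rebuild the package
source devel/setup.bash    # re-source the workspace so roslaunch can find the node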

chmod +x ~/catkin_ws/src/darknet_ros is not enough to make the ROS node file executable.
You have to locate the source file of the node, which probably is darknet_ros/ros/yolo_object_detector_node.cpp, and make this file executable.
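As a quick check (a sketch, assuming the default catkin workspace layout): catkin places built C++ node executables under devel/lib/&lt;package&gt;, so you can verify that the node binary exists there and carries the executable bit:

ls -l ~/catkin_ws/devel/lib/darknet_ros/darknet_ros     # should exist after a successful build
chmod +x ~/catkin_ws/devel/lib/darknet_ros/darknet_ros  # only needed if the x bit is missing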

Related

ERROR: cannot launch node of type [camera_driver/realsense2_driver]

I am trying to perform hand-eye calibration, following
https://github.com/lixiny/handeye-calibration-ros
When I run:
roslaunch camera_driver realsense_driver.launch
I get this error:
Checking log directory for disk usage. This may take a while.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
started roslaunch server http://lambda-quad:35657/
SUMMARY
========
PARAMETERS
* /realsense2_driver/resHeight: 480
* /realsense2_driver/resWidth: 640
* /rosdistro: noetic
* /rosversion: 1.15.14
NODES
/
realsense2_driver (camera_driver/realsense2_driver)
auto-starting new master
process[master]: started with pid [5110]
ROS_MASTER_URI=http://localhost:11311
setting /run_id to 173a7e54-0114-11ed-a3fd-d5ee6c9da619
process[rosout-1]: started with pid [5143]
started core service [/rosout]
ERROR: cannot launch node of type [camera_driver/realsense2_driver]: Cannot locate node of type [realsense2_driver] in package [camera_driver]. Make sure file exists in package path and permission is set to executable (chmod +x)
Please help

Getting this error while trying to run Cube.js

I'm getting this error while trying to run Cube.js with the default command from the Getting Started docs. I've started it in a folder and I'm running it in Docker.
Warning. There is no cube.js file. Continue with environment variables
πŸ”₯ Cube Store (0.28.31) is assigned to 3030 port.
Warning. Option apiSecret is required in dev mode. Cube.js has generated it as e3b8c5a35fe378f4d481ada777e5f3c4
πŸ”“ Authentication checks are disabled in developer mode. Please use NODE_ENV=production to enable it.
πŸ¦… Dev environment available at http://localhost:4000
πŸš€ Cube.js server (0.28.31) is listening on 4000
2021-09-03 15:06:01,512 INFO [cubestore::http::status] <pid:17> Serving status probes at 0.0.0.0:3031
2021-09-03 15:06:01,515 INFO [cubestore::metastore] <pid:17> Using existing metastore in /cube/conf/.cubestore/data/metastore
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Error { message: "IO error: While fsync: a directory: Invalid argument" }', /project/cubestore/src/metastore/mod.rs:1542:40
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Cube Store Start Error: undefined
I guess it's a corrupted metastore, because it was shut down incorrectly on your local machine. Could you please try dropping the .cubestore directory?
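For reference, a sketch of that cleanup, assuming the metastore path shown in the log above (this removes the local metastore; Cube Store initializes a fresh one on the next start):

rm -rf /cube/conf/.cubestore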

Brew postinstall mysql@5.7 complaining about data directory not empty when it is empty

I'm having a lot of trouble installing MySQL 5.7 on macOS Mojave (I ran 'brew install mysql@5.7').
On the initial install, I got a message saying postinstall was not completed successfully (please see the message below).
So, after I deleted everything in the directory /usr/local/var/mysql (which MySQL says is not empty), I STILL get the same message when re-running the postinstall command... (which is quite annoying: it seems MySQL is populating the data dir and then complaining that it is not empty?!)
[08:02:48][~/tmp]# brew postinstall mysql@5.7
==> Postinstalling mysql@5.7
==> /usr/local/Cellar/mysql@5.7/5.7.28/bin/mysqld --initialize-insecure --user=gert --basedir=/usr/local/Cellar/mysql@5.7/5.7.28 --datadir=/usr/local/var/my
Last 15 lines from /Users/gert/Library/Logs/Homebrew/mysql@5.7/post_install.01.mysqld: 2019-12-09 08:03:39 +0200
/usr/local/Cellar/mysql@5.7/5.7.28/bin/mysqld
--initialize-insecure
--user=gert
--basedir=/usr/local/Cellar/mysql@5.7/5.7.28
--datadir=/usr/local/var/mysql
--tmpdir=/tmp
2019-12-09T06:03:39.151987Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2019-12-09T06:03:39.154025Z 0 [ERROR] --initialize specified but the data directory has files in it. Aborting.
2019-12-09T06:03:39.154074Z 0 [ERROR] Aborting
Trying to start MySQL as root gives this error:
[08:04:41][~/tmp]# sudo /usr/local/opt/mysql@5.7/bin/mysql.server start
Password: Starting MySQL ..... ERROR! The server quit without updating PID file (/var/run/mysqld/mysqld.pid).
I've been banging my head against the wall for days now, trying to follow the Stack Overflow posts on the MySQL server startup error 'The server quit without updating PID file', none of which is working...
My my.cnf:
[mysqld]
# Only allow connections from localhost
#bind-address = 127.0.0.1
#SO posts said to comment out the above ...
pid-file = /var/run/mysqld/mysqld.pid #Checked, this folder + file exists, with write permissions
Try using a data dir away from the MySQL directory, i.e. if MySQL is in /usr/local/mysql, use /var/data as the data dir.
root@photon [ /var ]# /usr/local/mysql/bin/mysqld --initialize-insecure --user=mysql --datadir=/var/data
2020-02-22T21:42:27.121230Z 0 [System] [MY-013169] [Server] /usr/local/mysql/bin/mysqld (mysqld 8.0.19) initializing of server in progress as process 820
2020-02-22T21:42:35.018238Z 5 [Warning] [MY-010453] [Server] root@localhost is created with an empty password! Please consider switching off the --initialize-insecure option.
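If you go this route, the server also needs to be pointed at the new location on every start; a sketch of the corresponding my.cnf entry, reusing the illustrative /var/data path from above:

[mysqld]
datadir = /var/data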

Why does Hadoop only launch a local job by default?

I have written my own Hadoop program, and I can run it in pseudo-distributed mode on my own laptop. However, when I put the program on the cluster (which can run the Hadoop example jars), it launches a local job by default, even though I indicate the HDFS file paths. The output is below; any suggestions?
./hadoop -jar MyRandomForest_oob_distance.jar hdfs://montana-01:8020/user/randomforest/input/genotype1.txt hdfs://montana-01:8020/user/randomforest/input/phenotype1.txt hdfs://montana-01:8020/user/randomforest/output1_distance/ hdfs://montana-01:8020/user/randomforest/input/genotype101.txt hdfs://montana-01:8020/user/randomforest/input/phenotype101.txt 33 500 1
12/03/16 16:21:25 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
12/03/16 16:21:25 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
12/03/16 16:21:25 INFO mapred.JobClient: Running job: job_local_0001
12/03/16 16:21:25 INFO mapred.MapTask: io.sort.mb = 100
12/03/16 16:21:25 INFO mapred.MapTask: data buffer = 79691776/99614720
12/03/16 16:21:25 INFO mapred.MapTask: record buffer = 262144/327680
12/03/16 16:21:25 WARN mapred.LocalJobRunner: job_local_0001
java.io.FileNotFoundException: File /user/randomforest/input/genotype1.txt does not exist.
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:361)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:245)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:125)
at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:283)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:356)
at Data.Data.loadData(Data.java:103)
at MapReduce.DearMapper.loadData(DearMapper.java:261)
at MapReduce.DearMapper.setup(DearMapper.java:332)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:142)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:621)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:177)
12/03/16 16:21:26 INFO mapred.JobClient: map 0% reduce 0%
12/03/16 16:21:26 INFO mapred.JobClient: Job complete: job_local_0001
12/03/16 16:21:26 INFO mapred.JobClient: Counters: 0
Total Running time is: 1 secs
LocalJobRunner has been chosen because your configuration most probably has the mapred.job.tracker property set to local, or not set at all (in which case the default is local). To check, go to "wherever you extracted/installed hadoop"/etc/hadoop/ and see if the file mapred-site.xml exists (for me it did not; a file called mapred-site.xml.template was there). In that file (or create it if it doesn't exist) make sure it has the following property:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
See the source for org.apache.hadoop.mapred.JobClient.init(JobConf)
What is the value of this configuration property in the hadoop configuration on the machine you are submitting this from? Also confirm that the hadoop executable you are running references this configuration (and that you don't have 2+ installations configured differently) - type which hadoop and trace any symlinks you come across.
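For that check, a short sketch (assumes readlink is available, as on most Linux systems):

which hadoop                  # which hadoop executable is first on the PATH
readlink -f "$(which hadoop)" # resolve any symlink chain to the real binary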
Alternatively you can override this when you submit your job, if you know the JobTracker host and port number using the -jt option:
hadoop jar MyRandomForest_oob_distance.jar -jt hostname:port hdfs://montana-01:8020/user/randomforest/input/genotype1.txt hdfs://montana-01:8020/user/randomforest/input/phenotype1.txt hdfs://montana-01:8020/user/randomforest/output1_distance/ hdfs://montana-01:8020/user/randomforest/input/genotype101.txt hdfs://montana-01:8020/user/randomforest/input/phenotype101.txt 33 500 1
If you're using Hadoop 2 and your job is running locally instead of on the cluster, ensure that you have setup mapred-site.xml to contain the mapreduce.framework.name property with a value of yarn. You also need to set up an aux-service in yarn-site.xml
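That aux-service entry typically looks like the following in yarn-site.xml; this is the standard MRv2 shuffle configuration, not something taken from the question:

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>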
Check out the Cloudera Hadoop 2 operator migration blog for more information.
I had the same problem: every MapReduce v2 (MRv2) / YARN task ran only with the mapred.LocalJobRunner:
INFO mapred.LocalJobRunner: Starting task: attempt_local284299729_0001_m_000000_0
The ResourceManager and NodeManagers were accessible, and mapreduce.framework.name was set to yarn.
Setting the HADOOP_MAPRED_HOME before executing the job fixed the problem for me.
export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce
Cheers, Dan

CVS error - CVS exited with error code 1

I have been seeing this error for quite some time now.
I am running an Ant build on Cygwin, which in turn runs on Windows XP.
The (bad) resolution I found was to delete my gcct/first directory and run the Ant build again (which runs from another directory). It then runs successfully, but when I modify some code under gcct/first, I do not want to have to delete it because of this error.
I did see this link. The resolution there does not apply to me, since I do not have .cvspass defined anywhere in build.xml.
C:\svn\CEL_v3681\buildCore.xml:1883: cvs exited with error code 1
Command line was [Executing 'cvs' with arguments:
'checkout'
'-A'
'-rfirst_v2_126'
'gcct/first'
The ' characters around the executable and arguments are
not part of the command.
environment:
ALLUSERSPROFILE=C:\Documents and Settings\All Users
ANT_HOME=C:/Apps/Apache/apache-ant-1.7.0
APPDATA=C:\Documents and Settings\shankarc\Application Data
CLASSPATH=./;C:/Program Files/Java/jre1.5.0_07/lib/ext/QTJava.zip
COMMONPROGRAMFILES=C:\Program Files\Common Files
COMPUTERNAME=NYKPWM2035798
COMSPEC=C:\WINNT\system32\cmd.exe
CUSTPROF=Roaming700Live
CVSROOT=:pserver:shankarc@amcvs2.lehman.com:/home/eqcvs/cmte
CVS_RSH=/bin/ssh
FP_NO_HOST_CHECK=NO
HOME=C:\Apps\CYGWIN\home\shankarc
HOMEDRIVE=F:
HOMEPATH=\
HOSTNAME=nykpwm2035798
IDEA_PROPERTIES=C:\Documents and Settings\shankarc\idea.properties
INFOPATH=/usr/local/info:/usr/share/info:/usr/info:
JAVA_HOME=C:/Program Files/Java/jdk1.6.0_21/
JDK_HOME=C:\Program Files\Java\jdk1.6.0_21\
LOGONSERVER=\\NYKPSM00069
MANPATH=/usr/local/man:/usr/share/man:/usr/man::/usr/ssl/man
NUMBER_OF_PROCESSORS=2
OS=Windows_NT
PATH=C:\Apps\CYGWIN\usr\local\bin;C:\Apps\CYGWIN\bin;C:\Apps\CYGWIN\bin;C:\Apps\CYGWIN\usr\X11R6\bin;C:\Apps\Apache\apache-ant-1.7.0\bin;C:\Program Files\Java\jdk1.6.0_21\bin\;C:\Apps\CYGWIN\bin;C:\Program Files\VisualSVN Server\bin;C:\Program Files\Sudowin\Clients\Console;C:\Program Files\Fortify Software\Fortify 360 v2.5.0\bin
PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.PSC1
PRINTER=\\NYKPSM04020\NYKLPR1301-03-03C05
PROCESSOR_ARCHITECTURE=x86
PROCESSOR_IDENTIFIER=x86 Family 6 Model 15 Stepping 6, GenuineIntel
PROCESSOR_LEVEL=6
PROCESSOR_REVISION=0f06
PROFGROUP=FONP
PROGRAMFILES=C:\Program Files
PROMPT=$P$G
PWD=/cygdrive/c/svn/CEL_v3681/gcct/cel
QHOME=c:\q
QTJAVA=C:\Program Files\Java\jre1.5.0_07\lib\ext\QTJava.zip
SESSIONNAME=Console
SHLVL=1
SITECODE=NYK
SITEIDENT=NYK
SVN_ASP_DOT_NET_HACK=1
SYSTEMDRIVE=C:
SYSTEMROOT=C:\WINNT
TEMP=C:\TEMP
TERM=cygwin
TMP=C:\TEMP
UATDATA=C:\WINNT\system32\CCM\UATData\D9F8C395-CAB8-491d-B8AC-179A1FE1BE77
USER=shankarc
USERDNSDOMAIN=INTRANET.BARCAPINT.COM
USERDOMAIN=INTRANET
USERNAME=shankarc
USERPROFILE=C:\Documents and Settings\shankarc
WINDIR=C:\WINNT
CVS_PASSFILE=C:\Apps\CYGWIN\home\shankarc\.cvspass]
Total time: 58 seconds
How do I resolve this?
I had the same issue and found that, even though I was not using .cvspass, I did have a build property cvs.pass set, which needed to be reset to OVERRIDE to work, depending on how you set up your CVS access (though yours looked similar from your post). This needed to be changed in both build.properties and .build.properties. Hope this helps!
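For illustration, a sketch of the relevant entry, using the property name and value from the answer above:

# in build.properties and .build.properties
cvs.pass=OVERRIDE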
