We need to use CAN devices with a Coral Dev Board. We have found that the Mendel image available for download does not have the drivers enabled.
We followed the instructions to download the kernel source files here: https://coral.googlesource.com/docs/+/refs/heads/master/GettingStarted.md
We found that the CAN driver source files, Makefiles, and Kconfig files are present in the kernel source, for example source files:
linux-imx/drivers/net/can/usb/gs_usb.c
linux-imx/drivers/net/can/spi/mcp251x.c
linux-imx/include/linux/can/platform/mcp251x.h
And the Makefile for gs_usb.c:
# SPDX-License-Identifier: GPL-2.0
#
# Makefile for the Linux Controller Area Network USB drivers.
#
obj-$(CONFIG_CAN_EMS_USB) += ems_usb.o
obj-$(CONFIG_CAN_ESD_USB2) += esd_usb2.o
obj-$(CONFIG_CAN_GS_USB) += gs_usb.o
obj-$(CONFIG_CAN_KVASER_USB) += kvaser_usb.o
obj-$(CONFIG_CAN_PEAK_USB) += peak_usb/
obj-$(CONFIG_CAN_8DEV_USB) += usb_8dev.o
obj-$(CONFIG_CAN_MCBA_USB) += mcba_usb.o
Looking through the Makefiles and Kconfig files, everything seemed to be in order, with the possible exception of one line in the file:
linux-imx/drivers/net/Kconfig
We added:
source "drivers/net/can/Kconfig"
We have also tried adding the following enable flags to the defconfig files:
CONFIG_CAN=y
CONFIG_CAN_RAW=y
CONFIG_CAN_BCM=y
CONFIG_CAN_DEV=y
CONFIG_CAN_AT91=m
CONFIG_CAN_RCAR=m
CONFIG_CAN_XILINXCAN=y
CONFIG_CAN_MCP251X=y
CONFIG_CAN_GS_USB=y
So far we have had no success: the CAN drivers are not being compiled into the image and are not present on the board after we flash it. If anyone has any suggestions, we are all ears.
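A quick sanity check, assuming the build produces a merged kernel .config somewhere in the output tree (the exact path depends on the build setup and is only a placeholder here), is to grep it for the CAN symbols before flashing:
grep CONFIG_CAN path/to/kernel-build/.config
If the defconfig changes are being dropped during the build, the symbols will be missing or show up as "# CONFIG_CAN_GS_USB is not set" here.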
For anyone interested, I solved it as follows:
1) Enable CAN in packages/linux-imx/debian/defconfig:
CONFIG_CAN=m
CONFIG_CAN_RAW=m
CONFIG_CAN_DEV=m
CONFIG_CAN_BCM=m
CONFIG_CAN_GW=m
CONFIG_PROC_FS=m
2) Enable the USB and SPI CAN drivers in linux-imx/arch/arm64/configs/defconfig:
CONFIG_CAN_GS_USB=m
CONFIG_CAN_MCP251X=m
3) Add the following line to linux-imx/drivers/net/Kconfig:
source "drivers/net/can/Kconfig"
4) Follow the instructions to set up repo and prepare to check out the repository:
https://coral.googlesource.com/docs/+/refs/heads/master/GettingStarted.md
5) Create a working directory to hold the source code and cd into it.
6) Pull the repo and make the changes to the files listed in steps 1, 2, and 3:
repo init -u https://coral.googlesource.com/manifest
repo sync -j$(nproc)
7) Source the build files:
source build/setup.sh
8) Compile the code:
m
9) Connect the board.
10) Change to the output directory:
j product
11) Update the kernel:
mdt install linux-image-4.14.98-imx_12-4_arm64.deb
12) Reboot the board.
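To confirm the drivers made it onto the board, a quick check after the reboot (a sketch; it assumes can-utils is installed for candump and that the adapter comes up as can0):
lsmod | grep -e gs_usb -e mcp251x -e can_dev
ip link set can0 up type can bitrate 500000
candump can0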
Related
What I wanted
I want to reduce my Docker image size by removing the cloned source code from the 'src' directory of my workspace, following this suggestion. To that end, I have just deployed my source code using the following commands:
# install package
sudo catkin config --install
catkin_make
catkin_make install
The problem
As a result, an install directory was generated along with many other library folders. I then navigated into the /share/ folder and tried to roslaunch one of my launch files. This is the error I got:
ERROR: cannot launch node of type [oxford_gps_eth/gps_node]: oxford_gps_eth
ROS path [0]=/opt/ros/melodic/share/ros
ROS path [1]=/home/ubuntu/catkin_ws/src
ROS path [2]=/opt/ros/melodic/share
No processes to monitor
shutting down processing monitor...
Expectation
I was able to launch my node, even from the /install/share/ directory, without removing the cloned source code in the 'src' directory.
I wanted to launch my nodes after building and then remove my source code so that I can make use of my image.
You have to source install/setup.bash. This script sets up your environment variables, such as PYTHONPATH, and is what roslaunch uses to find packages.
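For example (a sketch; the workspace path matches the error output above, but the launch file name is illustrative, substitute your own):
cd ~/catkin_ws
source install/setup.bash
roslaunch oxford_gps_eth gps.launch
Sourcing the install space instead of the devel space is what lets roslaunch resolve packages from install/ even after src/ has been removed.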
I have a kind of weird problem. I'm currently messing around with the VRX simulator, which simulates an unmanned surface vehicle.
For the installation I followed the guide on https://bitbucket.org/osrf/vrx/wiki/tutorials/SystemSetupInstall.
Then I modified some of the files and tried to rebuild the project.
This was the point when I noticed that it always used the "old" version of my simulation within Gazebo.
From then on, no matter what I did (I even deleted the whole catkin workspace folder), ROS somehow always managed to launch the original version of my simulation when I used roslaunch, even with no build or src folder existing.
roslaunch vrx_gazebo sandisland.launch
So my question is: how can I get rid of my simulation/model, and where does ROS/Gazebo cache my simulation?
You most probably installed the package with the command from the tutorial sudo apt install ros-melodic-vrx-gazebo. So the package launched with roslaunch vrx_gazebo sandisland.launch was not in your catkin workspace. If you want to get rid of it you can uninstall it with sudo apt remove ros-melodic-vrx-gazebo. But this is not strictly necessary.
There are several ways to find out where some ros package is located, try running some of these commands:
rospack find vrx_gazebo will show you where the package used is located
roscd vrx_gazebo will take you to the folder where it is installed, something like
/opt/ros/melodic/share/vrx_gazebo
If you also followed the tutorial for installing from source code, then the issue was most likely not sourcing the built packages. The last line of the guide is a bit misleading. The line "Remember to run this command every time you open a new terminal." is meant to refer to the command source ~/vrx_ws/devel/setup.bash
Whether the installed package or the package built from source is used depends on the order in which they are listed in the environment variable ROS_PACKAGE_PATH. This variable is modified by both source /opt/ros/melodic/setup.bash and source ~/vrx_ws/devel/setup.bash, so have a look at the variable after each step with printenv | grep ROS or echo $ROS_PACKAGE_PATH. In theory, if you source your terminal in the order in which I listed the source commands, the package built from source should be used; you can verify this with the rospack find ... and roscd ... commands mentioned earlier.
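For example, in a fresh terminal (a sketch of the sourcing order described above):
source /opt/ros/melodic/setup.bash
source ~/vrx_ws/devel/setup.bash
echo $ROS_PACKAGE_PATH
rospack find vrx_gazebo
After the second source command, the workspace path should appear before /opt/ros/melodic/share in ROS_PACKAGE_PATH, and rospack find should point into ~/vrx_ws rather than /opt/ros/melodic/share/vrx_gazebo.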
In the end it is probably easier to add the sourcing commands to your .bashrc file so you do not forget to source new terminals, as mentioned in the ROS installation tutorial. You can add the sourcing of the workspace to the same file; just be aware that you would need to change the file should you want to use a different workspace.
http://wiki.ros.org/melodic/Installation/Ubuntu#melodic.2BAC8-Installation.2BAC8-DebEnvironment.Environment_setup
relevant command from the tutorial:
echo "source /opt/ros/melodic/setup.bash" >> ~/.bashrc
you could do the same for the workspace:
echo "source ~/vrx_ws/devel/setup.bash" >> ~/.bashrc
And after running those commands run exec bash to get the changes into the current terminal. All future terminals will have those commands already loaded.
I followed the Hyperledger Fabric documentation to install and configure it on Windows 10. However, when I run the command ./byfn.sh -m generate for the first-network sample application, I get the following error:
I have gone through all the Stack Overflow questions regarding this and made sure the following steps are done:
I have set the $PATH variable correctly to include the bin folder.
I have downloaded the platform-specific binaries, and my bin folder looks like this:
I have doubts about the following steps:
I have installed Docker for Windows and was able to verify the Docker installation by running the hello-world image. However, I have not shared any of my local drives in Docker, and I am not sure whether this is the cause of the error.
Please note that this is my first question on Stack Overflow. Forgive me for any mistakes/redundancies. Any help is greatly appreciated.
I'd suggest making sure that you run the script to download / install the binaries and images from within the fabric-samples directory.
$PATH is exported every time you run the byfn.sh script; confirm that the path configuration in byfn.sh is correct and points to your correct bin location:
# prepending $PWD/../bin to PATH to ensure we are picking up the correct binaries
# this may be commented out to resolve installed version of tools if desired
export PATH=${PWD}/../../bin:${PWD}:$PATH
export FABRIC_CFG_PATH=${PWD}
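A quick way to confirm the binaries are where the script expects them (a sketch; run it from the first-network directory and adjust the relative path to match the export above):
ls ../bin
../bin/cryptogen version
./byfn.sh -m generate
The ls should list cryptogen, configtxgen, peer and the other platform binaries; if it does not, the PATH export above cannot find them either.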
I'm trying to write a simple firewall that can drop packets by filter. For this purpose I'm using WinDivert. I load WinDivert.dll and add WinDivert.lib and WinDivert32.sys to the project folder. Then I try to use WinDivertOpen() to install the WinDivert driver. The result is always negative.
What am I doing wrong, and how can I successfully install the driver? Code example.
I solved this problem in the following way.
1) In Project->Properties->Linker->Input->Additional Dependencies, set the path to WinDivert.lib.
2) Moved the files WinDivert.dll and WinDivert32.sys to the project's root folder.
3) Included windivert.h in my project.
4) Put my PC into the TESTSIGNING boot configuration (I used the Windows Driver Kit 7.1.0 for this).
5) Restarted the PC.
If these steps do not help, you should build WinDivert from source with the Windows Driver Kit 7.1.0 and Visual Studio 12 or higher, as described here.
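For step 4, if you would rather not install the WDK just to flip the boot flag, test signing can also be enabled from an elevated command prompt and takes effect after a reboot:
bcdedit /set testsigning on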
I'm attempting to compile libhdfs (a native shared library that allows external apps to interface with HDFS). It's one of the few steps I have to take to mount Hadoop's HDFS using FUSE.
The compilation seems to go well for a while but finishes with "BUILD FAILED" and the following problems summary:
commons-logging#commons-logging;1.0.4: configuration not found in commons-logging#commons-logging;1.0.4: 'master'. It was required from org.apache.hadoop#Hadoop;working#btsotbal800 commons-logging
log4j#log4j;1.2.15: configuration not found in log4j#log4j;1.2.15: 'master'. It was required from org.apache.hadoop#Hadoop;working#btsotbal800 log4j
Now, I have a couple of questions about this, since the book I'm using to do this doesn't go into any detail about what these things really are.
Are commons-logging and log4j libraries which Hadoop uses?
These libraries seem to live in $HADOOP_HOME/lib. They are jar files though. Should I extract them, try to change some configurations, and then repack them back into a jar?
What does 'master' in the errors above mean? Are there different versions of the libraries?
Thank you in advance for ANY insight you can provide.
If you are using Cloudera Hadoop (CDH3u2), you don't need to build the fuse project.
You can find the binary (libhdfs.so*) inside the directory $HADOOP_HOME/c++/lib
Before doing the fuse mount, update "$HADOOP_HOME/contrib/fuse-dfs/src/fuse_dfs_wrapper.sh" as follows:
HADOOP_HOME/contrib/fuse-dfs/src/fuse_dfs_wrapper.sh
#!/bin/bash
for f in ${HADOOP_HOME}/hadoop*.jar ; do
export CLASSPATH=$CLASSPATH:$f
done
for f in ${HADOOP_HOME}/lib/*.jar ; do
export CLASSPATH=$CLASSPATH:$f
done
export PATH=$HADOOP_HOME/contrib/fuse-dfs:$PATH
export LD_LIBRARY_PATH=$HADOOP_HOME/c++/lib:/usr/lib/jvm/java-6-sun-1.6.0.26/jre/lib/amd64/server/
# pass all wrapper arguments through to fuse_dfs
fuse_dfs "$@"
LD_LIBRARY_PATH contains the list of directories here: "$HADOOP_HOME/c++/lib" contains libhdfs.so, and "/usr/lib/jvm/java-6-sun-1.6.0.26/jre/lib/amd64/server/" contains libjvm.so.
# Modify /usr/lib/jvm/java-6-sun-1.6.0.26/jre/lib/amd64/server/ to match your JAVA_HOME.
Use the following command to mount HDFS:
fuse_dfs_wrapper.sh dfs://localhost:9000/ /home/510600/mount1
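To check that the mount worked (a sketch, reusing the mount point from the example above):
mount | grep fuse_dfs
ls /home/510600/mount1
df -h /home/510600/mount1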
To unmount, use the following command:
fusermount -u /home/510600/mount1
I tested fuse only in Hadoop pseudo-distributed mode, not in cluster mode.