Beaglebone P8_3 to P8_10 don't toggle using Adafruit_BBIO - beagleboneblack

After successfully installing Adafruit_BBIO on my BeagleBone Black Rev C (4 GB) running Ubuntu and SimpleCV, I wrote a Python script that toggles 16 LEDs on pins P8_3 through P8_18. The first 8 GPIO pins (P8_3 to P8_10) don't toggle according to my code, but the rest (P8_11 to P8_18) toggle perfectly! Any help, please?

Sounds like you don't have the pin multiplexers configured to their GPIO modes. The pinmux is set through the AM335x's control module, which can only be configured from kernel space. If you're running the 3.8 kernel ($ uname -r) then the way to mux the pins is through the Device Tree. You need to either manually load your own overlays, or you could use the universal-io overlay and the config-pin command line utility.
You could also use the PyBBIO library, which includes Device Tree overlays for the GPIO pins and loads them automatically so you don't have to worry about writing your own.
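If you go the universal-io route, a minimal sketch of the idea is to run config-pin for each pin in the affected range. The snippet below just generates the commands (run them on the board itself); it assumes the universal-io/cape-universal overlay is loaded, and note that some header pins may still be claimed by other peripherals.

```python
# Sketch: generate config-pin commands to put P8_3..P8_18 into GPIO mode.
# Assumes the universal-io (cape-universal) overlay is already loaded on
# the BeagleBone; run the printed commands there.
pins = ["P8_{}".format(n) for n in range(3, 19)]
commands = ["config-pin {} gpio".format(pin) for pin in pins]
for cmd in commands:
    print(cmd)
```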

Related

Is there a way to get nomachine to better show the caret in terminal?

Host machine: Debian 10 running NoMachine 7.2.3
Settings:
Specified H264
User Hardware Encoding enabled
Use Specific Frame Rate enabled (60FPS)
Use Acceleration enabled
Client: Windows 10 running NoMachine 7.2.3
Both machines have monitors attached.
Using NX protocol for connection.
FullScreen / Scale to Window / Desktop is currently 2560x1440 (reduced from native while testing this issue)
Specific issue:
I do a ton of work in the terminal and when viewing desktop via nomachine, the terminal caret is randomly not visible. The same issue is less noticeable with right click menus and other areas of "visual updates in small screen space." If this were another remote desktop vendor I would try to find the "don't update just regions" setting to force the entire display to update regularly, but I can't find similar settings for nomachine. I have a dedicated gigabit connection between the two machines with no other traffic on that line, so bandwidth is not an issue.
To recreate:
I disabled caret blink (using universal access / accessibility settings) so the caret is a solid block in terminal / vi. If I edit a text file in vi and move up and down, the caret only updates visually every other line or so (verified on the physical screen that it is moving correctly). The same happens if I highlight or insert, etc. (you inevitably miss a character or lose your place).
I have tried changing speed vs quality slider, resolutions, swapping from h264 to VP8, etc.
I have disabled:
multi-pass display encoding
frame buffering on decoding
client side image post-processing
Nothing seems to change this specific issue. Yes I can make dragging a quarter-screen-sized terminal window smoother, but that doesn't help me follow the caret in vi/vim. Both machines are nicely spec'd (client has 16G / RTX2080, server has 32G / GTX1080)
Is there a way to get nomachine to update all the screen all the time, or at least better refresh small areas like a terminal caret?
(OP): Based on a night of troubleshooting, the issue seemed to be either:
An issue with the Debian install of the nvidia drivers
The server machine is a laptop with a broken main screen (but with an HDMI external monitor plugged in). The Debian X-server may have been confused as to whether it was headless or not and caused issues with nomachine (which tries to detect headless and start a virtual session).
The solution to this exact problem would be to disable the GUI and force a virtual session, per https://www.nomachine.com/AR03P00973 (dummy dongles won't work because the laptop's main display is not a standard plug).
In my specific case, I needed GUI access on the server at times so I couldn't use the above methods, and I could not remedy the problem with Debian, so I wiped the system and installed Ubuntu 20.04, which is more forgiving with graphics drivers and monitors. After setting up the Ubuntu system as similarly as possible to the Debian system and letting the proprietary nvidia drivers auto install, nomachine connected at the same resolution and worked perfectly, without the lag in small screen areas.

How to find the GPIO mapping for 'boot switch' in Beagle bone black Rev C?

I have a BeagleBone Black Rev C and, as part of my learning, I would like to configure the boot switch button as an input GPIO.
Can someone please help me find the GPIO mapping for the boot switch pin? I have tried looking at the expansion headers in the BBB SRM document with no results.
Thanks a lot in advance.
Your main source would be the BeagleBone Black Rev C schematic; thankfully, the full schematic of the BBB is readily available.
On page 6 you will find the boot connections. The boot switch in particular is connected to SYS_BOOT2 and LCD_DATA2, which is eventually connected to GPIO2_8.
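As a quick sketch of how GPIO2_8 maps to a Linux GPIO number (assuming the usual AM335x convention of 32 lines per bank, and the legacy sysfs interface; the helper name is my own):

```python
# On the AM335x, GPIOx_y maps to global GPIO number x*32 + y in sysfs.
def gpio_number(bank, line):
    return bank * 32 + line

boot_switch = gpio_number(2, 8)  # GPIO2_8, per the schematic
print(boot_switch)  # 72
print("/sys/class/gpio/gpio{}/value".format(boot_switch))
```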

ESP8266 with the MicroPython always reboots

I've flashed MicroPython onto a NodeMCU board based on the ESP-12E module and used the screen command in a terminal on OS X to reach the REPL. It works for a few seconds and then the REPL resets.
I have no idea where the problem is (I can type a few commands before all my work is cleared and I see the MicroPython console from scratch).
Without more information, this is a difficult issue to diagnose. Basically, there are 4 possible causes of this behaviour:
1) power fluctuation causes the board to reset
2) the board resets because the reset pin is physically pulled to GND
3) the board resets because the reset pin is logically pulled to GND
4) the function machine.reset() is called
Steps to diagnose:
For 1), try a powered hub, a separate power supply, a different USB cable, or a different USB port to power your device, and observe whether the reset still happens.
For 2), inspect the board: see if there is a solder bridge between the reset pin and GND (they are next to each other, as seen on this image) or between the pins of the reset button.
For 3) and 4), you need to look at the code in boot.py and main.py, both located on the internal filesystem of your board. You can get those files using WebREPL or by using the following code:
print(open('boot.py').read())
print(open('main.py').read())
If you post the contents here, we can inspect them with you.
Alternatively, try reflashing MicroPython using the latest .bin from micropython.org and see if a clean version of MicroPython corrects the issue.
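To help with cause 4), here is a hypothetical helper (plain Python, names are my own invention) that scans the boot.py/main.py text you retrieved for calls that would reboot the board:

```python
# Hypothetical helper: flag lines in boot.py/main.py that can reboot the
# board (machine.reset / machine.deepsleep). Run it on the text retrieved
# with the print(open(...).read()) snippets above.
def find_reset_calls(source):
    suspects = ("machine.reset(", "machine.deepsleep(")
    return [line.strip() for line in source.splitlines()
            if any(s in line for s in suspects)]

demo = "import machine\nprint('boot')\nmachine.reset()\n"
print(find_reset_calls(demo))  # ['machine.reset()']
```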

Point Grey Bumblebee2 firewire 1394 with Nvidia Jetson TK1 board

I have successfully interfaced a Point Grey Bumblebee2 FireWire 1394 camera with the Nvidia Jetson TK1 board, and I get video using Coriander; the video4linux loopback device works as well. But when I try to access the camera using OpenCV and Coriander at the same time, I get conflicts. And when I try to access the video after closing Coriander, I can get the video, but then I am not able to change the mode and format of the video. Can anyone help me resolve this problem? Can I change the video mode of the camera from OpenCV?
You will have to install the FlyCapture SDK for ARM if you want to do it manually (by code). I don't believe the FlyCap UI software works on ARM, let alone Ubuntu 14.04; just Ubuntu 12.04 x86. If you have access, what I usually do is plug the camera into my Windows machine and use the FlyCap software to change the configuration on the camera.
I found this question completely randomly, but coincidentally I am trying to interface the Bumblebee2 with the Jetson right now as well. Would you care to share which FireWire mini-PCIe card you used and how you went about the configuration (stock or Grinch kernel, which L4T version)?
Also, although not fully complete, you can view a code example of how to interface with the camera using the FlyCapture SDK here: https://github.com/ros-drivers/pointgrey_camera_driver. It is a ROS driver, but you can just reference the PointGreyCamera.cpp file for examples if you're not using ROS.
Hope this helps
This is not well advertised, but Point Grey does not support FireWire on ARM (page 4):
Before installing FlyCapture, you must have the following prerequisites:... A Point Grey USB 3.0 camera, (Blackfly, Grasshopper3, or Flea3)
Other Point Grey imaging cameras (FireWire, GigE, or CameraLink) are NOT supported
However as you have seen it is possible to use the camera (e.g. in Coriander) using standard firewire tools.
libdc1394 or the videography library should do what you need.

Capturing multiple webcams (uvcvideo) with OpenCV on Linux

I am trying to simultaneously stream the images from 3 Logitech Webcam Pro 900 devices using OpenCV 2.1 on Ubuntu 11.10. The uvcvideo driver gets loaded for these.
Capturing two devices works fine, however with three I run into the out of space error for the third:
libv4l2: error turning on stream: No space left on device
I seem to be running into this issue:
http://renoirsrants.blogspot.com.au/2011/07/multiple-webcams-on-zoneminder.html and I have attempted the quirks=128 trick (and pretty much every other power-of-two value), but to no avail. I also tried another machine with two USB 2.0 hubs, connecting two cameras to one hub and the third camera to the other, which resulted in the same problem. I am initializing roughly as follows (using N cameras, so the result is actually put into an STL vector):
cv::VideoCapture cap0(0); //(0,1,2..)
and attempting to capture all the cameras in a loop as
cap0.retrieve(frame0);
This works fine for N=2 cameras. When I set N=3, the third window opens but no image appears, and the console is spammed with V4L2 errors. Similarly, when I set N=2 and attempt to open the third camera in, say, Cheese (a simple webcam capture application), that doesn't work either.
Now comes the big but: After trying guvcview by starting three instances of that, I was able to view three cameras at once (with no problems in terms of frame rate or related), so it does not seem to be a hardware issue. I figure there is some property that I should set, but I'm not sure what that is. I have looked into MJPEG (which these cameras seem to support), but haven't succeeded into setting this property, or detect in which mode (yuyv?) they are running if I start them from OpenCV.
Thoughts?
I had this problem too and have a solution that lets me capture 2 cameras at 640x480 with mjpeg compression. I am using a Creative "Live Cam Sync HD VF0770" which incorrectly reports its bandwidth requirements. The quirks=128 fix works for 320x240 uncompressed video. But for compressed (mjpg) format the quirks=128 does not work (it does nothing for compressed formats).
To fix this I modified the uvc driver as follows:
download the kernel sources
mkdir -p ~/Software/kernel-git
cd ~/Software/kernel-git
git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
git checkout v3.2
# NOTE: `uname -r` shows me my current kernel is 3.2.0-60-generic
# For a different kernel use a different tag
copy uvc dir:
mkdir -p ~/Software/uvcvideo_driver
cd ~/Software/uvcvideo_driver
#cp -a ~/Software/kernel-git/linux/drivers/media/usb/uvc .
cp -r ~/Software/kernel-git/linux/drivers/media/video/uvc .
modify Makefile
cd ~/Software/uvcvideo_driver/uvc
vi Makefile
obj-m += aauvcvideo.o
aauvcvideo-objs := uvc_driver.o uvc_queue.o uvc_v4l2.o uvc_video.o uvc_ctrl.o \
                   uvc_status.o uvc_isight.o
all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules
clean:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean
Force the bandwidth to 0x400 when compressed:
cd ~/Software/uvcvideo_driver/uvc
vi uvc_video.c
Find the uvc_fixup_video_ctrl() function. At the end of the function add:
if (format->flags & UVC_FMT_FLAG_COMPRESSED) {
    ctrl->dwMaxPayloadTransferSize = 0x400;
}
build the aauvcvideo module:
make
remove old module and insert new one:
sudo rmmod uvcvideo
sudo insmod ./aauvcvideo.ko quirks=128
run guvcview twice, with compression, in 2 different windows to test:
guvcview --device=/dev/video1 --format=mjpg --size=640x480
guvcview --device=/dev/video2 --format=mjpg --size=640x480
Good luck!
-Acorn
I had this exact problem, using three Logitech QuickCam Pro 9000 cameras (on Ubuntu). I could read from two, but not three. In my case, I wasn't using OpenCV, but was accessing the cameras through V4L2 directly, using memory-mapped IO. Simply put, there was not enough USB bandwidth to allocate three buffers.
I was reading in the uncompressed frames, however. As soon as I switched the format to MJPEG, the data was small enough, and I could read from the three cameras. I used libjpeg to decode the MJPEG stream.
I haven't looked into how to change the image format using OpenCV, but I do know that it needs to be MJPEG to fit all that data.
Before I switched to MJPEG, I spent a lot of time trying to access each camera one at a time, streaming a single frame before switching to the next. Not recommended!
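For what it's worth, with OpenCV's V4L backend the usual way to request MJPEG is cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'MJPG')); whether the driver honors it depends on the build, as noted elsewhere in this thread. The FOURCC value itself is just four ASCII bytes packed little-endian, which this sketch reproduces without needing OpenCV installed:

```python
# The FOURCC passed to cv2.CAP_PROP_FOURCC is four ASCII bytes packed
# little-endian; this reproduces cv2.VideoWriter_fourcc(*'MJPG').
def fourcc(code):
    return sum(ord(c) << (8 * i) for i, c in enumerate(code))

print(fourcc("MJPG"))  # 1196444237
```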
Most likely there is USB bandwidth contention reported by the driver of the video capture device. Check if the pixel format is YUYV, which happens to be uncompressed. On the contrary, if the pixel format is MJPG (compressed), it is possible to have multiple devices on the same USB channel.
v4l2-ctl -d /dev/video0 --list-formats
The output would be something like below:
ioctl: VIDIOC_ENUM_FMT
Index : 0
Type : Video Capture
Pixel Format: 'YUYV'
Name : 16bpp YUY2, 4:2:2, packed
The following are the possible solutions:
Use capture devices from different manufacturers so that the drivers loaded are different. Generally, a single driver handling multiple devices needs to manage the bandwidth effectively.
Use a PCI USB extension card if available to attach the 2nd USB video capture device. This workaround worked excellently for me when I tried attaching AVerMedia DVD EZMaker 7 that loaded the driver cx231xx.
OpenCV can be built to use either v4l or libv4l, and only the v4l version supports compressed formats, while the libv4l version supports just one uncompressed format for OpenCV 2.4.11. (See autosetup_capture_mode_v4l2() for v4l and the code following line 692 for libv4l.) OpenCV 3.0.0 does not improve much over 2.4.11 here; it still supports only uncompressed formats for libv4l.
Since your error mentions libv4l2, you seem to have the libv4l version and OpenCV captured uncompressed in your case. To build a v4l version of OpenCV, your cmake command should contain
-D WITH_LIBV4L=OFF
(WITH_LIBV4L was enabled by default for me.)
A note on bandwidth and USB. USB 2.0 (which virtually all webcams use) has a bandwidth of 480 Mbit/s. 640x480 at 30 fps and 24 bits/pixel uncompressed is about 221 Mbit/s, so one can use up USB 2.0 bandwidth quickly with uncompressed webcams. One gets 480 Mbit/s for each USB host controller, see this answer on how to list them. (USB hubs do not add host controllers, and several USB ports on a motherboard are typically connected to the same host controller. All devices and hubs connected to a host controller share the bandwidth.)
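A quick check of the arithmetic above:

```python
# Uncompressed bandwidth for one 640x480 @ 30 fps, 24 bits/pixel stream.
width, height, fps, bpp = 640, 480, 30, 24
mbits = width * height * fps * bpp / 1e6
print(mbits)  # 221.184
# Two such streams come close to saturating one 480 Mbit/s USB 2.0
# host controller, before protocol overhead is even counted.
```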
For webcams that reserve more USB bandwidth than they need, e.g., those with footnote [13] on the UVC driver page, the FIX_BANDWIDTH quirk can help. But the quirk works only for uncompressed formats (unless you do the kernel hack in Acorn's answer here). In my case (Ubuntu 14.04, many Microsoft LifeCam Cinemas at 320x240), the quirk worked when I used the libv4l version of OpenCV (four LifeCams on an ASMedia USB host controller worked well) but for the v4l version -- which I confirmed to use MJPEG -- I got a VIDIOC_STREAMON: No space left on device error as soon as I tried to capture from a second LifeCam! (For the same machine, Intel and VIA host controllers did better and each worked with two LifeCams for v4l; the LifeCam reserves 48% of USB 2.0 bandwidth.)
This works like a charm for me:
sudo rmmod uvcvideo
sudo modprobe uvcvideo quirks=128
This will be reset on every reboot. If it works, create the following file to make it persistent:
sudo vi /etc/modprobe.d/uvcvideo.conf
containing the line:
options uvcvideo quirks=128
check this link
http://renoirsrants.blogspot.in/2011/07/multiple-webcams-on-zoneminder.html
With kernel 4.15.0-34-generic on Ubuntu 18.04 and OpenCV 3.4 compiled with gstreamer/v4l support, I am able to stream 3x720p on a single USB port through a powered hub, using MJPG compression with gstreamer in Python (with 2x C922 and 1x C920 cameras; the 10 fps framerate isn't necessary for this to work):
import cv2

def open_cam_usb(dev, width, height):
    gst_str = (
        "v4l2src device=/dev/video{} ! "
        "image/jpeg,width=(int){},height=(int){},framerate=10/1 ! "
        "jpegdec ! videoconvert ! appsink"
    ).format(dev, width, height)
    return cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)
One of the most helpful things I discovered was if you place a Sleep(ms) call in between your capture initializations. This allowed me to retrieve two webcam captures simultaneously without problem.
