I am using a code segment to detect red color with a camera while simultaneously reading data from sensors in multiple threads. After a while I get a "No such device" error, and the camera and all USB devices, such as the mouse and keyboard, freeze. Any idea what the source of this error is?
I had a similar problem, caused by the xhci (high-speed USB) driver disabling my internal USB hub. It occurred each time an OpenCV-based application I was running crashed or was killed. This caused a timeout in the xhci driver, which could be seen in dmesg.
The only solution I found was to build a custom kernel for my Kubuntu Linux with the high-speed USB driver compiled as a module instead of built into the kernel.
I used this wiki to set up a kernel-building environment and changed the xhci USB module from * (compiled into the kernel) to M (module loaded into the kernel at runtime).
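A sketch of that procedure, assuming a mainline kernel source tree where the xHCI driver is governed by the `CONFIG_USB_XHCI_HCD` option (the wiki covers setting up the full build environment):

```shell
# After a crash, look for the xhci timeout messages in the kernel log
dmesg | grep -i xhci

# In the kernel source tree, switch the xHCI driver from built-in to module:
# Device Drivers -> USB support -> xHCI HCD (USB 3.0) support -> M
make menuconfig

# Verify the result, then build and install
grep CONFIG_USB_XHCI_HCD .config   # should now print CONFIG_USB_XHCI_HCD=m
make -j"$(nproc)" && sudo make modules_install && sudo make install
```

With the driver as a module, it can be reloaded after a crash instead of leaving the bus wedged until reboot.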
I'm having a hard time getting hardware acceleration for video working in Electron on Linux (ARM64) and Linux (Intel64). I'm not sure whether this is an issue with the flags Electron passes to Chromium or an issue at the driver level on the host machines. Or maybe it's just not possible. Both machines are running Chromium 95 snap 64-bit.
When running Chromium (ARM64) without any flags and opening chrome://gpu, I get the following:
When running Chromium (ARM64) with --enable-features=VaapiVideoDecoder, I get the following:
This leads me to believe that hardware acceleration should be working when Chromium is called with that flag. Just to add to the complexity, if I go to YouTube and check the media panel, it looks like acceleration may still be disabled (even with the flags):
I have read through a number of articles titled "how to enable hardware acceleration in Electron", most of which list the following flags:
app.commandLine.appendSwitch('ignore-gpu-blacklist')
app.commandLine.appendSwitch('enable-gpu-rasterization')
app.commandLine.appendSwitch('enable-accelerated-video')
app.commandLine.appendSwitch('enable-accelerated-video-decode')
app.commandLine.appendSwitch('use-gl', 'desktop')
app.commandLine.appendSwitch('enable-features', 'VaapiVideoDecoder')
I have tried all of these, but nothing seems to make any difference. When running a video in Electron, it has the following properties:
Is anyone able to point me in the right direction with this? Thank you.
This has been solved. The main issue was that the VA-API driver needed to be installed on the hardware running the application. Secondly, the only flag needed was the following:
app.commandLine.appendSwitch('enable-features', 'VaapiVideoDecoder')
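On the host side, a quick way to check whether a working VA-API driver is present is the vainfo tool. A minimal sketch, assuming a Debian/Ubuntu host (package names differ on other distros):

```shell
# Install a VA-API driver plus the vainfo diagnostic tool
# (Debian/Ubuntu package names; other distros differ)
sudo apt-get install vainfo va-driver-all

# vainfo prints the driver in use and the supported decode profiles.
# If it errors out, Chromium's VaapiVideoDecoder has no driver to talk to.
vainfo
```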
Host machine: Debian 10 running NoMachine 7.2.3
Settings:
Specified H264
Use Hardware Encoding enabled
Use Specific Frame Rate enabled (60FPS)
Use Acceleration enabled
Client: Windows 10 running NoMachine 7.2.3
Both machines have monitors attached.
Using NX protocol for connection.
FullScreen / Scale to Window / Desktop is currently 2560x1440 (reduced from native while testing this issue)
Specific issue:
I do a ton of work in the terminal and when viewing desktop via nomachine, the terminal caret is randomly not visible. The same issue is less noticeable with right click menus and other areas of "visual updates in small screen space." If this were another remote desktop vendor I would try to find the "don't update just regions" setting to force the entire display to update regularly, but I can't find similar settings for nomachine. I have a dedicated gigabit connection between the two machines with no other traffic on that line, so bandwidth is not an issue.
To recreate:
I disabled caret blink (using the universal access / accessibility settings) so the caret is a solid block in the terminal / vi. If I edit a text file in vi and move up and down, the caret only updates visually every other line or so (verified on the physical screen that it is moving correctly). The same happens if I highlight or insert, etc. You inevitably miss a character or lose your place.
I have tried changing speed vs quality slider, resolutions, swapping from h264 to VP8, etc.
I have disabled:
multi-pass display encoding
frame buffering on decoding
client side image post-processing
Nothing seems to change this specific issue. Yes, I can make dragging a quarter-screen-sized terminal window smoother, but that doesn't help me follow the caret in vi/vim. Both machines are nicely spec'd (the client has 16 GB and an RTX 2080; the server has 32 GB and a GTX 1080).
Is there a way to get NoMachine to update the whole screen all the time, or at least refresh small areas like the terminal caret more reliably?
(OP): Based on a night of troubleshooting, the issue seemed to be either:
An issue with the Debian install of the nvidia drivers
The server machine is a laptop with a broken main screen (but with an HDMI external monitor plugged in). The Debian X-server may have been confused as to whether it was headless or not and caused issues with nomachine (which tries to detect headless and start a virtual session).
The solution to this exact problem would be to disable the GUI and force a virtual session, per https://www.nomachine.com/AR03P00973 (dummy dongles won't work because the laptop's main display is not a standard plug).
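The generic part of forcing a virtual session on a systemd-based Debian can be sketched like this (the linked NoMachine article has the authoritative, NoMachine-specific steps):

```shell
# Boot into a text console instead of the graphical session, so NoMachine
# detects a headless machine and starts its own virtual display session
sudo systemctl set-default multi-user.target
sudo reboot

# To restore the local GUI later:
sudo systemctl set-default graphical.target
```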
In my specific case, I needed GUI access on the server at times so I couldn't use the above methods, and I could not remedy the problem with Debian, so I wiped the system and installed Ubuntu 20.04, which is more forgiving with graphics drivers and monitors. After setting up the Ubuntu system as similarly as possible to the Debian system and letting the proprietary nvidia drivers auto install, nomachine connected at the same resolution and worked perfectly, without the lag in small screen areas.
Background: We are looking to release a commercial product based on the Android Things OS and Raspberry Pi 3 hardware. The OS seems to become corrupt over time, usually after several weeks of continuous testing. By corrupt, I mean the Android screen no longer appears on startup, and moving the SD card to new hardware does not remedy it. We are using an application factory image based on the 0.5.1-devpreview, created in the Console.
My question: Is there a way to debug or monitor what caused this state in the OS? A direct serial connection?
Try cleaning the SD card with the diskpart command and starting again from scratch.
And to debug, a USB-to-TTL serial cable may help, as explained here.
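A minimal sketch of opening that serial console, assuming a typical USB-to-TTL adaptor that enumerates as /dev/ttyUSB0 (check dmesg after plugging it in) and the Raspberry Pi 3's default 115200-baud console:

```shell
# Wire the adaptor's TX/RX/GND to the Pi's UART pins (TX->RX, RX->TX),
# then attach to the console to watch boot messages and kernel errors
screen /dev/ttyUSB0 115200
```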
Regards!
I have been using the JAI SDK as well as the JAI control tool that is installed with it for more than two years now without a problem. Recently I updated the SDK and the JAI GigE Vision Filter Driver that comes with it to the latest version from their website.
On the development PC the update went well and everything still works as before. However, on another machine (a laptop), the same update caused both the software developed using the SDK and the control tool to generate unrecoverable errors whenever they try to open a GigE camera. I have tried re-installing and restarting many times. I also made sure there are no conflicts in the Device Manager. However, I always get the same exception, whether it comes from the JAI control tool, the JAI GigE Vision persistent IP configuration tool, or my own software written using the SDK. Here is the exception description:
************** Exception Text **************
Jai_FactoryDotNET.Jai_FactoryWrapper+FactoryErrorException: Error
at Jai_FactoryDotNET.Jai_FactoryWrapper.ThrowFactoryException(EFactoryError error) in T:\JAI_trunk\source\JAIControlTool\JAISDK.NET\Jai_Factory_Wrapper.cs:line 184
at Jai_FactoryDotNET.CCamera..ctor(IntPtr factoryHandle, String cameraID, IntPtr hTL, IntPtr hIF, String genericName) in T:\JAI_trunk\source\JAIControlTool\JAISDK.NET\Camera.cs:line 1454
at Jai_FactoryDotNET.CFactory.UpdateDeviceList(EDriverType preferredDriverType) in T:\JAI_trunk\source\JAIControlTool\JAISDK.NET\Factory.cs:line 801
at IPConfig.IPConfigForm.SearchForCameras()
at System.Windows.Forms.Timer.OnTick(EventArgs e)
at System.Windows.Forms.Timer.TimerNativeWindow.WndProc(Message& m)
at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
Has anyone seen this before?
I managed to find a solution to the problem, but I still do not have a good explanation for why it happens. It turns out the JAI GigE filter driver is causing the problem.
The PC that I originally used to test the upgrade has two gigabit Ethernet ports, both with the filter driver enabled and both used to interface with the camera. The laptop has only one Ethernet port, and I use an Ethernet SmartCard adaptor for the second connection. The problem, however, lies with the wireless network adaptor, which also has the filter driver enabled as a network service.
The problem disappears when the filter driver is disabled on the wireless adaptor. This was never an issue in the previous version of the SDK, but it now seems the filter driver should be enabled only on the network devices that actually interface with the camera.
I have successfully interfaced a Point Grey Bumblebee2 FireWire (IEEE 1394) camera with an Nvidia Jetson TK1 board; I get video in Coriander, and the Video4Linux loopback device works as well. But when I try to access the camera from OpenCV and Coriander at the same time, I get conflicts. And when I access the video after closing Coriander, I do get the video, but in that case I am unable to change the mode and format. Can anyone help me resolve this? Can I change the video mode of the camera from OpenCV?
You will have to install the FlyCapture SDK for ARM if you want to do it manually (in code). I don't believe the FlyCap UI software works on ARM, let alone Ubuntu 14.04; it only supports Ubuntu 12.04 x86. If you have access to one, what I usually do is plug the camera into my Windows machine and use the FlyCap software to change the configuration on the camera.
I found this question completely randomly, but coincidentally I am trying to interface the Bumblebee2 with the Jetson right now as well. Would you care to share which FireWire mini-PCIe card you used and how you went about any configuration (stock or Grinch kernel, which L4T version)?
Also, although not fully complete, you can view a code example of how to interface with the camera using the FlyCapture SDK here: https://github.com/ros-drivers/pointgrey_camera_driver. It is a ROS driver, but you can just reference the PointGreyCamera.cpp file for examples if you're not using ROS.
Hope this helps
This is not well advertised, but Point Grey does not support FireWire on ARM (page 4):
Before installing FlyCapture, you must have the following prerequisites:... A Point Grey USB 3.0 camera, (Blackfly, Grasshopper3, or Flea3)
Other Point Grey imaging cameras (FireWire, GigE, or CameraLink) are NOT supported
However, as you have seen, it is possible to use the camera (e.g. in Coriander) using standard FireWire tools.
libdc1394 or the videography library should do what you need.
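As a sketch of the libdc1394 route (v2 API; the enumeration and mode-setting calls shown are standard, but the Bumblebee2 is a Format_7 camera, so the exact mode constant you need may differ):

```c
#include <stdio.h>
#include <dc1394/dc1394.h>

int main(void)
{
    dc1394_t *ctx = dc1394_new();                 /* library context */
    if (!ctx) return 1;

    /* Enumerate cameras on the bus; assumes exactly one is attached */
    dc1394camera_list_t *list;
    if (dc1394_camera_enumerate(ctx, &list) != DC1394_SUCCESS || list->num == 0) {
        fprintf(stderr, "no IEEE 1394 camera found\n");
        dc1394_free(ctx);
        return 1;
    }

    dc1394camera_t *cam = dc1394_camera_new(ctx, list->ids[0].guid);
    dc1394_camera_free_list(list);

    /* Set video mode and frame rate before starting capture; Coriander
       must not hold the camera open at the same time */
    dc1394_video_set_mode(cam, DC1394_VIDEO_MODE_640x480_MONO8);
    dc1394_video_set_framerate(cam, DC1394_FRAMERATE_15);

    dc1394_camera_free(cam);
    dc1394_free(ctx);
    return 0;
}
```

Build with `gcc mode_set.c $(pkg-config --cflags --libs libdc1394-2)`. OpenCV's FireWire backend exposes only some of these settings, so driving libdc1394 directly gives fuller control over mode and format.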