How I open a camera:
In a terminal (text after the $ prompt is what I type):
$ ls /dev/video*
/dev/video0 /dev/video1
$ vlc v4l2:///dev/video0
VLC media player 2.0.8 Twoflower (revision 2.0.8a-0-g68cf50b)
[0x9f2d908] main libvlc: Running vlc with the default interface. Use 'cvlc' to use vlc without interface.
VLC starts playing the camera output.
In another terminal (again, text after the $ prompt is what I type):
$ vlc v4l2:///dev/video1
VLC media player 2.0.8 Twoflower (revision 2.0.8a-0-g68cf50b)
[0x9b24908] main libvlc: Running vlc with the default interface. Use 'cvlc' to use vlc without interface.
[0xb0500960] v4l2 demux error: VIDIOC_STREAMON failed
libv4l2: error setting pixformat: Device or resource busy
libv4l2: error setting pixformat: Device or resource busy
libv4l2: error setting pixformat: Device or resource busy
libv4l2: error setting pixformat: Device or resource busy
libv4l2: error setting pixformat: Device or resource busy
libv4l2: error setting pixformat: Device or resource busy
libv4l2: error setting pixformat: Device or resource busy
libv4l2: error setting pixformat: Device or resource busy
libv4l2: error setting pixformat: Device or resource busy
libv4l2: error setting pixformat: Device or resource busy
[0xb0501188] v4l2 access error: cannot set input 0: Device or resource busy
[0xb0501188] v4l2 access error: cannot set input 0: Device or resource busy
[0xb5300618] main input error: open of `v4l2:///dev/video1' failed
No video plays, only the error messages.
Primary objective: I want to open two cameras simultaneously in OpenCV (C++). I got similar errors with OpenCV, so I'm using VLC to debug the issue.
These are the errors when opening two cameras simultaneously using C++ OpenCV (the code is similar to https://code.ros.org/trac/opencv/browser/trunk/opencv/samples/cpp/starter_video.cpp?rev=4705 ):
e557822#e557822-T740:~/Desktop/Camera/starter_video2$ ls /dev/video*
/dev/video0 /dev/video1 /dev/video2
e557822#e557822-T740:~/Desktop/Camera/starter_video2$ ./starter_video2 0 1
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
press space to save a picture. q or esc to quit
init done
opengl support available
libv4l2: error turning on stream: No space left on device
VIDIOC_STREAMON: No space left on device
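For reference, the failing code boils down to something like the following minimal sketch in the spirit of the linked starter_video.cpp (OpenCV 2.x C++ API); this is my simplification, not the sample itself:

// two_cams.cpp - open two V4L2 cameras at once (minimal sketch).
// Build: g++ two_cams.cpp -o two_cams `pkg-config --cflags --libs opencv`
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <cstdio>

int main() {
    cv::VideoCapture cap0(0), cap1(1);   // /dev/video0 and /dev/video1
    if (!cap0.isOpened() || !cap1.isOpened()) {
        std::fprintf(stderr, "failed to open one of the cameras\n");
        return 1;
    }
    cv::Mat frame0, frame1;
    for (;;) {
        cap0 >> frame0;   // streaming from the second device is where
        cap1 >> frame1;   // VIDIOC_STREAMON fails in my runs
        if (frame0.empty() || frame1.empty()) break;
        cv::imshow("cam0", frame0);
        cv::imshow("cam1", frame1);
        if (cv::waitKey(30) >= 0) break;
    }
    return 0;
}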
UPDATE 7/24:
This Ubuntu 12.04 (32-bit) system is a guest OS running on VMware Fusion. The host OS is OS X 10.9.4 running on Mac Pro hardware.
$ lsusb -t
/: Bus 02.Port 1: Dev 1, Class=root_hub, Driver=uhci_hcd/2p, 12M
|__ Port 1: Dev 2, If 0, Class=HID, Driver=usbhid, 12M
|__ Port 2: Dev 3, If 0, Class=hub, Driver=hub/7p, 12M
|__ Port 1: Dev 4, If 0, Class='bInterfaceClass 0xe0 not yet handled', Driver=btusb, 12M
|__ Port 1: Dev 4, If 1, Class='bInterfaceClass 0xe0 not yet handled', Driver=btusb, 12M
/: Bus 01.Port 1: Dev 1, Class=root_hub, Driver=ehci_hcd/6p, 480M
|__ Port 1: Dev 6, If 0, Class='bInterfaceClass 0x0e not yet handled', Driver=uvcvideo, 480M
|__ Port 1: Dev 6, If 1, Class='bInterfaceClass 0x0e not yet handled', Driver=uvcvideo, 480M
|__ Port 2: Dev 7, If 0, Class='bInterfaceClass 0x0e not yet handled', Driver=uvcvideo, 480M
|__ Port 2: Dev 7, If 1, Class='bInterfaceClass 0x0e not yet handled', Driver=uvcvideo, 480M
$
This is a USB bandwidth problem, not a VLC one.
VIDIOC_STREAMON: No space left on device is the message given when the USB bandwidth is exhausted.
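The message comes straight from the VIDIOC_STREAMON ioctl failing with ENOSPC, which the UVC driver raises when the USB host controller cannot reserve enough isochronous bandwidth for the negotiated format. A stripped-down sketch of where it surfaces (format negotiation and buffer queueing are omitted, so treat this as illustrative only):

// streamon_check.cpp - show where "No space left on device" (ENOSPC)
// comes from. Illustrative sketch: real capture code must negotiate a
// pixel format and mmap/queue buffers before VIDIOC_STREAMON.
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <unistd.h>
#include <cerrno>
#include <cstdio>
#include <cstring>

int main() {
    int fd = open("/dev/video0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    struct v4l2_requestbuffers req;
    std::memset(&req, 0, sizeof(req));
    req.count  = 4;
    req.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_MMAP;
    if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0) { perror("VIDIOC_REQBUFS"); return 1; }

    enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    if (ioctl(fd, VIDIOC_STREAMON, &type) < 0 && errno == ENOSPC) {
        // The host controller could not allocate the isochronous
        // bandwidth the camera asked for: "No space left on device".
        perror("VIDIOC_STREAMON");
    }
    close(fd);
    return 0;
}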
Most modern USB cameras produce high-speed, high-density image output. USB 2.0 is limited to 480 Mbps, which is 60 megabytes/s theoretical. In practice all kinds of overhead eat about half of that, and 30 megabytes/s is the maximum you can get. This means a camera can send a 1-megabyte image at 30 fps.
All you can do is get a motherboard with multiple USB buses. Small computers tend to have just one; high-end gaming motherboards have two or three USB 2.0 buses. You can see them under Linux:
$ lsusb -t
/: Bus 04.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/4p, 5000M
/: Bus 03.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/4p, 480M
|__ Port 4: Dev 2, If 0, Class=scard, Driver=usbfs, 12M
/: Bus 02.Port 1: Dev 1, Class=root_hub, Driver=ehci_hcd/2p, 480M
|__ Port 1: Dev 2, If 0, Class=hub, Driver=hub/8p, 480M
|__ Port 5: Dev 3, If 0, Class='bInterfaceClass 0xe0 not yet handled', Driver=btusb, 12M
|__ Port 5: Dev 3, If 1, Class='bInterfaceClass 0xe0 not yet handled', Driver=btusb, 12M
/: Bus 01.Port 1: Dev 1, Class=root_hub, Driver=ehci_hcd/2p, 480M
|__ Port 1: Dev 2, If 0, Class=hub, Driver=hub/6p, 480M
|__ Port 3: Dev 3, If 0, Class=vend., Driver=rts5139, 480M
|__ Port 5: Dev 4, If 0, Class='bInterfaceClass 0x0e not yet handled', Driver=uvcvideo, 480M
|__ Port 5: Dev 4, If 1, Class='bInterfaceClass 0x0e not yet handled', Driver=uvcvideo, 480M
$
Most probably both cameras are connected to the same bus.
The other option is to reduce the FPS or the resolution, but this does not always help, because I have seen cameras that reserve 80% of the USB bandwidth anyway.
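In OpenCV (staying with the C++ API from the question) requesting a smaller format looks roughly like this; whether the camera honors it depends on the driver, and the 320x240 at 15 fps values are only examples:

// Request a smaller format before grabbing, to cut per-camera USB
// bandwidth (sketch; the driver is free to ignore these requests).
#include <opencv2/highgui/highgui.hpp>

int main() {
    cv::VideoCapture cap0(0), cap1(1);
    cap0.set(CV_CAP_PROP_FRAME_WIDTH,  320);
    cap0.set(CV_CAP_PROP_FRAME_HEIGHT, 240);
    cap0.set(CV_CAP_PROP_FPS,          15);
    cap1.set(CV_CAP_PROP_FRAME_WIDTH,  320);
    cap1.set(CV_CAP_PROP_FRAME_HEIGHT, 240);
    cap1.set(CV_CAP_PROP_FPS,          15);
    // ...grab frames as usual...
    return 0;
}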
Related
There are very similar questions to this one, but all of them are solved by disabling some other wifi source using modprobe and then resetting rfkill.
In my case:
artixlinux:[rail]:/etc/modprobe.d$ rfkill list all
0: phy0: Wireless LAN
Soft blocked: no
Hard blocked: yes
And with nmcli:
wlan0: unavailable
"Qualcomm Atheros AR9485"
wifi (ath9k), 5A:9D:61:C0:BB:F0, sw disabled, hw, mtu 1500
I've already tried to modprobe ath9k, but that does nothing.
Hardware:
[System]
OS: Artix Linux 20220123 n/a
Arch: x86_64
Kernel: 5.18.0-zen1-1-zen
Desktop: KDE
Display Server: x11
[CPU]
Vendor: GenuineIntel
Model: Intel(R) Core(TM) i3-3227U CPU @ 1.90GHz
Physical cores: 2
Logical cores: 4
[Memory]
RAM: 3.7 GB
Swap: 0.0 GB
[Graphics]
Vendor: Intel
OpenGL Renderer: Mesa Intel(R) HD Graphics 4000 (IVB GT2)
OpenGL Version: 4.2 (Compatibility Profile) Mesa 22.2.0-devel (git-3e679219a1)
OpenGL Core: 4.2 (Core Profile) Mesa 22.2.0-devel (git-3e679219a1)
OpenGL ES: OpenGL ES 3.0 Mesa 22.2.0-devel (git-3e679219a1)
Vulkan: Supported
See https://askubuntu.com/a/98719
"Hard blocked" cannot be changed by software, look for a wifi toggle on your keyboard or edges of the laptop; the device can also be hard blocked if disabled in the bios.
And:
https://askubuntu.com/questions/98702/how-to-unblock-something-listed-in-rfkill#comment618926_98719
FYI, a hard block also happens when the wifi is disabled in the BIOS.
I have a cluster consisting of 4 nodes in total, 3 servers and 1 management node, which was working properly.
At the beginning of the month we planned to patch the OS, and we started with the first server node using this procedure:
Stop service
OS patching
Server restart
Start service
The service on the first patched node, named "serverA", fails to restart with this error.
Log entries from the cluster join:
serverA:
| INFO | region-dm-12 | ache.geode.internal.tcp.Connection | --> Connection: shared=true ordered=false failed to connect to peer 10.237.110.195( Server serverB:9993):1024 because: java.net.ConnectException: Connection timed out (Connection timed out)
| WARN | region-dm-12 | ache.geode.internal.tcp.Connection | --> Connection: Attempting reconnect to peer 10.237.110.195( Server serverB:9993):1024
ServerMgmt:
| WARN | pool-3-thread-1 | tributed.internal.ReplyProcessor21 | --> 15 seconds have elapsed while waiting for replies: <CreateRegionProcessor$CreateRegionReplyProcessor 44180 waiting for 1 replies from [10.237.110.194( Server serverA:632):1024]> on 10.237.110.225( Management:6033):1024 whose current membership list is: [[10.237.110.196( Server serverC:16805):1024, 10.237.110.225( Management:6033):1024, 10.237.110.195( Server serverB:9993):1024, 10.237.110.194( Server serverA:632):1024]]
The connection between the systems was verified with tcpdump; UDP 1024 is running fine.
We have tried redeploying the service and made numerous attempts, but we always get the same error during startup.
Any suggestions? Thank you.
Marco.
I think that to see this error message, serverA was probably able to send UDP messages to serverB but is failing to create a TCP connection. It's hard to say why, though: a firewall issue, some TCP configuration issue, ... ?
Check to see if serverB has anything interesting in its logs. Since you are using tcpdump, you should watch for that TCP connection to serverB:9993, since it looks like that is what failed.
There is no firewall between the systems. We've analyzed the network connection again during startup of node A, and we can see that communication can be established between all systems. What we detected, though, is that on port 2323, which is configured as the locator, the node sends packets to the B and C nodes but only receives packets back from the C node, not from the B node. This is, for us, another sign that the B node has an issue. Is there a way to check our assumption from the B node?
A node ip .194
B node ip .195
C node ip .196
Management ip .225
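If you want to check raw reachability independently of Geode, attempt a plain TCP connect from each node to the suspect peer's port: nc or telnet works, or a minimal sketch like the following (hypothetical helper; pass the address and port that tcpdump shows failing):

// tcptest.cpp - minimal TCP connect check (sketch).
// Build: g++ tcptest.cpp -o tcptest; run: ./tcptest 10.237.110.195 9993
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <cstdlib>
#include <cstring>

int main(int argc, char** argv) {
    const char* host = argc > 1 ? argv[1] : "10.237.110.195"; // serverB
    int port = argc > 2 ? std::atoi(argv[2]) : 9993;

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    sockaddr_in addr;
    std::memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    if (inet_pton(AF_INET, host, &addr.sin_addr) != 1) {
        std::fprintf(stderr, "bad address: %s\n", host);
        return 1;
    }

    if (connect(fd, (sockaddr*)&addr, sizeof(addr)) != 0) {
        perror("connect");  // "Connection timed out" here matches the log
        close(fd);
        return 1;
    }
    std::printf("TCP connect to %s:%d succeeded\n", host, port);
    close(fd);
    return 0;
}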
I'm trying to set up TensorFlow to use GPU acceleration with WSL 2 running Ubuntu 20.04. I'm following this tutorial and am running into the error seen here. However, when I follow the solution there and try to start docker with sudo service docker start, I get told docker is an unrecognized service. Considering I can access the help menu and whatnot, though, I know docker is installed. While I can get docker to work with the desktop tool, it doesn't support CUDA as mentioned in the SO post from earlier, so it's not very helpful. It's not really giving me error logs or anything, so please ask if you need more details.
Edit:
Considering the lack of details, here is a list of solutions I've tried to no avail: 1 2 3
Update: I used sudo dockerd to get the daemon started and tried running the NVIDIA benchmark container, only to be met with:
INFO[2020-07-18T21:04:05.875283800-04:00] shim containerd-shim started address=/containerd-shim/021834ef5e5600bdf62a6a9e26dff7ffc1c76dd4ec9dadb9c1fcafb6c88b6e1b.sock debug=false pid=1960
INFO[2020-07-18T21:04:05.899420200-04:00] shim reaped id=70316df254d6b2633c743acb51a26ac2d0520f6f8e2f69b69c4e0624eaac1736
ERRO[2020-07-18T21:04:05.909710600-04:00] stream copy error: reading from a closed fifo
ERRO[2020-07-18T21:04:05.909753500-04:00] stream copy error: reading from a closed fifo
ERRO[2020-07-18T21:04:06.001006700-04:00] 70316df254d6b2633c743acb51a26ac2d0520f6f8e2f69b69c4e0624eaac1736 cleanup: failed to delete container from containerd: no such container
ERRO[2020-07-18T21:04:06.001045100-04:00] Handler for POST /v1.40/containers/70316df254d6b2633c743acb51a26ac2d0520f6f8e2f69b69c4e0624eaac1736/start returned error: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"process_linux.go:432: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: initialization error: driver error: failed to process request\\\\n\\\"\"": unknown
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"process_linux.go:432: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: initialization error: driver error: failed to process request\\\\n\\\"\"": unknown.
ERRO[0000] error waiting for container: context canceled
Update 2: After installing Windows Insider and making everything as up to date as possible, I encountered a different error:
Run "nbody -benchmark [-numbodies=<numBodies>]" to measure performance.
-fullscreen (run n-body simulation in fullscreen mode)
-fp64 (use double precision floating point values for simulation)
-hostmem (stores simulation data in host memory)
-benchmark (run benchmark to measure performance)
-numbodies=<N> (number of bodies (>= 1) to run in simulation)
-device=<d> (where d=0,1,2.... for the CUDA device to use)
-numdevices=<i> (where i=(number of CUDA devices > 0) to use for simulation)
-compare (compares simulation results running once on the default GPU and once on the CPU)
-cpu (run n-body simulation on the CPU)
-tipsy=<file.bin> (load a tipsy model file for simulation)
NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.
Error: only 0 Devices available, 1 requested. Exiting.
I have a GTX 970, so I'm not sure why it's not being detected. Running sudo lshw -C display confirmed that my graphics card isn't recognized; I got:
*-display UNCLAIMED
description: 3D controller
product: Microsoft Corporation
vendor: Microsoft Corporation
physical id: 4
bus info: pci#941e:00:00.0
version: 00
width: 32 bits
clock: 33MHz
capabilities: bus_master cap_list
configuration: latency=0
My new laptop has an NVIDIA graphics card (not sure which one). I have freshly installed Fedora 27. However, I cannot get my HDMI or VGA port to work or even be recognized. I've tried following the instructions at https://www.if-not-true-then-false.com/2015/fedora-nvidia-guide/ and https://fedoraproject.org/wiki/Bumblebee (as well as a few others), but have been unsuccessful so far.
Here is some output from various commands for context:
> xrandr -q
xrandr: Failed to get size of gamma for output default
Screen 0: minimum 1920 x 1080, current 1920 x 1080, maximum 1920 x 1080
default connected primary 1920x1080+0+0 0mm x 0mm
1920x1080 77.00*
> sudo lspci -v | grep -A 15 'VGA'
00:02.0 VGA compatible controller: Intel Corporation Device 591b (rev 04) (prog-if 00 [VGA controller])
Subsystem: Dell Device 07d1
Flags: bus master, fast devsel, latency 0, IRQ 255
Memory at eb000000 (64-bit, non-prefetchable) [size=16M]
Memory at 80000000 (64-bit, prefetchable) [size=256M]
I/O ports at f000 [size=64]
[virtual] Expansion ROM at 000c0000 [disabled] [size=128K]
Capabilities: [40] Vendor Specific Information: Len=0c <?>
Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00
Capabilities: [ac] MSI: Enable- Count=1/1 Maskable- 64bit-
Capabilities: [d0] Power Management version 2
Capabilities: [100] Process Address Space ID (PASID)
Capabilities: [200] Address Translation Service (ATS)
Capabilities: [300] Page Request Interface (PRI)
Kernel modules: i915
> sudo lspci -v | grep -A 18 '3D'
01:00.0 3D controller: NVIDIA Corporation Device 179c (rev a2)
Subsystem: Dell Device 07d1
Flags: bus master, fast devsel, latency 0, IRQ 255
Memory at ec000000 (32-bit, non-prefetchable) [size=16M]
Memory at c0000000 (64-bit, prefetchable) [size=256M]
Memory at d0000000 (64-bit, prefetchable) [size=32M]
I/O ports at e000 [disabled] [size=128]
Expansion ROM at ed000000 [disabled] [size=512K]
Capabilities: [60] Power Management version 3
Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
Capabilities: [78] Express Endpoint, MSI 00
Capabilities: [100] Virtual Channel
Capabilities: [250] Latency Tolerance Reporting
Capabilities: [258] L1 PM Substates
Capabilities: [128] Power Budgeting <?>
Capabilities: [600] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?>
Capabilities: [900] #19
Kernel modules: nouveau
Any ideas?
It ended up being because I had installed my OS in Basic Graphics Mode, which boots with the nomodeset kernel parameter: https://ask.fedoraproject.org/en/question/46846/what-does-nomodeset-do/
I have a job that is failing at the very start, with the message:
"#*" and "#N" are reserved sharding specs. Filepattern must not contain any of them.
I have altered the destination location to something other than the default (an email address), which would include the # symbol, but I can see that the job still uses temporary destinations within that path, which I am unable to edit.
Has anyone experienced this issue before? I've got a file which is only 65k rows long; I can preview all of the complete data in Dataprep, but when I run the job it fails, which is super tedious: ~3 hrs of cleaning down the drain if this won't run. (I appreciate it's not designed for this, but Excel was being a mare, so it seemed like a good solution!)
Edit - Adding Logs:
2018-03-10 (13:47:34) Value "PTableLoadTransformGCS/Shuffle/GroupByKey/Session" materialized.
2018-03-10 (13:47:34) Executing operation PTableLoadTransformGCS/SumQuoteAndDelimiterCounts/GroupByKey/Read+PTableLoadTran...
2018-03-10 (13:47:38) Executing operation PTableLoadTransformGCS/Shuffle/GroupByKey/Close
2018-03-10 (13:47:38) Executing operation PTableStoreTransformGCS/WriteFiles/GroupUnwritten/Create
2018-03-10 (13:47:39) Value "PTableStoreTransformGCS/WriteFiles/GroupUnwritten/Session" materialized.
2018-03-10 (13:47:39) Executing operation PTableLoadTransformGCS/Shuffle/GroupByKey/Read+PTableLoadTransformGCS/Shuffle/Gr...
2018-03-10 (13:47:39) Executing failure step failure49
2018-03-10 (13:47:39) Workflow failed. Causes: (c759db2a23a80ea): "#*" and "#N" are reserved sharding specs. Filepattern m...
(c759db2a23a8c5b): Workflow failed. Causes: (c759db2a23a80ea): "#*" and "#N" are reserved sharding specs. Filepattern must not contain any of them.
2018-03-10 (13:47:39) Cleaning up.
2018-03-10 (13:47:39) Starting worker pool teardown.
2018-03-10 (13:47:39) Stopping worker pool...
And the Stackdriver entries at warning severity or higher:
W ACPI: RSDP 0x00000000000F23A0 000014 (v00 Google)
W ACPI: RSDT 0x00000000BFFF3430 000038 (v01 Google GOOGRSDT 00000001 GOOG 00000001)
W ACPI: FACP 0x00000000BFFFCF60 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
W ACPI: DSDT 0x00000000BFFF3470 0017B2 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
W ACPI: FACS 0x00000000BFFFCF00 000040
W ACPI: FACS 0x00000000BFFFCF00 000040
W ACPI: SSDT 0x00000000BFFF65F0 00690D (v01 Google GOOGSSDT 00000001 GOOG 00000001)
W ACPI: APIC 0x00000000BFFF5D10 00006E (v01 Google GOOGAPIC 00000001 GOOG 00000001)
W ACPI: WAET 0x00000000BFFF5CE0 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
W ACPI: SRAT 0x00000000BFFF4C30 0000B8 (v01 Google GOOGSRAT 00000001 GOOG 00000001)
W ACPI: 2 ACPI AML tables successfully acquired and loaded
W ACPI: Executed 2 blocks of module-level executable AML code
W acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
W ACPI: Enabled 16 GPEs in block 00 to 0F
W ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11
W ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 10
W i8042: Warning: Keylock active
W GPT:Primary header thinks Alt. header is not at the end of the disk.
W GPT:41943039 != 524287999
W GPT:Alternate GPT header not at the end of the disk.
W GPT:41943039 != 524287999
W GPT: Use GNU Parted to correct GPT errors.
W device-mapper: verity: Argument 0: 'payload=PARTUUID=245B0EEC-6404-8744-AAF2-E8C6BF78D7B2'
W device-mapper: verity: Argument 1: 'hashtree=PARTUUID=245B0EEC-6404-8744-AAF2-E8C6BF78D7B2'
W device-mapper: verity: Argument 2: 'hashstart=2539520'
W device-mapper: verity: Argument 3: 'alg=sha1'
W device-mapper: verity: Argument 4: 'root_hexdigest=244007b512ddbf69792d485fdcbc3440531f1264'
W device-mapper: verity: Argument 5: 'salt=5bacc0df39d2a60191e9b221ffc962c55e251ead18cf1472bf8d3ed84383765b'
E EXT4-fs (dm-0): couldn't mount as ext3 due to feature incompatibilities
W [/usr/lib/tmpfiles.d/var.conf:12] Duplicate line for path "/var/run", ignoring.
W Could not stat /dev/pstore: No such file or directory
W Kernel does not support crash dumping
W Could not load the device policy file.
W [CLOUDINIT] cc_write_files.py[WARNING]: Undecodable permissions None, assuming 420
W [CLOUDINIT] cc_write_files.py[WARNING]: Undecodable permissions None, assuming 420
W [CLOUDINIT] cc_write_files.py[WARNING]: Undecodable permissions None, assuming 420
W [CLOUDINIT] cc_write_files.py[WARNING]: Undecodable permissions None, assuming 420
W [WARNING:persistent_integer.cc(75)] cannot open /var/lib/metrics/version.cycle for reading: No such file or directory
W No API client: no api servers specified
W Unable to update cni config: No networks found in /etc/cni/net.d
W unable to connect to Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp 127.0.0.1:15441: getsockopt: connection refused
W No api server defined - no events will be sent to API server.
W Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
W Unable to update cni config: No networks found in /etc/cni/net.d
E Image garbage collection failed once. Stats initialization may not have completed yet: unable to find data for container /
W No api server defined - no node status update will be sent.
E Failed to check if disk space is available for the runtime: failed to get fs info for "runtime": unable to find data for container /
E Failed to check if disk space is available on the root partition: failed to get fs info for "root": unable to find data for container /
E [ContainerManager]: Fail to get rootfs information unable to find data for container /
W Registration of the rkt container factory failed: unable to communicate with Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp 127.0.0.1:15441: getsockopt: connection refused
E Could not find capacity information for resource storage.kubernetes.io/scratch
W eviction manager: no observation found for eviction signal allocatableNodeFs.available
W Profiling Agent not found. Profiles will not be available from this worker.
E debconf: delaying package configuration, since apt-utils is not installed
W [WARNING:metrics_daemon.cc(598)] cannot read /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq
E   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
E                                   Dload  Upload   Total   Spent    Left  Speed
E     0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
E   100  3698  100  3698    0     0  64248      0 --:--:-- --:--:-- --:--:-- 64877