Sharing PCIe virtual functions available on the host with a docker container

My host is running in SR-IOV mode and has several physical devices that appear on the PCIe bus. Each physical function has a collection of 32 virtual functions. I want to share one of the virtual functions with a docker container. These are crypto/compression accelerators, and I wrote a driver for one, so I'm familiar with SR-IOV when dealing with bare metal or with SR-IOV hypervisors launching virtual machines. But now I'm trying to get access to the virtual functions inside a docker container.
On the host I can run lspci and see my physical and virtual devices. But when I launch a container, all I see from within the container are the physical functions.
I have seen the "--device" parameter for "docker run", but I don't think it will work for passing a virtual function to a container.
Concretely, here's what I see on the host:
[localhost] config # lspci | grep "^85" | head -4
85:00.0 Co-processor: Intel Corporation DH895XCC Series QAT
85:01.0 Co-processor: Intel Corporation DH895XCC Series QAT Virtual Function
85:01.1 Co-processor: Intel Corporation DH895XCC Series QAT Virtual Function
85:01.2 Co-processor: Intel Corporation DH895XCC Series QAT Virtual Function
[localhost] config # lspci | grep "^85" | wc
33 295 2524
So we have 1 physical function at 85:00.0, and 32 virtual functions.
But when I start the container and do the same examination from inside the container, all I see is the following:
[localhost] config # lspci | grep QAT
04:00.0 Co-processor: Intel Corporation DH895XCC Series QAT
05:00.0 Co-processor: Intel Corporation DH895XCC Series QAT
85:00.0 Co-processor: Intel Corporation DH895XCC Series QAT
I've been told that this can be made to work: I can pass virtual functions into the container, and my driver can do the rest.
My question: how can I pass virtual functions from the host into a container?

As mentioned in the comment (but spelling out the flag name):
docker run -it --rm --cap-add=SYS_RAWIO ...
Then try lspci from inside the container again.
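For reference, a minimal sketch of what a fuller invocation could look like once the VF exposes a device node on the host. The /dev path and image name below are hypothetical placeholders, not real names: --device only passes device nodes created by the driver, while --cap-add=SYS_RAWIO is what the answer above relies on for lspci inside the container.
# Sketch only: /dev/my_vf_node and my-image are placeholders
docker run -it --rm \
  --cap-add=SYS_RAWIO \
  --device=/dev/my_vf_node \
  my-image sh -c 'lspci | grep QAT'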

CUDA Version mismatch in Docker with WSL2 backend

I am trying to use docker (Docker Desktop for Windows 10 Pro) with the WSL2 backend (Windows Subsystem for Linux, Ubuntu 20.04.4 LTS).
That part seems to be working fine, except I would like to pass my GPU (Nvidia RTX A5000) through to my docker container.
Before I even get that far, I am still trying to set things up. I found a very good tutorial aimed at 18.04, and found all the steps are the same for 20.04, just with some version numbers bumped.
At the end, I can see that my CUDA versions do not match.
The real issue is when I try to run the test command as shown on the docker website:
docker run --rm -it --gpus=all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
I get this error:
--> docker run --rm -it --gpus=all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
docker: Error response from daemon: OCI runtime create failed: container_linux.go:380:
starting container process caused: process_linux.go:545: container init caused: Running
hook #0:: error running hook: exit status 1, stdout: , stderr: nvidia-container-cli:
requirement error: unsatisfied condition: cuda>=11.6, please update your driver to a
newer version, or use an earlier cuda container: unknown.
... and I just don't know what to do, or how I can fix this.
Can someone explain how to get the GPU to pass through to a docker container successfully?
I had the same issue on Ubuntu when I tried to run the container:
s.evloev@some-pc:~$ docker run --gpus all --rm nvidia/cuda:11.7.0-base-ubuntu18.04
docker: Error response from daemon: failed to create shim: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy'
nvidia-container-cli: requirement error: unsatisfied condition: cuda>=11.7, please update your driver to a newer version, or use an earlier cuda container: unknown.
In my case it occurred when I tried to launch a docker image whose CUDA version was higher than the one installed on my host.
When I checked the CUDA version installed on my host, I found that it was 11.3.
s.evloev@some-pc:~$ nvidia-smi
Thu Jul 21 15:06:33 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 465.19.01 Driver Version: 465.19.01 CUDA Version: 11.3 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
|... |
+-----------------------------------------------------------------------------+
So when I run the same CUDA version (11.3), it works well:
s.evloev@some-pc:~$ docker run -it --gpus all --rm nvidia/cuda:11.3.0-base-ubuntu18.04 nvidia-smi
Thu Jul 21 12:13:46 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 465.19.01 Driver Version: 465.19.01 CUDA Version: 11.3 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... On | 00000000:65:00.0 Off | N/A |
| 0% 44C P8 7W / 180W | 1404MiB / 8110MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
The comment from @RobertCrovella resolved this:
please update your driver to a newer version when using WSL, the driver in your WSL setup is not something you install in WSL, it is provided by the driver on the windows side. Your WSL driver is 472.84 and this is too old to work with CUDA 11.6 (it only supports up to CUDA 11.4). So you would need to update your windows side driver to the latest one possible for your GPU, if you want to run a CUDA 11.6 test case. Regarding the "mismatch" of CUDA versions, this provides general background material for interpretation.
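Before updating anything, a quick sanity check is to compare the CUDA version reported by nvidia-smi inside WSL with the tag of the image you are pulling; a sketch (the 11.4 tag below is only an example, pick one at or below what your driver reports):
# The "CUDA Version" field is the newest CUDA the installed driver supports
nvidia-smi
# Then pull a CUDA image at or below that version
docker run --rm --gpus all nvidia/cuda:11.4.0-base-ubuntu20.04 nvidia-smi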
Downloading the most current Nvidia driver:
Version: R510 U3 (511.79) WHQL
Release Date: 2022.2.14
Operating System: Windows 10 64-bit, Windows 11
Language: English (US)
File Size: 640.19 MB
Now I am able to support CUDA 11.6, and the test from the docker documentation works:
--> docker run --rm -it --gpus=all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
Run "nbody -benchmark [-numbodies=<numBodies>]" to measure performance.
-fullscreen (run n-body simulation in fullscreen mode)
-fp64 (use double precision floating point values for simulation)
-hostmem (stores simulation data in host memory)
-benchmark (run benchmark to measure performance)
-numbodies=<N> (number of bodies (>= 1) to run in simulation)
-device=<d> (where d=0,1,2.... for the CUDA device to use)
-numdevices=<i> (where i=(number of CUDA devices > 0) to use for simulation)
-compare (compares simulation results running once on the default GPU and once on the CPU)
-cpu (run n-body simulation on the CPU)
-tipsy=<file.bin> (load a tipsy model file for simulation)
NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.
> Windowed mode
> Simulation data stored in video memory
> Single precision floating point simulation
> 1 Devices used for simulation
GPU Device 0: "Ampere" with compute capability 8.6
> Compute 8.6 CUDA device: [NVIDIA RTX A5000]
65536 bodies, total time for 10 iterations: 58.655 ms
= 732.246 billion interactions per second
= 14644.916 single-precision GFLOP/s at 20 flops per interaction
Thank you for the quick response!

Dockerized nmap shows incorrect OS versions

I've noticed that when Nmap is dockerized it yields incorrect OS detection results. I've tried various pre-built docker images as well as one I created myself, and they all show the same results.
Here are a few of the pre-built images I've tried:
https://hub.docker.com/r/instrumentisto/nmap
https://hub.docker.com/r/uzyexe/nmap/
I've run the same Nmap command with these images and with my locally installed Nmap, and here are the results (all images use Nmap 7.80):
$ nmap -sV -O 192.168.1.1
------(locally installed nmap result - correct):
OS CPE: cpe:/o:linux:linux_kernel:2.6
OS details: Linux 2.6.8 - 2.6.30
Network Distance: 1 hop
Service Info: OS: Linux; Device: broadband router; CPE: cpe:/o:linux:linux_kernel
------(all docker image nmap results - incorrect):
OS CPE: cpe:/h:hp:jetdirect_170x cpe:/h:hp:inkjet_3000
Aggressive OS guesses: HP 170X print server or Inkjet 3000 printer (85%), HP LaserJet 4000 printer (85%), HP LaserJet 4250 printer (85%)
No exact OS matches for host (test conditions non-ideal).
Service Info: OS: Linux; Device: broadband router; CPE: cpe:/o:linux:linux_kernel
What's interesting to me is that the Service Info is actually correct across the scans, but nothing else is.
I'm trying to figure out if there is a setting/flag that I'm missing when executing the docker command. Here's what I've tried (example invocations below):
Setting the docker network to host (no change in result)
Setting the docker network to bridge (no change in result)
Not setting any network setting (no change in result)
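Roughly, the invocations looked like this (a sketch using one of the images above and my router as the target):
# host networking
docker run --rm --net=host instrumentisto/nmap -sV -O 192.168.1.1
# default bridge networking
docker run --rm --net=bridge instrumentisto/nmap -sV -O 192.168.1.1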
I really need to get Nmap working in a docker container because it's integrated into a Rails web app that I'm building using the ruby-nmap gem.
Thanks!

Docker not seeing usb /dev/ttyACM0 after unplugging and then replugging

I'm running an Ubuntu 18.04 Docker container which I use to compile code and flash IoT devices. I start it with this command:
docker run --privileged --device=/dev/ttyACM0 -it -v disc_vol1:/root/zephyr zephyr
This allows me to see the USB devices. However, if for some reason I need to unplug and replug the devices while the container is still running, docker no longer sees them until I restart the container.
Is there a solution for this problem?
dmesg output after unplugging and then replugging:
[388387.919792] usb 3-3: USB disconnect, device number 47
[388387.919796] usb 3-3.1: USB disconnect, device number 48
[388387.957792] FAT-fs (sdb): unable to read boot sector to mark fs as dirty
[388406.517953] usb 3-1: new high-speed USB device number 51 using xhci_hcd
[388406.666047] usb 3-1: New USB device found, idVendor=0424, idProduct=2422
[388406.666051] usb 3-1: New USB device strings: Mfr=0, Product=0, SerialNumber=0
[388406.666415] hub 3-1:1.0: USB hub found
[388406.666438] hub 3-1:1.0: 2 ports detected
[388407.881910] usb 3-1.1: new full-speed USB device number 52 using xhci_hcd
[388407.986919] usb 3-1.1: New USB device found, idVendor=0d28, idProduct=0204
[388407.986924] usb 3-1.1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[388407.986927] usb 3-1.1: Product: DAPLink CMSIS-DAP
[388407.986929] usb 3-1.1: Manufacturer: ARM
[388407.986932] usb 3-1.1: SerialNumber: 1026000015afe1e800000000000000000000000097969902
[388407.987898] usb-storage 3-1.1:1.0: USB Mass Storage device detected
[388407.988131] scsi host10: usb-storage 3-1.1:1.0
[388407.991188] hid-generic 0003:0D28:0204.00A9: hiddev0,hidraw3: USB HID v1.00 Device [ARM DAPLink CMSIS-DAP] on usb-0000:00:14.0-1.1/input3
[388407.991926] cdc_acm 3-1.1:1.1: ttyACM0: USB ACM device
[388409.014753] scsi 10:0:0:0: Direct-Access MBED VFS 0.1 PQ: 0 ANSI: 2
[388409.015336] sd 10:0:0:0: Attached scsi generic sg2 type 0
[388409.015632] sd 10:0:0:0: [sdb] 131200 512-byte logical blocks: (67.2 MB/64.1 MiB)
[388409.015888] sd 10:0:0:0: [sdb] Write Protect is off
[388409.015892] sd 10:0:0:0: [sdb] Mode Sense: 03 00 00 00
[388409.016103] sd 10:0:0:0: [sdb] No Caching mode page found
[388409.016109] sd 10:0:0:0: [sdb] Assuming drive cache: write through
[388409.045555] sd 10:0:0:0: [sdb] Attached SCSI removable disk
[388482.439345] CIFS VFS: Free previous auth_key.response = 00000000df9e4b01
[388521.789341] CIFS VFS: Free previous auth_key.response = 0000000071020f34
[388554.099064] CIFS VFS: Free previous auth_key.response = 000000002a3aa60b
[388590.132004] CIFS VFS: Free previous auth_key.response = 000000009bed9fb5
[388606.372288] usb 3-1: USB disconnect, device number 51
[388606.372292] usb 3-1.1: USB disconnect, device number 52
[388606.415803] FAT-fs (sdb): unable to read boot sector to mark fs as dirty
[388622.643954] usb 3-3: new high-speed USB device number 53 using xhci_hcd
[388622.792057] usb 3-3: New USB device found, idVendor=0424, idProduct=2422
[388622.792061] usb 3-3: New USB device strings: Mfr=0, Product=0, SerialNumber=0
[388622.792451] hub 3-3:1.0: USB hub found
[388622.792479] hub 3-3:1.0: 2 ports detected
And when I do ls /dev/ttyACM0 or /dev/ttyACM1, nothing changes whether the device is plugged in or unplugged. The problem is that I cannot flash or see the devices with, for example, pyocd: when I do pyocd list, the devices won't show up until I restart the container.
Problem
The problem lies in the device node creation mechanism.
As you can read in LFS docs, in 9.3.2.2. Device Node Creation:
Device files are created by the kernel by the devtmpfs filesystem.
By comparing mount entries in host:
$ mount
...
udev on /dev type devtmpfs (rw,nosuid,noexec,relatime,size=16259904k,nr_inodes=4064976,mode=755,inode64)
...
...and in container:
# mount
...
tmpfs on /dev type tmpfs (rw,nosuid,size=65536k,mode=755,inode64)
...
...you can notice that /dev filesystem in the container isn't the same thing as it is in the host.
It seems to me that a privileged docker container recreates the /dev structure while starting. Later, the kernel does create the device node in devtmpfs, but as long as the container uses a separate filesystem for devices, the node isn't created there. As confirmation, you can notice that after unplugging a device (one that was connected before the container started), its node still persists inside the container but disappears from the host.
Solution
You can work around it by creating the node manually. In this example I plugged in /dev/ttyUSB1 while the container was running.
On the host machine, find the major and minor device numbers:
$ ls -la /dev/ttyUSB*
crw-rw----+ 1 root plugdev 188, 0 gru 5 15:25 /dev/ttyUSB0
crw-rw----+ 1 root plugdev 188, 1 gru 5 15:26 /dev/ttyUSB1
# ^^^^^^ major and minor number
And create corresponding node inside container:
# ll /dev/ttyUSB*
crw-rw---- 1 root plugdev 188, 0 Dec 5 14:25 /dev/ttyUSB0
# mknod /dev/ttyUSB1 c 188 1
# ll /dev/ttyUSB*
crw-rw---- 1 root plugdev 188, 0 Dec 5 14:25 /dev/ttyUSB0
crw-r--r-- 1 root root 188, 1 Dec 5 15:16 /dev/ttyUSB1
The device should work.
Enhancement
You can also automate node creation by installing udev and writing some custom rules inside the container.
I found this repo that successfully sets up a udev instance inside a container: udevadm monitor correctly reflects udev events, matching the host.
The last thing is to write some udev rules that will automagically create corresponding nodes inside the container:
ACTION=="add", RUN+="mknod %N c %M %m"
ACTION=="remove", RUN+="rm %N"
I haven't tested it yet, but I see no reason why it would not work.
Better enhancement
You don't need to install udev inside the container. You can run mknod there from a script that runs on the host machine (triggered by the host's udev), as described here. It would be good to handle removing nodes as well.
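A rough sketch of that host-side approach (untested; the container name zephyr, the rules file path and the helper script are all assumptions): a udev rule on the host runs docker exec to create or remove the matching node inside the running container.
# /etc/udev/rules.d/99-container-tty.rules (host side; container name assumed)
ACTION=="add",    SUBSYSTEM=="tty", KERNEL=="ttyACM[0-9]*", RUN+="/usr/local/bin/container-dev-add.sh %k %M %m"
ACTION=="remove", SUBSYSTEM=="tty", KERNEL=="ttyACM[0-9]*", RUN+="/usr/bin/docker exec zephyr rm -f /dev/%k"
And the hypothetical helper script, /usr/local/bin/container-dev-add.sh:
#!/bin/sh
# $1 = kernel name (e.g. ttyACM0), $2 = major number, $3 = minor number
docker exec zephyr mknod "/dev/$1" c "$2" "$3"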

Docker Desktop for Windows: No hypervisor is present on this system

I am new to Docker Desktop for Windows. I am getting an error when I try the hello world example by following this.
Below are the steps I followed:
1. Installed Docker for Windows, stable version
2. Both Hyper-V and Virtualization have been enabled on my Windows 10
However, I get the error below when switching to Linux containers:
An error occurred.
Hardware assisted virtualization and data execution protection must be enabled in the BIOS. See https://docs.docker.com/docker-for-windows/troubleshoot/#virtualization-must-be-enabled
Please note that the problem in this post occurs when using Windows containers. Step 3 uses Windows containers, not Linux.
3. Error below when trying out hello-world:
PS C:\Users\'#.lp> docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
407ada6e90de: Pull complete
9c9e16cbf19f: Pull complete
2cb715c55064: Pull complete
990867d1296d: Pull complete
Digest: sha256:445b2fe9afea8b4aa0b2f27fe49dd6ad130dfe7a8fd0832be5de99625dad47cd
Status: Downloaded newer image for hello-world:latest
C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: container e646da0e13b5c2ba92db3ade35f6a334f9c2903efde26a78765f55f0498a86f1 encountered an error during CreateContainer: failure in a Windows system call: No hypervisor is present on this system. (0xc0351000) extra info: {"SystemType":"Container","Name":"e646da0e13b5c2ba92db3ade35f6a334f9c2903efde26a78765f55f0498a86f1","Owner":"docker","IgnoreFlushesDuringBoot":true,"LayerFolderPath":"C:\\ProgramData\\Docker\\windowsfilter\\e646da0e13b5c2ba92db3ade35f6a334f9c2903efde26a78765f55f0498a86f1","Layers":[{"ID":"84cbd4e4-1a6a-5e55-86fa-927ba5be73e0","Path":"C:\\ProgramData\\Docker\\windowsfilter\\417caa6a366bad6fe0d68d2b459510e4c50fda5686b37fb91c9363ca103e9475"},{"ID":"e747017d-859e-5513-b9ad-346002efc167","Path":"C:\\ProgramData\\Docker\\windowsfilter\\43e4d5eeaebc150ea9da0bf919302a2d7646461e3da60b5cbd3db15d3d928698"},{"ID":"e0bd7f8a-622c-589f-9752-eb7b80b88973","Path":"C:\\ProgramData\\Docker\\windowsfilter\\e8ee5f9ec8d67bfebe230b67989dd788506e33627a4400bb63ba098b2a3fd733"},{"ID":"6f13d213-2d8c-5c37-b1f5-770f73ad2d9a","Path":"C:\\ProgramData\\Docker\\windowsfilter\\a731844c4d933200e984524b7273ac3a555792bafec6eab30722fdfd7992ee96"}],"HostName":"e646da0e13b5","HvPartition":true,"EndpointList":["0b88e638-56ea-4157-88a7-67fc3bc35958"],"HvRuntime":{"ImagePath":"C:\\ProgramData\\Docker\\windowsfilter\\e8ee5f9ec8d67bfebe230b67989dd788506e33627a4400bb63ba098b2a3fd733\\UtilityVM"},"AllowUnqualifiedDNSQuery":true}.
System information below:
PS C:\Users\'#.lp> docker --version
Docker version 17.09.1-ce, build 19e2cf6
PS C:\Users\'#.lp> docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 17.09.1-ce
Storage Driver: windowsfilter
Windows:
Logging Driver: json-file
Plugins:
Volume: local
Network: ics l2bridge l2tunnel nat null overlay transparent
Log: awslogs etwlogs fluentd json-file logentries splunk syslog
Swarm: inactive
Default Isolation: hyperv
Kernel Version: 10.0 16299 (16299.15.amd64fre.rs3_release.170928-1534)
Operating System: Windows 10 Pro
OSType: windows
Architecture: x86_64
CPUs: 4
Total Memory: 7.999GiB
Name: username
ID: 5EK5:6LMU:NPZG:3K2F:W3X7:2G7T:GFYU:GENE:LDBA:UASU:ZF26:T3AU
Docker Root Dir: C:\ProgramData\Docker
Debug Mode (client): false
Debug Mode (server): true
File Descriptors: -1
Goroutines: 24
System Time: 2017-12-24T20:16:32.0728521Z
EventsListeners: 0
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
PS C:\Users\'#.lp> docker-compose --version
docker-compose version 1.17.1, build 6d101fb0
PS C:\Users\'#.lp> docker-machine --version
docker-machine.exe version 0.13.0, build 9ba6da9
Windows 10 Pro version 1709
Any idea?
Update:
PS C:\WINDOWS\system32> docker --version
Docker version 17.12.0-ce, build c97c6d6
PS C:\WINDOWS\system32> docker rm -f $(docker ps -a -q)
a7094c166be7
afbc956d0630
6cc2e3a20dcc
e646da0e13b5
PS C:\WINDOWS\system32> docker rmi -f $(docker images -q)
Untagged: hello-world:latest
Untagged: hello-world@sha256:445b2fe9afea8b4aa0b2f27fe49dd6ad130dfe7a8fd0832be5de99625dad47cd
Deleted: sha256:29528317da62a27024338f18abf29c992d6cdb4087f7d195cb6725bbe6bd15cc
Deleted: sha256:729a95d3f7234b02c27bdaf4fd81fd3fb9453445a85b713398c6bd05ad290ff5
Deleted: sha256:fcea8c486bda6858dee33a0ce494fba4839e542554b0588f6d00833a4155a537
Deleted: sha256:53cda6d9c060289530670af7ac429015f88d1ac58417f94f22c3dd2f03210436
Deleted: sha256:67903cf26ef4095868687002e3dc6f78ad275677704bf0d11524f16209cec48e
PS C:\WINDOWS\system32> docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
407ada6e90de: Pull complete
711a33cda32c: Pull complete
f2954926b3d8: Pull complete
8b6a3aeeca73: Pull complete
Digest: sha256:66ef312bbac49c39a89aa9bcc3cb4f3c9e7de3788c944158df3ee0176d32b751
Status: Downloaded newer image for hello-world:latest
C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: container 99a306c2336a7bd503bfe8a744ace77cedc19bbc0d15e52b8d899bcea3db6b96 encountered an error during CreateContainer: failure in a Windows system call: No hypervisor is present on this system. (0xc0351000) extra info: {"SystemType":"Container","Name":"99a306c2336a7bd503bfe8a744ace77cedc19bbc0d15e52b8d899bcea3db6b96","Owner":"docker","IgnoreFlushesDuringBoot":true,"LayerFolderPath":"C:\\ProgramData\\Docker\\windowsfilter\\99a306c2336a7bd503bfe8a744ace77cedc19bbc0d15e52b8d899bcea3db6b96","Layers":[{"ID":"a5eef81d-74bf-53d1-8517-78b635324fdb","Path":"C:\\ProgramData\\Docker\\windowsfilter\\afb89f854af8452a0a12dfb14dc47995e001057c7af209be45ed5ee4813d2ffd"},{"ID":"744a6817-2b8a-5b6a-a717-8932a5863c9f","Path":"C:\\ProgramData\\Docker\\windowsfilter\\21a39c2b74ff220eac42f6f96d6097a7ef0feb192c1a77c0e88068cd10207d33"},{"ID":"ee281c98-febf-545b-bd51-8aec0a88f617","Path":"C:\\ProgramData\\Docker\\windowsfilter\\62439684561a3d30068cae2c804512984637d4c8b489f6f7cbcb5c8fed588af5"},{"ID":"f023cffb-ac18-57fe-9894-a2f1798fd0b0","Path":"C:\\ProgramData\\Docker\\windowsfilter\\1354f5a762901ec48bcf6a3ca8aab615bc305e91315e6e77fdf2c8fee5d587a2"}],"HostName":"99a306c2336a","HvPartition":true,"EndpointList":["2ce5269d-8776-4e84-8b37-4d99fa0a9f7b"],"HvRuntime":{"ImagePath":"C:\\ProgramData\\Docker\\windowsfilter\\62439684561a3d30068cae2c804512984637d4c8b489f6f7cbcb5c8fed588af5\\UtilityVM"},"AllowUnqualifiedDNSQuery":true}.
PS C:\WINDOWS\system32> systeminfo
Host Name: XXXX
OS Name: Microsoft Windows 10 Pro
OS Version: 10.0.16299 N/A Build 16299
OS Manufacturer: Microsoft Corporation
OS Configuration: Standalone Workstation
OS Build Type: Multiprocessor Free
Registered Owner: '#.lp
Registered Organization:
Product ID: XXXXXXXXXXXXXXXXXXXXXXXXXXX
Original Install Date: 10/12/2017, 23:15:17
System Boot Time: 06/01/2018, 13:53:55
System Manufacturer: System manufacturer
System Model: System Product Name
System Type: x64-based PC
Processor(s): 1 Processor(s) Installed.
[01]: Intel64 Family 6 Model 15 Stepping 11 GenuineIntel ~2401 Mhz
BIOS Version: American Megatrends Inc. 0902 , 27/07/2011
Windows Directory: C:\WINDOWS
System Directory: C:\WINDOWS\system32
Boot Device: \Device\HarddiskVolume1
System Locale: en-gb;English (United Kingdom)
Input Locale: en-gb;English (United Kingdom)
Time Zone: (UTC+00:00) Dublin, Edinburgh, Lisbon, London
Total Physical Memory: 8,191 MB
Available Physical Memory: 2,209 MB
Virtual Memory: Max Size: 16,383 MB
Virtual Memory: Available: 4,745 MB
Virtual Memory: In Use: 11,638 MB
Page File Location(s): C:\pagefile.sys
Domain: WORKGROUP
Logon Server: \\XXXXX
Hotfix(s): 7 Hotfix(s) Installed.
[01]: KB4048951
[02]: KB4053577
[03]: KB4054022
[04]: KB4055237
[05]: KB4056887
[06]: KB4058043
[07]: KB4054517
Network Card(s): 5 NIC(s) Installed.
[01]: TunnelBear Adapter V9
Connection Name: Ethernet
Status: Media disconnected
[02]: Qualcomm Atheros AR8131 PCI-E Gigabit Ethernet Controller (NDIS 6.30)
Connection Name: Local Area Connection
Status: Media disconnected
[03]: Compact Wireless-G USB Network Adapter
Connection Name: Wi-Fi
DHCP Enabled: Yes
DHCP Server: XXXXX
IP address(es)
[01]: XXX
[02]: XXX
[04]: Hyper-V Virtual Ethernet Adapter
Connection Name: vEthernet (Default Switch)
DHCP Enabled: Yes
DHCP Server: 255.255.255.255
IP address(es)
[01]: X
[02]: X
[05]: Hyper-V Virtual Ethernet Adapter
Connection Name: vEthernet (nat)
DHCP Enabled: Yes
DHCP Server: 255.255.255.255
IP address(es)
[01]: X
[02]: X
Hyper-V Requirements: VM Monitor Mode Extensions: Yes
Virtualization Enabled In Firmware: Yes
Second Level Address Translation: No
Data Execution Prevention Available: Yes
Update 2
Still getting the same error, any idea?
PS C:\Users\'#.lp> docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
e46172273a4e: Pull complete
61703422ec93: Pull complete
a17b8d9caad6: Pull complete
2dccc7619f71: Pull complete
Digest: sha256:41a65640635299bab090f783209c1e3a3f11934cf7756b09cb2f1e02147c6ed8
Status: Downloaded newer image for hello-world:latest
C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: CreateComputeSystem 755110bc7813700701f2325c921fad7a4220c8ff91d620ac51e258cb8b1ab700: No hypervisor is present on this system.
(extra info: {"SystemType":"Container","Name":"755110bc7813700701f2325c921fad7a4220c8ff91d620ac51e258cb8b1ab700","Owner":"docker","IgnoreFlushesDuringBoot":true,"LayerFolderPath":"C:\\ProgramData\\Docker\\windowsfilter\\755110bc7813700701f2325c921fad7a4220c8ff91d620ac51e258cb8b1ab700","Layers":[{"ID":"535189fb-71a2-598a-bd98-f711c29cf301","Path":"C:\\ProgramData\\Docker\\windowsfilter\\5e4cc131c334b8171b269003b9659ba578f9528372dd28054624d0bbde003b4f"},{"ID":"93d17dd0-2837-5522-a207-2b9e009a9d2b","Path":"C:\\ProgramData\\Docker\\windowsfilter\\87d235bd8d5ca1534f7396bf90d96ee9012875f8ae0e56556af19ebce73cdf80"},{"ID":"6899fe53-2cd7-5ec6-8edc-bf8859eea3e7","Path":"C:\\ProgramData\\Docker\\windowsfilter\\f75a64ae1fe066c392738bc643e1f49f1f0ee0bce4214c8655714b7386cdc3fc"},{"ID":"efbc003d-b691-5d30-ad65-d7dff28ca9e8","Path":"C:\\ProgramData\\Docker\\windowsfilter\\74033dce6b43107101f831d96c6bebe0ceb1df34f8e5c82421ee3f296b20a70c"}],"HostName":"755110bc7813","HvPartition":true,"EndpointList":["93c1c71e-11b5-49d3-82fd-d467d9b625b6"],"HvRuntime":{"ImagePath":"C:\\ProgramData\\Docker\\windowsfilter\\f75a64ae1fe066c392738bc643e1f49f1f0ee0bce4214c8655714b7386cdc3fc\\UtilityVM"},"AllowUnqualifiedDNSQuery":true}).
PS C:\Users\'#.lp> docker --version
Docker version 18.09.2, build 6247962
PS C:\Users\'#.lp>
Here is what worked for me: Open command prompt as admin and run
bcdedit /set hypervisorlaunchtype auto
and then reboot
What had happened:
I had to start an Android emulator, and Android Studio said "Emulator is incompatible with Hyper-V", so it ran this command to disable Hyper-V: bcdedit /set hypervisorlaunchtype off
These steps fixed it:
1. bcdedit /set hypervisorlaunchtype auto
2. reboot computer
3. docker run hello-world
If both docker and Hyper-V are installed, try to recreate the image in docker. It worked for me.
You can check the status of Hyper-V on the system by typing the following command in PowerShell:
systeminfo
You should also switch to Windows containers in docker if you haven't already.
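A quicker check, as a sketch (run from an elevated PowerShell; -Context just prints the lines that follow the match):
systeminfo | Select-String "Hyper-V" -Context 0,4
Get-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V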
Had the same problem.
Had to enable virtualization in the BIOS to solve it.
If you're attempting to run Docker for Windows inside a Parallels virtual machine, you must enable 'Nested Virtualization'.
https://kb.parallels.com/en/116239
This is only available in the Pro and Business Editions. I had to upgrade my version to support this as I was running Desktop.
If you are running Docker in a VM, you may need to look into "Nested Virtualization": virtualization needs to be exposed from the physical server to the VM.
For example, to expose virtualization on the Hyper-V platform through PowerShell:
Set-VMProcessor -VMName <VMName> -ExposeVirtualizationExtensions $true
see the link:
Run Hyper-V in a Virtual Machine with Nested Virtualization
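Note that the VM has to be turned off before the setting can be changed; a sketch of the full sequence (the VM name is a placeholder):
Stop-VM -Name <VMName>
Set-VMProcessor -VMName <VMName> -ExposeVirtualizationExtensions $true
Start-VM -Name <VMName>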
I faced the same issue, and it was resolved after upgrading my Windows to the latest version.
I did everything as suggested on this post and others to no avail. What did work for me was the following:
Turn Windows features OFF: Hyper-V and Containers
Force a Windows update to Windows 10 Pro ver. 1803
The update completed. Then when I started docker it asked me if I wanted to enable Hyper-V and Containers. I answered yes. The machine rebooted twice.
After this everything worked perfectly. Unfortunately I cannot say for sure if point 1 or point 2 or both together fixed the issue. I would suggest trying point 1 above first, followed by a reboot, then starting docker. I suspect this, rather than forcing an update to Windows 1803, will fix the problem.
For VirtualBox users, you need to enable nested virtualization:
VM -> Settings -> System -> Processor -> Enable Nested VT-x/AMD-V
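The same setting can also be toggled from the command line with VBoxManage (requires VirtualBox 6.0 or newer, and the VM must be powered off; the VM name is a placeholder):
VBoxManage modifyvm "my-vm" --nested-hw-virt on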
Run the following command in Windows PowerShell:
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
If it requires a restart, just follow the steps.
For more info or options, please check the docs from Microsoft.
Step 1:
Uninstall Docker.
Step 2:
Open "Turn windows features on or off" from Control Panel.
Uncheck both features - "Containers" and "Hyper-V", if they are checked.
Step 3:
Reboot PC
Step 4:
Check both features - "Containers" and "Hyper-V".
Step 5:
Reboot PC.
Step 6:
Install docker and execute docker run hello-world
BIOS-level virtualization is a must.
The Hyper-V and Containers Windows features must be enabled.
The following hardware prerequisites are required to successfully run Client Hyper-V on Windows 10:
64-bit processor with Second Level Address Translation (SLAT)
4GB system RAM
BIOS-level hardware virtualization support must be enabled in the BIOS settings.
https://docs.docker.com/docker-for-windows/install/

Mobile devices under Mac OS X to connect to Docker

I'm trying to connect the devices I have attached to my laptop to my docker instances.
Concretely, I have 4 devices (two iPhones, two Android) and I would like to be able to start 4 docker instances and connect each device to one instance.
What I expected to do is as simple as on Ubuntu:
docker run --privileged -v /dev/bus/usb:/dev/bus/usb -d -P my-android:0.0.1
But my host OS is Mac OS X, and so are the instances I'm creating, because I need access to the instruments tool.
So far I've read that under Mac OS X, devices are connected directly through USB without being mounted.
This is what I get when I search for the iPhone device:
iPhone USB:
Type: Ethernet
BSD Device Name: en6
IPv4:
Configuration Method: DHCP
IPv6:
Configuration Method: Automatic
Proxies:
Exceptions List: *.local, 169.254/16
FTP Passive Mode: Yes
Do you know how I can connect the devices to the docker instances?
Thanks!!!!
I got this working with docker-machine on VirtualBox, with the VirtualBox Extension Pack installed (it provides support for USB 2.0 and USB 3.0 devices).
Have the mobile phone connected to the host system:
$ ioreg -p IOUSB | grep SAMSUNG
+-o SAMSUNG_Android@14100000 <class AppleUSBDevice, id 0x100000c66, registered, matched, active, busy 0 (13 ms), retain 34>
Create a docker machine with the virtualbox driver (I've named it base):
docker-machine create --driver virtualbox base
Stop the machine so the USB controller can be enabled on the VM:
docker-machine stop base
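While the machine is stopped, switch on the USB controller for the VM, either in the VirtualBox GUI (Settings -> Ports -> USB) or, as a sketch with VBoxManage (machine named base; EHCI requires the Extension Pack):
VBoxManage modifyvm base --usb on --usbehci on
Then start the machine again: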
docker-machine start base
Activate the base VM as the docker host:
eval $(docker-machine env base)
Start an ubuntu container with the USB devices mounted:
docker run -it --rm -v /dev/bus/usb:/dev/bus/usb ubuntu /bin/bash
Install usbutils just to demonstrate with lsusb that the Android device is connected:
root@ce1e4be0bb73:/# apt-get update && apt-get install -y usbutils
First run of lsusb (the device did not show up):
root@ce1e4be0bb73:/# lsusb
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
To make the device show up I had to unplug and replug my phone. Second run of lsusb:
root@ce1e4be0bb73:/# lsusb
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 002: ID 04e8:6860 Samsung Electronics Co., Ltd Galaxy (MTP)
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
