I recently started using a framework that only runs on Linux, but I am used to working on Windows. I know that nvidia-docker doesn't support Windows, so I have to either dual-boot the machine or run Linux in a virtual machine, and I would prefer the latter. So I want to know: can a virtual machine use the GPU of the host or workstation? What should I do to solve this problem? A better method would also be welcome. Thanks!
For VMware: https://www.dell.com/support/article/en-us/sln288103/how-to-enable-a-vmware-virtual-machine-for-gpu-pass-through?lang=en
Hyper-V can also pass a GPU through: https://social.technet.microsoft.com/Forums/en-US/b9e21b8f-8774-49c2-b499-b2b8ff2a41a2/hyperv-windows-10-pci-passthrough?forum=win10itprovirt
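Once passthrough is configured and the NVIDIA driver is installed in the Linux guest, it's worth confirming from inside the guest that the GPU is actually visible to the driver. A minimal sketch, assuming the pynvml bindings are installed (pip install nvidia-ml-py3); none of this comes from the linked articles:

```python
# Minimal check from inside the Linux guest that the passed-through GPU is visible.
# Assumes the NVIDIA driver and the pynvml bindings (pip install nvidia-ml-py3) are installed.
import pynvml

pynvml.nvmlInit()
count = pynvml.nvmlDeviceGetCount()
print(f"GPUs visible to the driver: {count}")
for i in range(count):
    handle = pynvml.nvmlDeviceGetHandleByIndex(i)
    name = pynvml.nvmlDeviceGetName(handle)
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
    print(f"  {i}: {name}, {mem.total // (1024 ** 2)} MiB total")
pynvml.nvmlShutdown()
```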
Is it possible to install Docker Desktop on a Windows 10 virtual machine running on VMware ESXi?
I am trying to install Docker Desktop on my VMware virtual machine running Windows 10.
I installed WSL2 support, but at the end of the installation Docker crashes with the following error:
Docker desktop 4.0.1
Installation failed
Component CommunityIstaller.ServiceAction failed to start services: The service did not respond to the start or control request in a timely fashion
I have done several tests but I cannot avoid this crash in any way.
The operating system build meets the minimum requirements for installing Docker.
However, I noticed that Hyper-V is not enabled in the Windows features. Could this be the problem?
I suspect it may be a nested virtualization problem, since I am installing Docker inside a VM. Is that possible?
How can I solve this? (Or do you think I would avoid the problem by using a Linux virtual machine instead?)
Does your host machine expose all the CPU flags needed for 'efficient' nested virtualization? I wouldn't really recommend a third-layer install of Docker, as the final container is then virtual, on paravirtual (WSL2), on virtual (Hyper-V), on virtual (ESXi). I strongly suspect the performance will be terrible.
And yes, you still need Hyper-V; it's a requirement. Since you say it's not available in the Windows features, I assume you're running Windows 10 Home? Then sorry, you need at least Windows 10 Pro for Hyper-V support.
But since you're running an ESXi host, take the better-performing route: install any Linux distro of your choice and install Docker there. If you want to use it with Visual Studio and the like, you can still debug remotely, and it will perform better than an even more deeply nested virtualized Windows + WSL2 setup. And by the way, if it's the GUI you're after, simply install the free Visual Studio Code: its Docker tooling offers many configuration and monitoring options in a GUI without forcing you into such deep nesting.
Yes, it's definitely possible. I'd first check that hardware-assisted virtualization (if available) is enabled for the VM. If it is, make sure you've satisfied the rest of the requirements for the WSL2 backend. If you're still having issues, try an older version of Docker Desktop and upgrade from there.
I have built a machine-learning computer with two NVIDIA RTX 2070 SUPER GPUs connected with an SLI bridge, running Windows (SLI verified in the NVIDIA Control Panel).
I have benchmarked the system using http://ai-benchmark.com/alpha and got impressive results.
To take full advantage of libraries that use the GPU for scientific tasks (e.g. cuDF), I created a TensorFlow Linux container:
https://www.tensorflow.org/install/docker
using the “latest-gpu-py3-jupyter” tag.
I then connected PyCharm to this container and configured it as the project's interpreter (I mounted the host project folder in the container).
When I run the same benchmark on the container, I get the error:
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[50,56,56,144] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
[[node MobilenetV2/expanded_conv_2/depthwise/BatchNorm/FusedBatchNorm (defined at usr/local/lib/python3.6/dist-packages/ai_benchmark/utils.py:238) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
This error relates to the exhaustion of GPU memory inside the container.
Why does the GPU on the Windows host handle the computation successfully, while the GPU in the Linux container runs out of memory?
What makes the difference? Is it related to memory allocation in the container?
Here is an awesome link from docker.com that explains why your desired workflow won't work. It wouldn't work with RAPIDS cuDF either. Docker Desktop works using Hyper-V, which isolates the hardware and doesn't expose the GPU the way the Linux drivers expect. Also, nvidia-docker is Linux only.
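A quick way to confirm this from inside the container is to ask TensorFlow which devices it can see; if no GPU shows up, the benchmark silently falls back to the CPU, which matches the OOM on the "cpu" allocator and device:CPU:0 in your traceback. A minimal diagnostic sketch, assuming the TF 1.x API shipped in the latest-gpu-py3-jupyter image at the time:

```python
# Diagnostic sketch (TF 1.x API): if no GPU is listed here, computation is
# falling back to the CPU allocator inside the container.
import tensorflow as tf
from tensorflow.python.client import device_lib

print("Built with CUDA:", tf.test.is_built_with_cuda())
print("GPU available:  ", tf.test.is_gpu_available())
print("Local devices:  ", [d.name for d in device_lib.list_local_devices()])
```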
I can tell you that RAPIDS (cuDF) currently doesn't support this setup either. Running Windows on top of a Linux host, however, does work better. For both TensorFlow and cuDF, I strongly recommend that you use (or dual-boot) one of the recommended OSes as your host OS, listed here: https://rapids.ai/start.html#prerequisites. If you need Windows in your workflow, you can run it as a guest on top of your Linux host.
There is a chance that in the future a WSL version will allow you to run RAPIDS on Windows, letting you craft your own on-Windows solution.
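For reference, on a supported Linux host with the NVIDIA driver and container toolkit installed, launching the same GPU-enabled TensorFlow image might look like the sketch below (using the Docker SDK for Python; the mount path and port mapping are placeholders I made up, not something from your setup):

```python
# Sketch only: assumes a Linux host with the NVIDIA driver, the NVIDIA container
# toolkit, and the Docker SDK for Python (pip install docker). Paths are placeholders.
import docker

client = docker.from_env()
container = client.containers.run(
    "tensorflow/tensorflow:latest-gpu-py3-jupyter",
    detach=True,
    # Equivalent of `docker run --gpus all`: request all GPUs with the "gpu" capability.
    device_requests=[docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])],
    ports={"8888/tcp": 8888},                                             # Jupyter
    volumes={"/home/me/project": {"bind": "/tf/project", "mode": "rw"}},  # hypothetical mount
)
print("Started container", container.short_id)
```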
Hope this helps!
I am currently trying to understand and learn Docker. I have an app (an .exe file) and I would like to run it on either Linux or OS X by creating a Docker container. I've searched online but can't find anything that allows this, and I don't know Docker well enough to improvise something. Is this possible? Would I have to use Boot2Docker? Could you please point me in the right direction? Thanks in advance; any help is appreciated.
Docker allows you to isolate applications running on a host; it does not provide a different OS to run those applications on (with the exception of the desktop client products, which include a Linux VM, since Docker was originally a Linux-only tool). If the application runs on Linux, it can typically run inside a container. If the application cannot run on Linux, it will not run inside a Linux container.
An .exe is a Windows binary format, which is incompatible with Linux (unless you run it inside an emulator or a VM). I'm not aware of any easy way to accomplish your goal. If you want to run this binary, skip Docker on Linux and install a Windows VM on your host.
As other answers have said, Docker doesn't emulate the entire Windows OS that you would need in order to run an .exe executable. However, there's another tool that may do something close to what you want: Wine, from WineHQ. An abbreviated summary from their site:
Wine is a compatibility layer capable of running Windows applications on several operating systems, such as Linux and macOS. Instead of simulating internal Windows logic like a virtual machine or emulator, Wine translates Windows API calls on-the-fly, eliminating the performance and memory penalties of other methods and allowing you to cleanly integrate Windows applications into your desktop.
(I don't work with or for WineHQ, nor have I actually used it yet. I've only heard of it, and it seems like it might be a way to run a Windows program inside a lightweight container.)
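For completeness, if Wine is installed on the Linux machine (or baked into a Linux container image), launching the executable from a script is a one-liner. A minimal sketch, with a placeholder path and assuming wine is on the PATH:

```python
# Hedged sketch: assumes Wine is installed and "/path/to/myapp.exe" is a
# placeholder for the actual Windows binary.
import subprocess

subprocess.run(["wine", "/path/to/myapp.exe"], check=True)
```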
I notice that NVIDIA has support for GPUs in Docker, but I believe this is Linux-only at the moment. Has anyone got it working on Windows 10?
In particular, I'm hoping to get access to it for machine learning applications.
https://github.com/NVIDIA/nvidia-docker
Since Docker uses VirtualBox to run on Windows, and VirtualBox will not expose CUDA to the guest without PCI passthrough, I don't think it will be possible to do this the way you're imagining.
As of January 2018, it looks like no one has managed to make it work yet.
Moreover, they say (#29, #197) it would require DDA (PCI passthrough), so theoretically it should be possible on Windows Server 2016, but not on Windows 10. Even for Windows Server 2016, though, I haven't found any success stories.
It seems that on Windows 10, Docker no longer uses VirtualBox, so it may work.
https://msdn.microsoft.com/en-us/virtualization/windowscontainers/quick_start/quick_start_windows_10
An answer using Delphi is preferred, but any solution would be helpful.
What I'd like to do is create an app that, when run from within VMware Player, creates a shared folder pointing to a known location on the host.
The VM will be running 32-bit Windows XP; the host will also be running Windows, probably Windows 7 x64.
There is a vmrun.exe utility that can be used to control the VM. See:
http://www.vmware.com/support/developer/vix-api/vix110_vmrun_command.pdf
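As a caveat, vmrun is driven from the host side rather than from inside the guest, and the shared-folder commands below come from the linked vmrun reference; the paths and share name are placeholders. A minimal sketch in Python (any language that can spawn a process would do, including Delphi):

```python
# Hedged sketch, run on the HOST: assumes vmrun.exe (VMware VIX) is installed and
# on the PATH. All paths and the share name below are placeholders.
import subprocess

VMX = r"C:\VMs\WinXP\WinXP.vmx"        # hypothetical path to the guest's .vmx file
SHARE_NAME = "hostshare"               # name the guest will see among its shared folders
HOST_PATH = r"C:\Users\me\SharedData"  # hypothetical host folder to expose

def vmrun(*args):
    """Invoke vmrun for VMware Player ('-T player') and raise if the command fails."""
    subprocess.run(["vmrun", "-T", "player", *args], check=True)

# Enable shared folders for the running VM, then add the folder.
vmrun("enableSharedFolders", VMX)
vmrun("addSharedFolder", VMX, SHARE_NAME, HOST_PATH)
```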
You need to think of your VMware Player virtualized hardware as an independent computer, running its own independent operating system on its own hardware. That's the way virtualization works!
Technically the HOST doesn't even know it's "running" the other computer, so it's not going to treat it differently. The same is true for the GUEST operating system: you are running a "vanilla" operating system, and it has no reason to treat its HOST computer differently; to the GUEST, the HOST is just another computer accessible through the local network.
That being said, you can re-write your question like this:
I'd like to create an app that, when run from one computer, creates a shared folder pointing to a known location on another computer. One computer would be running 32-bit Windows XP; the other would be running another version of Windows, probably Windows 7 x64.
The answer: of course you can't do that; it would be a security hole! If you were able to create the shared folder, anyone would be able to create one. Anyone could share any location on your machine!
To wrap this up: if you could run your application on the HOST rather than the GUEST, you might be able to use the VMware API to do something, but AFAIK the API is not available with the free VMware Player. Also, if you can run applications on both guest and host, you can do whatever you want.