I heard that Docker must run under 64-bit Linux. Is that correct?
We will be running Linux on a 32-bit ARM core. Can Docker run on this platform?
At the moment Docker only supports the amd64 architecture. According to this issue, the Docker developers will eventually provide support for other architectures:
https://github.com/docker/docker/issues/136
Docker requires a 64-bit installation regardless of your Ubuntu version.
https://docs.docker.com/installation/ubuntulinux/
https://docs.docker.com/installation/
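If you are unsure whether your system qualifies, a quick check (a sketch; uname works on any Linux, dpkg --print-architecture is Debian/Ubuntu specific):
uname -m                    # e.g. x86_64 (64-bit) or armv7l (32-bit ARM)
dpkg --print-architecture   # e.g. amd64 or armhf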
I am trying to get GPU support in my container without nvidia-docker.
I know that with nvidia-docker I just have to use --runtime=nvidia, but my current circumstances do not allow using nvidia-docker.
I tried installing the NVIDIA driver, CUDA, and cuDNN in my container, but it fails.
How can I use TensorFlow with GPU support in my container without nvidia-docker?
You can use x11docker
Running a Docker image on X with GPU access is as simple as:
x11docker --gpu imagename
You'll be happy to know that the latest Docker version now comes with support for NVIDIA GPUs. You'll need to use the --device flag to expose your NVIDIA devices. See: How to use GPU in a Docker container.
Earlier, you had to install nvidia-docker, which was plain Docker with a thin layer of abstraction for NVIDIA GPUs. See: Nvidia Docker.
You cannot simply install NVIDIA drivers in a Docker container. The container must have access to the hardware. I'm not certain, but mounts might help you with that issue. See https://docs.docker.com/storage/
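For illustration, a rough sketch of exposing the NVIDIA device nodes with --device (the device paths and image name are assumptions for a typical single-GPU Linux host, and the CUDA/driver libraries still have to be available inside the image for TensorFlow to actually use the GPU):
docker run -it \
  --device /dev/nvidia0 \
  --device /dev/nvidiactl \
  --device /dev/nvidia-uvm \
  tensorflow/tensorflow:latest-gpu \
  python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"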
You can use Anaconda to install and use tensorflow-gpu.
Make sure you have the latest NVIDIA drivers installed.
Install Anaconda 2 or 3 from the official site:
https://www.anaconda.com/distribution/
Create a new environment and install tensorflow-gpu and cudatoolkit:
$ conda create -n tf-gpu tensorflow-gpu python cudnn cudatoolkit
You can also specify package versions, e.g.:
$ conda create -n tf-gpu tensorflow-gpu python=3.5 cudnn cudatoolkit=8
Please do check if your hardware has the minimum compute capability to support the version of CUDA that you are/will be using.
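Once the environment is created, a quick check sketch (assumes the environment name tf-gpu from the commands above):
$ conda activate tf-gpu
$ python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"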
If you can't pass --runtime=nvidia as a command-line option (e.g. with docker-compose), you can set the default runtime in the Docker daemon config file /etc/docker/daemon.json:
{
"default-runtime": "nvidia"
}
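After editing /etc/docker/daemon.json, restart the daemon so the change takes effect (assuming a systemd-based host):
sudo systemctl restart docker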
I have a Node.js server in Docker. On my Windows 10 64-bit machine it works fine, but when I try to run the image on my Raspberry Pi I get: standard_init_linux.go:190: exec user process caused "exec format error".
mariu5 in the Docker forum has a workaround, but I do not know what to do with it.
https://forums.docker.com/t/standard-init-linux-go-190-exec-user-process-caused-exec-format-error/49368/4
Where can I update the deployment.template.json file, and does the Raspberry Pi 3 Model B+ have an arm32 architecture?
You need to rebuild your image on the Raspberry Pi or find one that is compatible with it.
Perhaps this example might help:
https://github.com/hypriot/rpi-node-example-hello-world
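Alternatively, if you would rather keep building on your amd64 machine, newer Docker releases can cross-build for ARM with buildx, roughly like this sketch (the image name is a placeholder, and it assumes buildx with QEMU emulation is already set up):
docker buildx build --platform linux/arm/v7 -t yourname/yourimage:arm32 --push .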
The link you posted is not a workaround but rather a "you can't do that":
You have to run a Docker image that was built for a particular architecture on a Docker node that is running that same architecture. Take Docker out of the picture for a moment. You can't run a Linux application compiled and built on a Linux ARM machine on a Linux amd64 machine. You can't run a Linux application compiled and built on a Linux POWER machine on a Linux amd64 machine.
https://forums.docker.com/t/standard-init-linux-go-190-exec-user-process-caused-exec-format-error/49368/4
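Either way, you can check which architecture an image was built for before trying to run it (yourimage:tag is a placeholder):
docker image inspect --format '{{.Os}}/{{.Architecture}}' yourimage:tag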
I have a Synology DiskStation 118 (it appears to use an ARMv8 processor).
No Docker package is found when searching within Package Manager.
I found this article, but the link to Synology packages only has x64 packages, and the article says Docker does not work on ARM.
But it does seem from various articles that Docker is available for ARMv8 platforms:
https://github.com/docker-library/official-images#architectures-other-than-amd64
and there is a link to unofficial images:
https://hub.docker.com/u/arm64v8/
But aren't these just container images rather than Docker itself?
So is it possible to install Docker on my Synology NAS 118? I need this to test a Dockerfile for my application.
The answer is YES. Any ARM-based Synology NAS supports Docker, not completely, but it can be enough.
Please follow the steps below to install docker/dockerd on an ARM Synology NAS.
Download the static Docker binaries from https://download.docker.com/linux/static/stable/. Choose the right build for your ARM chip; aarch64 will most likely be the one for your Synology NAS. You can try an old version such as https://download.docker.com/linux/static/stable/aarch64/docker-17.09.0-ce.tgz, although newer versions could work too.
tar xzvf /path/to/docker-17.09.0-ce.tgz
sudo cp docker/* /usr/bin/
Create the /etc/docker/daemon.json configuration file with the following contents:
{
"storage-driver": "vfs",
"iptables": false,
"bridge": "none"
}
sudo dockerd &
sudo docker run -d --network=host portainer/portainer:linux-arm64
Please note: you need to set the storage driver to vfs and turn iptables and the bridge off because of a Linux kernel limitation, and you need to run containers with --network=host.
This is unusual, but it is necessary due to Synology NAS kernel limitations.
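Once dockerd is up, a quick sanity check (assuming the binaries were copied to /usr/bin as above):
sudo docker version
sudo docker info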
Or you can try this automated script:
https://raw.githubusercontent.com/wdmomoxx/catdriver/master/install-docker.sh
I have found a ready-made script for installing docker and docker-compose on an ARM NAS:
https://wiki.servarr.com/docker-arm-synology
In the GitHub project "docker on arm" you can read:
No official Docker images work on the ARM architecture because they contain binaries built for x64 (regular PCs).
So you need to get the application's sources (or an ARM build of it) and build your image for the ARM architecture if you want to run the application.
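In practice that usually means basing your image on an ARM variant of the base image, roughly like this sketch (arm64v8 is the unofficial namespace linked above; the Node image and server.js entrypoint are just placeholders):
FROM arm64v8/node:10
WORKDIR /app
COPY . .
CMD ["node", "server.js"]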
On NVIDIA's developer page (https://devblogs.nvidia.com/nvidia-docker-gpu-server-application-deployment-made-easy/)
it states that nvidia-docker provides "driver-agnostic CUDA images".
I would just like to clarify whether this is only about the driver version, or whether it also applies to the OS.
For example:
Host = CentOS
Docker Image/Container = Ubuntu
Does using nvidia-docker provide a way to use the CentOS host's NVIDIA driver in the Ubuntu Docker container?
Currently I always keep two Dockerfiles, one for an Ubuntu host and one for a CentOS host, and manually mount /dev/nvidia0 and copy the library files (or install the driver) inside the Docker image.
I've already asked NVIDIA, but I'm still waiting for them to answer.
I'll be trying it myself too, but I thought I'd try my luck in case anyone on SO already knows the answer.
Thank you in advance guys.
I've tested this and it does work.
"driver-agnostic CUDA images" is not only limitted to different versions of the driver but also across different OS (binary)
Thank you.
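For reference, a minimal way to verify this yourself (a sketch; the image tag is just an example, and the nvidia/cuda images are typically Ubuntu-based, so running one on a CentOS host exercises exactly this case):
nvidia-docker run --rm nvidia/cuda:9.0-base nvidia-smi
If nvidia-smi prints the host's driver version and GPUs from inside the container, the host driver is being used.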
I'm building a platform that builds Docker images. To build the containers correctly, I need to know the intended OS type (Windows or Linux) as well as the kernel version the containers were intended to be built for.
Is there any way for me to use the Docker SDK to get the target kernel for a container or image?
The docker manifest inspect <IMAGE> command tells you the image's target architecture and OS. See: https://docs.docker.com/edge/engine/reference/commandline/manifest/#manifest-annotate
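For example (a sketch; ubuntu:latest is just an example of a multi-arch image):
docker manifest inspect ubuntu:latest
The output lists one manifest per platform, each with "architecture" and "os" fields (and an "os.version" for Windows images).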
For images running on Linux, the kernel should be version 3.10 or higher. I have no idea what the requirements are for images on Windows.