Can I install Docker on an ARMv8-based Synology NAS?

I have a Synology DiskStation DS118 (it appears to use an ARMv8 processor).
No Docker package turns up when searching within Package Manager.
I found an article, but its link to Synology packages only lists x64 packages, and the article says Docker does not work on ARM.
Yet various articles suggest Docker is available for ARMv8 platforms:
https://github.com/docker-library/official-images#architectures-other-than-amd64
and there is a link to unofficial images:
https://hub.docker.com/u/arm64v8/
But aren't these just container images rather than Docker itself?
So is it possible to install Docker on my Synology NAS DS118? I need it to test a Dockerfile for my application.

The answer is YES. Any ARM-based Synology NAS supports Docker; not completely, but it can be enough.
Follow the steps below to install docker/dockerd on an ARM Synology NAS.
Download a static Docker binary from https://download.docker.com/linux/static/stable/ . Choose the right build for your ARM chip; aarch64 will most likely be the one for your Synology NAS. You can try an old version such as https://download.docker.com/linux/static/stable/aarch64/docker-17.09.0-ce.tgz , although newer versions could work too.
tar xzvf /path/to/docker-17.09.0-ce.tgz
sudo cp docker/* /usr/bin/
Create the /etc/docker/daemon.json configuration file with the following contents:
{
    "storage-driver": "vfs",
    "iptables": false,
    "bridge": "none"
}
sudo dockerd &
sudo docker run -d --network=host portainer/portainer:linux-arm64
Please note: you need to set the storage driver to vfs and turn iptables and the bridge off because of a Linux kernel limitation on Synology NAS devices, and for the same reason you need to run containers with --network=host. This is unusual, but necessary.
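To sanity-check the install, something like the following should work (a minimal check; arm64v8/hello-world is a standard Docker Hub test image, not part of the original steps):
# the daemon should report Storage Driver: vfs
sudo docker info
# host networking is required here, as noted above
sudo docker run --rm --network=host arm64v8/hello-world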
Alternatively, you can try this automatic install script:
https://raw.githubusercontent.com/wdmomoxx/catdriver/master/install-docker.sh

I have found a ready-made script for installing docker and docker-compose on an ARM NAS:
https://wiki.servarr.com/docker-arm-synology
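If you only need docker-compose and already have Python on the NAS, a minimal manual alternative to the script might be the following sketch (an assumption on my part; pip and build tools are not a given on Synology):
# docker-compose v1 is a Python package, so pip can install it on ARM
sudo python3 -m pip install docker-compose
docker-compose version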

In the GitHub project docker-on-arm you can read:
No official Docker images work on the ARM architecture, because they contain binaries built for x64 (regular PCs).
So you need to get the application's source and compile it for the ARM architecture if you want to run it there.
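As an aside (not from this answer): newer Docker releases can also cross-build ARM images on an x64 machine with buildx, assuming Docker 19.03+ with QEMU binfmt support set up; a sketch, where myrepo/myimage is a hypothetical image name:
# build an ARM variant on an x64 host and push it to a registry
docker buildx build --platform linux/arm64 -t myrepo/myimage:arm64 --push .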

Mounted volumes are only partially appearing and empty using docker-compose on WSL

Here is my docker-compose file (sorry for the images, but the WSL terminal won't let me copy-paste indented text):
The intention is that external_stuff contains my mounts directory. When I look in the mounts directory I clearly see my drives:
However, when I run docker-compose up I only see a single folder ("c") rather than all of my drives, and when I navigate into that folder it appears empty:
I tried running sudo -E docker-compose up, but that makes no difference.
What's going on here, and how do I fix it?
My system:
Docker Desktop version 2.1.0.5
Windows build 1903 / OS build 18362.476
I think I'm running WSL 1, but I really have no idea; if I run wsl -l from PowerShell it just spits out a bunch of command-line options.
I'm running Ubuntu 18.04.2 LTS directly from the "Ubuntu" app in Windows.
Dockerfile:
FROM python:3
Docker Desktop runs in a VM, and you need to share your drives with it.
When running Docker Desktop through WSL, you still need to share the drives you are using.
For this, simply go to Docker Desktop Settings > Shared Drives and allow sharing of your drives.
Then you can work with Docker Desktop through WSL, using Linux commands, Linux paths, etc.
Disclaimer: WSL and Docker Desktop are really unstable together, with shared volumes, permissions, inotify events, etc. You can find more information about these problems in the answer to this question: Docker is not recompiling upon changing anything in angular project in windows
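A related gotcha worth checking (not covered in the answer above): under WSL 1 your drives mount at /mnt/c, while Docker Desktop expects paths like /c. A common workaround is to remount the drives at the root via /etc/wsl.conf, then restart WSL; a sketch:
[automount]
root = "/"
options = "metadata"
After restarting WSL, your C: drive appears at /c, which matches the volume paths Docker Desktop expects.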

Is there a way to get GPU support without nvidia-docker?

I am trying to get GPU support in my container without nvidia-docker.
I know that with nvidia-docker I just have to use --runtime=nvidia, but my current circumstances do not allow using nvidia-docker.
I tried installing the NVIDIA driver, CUDA, and cuDNN in my container, but it fails.
How can I use TensorFlow with the GPU in my container without nvidia-docker?
You can use x11docker.
Running a Docker image on X with GPU support is as simple as:
x11docker --gpu imagename
You'll be happy to know that recent Docker versions (19.03 and later) come with native support for NVIDIA GPUs: you pass the --gpus flag to expose them to a container. See - How to use GPU a docker container
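For instance, a minimal smoke test, assuming Docker 19.03+ and the NVIDIA container toolkit are installed on the host:
# expose all GPUs to the container and run nvidia-smi inside it
docker run --rm --gpus all nvidia/cuda:10.0-base nvidia-smi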
Earlier, you had to install nvidia-docker, which was plain Docker with a thin layer of abstraction for NVIDIA GPUs. See - Nvidia Docker
You cannot simply install NVIDIA drivers in a Docker container; the container must have access to the hardware. Though I'm not certain, device and volume mounts might help you with that. See - https://docs.docker.com/storage/
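Concretely, the pre-nvidia-docker workaround was to pass the NVIDIA device nodes through and mount the host's driver libraries into the container. A sketch only; the device names are standard, but the library path and image tag are assumptions that vary by host:
docker run --rm \
  --device /dev/nvidia0 --device /dev/nvidiactl --device /dev/nvidia-uvm \
  -v /usr/lib/nvidia-384:/usr/local/nvidia/lib64:ro \
  -e LD_LIBRARY_PATH=/usr/local/nvidia/lib64 \
  tensorflow/tensorflow:1.12.0-gpu \
  python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"
This is fragile (the library versions inside the container must match the host driver), which is exactly the problem nvidia-docker was created to solve.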
You can use Anaconda to install and use tensorflow-gpu.
Make sure you have the latest NVIDIA drivers installed.
Install Anaconda 2 or 3 from the official site:
https://www.anaconda.com/distribution/
Create a new environment and install tensorflow-gpu and cudatoolkit:
$ conda create -n tf-gpu tensorflow-gpu python cudnn cudatoolkit
You can also pin package versions, e.g.:
$ conda create -n tf-gpu tensorflow-gpu python=3.5 cudnn cudatoolkit=8
Please do check that your hardware has the minimum compute capability required by the version of CUDA you are/will be using.
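Once the environment is created, a quick way to confirm TensorFlow can see the GPU (TF 1.x API):
conda activate tf-gpu
python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"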
If you can't pass --runtime=nvidia as a command-line option (e.g. with docker-compose), you can set the default runtime in the Docker daemon config file /etc/docker/daemon.json:
{
    "default-runtime": "nvidia"
}
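Note that for this to work the nvidia runtime must also be registered with the daemon. Installing nvidia-container-runtime normally does this for you; declared explicitly, the full file looks like this (per NVIDIA's documentation):
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
Restart the Docker daemon afterwards for the change to take effect.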

How to install an NVIDIA driver in a Windows Docker container?

I am doing a feasibility study on Docker (Windows containers only, not Linux). I am using Windows Server 1809 with container support on AWS, which comes bundled with Docker by default, on a g3 instance (which uses a Tesla M60 GPU).
1)
I know nvidia-docker is not available for Windows. I want to confirm that default Docker won't support the GPU either; to check, I tried to install the NVIDIA driver inside a Docker container. The Dockerfile I am using is shown below:
FROM mcr.microsoft.com/windows/servercore:ltsc2019
COPY nvidia-driver-folder nvidia-driver-folder
WORKDIR /nvidia-driver-folder
RUN setup.exe -s -clean -noreboot -noeula
The nvidia-driver-folder contains the setup.exe that installs the driver. I tried the same command inside the container, that is:
docker run -it sampleapp cmd
Then, inside the container:
setup.exe -s -clean -noreboot -noeula
After that I checked Program Files inside the container: no NVIDIA-related folder had been created. On a normal system (my local machine), even if no GPU is present, the command above will create an "NVIDIA Corporation" folder.
2)
Is there any other means of getting a GPU working inside a Windows Docker container?
Please help me with the above two questions.

How do I build an ARM version of a Dockerfile for a QNAP ARM-based server?

I have a Dockerfile for my application and I use Docker Hub to build it.
This works fine on a Synology DS218+ DiskStation, which is Intel-based.
QNAP supports Docker on both Intel and ARM devices with its Container Station software, so I purchased a TS-131P to test this out, but my image failed with an exec format error. Apparently I have to build an ARM version of the image, but how do I do this?
Can I build the image on the QNAP itself somehow?
Update
My base image was openjdk:8-jre-alpine, and I found an arm32 equivalent on Docker Hub, https://hub.docker.com/r/arm32v6/openjdk/ , so I:
Created a new BitBucket repo
Copied over the Dockerfile
Changed the first line of the Dockerfile to FROM arm32v6/openjdk:8-jre-alpine
Created a new Automated Build on Docker Hub linked to this repo
But the build now fails on the second line
RUN apk --no-cache add \
curl \
tini
with
standard_init_linux.go:190: exec user process caused "exec format error"
Since I am using an ARM base image, I assume apk should already be compiled for ARM; or do I need to tell Docker Hub to build on ARM rather than Intel?
The simple answer is that you have to build an ARM image on an ARM machine, so I built it on the ARM NAS itself, since it supports Docker. This is what I did:
Ensure Container Station is running on the NAS server
ssh into the NAS server (from a PC)
docker build . (run in the directory containing the Dockerfile)
docker login (enter username and password)
docker images (to get the imageId of the freshly built image)
docker tag imageId repoName/imageName:latest
docker push repoName/imageName:latest
and this was enough to make the arm32 version available to be installed on the arm32 machine.
Currently I have two separate images, one for Intel and one for ARM. I understand that there is a way to combine multiple images into a single multi-architecture image, but I have not attempted that yet; see the sketch below.
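For reference, that single multi-architecture image is a manifest list. A sketch with the docker manifest command (experimental CLI at the time; the :amd64 and :arm32 tags are hypothetical per-architecture tags that must already be pushed):
# stitch the per-arch images together under one tag
docker manifest create repoName/imageName:latest \
  repoName/imageName:amd64 \
  repoName/imageName:arm32
docker manifest push repoName/imageName:latest
After this, docker pull repoName/imageName:latest resolves to the right architecture on each machine.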

All images and containers disappeared after host kernel downgrade

Good day.
The host machine had kernel 3.16 installed. After installing kernel 3.14 via a deb package, I lost all my Docker images and containers: the output of "docker images" and "docker ps -a" is empty. Is this normal behavior for Docker?
Thanks.
I will answer myself; it may be useful to someone.
Docker used the "aufs" storage driver on the old kernel, which requires the "aufs.ko" module to be loaded. aufs support was not enabled in the new kernel, so Docker fell back to the "devicemapper" storage driver, under which the old images and containers are not visible.
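You can check which storage driver the daemon is currently using with:
# prints e.g. "Storage Driver: aufs" or "Storage Driver: devicemapper"
docker info | grep "Storage Driver"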
To actually fix it on Ubuntu, run
sudo apt-get -y install linux-image-extra-$(uname -r)
This installs the aufs kernel module that Docker requires but which can be lost during kernel upgrades. It's not clear why the package manager misses this dependency.
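To verify the module is back after installing the package (a quick check, not part of the original answer):
sudo modprobe aufs
grep aufs /proc/filesystems   # should print: nodev   aufs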
As Denis Pitikov points out, images and containers can disappear if the storage driver that created them (e.g. aufs) is no longer available.
When run on Ubuntu 14.04, the current Docker install script automatically installs the linux-image-extra-* package suitable for your current kernel version. This includes the aufs kernel module.
On some systems the linux-image-generic package may not be installed. On those systems, the next time you run a dist-upgrade the kernel will be upgraded, but the corresponding linux-image-extra-* package will not be installed. When you reboot you won't have the aufs module, and your containers and images will appear to have disappeared.
To fix it: first, check that you're running a generic kernel already:
$ uname -r
3.13.0-49-generic
If so, consider installing linux-image-generic:
$ apt-get install linux-image-generic
That will upgrade your kernel to the version required by that package and will install the -extra package too.
