Good day.
The host machine had kernel 3.16 installed. After installing kernel 3.14 via a deb package, I lost all my Docker images and containers. The output of "docker images" and "docker ps -a" is empty. Is this normal Docker behavior?
Thanks.
I will answer my own question; it may be useful to someone.
Docker used the "aufs" storage driver on the old kernel, which requires the "aufs.ko" module to be loaded. The new kernel did not have aufs support enabled, so Docker fell back to the "devicemapper" storage driver. Images and containers created under one storage driver are not visible to the other.
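You can check which storage driver the daemon is currently using; this is just a quick sketch (the exact docker info output format varies between Docker versions):
$ docker info | grep "Storage Driver"
Storage Driver: devicemapper
If this no longer reports aufs, the old aufs-backed images will not show up in docker images.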
To actually fix it on Ubuntu, run
sudo apt-get -y install linux-image-extra-$(uname -r)
This installs the aufs kernel module that Docker requires; the module can be lost during kernel upgrades. I'm not sure why the package manager misses this dependency.
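To double-check that the module is actually available and loaded for the running kernel (a quick sketch; restart Docker afterwards so it picks aufs up again):
$ sudo modprobe aufs
$ lsmod | grep aufs
$ sudo service docker restart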
As Denis Pitikov points out, images and containers can disappear if the storage driver that created them (e.g. aufs) is no longer available.
When run on Ubuntu 14.04, the current Docker install script automatically installs the linux-image-extra-* package (suitable for your current kernel version). This includes the aufs kernel module.
On some systems, the linux-image-generic package may not be installed. On these systems, the next time you run a dist-upgrade, the kernel will be upgraded but the corresponding linux-image-extra-* will not be installed. When you reboot you won't have the aufs module, and your containers and images may have disappeared.
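A quick way to check whether the matching -extra package is installed for the kernel you are currently running (a sketch; package naming varies slightly between Ubuntu releases):
$ dpkg -l "linux-image-extra-$(uname -r)"
If this reports nothing installed, the aufs module is most likely missing for that kernel.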
To fix it: first, check that you're running a generic kernel already:
$ uname -r
3.13.0-49-generic
If so, consider installing linux-image-generic:
$ apt-get install linux-image-generic
That will upgrade your kernel to the version required by that package and will install the -extra package too.
Related
Firstly, I'm still a beginner with Docker.
I need to run multiple versions of TensorFlow, and each version requires a specific CUDA version.
My host operating system is Ubuntu 16.04.
I need multiple versions of CUDA on my OS, since I'm working on several projects that each require a different CUDA version. I tried to solve this with conda and virtual environments; after a while I gave up and started searching for alternatives.
Apparently virtual machines can't access the GPU; only certain GPU models support NVIDIA's official virtualization.
I have an NVIDIA 1080 GPU. I installed a fresh image of Ubuntu 16.04 and started working on Dockerfiles to create custom images for my projects.
I was trying to avoid Docker to keep complexity down, but after failing to install and run multiple CUDA versions side by side, I turned to it. Apparently you can't access CUDA from a Docker container unless the NVIDIA driver is installed on the host machine.
I'm still not sure whether I can run Docker containers with a different CUDA version than the one installed on my PC.
If that is the case, NVIDIA messed up big time. Usually, if there is no need for Docker, we avoid it to escape the extra complexity; when we need to work with multiple environments, and conda and virtual environments fail, we turn to Docker. So if NVIDIA limits a Docker container to the host's CUDA version, they effectively only let developers work on one project with special dependencies per operating system.
Please confirm whether I can run containers that each use a specific CUDA version.
Moreover, I would greatly appreciate it if someone could point me to a guide on using conda environments in Dockerfiles and running a conda env in a Docker container.
Having several CUDA versions is possible with Docker. Moreover, none of them needs to be on your host machine; you can have CUDA in a container, and that is, in my opinion, the best place for it.
To enable GPU support in container and make use of CUDA in it you need to have all of these installed:
Docker
(optional but recommended) docker-compose
NVIDIA Container Toolkit
NVIDIA GPU Driver
Once you've obtained these, you can simply grab one of the official tensorflow images (if the built-in Python version fits your needs), install your pip packages, and start working in minutes. CUDA is included in the container image; you don't need it on the host machine.
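For instance, with Docker 19.03+ and the NVIDIA Container Toolkit installed, you can start an interactive session in one of those images directly; the bind-mounted project directory and requirements.txt below are only placeholders:
docker run --rm -it --gpus all -v "$(pwd)":/workspace -w /workspace tensorflow/tensorflow:2.3.0-gpu bash
# inside the container CUDA is already present, so only your pip packages are needed:
pip install -r requirements.txt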
Here's an example docker-compose.yml to start a container with tensorflow-gpu. All the container does is test whether any GPU devices are available.
version: "2.3" # the only version where 'runtime' option is supported
services:
test:
image: tensorflow/tensorflow:2.3.0-gpu
# Make Docker create the container with NVIDIA Container Toolkit
# You don't need it if you set 'nvidia' as the default runtime in
# daemon.json.
runtime: nvidia
# the lines below are here just to test that TF can see GPUs
entrypoint:
- /usr/local/bin/python
- -c
command:
- "import tensorflow as tf; tf.test.is_gpu_available(cuda_only=False, min_cuda_compute_capability=None)"
Running this with docker-compose up, you should see a line with the GPU specs in it. It looks like this and appears at the end:
test_1 | 2021-01-23 11:02:46.500189: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/device:GPU:0 with 1624 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1050, pci bus id: 0000:01:00.0, compute capability: 6.1)
I am trying to get GPU support in my container without nvidia-docker.
I know that with nvidia-docker I just have to use --runtime=nvidia, but my current circumstances do not allow using nvidia-docker.
I tried installing the NVIDIA driver, CUDA, and cuDNN inside my container, but it fails.
How can I use TensorFlow with GPU support in my container without nvidia-docker?
You can use x11docker.
Running a Docker image on X with GPU access is as simple as:
x11docker --gpu imagename
You'll be happy to know that recent Docker versions (19.03 and later) come with native support for NVIDIA GPUs: you expose them to a container with the --gpus flag (this still relies on the NVIDIA container runtime being installed on the host). See - How to use GPU a docker container
Earlier, you had to install nvidia-docker, which was plain Docker with a thin layer of abstraction for NVIDIA GPUs. See - Nvidia Docker
You cannot simply install NVIDIA drivers inside a Docker container; the container must have access to the host's hardware. Though I'm not certain, device mappings and mounts might help you with that. See - https://docs.docker.com/storage/
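If you really cannot use nvidia-docker or the --gpus flag, one older and fragile workaround is to pass the NVIDIA device nodes into the container yourself and mount the host's driver libraries. This is only a sketch: the device nodes below exist on typical setups, but the library paths and the set of libnvidia-* files you need depend on your distribution and driver version.
docker run -it \
  --device /dev/nvidia0 --device /dev/nvidiactl --device /dev/nvidia-uvm \
  -v /usr/lib/x86_64-linux-gnu/libcuda.so.1:/usr/lib/x86_64-linux-gnu/libcuda.so.1:ro \
  tensorflow/tensorflow:2.3.0-gpu bash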
You can use Anaconda to install and use tensorflow-gpu.
Make sure you have the latest nvidia drivers installed.
Install Anaconda 2 or 3 from the official site.
https://www.anaconda.com/distribution/
Create a new environment and install tensorflow-gpu and cudatoolkit.
$ conda create -n tf-gpu tensorflow-gpu python cudnn cudatoolkit
You can also specify the versions of the packages.
E.g. $ conda create -n tf-gpu tensorflow-gpu python=3.5 cudnn cudatoolkit=8
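Once the environment is created, a quick check that TensorFlow actually sees the GPU (this assumes the environment name used above; on older conda versions use source activate instead):
$ conda activate tf-gpu
$ python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"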
Please do check if your hardware has the minimum compute capability to support the version of CUDA that you are/will be using.
If you can't pass --runtime=nvidia as a command-line option (e.g. with docker-compose), you can set the default runtime in the Docker daemon config file /etc/docker/daemon.json:
{
    "default-runtime": "nvidia"
}
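Note that this assumes the nvidia runtime is already registered with the daemon (the nvidia-docker2 package normally does this for you). If it isn't, the file typically also needs a runtimes entry; a sketch, followed by a daemon restart:
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
EOF
sudo systemctl restart docker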
I have a Synology Disk Station 118 (it appears to use an ARMv8 processor).
There is no Docker package found when searching within Package Manager.
I found an article, but its link to Synology packages only has x64 packages, and the article says Docker does not work on ARM.
But it does seem, from various articles, that Docker is available for ARMv8 platforms:
https://github.com/docker-library/official-images#architectures-other-than-amd64
and there is a link to unofficial
https://hub.docker.com/u/arm64v8/
But aren't these just container images rather than Docker itself?
So, is it possible to install Docker on my Synology NAS 118? I need this to test a Dockerfile for my application.
The answer is YES. ARM-based Synology NAS models can run Docker; not with full functionality, but it can be enough.
Please follow the steps below to install docker/dockerd on an ARM Synology NAS.
Download the static Docker binaries from https://download.docker.com/linux/static/stable/ . Choose the right build for your ARM chip; most likely aarch64 is the one for your Synology NAS. You can use an old version, https://download.docker.com/linux/static/stable/aarch64/docker-17.09.0-ce.tgz , and give it a try, although newer versions could work too.
tar xzvf /path/to/docker-17.09.0-ce.tgz
sudo cp docker/* /usr/bin/
Create the /etc/docker/daemon.json configuration file with the following content:
{
"storage-driver": "vfs",
"iptables": false,
"bridge": "none"
}
sudo dockerd &
sudo docker run -d --network=host portainer/portainer:linux-arm64
Please note, you need to set the vfs storage driver and turn off iptables and the bridge network because of a Linux kernel limitation on the NAS, and you need to run containers in --network=host mode.
This is not the usual configuration, but it is necessary due to Synology NAS kernel limitations.
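To sanity-check the installation once dockerd is running (the image below is just an example of an arm64 image from the arm64v8 namespace mentioned in the question):
sudo docker info
sudo docker run --rm --network=host arm64v8/hello-world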
Or you can have a try with this automatic script:
https://raw.githubusercontent.com/wdmomoxx/catdriver/master/install-docker.sh
I have found a ready-made script for installing docker and docker-compose on an ARM NAS:
https://wiki.servarr.com/docker-arm-synology
In the GitHub project "docker on arm" you can read:
No official Docker images work on the ARM architecture because they contain binaries built for x64 (regular PCs).
So you need to get the application's sources and compile them for the ARM architecture if you want to install that application.
For work purposes, I have an OVA file which I need to convert to a Dockerfile.
Does someone know how to do it?
Thanks in advance.
There are a few different ways to do this, and they all involve getting at the disk image of the VM. One is to mount the VDI and then create a Docker image from that (see other Stack Overflow answers). Another is to boot the VM and copy the complete disk contents, starting at root, to a shared folder. And so on. We have succeeded with multiple approaches. As long as the disk in the VM is compatible with the kernel underlying the running container, creating a Docker image that holds the complete VM disk has worked.
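As a rough sketch of the "copy the whole filesystem" route (paths and image name here are only illustrative): archive the VM's root filesystem with tar, then turn the archive into an image with docker import.
# inside the running VM (or against its mounted disk image)
sudo tar -czpf /tmp/rootfs.tar.gz \
  --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/run \
  --exclude=/tmp/rootfs.tar.gz /
# copy rootfs.tar.gz to the Docker host, then:
docker import rootfs.tar.gz myvm:imported
docker run -it myvm:imported /bin/bash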
Yes, it is possible to use a VM image and run it in a container. Many of our customers have been using this project successfully: https://github.com/rancher/vm.git.
RancherVM allows you to create VMs that run inside of Kubernetes pods,
called VM Pods. A VM pod looks and feels like a regular pod. Inside of
each VM pod, however, is a container running a virtual machine
instance. You can package any QEMU/KVM image as a Docker image,
distribute it using any Docker registry such as DockerHub, and run it
on RancherVM.
Recently this project has also been made compatible with Kubernetes. For more information: https://rancher.com/blog/2018/2018-04-27-ranchervm-now-available-on-kubernetes
Step 1
Install ShutIt as root:
sudo su -
(apt-get update && apt-get install -y python-pip git docker) || (yum update && yum install -y python-pip git docker which)
pip install shutit
The pre-requisites are python-pip, git and docker. The exact names of these in your package manager may vary slightly (e.g. docker-io or docker.io) depending on your distro.
You may also need to make sure the docker server is running, e.g. with ‘systemctl start docker’ or ‘service docker start’.
Step 2
Check out the copyserver script:
git clone https://github.com/ianmiell/shutit_copyserver.git
Step 3
Run the copy_server script:
cd shutit_copyserver/bin
./copy_server.sh
There are a couple of prompts – one to correct perms on a config file, and another to ask what docker base image you want to use. Make sure you use one as close to the original server as possible.
Note that this requires a version of docker that has the ‘docker exec’ option.
Step 4
Run the build server:
docker run -ti copyserver /bin/bash
You are now in a practical facsimile of your server within a docker container!
Source
https://zwischenzugs.com/2015/05/24/convert-any-server-to-a-docker-container/
In my opinion it's totally impossible. But you can create a Dockerfile with the same OS and mount your data.
I would like to use Docker in a Linux environment, so I have 2 options:
Native install of Docker on my Linux Mint
Use Docker via a VM with boot2docker (or Vagrant/Puppet)
I think the VM way is easier to install, but you may have some difficulty sharing data between your laptop and the Docker container (you have to install the guest additions in VirtualBox, for example...).
On the other hand, the native install seems less easy, but I think you gain some performance and can share data more easily...
So I would like to know: for you, what are the advantages/disadvantages of the 2 methods?
What was your choice and why ?
Thanks :)
Native installation of Docker
If you are already on Linux, there is simply no need for another tool and layer like a VM
Better performance (since you are not in a VM, but on your machine directly)
It is pretty easy, e.g. to install Docker on Ubuntu 14 just run curl -sSL https://get.docker.io/ubuntu/ | sudo sh
VM/Boot2Docker
Docker will not "pollute" your system - if you don't want to use Docker anymore, just throw away your VM, nothing will be left on your system
If you are on Linux already, I would just install Docker and you are done.
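After a native install, a quick smoke test is enough to confirm everything works (hello-world is the standard test image; the bind mount just illustrates how simple host/container data sharing is without a VM):
docker run --rm hello-world
docker run --rm -v "$HOME":/data:ro busybox ls /data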