How to increase docker disk image size in Ubuntu

I am trying to increase the docker image size on Ubuntu. When I run docker info I get the following output:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 17.09.0-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 06b9cb35161009dcb7123345749fef02f7cea8e0
runc version: 3f2f8b84a77f73d38244dd690525642a72156c64
init version: 949e6fa
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.4.0-87-generic
Operating System: Ubuntu 16.04.3 LTS
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 15.67GiB
Name: no1010042033112.corp.adobe.com
ID: PYZE:KYTG:DXED:QI37:43ZM:56BB:TLM6:X2OJ:WDPA:35UP:Z4CU:DSNC
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
As you can see, the total memory is Total Memory: 15.67GiB. I couldn't find a way to do this on Ubuntu. I tried the following:
1) sudo dockerd --storage-opt dm.basesize=100G
2) Setting DOCKER_OPTS="--storage-opt dm.basesize=50G" in /etc/default/docker.
But none of these helped. This option is easily available in the Docker settings on Windows, but how do I do it from an Ubuntu terminal?

On Linux, Docker with the overlay2 storage driver uses the host system's disk (and memory) directly. There's no way to make it use less disk (without repartitioning your main system disk) and no way to give it more (without adding new hardware).
Docker for Mac, the Linux-flavored Docker for Windows, and Docker Machine all work by launching virtual machines that run a minimal Linux OS. That VM has a specific disk and memory allocation, and there are UI controls for it, but that's because the containers are running on a different OS and need an actual virtualization layer.
On Linux, Docker also supports several storage drivers, which require varying amounts of Linux kernel support. Early versions of Docker used a driver called devicemapper, which worked by allocating space in (most often) a fixed-size file; that's where the dm.basesize option you tried comes in ("dm" is for "devicemapper"). Current versions of Docker on current versions of Linux use a different driver called overlay2, which just stores image and container content in ordinary directories. You still need kernel support for it, but there's no reserved or limited disk space. That's also why the only size number in the docker info output is memory, which is a different resource.
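For reference only: if you were actually on the older devicemapper driver, dm.basesize would normally be set as a storage option in /etc/docker/daemon.json rather than via DOCKER_OPTS. A minimal sketch (the 50G value is just an example; with overlay2 these options simply don't apply):
{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.basesize=50G"
  ]
}
After editing the file you would restart the daemon, e.g. sudo systemctl restart docker.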

Related

docker 20: change docker image location

I want to change the pull location for the docker image. I already followed the instructions from this link: https://quick-adviser.com/how-do-i-change-docker-location/.
I already tried adding a daemon.json file to C:\ProgramData\Docker\config and filling it with the following code:
{
"data-root": "D:\\docker-image"
}
Then I restarted Docker Desktop. After that, I tried pulling a small docker image, like adminer (docker pull adminer), and checked D:\docker-image. After the pull completed, the folder was still empty.
Here's my docker info output:
$ docker info
Client:
Context: default
Debug Mode: false
Plugins:
buildx: Docker Buildx (Docker Inc., v0.7.1)
compose: Docker Compose (Docker Inc., v2.2.3)
scan: Docker Scan (Docker Inc., v0.16.0)
Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 20.10.12
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 1
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 7b11cfaabd73bb80907dd23182b9347b4245eb5d
runc version: v1.0.2-0-g52b36a2
init version: de40ad0
Security Options:
seccomp
Profile: default
Kernel Version: 5.10.16.3-microsoft-standard-WSL2
Operating System: Docker Desktop
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 6.04GiB
Name: docker-desktop
ID: V6FY:3JES:DVIP:5ZLG:6J26:IXE7:RKCB:T3MK:RR4B:X2XC:JR7B:LEIH
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No blkio throttle.read_bps_device support
WARNING: No blkio throttle.write_bps_device support
WARNING: No blkio throttle.read_iops_device support
WARNING: No blkio throttle.write_iops_device support
Basically you're running Docker Desktop with WSL2, so the files are no longer located on the Windows side directly. Docker creates a Linux virtual machine (WSL2), and all the files live inside it in a Linux folder structure.
# The image layers are somewhere here:
\\wsl$\docker-desktop-data\version-pack-data\community\docker\image\overlay2
\\wsl$\docker-desktop-data\version-pack-data\community\docker\overlay2
If you want to change the folder on the Windows side (e.g. to use disk space on another drive), then you have to move the image file. This is described in the question I linked: How can I change the location of docker images when using Docker Desktop on WSL2 with Windows 10 Home?
Two different approaches are given there: move the WSL2 distribution entirely, or move the file and create a symlink (mklink) so WSL2 stays as it is and simply points to the file in its new location.
By default, Docker keeps the WSL image file for its data at %homepath%\AppData\Local\Docker\wsl\data as ext4.vhdx.
With this Docker Desktop and WSL2 setup, you can't change the path where the images are stored just by using a Windows path, because under the hood Docker works in Linux, so the images are stored inside Linux.
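A sketch of the "move the WSL2 distribution entirely" route, assuming the data distro really is named docker-desktop-data and D:\wsl is the target location (check the name with wsl -l -v first, and run this from an elevated PowerShell with Docker Desktop stopped):
wsl --shutdown
wsl --export docker-desktop-data D:\docker-desktop-data.tar
wsl --unregister docker-desktop-data
wsl --import docker-desktop-data D:\wsl\docker-desktop-data D:\docker-desktop-data.tar --version 2
After restarting Docker Desktop, the ext4.vhdx should then live under D:\wsl\docker-desktop-data.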
If you just want to export image data to move it to another system, check out docker save and docker load: https://docs.docker.com/engine/reference/commandline/save/
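For example, with the adminer image you pulled above:
docker save adminer -o adminer.tar
# copy adminer.tar to the other machine, then:
docker load -i adminer.tar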
Does this fit your use case?

docker import with Docker for Windows using Linux containers no space left on device error with Storage Driver: overlay2

I'm importing a large database image of 15 GB (I can import an image of 9.5 GB without a problem) with Docker for Windows using Linux containers. I'm using Windows 10 Pro 1803, build 17134.1006.
Error:
PS C:\WINDOWS\system32> docker import "C:\Users\oscar\Desktop\MSSQL.tar" mssql
Error response from daemon: Error processing tar file(exit status 1): write /var/opt/mssql/data/TestDatabase.mdf: no space left on device
I have removed all dangling volumes and unused images as suggested here:
https://stackoverflow.com/a/37287054/3850405
When reading about storage drivers I came across some limitations for devicemapper and other drivers.
https://docs.docker.com/engine/reference/commandline/dockerd/#options-per-storage-driver
For devicemapper:
Specifies the size to use when creating the base device, which limits
the size of images and containers. The default value is 10G. Note,
thin devices are inherently “sparse”, so a 10G device which is mostly
empty doesn’t use 10 GB of space on the pool. However, the filesystem
will use more space for the empty case the larger the device is.
https://docs.docker.com/engine/reference/commandline/dockerd/#dmbasesize
I can't find any image size limitation for overlay2, only a default 20 GB limit for containers.
Specifies the size to use when creating the sandbox which is used for
containers. Defaults to 20G.
I tried to run the example command but I got an error; dockerd is not recognized as a command.
C:\>dockerd --storage-opt size=40G
'dockerd' is not recognized as an internal or external command,
operable program or batch file.
https://docs.docker.com/engine/reference/commandline/dockerd/#size
Since it should work anyway, I'm not sure I'm on the right path there.
System information:
PS C:\WINDOWS\system32> docker --version
Docker version 19.03.2, build 6a30dfc
PS C:\WINDOWS\system32> docker info
Client:
Debug Mode: false
Server:
Containers: 5
Running: 0
Paused: 0
Stopped: 5
Images: 8
Server Version: 19.03.2
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 894b81a4b802e4eb2a91d1ce216b8817763c29fb
runc version: 425e105d5a03fabd737a126ad93d62a9eeede87f
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 4.9.184-linuxkit
Operating System: Docker Desktop
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.837GiB
Name: docker-desktop
ID: XD32:TQJ4:EKWP:BPE4:ETXW:XFXE:LB3L:J4WB:PCFR:DAXK:MJ62:47RI
Docker Root Dir: /var/lib/docker
Debug Mode: true
File Descriptors: 28
Goroutines: 42
System Time: 2019-09-17T10:00:50.1259999Z
EventsListeners: 1
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine
Storage space should not be an issue.
I should learn to read... I read it as 55.29 GB unused. After increasing the disk image max size under Docker -> Settings -> Advanced, everything started working.
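If you'd rather make the same change outside the GUI, Docker Desktop persists these values in a settings file. A hedged sketch, assuming the file lives at %APPDATA%\Docker\settings.json and the key is called diskSizeMiB (both the path and the key name are assumptions; verify them against your Docker Desktop version before editing, and restart Docker Desktop afterwards). The relevant entry would look roughly like:
{
  "diskSizeMiB": 65536
}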

Error on commit - How to increase Docker default container size when using Docker Toolbox and Windows 7

I'm running Docker on Windows 7, so Docker runs inside a VirtualBox VM.
I've got an Oracle image that has had a database restored into it, pushing the image size up to 7.5 GB. I want to do a docker commit on this, but I'm getting an out-of-space error when I do the commit.
I've seen a lot of posts on changing the default container size, but I'm not sure if this is possible with aufs, or how to change to a different filesystem type when running on Windows 7/VirtualBox.
Does anyone know how to increase the default container size in this environment?
Error response from daemon: Error processing tar file(exit status 1): write /u01/app/oracle/oradata/XE/support.dbf: no space left on device
Docker info:
Containers: 1
Running: 1
Paused: 0
Stopped: 0
Images: 72
Server Version: 17.10.0-ce
Storage Driver: aufs
Root Dir: /mnt/sda1/var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 127
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 06b9cb35161009dcb7123345749fef02f7cea8e0
runc version: 0351df1c5a66838d0c392b4ac4cf9450de844e2d
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 4.4.93-boot2docker
Operating System: Boot2Docker 17.10.0-ce (TCL 7.2); HEAD : 34fe485 - Wed Oct 18 17:16:34 UTC 2017
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 7.79GiB
Name: default
ID: 2NWU:57WJ:4QAP:EBMY:MMF2:JFWA:IBWU:THGO:A4VD:SGVW:YQBP:MP2N
Docker Root Dir: /mnt/sda1/var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
File Descriptors: 24
Goroutines: 35
System Time: 2017-12-06T09:33:14.736388742Z
EventsListeners: 2
Registry: https://index.docker.io/v1/
Labels:
provider=virtualbox
Experimental: false
Insecure Registries:
127.0.0.0/8
The default disk size for a docker machine is 20 GB; you can double-check by running docker-machine inspect default. The size can be configured when creating the machine from the CLI. You can recreate the default machine and set the size as follows:
docker-machine rm default
docker-machine create -d virtualbox --virtualbox-disk-size "400000" default
You can also edit the file C:\Program Files\Docker Toolbox\start.sh to set default values for disk size and RAM for newly created machines, as sketched below.
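The exact contents of start.sh vary by Toolbox version, but the idea is to add the VirtualBox options to the docker-machine create call inside it; roughly (the variable names and existing arguments here are approximations of what the script contains):
"${DOCKER_MACHINE}" create -d virtualbox --virtualbox-disk-size "100000" --virtualbox-memory "4096" "${VM}"
The --virtualbox-disk-size value is in MB, so 100000 gives roughly a 100 GB (dynamically allocated) disk.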

How to create a Dockerfile from scratch

I am new to Docker containers. I need to create a new image from scratch.
I have a folder on my desktop named "Playground". Inside that folder I have a specific Java version and an OATS folder, and inside the OATS folder there is an .exe to install OATS.
Requirement:
I need to create an image and run it as a container, and when I run the container it should install Java and the OATS application.
My Docker info:
C:\Users\Satish_D1\Desktop\Playground>docker info
Containers: 5
Running: 1
Paused: 0
Stopped: 4
Images: 4
Server Version: 17.06.2-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 6e23458c129b551d5c9871e5174f6b1b7f6d1170
runc version: 810190ceaa507aa2727d7ae6f4790c76ec150bd2
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 4.9.41-moby
Operating System: Alpine Linux v3.5
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.196GiB
Name: moby
ID: DEB5:62EN:AUOA:MNHN:XBSI:XXXR:DRF6:YJPD:4D2Y:672Y:R6EE:DLFG
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
File Descriptors: 22
Goroutines: 34
System Time: 2017-09-11T13:31:33.4927898Z
EventsListeners: 0
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:127.0.0.0/8
Live Restore Enabled: false
My Work till now:
I have created a Dockerfile and below is the code I tried in the Dockerfile:
FROM scratch
COPY jdk-7u79-windows-x64
COPY C:\Users\Satish_D1\Desktop\Playground\oats-win64-full-12.5.0.3.1012\setup.bat
Thanks in Advance,
Satish.
There are a few things wrong here that stand out. First, judging by the filenames, you appear to be trying to run Windows binaries on a Linux runtime. This won't work; the Linux kernel will not run Windows binaries.
Next, when you use scratch, you have nothing at all on the image filesystem; it's completely empty. That includes no libraries, no shell, and no distribution package tools. Using scratch is typically done for statically linked binaries, and for OS base images that package their entire OS filesystem as a tar file that gets extracted as the first step. If your JDK doesn't include all the OS libraries, shell, and other dependencies, then you'll need more than "scratch" as your base image.
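For illustration, this is roughly what a scratch-based image looks like in the one case where it does make sense: a single statically linked Linux binary (myapp here is just a hypothetical example):
FROM scratch
# The binary must be statically linked; there is no libc, shell, or package manager in the image
COPY myapp /myapp
ENTRYPOINT ["/myapp"]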
If you are determined to use scratch as your base, then I'd look at a suitable JDK image on Docker Hub and follow its image tree back until you get to scratch, noting all the steps needed to recreate that JDK image. You'll likely need to check out multiple repos if you want to recreate it. And if you want to do this with Windows instead of Linux binaries, then you'll need to change your Docker host settings and look into recreating the Windows core base image.

Building docker for the ARM-64 architecture

I have been trying to compile docker for the ARM-64 architecture. Docker doesn't officially support 64-bit ARM (at least not through the package management tools), hence I have to build it from source. Building the docker binary set needs docker itself as a dependency. I've already managed to compile both the docker daemon and the client via the following (hack) command:
./hack/make.sh dynbinary
However, I haven’t managed to run it successfully. Both binaries are compiled and work, but when I want to start up the daemon it complains about other dependencies:
Failed to connect to containerd. Please make sure containerd is installed in your PATH or you have specified the correct address. Got error: exec: "docker-containerd": executable file not found in $PATH
As I mentioned earlier, I cannot build all the binaries as they need docker itself running.
Looking forward to your help.
Two weeks ago I was able to install Docker on a Pine64 running Armbian (Debian based). It was as easy as following the official documentation for armhf, with one exception: change [arch=armhf] to [arch=arm64] when you add the new apt source.
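A rough sketch of those steps on a Debian-based system, following the official install docs of the time with only the architecture changed (the repository URL and release codename are the standard ones; adapt them to your distribution):
# Add Docker's GPG key and the arm64 apt repository, then install
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
echo "deb [arch=arm64] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list
sudo apt-get update
sudo apt-get install docker-ce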
After the install you have a real arm64 Docker running:
root@pine64:~# docker system info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 60
Server Version: 17.12.0-ce
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 28
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 89623f28b87a6004d4b785663257362d1658a729
runc version: b2567b37d7b75eb4cf325b77297b140ea686ce8f
init version: 949e6fa
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 3.10.107-pine64
Operating System: Debian GNU/Linux 8 (jessie)
OSType: linux
Architecture: aarch64
CPUs: 4
Total Memory: 979.6MiB
Name: pine64
ID: xxx
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: xxx
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
