Docker local volume increases in size every day

I have 5 Docker containers running in a VM (mostly with no volumes attached).
After a while, with no new containers deployed, I've noticed Docker consuming more disk space every day.
I've tried to remove logs and/or unused images with:
sudo sh -c "truncate -s 0 /var/lib/docker/containers/*/*-json.log"
docker system prune --volumes
This reclaims very little disk space.
Then I found one local volume that uses about 30 GB (it was 29 GB yesterday, a growth rate of roughly 1 GB per day):
docker volume inspect <volume id>
[
    {
        "CreatedAt": "2022-06-28T12:00:15+07:00",  << created last hour
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/4301ac15fed0bec1cd93aa181ab18c5227577c2532fff0a5f4e23956da1cfe4f/_data",
        "Name": "4301ac15fed0bec1cd93aa181ab18c5227577c2532fff0a5f4e23956da1cfe4f",
        "Options": null,
        "Scope": "local"
    }
]
And I don't even know which service or container uses or created this volume.
How do I know whether it is safe to remove this volume, and how can I limit its disk space consumption?
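One way to check which container references the volume is to filter docker ps by volume name; a hedged sketch, using the anonymous volume name from the inspect output above:

# List containers (running or stopped) that mount this volume
docker ps -a --filter volume=4301ac15fed0bec1cd93aa181ab18c5227577c2532fff0a5f4e23956da1cfe4f

If nothing is listed, no existing container references the volume and docker volume rm should be safe; Docker refuses to remove a volume that is still in use by a container.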

Related

NFS volume created manually mounts but shows empty contents

Server: Docker on Ubuntu, 18.06.3-ce
Local: Docker for Mac, 19.03.13
I have created a volume on the swarm manually, pointing to a remote NFS server. When I try to mount this volume in a service it appears to work, but the contents are empty, and any writes seem to succeed (the calling code doesn't crash) while the bytes are gone, maybe even to /dev/null.
When I declare a similar volume inside the compose file it works. The only difference I can find is the label com.docker.stack.namespace.
docker volume create --driver local \
--opt type=nfs \
--opt o=addr=10.0.1.100 \
--opt device=:/data/ \
my_nfs
version: "3.5"
services:
  my-api:
    volumes:
      - "compose_nfs:/data1/" # works fine
      - "externl_nfs:/data2/" # empty contents, forgotten writes
volumes:
  externl_nfs:
    external: true
  compose_nfs:
    driver: local
    driver_opts:
      type: nfs
      o: addr=10.0.1.100
      device: ":/data/"
When inspecting the volumes they are identical, except for that label.
{
    "CreatedAt": "2020-20-20T20:20:20Z",
    "Driver": "local",
    "Labels": {
        # label missing on the manually created one
        "com.docker.stack.namespace": "stackie"
    },
    "Mountpoint": "/var/lib/docker/volumes/externl_nfs/_data",
    "Name": "compose_nfs",
    "Options": {
        "device": ":/data/",
        "o": "addr=10.0.1.100",
        "type": "nfs"
    },
    "Scope": "local"
}
If you use an external volume, swarm is deferring the creation of that volume to you. Volumes are also local to the node they are created on, so you must create that volume on every node where swarm could schedule this job. For this reason, many will delegate the volume creation to swarm mode itself and put the definition in the compose file. So in your example, before scheduling the service, on each node run:
docker volume create --driver local \
--opt type=nfs \
--opt o=addr=10.0.1.100 \
--opt device=:/data/ \
external_nfs
Otherwise, when the service gets scheduled on a node without the volume defined, it appears that swarm will create the container, and that create command generates a default named volume, storing the contents on that local node (I could also see swarm failing to schedule the service because of a missing volume, but your example shows otherwise).
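For example, a hedged sketch of running that create command on each node over SSH (node1, node2, node3 are placeholders for your actual swarm hostnames):

# Create the NFS-backed volume on every node that may run the service
for node in node1 node2 node3; do
  ssh "$node" docker volume create --driver local \
    --opt type=nfs \
    --opt o=addr=10.0.1.100 \
    --opt device=:/data/ \
    external_nfs
done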
Answering this, since it concerns an older version of Docker and is probably not relevant to most people, considering the NFS part.
It appears to be a bug of some sort in Docker/Swarm:
Create an NFS volume on the swarm (via the API, from a remote client).
The volume is correct on the manager node that was contacted.
The volume is missing its options on all other worker nodes.
As a strange side effect, the volume seems to work: it can be mounted and writes succeed without issue, but all bytes written disappear. Reads work, but every file is "not found", which is logical considering the writes disappear.
On manager:
> docker volume inspect externl_nfs
[{
    "CreatedAt": "2020-11-03T15:56:44+01:00",
    "Driver": "local",
    "Labels": {},
    "Mountpoint": "/var/lib/docker/volumes/externl_nfs/_data",
    "Name": "externl_nfs",
    "Options": {
        "device": ":/data/",
        "o": "addr=10.0.1.100",
        "type": "nfs"
    },
    "Scope": "local"
}]
On worker:
> docker volume inspect externl_nfs
[{
    "CreatedAt": "2020-11-03T16:22:16+01:00",
    "Driver": "local",
    "Labels": {},
    "Mountpoint": "/var/lib/docker/volumes/externl_nfs/_data",
    "Name": "externl_nfs",
    "Options": null,
    "Scope": "local"
}]
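A hedged workaround, assuming no task is currently using the volume on the affected worker: remove the incomplete volume there, recreate it with the full options, and compare the inspect output again.

# On each worker where "Options" shows null
docker volume rm externl_nfs
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=10.0.1.100 \
  --opt device=:/data/ \
  externl_nfs
docker volume inspect externl_nfs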

Increasing the disk size that docker can access in Container Optimized OS

I am attempting to run a simple daily batch script that can run for some hours, after which it will send the data it generated and shut down the instance. To achieve that, I have put the following into user-data:
users:
- name: cloudservice
  uid: 2000
runcmd:
- sudo HOME=/home/root docker-credential-gcr configure-docker
- |
  sudo HOME=/home/root docker run \
    --rm -u 2000 --name={service_name} {image_name} {command}
- shutdown
final_message: "machine took $UPTIME seconds to start"
I am creating the instance using a Python script that generates the configuration for the API like so:
from typing import Dict


def build_machine_configuration(
    compute, name: str, project: str, zone: str, image: str
) -> Dict:
    image_response = (
        compute.images()
        .getFromFamily(project="cos-cloud", family="cos-stable")
        .execute()
    )
    source_disk_image = image_response["selfLink"]

    machine_type = f"zones/{zone}/machineTypes/n1-standard-1"

    # returns the cloud init from above
    cloud_config = build_cloud_config(image)

    config = {
        "name": f"{name}",
        "machineType": machine_type,
        # Specify the boot disk and the image to use as a source.
        "disks": [
            {
                "type": "PERSISTENT",
                "boot": True,
                "autoDelete": True,
                "initializeParams": {"sourceImage": source_disk_image},
            }
        ],
        # Specify a network interface with NAT to access the public
        # internet.
        "networkInterfaces": [
            {
                "network": "global/networks/default",
                "accessConfigs": [{"type": "ONE_TO_ONE_NAT", "name": "External NAT"}],
            }
        ],
        # Allow the instance to access cloud storage and logging.
        "serviceAccounts": [
            {
                "email": "default",
                "scopes": [
                    "https://www.googleapis.com/auth/devstorage.read_write",
                    "https://www.googleapis.com/auth/logging.write",
                    "https://www.googleapis.com/auth/datastore",
                    "https://www.googleapis.com/auth/bigquery",
                ],
            }
        ],
        # Metadata is readable from the instance and allows you to
        # pass configuration from deployment scripts to instances.
        "metadata": {
            "items": [
                {
                    # Startup script is automatically executed by the
                    # instance upon startup.
                    "key": "user-data",
                    "value": cloud_config,
                },
                {"key": "google-monitoring-enabled", "value": True},
            ]
        },
    }

    return config
However, I am running out of disk space inside the Docker engine.
Any ideas on how to increase the size of the volume available to Docker services?
The Docker engine uses the disk of the instance, so if the container doesn't have space, it is because the disk of the instance is full.
The first thing you can try is to create an instance with a bigger disk. The documentation says:
disks[ ].initializeParams.diskSizeGb string (int64 format)
Specifies the size of the disk in base-2 GB. The size must be at least
10 GB. If you specify a sourceImage, which is required for boot disks,
the default size is the size of the sourceImage. If you do not specify
a sourceImage, the default disk size is 500 GB.
You could increase the size by adding the field diskSizeGb to the deployment:
"disks": [
{
[...]
"initializeParams": {
"diskSizeGb": 50,
[...]
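If the instance already exists, a hedged alternative is to resize its persistent boot disk with gcloud (the disk name and zone below are placeholders; whether the filesystem grows automatically afterwards depends on the image, so a reboot may be required):

# Resize the boot disk of an existing instance
gcloud compute disks resize my-batch-disk --size=50GB --zone=us-central1-a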
Another thing you could try is to execute the following command on the instance to see whether the disk is full and which partition is full:
$ df -h
In the same way, you could execute the following command to see the disk usage of the Docker engine:
$ docker system df
The client and daemon API must both be at least 1.25 to use this command. Use the docker version command on the client to check your client and daemon API versions.
If you want more information, you can use the -v flag:
$ docker system df -v

Docker: find out by command line if and which shared drives are enabled

Do you know if there is a way to find out, using the command line, whether shared drives are enabled and which ones?
Thanks
EDIT:
[Windows - Git Bash] To find out which drives are shared:
cat C:/Users/[user]/AppData/Roaming/Docker/settings.json
[Windows - CMD prompt]
type C:\Users\[User]\AppData\Roaming\Docker\settings.json
Within the file, you'll find the JSON object you're looking for:
"SharedDrives": {
"C": true
},...
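A hedged one-liner for Git Bash that prints just that block without opening the whole file (adjust the path and the number of context lines for your setup):

grep -A 3 '"SharedDrives"' "C:/Users/$USERNAME/AppData/Roaming/Docker/settings.json"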
To find out which volumes are mounted on your host you can use the following commands:
docker volume ls
This will give you a list; for more details you can inspect a single volume:
docker volume inspect 2d858a93d15a8e6903cccfe04cdf5576812df8697ca4e07edbbf40575873d33d
This will return something similar to:
{
    "CreatedAt": "2020-02-24T08:35:57Z",
    "Driver": "local",
    "Labels": null,
    "Mountpoint": "/var/lib/docker/volumes/2d858a93d15a8e6903cccfe04cdf5576812df8697ca4e07edbbf40575873d33d/_data",
    "Name": "2d858a93d15a8e6903cccfe04cdf5576812df8697ca4e07edbbf40575873d33d",
    "Options": null,
    "Scope": "local"
}
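If you only need a single field, a hedged variant using the inspect format template:

docker volume inspect --format '{{ .Mountpoint }}' 2d858a93d15a8e6903cccfe04cdf5576812df8697ca4e07edbbf40575873d33d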

Docker volumes-from blank, from network share

I have two containers; one is set up as a data volume container, and I can go inside it and explore the files that are mounted from a network share without any issues.
However, on the second Docker instance, when I go to the folder with the mounted volumes, the folder exists but all the files and directories that should be there are not visible.
This used to work, so I can only assume it's due to Docker 1.9. I am seeing this on a Linux and a Mac box.
Any ideas as to the cause? Is this a bug, or is there something else I can investigate?
Output of inspect:
"Volumes": {
"/mnt/shared_app_data": {},
"/srv/shared_app_data": {}
},
"Mounts": [
{
"Name": "241d3e495f312c79abbeaa9495fa3b32110e9dca8442291d248cfbc5acca5b53",
"Source": "/var/lib/docker/volumes/241d3e495f312c79abbeaa9495fa3b32110e9dca8442291d248cfbc5acca5b53/_data",
"Destination": "/mnt/shared_app_data",
"Driver": "local",
"Mode": "",
"RW": true
},
{
"Name": "061f16c066b59f31baac450d0d97043d1fcdceb4ceb746515586e95d26c91b57",
"Source": "/var/lib/docker/volumes/061f16c066b59f31baac450d0d97043d1fcdceb4ceb746515586e95d26c91b57/_data",
"Destination": "/srv/shared_app_data",
"Driver": "local",
"Mode": "",
"RW": true
}
],
The files are mounted in the Dockerfile in this manner:
RUN echo '/srv/path ipaddress/255.255.255.0(rw,no_root_squash,subtree_check,fsid=0)' >> /etc/exports
RUN echo 'ipaddress:/srv/path /srv/shared_app_data nfs defaults 0 0' >> /etc/fstab
RUN echo 'ipaddress:/srv/path /mnt/shared_app_data nfs defaults 0 0' >> /etc/fstab
and then when the container starts it runs:
service rpcbind start
mount -a
You need to be sure that the second container does mount the VOLUME declared in the first one
docker run --volumes-from first_container second_container
Make sure the first container does have the right files: see "Locating a volume"
docker inspect first_container
# more precisely
sudo ls $(docker inspect -f '{{ (index .Mounts 0).Source }}' first_container)
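As a further hedged check (assuming the first container is running and has the mount utilities installed), confirm that the NFS share is actually mounted inside the data container rather than being shadowed by an empty named volume:

# List NFS mounts and the shared directory as seen from inside the container
docker exec first_container sh -c 'mount | grep nfs; ls -la /srv/shared_app_data'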

Mesos/Marathon Memory usage limits for Docker

We created a WordPress container using Mesos/Marathon, allocating 0.1 CPU and 64 MB RAM.
When we check docker stats, we observe that the memory allocation differs from what we allocated in Marathon.
Is there any way to update the memory usage limit for a Docker container? Can we set default limits for all containers at the daemon level (by Mesos or the Docker daemon)?
We tried to do a load test on the WordPress site using JMeter, and the container got killed at just 500 connections.
Thanks in advance.
Docker doesn't have a default memory option for the Docker daemon yet. As for a default memory limit for containers, you can only set limits at run time (not after the container has started) with the following options:
-m, --memory="" Memory limit
--memory-swap="" Total memory (memory + swap), '-1' to disable swap
As per this.
I also see that there's still an issue open here. Make sure you are using Mesos 0.22.1 or later.
How about creating your containers with something like this Marathon request?
curl -X POST -H "Content-Type: application/json" http://<marathon-server>:8080/v2/apps -d#helloworld.json
helloworld.json:
{
    "id": "helloworld",
    "container": {
        "docker": {
            "image": "ubuntu:14.04"
        },
        "type": "DOCKER",
        "volumes": []
    },
    "cmd": "while true; do echo hello world; sleep 1; done",
    "cpus": 0.1,
    "mem": 96.0,  # Update the memory here.
    "instances": 1
}
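At the plain Docker level, the same kind of limit can be set when starting a container directly; a hedged sketch using the flags mentioned above (values are illustrative):

# Run with a 96 MB memory limit and 192 MB memory+swap
docker run -d -m 96m --memory-swap 192m ubuntu:14.04 \
  sh -c 'while true; do echo hello world; sleep 1; done'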
