I have an annoying issue getting USB mass storage on the BBB to work when connected to Windows.
I have created an image:
dd bs=1M if=/dev/zero of=/usb.bin count=64
Formatted it:
mkdosfs /usb.bin -F 32 -I
I have mounted it, copied files to and from it, no problem.
Then I created a USB mass storage gadget:
modprobe g_mass_storage file=./usb.bin stall=0 ro=0
Connected it to a USB port on my Linux machine: no problem, I can see and manipulate files.
On Windows I can see the drive and the size is correct, but the filesystem is not recognized.
With ro=0 I am able to create a partition from within Windows and format it. I can copy files to and from it, but when I mount it on the BBB I cannot see the files copied using Windows. I can still, however, see the files I copied to the mountpoint on the BBB.
Can someone tell me what I am doing wrong?
I disabled everything regarding g_multi, including RNDIS, Serial, CDC.
And it works perfectly under Linux.
You have created a raw disk image without a partition table on the Linux side. Linux doesn't care if it's a file, if it has a partition table, etc.
Windows, however, gets confused by the lack of a partition table, as you noticed.
Having a partition table is preferable. What you can do on the Linux side of things:
losetup --partscan - have the file treated as a disk with a partition table and get a device node for each partition (see the sketch after this list)
Mount the partition directly using an offset
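For the first option, a minimal sketch could look like this (the loop device names are illustrative, and the sfdisk one-liner assumes a reasonably recent util-linux):
dd bs=1M if=/dev/zero of=/usb.bin count=64
echo 'type=c' | sfdisk /usb.bin            # one FAT32 (LBA) partition covering the whole image
losetup --partscan --find --show /usb.bin  # prints e.g. /dev/loop0 and creates /dev/loop0p1
mkdosfs -F 32 /dev/loop0p1
mount /dev/loop0p1 /mnt
The g_mass_storage gadget is still given the whole file (file=/usb.bin); only the way you access it on the Linux side changes.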
In this particular case the latter is probably the quickest. There is only one partition and the offset is known.
fdisk -l ./usb.bin
Multiply the Start value by the Units size. Use it as the offset below:
mount -o loop,offset=12345 ./usb.bin /mnt
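For example, if fdisk reports units of 512 bytes and the partition starts at sector 2048 (typical values; yours may differ), the offset is 2048 * 512 = 1048576:
mount -o loop,offset=1048576 ./usb.bin /mnt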
Make sure to never access it from both sides at the same time, as this will lead to filesystem damage and data loss.
See also e.g. https://askubuntu.com/a/69447
My Raspberry Pi suddenly had no more free space.
By looking at the folder sizes with the following command:
sudo du -h --max-depth=3
I noticed that a Docker folder eats an incredible amount of hard disk space. It's the folder
/var/lib/docker/containers/*
The folder seems to contain data for the currently running Docker containers. The first letters of the directory names correspond to the Docker container IDs. One folder seemed to grow dramatically fast. After stopping the affected container and removing it, the related folder disappeared, so it seems to have belonged to that container.
Problem solved.
I now wonder what could cause this folder to grow so much, and what the best way is to avoid running into the same problem again later.
I could write a bash script which removes the related container at boot and runs it again, but better ideas are very welcome.
The container IDs are directories, so you can look inside to see what is using the space in there. The two main reasons are:
Logs from stdout/stderr. These can be limited with logging options (see the sketch after this list). You can view them with docker logs.
Filesystem changes. The underlying image filesystem is not changed, so any writes trigger a copy-on-write into a directory under each container ID. You can view these with docker diff.
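For example, with the default json-file logging driver the log size per container can be capped (the values here are only illustrative):
docker run -d --log-opt max-size=10m --log-opt max-file=3 myimage
The same limits can also be set globally via "log-opts" in /etc/docker/daemon.json.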
Background:
I run a Java process in my Docker container and I take histogram dumps using jmap, writing them to a file at /home/heapdump.txt inside the container. I then get this file out of the container for further processing.
Now, I do this at an interval of 5 minutes. However, after 20 minutes, i.e. 4 heap dumps, when I try to get this file I get the error below:
{"message":"mount/:/var/lib/docker/overlay2/<container_id>/merged/hostroot, flags: 0x5001: no space left on device"}
I don't understand what no space left on device means in this case. 😕😕😕
Your Docker storage is under the default /var, which will typically hold much less space unless you have manually allotted more.
Run df -kh on your device and check the status of the filesystem mapped to /var: you have most likely run out of space there.
To fix this, find a disk with enough free space (remember that Docker will use it to store all of its image and volume data) and make Docker use it.
You configure this in the daemon.json file with the data-root option, like below.
{
  "data-root": "/new/data/root/path"
}
Remember to reload the daemon configuration and restart the Docker service.
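On a systemd-based host that could look like this (a minimal sketch; service names may differ on your distribution):
# after editing /etc/docker/daemon.json, restart the Docker service
sudo systemctl restart docker
# verify that Docker now uses the new location
docker info --format '{{ .DockerRootDir }}'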
Once done, Docker will use the new directory for all of its image and volume data. Note that existing data is not migrated automatically: if you want to keep your current images and volumes, copy the contents of the old /var/lib/docker to the new location while Docker is stopped.
Once you have tested that everything works, you can clean up the old /var/lib/docker.
Hope this helps
The Docker docs state:
Warning: Do not directly manipulate any files or directories within /var/lib/docker/. These files and directories are managed by Docker.
Let's say someone hasn't read that hint and deleted some files from /var/lib/docker/aufs/diff to free up some disk space. These files didn't live in a Docker volume and are not part of the original Docker image, but have been created in the container's writable layer. Restarting the given container frees up the disk space, but are there any known side effects?
And for the next time: does removing those kinds of files or directories from within the container (via docker exec .. rm ..) result in a proper removal, or are they only marked as deleted? The documentation currently doesn't describe this special case.
Restarting the given container frees up the disk space but are there any known side effects?
As you stated in your question, you should not "manipulate any files or directories within /var/lib/docker/", as any side effect may appear and no documentation covers this: it's internal Docker plumbing which may change significantly between Docker versions, and it's not supposed to be exposed to end users nor to be tampered with. You could look at the Docker code for your Docker version and all its dependencies to understand what happened, but it's not really practical :-)
are there any known side effects?
There may be side effects - I insist on the may, as anything can happen depending on your Docker version and configuration. Even if it seems to be working, some things may be broken.
A well-known side effect is Docker installation corruption, which may present itself in various ways: random container crashes, data loss, unexplained bugs, etc.
Best case scenario, you just discarded some data in your container and everything will work fine in the future.
Not-so-good scenario: you actually broke something in your installation and corrupted it; you'd be better off re-installing Docker entirely.
Does removing that kind of files or directories from within the container (via docker exec .. rm ..) result in a proper removal or are they only marked as deleted?
Deleting a file in the container will not always remove it from the underlying system; it depends on the storage driver you are using. The docs have a section about writing files for each of them (see also the sketch after this list):
AUFS - deletion seems to be recorded with a whiteout file in the container layer; the copy in the image layer is kept:
When a file is deleted within a container, a whiteout file is created in the container layer. The version of the file in the image layer is not deleted [...] Subsequent writes to the same file operate against the copy of the file already copied up to the container.
BTRFS - deleted and space reclaimed, doc is quite clear:
If a container creates a file and then deletes it, this operation is performed in the Btrfs filesystem itself and the space is reclaimed.
devicemapper - may not be deleted depending on config:
if you are using direct-lvm, the blocks are freed. If you use loop-lvm, the blocks may not be freed
OverlayFS - similarly, the file seems to be deleted from the container layer via a whiteout file, but the image layer copy is kept:
When a file is deleted within a container, a whiteout file is created in the container (upperdir). The version of the file in the image layer (lowerdir) is not deleted
ZFS - deleted:
If you create and then delete a file or directory within the container’s writable layer, the blocks are reclaimed by the zpool.
VFS uses a full copy of the previous layer and works directly in a directory representing that layer; a deletion in the container should therefore delete the file from the corresponding directory on the host machine.
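As a quick way to see how a deletion is recorded, docker diff lists the changes in the container's writable layer (a minimal sketch; the alpine image and the /etc/motd path are only an example):
docker run -d --name rmtest alpine sleep 300
docker exec rmtest rm /etc/motd
docker diff rmtest    # shows "D /etc/motd": the deletion is tracked in the container layer
docker rm -f rmtest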
The documentation currently doesn't describe this special case.
Yes, and it probably won't ;)
I am on Docker version 1.11.2. I am trying to docker save an image but I get an error.
I ran docker images to see the size of the image, and the result is this:
myimage 0.0.1-SNAPSHOT e0f04657b1e9 10 months ago 1.373 GB
The server I am on is low on space, but it still has 2.2 GB available. However, when I run docker save myimage:0.0.1-SNAPSHOT > img.tar I get
write /dev/stdout: no space left on device
I removed all exited containers and dangling volumes in hopes of making it work but nothing helped.
You don't have enough space left on the device, so free up some more space or try gzip on the fly:
docker save myimage:0.0.1-SNAPSHOT | gzip > img.tar.gz
To restore it, Docker automatically detects that it is gzipped:
docker load < img.tar.gz
In a situation where you can't free enough space locally, you might want to use storage available over a network connection. NFS or Samba are a little bit more difficult to set up.
The easiest approach could be piping the output through netcat, but keep in mind that this is, at least by default, unencrypted.
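A minimal sketch (remote-host and the port are placeholders, and the exact nc flags depend on your netcat variant):
# on a remote machine with enough free space, listen for the stream:
nc -l -p 9000 > img.tar.gz
# on the Docker host, stream the image without writing anything locally:
docker save myimage:0.0.1-SNAPSHOT | gzip | nc remote-host 9000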
But as long as your production server is that low on space you are vulnerable to a bunch of other problems.
Until you can provide more free space I wouldn't create files locally, zipped or not. You could bring important services down when you run out of free space.
I use the following command to attach a disk to a running KVM virtual server:
virsh attach-disk vps_89 /dev/nbd31 --target vdc --driver tap
The disk is attached correctly. However, the disk is not attached as vdc inside the virtual server. I know the documentation says that the --target parameter is only giving a "hint" to the virtual server, but I can't really believe there is NO way to force the disk to be attached as a certain device (at least on Linux).
I am controlling my virtual servers through scripts and I must trust that the disk is attached to the exact device as I tell it to.
When I detach the disk using this:
virsh detach-disk vps_89 --target vdc
When I then re-attach the same disk again, the device ID seems to be incremented each time, i.e. vdc, vdd, vde, vdf, etc. (totally ignoring my --target param).
Does anyone know a good (and reliable) way of attaching disks to KVM in a predictable way?
According to RedHat [1] and libvirt [2] this is not a bug, it's a feature:
The actual device name specified is not guaranteed to map to the
device name in the guest OS. Treat it as a device ordering hint.
The only available solution is to use UUIDs (/dev/disk/by-uuid/) for handling disks inside the virtual machine.
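Inside the guest that could look like this (the UUID is only a placeholder; read the real one with blkid after attaching the disk):
# find the filesystem UUID of the newly attached disk
blkid
# mount it by UUID, independent of whichever vdX name it received
mount /dev/disk/by-uuid/3e6be9de-8139-11d1-9106-a43f08d823a6 /mnt/data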
[1] https://bugzilla.redhat.com/show_bug.cgi?id=693372
[2] http://libvirt.org/formatdomain.html