I am trying to automate the collection of disk information, such as manufacturer, model, type (e.g. HDD, SSD, NVMe) and size, for all the storage hardware on the server, to keep track of our test rig's hardware configuration over time. I have used a number of commands to get information about the SSDs, HDDs and NVMe drives installed on my server, which runs the ESXi hypervisor, and I am trying to parse their output:
esxcli storage core device list
esxcli storage core adapter list
lspci |grep -i storage
esxcfg-scsidevs -a
Each of these partially provides the information I need; however, I am not able to piece them together, since I cannot identify a common key for each drive that appears in the output of all these commands. Is there an ESXi command that puts all the information I need in one place?
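For reference, one way to key everything is on the device identifier (`naa.`/`t10.`/`eui.`) that `esxcli storage core device list` prints as the header of each record; the same identifier appears in `esxcfg-scsidevs -l` output, so it can serve as the join key. A sketch (the field names are assumptions that match typical esxcli output; verify against your ESXi version):

```shell
# Sketch: collect per-device fields from "esxcli storage core device list",
# keyed on the device identifier printed as each record's unindented header.
parse_core_device_list() {
  awk '
    /^[^ ]/     { dev=$1; next }       # unindented line = device id (naa.xxx)
    { sub(/^[ \t]+/, "") }             # strip indentation from detail lines
    /^Vendor: / { vendor[dev]=substr($0, 9) }
    /^Model: /  { model[dev]=substr($0, 8) }
    /^Size: /   { size[dev]=substr($0, 7) }
    /^Is SSD: / { ssd[dev]=substr($0, 9) }
    END {
      for (d in model)
        printf "%s vendor=%s model=%s sizeMB=%s ssd=%s\n",
               d, vendor[d], model[d], size[d], ssd[d]
    }'
}

# Usage (on the ESXi shell):
#   esxcli storage core device list | parse_core_device_list
```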
Synopsis: a remote instance gets connected to the Internet via satellite modem only while a technician visits the cabin. The technician sets up the application stack via docker compose and leaves the location. The location has no Internet connection and periodically loses electricity (once every few days).
The application stack is typical, e.g. MySQL + Node.js. And it is used by "polar bears". I mean nobody; it is a monitoring app.
How do I ensure that the Docker images are persisted for an indefinite amount of time and that the compose stack survives endless reboots?
Unfortunately there is no really easy solution.
But with a little bit of yq magic to parse docker-compose.yaml, plus the docker save command, it is possible to store the images locally in a specific location.
Then we can add a startup script that imports these images into the local Docker cache using docker load.
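A sketch of that approach; the yq expression assumes mikefarah's yq v4 and a compose file where every service has an `image:` key, and all paths are illustrative:

```shell
# Save every image referenced by docker-compose.yaml to local tar files.
IMAGE_DIR=/opt/docker-images
mkdir -p "$IMAGE_DIR"
for img in $(yq '.services[].image' docker-compose.yaml); do
  # "mysql:8.0" -> "mysql_8.0.tar"
  file="$IMAGE_DIR/$(echo "$img" | tr '/:' '__').tar"
  docker save -o "$file" "$img"
done

# Startup script (e.g. run by a systemd unit before `docker compose up -d`):
for f in "$IMAGE_DIR"/*.tar; do
  docker load -i "$f"
done
```

Pair this with `restart: unless-stopped` (or `always`) on each service and Docker enabled at boot, so the stack comes back by itself after a power loss.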
So I have a working Dask/SLURM cluster of 4 Raspberry Pis with a common NFS share, on which I can run Python jobs successfully.
However, I want to add some more ARM devices to my cluster that do not support NFS mounts (kernel module missing), so I want to move to FUSE-based FTP mounts with CurlFtpFS.
I have set up the mounts successfully with an anonymous username and no password, and the common FTP share can be seen by all the nodes (just as before, when it was an NFS share).
I can still run SLURM jobs (since they do not use the share), but when I try to run a Dask job, the master node times out, complaining that no worker nodes could be started.
I am not sure what exactly the problem is, since the share is open to anyone for read/write access (e.g. for logs and Dask queue intermediate files).
Any ideas how I can troubleshoot this?
I don't believe anyone has a cluster like yours!
At a guess, filesystem access via FUSE, FTP and the Pi is much slower than the OS expects, and you are seeing the effects of low-level timeouts; i.e., from Dask's point of view it appears that file reads are failing. Dask needs access to storage for configuration and sometimes temporary files, so you should make sure those locations are on local storage or turned off. However, if this is happening during import of modules, which you keep on the shared drive by design, there may be no fixing it (Python loads many small files during import). Why not use rsync to copy the files to the nodes instead?
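If you go the rsync route, a minimal sketch (hostnames and paths are made up) that mirrors the shared code to each node's local disk:

```shell
# Copy the project/Python environment from the share to local storage on
# each worker, so imports read from fast local disk instead of the FUSE mount.
for host in node1 node2 node3; do
  rsync -az --delete /mnt/share/project/ "pi@$host:/home/pi/project/"
done
```

It is also worth pointing the workers' scratch space at local disk, e.g. starting them with `dask-worker --local-directory /tmp/dask ...`, so only your own data ever touches the FTP mount.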
I'd like to spin up an Ubuntu image with certain tools like testdisk for disk recovery. How can I manage all detected volumes on the host machine with testdisk inside a Docker container?
The O'Reilly info worked for Windows, with the stated limitations (inability to repartition). I'm assuming that if you use Disk Management to see the disk number (0, 1, 2, etc.), it will correspond to the sd# you have to reference. Supposedly, with Windows Server editions, you can use the device flag and specify a device class GUID to share a device with Docker. But as previously mentioned, it isn't raw access but rather a shared device.
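On a Linux host, by contrast, you can hand the container the host's block devices directly; a sketch (image tag and device names are illustrative):

```shell
# Blunt instrument: a privileged container sees the host's /dev,
# so testdisk can scan every attached disk.
docker run -it --rm --privileged ubuntu:22.04 \
  bash -c 'apt-get update && apt-get install -y testdisk && testdisk /list'

# Narrower: expose a single disk to the container.
docker run -it --rm --device=/dev/sdb ubuntu:22.04 bash
```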
I am trying to set up a server for team note taking, and I am wondering what the best way is to back up its data, i.e. my notes, automatically.
Currently I plan to run the server in a Docker container.
The container will be hosted by a hosting service (such as Google).
I found a free hosting service that fits my needs, but it does not allow mounting volumes into a container.
Therefore, I think the only way for me to back up my data is to transfer it to some other cloud service.
However, this requires storing some sort of sensitive authentication data in my Docker image, which is obviously not great.
So:
Is it possible to transfer data from a Docker container to a cloud service without the risk of leaking a password/private key?
Is there any other way to backup my data?
I don't have to use docker as all I need is actually Node.js.
But the server must be hosted on some remote machines because I don't have the ability/time/money to host a machine on my own...
I use Borg Backup to back up our servers (including Docker volumes), and it has saved the day many times due to failure and stupidity.
It transfers over SSH, so communications are encrypted. The repositories it uses are also encrypted on disk, which keeps all your data safe at rest. It de-duplicates, snapshots, prunes, compresses... the feature list is quite large.
After the first backup, subsequent backups are much faster because it only submits the changes since the previous backup.
You can also mount the snapshots as filesystems so you can hunt down the single file you deleted or just restore the whole lot. The mounts can also be done remotely.
I've configured ours to back up /home, /etc and /var/lib/docker/volumes (among other directories).
We rent a few cheap storage VPSs and send the data up to them nightly. They're in different geographic locations with different hosting providers, you know, because we're paranoid.
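For illustration, a nightly run of that setup might look like this (the repository URL, passphrase handling and retention numbers are all made up):

```shell
#!/bin/sh
# Illustrative nightly borg job; repo URL and retention policy are examples.
export BORG_REPO='ssh://backup@storage1.example.com/./repo'
export BORG_PASSCOMMAND='cat /root/.borg-passphrase'  # keep the secret out of the script

# One archive per day, named after host and date.
borg create --stats --compression zstd \
    ::'{hostname}-{now:%Y-%m-%d}' \
    /home /etc /var/lib/docker/volumes

# Thin out old archives: keep 7 daily, 4 weekly, 6 monthly.
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6
```

Restores or single-file hunts can then go through `borg mount`, as described above.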
Besides Docker Swarm secrets, don't forget bind mount strategies: you could keep your data in a volume.
In that case, you can have a backup strategy run on the host (instead of in the container at runtime), which takes that volume, compresses it and saves it elsewhere. See for instance this answer or this one.
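The host-side backup of a volume can be as simple as a throwaway container that tars it up; the volume and archive names here are illustrative:

```shell
# Archive the named volume "notes_data" from the host via a short-lived
# container, preserving ownership and permissions inside the volume.
docker run --rm \
  -v notes_data:/data:ro \
  -v "$PWD":/backup \
  alpine tar czf "/backup/notes_data-$(date +%F).tar.gz" -C /data .
```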
I use the following command to attach a disk to a running KVM virtual server:
virsh attach-disk vps_89 /dev/nbd31 --target vdc --driver tap
The disk is attached correctly. However, the disk is not attached as vdc inside the virtual server. I know the documentation says that the --target parameter is only giving a "hint" to the virtual server, but I can't really believe there is NO way to force the disk to be attached as a certain device (at least on Linux).
I am controlling my virtual servers through scripts and I must trust that the disk is attached to the exact device as I tell it to.
When I detach the disk using this:
virsh detach-disk vps_89 --target vdc
Then, when re-attaching the same disk, the device ID seems to be incremented each time, i.e. vdc, vdd, vde, vdf, etc. (totally ignoring my --target parameter).
Does anyone know a good (and reliable) way of attaching disks to KVM in a predictable way?
According to Red Hat [1] and libvirt [2], this is not a bug, it's a feature:
The actual device name specified is not guaranteed to map to the
device name in the guest OS. Treat it as a device ordering hint.
The only available solution is to use UUIDs (/dev/disk/by-uuid/) to address the disk inside the virtual machine.
https://bugzilla.redhat.com/show_bug.cgi?id=693372
http://libvirt.org/formatdomain.html
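One related workaround (not from the links above, so treat it as an assumption to verify): libvirt lets you set a `<serial>` on the disk, and with virtio the guest then exposes a stable path under `/dev/disk/by-id/`. A sketch with illustrative names:

```shell
# Attach via an XML fragment instead of attach-disk, setting a serial so the
# guest gets a stable /dev/disk/by-id/virtio-<serial> symlink.
cat > disk.xml <<'EOF'
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/nbd31'/>
  <target dev='vdc' bus='virtio'/>
  <serial>vps89disk01</serial>
</disk>
EOF
virsh attach-device vps_89 disk.xml --live

# Inside the guest, scripts can then reference the disk as:
#   /dev/disk/by-id/virtio-vps89disk01
```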