How to identify disk type (HDD, SSD, NVMe)? - hard-drive

I need to create a zpool configuration based on the requirements below:
HDD disks: create mirror or RAIDZ
SSD disks: create cache pool
But how do I identify the disk type? Is there any logic to identify it based on read/write speed? If so, how?
Note: my server runs FreeBSD, but please don't just tell me to post on the FreeBSD forum; that didn't solve my issue. If there are no ready-made commands, at least tell me the logic. How can I check read/write speed?

You can check the full output of the lspci command:
sudo lspci -vvv | grep prog-if
It indicates the interface (NVM Express, IDE, etc.).
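Since the server runs FreeBSD, here is a minimal sketch of the classification logic using stock FreeBSD tools; the device name ada0 is just an example, and the exact field names can differ between releases:
# list every disk the kernel knows about
sysctl -n kern.disks
# rotation rate 0 generally means a non-rotating device (SSD); a non-zero RPM means HDD
geom disk list ada0 | grep -i rotationrate
# NVMe devices show up as nvd*/nda* and can be listed directly
nvmecontrol devlist
# rough read-speed check using diskinfo's simple built-in benchmark
diskinfo -t ada0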

Related

Efficient use of Docker containers for fuzzing

I've been trying out various fuzzers (AFL, Nautilus, KLEE, etc.) on different applications that take a file input, and I was looking into pointing the "out" directory of these fuzzers (e.g. afl-fuzz -i in -o out ./app @@) to some sort of partition in memory (like ramfs). Is this necessary for these types of fuzzers? I'm concerned about all of the I/O to my disk for reading and writing files to send to the application.
I came across this answer to a similar question: Running Docker in Memory?
They mentioned that you can use -v to accomplish this. But when I tried to mount the RAM disk using the -v option for the out directory, I saw a significant performance drop in executions/sec in AFL. This dropped from ~2000 execs/sec to ~100 execs/sec. I know this is not because of the RAM disk partition, because using -v without the RAM disk passed in yields the same poor performance. Currently I have been running the fuzzer and then copying the contents over after I stop it to improve the performance. Should I be concerned with the hit on my disk?
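For reference, a minimal sketch of the two ways to put the out directory in memory that are discussed above; the image name my-fuzz-image and the /fuzz paths are hypothetical:
# option 1: let Docker create a tmpfs inside the container
docker run --rm --tmpfs /fuzz/out:rw,size=512m my-fuzz-image \
    afl-fuzz -i /fuzz/in -o /fuzz/out ./app @@
# option 2: bind-mount a host ramdisk with -v
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=512m tmpfs /mnt/ramdisk
docker run --rm -v /mnt/ramdisk:/fuzz/out my-fuzz-image \
    afl-fuzz -i /fuzz/in -o /fuzz/out ./app @@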

Can I create a filesystem e.g. xfs in a Docker Container as part of an integration test?

I wish to test code that may have trouble on file systems that do not support d_type.
In order to do so I would like to create two small xfs file systems that respectively have ftype=0 which does not support d_type, and ftype=1 which does.
I'd like to run these tests in a Docker container as that is how our testing is set up.
It looks like I might be able to take advantage of the Docker devicemapper storage driver: https://docs.docker.com/storage/storagedriver/device-mapper-driver/
I do not necessarily control the Docker Engine, that is I don't want to rely on creating these filesystems on the underlying machine and then exposing them to my container - so I would want to be able to do this in my Dockerfile or one I am running in the container.
But maybe there are other or better ways to do this.
I'm not sure this is a complete answer but hopefully a step towards it.
I am trying to do something similar. I want to do some tests using at least xfs, tmpfs & sshfs. I was hoping to find a fuse solution to emulate xfs as well.
You could definitely put a tmpfs, fuse-sshfs and even nfs inside a Docker container.
I have no experience with nbd, but I think you could use it within Docker to provide xfs via a loopback device. See this blog for example.
This might not be necessary though, as you can mount an image as a partition, e.g.
mount -t <fs type> -o loop file.img /mnt
Assuming this works in Docker, we have our solution (a rough sketch is given below). I haven't tried this myself yet. If you get there first please post your solution
(perhaps you did, as this question is a year old).
See also Emulate a hard drive in Linux
Otherwise Vagrant is probably a good solution for this kind of problem.
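To make the loop-mount idea concrete, here is a rough, untested sketch of building the two xfs images with ftype=0 and ftype=1 and mounting them inside a privileged container; the paths are examples, and note that recent mkfs.xfs only allows ftype=0 when CRCs are disabled:
# create two small image files
truncate -s 300m /tmp/xfs-ftype0.img
truncate -s 300m /tmp/xfs-ftype1.img
# ftype=0 (no d_type support) requires crc=0 on current mkfs.xfs
mkfs.xfs -m crc=0 -n ftype=0 /tmp/xfs-ftype0.img
mkfs.xfs -n ftype=1 /tmp/xfs-ftype1.img
# inside a container started with --privileged (or CAP_SYS_ADMIN plus access to loop devices)
mkdir -p /mnt/ftype0 /mnt/ftype1
mount -t xfs -o loop /tmp/xfs-ftype0.img /mnt/ftype0
mount -t xfs -o loop /tmp/xfs-ftype1.img /mnt/ftype1
# verify the ftype setting of each mount
xfs_info /mnt/ftype0 | grep ftype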

Listing all the storage drives in ESXi

I am trying to automate the collection of disk information such as manufacturer, model, type (e.g. HDD, SSD, NVMe) and size for all the storage hardware on the server, to keep track of server hardware configuration over time for our test rig. I have used a bunch of commands to get information about all the SSDs, HDDs and NVMe drives installed on my server, which runs the ESXi hypervisor. I am trying to parse the output of these commands:
esxcli storage core device list
esxcli storage core adapter list
lspci | grep -i storage
esxcfg-scsidevs -a
Each of these partially provides the information I need; however, I am not able to piece them together since I cannot identify a common key for each drive that appears in the output of all these commands. Is there an ESXi command I can use where all the information I need is located in one place?
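As a hedged sketch of the parsing approach: the device identifier that starts each block of esxcli storage core device list (typically a naa./t10./mpx./eui. name) can serve as the common key across the commands. The field names below are assumptions based on typical output and may differ on your build:
# hypothetical parsing sketch: collect a few fields per device, keyed on the device identifier
esxcli storage core device list | awk '
  /^(naa|t10|mpx|eui)\./ { dev = $1 }      # unindented header line = device identifier
  $1 == "Model:"  { model[dev] = $2 }      # only the first word of the model, sketch only
  $1 == "Size:"   { size[dev]  = $2 }      # size in MB
  /Is SSD:/       { ssd[dev]   = $3 }
  END { for (d in model) print d, model[d], size[d], ssd[d] }'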

Can I share docker images between windows and linux?

This might seem a stupid question, but here I am:
I'm running Ubuntu 16.04 and managed to install Windows 10 in dual boot.
Having run Docker exclusively on Linux so far, I decided to give it a try on Windows 10.
As I already downloaded several Docker images on my Linux system, I'm willing to have a "shared"-like development environment. I must admit it would be a waste of time and disk space to re-download, on my fresh Windows install, Docker images I already downloaded before (on Linux).
So my question is simple: can I use my Linux images / containers on Windows? I'm thinking of something like a global path variable, configured in Docker for Windows, pointing to my Linux images.
Any idea if this is possible, and if so, what the pros, cons and caveats are?
Thanks for helping me on this one.
Well, I would suggest creating your own local registry, pushing these images there, and then pulling them in your Windows Docker.
Sonatype Nexus (an artifact storage repository) can also be used to store your Docker images. Check if this helps.
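A minimal sketch of the local-registry route, assuming the registry runs somewhere reachable from both systems (since dual boot means only one OS is up at a time, that usually means a third machine or shared storage); the image name myimage is hypothetical, and a plain-HTTP registry also has to be whitelisted under insecure-registries in the Docker daemon settings:
# on the side that already has the images: run a registry and push to it
docker run -d -p 5000:5000 --name registry registry:2
docker tag myimage:latest localhost:5000/myimage:latest
docker push localhost:5000/myimage:latest
# on the other side (replace <registry-host> with the registry machine's address)
docker pull <registry-host>:5000/myimage:latest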
I guess it's not possible to share the same folder (to reduce disk usage) since the stored files are totally different:
Under Windows the file is:
C:\Users\Public\Documents\Hyper-V\Virtual hard disks\MobyLinuxVM.vhdx
The .vhdx extension is specific to MS systems.
and under Linux it consists of 2 files:
/var/lib/docker/devicemapper/devicemapper/data
/var/lib/docker/devicemapper/devicemapper/metadata
see here for details
Where are Docker images stored on the host machine?
The technology underneath is a filesystem layout specific to Docker. Even if the two installations could use the same filesystem storage, it wouldn't be a good idea IMHO.
If the purpose is only to save time when reinstalling, just dump the list of images from one system and re-pull them on the other one:
docker images --format "{{.Repository}}:{{.Tag}}" > image-list.txt
then loop on the other OS:
while read p; do
  docker pull "$p"
done < image-list.txt

KVM virsh attach-disk does not honour device letter

I use the following command to attach a disk to a running KVM virtual server:
virsh attach-disk vps_89 /dev/nbd31 --target vdc --driver tap
The disk is attached correctly. However, the disk is not attached as vdc inside the virtual server. I know the documentation says that the --target parameter is only giving a "hint" to the virtual server, but I can't really believe there is NO way to force the disk to be attached as a certain device (at least on Linux).
I am controlling my virtual servers through scripts and I must trust that the disk is attached to the exact device as I tell it to.
When I detach the disk using this:
virsh detach-disk vps_89 --target vdc
Then, when re-attaching the same disk again, the device ID seems to be incremented each time, i.e. vdc, vdd, vde, vdf, etc. (totally ignoring my --target param).
Does anyone know a good (and reliable) way of attaching disks to KVM in a predictable way?
According to Red Hat [1] and libvirt [2], this is not a bug, it's a feature:
The actual device name specified is not guaranteed to map to the
device name in the guest OS. Treat it as a device ordering hint.
The only available solution is to use the UUID (/dev/disk/by-uuid/) for addressing the disk inside the virtual machine.
[1] https://bugzilla.redhat.com/show_bug.cgi?id=693372
[2] http://libvirt.org/formatdomain.html
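For illustration, a minimal sketch of the by-UUID approach inside the guest; the <uuid> placeholder is to be taken from the blkid output, and the disk must already carry a filesystem:
# list block devices together with their filesystem UUIDs
blkid
# mount the newly attached disk via its stable by-uuid path instead of guessing vdX
mkdir -p /mnt/data
mount /dev/disk/by-uuid/<uuid> /mnt/data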
