Is the proc filesystem held in memory (physical memory)? Since the kernel constantly updates /proc, I am guessing that its contents have to be in physical memory (RSS) for efficiency.
Can anyone shed some light on this?
The Linux /proc filesystem is a virtual filesystem that lives entirely in RAM (it is never stored on the hard drive): the kernel generates the contents of each entry on demand when you read it. That means it exists only while the computer is turned on and running.
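A quick way to see this for yourself from a shell (the behaviour noted in the comments is typical; details vary by system):
mount -t proc          # lists the proc mount, e.g. "proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)"
ls -l /proc/meminfo    # size is reported as 0: the content does not exist until something reads it
cat /proc/uptime       # the kernel produces this answer at the moment you read the file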
Here are some resources where you can find more details about this:
The /proc filesystem
Exploring /proc File System in Linux
Discover the possibilities of the /proc directory
Hope that helps.
I have an annoying issue getting USB mass storage on a BeagleBone Black (BBB) to work when it is connected to Windows.
I created an image:
dd bs=1M if=/dev/zero of=/usb.bin count=64
Formatted it:
mkdosfs /usb.bin -F 32 -I
I have mounted it, copied files to and from it, no problem.
Then I created a USB mass storage gadget:
modprobe g_mass_storage file=./usb.bin stall=0 ro=0
Connected it to a USB port on my Linux machine: no problem, I can see and manipulate the files.
On Windows I can see the drive, the size is correct, but filesystem is not recognized.
With ro=0 I am able to create a partition from within Windows and format it. I can copy files to and from it, but when I mount it on the BBB I cannot see the files copied using Windows. I can still see the files I copied to the mountpoint on the BBB, though.
Can someone tell me what I am doing wrong?
I disabled everything regarding g_multi, including RNDIS, Serial, CDC.
And it works perfectly under Linux.
You have created a raw disk image without a partition table. On the Linux side that is fine: Linux doesn't care whether it's a file, whether it has a partition table, and so on.
Windows, however, gets confused by the missing partition table, as you noticed.
Having a partition table is preferable. What you can do on the Linux side of things:
losetup --partscan - have the file treated as a disk with a partition table, so you get a device node for each partition (see the sketch after the mount example below)
Mount the partition directly using an offset
In this particular case the latter is probably the quickest. There is only one partition and the offset is known.
fdisk -l ./usb.bin
Multiply the Start value by the unit (sector) size and use the result as the offset in place of the 12345 placeholder below. For example, a partition starting at sector 2048 with 512-byte sectors gives an offset of 2048 × 512 = 1048576.
mount -o loop,offset=12345 ./usb.bin /mnt
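If you prefer the losetup --partscan route instead, a rough sketch (the loop device name is whatever losetup assigns; /dev/loop0 is used here only for illustration):
losetup --partscan --find --show ./usb.bin   # prints the loop device it picked, e.g. /dev/loop0
mount /dev/loop0p1 /mnt                      # the partition appears as its own device node
# ... work with the files, then clean up:
umount /mnt
losetup -d /dev/loop0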
Make sure never to access the image from both sides at the same time, as this will lead to filesystem damage and data loss.
See also e.g. https://askubuntu.com/a/69447
I've been trying out various fuzzers (AFL, Nautilus, KLEE, etc.) on different applications that take a file as input, and I was looking into pointing the "out" directory of these fuzzers (e.g. afl-fuzz -i in -o out ./app @@) at some sort of in-memory partition (like ramfs). Is this necessary for these types of fuzzers? I'm concerned about all of the I/O to my disk from reading and writing the files sent to the application.
I came across this answer to a similar question: Running Docker in Memory?
They mentioned that you can use -v to accomplish this. But when I tried to mount the RAM disk using the -v option for the out directory, I saw a significant performance drop in AFL, from ~2000 execs/sec to ~100 execs/sec. I know this is not caused by the RAM disk itself, because using -v without the RAM disk yields the same poor performance. For now I run the fuzzer and copy the contents over after I stop it, which keeps performance up. Should I be concerned about the hit on my disk?
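For reference, the way I set up the RAM disk and passed it in looked roughly like this (the paths and image name are just placeholders):
mkdir -p /mnt/ramdisk
mount -t tmpfs -o size=1g tmpfs /mnt/ramdisk      # RAM-backed directory on the host
docker run -v /mnt/ramdisk:/fuzz/out my-fuzz-image \
    afl-fuzz -i /fuzz/in -o /fuzz/out ./app @@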
I am trying to benchmark I/O performance on my host and in a Docker container using the flexible I/O tester (fio) with O_DIRECT enabled in order to bypass memory caching. The result is very suspicious: Docker performs almost 50 times better than my host machine, which is impossible. It seems like Docker is not bypassing the cache at all, even when I run it in --privileged mode. This is the command I ran inside the container. Any suggestions?
fio --name=seqread --rw=read --direct=1 --ioengine=libaio --bs=4k --numjobs=1 --size=10G --runtime=600 --group_reporting --output-format=json >/home/docker/docker_seqread_4k.json
(Note this isn't really a programming question so Stackoverflow is the wrong place to ask this... Maybe Super User or Serverfault would be a better choice and get faster answers?)
The result is very suspicious. docker performs almost 50 times better than my host machine which is impossible. It seems like docker is not bypassing the caching at all.
If your best-case latencies are suspiciously small compared to your worst-case latencies, it is highly likely your suspicions are well founded and that kernel caching is still happening. Asking for O_DIRECT is a hint, not an order, and the filesystem can choose to ignore it and use the cache anyway (see the part about "You're asking for direct I/O to a file in a filesystem but...").
If you have the option and you're interested in raw disk speed, it is better to run such a test outside of a container (with all the caveats that implies). Another option, when you can't or don't want to disable caching, is to make the I/O at least two to three times the size of RAM (both in the total amount transferred and in the region of the file being touched), so that the majority of it cannot be satisfied from buffers/cache; if you're doing write I/O, also use something like end_fsync=1.
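For example, on a host with 8 GB of RAM (the sizes and path here are just an illustration; scale them to your machine), a write test along these lines leaves far less room for the page cache to flatter the numbers:
fio --name=bigwrite --rw=write --bs=4k --numjobs=1 --size=24G \
    --end_fsync=1 --group_reporting --filename=/path/on/real/disk/fio_testfile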
In summary, the filesystem being used by docker may make it impossible to accurately do what you're requesting (measure the disk speed by bypassing cache while using whatever your default docker filesystem is).
Why a Docker benchmark may not give the results you expect
By default, the Docker engine uses the OverlayFS driver [1][2] for data storage in containers. It assembles all of the different layers from the image and makes them readable. Writes always go to the "top" layer, which is the container's own storage.
When performing reads and writes to the container's filesystem, you're passing through Docker's overlay2 driver, through the OverlayFS kernel driver, through your filesystem driver (e.g. ext4) and onto your block device. Additionally, as Anon mentioned, DIRECT/O_DIRECT is just a hint, and may not be respected by any of the layers you're passing through.
Getting more accurate results
To get accurate benchmarks within a Docker container, you should write to a volume mount or change your storage driver to one that is not overlay-based, such as the Device Mapper driver or the ZFS driver.
Both the Device Mapper driver and the ZFS driver require a dedicated block device (you'll likely need a separate hard drive), so using a volume mount might be the easiest way to do this.
Use a volume mount
Use the -v option with a directory that sits on a block device on your host.
docker run -v /absolute/host/directory:/container_mount_point alpine
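Your original fio command could then target a file under that mount point, for example (the image name my-fio-image is just a placeholder for any image with fio installed):
docker run --rm -v /absolute/host/directory:/container_mount_point my-fio-image \
    fio --name=seqread --rw=read --direct=1 --ioengine=libaio --bs=4k \
        --numjobs=1 --size=10G --runtime=600 --group_reporting \
        --filename=/container_mount_point/fio_testfile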
Use a different Docker storage driver
Note that the storage driver must be changed on the Docker daemon (dockerd) and cannot be set per container. From the documentation:
Important: When you change the storage driver, any existing images and containers become inaccessible. This is because their layers cannot be used by the new storage driver. If you revert your changes, you can access the old images and containers again, but any that you pulled or created using the new driver are then inaccessible.
With that disclaimer out of the way, you can change your storage driver by editing daemon.json and restarting dockerd.
{
"storage-driver": "devicemapper",
"storage-opts": [
"dm.directlvm_device=/dev/sd_",
"dm.thinp_percent=95",
"dm.thinp_metapercent=1",
"dm.thinp_autoextend_threshold=80",
"dm.thinp_autoextend_percent=20",
"dm.directlvm_device_force=false"
]
}
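After editing daemon.json, restart the daemon and check that the new driver is active (assuming a systemd-based host):
systemctl restart docker
docker info | grep -i 'storage driver'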
Additional container benchmark notes - kernel
If you are trying to compare different flavors of Linux, keep in mind that Docker is still running on your host machine's kernel.
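A quick way to see this: the kernel version reported inside any container is the host's kernel version.
uname -r                          # kernel version on the host
docker run --rm alpine uname -r   # same version; only the userland differs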
As far as I understand, Docker uses memory-mapped files to start from an image. Since I can do this over and over again, and as far as I remember start different instances of the same image in parallel, I guess Docker abstracts the filesystem and stores changes somewhere else.
I wonder whether Docker can be configured (or does this by default) to run in a memory-only mode, without some sort of temporary file?
Docker uses a union, copy-on-write filesystem that lets it work in "layers" (overlay, devicemapper, BTRFS, etc.). Because of copy-on-write, starting new containers is cheap; a new layer is only actually created when the container performs its first write.
When you start a container from an image, you are not using memory-mapped files to restore a frozen process (unless you built all of that into the image yourself...). Rather, you're starting a normal Unix process but inside a sandbox where it can only see its own unionfs filesystem.
Starting many copies of an image where no copy writes to disk is generally cheap and fast. But if you have a process with a long start-up time, you'll still pay that cost for every instance.
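You can watch the copy-on-write layer in action with docker diff, which lists only what a container has changed relative to its image (the container name below is made up):
docker run --name cow-demo alpine sh -c 'echo hello > /newfile'
docker diff cow-demo              # prints "A /newfile": only the write landed in the top layer
docker rm cow-demo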
As for running Docker containers wholly in memory, you could create a RAM disk and use it as Docker's storage area (the location is configurable, but it is typically /var/lib/docker).
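A rough sketch of that approach, assuming the default data root of /var/lib/docker and accepting that all container state vanishes on unmount or reboot:
systemctl stop docker
mount -t tmpfs -o size=8g tmpfs /var/lib/docker   # the size is an arbitrary example
systemctl start docker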
In typical use-cases, I would not expect this to be a useful performance tweak. First, you'll spend a lot of memory holding files you won't access: the base layer of an image contains most of a Linux system's files, and if you pull 10 images from Docker Hub you can easily hit 20 GB worth of image data (after that the storage cost tends to plateau). Second, the operating system already manages memory, caching and swapping pretty well, and processes running inside a container get all of that for free, which limits how much a RAM disk can add. Third, for most of the cases where a RAM disk might help, you can use the -v flag to mount it as a volume into the container rather than storing your whole unionfs there.
I have an ESXi 4.1 host with some virtual machines. The host was using external storage via NFS plus local storage on a SATA disk.
I moved all virtual machines from the NFS datastore to the SATA datastore. Then I tried to unmount the NFS datastore, but it failed with an error saying the datastore was in use, even though it was empty.
So I used SSH access to unmount the NFS datastore:
~ # esxcfg-nas -l
nfs1 is /vmware from 192.168.2.131 mounted
~ # esxcfg-nas -d nfs1
NAS volume nfs1 deleted.
~ # esxcfg-nas -l
nfs1 is /vmware from 192.168.2.131 unmounted
But now the vSphere Client shows a big message:
The VMware ESX Server does not have persistent storage.
At Configuration -> Storage the list is empty; before I removed the NFS datastore, both datastores (NFS and SATA) were listed there.
Still, everything seems to be working perfectly: all virtual machines continue to run.
I tried Rescan All, with no luck. If I try to add new storage, the SATA disk appears as available.
What can I do to restore the datastore? I'm scared of doing anything and losing all the data on the SATA disk.
Any ideas?
It seems there are two very smart people who can downvote my question without sharing their thoughts.
For everyone else with the same problem, I've found the solution. When I refresh the datastores, the vSphere Client shows 'Complete', but the following gets logged in /var/log/messages:
Jun 13 11:32:34 Hostd: [2014-06-13 11:32:34.677 2C3E1B90 error 'FSVolumeProvider' opID=EB3B0782-00001239] RefreshVMFSVolumes: ProcessVmfs threw HostCtlException Error interacting with configuration file /etc/vmware/esx.conf
Jun 13 11:32:34 ker failed : Error interacting with configuration file /etc/vmware/esx.conf: I am being asked to delete a .LOCK file that I'm not sure is mine. This is a bad thing and I am going to fail.
[...]
Jun 13 11:32:35 ith configuration file /etc/vmware/esx.conf: I am being asked to delete a .LOCK file that I'm not sure is mine. This is a bad thing and I am going to fail. Lock should be released by (0)
To solve this, just run the following from an SSH session:
# services.sh restart
After that, my SATA datastore appeared again with no problem.
Hope this helps somebody sometime.