pvcreate not able to initialize physical volume - lvm

I have an application that calls pvcreate each time it runs.
I can see the volumes in my VM as follows:
$ pvscan
PV /dev/vda5 VG ubuntu-vg lvm2 [99.52 GiB / 0 free]
Total: 1 [99.52 GiB] / in use: 1 [99.52 GiB] / in no VG: 0 [0 ]
$ pvcreate --metadatasize=128M --dataalignment=256K '/dev/vda5'
Can't initialize physical volume "/dev/vda5" of volume group "ubuntu-vg" without -ff
$ pvcreate --metadatasize=128M --dataalignment=256K '/dev/vda5' -ff
Really INITIALIZE physical volume "/dev/vda5" of volume group "ubuntu-vg" [y/n]? y
Can't open /dev/vda5 exclusively. Mounted filesystem?
I have also tried wipefs and observed the same result for the above commands:
$ wipefs -af /dev/vda5
/dev/vda5: 8 bytes were erased at offset 0x00000218 (LVM2_member): 4c 56 4d 32 20 30 30 31
How can I get pvcreate to succeed?
Does anything need to change in my VM?

Your partition (/dev/vda5) is already in use as the only physical volume of the volume group ubuntu-vg, and pvcreate refuses to re-initialize a PV that belongs to a VG. The follow-up error ("Can't open /dev/vda5 exclusively") means the VG is active and the kernel holds the device open, most likely because your root filesystem lives on it. You cannot use the same partition in two PVs, and you cannot add it again. To run pvcreate on it you would first have to remove it from the VG (vgreduce, or vgremove for the whole group), which destroys the data on it; if the running system lives on that VG, this has to be done from a rescue or live environment.
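A hedged sketch of a pre-check before calling pvcreate (assuming the LVM tools are installed; /dev/vda5 is the device from the question, and the vgreduce/vgremove steps it suggests are destructive):

```shell
# Pre-check: is the device already an LVM PV inside a volume group?
# /dev/vda5 is the device from the question.
dev=/dev/vda5
if command -v pvs >/dev/null 2>&1; then
    # pvs prints the owning VG name (empty if the PV is orphaned)
    vg=$(pvs --noheadings -o vg_name "$dev" 2>/dev/null | tr -d ' ')
    if [ -n "$vg" ]; then
        echo "$dev is in VG '$vg'; free it first (vgreduce/vgremove), then pvcreate"
    else
        echo "$dev is not in any VG; pvcreate should not be blocked by LVM metadata"
    fi
else
    echo "LVM tools not installed on this system"
fi
```

This only checks LVM metadata; even after that, pvcreate still needs to open the device exclusively, so nothing (filesystem mount, active VG, device-mapper) may be holding it open.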


Why is my cgroup write throughput not limited?

I am trying to set an upper write-throughput limit per cgroup via the blkio cgroup controller.
I have tried it like this:
echo "major:minor 10485760" > /sys/fs/cgroup/blkio/docker/XXXXX/blkio.throttle.write_bps_device
This should limit throughput to 10 MB/s. However, the tool monitoring the server's disks still reports write rates well above that.
I expected the line to hold somewhere around 10 MB/s. Can somebody explain this behaviour to me, and maybe propose a better way to limit throughput?
Are you sure that the major/minor numbers you specified on the command line are correct? Moreover, as you are running in Docker, the limitation applies only to the processes running inside the container, not to processes running outside it. So you need to check where the monitoring tool gets its numbers from (does it count I/O from all processes, inside and outside the container, or only from the processes inside the container?).
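One way to double-check a device's major:minor pair is stat, which reports the two numbers in hexadecimal (%t and %T), so they need converting to decimal before use in blkio.throttle files. /dev/null is used below only because it exists on every system; substitute your actual block device:

```shell
# Print a device node's major:minor pair in decimal.
# stat -c %t / %T report the numbers in hex, so convert via printf.
dev=/dev/null   # replace with e.g. /dev/sdb
printf '%d:%d\n' "0x$(stat -c %t "$dev")" "0x$(stat -c %T "$dev")"
```

For /dev/null this prints 1:3. For real block devices, `lsblk -no MAJ:MIN /dev/sdb` gives the same answer with less conversion.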
To check the setting, the Linux kernel documentation provides an example with the dd command and a device limited to 1 MB/s on reads. You can try the same with a limit on writes, to see whether the monitoring tool is consistent with dd's output. Run dd inside the container.
For example, my home directory is located on /dev/sdb2:
$ df
Filesystem 1K-blocks Used Available Use% Mounted on
[...]
/dev/sdb2 2760183720 494494352 2125409664 19% /home
[...]
$ ls -l /dev/sdb*
brw-rw---- 1 root disk 8, 16 mars 14 08:14 /dev/sdb
brw-rw---- 1 root disk 8, 17 mars 14 08:14 /dev/sdb1
brw-rw---- 1 root disk 8, 18 mars 14 08:14 /dev/sdb2
First I check the unthrottled write speed to a file:
$ dd oflag=direct if=/dev/zero of=$HOME/file bs=4K count=1024
1024+0 records in
1024+0 records out
4194304 bytes (4,2 MB, 4,0 MiB) copied, 0,131559 s, 31,9 MB/s
I set the 1 MB/s write limit on the whole disk (8:16), since throttling does not work on the individual partition (8:18) on which my home directory resides:
# echo "8:16 1048576" > /sys/fs/cgroup/blkio/blkio.throttle.write_bps_device
# cat /sys/fs/cgroup/blkio/blkio.throttle.write_bps_device
8:16 1048576
dd's output confirms the limitation of the I/O throughput to 1 MB/s:
$ dd oflag=direct if=/dev/zero of=$HOME/file bs=4K count=1024
1024+0 records in
1024+0 records out
4194304 bytes (4,2 MB, 4,0 MiB) copied, 4,10811 s, 1,0 MB/s
So it is possible to do the same check inside a container.

How to get disk device information in container in golang?

From inside a Docker container, I want to list all the disk devices of the host machine, in Go or C++. I also need additional information such as free space. What should I do, and is this possible at all?
There is nothing special about Go or C++ that is required. You can use any relevant code or libraries that would examine Linux system devices for disk space or free space, because the environment the docker container provides is (typically) a Linux environment.
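For instance, free space for any filesystem the container can see needs nothing more than standard tools; this sketch uses df (which sits on top of the statfs(2) syscall, the same call Go's golang.org/x/sys/unix package and C's sys/statvfs.h expose directly if you prefer a library call):

```shell
# Report available space for the filesystem holding /.
# -P forces POSIX single-line output; -k uses 1 KiB blocks.
# Column 4 is "Available", column 6 is "Mounted on".
df -kP / | awk 'NR==2 {print $4 " KiB available on " $6}'
```

The same command produces the host's numbers inside a privileged container with the host devices mounted, and the container's own view otherwise, which is exactly the distinction the rest of this answer is about.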
Docker Solution
docker run --privileged <image> <program> will populate the /dev file system in the container, which contains the device files relevant to your system and allows the container to access those devices.
User Solution
You will have to tell your users, e.g. in your Docker Hub documentation or in error messages, to pass the
--privileged flag
when running your image, or it won't be able to access system devices.
Expect some scrutiny or cynicism from your more knowledgeable users, who will reasonably ask: why does it need that?
Details
According to Luc Juggery's blog on Medium:
Purpose of the --privileged flag
Running a container with the --privileged flag gives all the capabilities to the container and also access to the host’s devices (everything that is under the /dev folder)...
However, he confuses the issue for beginners a bit by running docker from vagrant.
He also warns us:
If you use the --privileged flag when running a container, make sure you know what you are doing.
And I agree with that completely. Using --privileged gives the container the permission to modify the host.
It is easier to see what is happening from a Linux host running docker.
Example 1:
From the Linux host we start an Ubuntu container (without --privileged) and run sfdisk to list the disk partitions and ls -l /dev/sd* to list the disk devices. Neither works, because the container has no privileges to examine the host's disks in this way.
paul@somewhere:~$ docker run -it ubuntu /bin/bash
root@175db156cb32:/# sfdisk --list
(blank output)
root@175db156cb32:/# ls -l /dev/sd*
ls: cannot access '/dev/sd*': No such file or directory
Example 2:
Now we run docker run --privileged
paul@somewhere:~$ docker run --privileged -it ubuntu /bin/bash
root@c62b42161444:/# sfdisk --list
Disk /dev/sda: 223.6 GiB, 240057409536 bytes, 468862128 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: EE70993B-4640-4899-B142-18B89DD16CB8
Device Start End Sectors Size Type
/dev/sda1 2048 923647 921600 450M Windows recovery environment
/dev/sda2 923648 1128447 204800 100M EFI System
/dev/sda3 1128448 1161215 32768 16M Microsoft reserved
/dev/sda4 1161216 467810878 466649663 222.5G Microsoft basic data
/dev/sda5 467812352 468858879 1046528 511M Windows recovery environment
Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 2F514662-72A3-4126-9868-40CEB6ADA416
Device Start End Sectors Size Type
/dev/sdb1 34 262177 262144 128M Microsoft reserved
/dev/sdb2 264192 5860532223 5860268032 2.7T Microsoft basic data
Partition 1 does not start on physical sector boundary.
Disk /dev/sdc: 232.9 GiB, 250059350016 bytes, 488397168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x299c6114
Device Boot Start End Sectors Size Id Type
/dev/sdc1 * 2048 89843711 89841664 42.9G 83 Linux
/dev/sdc2 89843712 480468991 390625280 186.3G 83 Linux
/dev/sdc3 480471038 488396799 7925762 3.8G 5 Extended
/dev/sdc5 480471040 488396799 7925760 3.8G 82 Linux swap / Solaris
root@c62b42161444:/# ls -l /dev/sd*
brw-rw---- 1 root disk 8, 0 Aug 11 02:43 /dev/sda
brw-rw---- 1 root disk 8, 1 Aug 11 02:43 /dev/sda1
brw-rw---- 1 root disk 8, 2 Aug 11 02:43 /dev/sda2
brw-rw---- 1 root disk 8, 3 Aug 11 02:43 /dev/sda3
brw-rw---- 1 root disk 8, 4 Aug 11 02:43 /dev/sda4
brw-rw---- 1 root disk 8, 5 Aug 11 02:43 /dev/sda5
brw-rw---- 1 root disk 8, 16 Aug 11 02:43 /dev/sdb
brw-rw---- 1 root disk 8, 17 Aug 11 02:43 /dev/sdb1
brw-rw---- 1 root disk 8, 18 Aug 11 02:43 /dev/sdb2
brw-rw---- 1 root disk 8, 32 Aug 11 02:43 /dev/sdc
brw-rw---- 1 root disk 8, 33 Aug 11 02:43 /dev/sdc1
brw-rw---- 1 root disk 8, 34 Aug 11 02:43 /dev/sdc2
brw-rw---- 1 root disk 8, 35 Aug 11 02:43 /dev/sdc3
brw-rw---- 1 root disk 8, 37 Aug 11 02:43 /dev/sdc5
root@c62b42161444:/# exit
This time the container is allowed to access the host's devices.

Docker containers: drop cache without root - other options?

I'm doing some query tests with Impala/HDFS inside docker containers (swarm). In order to compare the queries (different scale factors), I want to drop the cache. Normally this is easily done by
$ sync
$ echo 1 > /proc/sys/vm/drop_caches
but I don't have admin rights on the host system. Is there another way to drop the cache from inside the containers? Would it be an option to create another big table and run queries on it, so that its data overwrites the cache?
You cannot do this from inside the container. The root user in the container is in a different namespace from the host's root, and the container is not allowed to write to /proc/sys.
You could try bind-mounting the host's drop_caches file into the container. This seems to work for me:
$ docker run -ti --rm -v /proc/sys/vm/drop_caches:/drop_caches alpine
/ # free
total used free shared buffers cached
Mem: 2046644 808236 1238408 688 2248 118244
-/+ buffers/cache: 687744 1358900
Swap: 1048572 31448 1017124
/ # dd if=/dev/zero of=/dummy count=500 bs=1M
500+0 records in
500+0 records out
/ # free
total used free shared buffers cached
Mem: 2046644 1333892 712752 688 2268 630268
-/+ buffers/cache: 701356 1345288
Swap: 1048572 31448 1017124
/ # echo 3 > drop_caches
/ # free
total used free shared buffers cached
Mem: 2046644 790136 1256508 688 764 101552
-/+ buffers/cache: 687820 1358824
Swap: 1048572 31448 1017124
/ #
... but that is assuming you control how the container is started, which would more or less mean you're admin.
This can also be achieved by starting the container in privileged mode with the --privileged flag.

How to check the number of cores used by docker container?

I have been working with Docker for a while now. I installed Docker and launched a container using
docker run -it --cpuset-cpus=0 ubuntu
When I log into the container and run
grep processor /proc/cpuinfo | wc -l
it shows 3, which is the number of cores on my host machine.
Any idea how to restrict resources for the container, and how to verify the restriction?
This issue has already been raised in #20770. The file /sys/fs/cgroup/cpuset/cpuset.cpus reflects the correct value.
--cpuset-cpus does take effect; it is just not reflected in /proc/cpuinfo.
docker inspect <container_name>
will show the details of the launched container; check the "CpusetCpus" field there and you will find the value.
Containers aren't complete virtual machines. Some kernel resources will still appear as they do on the host.
In this case, --cpuset-cpus=0 modifies the resources the container's cgroup has access to, which is visible in /sys/fs/cgroup/cpuset/cpuset.cpus, not what the VM and container report in /proc/cpuinfo.
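A quick way to compare the two views from inside the container (the /sys/fs/cgroup/cpuset path below is the cgroup v1 layout; on cgroup v2 the equivalent file is cpuset.cpus.effective, so the sketch falls back to that):

```shell
# /proc/cpuinfo shows all host CPUs, even inside the container:
grep -c ^processor /proc/cpuinfo

# The cpuset cgroup file shows what the container may actually use.
# Try cgroup v1 first, then cgroup v2, then note its absence.
if [ -f /sys/fs/cgroup/cpuset/cpuset.cpus ]; then
    cat /sys/fs/cgroup/cpuset/cpuset.cpus
elif [ -f /sys/fs/cgroup/cpuset.cpus.effective ]; then
    cat /sys/fs/cgroup/cpuset.cpus.effective
else
    echo "no cpuset cgroup file found"
fi
```

With --cpuset-cpus=0 the first number stays at the host's core count while the cpuset file reads just "0".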
One way to verify is to run the stress-ng tool in a container:
Using 1 CPU, the container will be pinned at 1 core (1 of 3 cores in use: 100% or 33%, depending on which tool you use):
docker run --cpuset-cpus=0 deployable/stress -c 3
This will use 2 cores (2 / 3 cores, 200%/66%):
docker run --cpuset-cpus=0,2 deployable/stress -c 3
This will use 3 ( 3 / 3 cores, 300%/100%):
docker run deployable/stress -c 3
Memory limits are another area that doesn't appear in kernel stats:
$ docker run -m 64M busybox free -m
total used free shared buffers cached
Mem: 3443 2500 943 173 261 1858
-/+ buffers/cache: 379 3063
Swap: 1023 0 1023
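Likewise, the actual memory limit is visible in the cgroup filesystem rather than in free. A small sketch that checks the cgroup v2 path first and then the v1 path:

```shell
# Read the container's memory limit from the cgroup filesystem.
if [ -f /sys/fs/cgroup/memory.max ]; then
    cat /sys/fs/cgroup/memory.max                     # cgroup v2: "max" or a byte count
elif [ -f /sys/fs/cgroup/memory/memory.limit_in_bytes ]; then
    cat /sys/fs/cgroup/memory/memory.limit_in_bytes   # cgroup v1: byte count
else
    echo "no cgroup memory controller mounted"
fi
```

In a container started with -m 64M this reports 67108864 (64 MiB), even though free still shows the host's memory.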
yamaneks' answer includes the GitHub issue.
The value should be put in double quotes: --cpuset-cpus="0" means the container makes use of cpu0 only.

Docker increase disk space

I have Docker running and it gives me a disk-space warning. How can I increase the space available to Docker and start again with the same container?
Let's say I want to give it about 15 GB.
If you are using Docker Desktop, you can also increase disk space through its GUI settings.
I assume you are talking about disk space to run your containers.
Make sure that you have enough space on whatever disk drive you are using for /var/lib/docker, which is the default used by Docker. You can change it with the -g daemon option (replaced by the data-root daemon setting in newer versions).
If you don't have enough space you may have to repartition your OS drives so that you have over 15 GB. If you are using boot2docker or docker-machine you will have to grow the volume on your virtual machine. How to do that varies depending on what you are using for virtualization (e.g. VirtualBox, VMware, etc.).
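To see how much room the Docker data directory currently has (a sketch that falls back to / when /var/lib/docker does not exist on the machine, e.g. with a non-default data-root); `docker system df` additionally breaks Docker's own usage down by images, containers and volumes:

```shell
# Show free space on the filesystem backing Docker's data directory.
dir=/var/lib/docker
[ -d "$dir" ] || dir=/   # fall back to the root filesystem
df -hP "$dir"
```

If the filesystem shown here is nearly full, growing the partition (or moving data-root) is the fix; if Docker itself is the consumer, pruning unused images is usually enough.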
For example if you are using VirtualBox and docker-machine you can start with something like this for a 40GB VM.
docker-machine create --driver virtualbox --virtualbox-disk-size "40000" default
I ran into a similar problem with my Docker VM (which is 'alpine-linux' on VMware Fusion on OS X):
write error: no space left on device alpinevm:/mnt/hgfs
failed to build: .. no space left on device
.. eventually this guide helped me to resize/expand my docker volume.
TL;DR:
1 - Check size of partition containing /var/lib/docker
> df -h
/dev/sda3 17.6G 4.1G 12.6G 25% /var/lib/docker
look for '/dev/sdaN', where N is your partition for '/var/lib/docker', in my case /dev/sda3
2 - Shut down your VM, open VM Settings > Hard Disk(s) > change size of your 'virtual_disk.vmdk' (or whatever is your machine's virtual disk), then click Apply (see this guide).
3 - Install cfdisk and e2fsprogs-extra which contains resize2fs
> apk add cfdisk
> apk add e2fsprogs-extra
4 - Run cfdisk and resize/expand /dev/sda3
> cfdisk
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 206847 204800 100M 83 Linux
/dev/sda2 206848 4241407 4034560 1.9G 82 Linux swap / Solaris
/dev/sda3 4241408 83886079 79644672 12.6G 83 Linux
[Bootable] [ Delete ] [ Resize ] [ Quit ] [ Type ] [ Help ] [ Write ] [ Dump ]
.. press down/up to select '/dev/sda3'
.. press left/right/enter to select 'Resize' -> 'Write' -> 'Quit'
5 - Run resize2fs to expand the file system of /dev/sda3
> resize2fs /dev/sda3
6 - Verify resized volume
> df -h
/dev/sda3 37.3G 4.1G 31.4G 12% /var/lib/docker
To increase the space available to Docker you will have to increase the size of your docker-pool. If you run
lvs
you will see the docker-pool logical volume and its size. If your docker-pool sits on a volume group that has free space, you can simply grow the docker-pool LV:
lvextend -l +100%FREE <path_to_lv>
# An example using this may look like:
# lvextend -l +100%FREE /dev/VolGroup00/docker-pool
You can check out more Docker disk-space tips here.
Docker stores all layers/images in its storage-driver format (e.g. aufs or overlay2) under the default /var/lib/docker directory.
If you are getting a disk-space warning because of Docker, you probably have a lot of images and need to clean them up.
If you have the option to add disk space, you can create a separate, bigger partition and mount /var/lib/docker there, which will stop the root partition from filling up.
Some extra information on managing disk space for Docker can be found here:
http://www.scmtechblog.net/2016/06/clean-up-docker-images-from-local-to.html
