How to know the amount of disk used in GCE - monitoring

I have a new instance in GCE, and after a few days I migrated my websites; they are now running in GCE.
I would like to know how much disk space I have available in GCE.
I used the monitoring tools, but I could not find this information: I only found the total size of the disk, not how much of it is used or available. Is this possible?

The amount of storage you have available to your project is determined by the resource quota in place, which in turn is determined by the project's billing. This could be:
Free trial - GB of PD: 10240 - quota cannot be modified - more info here 1
Billing enabled - Standard PD total GB: 5120; SSD PD total GB: 1024; Local SSD total GB: 1500 - quota can be increased upon request. More info here 2
By default, Linux VMs deploy with a 10 GB disk, unless an already existing disk was used; the default is 100 GB for Windows machines.
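The quota above covers what you can provision; how much of an attached disk is actually used is easiest to check from inside the VM. A minimal sketch, assuming a Linux instance with Python 3 available and that the disk of interest is mounted at / (adjust the path for additional persistent disks):

    import shutil

    # Report total, used and free space for a mounted filesystem.
    # "/" is an assumption; point this at the mount point of the disk you care about.
    total, used, free = shutil.disk_usage("/")

    gib = 1024 ** 3
    print(f"total: {total / gib:.1f} GiB")
    print(f"used:  {used / gib:.1f} GiB")
    print(f"free:  {free / gib:.1f} GiB")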

Related

How to get host system information like memory, CPU utilization etc. in Azure IoT Edge device

How can I get host system information like memory and CPU utilization metrics in an Azure IoT Edge device?
Is there an Azure Java SDK available for this? What Azure tools can be used? Does the Azure agent have these details?
Regards
Jayashree
You can use your own solution to access these metrics. Or, you can use the metrics-collector module, which handles collecting the built-in metrics and sending them to Azure Monitor or Azure IoT Hub. For more information, see Collect and transport metrics and Access built-in metrics.
Also, you can declare how much of the host's resources a module may use. This control is helpful to ensure that one module can't consume so much memory or CPU that it prevents other processes from running on the device. You can manage these settings with Docker container create options in the HostConfig group, including:
Memory: memory limit in bytes. For example, 268435456 bytes = 256 MB.
MemorySwap: total memory limit (memory + swap). For example, 536870912 bytes = 512 MB.
NanoCpus: CPU quota in units of 10⁻⁹ (one billionth) of a CPU. For example, 250000000 nanocpus = 0.25 CPU.
Reference: Restrict module memory and CPU usage
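As a sketch of what those create options look like in practice, the HostConfig limits above could be assembled into a module's createOptions like this (a minimal illustration, assuming an IoT Edge deployment manifest where createOptions is supplied as a JSON string; the values are the examples from the answer):

    import json

    # HostConfig limits from the answer: 256 MB memory, 512 MB memory + swap, 0.25 CPU.
    create_options = {
        "HostConfig": {
            "Memory": 268435456,      # bytes
            "MemorySwap": 536870912,  # bytes (memory + swap)
            "NanoCpus": 250000000,    # units of 10^-9 CPUs
        }
    }

    # In a deployment manifest, createOptions is passed as a stringified JSON value.
    print(json.dumps(create_options))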

How to set a quota for local storage covering all nova computes in OpenStack

Ceph storage can be set and shown with openstack quota set [show], but where and how can I set a limit on the local storage of all the nova computes that use local storage via images_type = raw in the [libvirt] section of nova.conf?
I don't think it is possible. OpenStack doesn't support local disk quotas.
Local disk usage can be limited by the flavors that a project is permitted to use. This means that it can be controlled indirectly by instance or VCPU quotas.
For example, if you implemented your flavors with a (max) ratio of 10GB of local disk per VCPU, and limited the VCPUs per compute node to 64, you would effectively be limiting local disk usage to (max) 640GB.
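A minimal sketch of that back-of-the-envelope calculation, assuming hypothetical flavor definitions that keep the 10GB-per-VCPU ratio and a 64 VCPU limit (the flavor names and numbers are illustrative, not an OpenStack API call):

    # Local disk is only controlled indirectly: every flavor keeps a max
    # ratio of 10 GB of local disk per VCPU, and VCPUs are capped at 64.
    flavors = {
        "m1.small":  {"vcpus": 1, "disk_gb": 10},
        "m1.large":  {"vcpus": 4, "disk_gb": 40},
        "m1.xlarge": {"vcpus": 8, "disk_gb": 80},
    }

    vcpu_limit = 64
    worst_ratio = max(f["disk_gb"] / f["vcpus"] for f in flavors.values())

    # 10 GB per VCPU * 64 VCPUs = at most 640 GB of local disk.
    print(f"effective local disk cap: {vcpu_limit * worst_ratio:.0f} GB")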

InfluxDB query speed

My InfluxDB measurement has 24 field keys and 5 tag keys.
I tried select last(cpu) from mymeasurement and found these results:
When there is no client writing data into it, it takes around 2 seconds to get the result.
But when I run 95 clients writing data into it (every 5 seconds), the query takes more than 10 seconds before it shows the result. Is this normal?
Note:
My system is a CentOS 7 VM on XenServer with 4 vCPUs and 8 GB RAM; the top command shows about 30% CPU while those clients are writing data.
Some ideas:
Check your vCPU configuration on other VMs running on the same host. Other VMs that don't need the extra vCPUs should be configured with only one vCPU, for a latency boost.
If your DB server requires 4 vCPUs and your host already shows very little CPU usage during queries, check the storage and memory configuration of the VM in case the server is slow due to swap use, especially if your swap partition is located on a virtual disk accessed over the network via iSCSI or NFS.
It might also be a memory allocation issue within the VM and the server application. If you have XenTools installed on the VM, try a system without XenTools installed to rule out latency issues related to the XenTools drivers.
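To see whether the extra latency is really in the query itself rather than in the VM, it can help to time the query while the writers are running. A minimal sketch, assuming InfluxDB 1.x and the influxdb Python client (host, port and database name are placeholders):

    import time
    from influxdb import InfluxDBClient  # pip install influxdb (1.x client)

    # Placeholder connection details; adjust host, port and database.
    client = InfluxDBClient(host="localhost", port=8086, database="mydb")

    start = time.perf_counter()
    result = client.query("SELECT last(cpu) FROM mymeasurement")
    elapsed = time.perf_counter() - start

    print(f"query took {elapsed:.2f} s")
    print(list(result.get_points()))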

Bonsai elasticsearch vs Amazon elasticsearch price/month comparison?

Can anyone here help me compare the price/month of these two Elasticsearch hosting services?
Specifically, what is the equivalent of the Bonsai10 plan that costs $50/month when compared to Amazon Elasticsearch pricing?
I just want to know which of the two services saves me money on a monthly basis for my rails app.
Thanks!
Bonsai10 is 8 core 1GB memory 10GB disk, limited to 20 shards & 1 million documents.
Amazon's Elasticsearch Service doesn't have comparable sizing/pricing; all options will be more expensive.
If you want 10GB of storage, you could run a single m3.large.elasticsearch (2 core 7.5GB memory, 32GB disk) at US$140/month.
If you want 8 cores, single m3.2xlarge.elasticsearch (8 core 30GB memory, 160GB disk) at US$560/month.
Elastic's cloud is more comparable. 1GB memory 16GB disk will run US$45/month. They don't publish the CPU count.
Of the other, better hosted Elasticsearch providers (better because they list the actual resources you receive; full list below), Qbox offers the lowest-cost comparable plan at US$40/month for 1GB memory and 20GB disk. No CPU count is published. https://qbox.io/pricing
Objectrocket
Compose.io (an IBM company)
Qbox
Elastic

Random Inode/Ram Cache Drops in CentOS

I run a CentOS 5.7 machine (64-bit) with 24GB RAM and 4x SAS drives in a RAID10 setup.
This machine runs nginx/1.0.10, php-fpm & xcache. About a month back the RAM usage of this machine changed.
Every few hours the 'cache' is flushed from RAM; this happens exactly when the 'inode table usage' drops. I'm pretty sure these drops are related (see the 2 attached images).
This server hosts quite a lot of small files (20M of them, each a few KB). Not many files are deleted (maybe 100 per hour, a few MB total at most), not enough to account for the huge inode table drops.
I also have no cron jobs running that could cause these drops.
sar -r output: http://pastebin.com/C4D0B79i
My question: why are these huge RAM/inode usage drops happening? How can I get nginx/PHP to use all of my server's RAM?
EDIT: I have put my configs here: http://pastebin.com/iEWJchc4 and the output of lsof here: http://hostlogr.com/lsof.txt. The thing I do notice is the very large number of php-fpm processes that go to /dev/zero, which is specified in my xcache configuration. Could that possibly be wrong?
Solved it by setting vm.zone_reclaim_mode = 0.
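For reference, a minimal check of what the kernel is currently using (assuming a Linux host where the sysctl is exposed under /proc/sys; making the change persistent is normally done in /etc/sysctl.conf):

    from pathlib import Path

    # Read the current zone_reclaim_mode; 0 means the kernel does not reclaim
    # page cache and inode/dentry caches just to keep allocations NUMA-local.
    mode = Path("/proc/sys/vm/zone_reclaim_mode").read_text().strip()
    print(f"vm.zone_reclaim_mode = {mode}")

    if mode != "0":
        print("cache/inode drops like the ones above may be caused by zone reclaim")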
