How to set a quota on the local storage of all nova computes in OpenStack

Ceph storage quotas can be set and shown with openstack quota set and openstack quota show, but where and how can I set a cap on the local storage of all the nova computes that use local storage (images_type = raw in the [libvirt] section of nova.conf)?

I don't think it is possible. OpenStack doesn't support local disk quotas.
Local disk usage can be limited by the flavors that a project is permitted to use, which means it can be controlled indirectly through instance or VCPU quotas.
For example, if you designed your flavors with a maximum ratio of 10GB of local disk per VCPU and limited the VCPUs per compute node to 64, you would effectively be limiting local disk usage to at most 640GB per node.
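As a minimal sketch using the OpenStack CLI, assuming a hypothetical project named demo and a hypothetical flavor name (adjust the sizes and names to your environment):

# Flavor enforcing roughly 10GB of local disk per VCPU
openstack flavor create --vcpus 4 --ram 8192 --disk 40 m1.local-4vcpu
# Project quotas capping VCPUs and instances (and therefore, indirectly, local disk)
openstack quota set --cores 64 demo
openstack quota set --instances 16 demo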

Related

How to get host system information like memory, CPU utilization etc. in an Azure IoT Edge device

How can I get host system information like memory and CPU utilization metrics in an Azure IoT Edge device?
Is there an Azure Java SDK available for this? What Azure tools can be used? Does the Azure agent have these details?
You can use your own solution to access these metrics. Or, you can use the metrics-collector module which handles collecting the built-in metrics and sending them to Azure Monitor or Azure IoT Hub. For more information, see Collect and transport metrics and Access built-in metrics.
Also, you can declare how much of the host's resources a module can use. This control helps ensure that one module can't consume so much memory or CPU that it prevents other processes from running on the device. You can manage these settings with Docker container create options in the HostConfig group, including:
Memory: memory limit in bytes. For example, 268435456 bytes = 256 MB.
MemorySwap: total memory limit (memory + swap). For example, 536870912 bytes = 512 MB.
NanoCpus: CPU quota in units of 10^-9 (one billionth) of a CPU. For example, 250000000 nanocpus = 0.25 CPU.
Reference: Restrict module memory and CPU usage
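For illustration, a minimal sketch of the createOptions JSON for a module in a deployment manifest, using the example values above (256 MB memory, 512 MB memory + swap, 0.25 CPU); the exact values are assumptions and should be tuned per module:

{
  "HostConfig": {
    "Memory": 268435456,
    "MemorySwap": 536870912,
    "NanoCpus": 250000000
  }
}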

Is it possible to increase Database storage in Neo4j Aura

I am planning to subscribe to the Aura cloud managed service plan with 4GB memory, 0.8 CPU and 8GB storage.
But the storage is not enough. Is it possible to increase the storage in this plan?
How many CPU cores are included in this plan if it is listed as 0.8 CPU?
The Aura pricing structure is very simple. You can increase storage (or memory or CPU) by paying for a higher-priced tier. Of course, you can contact Neo4j directly to ask if they have any other options.
0.8 CPU means that you get the equivalent of 80% of a single core.
You can get more details from the Aura knowledge base and developer guide.

Why does dataflow use additional disks?

When I see the details of my dataflow compute engine instance, I can see two categories of disks being used - (1) Boot disk and local disks, and (2) Additional disks.
I can see that the size that I specify using the diskSizeGb option determines the size of a single disk under the category 'Boot disk and local disks'. My not-so-heavy job is using 8 additional disks of 40GB each.
What are additional disks used for and is it possible to limit their size/number?
Dataflow will create Compute Engine VM instances, also known as workers, for your job.
To process the input data and store temporary data, each worker may require up to 15 additional Persistent Disks.
The default size of each persistent disk is 250 GB in batch mode and 400 GB in streaming mode, so 40 GB is well below the default value.
In this case, the Dataflow service will spin up more disks for your workers. If you want to keep a 1:1 ratio between workers and disks, increase the diskSizeGb field.
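For example, when launching an Apache Beam pipeline on Dataflow, the worker disk size and worker count can be passed as pipeline options; a minimal sketch assuming a Python pipeline (the script name, project and region are placeholders):

python my_pipeline.py \
  --runner DataflowRunner \
  --project my-project \
  --region us-central1 \
  --disk_size_gb 250 \
  --max_num_workers 8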
The existing answer explains how many disks there are and gives information about them, but it does not answer the main question: why so many disks per worker?
WHY does Dataflow need several disks per worker?
The way in which Dataflow does load balancing for streaming jobs is that a range of keys is allocated to each disk. Persistent state about each key is stored in these disks.
A worker can be overloaded if the ranges that are allocated to its persistent disks have a very high volume. To load-balance, Dataflow can move a range from one worker to another by transferring a persistent disk to a different worker.
So this is why Dataflow uses multiple disks per worker: Because this allows it to do load balancing and autoscaling by moving the disks from worker to worker.

Bonsai Elasticsearch vs Amazon Elasticsearch price/month comparison?

Can anyone here help me compare the price per month of these two Elasticsearch hosting services?
Specifically, what is the equivalent of the Bonsai10 plan that costs $50/month when compared to the Amazon Elasticsearch pricing?
I just want to know which of the two services saves me money on a monthly basis for my Rails app.
Bonsai10 is 8 cores, 1GB memory, 10GB disk, limited to 20 shards and 1 million documents.
Amazon's Elasticsearch Service (AES) doesn't have comparable sizing/pricing; everything will be more expensive.
If you want 10GB of storage, you could run a single m3.large.elasticsearch (2 core 7.5GB memory, 32GB disk) at US$140/month.
If you want 8 cores, single m3.2xlarge.elasticsearch (8 core 30GB memory, 160GB disk) at US$560/month.
Elastic's cloud is more comparable. 1GB memory 16GB disk will run US$45/month. They don't publish the CPU count.
Of the other better hosted Elasticsearch providers (better in that they list the actual resources you receive; full list below), Qbox offers the lowest-cost comparable plan at US$40/month for 1GB memory and 20GB disk. No CPU count is published: https://qbox.io/pricing
Objectrocket
Compose.io (an IBM company)
Qbox
Elastic

Know the amount of disk used in GCE

I have a new instance in GCE, and after a few days I migrated my websites; they are now running in GCE.
I would like to know how much disk space I have available in GCE.
I used the monitoring tools but could not find this information; I only found the total size of the disk, not how much of it is used or available. Is that possible?
The amount of storage you have available to your project is determined by the resource quota in place, which in turn is determined by the project's billing. This could be:
Free trial - GB of PD: 10240 - quota cannot be modified - more info here
Billing enabled - Standard PD total GB: 5120; SSD PD total GB: 1024; Local SSD total GB: 1500 - quota can be increased upon request. More info here
By default, Linux VMs deploy with a 10 GB boot disk, unless an already existing disk was used; the default is 100 GB for Windows machines.
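To check the actual usage and the project-level disk quota, something like the following could be used (the region name is a placeholder):

# Inside the VM: used and available space per filesystem
df -h
# From gcloud: disks attached to the project and their sizes
gcloud compute disks list
# Region-level quota usage, including DISKS_TOTAL_GB
gcloud compute regions describe us-central1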
