When I start MinIO without setting the requests_max parameter, the default value is 0. According to the documentation, when requests_max is set to 0, MinIO automatically calculates the maximum number of requests based on available memory.
NOTE: A zero value of requests_max means MinIO will automatically calculate requests based on available RAM size and that is the default behavior.
https://github.com/minio/minio/blob/master/docs/throttle/README.md
So, in the splash screen at startup, I see the requests_max value calculated based on available RAM size:
Automatically configured API requests per node based on available memory on the system: 20
Now my question is:
Is there a mc command, or any other way, to find the actual maximum request value (calculated based on available memory)?
Thank you
Currently, this value is not exposed in any other way (i.e., other than the log line printed at server startup when the parameter is not set). You could try opening a feature request to get this item added to the server info command: mc admin info.
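In the meantime, a possible workaround (a sketch, assuming the server runs in a container named minio) is to grep the value out of the startup log:

# Sketch, not an official interface: pull the auto-configured value
# out of the server startup log ("minio" container name is an assumption).
docker logs minio 2>&1 | grep "Automatically configured API requests per node"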
The MinIO team is available on their public Slack channel or by email to answer questions 24/7/365.
I have an Azure IoT edge device. This edge device has one module that simulates a real machine. I would like to configure this edge module (e.g. the simulation time interval, number of items to simulate). I could use either desired properties or environment variables for that. Which makes more sense? What are the intentions and the main differences between desired properties and environment variables?
I don't see many differences, as:
Both can be conveniently updated in the Azure portal.
Both make the reported values accessible.
The only difference I see so far is that I can subscribe to changes to desired properties. This doesn't seem to be possible for changes to environment variables (though in that case the module would restart and read the new environment variables anyway).
Desired properties represent the state of your module and are better suited than environment variables, for a few reasons:
A change in desired properties triggers a callback on the device without restarting the module, whereas a change to an environment variable requires a restart (see the sketch after this list).
At scale, desired properties can be changed via the Jobs API, while for environment variables you would need to build additional automation.
Desired properties are part of the device twin, which is kept in sync on the cloud side, while environment variables are part of the deployment manifest. The twin is better suited to representing the device state than environment variables.
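As a minimal sketch of that callback model in Python (using the azure-iot-device SDK; the property name simulationIntervalSeconds and the helper apply_new_interval are assumptions, not part of the original setup):

from azure.iot.device import IoTHubModuleClient

# Create a client from the IoT Edge environment the module runs in.
client = IoTHubModuleClient.create_from_edge_environment()

def apply_new_interval(seconds):
    # Hypothetical hook: reconfigure the simulation loop here.
    print("new simulation interval:", seconds)

def on_patch(patch):
    # Invoked when the backend changes desired properties; no module restart.
    interval = patch.get("simulationIntervalSeconds")  # assumed property name
    if interval is not None:
        apply_new_interval(interval)
        # Report back so the twin reflects that the change was applied.
        client.patch_twin_reported_properties({"simulationIntervalSeconds": interval})

client.on_twin_desired_properties_patch_received = on_patch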
As explained above, the desired properties are part of your device or module digital twin. The digital twin is stored in the IoT Hub (in the device registry) and is used to keep the state of your devices in sync with the cloud backend services.
The advantage of using the device twin to store your device state is that it can be changed from the backend by modifying the desired properties; your device receives the desired-property change and can then use the reported properties to signal that the requested change has been accepted (and executed, whatever action it has to run on the device). This keeps the device and the backend in sync.
For detailed information on device twin and desired and reported properties, check: https://learn.microsoft.com/azure/iot-hub/iot-hub-devguide-device-twins
Here is a popular design for a TinyURL application, available on the internet:
The application is load-balanced with the help of ZooKeeper, where each server is assigned a counter range registered with ZooKeeper.
Each application server has a locally available counter, which increments on every write request.
There is a cache available which (probably) gets updated with each write request.
Gaps in my understanding:
For every write request, we don't check whether a tiny URL already exists in the DB for the given large URL, so we keep inserting on every write request (even though a tiny URL may already exist for that particular large URL). Is that correct? If so, would there be a clean-up activity (removing redundant duplicate tiny URLs for the same large URL) during some intentional downtime of the application during the day?
What is the point of scaling if, for a range of 1 million (or more) counter values, there is just one server handling the requests? Wouldn't that be a problem? Say, for example, there is a large-scale write operation; would there be vertical scaling to avoid slowness?
Kindly correct me if I have got anything wrong here.
Design problems are open ended; keeping that in mind, here is my take on your questions.
Why there is no check whether a large URL is already in the database
It may be a requirement to allow users to have their own tiny URLs, even if they point to the same large URL. For example, every user might want to see stats on how many times their specific tiny URL was clicked; this is a typical usage of tiny URLs: put them into a blog/video/letter to get stats.
Scaling the service
Let me expand on "each server is assigned a counter range registered". This implies that generated IDs have the structure: X bits of server ID + Y bits from the local counter. The X bits are assigned by ZooKeeper, and this is what makes each server responsible for one range.
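A minimal sketch of that ID layout in Python (the bit widths here are illustrative, not from the original design):

SERVER_BITS = 10   # X bits: server ID, i.e. the range assigned by ZooKeeper
COUNTER_BITS = 32  # Y bits: local, monotonically increasing counter

def make_id(server_id: int, counter: int) -> int:
    # Each server produces IDs in its own disjoint range, so no per-request
    # coordination between servers is needed.
    assert server_id < (1 << SERVER_BITS) and counter < (1 << COUNTER_BITS)
    return (server_id << COUNTER_BITS) | counter

The resulting integer would then typically be base62-encoded to form the short URL path.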
Several servers will be placed behind a load balancer. When a request comes to the load balancer, it will be sent to a randomly picked server. If the servers are overloaded, you can just add more servers behind the load balancer, each owning its own range. This allows the service as a whole to scale up and down (with no need for vertical scaling).
The key to understanding this design is that those ranges are arbitrary. There is no need for them to be consecutive.
I'm currently running the OpenResty image on Docker:
https://hub.docker.com/r/openresty/openresty
With my current configuration I am logging the response bodies of requests, which are quite large. This seems to exceed the maximum log length possible for OpenResty, whose default is 2048, as mentioned here:
https://openresty-reference.readthedocs.io/en/latest/Lua_Nginx_API/#ngxlog
However, I can't find the mentioned file src/core/ngx_log.h within the Docker container.
Can this file be found and modified somewhere else, or is there a different way to change the maximum log length?
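One hedged possibility: the prebuilt image ships only compiled binaries, so src/core/ngx_log.h exists only in the source tree; the limit is the NGX_MAX_ERROR_STR macro referenced in the linked documentation, so you would have to patch it and rebuild. A sketch (the version number and paths are assumptions):

# Sketch only: rebuild OpenResty with a larger NGX_MAX_ERROR_STR.
curl -fSL https://openresty.org/download/openresty-1.21.4.1.tar.gz | tar xz
cd openresty-1.21.4.1
# The bundled nginx core defines the 2048-byte limit in src/core/ngx_log.h.
sed -i -E 's/(NGX_MAX_ERROR_STR\s+)2048/\18192/' bundle/nginx-*/src/core/ngx_log.h
./configure && make && make install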
I am new to Zabbix and am using version 3.4. I have installed the server on Linux and want to monitor and check the status of a Windows service using the Windows agent.
I got the status of the service using the key below:
service.info[<serviceName>,state]
It returns the proper status of the service. Now I want to check how much CPU and how much memory a given process utilizes.
I tried some keys, but they do not return proper values:
perf_counter[\Process(<processName>)\% User Time] // to get CPU utilization by process
proc_info[<processName>,wkset] // to get memory utilize by process
system.cpu.util[,system,avg5] // to get total CPU utilization
vm.memory.size[available] // to get total RAM utilization
But none of the above work properly. I tried other keys as well, but the agent logs say they are unsupported. I checked the forum and searched on Google but found nothing.
Usually there isn't a direct match between a Windows service and a specific process.
A service spawns N processes for its internals and can also spawn additional processes to manage incoming connections, log requests, and so on.
Think about a classic httpd server: you should find at least one master process, various pre-forked worker processes, and php/php-fpm processes for the current requests.
Regarding the keys you provided, what do you mean by "not working properly"?
You can refer to Zabbix documentation for Windows-specific items for the exact syntax of the items and the meaning of the return values.
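For example (hedged: the process instance w3wp is a placeholder, and Process performance-counter instances are named without the .exe suffix):

perf_counter["\Process(w3wp)\% Processor Time",60]  // per-process CPU, averaged over 60 s
proc_info[w3wp.exe,wkset]                           // working set (memory) of the process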
You can use the Zabbix item for average CPU utilization over 5 minutes:
system.cpu.util[,,avg5]
This will give you the average CPU usage over 5 minutes on the Windows server. You can then create an appropriate trigger for it, as shown below.
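For instance, a trigger along these lines (a sketch: the host name and the threshold are placeholders, using the trigger expression syntax of Zabbix 3.4):

{MyWindowsHost:system.cpu.util[,,avg5].last()}>90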
We have a requirement to measure Unix server downtime for a month using Geneos. We explored some of the plug-ins available in Geneos but were not able to find one.
The requirement is that a Geneos sampler should add up the total time the Unix server was down during the month and display the result. Thanks in advance.
There is no sampler to check the server downtime.
Assuming you have a script (written in Bash/Perl or any other language, for that matter) that is able to generate the required output, you could use it with the Toolkit plugin in Geneos.
Remember that the script should produce comma-separated values as output, along with a header (title) record, for example:
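A minimal sketch of such a Toolkit script (the column names and the hard-coded value are placeholders; a real script would compute the downtime from whatever source records it):

#!/bin/bash
# Toolkit samplers expect CSV on stdout, with the first row as the header.
echo "server,downtimeMinutesThisMonth"
echo "$(hostname),42"   # placeholder value: compute the real downtime here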
I would implement Gateway monitoring using the "Gateway-probeData" plugin.
This will check that the Gateway can communicate with the Netprobe.
Additionally, I have set up a crontab job to check that the Netprobe is running:
# Check and restart Netprobes
0,15,30,45 * * * * /script/to/check/all/probes.sh
Subsequently, I would set up dbLogging in the GSE so that the Gateway can store historic data on the connectivity of the server (via the Netprobe).