I'm currently running the openresty image on Docker:
https://hub.docker.com/r/openresty/openresty
With my current configuration I am logging the response bodies of requests, which are quite large. This seems to exceed the maximum log length possible for OpenResty, which defaults to 2048 as mentioned here:
https://openresty-reference.readthedocs.io/en/latest/Lua_Nginx_API/#ngxlog
However, I can't find the mentioned file src/core/ngx_log.h within the Docker container.
Can this file be found and modified somewhere else, or is there a different way to change the maximum log length?
When I start MinIO without setting the requests_max parameter, the default value is 0, and, according to the documentation, when requests_max is set to 0, MinIO automatically calculates the maximum number of requests depending on available memory.
NOTE: A zero value of requests_max means MinIO will automatically calculate requests based on available RAM size and that is the default behavior.
https://github.com/minio/minio/blob/master/docs/throttle/README.md
So, on the splash screen at startup, I see the requests_max value calculated based on available RAM size:
Automatically configured API requests per node based on available memory on the system: 20
Now my question is:
Is there an mc command, or any other way, to find the actual maximum request value (calculated based on available memory)?
Thank you
Currently, this value is not exposed in any other way (i.e. other than the log line printed at server startup when it is not set). You could try opening a feature request to get this item added to the server info command: mc admin info.
The MinIO team is available on their public Slack channel or by email to answer questions 24/7/365.
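Until it is exposed, one rough workaround is to scrape the value from that startup log line, since that is currently the only place it appears. A minimal sketch in Python, assuming MinIO runs in a Docker container named minio (the container name is an assumption; adjust it to your deployment):

```python
import re
import subprocess

# Assumption: MinIO runs in a Docker container named "minio".
result = subprocess.run(
    ["docker", "logs", "minio"],
    capture_output=True, text=True, check=True,
)
output = result.stdout + result.stderr

# Look for the startup line quoted above, e.g.
# "Automatically configured API requests per node based on available memory on the system: 20"
match = re.search(r"Automatically configured API requests per node.*?:\s*(\d+)", output)
if match:
    print("requests_max per node:", int(match.group(1)))
else:
    print("Startup line not found; the value may have been set explicitly.")
```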
I'm using several containers on my website to do different tasks, and a Traefik (v2.2) container is used as a router to these containers.
The problem is that Traefik compresses all server responses using the gzip algorithm, even if I disable compression using "compress = false" in the docker-compose file. At the same time it adds a wrong Content-Type to some of my JPEG images, which makes "some" images unreadable for browsers.
Traefik adds an auto-detected Content-Type to responses that don't already have one.
I searched for a while, and according to the official documentation the gzip algorithm itself may cause problems for some JPEG images (reference).
Right now I only have guesses about the origin of the problem, and I have been unable to solve it after trying for a few hours.
Do you have any ideas?
If you need any specific data, please ask!
Thanks!
This is a known issue: Traefik auto-detects and sets a default Content-Type if none is provided in the response.
This PR fixes it by disabling the auto-detection:
https://github.com/traefik/traefik/pull/6097
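Until you are on a Traefik release that includes that change, one workaround is to make sure the backend always sets a Content-Type itself, so there is nothing for Traefik to auto-detect. A minimal sketch, assuming a Python/Flask backend serving the images (the framework, route, and paths are assumptions, not part of the original setup):

```python
from flask import Flask, send_file

app = Flask(__name__)

# Assumption: JPEGs live under ./static/images; adjust to your layout.
@app.route("/images/<name>")
def image(name):
    # Setting mimetype explicitly guarantees the response carries a
    # Content-Type header, so Traefik's auto-detection never kicks in.
    return send_file(f"static/images/{name}", mimetype="image/jpeg")
```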
I'm setting up an environment for our data scientists to work on. Currently we have a single node running JupyterHub with Anaconda and Dask installed (2 sockets, 6 cores per socket, 2 threads per core, 140 GB RAM). When users create a LocalCluster, the current default settings take all the available cores and memory (as far as I can tell). This is okay when done explicitly, but I want the standard LocalCluster to use less than this. Because almost everything we do is
When looking into the config, I see no settings dealing with n_workers, n_threads_per_worker, n_cores, etc. For memory, dask.config.get('distributed.worker') shows two memory-related options (memory and memory-limit), both specifying the behaviour listed here: https://distributed.dask.org/en/latest/worker.html.
I've also looked at the Dask JupyterLab extension, which lets me do all this. However, I can't force people to use JupyterLab.
TL;DR: I want to be able to set the following standard configuration when creating a cluster:
n_workers
processes = False (I think?)
threads_per_worker
memory_limit either per worker, or for the cluster. I know this can only be a soft limit.
Any suggestions for configuration are also very welcome.
As of 2019-09-20 this isn't implemented. I recommend raising a feature request at https://github.com/dask/distributed/issues/new, or even submitting a pull request.
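In the meantime, the values can be passed explicitly whenever a cluster is created, so one workaround is to give your users a small wrapper with the defaults you want instead of relying on the bare LocalCluster(). A minimal sketch (the numbers are placeholders, not recommendations):

```python
from dask.distributed import Client, LocalCluster

def make_cluster(n_workers=4, threads_per_worker=2, memory_limit="16GB"):
    """Create a LocalCluster with restrained defaults instead of
    grabbing every core and all of the memory on the node."""
    cluster = LocalCluster(
        n_workers=n_workers,
        threads_per_worker=threads_per_worker,
        memory_limit=memory_limit,  # per worker, and only a soft limit
        processes=True,             # use processes=False for a threads-only cluster
    )
    return Client(cluster)

client = make_cluster()
print(client)
```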
I am trying to configure the Docker daemon to follow CIS benchmarks. One of the recommendations is to configure default ulimit parameters when starting it. They give the example of:
dockerd --default-ulimit nproc=1024:2048 --default-ulimit nofile=100:200
How do I know how to calculate the best settings for nproc and nofile for my particular environment?
I guess that, to set those, you would need to know what your applications' upper limits might be. You would first need to run the applications, put them under the kind of load you expect them to face, and then measure the process counts and open files.
See https://unix.stackexchange.com/questions/230346/how-to-check-ulimit-usage for information on how to check the limits. You would need to get the limits for all the PIDs that you are running as containers. I would then pad them a bit to allow some headroom.
Even then, you are likely to find yourself chasing limits constantly, as it will probably be difficult to get all of them right before the application goes to production.
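As a rough way of doing that measurement, you can sample the current open-file counts and limits of the container processes from /proc while the load test runs. A minimal sketch in Python, assuming it runs on the Docker host with permission to read /proc for the container PIDs (the PIDs are placeholders; get the real ones from docker top, for example):

```python
import os

# Placeholder PIDs; replace with the container processes you are measuring.
PIDS = [1234, 5678]

def open_files(pid: int) -> int:
    """Count of file descriptors the process currently has open."""
    return len(os.listdir(f"/proc/{pid}/fd"))

def nofile_limits(pid: int):
    """Current soft/hard 'Max open files' limits of the process."""
    with open(f"/proc/{pid}/limits") as limits:
        for line in limits:
            if line.startswith("Max open files"):
                soft, hard, _unit = line[len("Max open files"):].split()
                return soft, hard
    return None, None

for pid in PIDS:
    soft, hard = nofile_limits(pid)
    print(f"pid {pid}: {open_files(pid)} fds open (soft={soft}, hard={hard})")
```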
When trying to upload a 2 GB stream, I get an invalid content length error.
I am running Apache as a frontend server to Mongrel, with Chrome as my browser.
One more thing: when I do it with Mongrel alone, I am able to upload this 2 GB stream. Could anybody tell me what the problem is, and how do I configure the content length in Apache?
I'd imagine the problem is that Apache considers it a denial-of-service attempt and has a cap to prevent you from locking up all the server's resources (very reasonable). Whether that's configurable or not I'm not sure; I can't find anything, but will keep hunting. If it's not, you can always build your own copy of Apache with the limit removed.
Have you considered sending the data in reasonable-sized chunks?
You might also wish to inspect the outgoing request using packet inspection or browser debugging tools. It's possible the content-length is being malformed by the browser (I doubt they have a 2GB test case...)
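If you go down the chunking route suggested above, here is a minimal client-side sketch using the Python requests library (the URL, part scheme, and chunk size are placeholders, and the server has to be prepared to reassemble the parts):

```python
import requests

URL = "https://example.com/upload"      # placeholder endpoint
CHUNK_SIZE = 64 * 1024 * 1024           # 64 MB per request

def upload_in_chunks(path: str):
    with open(path, "rb") as f:
        part = 0
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            # Each part is a small request with a modest Content-Length,
            # so the frontend never sees a single 2 GB body.
            resp = requests.post(URL, params={"part": part}, data=chunk)
            resp.raise_for_status()
            part += 1

upload_in_chunks("big_stream.bin")
```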