I tried to limit the maximum memory a docker container can use, like this:
docker run -m --memory=512m
And I'm getting
invalid argument "--memory=512m" for "-m, --memory" flag: invalid size: '--memory=512m'
I couldn't find any examples of this in the Docker documentation.
The -m and --memory flags are the same option, so Docker parsed --memory=512m as the size value for -m. Pass the flag and its value once, with a space between them. You must also specify the image name in your command:
docker run -m 512m {{IMAGE}}
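For example, either of these forms works (nginx is used here only as a placeholder image):
docker run -m 512m nginx
docker run --memory=512m nginx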
I'm looking for a way to reduce the output generated by docker compose up.
When running in CI, all the "interactive" output for download and extract progress is completely useless and generates lots of noise.
docker has --quiet but I don't see the same for docker compose.
There is a --quiet-pull option that reduces the output generated by docker compose up and docker compose run:
docker compose up --quiet-pull
You can always run docker compose in detached mode with the -d parameter and then check the logs of the service/container you want with docker logs --follow <container>.
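For example, in CI you might combine the two (the container name here is hypothetical; use docker ps to find yours):
docker compose up -d --quiet-pull
docker logs --follow myproject-web-1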
There used to be an option to set the log level with --log-level [DEBUG, INFO, WARNING, ERROR, CRITICAL], but it has been deprecated since version 2.0.
I am working on Windows Server 2019 and trying to run a CentOS docker container on it. I am running the following:
PS C:\Windows\system32> docker run -dit --name=testing23 --cpu-shares=12 raycentos:1.0
6a3ffb86c1d9509a9d80f0de54fc6acf5731ca645ee74e6aabe41d7153b3af70
PS C:\Windows\system32> docker exec -it 6a3ffb86c1d9509a9d80f0de54fc6acf5731ca645ee74e6aabe41d7153b3af70 bash
(app-root) bash-4.2# nproc
2
It still reports only 2 CPUs, not 32. How can we assign more CPUs to the container?
Refer to this topic for more details: https://docs.docker.com/config/containers/resource_constraints/#cpu
You have to pass the values with the proper flags. Try:
--cpus=<value> to set the maximum CPU resources a container can use
--cpuset-cpus=<list> to pin the container to specific CPUs, for example --cpuset-cpus=0-11 for 12 CPUs
By default, all CPUs can be used, or you can limit them per container using the --cpuset-cpus parameter.
docker run --cpuset-cpus="0-2" myapplication:latest
That would restrict the container to 3 CPUs (0, 1, and 2). See the docker run docs for more details.
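A quick way to confirm the restriction (a sketch, assuming the alpine image is available; on a Linux host this should print 3):
docker run --rm --cpuset-cpus="0-2" alpine nproc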
The preferred way to limit the CPU usage of a container is with a fractional limit on CPUs:
docker run --cpus 2.5 myapplication:latest
I have tried to install TF Serving in a docker container several times.
However, I still can't build it without errors.
I followed the installation steps on the official site, but I still hit compile errors during the build.
I suspect there is some flaw in the Dockerfile I built.
I have attached a screenshot of the error.
I found a Dockerfile on the net which meets my needs.
https://github.com/posutsai/DeepLearning_Docker/blob/master/Tensorflow-serving/Dockerfile-Tensorflow-serving-gpu
As of now (Oct 2019), official docker image of TFServing for both CPU & GPU is available at https://hub.docker.com/r/tensorflow/serving.
To set up your tfserving image on docker, just pull the image & start the container.
Pull the official image
docker pull tensorflow/serving:latest
Start the container
docker run -p 8500:8500 -p 8501:8501 --mount type=bind,source=/path/to/model/dir,target=/models/inception --name tfserve -e MODEL_NAME=inception -t tensorflow/serving:latest
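Once the container is up, you can sanity-check it over the REST port, for example (using the model name inception from the command above):
curl http://localhost:8501/v1/models/inception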
To use a GPU, pull the GPU-specific image and pass the appropriate parameters in the docker command:
docker pull tensorflow/serving:latest-gpu
docker run -p 8500:8500 -p 8501:8501 --mount type=bind,source=/path/to/model/dir,target=/models/inception --name tfserve_gpu -e MODEL_NAME=inception --gpus all -t tensorflow/serving:latest-gpu --per_process_gpu_memory_fraction=0.001
Please note:
The --gpus all flag allocates all available GPUs (in case you have multiple in your machine) to the docker container.
Use --gpus device=1 to select a specific GPU device by index, if you need to restrict usage to a particular device.
The --per_process_gpu_memory_fraction flag restricts how much GPU memory the tfserving container may use. Pass a value according to your program's needs.
I have a test environment for code in a docker image, which I use by running bash in the container:
me@host$ docker run -ti myimage bash
Inside the container, I launch a program normally by saying
root@docker# ./myprogram
I want the myprogram process to have a negative niceness (there are valid reasons for this). However:
root@docker# nice -n -7 ./myprogram
nice: cannot set niceness: Permission denied
Given that docker is run by the docker daemon, which runs as root, and I am root inside the container, why doesn't this work, and how can I force a negative niceness?
Note: The docker image is running debian/sid and the host is ubuntu/12.04.
Try adding
--privileged=true
to your run command.
[edit] --privileged=true is the old method. It looks like
--cap-add=SYS_NICE
should work as well.
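Putting it together with the setup from the question (myimage and ./myprogram come from the question), something like this should allow the negative niceness:
docker run --cap-add=SYS_NICE -ti myimage bash
and then, inside the container:
nice -n -7 ./myprogram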
You could also set the CPU priority of the whole container with -c for CPU shares.
Docker docs: http://docs.docker.com/reference/run/#runtime-constraints-on-cpu-and-memory
CGroups/cpu.shares docs: https://www.kernel.org/doc/Documentation/scheduler/sched-design-CFS.txt
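For example (a sketch, reusing myimage from the question), a container with a relative CPU weight of 512 instead of the default 1024:
docker run -c 512 -ti myimage bash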
I need to set the file descriptor limit correctly on the docker container.
I connect to the container with ssh (https://github.com/phusion/baseimage-docker).
Already tried:
editing limits.conf (the container ignores this file)
the upstart procedure found at https://coderwall.com/p/myodcq, but this docker image has a different kind of init process (runit)
modifying the configuration of the PAM library in /etc/pam.d
enabling PAM for ssh in sshd_config
The output is always the same:
bash: ulimit: open files: cannot modify limit: Operation not permitted
The latest docker supports setting ulimits through the command line and the API. For instance, docker run takes --ulimit <type>=<soft>:<hard> and there can be as many of these as you like. So, for your nofile, an example would be --ulimit nofile=262144:262144
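For a quick check (a sketch, assuming the ubuntu image):
docker run --rm --ulimit nofile=262144:262144 ubuntu bash -c "ulimit -n"
which should print 262144.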
After some searching I found this on a Google groups discussion:
docker currently inhibits this capability for enhanced safety.
That is because the ulimit settings of the host system apply to the docker container. It is regarded as a security risk that programs running in a container can change the ulimit settings for the host.
The good news is that you have two different solutions to choose from.
Remove sys_resource from lxc_template.go and recompile docker. Then
you'll be able to set the ulimit as high as you like.
or
Stop the docker daemon. Change the ulimit settings on the host. Start
the docker daemon. It now has your revised limits, and its child
processes as well.
I applied the second method:
sudo service docker stop
changed the limits in /etc/security/limits.conf
rebooted the machine
ran my container
ran ulimit -a in the container to confirm the open files limit had been inherited
See: https://groups.google.com/forum/#!searchin/docker-user/limits/docker-user/T45Kc9vD804/v8J_N4gLbacJ
If you are using a docker-compose file (compose file version 2.x or later), you can set the limits as below, overriding the default config:
ulimits:
  nproc: 65535
  nofile:
    soft: 26677
    hard: 46677
https://docs.docker.com/compose/compose-file/compose-file-v3/
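For context, the ulimits key belongs under a service; a minimal sketch (the service name and image are placeholders):
services:
  app:
    image: myapp:latest
    ulimits:
      nproc: 65535
      nofile:
        soft: 26677
        hard: 46677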
I have tried many options and am unsure why some of the solutions suggested above work on one machine but not on others.
A solution that works, is simple, and can be applied per container is:
docker run --ulimit memlock=819200000:819200000 -h <docker_host_name> --name=current -v /home/user_home:/user_home -i -d -t docker_user_name/image_name
Actually, I have tried the above answer, but it did not seem to work.
To get my containers to acknowledge the ulimit change, I had to update the docker.conf file before starting them:
$ sudo service docker stop
$ sudo bash -c "echo \"limit nofile 262144 262144\" >> /etc/init/docker.conf"
$ sudo service docker start
Here is what I did: I set
ulimit -n 32000
in the file /etc/init.d/docker and restarted the docker service. Then start a container:
docker run -ti node:latest /bin/bash
Run this command to verify:
user@4d04d06d5022:/# ulimit -a
You should see this in the result:
open files (-n) 32000
[user@ip ec2-user]# docker run -ti node /bin/bash
user@4d04d06d5022:/# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 58729
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 32000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 58729
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
The docker run command has a --ulimit flag; you can use this flag to set the open file limit in your docker container.
Run the following command when spinning up your container to set the open file limit:
docker run --ulimit nofile=<softlimit>:<hardlimit> <image>
The first value before the colon is the soft file limit and the value after the colon is the hard file limit. You can verify this by running your container in interactive mode and executing ulimit -n in your container's shell.
PS: check out this blog post for more clarity
For boot2docker, we can set it on /var/lib/boot2docker/profile, for instance:
ulimit -n 2018
Be warned not to set this limit too high, as it will slow down apt-get! See bug #1332440. I hit this with Debian jessie.