I'm working with a poor internet connection and trying to pull and run an image.
I wanted to download one layer at a time, and per the documentation I tried adding a flag, --max-concurrent-downloads, like so:
docker run --rm -p 8787:8787 -e PASSWORD=blah --max-concurrent-downloads=1 rocker/verse
But this gives an error:
unknown flag: --max-concurrent-downloads
See 'docker run --help'.
I tried typing docker run --help and interestingly did not see the option --max-concurrent-downloads.
I'm using Docker Toolbox since I'm on an old Mac.
In the documentation there's an option for --max-concurrent-downloads; however, it doesn't appear in my terminal when I type docker run --help.
How can I change the default of downloading 3 layers at a time to just one?
From the official documentation: (https://docs.docker.com/engine/reference/commandline/pull/#concurrent-downloads)
You can pass --max-concurrent-downloads during a pull operation.
You can set --max-concurrent-downloads with the dockerd command.
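For example, on a Linux host where you control how the daemon starts, the flag goes on the dockerd invocation rather than the docker client (a minimal sketch):
dockerd --max-concurrent-downloads=1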
If you're using the Docker Desktop GUI for Mac or Windows:
You can edit the daemon.json file directly in the Docker Engine settings.
This setting needs to be passed to dockerd when starting the daemon, not to the docker client CLI. The dockerd process runs inside a VM with docker-machine (and with other Docker Desktop environments).
With the docker-machine used by Toolbox, you typically pass the engine flags on the docker-machine create command line, e.g.
docker-machine create --engine-opt max-concurrent-downloads=1
Once you have a created machine, you can follow the steps from these answers to modify the config of an already running machine, mainly:
SSH into your local docker VM.
Note: if 'default' is not the name of your docker machine, substitute 'default' with your docker machine's name.
$ docker-machine ssh default
Open the Docker profile:
$ sudo vi /var/lib/boot2docker/profile
Then in that profile, you would add the daemon flag --max-concurrent-downloads=1 (the same option you would otherwise pass via --engine-opt).
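If the profile uses an EXTRA_ARGS variable (typical for boot2docker; this is only a sketch, and the existing contents of your profile will differ), the flag sits alongside whatever daemon flags are already listed there, roughly like:
EXTRA_ARGS='
--max-concurrent-downloads=1
'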
Newer versions of Docker Desktop (along with any Linux install) make this much easier with a configuration menu, Daemon -> Advanced, where you can specify your daemon.json entries like:
{
"max-concurrent-downloads": 1
}
I want to run an image which I have already created and uploaded on the docker hub. Is it possible to run that image on lxc/lxd? Basically I want to do performance comparison between docker and lxc.
I have installed skopeo, umoci, go-md2man and jq.
Now, when I try to run the command lxc-create c1 -t oci -- --url docker://awaisaz/test:part2
it gives a trust policy error: /etc/containers/policy.json: no such file or directory
Can anyone suggest a solution or an alternative way to do this?
So, you want to run a docker container inside an LXC Container.
First, you need to get the Docker daemon up and running inside an LXC container.
sudo lxc config edit <lxc-container-name>
In the config section, add the following (a sketch of the resulting YAML follows the list):
linux.kernel_modules: overlay,ip_tables
security.nesting: true
security.privileged: true
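Assuming LXD's YAML layout from lxc config edit, the relevant part of the config section might end up looking roughly like this (a sketch, not the full file):
config:
  linux.kernel_modules: overlay,ip_tables
  security.nesting: "true"
  security.privileged: "true"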
Then exit the YAML file and restart the LXC container:
sudo lxc restart <container_name>
After a successful restart of the LXC container, exec into it:
sudo lxc exec <container_name> /bin/bash
Then, remove Docker's stale local network database:
sudo rm /var/lib/docker/network/files/local-kv.db
Restart the Docker service (inside the LXC container):
service docker restart
Then you can use Docker inside the LXC container as if you were in a VM.
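For example, inside the LXC container you should then be able to pull and run the image from the question (assuming the image has a default command; the flags here are only illustrative):
docker pull awaisaz/test:part2
docker run --rm -it awaisaz/test:part2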
I would like my centos7 container to log messages to /var/log/messages
[root@gen-r-vrt-057-009 ~]# docker exec -it rsyslog_base_centos7 "/bin/bash"
[root@gen-r-vrt-057-009 /]# logger "lior"
[root@gen-r-vrt-057-009 /]# cat /var/log/messages
[root@gen-r-vrt-057-009 /]#
I installed rsyslog and tried running the container in several ways:
docker run -dit --name rsyslog_base_centos7 --network host --privileged rsyslog/rsyslog_base_centos7:latest /usr/sbin/init
docker run -dit --name rsyslog_base_centos7 --log-driver=syslog --network host --privileged rsyslog/rsyslog_base_centos7:latest /usr/sbin/init
docker run -dit --name rsyslog_base_centos7 --log-driver=syslog --network host -v /dev/log:/dev/log --privileged rsyslog/rsyslog_base_centos7:latest /usr/sbin/init
But nothing seems to do the trick.
Container OS and Docker version:
[root@gen-r-vrt-057-009 /]# cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)
[root@gen-r-vrt-057-009 /]# exit
[root@gen-r-vrt-057-009 ~]# docker -v
Docker version 17.03.2-ce, build f5ec1e2
Any ideas?
Thanks
If I understand you correctly, you want to run rsyslog inside the container but want to make rsyslog log data from the host machine. By default, this is not possible due to isolation.
It is an interesting use case, and we should probably add it to the issue tracker at https://github.com/rsyslog/rsyslog-docker.
You can probably achieve your goal by mounting /dev/log into the container, but depending on the host OS that requires some extra work there as well.
The rsyslog/rsyslog_base_centos7 image is designed to provide a base container that you can use to make applications inside the container log via rsyslog.
Please also have a look at this Twitter conversation: https://twitter.com/rgerhards/status/978183898776686592 - doc updates will be upcoming once we have the actual procedure.
Note: This answer was completely rewritten as I originally totally missed the point.
Smart people from rsyslog put the following Docker image together:
https://hub.docker.com/r/rsyslog/rsyslog_base_centos7
It allows for your use case:
c) want to run a client machine where rsyslog processes log messages
(the default CentOS 7 config does NOT work inside a container, but
this container has a corrected config!)
Here is a URL to a patch you can apply to the CentOS 7 Docker config to make it work:
https://gist.github.com/oleksandriegorov/2718a7e35b8d17ada934b651d627ab97
Of course, restart rsyslogd to apply changes.
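For example, assuming the container was started with /usr/sbin/init as in the question (so systemd is available inside it), something roughly like this should apply the change and verify logging:
docker exec -it rsyslog_base_centos7 systemctl restart rsyslog
docker exec -it rsyslog_base_centos7 logger "lior"
docker exec -it rsyslog_base_centos7 cat /var/log/messages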
I'm using the packer docker builder with ansible to create a docker image (https://www.packer.io/docs/builders/docker.html)
I have a machine (client) which is meant to run build scripts. The packer docker builder is executed with ansible from this machine. This machine has the docker client and is connected to a remote docker daemon. The environment variable DOCKER_HOST is set to point to the remote docker host. I'm able to test the connectivity and things are working well.
Now the problem is, when I execute packer docker to build the image, it errors out saying:
docker: Run command: docker run -v /root/.packer.d/tmp/packer-docker612435850:/packer-files -d -i -t ubuntu:latest /bin/bash
==> docker: Error running container: Docker exited with a non-zero exit status.
==> docker: Stderr: docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
==> docker: See 'docker run --help'.
It seems the packer docker builder is stuck looking at the local daemon.
Workaround: I renamed the docker binary and introduced a script called "docker" which sets DOCKER_HOST and invokes the original binary with the parameters passed on.
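For reference, the wrapper described above might look roughly like this (the path to the renamed binary and the DOCKER_HOST value are placeholders, not anything from the original setup):
#!/bin/sh
# hypothetical wrapper installed as "docker" ahead of the real binary on PATH
export DOCKER_HOST=tcp://remote-docker-host:2376
exec /usr/local/bin/docker-original "$@"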
Is there a better way to deal with this?
Packer's Docker builder doesn't work with remote hosts, since Packer uses the /packer-files volume mount to communicate with the container. This is vaguely expressed in the docs with:
The Docker builder must run on a machine that has Docker installed.
And explained in Overriding the host directory.
I am using docker for the first time and I was trying to implement this -
https://docs.docker.com/get-started/part2/#tag-the-image
At one stage I was trying to connect to localhost with this command:
$ curl http://localhost:4000
which showed this error:
curl: (7) Failed to connect to localhost port 4000: Connection refused
However, I solved this with the following commands:
$ docker-machine ip default
$ curl http://192.168.99.100:4000
After that everything was going fine, but in the last part I tried to run the app using the following line from the tutorial:
$ docker run -p 4000:80 anibar/get-started:part1
But I got this error:
C:\Program Files\Docker Toolbox\docker.exe: Error response from daemon: driver failed programming external connectivity on endpoint goofy_bohr (63f5691ef18ad6d6389ef52c56198389c7a627e5fa4a79133d6bbf13953a7c98): Bind for 0.0.0.0:4000 failed: port is already allocated.
You need to make sure that the previous container you launched is killed, before launching a new one that uses the same port.
docker container ls
docker rm -f <container-name>
Paying tribute to IgorBeaz, you need to stop the currently running container. For that you need to know the current CONTAINER ID:
$ docker container ls
You get something like:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
12a32e8928ef friendlyhello "python app.py" 51 seconds ago Up 50 seconds 0.0.0.0:4000->80/tcp romantic_tesla
Then you stop the container by:
$ docker stop 12a32e8928ef
Finally you try to do what you wanted to do, for example:
$ docker run -p 4000:80 friendlyhello
I tried all the above answers and none of them worked; in my case even docker container ls didn't show any container running. It looks like the problem is that the docker proxy is still using ports even though there are no containers running. In my case I was using Ubuntu. Here's what I did to solve the problem; just run the following two commands:
sudo service docker stop
sudo rm -f /var/lib/docker/network/files/local-kv.db
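You will then most likely want to start the daemon again:
sudo service docker start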
I solved it this way:
First, I stopped all running containers:
docker-compose down
Then I executed an lsof command to find the process using the port (for me it was port 9000):
sudo lsof -i -P -n | grep 9000
Finally, I "killed" the process (in my case, it was a VSCode extension):
kill -9 <process id>
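If you already know the port, lsof can also filter on it directly (9000 here just matches the example above):
sudo lsof -i :9000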
The quick fix is to just restart Docker:
sudo service docker stop
sudo service docker start
The above two answers are correct but didn't work for me.
I kept seeing blank output for docker container ls,
then I tried docker container ls -a, which showed all containers, including previously exited ones and running ones.
Then docker stop <container id> or docker container stop <container id> didn't work,
so I tried docker rm -f <container id> and it worked.
After that, docker container ls -a no longer listed that container.
When I used the nginx docker image, I also got this error:
docker: Error response from daemon: driver failed programming external connectivity on endpoint recursing_knuth (9186f7d7f523732b99d3510029cde9679f3f3fe7b7eb5f612d54c4aacea58220): Bind for 0.0.0.0:8080 failed: port is already allocated.
And I solved it using following commands:
$ docker container ls
$ docker stop [CONTAINER ID]
Then, running the docker container again (like this) works:
$ docker run -v $PWD/vueDemo:/usr/share/nginx/html -p 8080:80 -d nginx:alpine
You just need to stop the previous docker container.
I had the same problem with docker-compose. To fix it:
Kill the docker-proxy process
Restart docker
Start docker-compose again
docker ps will reveal the list of containers running on docker. Find the one running on your needed port and note down its container ID.
Stop and remove that container using the following commands:
docker stop <container-id>
docker rm <container-id>
Now run docker-compose up and your services should run as you have freed the needed port.
On Linux, 'sudo systemctl restart docker' solved the issue for me.
For anyone having this problem with docker-compose.
When you have more than one project (i.e. in different folders) with similar services you need to run docker-compose stop in each of your other projects.
If you are using Docker Desktop, you can quit it and then restart it. That solved the problem for me.
In my case, there was no process to kill.
Updating docker fixed the problem.
It might be a conflict between the same port specified in docker-compose.yml and docker-compose.override.yml, or the same port specified both explicitly and via an environment variable.
I had a docker-compose.yml with ports on a container specified using environment variables, and a docker-compose.override.yml with one of the same ports specified explicitly. Apparently docker tried to open both on the same container. docker container ls -a listed neither because the container could not start and list the ports.
For me the containers were not showing up as running, so NOTHING was using port 9010 (in my case), BUT Docker still complained.
I did not want to reset my Docker (for Windows) so what I did to resolve it was simply:
Remove the network. (I knew that a container had previously been using this network with the port in question, 9010.)
docker network ls
docker network rm blabla (or the network ID)
(I actually used a new network rather than the old, buggy one, but that shouldn't be needed.)
Restart Docker
That was the only way it worked for me. I can't explain it, but somehow the "old" network was still bound to that port (9010) and Docker kept "blocking" it (whining about it).
FOR WINDOWS:
I killed every process that Docker uses and restarted the Docker service in Services. My containers are working now.
The issue is ports that are still in use by Docker even though you are not using them at that moment.
On Linux, you can run sudo netstat -tulpn to see what is currently listening on that port. You can then choose to configure either that process or your Docker container to bind to a different port to avoid the conflict.
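For example, with the port from the original question:
sudo netstat -tulpn | grep :4000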
Stopping the container didn't work for me either. I changed the port in docker-compose.yml.
For me, the problem was mapping the same port twice.
Due to a parametric docker run, it ended up being something like
docker run -p 4000:80 -p 4000:80 anibar/get-started:part1
notice double mapping on port 4000.
The log is not informative enough in this case: it doesn't state that my own double mapping was the cause, and the port is no longer bound after the docker run command returns with a failure.
Don't forget the easiest fix of all....
Restart your computer.
I tried most of the above and still couldn't fix it. Then I just restarted my Mac and everything was back to normal.
For anyone still looking for a solution, just make sure you have bound your port the right way round in your docker-compose.yml
It goes:
- <EXTERNAL SERVER PORT>:<INTERNAL CONTAINER PORT>
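For example, to publish the container's internal port 80 on host port 4000 (the numbers from the original question), the compose entry would be:
ports:
  - "4000:80"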
Had the same problem. Went to Docker for Mac Dashboard and clicked restart. Problem solved.
My case was dumb XD I was exposing port 80 twice :D
ports:
- '${APP_PORT:-80}:80'
- '${APP_PORT:-8080}:8080'
APP_PORT is defined, thus 80 was exposed twice.
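A corrected version might use a separate variable for the second mapping (APP_ADMIN_PORT here is just an illustrative name, not something the project defines):
ports:
  - '${APP_PORT:-80}:80'
  - '${APP_ADMIN_PORT:-8080}:8080'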
I tried almost all the solutions and found the probable reason/solution. If you are using traefik or any other networking server, they internally provide a proxy for load balancing. Most people use the blueprint as-is, and it works pretty fine. It then passes load control entirely to nginx or similar proxy servers. So stopping, killing (the networking server), or pruning might not help.
Solution for traefik with nginx,
sudo /etc/init.d/nginx stop
# or
sudo service nginx stop
# or
sudo systemctl stop nginx
Credits
How to stop docker processes
Making Docker Stop Itself <- Safe and Fast
This is the best way to stop containers and all unstoppable processes: make Docker do the job.
Go to Docker settings > Resources, change any resource value, and click Apply & Restart.
Docker will stop itself and every one of its processes, even the most stubborn ones that might not be killed by commonly used commands such as kill, or wilder commands like rm suggested by others.
I ran into a similar problem before, and all the good, proper tips from my colleagues somehow did not work out. I share this safe trick whenever someone on my team asks me about this.
Error response from daemon: driver failed programming external connectivity on endpoint foobar
Bind for 0.0.0.0:8000 failed: port is already allocated
hope this helps!
Simply restart your computer so the Docker service gets restarted.