I recently installed Docker on my Raspberry Pi 4 and connected it to a Portainer instance on my other server. On the Raspberry Pi I created two Docker containers, but somehow Docker keeps automatically creating random Ubuntu containers with auto-generated names.
I have no idea why it is doing this. :/
And when I delete those containers, a few hours later new ones appear again.
I hope someone can help me with this problem.
OK, I think I solved this...
I run this web interface (Portainer) on my publicly hosted server, and I had shared my Raspberry Pi's IP and Docker port with Portainer as an "Endpoint". I have now restricted that port on my Raspberry Pi so that only my Portainer server can reach it, and that solved the problem: no more containers are created. I came to this solution because I saw in the container's details that it was executing shell commands to wget some ".sh" file from some IP. I thought, "this is not mine; someone wants to mine bitcoins on my Raspberry Pi" (because the script downloaded some mining scripts...).
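On the Pi, that restriction can be expressed as a firewall rule along these lines (a sketch, assuming the Docker API was exposed on the common port 2375; PORTAINER_IP stands in for the Portainer server's address):

sudo ufw allow from PORTAINER_IP to any port 2375 proto tcp   # only Portainer may connect
sudo ufw deny 2375/tcp                                        # everyone else is blocked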
PS: My English is not very good, but I hope this helps someone else.
Those random names are generated automatically when a container is started without a name. If you did not start an unnamed container yourself (by issuing docker run without the --name option), those are most likely being created by a docker build.
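For example (an illustration; the generated name will differ on your machine):

docker run -d ubuntu sleep 60                # gets an auto-generated name such as ecstatic_banach
docker run -d --name myapp ubuntu sleep 60   # gets the explicit name myapp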
You can delete those stopped containers manually one at a time, or use commands like docker system prune (see docker help system prune for documentation) to clean your daemon of unused objects, including those stopped containers.
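A minimal sketch of the cleanup:

docker container prune   # remove all stopped containers
docker system prune      # also remove dangling images and unused networks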
Regarding running Docker from within WSL without Docker Desktop, there is a comprehensive article here. However, when it comes to sharing the Docker daemon between WSL instances, the article only covers the starting bits. This question asks for the whole process.
First of all, why configure it to use a socket stored in the shared /mnt/wsl directory, instead of the commonly suggested approach of exposing port 2375 from Docker? I am asking because I found it challenging to find something that can be used as the shared directory between different WSL instances. Making use of an existing Windows drive (an NTFS share mount) will be people's first instinct, but it won't work. I tried that, calling mknod to create a device file in an NTFS shared folder, and got:
mknod: /mnt/d/foobar: Operation not supported
Is it because of this?
"The issue is that Docker runs on 2375 but it's bound only to localhost in some setups (WSL2 backend / Linux containers)"
Is that still true? Even if it is, it's fine in my case above, since I'm only sharing the Docker daemon between WSL instances on the same localhost.
So, this is to ask for a complete, practical solution for sharing the Docker daemon between WSL instances that anyone can follow. Thanks!
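For reference, the shared-socket setup the article starts from looks roughly like this (my sketch of it; the shared-docker directory name is just a convention):

# In the WSL instance that runs the daemon. /mnt/wsl is a tmpfs
# visible to all running WSL2 instances, unlike an NTFS mount:
mkdir -pm o=,ug=rwx /mnt/wsl/shared-docker
dockerd -H unix:///mnt/wsl/shared-docker/docker.sock

# In any other WSL instance, point the client at that socket:
export DOCKER_HOST=unix:///mnt/wsl/shared-docker/docker.sock
docker ps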
Say we provision an overlay network using docker swarm and create various containers with the following names:
Alice
Bob
Larry
John
Now if we try to ping any container from another, it fails because name lookup does not work: alice does not know bob's IP, and so on. We have been taking care of this by manually editing /etc/hosts on every container and entering the name/IP pairs there, but this is becoming very tedious with every restart of our network. There ought to be a better way of handling this.
Services created using docker stack, for example, do not suffer from this problem. For various reasons we are stuck with creating containers using vanilla docker create. How can we make containers discover each other on the overlay network without the manual labor of editing /etc/hosts?
Below is the detailed workflow we currently have to follow (a command sketch follows the list):
We first provision a docker swarm and an overlay network.
Then we create each container using the docker create command and start it using the docker start command, passing the --network flag at creation time to attach the container to the overlay network.
We then use docker container inspect to get the IP address of each container. This means running n commands and noting down the addresses.
Then we log into each container and edit the /etc/hosts file by hand, entering the (name, IP) pairs of the other containers. Summed across containers, that is n*(n-1) records entered by hand.
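For one container, the steps above look roughly like this (a sketch; the names and image are placeholders):

docker create --name alice --network my-overlay my-image
docker cp ./conf alice:/etc/app/   # pre-start commands, see below
docker start alice

# note down alice's IP on the overlay network:
docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' alice

# then, inside every other container, append a line like
#   10.0.1.5  alice
# to /etc/hosts by hand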
Not sure why docker create does not do all this automatically - docker already knows (or can know) all the IP addresses. Containers provisioned using docker stack, for example, do not have to go through this manual process to "discover" each other. The reasons we cannot use docker stack are:
it does not allow us to specify container names
we run various commands (mostly docker cp) before starting the container, which is not possible with stack
You might have seen this already: DNS on user-defined networks.
Have you created your services as in the section "Attach a service to an overlay" in that doc?
It seems that the only thing needed is to refer to the containers by {name}.{network} instead of just {name}. There is no need to edit /etc/hosts, use the --add-host flag, or run an additional DNS server. See https://forums.docker.com/t/need-help-connecting-containers-in-swarm-mode/77944/6
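For example (a sketch, with my-overlay standing in for the network name used above):

docker exec alice ping -c 1 bob.my-overlay   # resolved by Docker's embedded DNS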
Further details: the official Docker documentation does not mention anywhere the necessity of adding the .{network} suffix to {containername}. Indeed, at this link, in step 7 of the walk-through, no .{network} suffix is used. So it is not clear why we need it. The Docker version we are using is 18.06.1-ce on Linux.
I had a similar issue: I was following this official tutorial to create a docker swarm overlay network on two Raspberry Pi 3s, and pinging was impossible until I found the answer on GitHub. As I understand it, the latest version of the alpine image (for a reason unknown to me) is not suitable for the Raspberry Pi 3, so the solution is to use version 3.12.3, like this: sudo docker run -dit --name alpine1 --network test1 alpine:3.12.3
Hope this helps someone :)
I am setting up a series of Linux command-line challenges (for internal use/training), similar to those at OverTheWire.org's Bandit. From some reading I have done about their infrastructure, they set things up as follows:
All ssh-based games on OverTheWire run in Docker containers. When you login with SSH to one of the games, a fresh Docker container is created just for you. No one else is logged in to your container, nor are there any files from other players lying around. We opted for this setup to provide each player with a clean environment to experiment and learn in, which is automatically cleaned up when you log out.
This seems like an ideal solution, since everyone who logs in gets a completely clean environment (destroyed on logout), so simultaneous players do not interfere with each other.
I am very new to Docker and understand it in principle, but I am unsure how to set up a similar system - in particular, how to spawn a new Docker container on each SSH login to a server and then destroy it on logout/disconnection.
I'd appreciate any advice on how to design/implement this kind of setup.
It seems to me there are two main goals here: first, understanding what Docker really does and how it works; second, the system that orchestrates the whole thing.
Let me make a brief introduction. I won't go into details, but essentially Docker is a platform that works like system virtualization, letting you isolate a process, an operating system, or a whole application without any kind of hypervisor. A container shares the kernel of the host system, and everything it contains is isolated from the host and from the other containers.
So the basic principle you are looking for is a system that orchestrates containers that run an SSH server with port 22 open. Although there are many ways to reach this goal, one way is with this docker sshd server image:
docker run -itd --rm rastasheep/ubuntu-sshd bash
Docker needs a foreground process to stay alive. By using -it you create an interactive session with the bash interpreter; this keeps the container alive and also gives you a bash terminal inside an isolated virtual Ubuntu server.
--rm: removes the container once you exit it.
rastasheep/ubuntu-sshd: the name of the Docker image.
As you can see, what is still missing is the system that connects your application to this Docker platform. One approach is Python's docker library, which drives the Docker client programmatically. As a first step, I would recommend installing Docker on your computer and creating a couple of Ubuntu servers with an SSH server, then connecting to them from your host. It will help you see whether an sshd server is really necessary and, if so, what network setup you need to route all the clients into the containers. Read the official Docker networking documentation.
With the example I described, a fresh terminal is started and there is no need to connect to the container via SSH. That way you don't need to route traffic, find free host ports to map to the containers, or check for and shut down the container once the connection has finished (otherwise it would stay alive).
There are many ways your system could be built, and I would strongly recommend starting by creating some containers with the docker tool to understand how it all works.
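One concrete direction, not covered above (a hedged sketch of wiring the per-login container into sshd itself; challenge-image is a hypothetical image name):

# /etc/ssh/sshd_config on the gateway host:
Match User player
    ForceCommand docker run --rm -it challenge-image bash

Every SSH login as "player" then lands in a fresh container, and --rm destroys it when the session ends. Note that the player account needs access to the Docker daemon (e.g., docker group membership), which is effectively root on the host, so treat this as a starting point rather than a hardened setup.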
My team and I are converting some of our infrastructure to Docker using docker-compose. Everything appears to be working great; the only issue is that when doing a restart it gives me a "connection pool is full" error. I am trying to figure out what is causing this. If I remove 2 containers (one complete setup), it works fine.
A little background on what I am trying to do: this is a Ruby on Rails application that is run with multiple different configurations for different teams within an organization. In total the server runs 14 containers. The host server OS is CentOS, and the compose command is run from a MacBook Pro on the same network. I have also tried this with a boot2docker VM, with the same result.
Here is the verbose output from the command (using the boot2docker VM):
https://gist.github.com/rebelweb/5e6dfe34ec3e8dbb8f02c0755991ef11
Any help or pointers is appreciated.
I have been struggling with this error message as well in my development environment, which runs more than ten containers through docker-compose.
WARNING: Connection pool is full, discarding connection: localhost
I think I've discovered the root cause of this issue. The Python library requests maintains a pool of HTTP connections that the docker library uses to talk to the Docker API and, presumably, the containers themselves. My hypothesis is that only those of us who use docker-compose with more than 10 containers will ever see this. The problem is twofold:
requests defaults its connection pool size to 10, and
there doesn't appear to be any way to inject a bigger pool size from the docker-compose or docker libraries
I hacked together a solution. My copy of requests was located in ~/.local/lib/python2.7/site-packages. I found requests/adapters.py and changed DEFAULT_POOLSIZE from 10 to 1000.
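In shell terms the hack amounts to this (a sketch; the site-packages path is the one from my machine above, so adjust it for yours):

sed -i 's/^DEFAULT_POOLSIZE = 10$/DEFAULT_POOLSIZE = 1000/' \
    ~/.local/lib/python2.7/site-packages/requests/adapters.py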
This is not a production solution; it is pretty obscure and will not survive a package upgrade.
You can try resetting the network pool before deploying:
$ docker network prune
Docs here: https://docs.docker.com/engine/reference/commandline/network_prune/
I got the same issue with my Django application, running about 70 containers in docker-compose. This post helped me, since it seems that a prune is needed after setting COMPOSE_PARALLEL_LIMIT.
I did:
docker-compose down
export COMPOSE_PARALLEL_LIMIT=1000
docker network prune
docker-compose up -d
For future readers, a small addition to the answer by @andriy-baran:
You need to stop all containers, delete them, and then run network prune (because the prune command removes unused networks only).
So something like this:
docker kill $(docker ps -q)
docker rm $(docker ps -a -q)
docker network prune
Currently, I'm working on a project where we are using ROS, bebop_autonomy, and OpenCV to control a Parrot Bebop2 autonomously. The machines we use in the workspace run Ubuntu 14.04.5, and I can start a container from an image I created with "docker run -it --network=host username/image". After configuring everything inside the container, the bebop_autonomy node works fine and can communicate on the Bebop's network perfectly. When you run ip addr in both the container and the host machine, they show the same address, as you'd expect.
However, when I try to run it on my Windows machine, the container's IP is different from the host machine's, and I never receive any ACK packets when I try to communicate with the Bebop. I'm assuming this is because the packets aren't being sent to the right IP, or aren't being forwarded correctly.
I have tried creating my own network and setting the IP manually with "docker network create", passing it to the run command as an argument, but I can't seem to get it to work at all. I've also tried creating different switches in Hyper-V Manager, but nothing I've read in the last few days has helped me figure this out.
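For reference, the attempt looked roughly like this (a sketch; the subnet and names are placeholders, and --ip only works on a user-defined network like this):

docker network create --subnet=192.168.42.0/24 bebop-net
docker run -it --network=bebop-net --ip=192.168.42.10 username/image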
I've got a good handle on how Docker works, but most of the reference material I see assumes a host that already runs Linux. If I can't figure this out, it's almost useless for us to continue with Docker in the first place.
Is there any way to configure Docker for Windows to work in the same way that Docker works on Linux when providing --network=host?
I ended up achieving what I wanted by creating a separate network in Hyper-V Manager, setting that network to use only an external Wi-Fi adapter, and running the container on that network. There has to be a better way, though.