I'm looking at documentation here, and see the following line:
$ docker run -it --network some-network --rm redis redis-cli -h some-redis
What should go in the --network some-network field? My earlier docker run command only did the default port mapping: docker run -d -p 6379:6379, etc.
I'm starting my redis server with default docker network configuration, and see this is in use:
$ docker container ls
CONTAINER ID   IMAGE   COMMAND                  CREATED          STATUS          PORTS                    NAMES
abcfa8a32de9   redis   "docker-entrypoint.s…"   19 minutes ago   Up 19 minutes   0.0.0.0:6379->6379/tcp   some-redis
However, using the default bridge network produces:
$ docker run -it --network bridge --rm redis redis-cli -h some-redis
Could not connect to Redis at some-redis:6379: Name or service not known
Ignore the --network bridge option and instead use:
docker exec -it some-redis redis-cli
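For instance, a quick connectivity check against the container from your docker container ls output might look like:
$ docker exec -it some-redis redis-cli ping
PONG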
Docker includes support for networking containers through the use of network drivers. By default, Docker provides two network drivers for you: the bridge and the overlay drivers. You can also write a network driver plugin to create your own drivers, but that is an advanced task.
Read more here.
https://docs.docker.com/engine/tutorials/networkingcontainers/
https://docs.docker.com/v17.09/engine/userguide/networking/
You need to run
docker network create some-network
It doesn't matter what name some-network is, just so long as the Redis server, your special CLI container, and any clients talking to the server all use the same name. (If you're using Docker Compose this happens for you automatically and the network will be named something like directoryname_default; use docker network ls to find it.)
If your Redis server is already running, you can use docker network connect to attach the existing container to the new network. This is one of the few settings you're able to change after you've created a container.
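For example, with the network from above already created, attaching your running server and then launching the CLI container might look like this (using the names from your docker container ls output):
$ docker network connect some-network some-redis
$ docker run -it --network some-network --rm redis redis-cli -h some-redis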
If you're just trying to run a client to talk to this Redis, you don't need Docker for this at all. You can install the Redis client tools locally and run redis-cli, pointing at your host's IP address and the first port in the docker run -p option. The Redis wire protocol is simple enough that you can also use primitive tools like nc or telnet.
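As a sketch, given the -p 6379:6379 mapping above (the nc line uses Redis's inline command form; exact nc flags and behavior vary by platform):
$ redis-cli -h 127.0.0.1 -p 6379 ping
PONG
$ printf 'PING\r\n' | nc 127.0.0.1 6379
+PONG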
Related
So I'm trying to set up a new container that needs to communicate with MySQL. I set up the MySQL container. I did a
docker network ls
to see the name of the network it uses. Then I start the snipe-it container using:
docker run -d -p 8082:80 -p 443:443 --name="snipeit" --network=mysql_default --mount source=snipe-vol,dst=/var/lib/snipeit --env-file=./snipe-it-env snipe/snipe-it
When I go to the web portal of the container, the setup script says it can't connect to the DB and tells me to update the settings in the .env file.
As far as I can tell, the environment variables were all correct.
Use the busybox image to diagnose it first:
docker run --rm -it --network=mysql_default busybox
Then, in the console, try to ping your MySQL instance: ping mysql, or whatever name your MySQL container is defined under as a service.
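A minimal session inside busybox might look like this (assuming the container is actually named mysql; 3306 is MySQL's default port):
/ # ping -c 1 mysql
/ # telnet mysql 3306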
You can check which services are actually running on a given network with:
docker network inspect mysql_default
There should be a section named "Containers" listing your MySQL container by its proper name, provided it is running right now.
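Abbreviated, the relevant part of the inspect output looks roughly like this (the ID and address here are illustrative):
"Containers": {
    "3f2a9c…": {
        "Name": "mysql",
        "IPv4Address": "172.18.0.2/16",
        ...
    }
}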
I have an app that launches a docker container and automates a few of the routines.
Now I have dockerized this app, but it is not able to talk to other containers over localhost. I tried setting
--network host
when launching the container, and now I'm not able to access the containerized webapp over localhost:<port>.
Any pointers?
localhost won't work. Suppose you are running a VM and try to talk to your host or to other VMs running on your machine. If you call localhost from one of the VMs, it's localhost for that VM only, not for your host. So you won't be able to talk from one VM to another by calling localhost. Docker works the same way with regard to localhost. You have two options:
Use a network
If you go with a network, create one and add all the containers to it. This is the approach Docker now recommends.
docker network create <your-network-name>
docker run --network <your-network-name> --name <container-name1> <image>
docker run --network <your-network-name> --name <container-name2> <image>
Then use the container name (container-name1) to talk to that service from the other service (container-name2), as in the sketch below.
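A minimal end-to-end sketch (nginx and busybox are placeholder images; the ping just verifies that the name resolves across the network):
docker network create my-net
docker run -d --network my-net --name web nginx
docker run --rm --network my-net busybox ping -c 1 web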
Use --link option
Or you could use the --link option, which is a legacy feature in Docker. The Docker docs say that unless you have a specific reason to use it, you shouldn't use --link anymore.
docker run --name <container1> <image>
docker run --name <container2> --link <container1>:<container1> <image>
Note that links are one-directional: container2 can reach container1 by its name, but not the other way around. You can use this container name in places like the DB host setting, etc.
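A concrete instance of the pattern (redis and busybox here are just placeholder images):
docker run -d --name container1 redis
docker run --rm --link container1:container1 busybox ping -c 1 container1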
Did you try creating a common bridge network and attaching your containers to the same network?
Create the network:
docker network create networkname
and then add the switch --network=networkname to each docker run command.
I figured it out later, after going through a lot of other documents.
Step 1: install docker inside the container. I added the following line to my Dockerfile:
RUN curl -sSL https://get.docker.com/ | sh
Step 2: provide the volume mapping in the docker run command:
-v /var/run/docker.sock:/var/run/docker.sock
Now the host's docker commands are accessible from within my container, and without changing the --network of the current container, I'm able to access the other containers over localhost.
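Putting both steps together, the launch command might look like this (the image name is a placeholder for your automation app's image):
docker run -it -v /var/run/docker.sock:/var/run/docker.sock my-automation-app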
I have installed docker compose and used it a little, then decided I did not need it. Now when I create containers by hand they are assigned a network with an IP address, gateway, and other settings. When I inspected older containers created before I installed docker compose, they did not have these network settings.
I have tried uninstalling docker compose and reinstalling docker, which did not work. Is there anything I can do? The reason I am asking is that I can't link containers together, because every new container is assigned an IP address and other network settings.
Docker always does that; it has nothing to do with Compose. Compose doesn't modify your Docker installation in any way; it purely connects to the daemon to run commands under the hood.
By linking containers together, I'm assuming you just mean so that they can communicate with each other? --link has been deprecated for some time now in favor of docker network .... Try the following:
$ docker network create test-net
$ docker run -d --name c1 --net test-net alpine:3.3 sleep 20000
$ docker run -it --name c2 --net test-net alpine:3.3 ping c1
Ok, I am pretty new to the Docker world. So this might be a very basic question.
I have a container running in Docker, which is running RabbitMQ. Let's say the name of this container is "Rabbit-container".
The RabbitMQ container was started with this command:
docker run -d -t -i --name rmq -p 5672:5672 rabbitmq:3-management
Python script command with 2 args:
python ~/Documents/myscripts/migrate_data.py amqp://rabbit:5672/ ~/Documents/queue/
Now, I am running a Python script from my host machine, which is creating some messages. I want to send these messages to my "Rabbit-container". Hence I want to connect to this container from my host machine (Mac OSX).
Is this even possible? If yes, how?
Please let me know if more details are needed.
So, I solved it by simply mapping the RMQ listening port to the host OS:
docker run -d -t -i --name rmq -p 15672:15672 -p 5672:5672 rabbitmq:3-management
I previously had only -p 15672:15672 in my command; that maps the admin UI from the Docker container to my host OS. I added -p 5672:5672, which maps the RabbitMQ listening port from the container to the host.
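To confirm that the broker is now reachable from the host, any TCP check will do, for example:
nc -vz localhost 5672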
If you're running this container on your local OS X system, then you should find your default docker-machine IP address by running:
docker-machine ip default
Then you can change your Python script to point to that address and the mapped port, <your_docker_machine_ip>:5672.
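For instance (192.168.99.100 is a common Docker Machine default; yours may differ):
docker-machine ip default
192.168.99.100
python ~/Documents/myscripts/migrate_data.py amqp://192.168.99.100:5672/ ~/Documents/queue/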
That happens because docker runs in a virtualization engine on OSX and Windows, so when you map a port to the host, you're actually mapping it to the virtual machine.
You'd need to run the container with port 5672 exposed, perhaps 15672 as well if you want the web UI, and 5671 if you use SSL, or any other port for which you add a TCP listener in RabbitMQ.
It would also be easier if you had a specific IP and a hostname for the rabbitmq container. To do this, you'd need to create your own docker network:
docker network create --subnet=172.18.0.0/16 mynet123
After that, start the container like so:
docker run -d --net mynet123 --ip 172.18.0.11 --hostname rmq1 --name rmq_container_name -p 15673:15672 rabbitmq:3-management
Note that with the rabbitmq:3-management image, port 5672 is (well, was when I used it) already exposed, so there is no need to do that. --name sets the container name, and --hostname obviously sets the hostname.
So now, from your host, you can connect to the rmq1 RabbitMQ server.
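For example, from a Linux host you could verify the fixed address directly (reaching container IPs from the host this way generally only works on Linux):
nc -vz 172.18.0.11 5672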
You said that you have never used docker-machine before, so I assume you are using Docker Beta for Mac (you should see the Docker icon in the menu bar at the top).
Your docker run command for rabbit is correct. If you now want to connect to rabbit, you have two options:
Wrap your python script in a new container and link it to rabbit:
docker run -it --rm --name migration --link rmq:rabbit -v ~/Documents/myscripts:/app -w /app python:3 python migrate_data.py
Note that we have to link rmq:rabbit, because you named your container rmq but use rabbit in the script.
Or execute your Python script on your host machine and use localhost:5672:
python ~/Documents/myscripts/migrate_data.py amqp://localhost:5672/ ~/Documents/queue/
I'm using Docker version 1.9.1 build a34a1d5 on an Ubuntu 14.04 server host and I have 4 containers: redis (based on alpine linux 3.2), mongodb (based on alpine linux 3.2), postgres (based on ubuntu 14.04) and the one that will run the application that connects to these other containers (based on alpine linux 3.2). All of the db containers expose their corresponding ports in the Dockerfile.
I modified the database containers so their services don't bind to the localhost IP but to all addresses. This way I would be able to connect to all of them from the app container.
For the sake of testing, I first ran the database containers and then the app one with a command like the following:
docker run --rm --name app_container --link mongodb_container --link redis_container --link postgres_container -t localhost:5000/app_image
I enter the terminal of the app container and verify that its /etc/hosts file contains the IPs and names of the other containers. Then I am able to ping all the db containers. But I cannot connect to the ports of any of the db containers.
A simple telnet mongodb_container 27017 just sits and waits forever, and the same happens if I try to connect to the other db containers. If I run the application, it also complains that it cannot connect to the specified db services.
Important note: I am able to telnet the corresponding ports of all the db containers from the host.
What might be happening?
EDIT: I'll include the run commands for the db containers:
docker run --rm --name mongodb_container -t localhost:5000/mongodb_image
docker run --rm --name redis_container -t localhost:5000/redis_image
docker run --rm --name postgres_container -t localhost:5000/postgres_image
Well, the problem with telnet seems to be related to the telnet client on Alpine Linux, since the following two commands showed me that the ports on the containers were open:
nmap -p27017 172.17.0.3
nc -vz 172.17.0.3 27017
Because I was focused mainly on the telnet command I issued, I believed the problem was related to the ports being closed or something, and I overlooked the configuration file the app was using to connect to the services (it had the wrong filename). My bad.
All works fine now.