OK, I am pretty new to the Docker world, so this might be a very basic question.
I have a container running in Docker, which is running RabbitMQ. Let's say the name of this container is "Rabbit-container".
The RabbitMQ container was started with this command:
docker run -d -t -i --name rmq -p 5672:5672 rabbitmq:3-management
The Python script is invoked with two arguments:
python ~/Documents/myscripts/migrate_data.py amqp://rabbit:5672/ ~/Documents/queue/
Now, I am running a Python script from my host machine (Mac OS X), which creates some messages. I want to send these messages to my "Rabbit-container", so I need to connect to that container from my host machine.
Is this even possible? If yes, how?
Please let me know if more details are needed.
So, I solved it by simply mapping the RMQ listening port to host OS:
docker run -d -t -i --name rmq -p 15672:15672 -p 5672:5672 rabbitmq:3-management
I previously had only -p 15672:15672 in my command, which maps the admin UI from the Docker container to the host OS. I added -p 5672:5672, which maps the RabbitMQ listening port from the Docker container to the host OS.
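With that mapping in place, a script on the host can talk to the broker through localhost. Here is a minimal sketch using the pika client (an assumption on my part, since the question doesn't say which library the script uses; it also assumes the broker still accepts the default guest/guest credentials, which the official image permits from the host):

import pika  # pip install pika

# Connect to the container through the port published on the host.
connection = pika.BlockingConnection(
    pika.ConnectionParameters(host="localhost", port=5672)
)
channel = connection.channel()
channel.queue_declare(queue="migration")  # hypothetical queue name
channel.basic_publish(exchange="", routing_key="migration", body=b"hello")
connection.close()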
If you're running this container on your local OS X system, first find your default docker-machine IP address by running:
docker-machine ip default
Then change your Python script to point to that address and the mapped port: <your_docker_machine_ip>:5672.
This is necessary because Docker runs inside a virtual machine on OS X and Windows, so when you map a port to the host, you're actually mapping it to the virtual machine.
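For example, if docker-machine ip default prints 192.168.99.100 (the usual default; substitute whatever your machine actually reports), the script invocation from the question becomes:

python ~/Documents/myscripts/migrate_data.py amqp://192.168.99.100:5672/ ~/Documents/queue/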
You'd need to run the container with port 5672 exposed, perhaps 15672 as well if you want the web UI, 5671 if you use SSL, or any other port for which you add a TCP listener in RabbitMQ.
It would also be easier if you had a specific IP and hostname for the RabbitMQ container. To do this, create your own Docker network:
docker network create --subnet=172.18.0.0/16 mynet123
After that, start the container like so:
docker run -d --net mynet123 --ip 172.18.0.11 --hostname rmq1 --name rmq_container_name -p 15673:15672 rabbitmq:3-management
Note that with the rabbitmq:3-management image, port 5672 is (well, was when I used it) already exposed, so there is no need to do that explicitly. --name sets the container name, and --hostname the host name.
So now, from your host, you can connect to the rmq1 RabbitMQ server.
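A quick way to confirm that name resolution works on that network is to resolve the container name from another container (a sketch assuming the python:3 image; Docker's embedded DNS resolves container names on user-defined networks, and depending on your Docker version the rmq1 hostname may resolve as well):

docker run --rm --net mynet123 python:3 python -c "import socket; print(socket.gethostbyname('rmq_container_name'))"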
You said that you have never used docker-machine before, so I assume you are using Docker (Beta) for Mac (you should see the Docker icon in the menu bar at the top).
Your docker run command for rabbit is correct. If you now want to connect to rabbit, you have two options:
1. Wrap your Python script in a new container and link it to the rmq container:
docker run -it --rm --name migration --link rmq:rabbit -v ~/Documents/myscripts:/app -w /app python:3 python migrate_data.py
Note that we have to link rmq:rabbit, because you named your container rmq but use rabbit in the script's AMQP URL.
2. Run your Python script on your host machine and use localhost:5672:
python ~/Documents/myscripts/migrate_data.py amqp://localhost:5672/ ~/Documents/queue/
Related
I'm having a rather awful issue with running a Redis container. For some reason, even though I have attempted to bind the port and what have you, it won't expose the Redis port it claims to expose (6379). Obviously, I've checked this by scanning the open ports on the IP assigned to the Redis container (172.17.0.3) and it returned no open ports whatsoever. How might I resolve this issue?
Docker Redis Page (for reference to where I pulled the image from): https://hub.docker.com/_/redis/
The command variations I have tried:
docker run --name ausbot-ranksync-redis -p 127.0.0.1:6379:6379 -d redis
docker run --name ausbot-ranksync-redis -p 6379:6379 -d redis
docker run --name ausbot-ranksync-redis -d redis
docker run --name ausbot-ranksync-redis --expose=6379 -d redis
https://gyazo.com/991eb379f66eaa434ad44c5d92721b55 (The last container I scan is a MariaDB container)
The command variations I have tried:
docker run --name ausbot-ranksync-redis -p 127.0.0.1:6379:6379 -d redis
docker run --name ausbot-ranksync-redis -p 6379:6379 -d redis
Those two should work and make the port available on your host.
Obviously, I've checked this by scanning the open ports on the IP assigned to the Redis container (172.17.0.3) and it returned no open ports whatsoever. How might I resolve this issue?
You shouldn't be checking the ports directly on the container from outside of docker. If you want to access the container from the host or outside, you publish the port (as done above), and then access the port on the host IP (or 127.0.0.1 on the host in your first example).
For docker networking, you need to run your application listening on all interfaces (not localhost/loopback). The official redis image already does this, and you can verify with:
docker run --rm --net container:ausbot-ranksync-redis nicolaka/netshoot netstat -lnt
or
docker run --rm --net container:ausbot-ranksync-redis nicolaka/netshoot ss -lnt
To access the container from outside of docker, you need to publish the port (docker run -p ... or ports in the docker-compose.yml). Then you connect to the host IP and the published port.
To access the container from inside of docker, you create a shared network, run your containers there, and access using docker's DNS and the container port (publish and expose are not needed for this):
docker network create app
docker run --name ausbot-ranksync-redis --net app -d redis
docker run --name redis-cli --rm --net app redis redis-cli -h ausbot-ranksync-redis ping
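Once the port is published (the -p 6379:6379 variant above), a client on the host connects through the host interface. Here is a minimal sketch with the redis-py client, which is an assumption here since the question doesn't name a client library:

import redis  # pip install redis

# Connect to the port published on the host, not the container IP.
r = redis.Redis(host="127.0.0.1", port=6379)
print(r.ping())  # True if the container is reachable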
I've installed Docker in a VM which is publicly available on the internet, and I've installed MongoDB in a Docker container in the VM. MongoDB is listening on port 27017.
I installed it with the following command:
docker run -p 27017:27017 --name da-mongo -v ~/mongo-data:/data/db -d mongo
The container's port is redirected to the host using the -p flag, but port 27017 is then exposed on the internet, which I don't want.
Is there any way to fix it?
Well, if you want it available only to certain hosts, you need a firewall. But if all you need is access from localhost (your VM), then you don't need to publish the port to the host at all. Run the container without the -p option, then run the following command:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' your_container_id_or_name
It will display an IP: the IP of the container you just ran. (Yes, Docker uses an internal virtual network connecting your containers and your host machine.)
After that, you can connect to it using the IP and port combination, something like:
172.17.0.2:27017
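Putting the two steps together, here is a minimal sketch that looks up the container IP and connects to it (it assumes the container is named da-mongo, pymongo is installed, and the client runs on the Docker host itself, since bridge IPs are only routable from the host on Linux):

import subprocess
from pymongo import MongoClient  # pip install pymongo

# Ask Docker for the container's internal bridge IP.
fmt = "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}"
ip = subprocess.check_output(
    ["docker", "inspect", "-f", fmt, "da-mongo"]
).decode().strip()

# Connect directly to the container; no published port needed.
client = MongoClient(f"mongodb://{ip}:27017/")
print(client.server_info()["version"])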
When you publish the port, you can select which host interface to publish on:
docker run -p 127.0.0.1:27017:27017 --name da-mongo \
-v ~/mongo-data:/data/db -d mongo
That will publish container port 27017 to port 27017 on the host interface 127.0.0.1. Note that you can only select the interface on the host side; inside the container, the process must still bind to 0.0.0.0.
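From the VM itself you then connect via loopback as usual; a minimal pymongo sketch (assuming pymongo is installed):

from pymongo import MongoClient  # pip install pymongo

# Reaches the port published on 127.0.0.1 only; connections from
# the internet to port 27017 are no longer possible.
client = MongoClient("mongodb://127.0.0.1:27017/")
print(client.server_info()["version"])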
I'm using Docker version 1.9.1 build a34a1d5 on an Ubuntu 14.04 server host and I have 4 containers: redis (based on alpine linux 3.2), mongodb (based on alpine linux 3.2), postgres (based on ubuntu 14.04) and the one that will run the application that connects to these other containers (based on alpine linux 3.2). All of the db containers expose their corresponding ports in the Dockerfile.
I modified the database containers so their services bind to all addresses instead of only the localhost IP. This way I should be able to connect to all of them from the app container.
For the sake of testing, I first ran the database containers and then the app one with a command like the following:
docker run --rm --name app_container --link mongodb_container --link redis_container --link postgres_container -t localhost:5000/app_image
I enter the terminal of the app container and verify that its /etc/hosts file contains the IPs and names of the other containers. I can ping all the db containers, but I cannot connect to any of their ports.
A simple telnet mongodb_container 27017 just sits and waits forever, and the same happens with the other db containers. When I run the application, it also complains that it cannot connect to the specified db services.
Important note: I am able to telnet the corresponding ports of all the db containers from the host.
What might be happening?
EDIT: I'll include the run commands for the db containers:
docker run --rm --name mongodb_container -t localhost:5000/mongodb_image
docker run --rm --name redis_container -t localhost:5000/redis_image
docker run --rm --name postgres_container -t localhost:5000/postgres_image
Well, the telnet problem seems to be related to the telnet client on Alpine Linux, since the following two commands showed that the ports on the containers were open:
nmap -p27017 172.17.0.3
nc -vz 172.17.0.3 27017
Because I was focused on the telnet output, I assumed the problem was that the ports were closed, and I overlooked the configuration file the app used to connect to the services (it pointed at the wrong filename). My bad.
All works fine now.
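As an aside, the same reachability check can be done without telnet or nc; here is a small Python sketch that mimics nc -vz:

import socket

# TCP reachability check, equivalent to `nc -vz 172.17.0.3 27017`;
# it avoids the quirks of the BusyBox telnet client on Alpine.
def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(port_open("172.17.0.3", 27017))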
Consider:
docker run -p 5000:5000 -v /host/:/host appimage
which forwards host port 5000 to container port 5000. The same works with multiple mappings:
docker run -p 5000:5000 -p 5001:5001 -v /host/:/host appimage
What I want to know is:
docker run -p allports:allports
Is there any option that forwards all ports of a container? In my case I am running a Flask app, and for testing purposes I want to run multiple Flask instances, each on a different port. Automatic multi-port forwarding would help.
You can expose a range of ports using the -p option, for example:
docker run -p 2000-5000:2000-5000 -v /host/:/host appimage
See the docker run reference documentation for more details.
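If you'd rather start the Flask instances individually, a loop does the same job; a sketch assuming a hypothetical appimage whose server listens on container port 5000:

import subprocess

# Start three instances, each publishing a distinct host port to the
# container's port 5000. Image and container names are illustrative.
for port in (5000, 5001, 5002):
    subprocess.run(
        ["docker", "run", "-d", "-p", f"{port}:5000",
         "--name", f"appimage_{port}", "appimage"],
        check=True,
    )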
You might also get a working setup by using docker run --net host ..., in which case the host's network is directly exposed to the container and all port bindings are "public". I haven't tested this with multiple containers simultaneously, but it might work just fine.
I have found a similar thread but failed to get it to work. So, the use case is:
I start a container on my Linux host:
docker run -i -t --privileged -p 8080:2375 mattgruter/doubledocker
Inside that container, I want to start another one running the GAE SDK devserver.
I then need to access the running app from the host system's browser.
When I start a container inside the container with
docker run -i -t -p 2375:8080 image/name
I get an error saying that port 2375 is in use. I can start the app and curl 0.0.0.0:8080 from inside both containers (when using another mapping such as 8080:8080, for example), but I cannot preview the app from the host system, since localhost:8080 maps to port 2375 in the first container, and that port cannot be reused when launching the second container.
I'm able to do that using the jpetazzo/dind image. As an example, here is a test that worked for me:
From my host machine I run the container with docker installed:
docker run --privileged -t -i --rm -e LOG=file -p 18080:8080 jpetazzo/dind
Then inside the container I pulled the nginx image and ran it with:
docker run -d -p 8080:80 nginx
And from the host environment I can browse the nginx welcome page at http://localhost:18080.
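You can verify the chained mapping (host 18080 -> outer container 8080 -> inner container 80) from the host with a quick request; a sketch using only the standard library:

import urllib.request

# The outer -p 18080:8080 plus the inner -p 8080:80 should expose
# nginx's welcome page on the host at port 18080.
with urllib.request.urlopen("http://localhost:18080") as resp:
    print(resp.status)  # expect 200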
With the image you were using (mattgruter/doubledocker) I had some problems running it (something related to log attach).