How to route HTTP access to multiple Docker containers - docker

How can I route HTTP requests for each domain to its specific Docker container? So,
any request for:
web1.mydomain.com goes to the Docker container with ID asda912kas
web2.mydomain.com goes to the Docker container with ID 8uada0a9sd
etc.
Every Docker container runs Apache, MySQL, and WordPress or another web app. web1.mydomain.com and web2.mydomain.com use the same public IP address (like an Apache vhost does).

If your web containers run on the same machine, you can use jwilder/nginx-proxy (https://github.com/jwilder/nginx-proxy).
You run it with port 80 mapped:
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock jwilder/nginx-proxy
Then run your web containers with the VIRTUAL_HOST environment variable set:
docker run -d -e VIRTUAL_HOST=web1.mydomain.com image1
docker run -d -e VIRTUAL_HOST=web2.mydomain.com image2
This works well for small deployments.
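A quick way to sanity-check the routing, assuming DNS (or a local hosts entry) points both names at the Docker host, is to fake the Host header from the host machine:
curl -H "Host: web1.mydomain.com" http://localhost/
curl -H "Host: web2.mydomain.com" http://localhost/
Each request should be answered by the corresponding web container.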

Related

Manage another container with an nginx container

How do I use nginx in a container to access another container via a config file?
I am a beginner with Docker.
I am trying to learn how to use nginx to manage my applications running in Docker containers.
I will use pgadmin as the example application in a container.
Create and start the containers. I try to use the --link parameter to connect the two containers.
sudo docker create -p 80:80 -p 443:443 --name Nginx nginx
sudo docker create -e PGADMIN_DEFAULT_EMAIL=houzeyu2683#gmail.com -e PGADMIN_DEFAULT_PASSWORD=20121006 -p 5001:80 --link Nginx:PSQLA --name PSQLA dpage/pgadmin4
sudo docker start Nginx
sudo docker start PSQLA
Enter the Nginx container's bash shell and install the nano editor.
sudo docker exec -it Nginx bash
apt update
apt install nano
Create and set up the nginx config file admin.conf.
nano /etc/nginx/conf.d/admin.conf
The content of admin.conf is the following:
server {
    listen 80;
    server_name admin.my-domain-name;
    location / {
        proxy_pass http://PSQLA:80;
    }
}
I get the error below.
2020/10/17 01:57:16 [emerg] 333#333: host not found in upstream "PSQLA" in /etc/nginx/conf.d/admin.conf:5
nginx: [emerg] host not found in upstream "PSQLA" in /etc/nginx/conf.d/admin.conf:5
Try the following commands (in the same order) to launch the containers:
sudo docker create -e PGADMIN_DEFAULT_EMAIL=houzeyu2683#gmail.com -e PGADMIN_DEFAULT_PASSWORD=20121006 -p 5001:80 --name PSQLA dpage/pgadmin4
sudo docker create -p 80:80 -p 443:443 --link PSQLA:PSQLA --name Nginx nginx
sudo docker start PSQLA
sudo docker start Nginx
Now edit the Nginx configuration and you should not encounter the error anymore.
Tl;dr
As mentioned in the docker documentation:
When you set up a link, you create a conduit between a source container and a recipient container. The recipient can then access select data about the source.
In order to access PSQLA from the Nginx container, we need to link the Nginx container to the PSQLA container, and not the other way around.
Now the question is: what difference does that even make?
For this we need to understand how the --link option works in Docker.
Docker adds a host entry for the source container to the recipient container's /etc/hosts file.
We can verify this in the /etc/hosts file inside the Nginx container. It contains a new entry something like this (the ID and IP might be different in your case):
172.17.0.4 PSQLA 1117cf1e8a28
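As a quick check (a sketch; getent may not be present in every image, but cat /etc/hosts works regardless):
docker exec Nginx cat /etc/hosts
docker exec Nginx getent hosts PSQLA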
This entry lets the Nginx container reach the PSQLA container by its container name.
Please refer to this for a better understanding:
https://docs.docker.com/network/links/#updating-the-etchosts-file
Important Note
As mentioned in the Docker documentation:
The --link flag is a legacy feature of Docker. It may eventually be removed. Unless you absolutely need to continue using it, we recommend that you use user-defined networks to facilitate communication between two containers instead of using --link.
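Following that recommendation, a rough equivalent with a user-defined network (the network name pgnet is just an example) would be:
sudo docker network create pgnet
sudo docker create -e PGADMIN_DEFAULT_EMAIL=houzeyu2683#gmail.com -e PGADMIN_DEFAULT_PASSWORD=20121006 -p 5001:80 --network pgnet --name PSQLA dpage/pgadmin4
sudo docker create -p 80:80 -p 443:443 --network pgnet --name Nginx nginx
sudo docker start PSQLA
sudo docker start Nginx
On a user-defined network, containers resolve each other by name through Docker's embedded DNS, so the proxy_pass http://PSQLA:80; line works without any --link.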

Expected exposed port on Redis container isn't reachable, even after binding the port

I'm having a rather awful issue with running a Redis container. For some reason, even though I have attempted to bind the port and what have you, it won't expose the Redis port it claims to expose (6379). Obviously, I've checked this by scanning the open ports on the IP assigned to the Redis container (172.17.0.3) and it returned no open ports whatsoever. How might I resolve this issue?
Docker Redis Page (for reference to where I pulled the image from): https://hub.docker.com/_/redis/
The command variations I have tried:
docker run --name ausbot-ranksync-redis -p 127.0.0.1:6379:6379 -d redis
docker run --name ausbot-ranksync-redis -p 6379:6379 -d redis
docker run --name ausbot-ranksync-redis -d redis
docker run --name ausbot-ranksync-redis --expose=6379 -d redis
https://gyazo.com/991eb379f66eaa434ad44c5d92721b55 (The last container I scan is a MariaDB container)
The command variations I have tried:
docker run --name ausbot-ranksync-redis -p 127.0.0.1:6379:6379 -d redis
docker run --name ausbot-ranksync-redis -p 6379:6379 -d redis
Those two should work and make the port available on your host.
Obviously, I've checked this by scanning the open ports on the IP assigned to the Redis container (172.17.0.3) and it returned no open ports whatsoever. How might I resolve this issue?
You shouldn't be checking the ports directly on the container from outside of docker. If you want to access the container from the host or outside, you publish the port (as done above), and then access the port on the host IP (or 127.0.0.1 on the host in your first example).
For docker networking, you need to run your application listening on all interfaces (not localhost/loopback). The official redis image already does this, and you can verify with:
docker run --rm --net container:ausbot-ranksync-redis nicolaka/netshoot netstat -lnt
or
docker run --rm --net container:ausbot-ranksync-redis nicolaka/netshoot ss -lnt
To access the container from outside of docker, you need to publish the port (docker run -p ... or ports in the docker-compose.yml). Then you connect to the host IP and the published port.
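For example, a minimal sketch of the outside-of-docker case (this assumes redis-cli is installed on the host; any Redis client would do):
docker run -d --name ausbot-ranksync-redis -p 6379:6379 redis
redis-cli -h 127.0.0.1 -p 6379 ping
The ping should answer PONG, because the published port is reachable on the host's address, not on the container's 172.17.0.3 address.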
To access the container from inside of docker, you create a shared network, run your containers there, and access using docker's DNS and the container port (publish and expose are not needed for this):
docker network create app
docker run --name ausbot-ranksync-redis --net app -d redis
docker run --name redis-cli --rm --net app redis redis-cli -h ausbot-ranksync-redis ping

How to connect to server on Docker from host machine?

OK, I am pretty new to the Docker world. So this might be a very basic question.
I have a container running in Docker, which is running RabbitMQ. Let's say the name of this container is "Rabbit-container".
RabbitMQ container was started with this command:
docker run -d -t -i --name rmq -p 5672:5672 rabbitmq:3-management
Python script command with 2 args:
python ~/Documents/myscripts/migrate_data.py amqp://rabbit:5672/ ~/Documents/queue/
Now, I am running a Python script from my host machine, which is creating some messages. I want to send these messages to my "Rabbit-container". Hence I want to connect to this container from my host machine (Mac OSX).
Is this even possible? If yes, how?
Please let me know if more details are needed.
So, I solved it by simply mapping the RMQ listening port to host OS:
docker run -d -t -i --name rmq -p 15672:15672 -p 5672:5672 rabbitmq:3-management
I previously had only -p 15672:15672 in my command, which maps the admin UI from the Docker container to my host OS. I added -p 5672:5672, which maps the RabbitMQ listening port from the Docker container to the host OS.
If you're running this container in your local OSX system then you should find your default docker-machine ip address by running:
docker-machine ip default
Then you can change your python script to point to that address and mapped port on <your_docker_machine_ip>:5672.
That happens because docker runs in a virtualization engine on OSX and Windows, so when you map a port to the host, you're actually mapping it to the virtual machine.
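For example (a sketch, assuming the docker-machine setup described above):
python ~/Documents/myscripts/migrate_data.py amqp://$(docker-machine ip default):5672/ ~/Documents/queue/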
You'd need to run the container with port 5672 exposed, perhaps 15672 as well if you want the web UI, and 5671 if you use SSL, or any other port for which you add a TCP listener in RabbitMQ.
It would also be easier if you had a specific IP and a hostname for the RabbitMQ container. To do this, you'd need to create your own Docker network:
docker network create --subnet=172.18.0.0/16 mynet123
After that, start the container like so:
docker run -d --net mynet123 --ip 172.18.0.11 --hostname rmq1 --name rmq_container_name -p 15673:15672 rabbitmq:3-management
Note that with the rabbitmq:3-management image, port 5672 is (well, was when I used it) already exposed, so there is no need to do that. --name is for the container name, and --hostname obviously for the hostname.
So now, from your host you can connect to rmq1 rabbitmq server.
You said that you have never used docker-machine before, so I assume you are using the Docker Beta for Mac (you should see the Docker icon in the menu bar at the top).
Your docker run command for rabbit is correct. If you now want to connect to rabbit, you have two options:
Wrap your python script in a new container and link it to rabbit:
docker run -it --rm --name migration --link rmq:rabbit -v ~/Documents/myscripts:/app -w /app python:3 python migrate_data.py
Note that we have to link rmq:rabbit, because you named your container rmq but use rabbit in the script.
Execute your python script on your host machine and use localhost:5672
python ~/Documents/myscripts/migrate_data.py amqp://localhost:5672/ ~/Documents/queue/
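A quick sanity check from the host that both published ports are reachable (this sketch assumes curl and nc are available on the host, and the default guest/guest credentials):
nc -zv localhost 5672
curl -u guest:guest http://localhost:15672/api/overview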

Docker in Docker: Port Mapping

I have found a similar thread, but failed to get it to work. So, the use case is:
I start a container on my Linux host
docker run -i -t --privileged -p 8080:2375 mattgruter/doubledocker
Inside that container, I want to start another one running the GAE SDK devserver.
I then need to access the running app from the host system's browser.
When I start a container inside the container as
docker run -i -t -p 2375:8080 image/name
I get an error saying that port 2375 is in use. I start the app, and can curl 0.0.0.0:8080 from inside both containers (when using another mapping, 8080:8080 for example), but I cannot preview the app from the host system, since localhost:8080 on the host maps to port 2375 in the first container, and that port cannot be used when launching the second container.
I'm able to do that using the jpetazzo/dind image. The test I have done, and which worked (as an example):
From my host machine I run the container with docker installed:
docker run --privileged -t -i --rm -e LOG=file -p 18080:8080 jpetazzo/dind
Then inside the container I pulled the nginx image and ran it with
docker run -d -p 8080:80 nginx
And from the host environment I can browse the nginx welcome page with http://localhost:18080
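The point is that the two mappings chain together; a rough sketch of the path (using the names and ports from the example above):
# host:18080 -> outer container:8080   (docker run ... -p 18080:8080 jpetazzo/dind)
# outer:8080 -> inner container:80     (docker run -d -p 8080:80 nginx, run inside the outer one)
curl http://localhost:18080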
With the image you were using (mattgruter/doubledocker) I had some problems running it (something related to log attaching).

Multiple docker containers as web server on a single IP

I have multiple Docker containers on a single machine. Each container runs a process and a web server that provides an API for the process.
My question is, how can I access these APIs from my browser when the default port is 80? To be able to access the web server inside a Docker container I do the following:
sudo docker run -p 80:80 -t -i <yourname>/<imagename>
This way I can do the following from my computer's terminal:
curl http://hostIP:80/foobar
But how to handle this with multiple containers and multiple web servers?
You can either expose multiple ports, e.g.
docker run -p 8080:80 -t -i <yourname>/<imagename>
docker run -p 8081:80 -t -i <yourname1>/<imagename1>
or put a proxy (nginx, apache, varnish, etc.) in front of your API containers.
Update:
The easiest way to set up a proxy would be to link it to the API containers, e.g. with an Apache config like
RewriteRule ^api1/(.*)$ http://api1/$1 [proxy]
RewriteRule ^api2/(.*)$ http://api2/$1 [proxy]
you may run your containers like this:
docker run --name api1 <yourname>/<imagename>
docker run --name api2 <yourname1>/<imagename1>
docker run --link api1:api1 --link api2:api2 -p 80:80 <my_proxy_container>
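Note that for the [proxy] rewrite flag to work, the proxy container's Apache needs mod_rewrite and mod_proxy/mod_proxy_http enabled; on a Debian-based Apache image that is roughly (a sketch, exact steps depend on the image):
a2enmod rewrite proxy proxy_http
# then restart/reload Apache, e.g. apache2ctl -k graceful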
This might be somewhat cumbersome though if you need to restart the API containers, as the proxy container would have to be restarted too (links are fairly static in Docker for now). If this becomes a problem, you might look at approaches like fig or an auto-updated proxy configuration: http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/ . The latter link also shows proxying with nginx.
Update II:
In more modern versions of Docker it is possible to use a user-defined network instead of the links shown above, to overcome some of the inconveniences of the deprecated link mechanism.
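A minimal sketch of that variant (the network name apinet is just an example):
docker network create apinet
docker run -d --name api1 --net apinet <yourname>/<imagename>
docker run -d --name api2 --net apinet <yourname1>/<imagename1>
docker run -d --net apinet -p 80:80 <my_proxy_container>
On the shared network the proxy resolves api1 and api2 through Docker's embedded DNS, so no --link flags are needed and the API containers can be restarted independently.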
Only a single process can be bound to a given host port at a time, so running multiple containers means each will be exposed on a different port number. Docker can do this automatically for you by using the -P flag.
sudo docker run -P -t -i <yourname>/<imagename>
You can use the "docker port" and "docker inspect" commands to see the actual port number allocated to each container.
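For example (the container name api1 here is illustrative):
docker run -P -d --name api1 <yourname>/<imagename>
docker port api1
docker port api1 80
docker inspect --format '{{(index (index .NetworkSettings.Ports "80/tcp") 0).HostPort}}' api1
The first two print the host port(s) Docker picked (e.g. 80/tcp -> 0.0.0.0:32768); the inspect form extracts just the host port mapped to container port 80.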
