I have multiple Docker containers on a single machine. Each container runs a process and a web server that provides an API for the process.
My question is: how can I access each API from my browser when the default port is 80? To access the web server inside a single Docker container I do the following:
sudo docker run -p 80:80 -t -i <yourname>/<imagename>
This way I can do the following from my computer's terminal:
curl http://hostIP:80/foobar
But how to handle this with multiple containers and multiple web servers?
You can either expose multiple ports, e.g.
docker run -p 8080:80 -t -i <yourname>/<imagename>
docker run -p 8081:80 -t -i <yourname1>/<imagename1>
or put a proxy (nginx, Apache, Varnish, etc.) in front of your API containers.
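With the port-mapping approach, each API is then reachable from the host on its own port, e.g.:
curl http://hostIP:8080/foobar
curl http://hostIP:8081/foobar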
Update:
The easiest way to set up a proxy would be to link it to the API containers, e.g. with an Apache config like
RewriteRule ^api1/(.*)$ http://api1/$1 [proxy]
RewriteRule ^api2/(.*)$ http://api2/$1 [proxy]
you may run your containers like this:
docker run --name api1 <yourname>/<imagename>
docker run --name api2 <yourname1>/<imagename1>
docker run --link api1:api1 --link api2:api2 -p 80:80 <my_proxy_container>
This might be somewhat cumbersome though if you need to restart the API containers, as the proxy container would have to be restarted as well (links are still fairly static in Docker). If this becomes a problem, you might look at approaches like fig or an auto-updated proxy configuration: http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/ . The latter link also shows proxying with nginx.
Update II:
In more modern versions of Docker you can use a user-defined network instead of the links shown above to overcome some of the inconveniences of the deprecated link mechanism.
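A minimal sketch of the same setup with a user-defined network (the network name apinet is just an example):
docker network create apinet
docker run -d --name api1 --network apinet <yourname>/<imagename>
docker run -d --name api2 --network apinet <yourname1>/<imagename1>
docker run -d --name proxy --network apinet -p 80:80 <my_proxy_container>
On apinet the proxy can reach the APIs by container name (e.g. http://api1/ and http://api2/), and the API containers can be restarted independently without re-creating any links.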
Only one process can be bound to a given host port at a time, so running multiple containers means each will be exposed on a different port number. Docker can assign these ports automatically for you with the "-P" flag.
sudo docker run -P -t -i <yourname>/<imagename>
You can use the "docker port" and "docker inspect" commands to see the actual port number allocated to each container.
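For example (the container name is a placeholder for whatever you started with -P):
docker port <container_name> 80
docker inspect --format '{{json .NetworkSettings.Ports}}' <container_name>
The first command prints the host address and port mapped to the container's port 80, e.g. 0.0.0.0:32768.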
Related
My setup is based on running two Docker containers, one with an API and the other with a DB.
With this setup, both containers expose a port to web services.
But what I want is for the DB container (toolname-db) to be reachable only from the API container (toolname-api), so that the DB is not exposed to web services directly.
How do I have to alter my setup in order to make sure what I want is possible?
Currently I use the following commands:
sudo docker build -t toolname .
sudo docker run -d -p 3333:3333 --name=toolname-db mdillon/postgis
sudo docker run -it -p 4444:4444 --name=toolname-api --network=host -d toolname
A container will only be reachable from outside Docker space if it has published ports. So you need to remove the -p option from your database container.
For the two containers to be able to talk to each other they need to be on the same network. Docker's default here is for compatibility with what's now a very old networking setup, so you need to manually create a network, though it doesn't need any special setting.
Finally, you don't need --net host. That disables all of Docker's networking setup; port mappings with -p are disabled, and you can't communicate with containers that don't themselves have ports published. (I usually see it recommended as a hack to work around hard-coded localhost connection strings.)
That leaves your final setup as:
sudo docker build -t toolname .
sudo docker network create tool
sudo docker run -d --net=tool --name=toolname-db mdillon/postgis
sudo docker run -d --net=tool -p 4444:4444 --name=toolname-api toolname
As @BentCoder suggests in a comment, it's very common to use Docker Compose to run multiple containers together. If you do, it creates a network for you, which can save you a step.
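A minimal sketch of such a docker-compose.yml for this setup (the service names and the compose invocation are assumptions, not part of the answer above):
cat > docker-compose.yml <<'EOF'
version: "3.8"
services:
  toolname-db:
    image: mdillon/postgis
  toolname-api:
    build: .
    ports:
      - "4444:4444"
EOF
docker compose up -d
Compose puts both services on a default network, so toolname-api can reach the database at the hostname toolname-db, while only port 4444 is published to the host.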
I'm running 2 Docker containers on a host. In my first container, I started it this way:
docker run -d --name site-a -p 127.0.0.1:3000:80 nginx
This maps the container's port 80 to the host machine's port 3000. The container also has the name site-a, which I want to use in another container.
Then in my other container, which is the main reverse proxy container, I configured nginx to have an upstream pointing to the first container (site-a):
upstream my-site-a {
server site-a:80;
}
I then run the reverse proxy container this way:
docker run -d --name reverse-proxy -p 80:80 nginx
So that my reverse-proxy container will serve from site-a container.
However, there are 2 problems here:
The upstream in my nginx configuration doesn't work when I use server site-a:80;. How can I get nginx to resolve the alias "site-a" to the IP of the site-a container?
When starting the site-a container, I followed an answer here and bound it to the host machine's port 3000 with -p 127.0.0.1:3000:80. Is this necessary?
In order for your containers to be mutually reachable via their name, you need to add them to the same network.
First create a network with this command:
docker network create my-network
Then, when running your containers, add the --network flag like this:
docker run -d --name site-a -p 127.0.0.1:3000:80 --network my-network nginx
Of course you need to do the same thing to both containers.
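For example, the reverse proxy container from the question could be started on the same network (a sketch reusing the names above):
docker run -d --name reverse-proxy -p 80:80 --network my-network nginx
Once both containers are on my-network, Docker's embedded DNS resolves site-a, so the upstream server site-a:80; works.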
As per your second question, there's no need to map the port on your host with the -p flag as long as you don't want to reach site-a's container directly from your host.
Of course you still need to use the -p flag on the reverse proxy container in order to make it reachable.
If you combine multiple containers into a more complex infrastructure, it's time to move to more advanced tooling. Basically you have the choice between docker-compose and docker stack. Kubernetes could also be an option, but it's more complicated.
These technologies provide solutions for container discovery and internal name resolution.
I suggest using docker stack. Unlike Compose, it has no additional requirements besides Docker.
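A minimal sketch of the docker stack route, assuming a compose file that references pre-built images (docker stack deploy does not build images):
docker swarm init
docker stack deploy -c docker-compose.yml mystack
docker stack services mystack
The first command turns the host into a single-node swarm; the deploy then creates an overlay network for the stack, so services can reach each other by service name.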
I have pulled the hyperledger/composer-rest-server Docker image. If I want to run this image, which ports should I use, as in the command below?
docker run --name composer-rest-server --publish XXXX:YYYY --detach hyperledger/composer-rest-server
What should I replace XXXX and YYYY with?
I run the rest server in a container using a command as follows:
docker run -d \
-e COMPOSER_CARD="admin#test-net" \
-e COMPOSER_NAMESPACES="never" \
-v ~/.composer:/home/composer/.composer \
--name rest -p 3000:3000 \
hyperledger/composer-rest-server
For the published port, the first element is the port that will be used on the Docker host, and the second is the port it is forwarded to inside the container. (The port inside the container will always be 3000 by default and is more complex to change.)
I'm passing 2 environment variables into the Container which the REST server will recognise - Namespaces just keeps the endpoints simple, but the COMPOSER_CARD is essential for the REST server to start properly.
I'm also sharing a volume between the Docker Host and the Container which is where the Cards are stored, so that the REST server can find the COMPOSER_CARD referred to in the environment variable.
Warning: If you are trying to test the REST server with the Development Fabric you need to understand the IP network and addressing of the Docker containers - by default the Composer Business Network Cards will be built using localhost as the address of the Fabric servers, but you can't use localhost from inside the REST server container, as that resolves to the container itself and will fail to find the Fabric.
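One way to check which addresses the Fabric containers actually have is to inspect the Docker network they share (the network name composer_default here is an assumption and may differ in your setup):
docker network ls
docker network inspect composer_default --format '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{"\n"}}{{end}}'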
There is a tutorial in the Composer Docs that is focused on Multi-User authentication, but it does also cover the networking aspects of using the REST Server Container. There is general information about the REST server here.
I have Docker Engine installed on Debian Jessie, where I am running a container with nginx in it. My "run" command looks like this:
docker run -p 1234:80 -d -v /var/www/:/usr/share/nginx/html nginx:1.9
It works fine; the problem is that the content of this container is now accessible via http://{server_ip}:1234. I want to run multiple containers (domains) on this server, so I want to set up reverse proxies for them.
How can I make sure the container is only accessible via the reverse proxy and not directly via IP:port? E.g.:
http://{server_ip}:1234 # not found, connection refused, etc...
http://localhost:1234 # works fine
//EDIT: Just to be clear - I am not asking how to set up a reverse proxy, but how to run a Docker container so that it is accessible only from localhost.
Specify the required host IP in the port mapping
docker run -p 127.0.0.1:1234:80 -d -v /var/www/:/usr/share/nginx/html nginx:1.9
If you are using a reverse proxy, you might want to put all the containers on a user-defined network along with the reverse proxy; then everything runs in a container and is reachable on the internal network.
docker network create web
docker run -d --net=web -v /var/www/:/usr/share/nginx/html nginx:1.9
docker run -d -p 80:80 --net=web haproxy
Well, the solution is pretty simple: you just have to specify 127.0.0.1 when mapping the port:
docker run -p 127.0.0.1:1234:80 -d -v /var/www/:/usr/share/nginx/html nginx:1.9
This question already has answers here:
Assign static IP to Docker container
I have a docker-compose setup with a bunch of backend services (postgres, redis, ...), a few apps (rails, node, ...) and an nginx on top of it.
The apps are connected to the databases using Docker env variables (e.g. DOCKERCOMPOSEDEMO_POSTGRES_1_PORT_5432_TCP_ADDR), and nginx is connected to the apps using the Docker-generated /etc/hosts (e.g. upstream nodeapp1-upstream { server dockercomposedemo_node_app1_1:3000; }).
The problem is that each time I restart a service it gets a new IP address, so everything on top of it can no longer connect to it: restarting a Rails app requires restarting nginx, and restarting a database requires restarting the apps and nginx.
Am I doing something wrong, or is this the intended behaviour? Always restarting all that stuff doesn't look like a good solution.
Thank you
It is the intended behaviour. There are many ways to avoid restarting dependent services; I use the following approach.
I run most of my dockerized services tied to their own static IPs:
I create IP aliases for all services on the Docker host.
Then I run each service, redirecting ports from its IP into the container, so each service has its own static IP which can be used by external users and by other containers.
Sample:
docker run --name dns --restart=always -d -p 172.16.177.20:53:53/udp dns
docker run --name registry --restart=always -d -p 172.16.177.12:80:5000 registry
docker run --name cache --restart=always -d -p 172.16.177.13:80:3142 -v /data/cache:/var/cache/apt-cacher-ng cache
docker run --name mirror --restart=always -d -p 172.16.177.19:80:80 -v /data/mirror:/usr/share/nginx/html:ro mirror
...
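The IP aliases themselves can be created on the Docker host, for example like this (a sketch assuming the interface is eth0 and the 172.16.177.0/24 addresses from the sample; the aliases are not persistent across reboots unless added to the host's network configuration):
sudo ip addr add 172.16.177.20/24 dev eth0
sudo ip addr add 172.16.177.12/24 dev eth0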