I have Docker Engine installed on Debian Jessie and I am running an nginx container on it. My "run" command looks like this:
docker run -p 1234:80 -d -v /var/www/:/usr/share/nginx/html nginx:1.9
It works fine; the problem is that the content of this container is now accessible via http://{server_ip}:1234. I want to run multiple containers (domains) on this server, so I want to set up reverse proxies for them.
How can I make sure that the container will only be accessible via the reverse proxy and not directly via IP:port? E.g.:
http://{server_ip}:1234 # not found, connection refused, etc...
http://localhost:1234 # works fine
//EDIT: Just to be clear - I am not asking how to set up a reverse proxy, but how to run a Docker container so that it is accessible only from localhost.
Specify the required host IP in the port mapping
docker run -p 127.0.0.1:1234:80 -d -v /var/www/:/usr/share/nginx/html nginx:1.9
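To verify the binding from the host (using the same port as above):

curl http://localhost:1234/      # nginx welcome page
curl http://{server_ip}:1234/    # connection refused
ss -tlnp | grep 1234             # listener is bound to 127.0.0.1 only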
If you are running a reverse proxy, you might want to put all the containers on a user-defined network along with the reverse proxy; then everything runs in a container and is only reachable over their internal network.
docker network create web
docker run -d --net=web -v /var/www/:/usr/share/nginx/html nginx:1.9
docker run -d -p 80:80 --net=web haproxy
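Note that the stock haproxy image expects a configuration file. A minimal sketch, assuming the nginx container above was also given --name web1 so the proxy can resolve it by name (the official image reads /usr/local/etc/haproxy/haproxy.cfg):

# haproxy.cfg
defaults
  mode http
  timeout connect 5s
  timeout client 30s
  timeout server 30s
frontend http-in
  bind *:80
  default_backend nginx_backend
backend nginx_backend
  server web1 web1:80

# mount it into the proxy container:
docker run -d --net=web -p 80:80 -v $(pwd)/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro haproxy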
Well, the solution is pretty simple: you just have to specify 127.0.0.1 when mapping the port:
docker run -p 127.0.0.1:1234:80 -d -v /var/www/:/usr/share/nginx/html nginx:1.9
Related
I'm following a tutorial on how to start a basic nginx server in a Docker container. However, in the example, the nginx container is published on localhost (0.0.0.0).
Meanwhile, when I run it, for some reason it runs on the IP 10.0.75.2.
Is there any particular reason why this is happening? And is there any way to get it to run on localhost like in the example?
Edit: I tried using --net=host but it made no difference.
The default network is bridged. The 0.0.0.0:49166->443 shows a port mapping from an exposed port in the container to a high-numbered ephemeral port on your host, because of the -P option. You can manually map specific ports by changing that flag to something like -p 8080:80 -p 443:443 to have ports 8080 and 443 on your host map into the container.
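For example, a quick check of an explicit mapping (the container name is just a placeholder):

docker run --name nginx-test -d -p 8080:80 nginx
curl http://localhost:8080/   # answered by port 80 inside the container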
You can also change the default network to be your host network, as you've requested. This removes some of the isolation and protection provided by the container and limits your ability to configure integrations between containers, which is why it is not the default. That syntax would be:
docker run --name nginx1 --net=host -d nginx
Edit: from your comments and a reread, I see you're also asking where the 10.0.75.2 IP address comes from. This depends on how you launch the Docker daemon: that binding is assigned when the --ip flag is passed to the daemon (see the dockerd documentation). If you're running Docker in a VM with docker-machine, I'd expect this to be the IP of your VM.
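If that's the case, docker-machine can confirm the VM's address (the machine name default is an assumption):

docker-machine ip default
# 10.0.75.2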
A good workaround is to publish the port with the -p flag (short for --publish):
docker run -d -p 3000:80 --name <your_container_name> nginx:<version_tag>
How does Docker handle docker run -d --net=host <image> if I run 2 images that have the exact same port EXPOSEd?
For example, if I run:
$ docker run -d --net=host nginx
$ docker run -d --net=host nginx
$ docker run -d --net=host httpd
# I now have 3 containers running, all of which EXPOSE port 80
# what does the following return?
$ curl http://localhost:80/
What response do I get? The first nginx? The second nginx? The Apache httpd? And how does Docker manage it under the covers? There is no NAT involved, since I used --net=host.
Well, I have my answer. Just as a process that tries to bind to a port already in use fails with EADDRINUSE, so too does a container that tries to bind to a port already in use on the host when run with --net=host.
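You can watch this happen (the container names are placeholders; the comment paraphrases nginx's usual bind error):

docker run -d --net=host --name host-nginx-1 nginx
docker run -d --net=host --name host-nginx-2 nginx
docker ps -a                # host-nginx-2 has already exited
docker logs host-nginx-2    # bind() to 0.0.0.0:80 failed (98: Address already in use)

So the curl above is answered by the first nginx; the second container never gets the port.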
I have found a similar thread but failed to get it to work. So, the use case is:
I start a container on my Linux host
docker run -i -t --privileged -p 8080:2375 mattgruter/doubledocker
When in that container, I want to start another one with the GAE SDK devserver running.
Then I need to access the running app from the host system's browser.
When I start a container inside the container with
docker run -i -t -p 2375:8080 image/name
I get an error saying that port 2375 is already in use. With a different mapping (e.g. 8080:8080) I can start the app and curl 0.0.0.0:8080 from inside both containers, but I cannot preview the app from the host system, since localhost:8080 on the host is mapped to port 2375 in the first container, and that port cannot be reused when launching the second container.
I'm able to do that using the image jpetazzo/dind. As an example, here is a test that worked for me:
From my host machine I run the container with docker installed:
docker run --privileged -t -i --rm -e LOG=file -p 18080:8080 jpetazzo/dind
Then, inside that container, I pulled the nginx image and ran it with:
docker run -d -p 8080:80 nginx
And from the host environment I can browse the nginx welcome page at http://localhost:18080.
With the image you were using (mattgruter/doubledocker) I had some problems running it (something related to attaching logs).
I have multiple Docker containers on a single machine. Each container runs a process plus a web server that provides an API for that process.
My question is: how can I access each API from my browser when the default port is 80? To access the web server inside a Docker container, I currently do the following:
sudo docker run -p 80:80 -t -i <yourname>/<imagename>
This way I can do the following from my computer's terminal:
curl http://hostIP:80/foobar
But how to handle this with multiple containers and multiple web servers?
You can either publish each container on a different host port, e.g.
docker run -p 8080:80 -t -i <yourname>/<imagename>
docker run -p 8081:80 -t -i <yourname1>/<imagename1>
or put a proxy (nginx, Apache, Varnish, etc.) in front of your API containers.
Update:
The easiest way to run a proxy would be to link it to the API containers, e.g. with an Apache config containing
RewriteRule ^api1/(.*)$ http://api1/$1 [proxy]
RewriteRule ^api2/(.*)$ http://api2/$1 [proxy]
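Note that mod_rewrite and mod_proxy (with mod_proxy_http) must be enabled for these rules to work, and in server/vhost context the pattern needs a leading slash and RewriteEngine turned on. A minimal sketch of the surrounding vhost, assuming the linked containers listen on port 80:

<VirtualHost *:80>
  RewriteEngine On
  RewriteRule ^/api1/(.*)$ http://api1/$1 [proxy]
  RewriteRule ^/api2/(.*)$ http://api2/$1 [proxy]
</VirtualHost>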
You can then run your containers like this:
docker run --name api1 <yourname>/<imagename>
docker run --name api2 <yourname1>/<imagename1>
docker run --link api1:api1 --link api2:api2 -p 80:80 <my_proxy_container>
This might be somewhat cumbersome, though, if you need to restart the API containers, as the proxy container would have to be restarted as well (links are fairly static in Docker at the moment). If this becomes a problem, you might look at approaches like fig (the predecessor of Docker Compose) or auto-updated proxy configuration: http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/ . The latter link also shows proxying with nginx.
Update II:
In more modern versions of Docker it is possible to use a user-defined network instead of the links shown above, which overcomes some of the inconveniences of the now-deprecated link mechanism.
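A sketch of the same setup with a user-defined network (the network name is an assumption; containers on it resolve each other by name via Docker's embedded DNS):

docker network create apinet
docker run -d --name api1 --net=apinet <yourname>/<imagename>
docker run -d --name api2 --net=apinet <yourname1>/<imagename1>
docker run -d --net=apinet -p 80:80 <my_proxy_container>
# the proxy reaches http://api1/ and http://api2/ by name, and the API
# containers can be restarted without touching the proxy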
Only a single process is allowed to bind to a given host port at a time, so running multiple containers means each must be published on a different port number. Docker can assign ports automatically for you with the -P flag.
sudo docker run -P -t -i <yourname>/<imagename>
You can use the "docker port" and "docker inspect" commands to see the actual port number allocated to each container.
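For example (the container name and the assigned port are illustrative):

docker port mycontainer
# 80/tcp -> 0.0.0.0:32768
docker inspect -f '{{ .NetworkSettings.Ports }}' mycontainer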
We can create a new container and publish the application's port in the docker run command, like:
sudo docker run -d -p 5000:5000 training/webapp python app.py
or
sudo docker run -d -P training/webapp python app.py
But what if someone forgot to specify the -p or -P option in the docker run command? The container gets created and runs the application locally. Now, how can I map the port on which the application is listening inside the container to a port on my Ubuntu host machine?
Kindly help with this.
Thanks.
Short answer: you can't. You need to start a new container with the proper parameters (stopping the old one first is optional).
Docker spins up a local proxy and sets up iptables rules for the NAT. If you really can't start a new container, you could manually set up the iptables rules and spin up a socat process yourself. You can take a look at the network part of the Docker source code for more info.
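A rough sketch of that manual workaround, assuming the app listens on port 5000 inside a container named mycontainer on the default bridge (both names are placeholders):

# find the container's bridge IP
CONTAINER_IP=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' mycontainer)
# forward host port 5000 to the container until this process is killed
socat TCP-LISTEN:5000,fork,reuseaddr TCP:$CONTAINER_IP:5000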