Expose a Docker container's application URL to the host

I have a Docker image with a Ruby on Rails environment built in (i.e. some Rails gems and system packages installed), and an EXPOSE 3000 at the end of the Dockerfile to expose the port.
I ran a container with docker run -p 3000:3000 -ti <image> bash, then started the Rails server. The logs say the web server is available on localhost:3000. I tried connecting to both the IP address shown by docker inspect <id> and localhost on my host machine, but neither would connect. What could be the problem here?

If your application is listening on localhost, it will only respond to requests from the container's localhost - that is, from other processes inside the container.
To fix this, you need to set your server's listen address to all interfaces (usually specified as 0.0.0.0). I've never used Rails, but a quick search suggests the -b option is what you want.
So changing the ENTRYPOINT or CMD in your Dockerfile to include -b 0.0.0.0 should do it.
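For example, a minimal sketch of the Dockerfile line (assuming the standard rails server command; adjust to however your app actually starts):
CMD ["rails", "server", "-b", "0.0.0.0", "-p", "3000"]
Or, when starting the server manually inside the container:
$ rails server -b 0.0.0.0 -p 3000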

Related

Docker - Web browser cannot connect to a running web app container on server

I have successfully built my web app image and run a container on my server, which is an EC2 instance, with no errors at all, but when I try to access the web page I get no connection, even though I'm going through the bound port of the host server. The build and run steps reported no errors of any kind, neither build errors nor connection errors. I'm new to both Docker and AWS, so I'm not sure what the problem could be. Any help is really appreciated. Thanks a lot!
Here is my Dockerfile
FROM ubuntu
WORKDIR /usr/src/app
# install dependencies, nothing wrong
RUN ...
COPY ...
# exposed ports
EXPOSE 5000
EXPOSE 5100
CMD ...
Docker build command
$ sudo docker build -t demo-app .
Docker run command
$ sudo docker run -it -p 8080:5000 -p 808:5100 --name demo-app-1 demo-app
I accessed through the bound port of the host server.
That means the application is running and you can reach it with curl localhost:8080 from the host itself.
Assuming you can access the application after SSHing into the EC2 instance and have verified that it responds on the instance's localhost, there are mainly two issues to check:
The security group may not be allowing connections on the desired port; allow inbound traffic on port 8080 and check again.
The instance may be in a private subnet; verify the instance's networking.
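A quick way to narrow it down (a sketch; the public IP and security group ID are placeholders):
# on the EC2 instance: confirm the app answers on the published host port
$ curl localhost:8080
# from your own machine: confirm the port is reachable from outside
$ curl http://<ec2-public-ip>:8080
# inspect the inbound rules of the instance's security group
$ aws ec2 describe-security-groups --group-ids <sg-id>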

Docker container with published port not accessible from outside

My Docker container is running Rails on port 3000, and I'm publishing the port to port 8900. See:
$ docker-compose ps
Name Command State Ports
rails_poc_1 /bin/sh -c puma -C config/ ... Up 0.0.0.0:3000->8900/tcp
However, when visiting http://localhost:8900 my browser displays ERR_CONNECTION_REFUSED.
When curling port 3000 from inside the container with docker exec 8fcceed1d477 curl localhost:3000, I get a valid response, which proves Rails is working properly.
Am I overlooking something?
I think you have your port mapping reversed. Your ps line should look more like:
0.0.0.0:8900->3000/tcp
if you want container port 3000 to be reachable from outside the container as host port 8900.
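In docker-compose terms the mapping is host:container, so the relevant ports entry would presumably be (a sketch of just that part of the compose file):
ports:
  - "8900:3000"  # host port 8900 -> container port 3000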

start a docker LAMP image with apache bound to non-standard port

I am new to docker, using https://github.com/mattrayner/docker-lamp
I've read about the docker run command but still don't quite get the -p option. Is there a way to make it tell Apache to listen on a non-standard port?
I have succeeded in starting it on the default port 80, then reconfiguring/reloading Apache from within the container to bind to port 8080. But in that scenario I can't reach the container's Apache from outside via localhost:8080. (If that makes sense.)
I simply want to develop something using PHP 5.6 without disturbing anything else on my local setup, which is running PHP 7.0. If there's another way to achieve the same end, I'm good with that too.
The -p or --publish option is a host:container port mapping, specifically so that you don't have to change whatever is already running inside the container.
If the container is already listening on port 80 but you want to access it externally (from your host or laptop) via port 8080, you can simply run with -p 8080:80, which maps host port 8080 to container port 80.
Multiple containers can run and listen on port 80 on the same host (since each container has its own IP address on the Docker network), but a given host port can only be published to one container at a time.
For example, if you had 3 containers you wanted to run and all of them listened on port 80, you could start the first with -p 8080:80, the second with -p 8082:80, and the third with -p 8084:80.
The -p section of https://docs.docker.com/engine/reference/commandline/run/#publish-or-expose-port--p---expose goes into this a bit deeper.
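For the docker-lamp image specifically, that would look something like this (a sketch; check the repo's README for the exact image name and tag):
$ docker run -p 8080:80 mattrayner/lamp:latest
# Apache inside the container still listens on 80; browse to http://localhost:8080 on the host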

How to connect two docker containers through localhost?

I have two services running in separate containers: one is grunt (the application), which runs on port 9000, and the other is Sails.js (the server), which runs on port 1337. What I want is to have the client app connect to the server through localhost:1337. Is this feasible? Thanks.
HOST
You won't be able to connect to the other container via localhost (as localhost is the current container), but you can connect via the container host (the host that is running your containers). In your case you need the boot2docker VM IP (echo $(boot2docker ip)). For this to work, you need to publish the port at the host level (which you are doing with -p 1337:1337).
LINK
Another solution that is most common and that I prefer when possible, is to link the containers.
You need to add the --name flag to the server docker run command:
--name sails_server
You need to add the --link flag to the application docker run command:
--link sails_server:sails_server
And inside your application, you will be able to access the server at sails_server:1337.
You could also use environment variables to get the server IP. See documentation: https://docs.docker.com/userguide/dockerlinks/
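Put together, the two run commands might look like this (a sketch; the image names are placeholders):
$ docker run -d --name sails_server -p 1337:1337 <server-image>
$ docker run -d --link sails_server:sails_server -p 9000:9000 <app-image>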
BONUS: DOCKER-COMPOSE
Your run commands may start to get a bit long... in this case I like to use docker-compose, which lets me define my containers and their relationships (volumes, names, links, commands...) in one file.
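A minimal compose sketch of the same setup (service and image names are assumptions):
app:
  image: <app-image>
  links:
    - sails_server
  ports:
    - "9000:9000"
sails_server:
  image: <server-image>
  ports:
    - "1337:1337"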
Yes: if you use -p 1337:1337 in your docker run command, it will publish port 1337 from inside the container to localhost:1337 on your host.

Running Docker container on a specific port

I have a Rails app deployed with Dokku on DigitalOcean. I've created a Postgres database and linked it with the Rails app. Everything worked fine until I restarted the droplet. I figured out that the app stopped working because on restart every Docker container gets a new port, and the Rails application can't connect to it. If I run dokku postgresql:info myapp it still shows the old port, even though the actual port has changed. If I manually change database.yml and push it to the Dokku repo, everything works.
So how do I prevent Docker from assigning a different port each time the server restarts? Or is there an option to change the ports of running containers?
I don't have much experience with Dokku, but in Docker there's no such thing as a fixed "container port" on the host.
In Docker you expose a container's port to receive incoming requests and map it to a specific port on your host machine.
With that you can, for instance, run Postgres inside a container and tell Docker you want to expose 5432, the default PostgreSQL port, to receive incoming requests:
sudo docker run --expose=5432 -P <IMAGE> <COMMAND>
The --expose=5432 flag tells Docker to expose port 5432 to receive incoming connections from the outside world.
The -P flag tells Docker to map all exposed container ports to randomly chosen high ports on the host machine.
With that you can connect to Postgres by pointing at your host's ip:port.
If you want to map a container's port to a specific host machine port, use the -p flag instead:
sudo docker run -p 666:5432 <IMAGE> <COMMAND>
Not sure whether this helps in a Dokku environment, but I hope so.
For more information about docker's run command see: https://docs.docker.com/reference/commandline/cli/#run
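So, at the plain-Docker level, pinning the host port keeps it stable across restarts (a sketch; the container name and image are placeholders, and Dokku may manage its containers differently):
$ sudo docker run -d --name myapp-postgres -p 5432:5432 <postgres-image>
# database.yml can then point at the host's port 5432 permanently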
