Running Docker container on a specific port - ruby-on-rails

I have a Rails app deployed with Dokku on DigitalOcean. I've created a Postgres database and linked it with the Rails app. Everything worked fine until I restarted the droplet. I figured out that the app stopped working because on restart every Docker container gets a new port, and the Rails application isn't able to connect to it. If I run dokku postgresql:info myapp it still shows the old port, even though the container's port has changed. If I manually change database.yml and push it to the dokku repo, everything works.
So how do I prevent Docker from assigning a different port each time the server restarts? Or maybe there is an option to change the ports of running containers.

I don't have much experience with Dokku, but in Docker there's no such thing as a container's port.
In Docker you can expose a container's port to receive incoming requests and map it to a specific port on your host machine.
With that you can, for instance, run your Postgres inside a container and tell Docker that you want to expose port 5432, the default PostgreSQL port, to receive incoming requests:
sudo docker run --expose=5432 -P <IMAGE> <COMMAND>
The --expose=5432 flag tells Docker to expose port 5432 to receive incoming connections from the outside world.
The -P flag tells Docker to map all exposed ports in your container to ports on the host machine.
With that you can connect to Postgres by pointing at your host's ip:port.
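Since -P picks the host port more or less at random, you usually then have to ask Docker which one it chose. One way to do that (the container name here is a placeholder) is:
sudo docker port <CONTAINER> 5432
which prints something like 0.0.0.0:49153, i.e. the host address and port your client should point at.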
If you want to map a container's port to a different host machine port you can use the -p flag:
sudo docker run --expose=5432 -p 666:5432 <IMAGE> <COMMAND>
Not sure if this can help you in a Dokku environment, but I hope so.
For more information about docker's run command see: https://docs.docker.com/reference/commandline/cli/#run

Related

expose docker container's application url to host

I have a Docker image with a Ruby on Rails environment built in (i.e. some Rails gems and system packages installed), and it has an EXPOSE 3000 at the end to expose the port.
I ran a container with docker run -p 3000:3000 -ti <image> bash, then started the Rails server. The logs say the web server is available on localhost:3000. I tried to connect both to the IP address reported by docker inspect <id> and to localhost on my host machine, but neither could connect. What could be the problem here?
If your application is listening on localhost, it will only respond to requests from the container's localhost - that is, other processes inside the container.
To fix this you need to set your server's listen address to any address (usually specified as 0.0.0.0). I've never used Rails, but from a quick search, you should use the -b option.
So changing your ENTRYPOINT or CMD in your Dockerfile to contain -b 0.0.0.0 would probably do it.
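As a rough sketch, assuming a fairly plain Rails image that starts the server directly, the end of the Dockerfile could look like:
CMD ["rails", "server", "-b", "0.0.0.0", "-p", "3000"]
The exact command depends on how your image starts the app (bundle exec, a start script, etc.); the important part is the -b 0.0.0.0.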

Custom configuration for redis container

I want to adapt my Redis settings via a custom conf file and followed the documentation for the implementation. Running my container with the following command throws no error - so far so good.
docker run --name redis-container --net redis -v .../redis:/etc/redis -d redis redis-server /etc/redis/redis.conf
To check whether my config file is read, I switched the default port 6379 to 6380, but looking at my Docker ports via docker ps still shows the default 6379 as my port.
Is there a difference between the Redis port itself and the container port, or where is my problem located?
The standard Redis image Dockerfile contains the line
EXPOSE 6379
Once a port has been exposed this way, there is no way to un-expose it. Exposing a port has fairly few practical effects in modern Docker, but the most visible is that 6379/tcp will show up in the docker ps output for each exposed port even if it's not separately published (docker run -p). There's no way to remove this port number from the docker ps output.
Docker's port system (the EXPOSE directive and the docker run -p option) are a little bit disconnected from what the application inside the container is actually doing. In your case the container is configured to expose port 6379 but the process is actually listening on port 6380; Docker has no way of knowing these don't match up. Changing the application configuration won't change the container configuration, and vice versa.
As a practical matter you don't usually need to change application ports. Since this Redis will be the only thing running in its container and its corresponding isolated network namespace, it can't conflict with other Redises on the host or in other containers. If you need to remap it on the host, you can use different port numbers for -p; the second number must match what the process is listening on (and Docker can't auto-detect or check this) but the first can be any port.
docker run -p 6380:6379 ... redis
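With that, the PORTS column of docker ps should show something like 0.0.0.0:6380->6379/tcp instead of the bare 6379/tcp you get from EXPOSE alone, confirming that host port 6380 is forwarded to 6379 inside the container.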
If you're trying to check whether your configuration has had an effect, running CONFIG GET via redis-cli could be a more direct way to ask what the server's configuration is.
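As a sketch of that check, assuming the container is named redis-container as in your command and your config really did move the server to 6380 inside the container:
docker exec -it redis-container redis-cli -p 6380 CONFIG GET port
If the custom redis.conf was loaded this returns port / 6380; if the server only answers on the default 6379 (i.e. the same command with -p 6379 works instead), the file wasn't picked up.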

docker networking: How to know which ip will be assigned?

Trying to get acquainted with docker, so bear with me...
If I create a database container (Postgres) with port 5432 exposed, and then create another webapp container which wants to connect on 5432, they get assigned IP addresses on Docker's bridge network...
probably 172.0.0.1 and 172.0.0.2 respectively. If I fire up the containers, I can inspect their IPs with docker network inspect <bridge id>.
If I then take those IPs and plug them, along with the port, into my webapp settings, everything works great...
BUT I shouldn't have to run my webapp, shell into it, change settings, and then run the server; I should be able to just run the container...
So what am I missing here, is there a way to have these two containers networked without having to do all of that?
Use a Docker network
docker network create myapp
docker run --network myapp --name db [first-container...]
docker run --network myapp --name webapp [second-container...]
# ... and so on
Now you can refer to containers by their names from within other containers, just as if they were hostnames in DNS.
In the application running in the webapp container, you can configure the database server using db as the hostname.
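As a quick sanity check that the name actually resolves on that network (busybox here is just a throwaway test image):
docker run --rm --network myapp busybox ping -c 1 db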

How to connect two docker containers through localhost?

I have two services running in separate containers: one is grunt (the application), which runs on port 9000, and the other is Sails.js (the server), which runs on port 1337. What I want to do is have the client app connect to the server through localhost:1337. Is this feasible? Thanks.
HOST
You won't be able to connect to the other container with localhost (as localhost is the current container), but you can connect via the container host (the host that is running your containers). In your case you need the boot2docker VM's IP (echo $(boot2docker ip)). For this to work, you need to expose your port at the host level (which you are doing with -p 1337:1337).
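For example, from your Mac's shell, assuming the server was started with -p 1337:1337, something like this should reach it:
curl http://$(boot2docker ip):1337
From inside the application container you would use that same VM IP and port in place of localhost.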
LINK
Another solution, which is the most common and the one I prefer when possible, is to link the containers.
You need to add the --name flag to the server docker run command:
--name sails_server
You need to add the --link flag to the application docker run command:
--link sails_server:sails_server
And inside your application, you will be able to access the server at sails_server:1337.
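Put together, and with placeholder image names, the two run commands could look like:
docker run -d --name sails_server -p 1337:1337 <server-image>
docker run -d --name webapp --link sails_server:sails_server -p 9000:9000 <app-image>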
You could also use environment variables to get the server IP. See documentation: https://docs.docker.com/userguide/dockerlinks/
BONUS: DOCKER-COMPOSE
Your run commands may start to be a bit long... in this case I like to use docker-compose that allows me to define my containers and their relationships (volumes, names, link, commands...) in one file.
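As a minimal sketch (old-style compose with links; image names are placeholders), such a docker-compose.yml might look like:
server:
  image: <server-image>
  ports:
    - "1337:1337"
webapp:
  image: <webapp-image>
  links:
    - server:sails_server
  ports:
    - "9000:9000"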
Yes. If you use the -p 1337:1337 parameter in your docker run command, it will map port 1337 from inside the container to localhost:1337 on your host.

How to make a container visible to the outside network, and handle I.P addresses in production

I have:
a Windows server on bare metal with Hyper-V
Ubuntu server running in Hyper-V
a Docker container with an NGINX web application running in Ubuntu server
Every time I run a Docker image it gets a new IP address on the docker0 network interface. For production, I don't know how to make the Docker container visible to the external network. I also don't know how to handle the fact that the IP address changes every time the image is run.
What's the correct way to:
make a Docker container visible to the external network?
handle Docker container IP addresses in a repeatable way in production?
When you run your Docker container with docker run, you should use the -p switch to forward ports, for example:
docker run -p 80:80 nginx
This would route port 80 from the Ubuntu server to port 80 within the Nginx container.
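To verify it from another machine on the network (the host address is a placeholder), something like this should reach the containerized Nginx regardless of which docker0 IP the container currently has:
curl http://<ubuntu-server-ip>:80/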
You should check the Docker documentation on this at https://docs.docker.com/reference/run/#expose-incoming-ports.
When you have multiple containers and links, you should use EXPOSE in the Dockerfile as documented here: https://docs.docker.com/reference/builder/#expose.
