Make a request from one container to a second container with localhost - docker

I have two docker-compose files set up - one for the frontend application, and one for the backend.
The frontend runs on port 3000 and is published on port 80: 0.0.0.0:80:3000
The backend runs on port 3001 and is published publicly on the same port: 0.0.0.0:3001:3001
From the host machine, I can easily make a request to the backend:
$ curl 127.0.0.1:3001
But I cannot do the same from the frontend container - nothing is listening on that port there, because the two containers are on different networks.
I tried connecting both of them to one network - then I can use the backend container's IP, or its hostname, to make a valid request. But it's still not localhost. How can I solve this?

When using Docker, localhost points to the container itself, not to your host machine. There are a few ways to do what you want, but none of them will work with localhost from inside a container.
The cleanest way is to set up hostnames for your services in the compose file and configure your applications to look for those hostnames instead of localhost.
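A minimal sketch of that approach, assuming the two separate compose files from the question and a shared, externally created network (the appnet name and the build paths are placeholders):

# Create the shared network once on the host:
#   docker network create appnet

# backend/docker-compose.yml
services:
  backend:
    build: .
    ports:
      - "0.0.0.0:3001:3001"
    networks:
      - appnet
networks:
  appnet:
    external: true

# frontend/docker-compose.yml
services:
  frontend:
    build: .
    ports:
      - "0.0.0.0:80:3000"
    networks:
      - appnet
networks:
  appnet:
    external: true

The frontend application then requests http://backend:3001 instead of http://localhost:3001; Docker's embedded DNS resolves the service name to the backend container on the shared network.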

Related

How to get access, from inside a container, to an API that the host has access to but the container doesn't?

My host has access to the API, but from inside the docker container it doesn't work. I've already added DNS to the docker daemon.
The host I want to access is an external API working on port 80.
I want to connect to it via curl.
When I PING from my host, I get responses.
When I PING from the container, I get "unknown host".
My networking is set to bridge.
I was thinking about setting up a proxy, but maybe there's a better way.
The easiest way would probably be to use network_mode: host in your docker-compose.yml. This works well as long as you use a single container, but if you want several containers to communicate with each other, bridge is better.
See here for more info: https://docs.docker.com/network/#network-drivers
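A sketch of that option in compose terms (the service and image names are placeholders):

services:
  myservice:
    image: myimage
    network_mode: host   # the container shares the host's network stack and DNS

With host networking, the container resolves names exactly as the host does, which sidesteps the container-side DNS failure described above. Note that published ports and links are not available in this mode.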

Talk to server on docker container with no exposed ports

I have some docker containers talking to each other through docker bridge networks. They cannot be accessed from outside (I was told), as they are launched from a script with a default command that includes neither 'expose' nor the '-p' option. I cannot change that script.
I would like to connect to one of these containers, which runs a server and listens for requests on port 8080. I tried connecting that bridge to a newly created docker bridge network, but I did not succeed.
Now I am thinking of creating a new container and letting it talk to the server one (through the bridge networks). As it is a new container, I can use the 'expose' or '-p' options, so it would be able to talk to the host machine.
Is this a good idea? How can I forward every request made to that container to the server one, and get the responses back to the host machine?
Thanks
Within a docker network, all ports are open between containers. So you only need a container that publishes a port to the host machine and is in the same network as the containers you have already created.
This is a relatively normal pattern. You can use a reverse proxy like nginx to achieve something like this.
There are some containers that automate this process:
https://github.com/jwilder/nginx-proxy
If you have no control over the other containers though, you will need to write the proxy config by hand.
If the container you are trying to connect to is an HTTP server, you may be able to use a ready-made container image that can work as an HTTP forwarder (e.g., nginx, which is relatively easy to configure as one).
If you need plain TCP forwarding, you could run a container with 'socat' (socat can work as a TCP forwarder).
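A sketch of the socat variant, assuming the existing bridge network is named mynet and the server container is reachable as server_container (both placeholders), using the small ready-made alpine/socat image:

docker run -d --name forwarder \
  --network mynet \
  -p 8080:8080 \
  alpine/socat \
  TCP-LISTEN:8080,fork,reuseaddr TCP:server_container:8080

Requests to port 8080 on the host are relayed to port 8080 on the server container, and the responses flow back over the same connection.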
NOTE: in either case, you will be exposing a listener that wasn't meant to be on a public address. Take measures to prevent unauthorized connections.

Docker containers serving different subdomains on port 80

Is it possible to have 2 docker containers serve on port 80 but on different subdomains or hostnames?
Something like:
api.example.com goes to a node application
app.example.com goes to a Java application
Yes you can, using a proxy.
There is a project, jwilder/nginx-proxy, which lets you set your hostname via an environment variable and then routes each request to the appropriate container.
A good working example is given here: https://blog.florianlopes.io/host-multiple-websites-on-single-host-docker/
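A sketch of that setup in a single compose file, using the VIRTUAL_HOST environment variable that nginx-proxy watches (the app image names are placeholders):

services:
  proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro   # the proxy watches container start/stop events
  api:
    image: my-node-app          # placeholder
    environment:
      - VIRTUAL_HOST=api.example.com
  app:
    image: my-java-app          # placeholder
    environment:
      - VIRTUAL_HOST=app.example.com

Only the proxy binds host port 80; it inspects the Host header of each request and forwards it to whichever container declared the matching VIRTUAL_HOST.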
No. The first container you start will have exclusive access to the port, and if you try to start a second container on the same port it will fail.
Instead, use a load balancer such as Nginx or Traefik to handle the incoming traffic on port 80 and proxy it on to your two app containers based on the Host header.
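If you write the proxy config by hand instead, a minimal nginx sketch (the upstream names and ports are assumptions):

server {
    listen 80;
    server_name api.example.com;
    location / {
        proxy_pass http://node-app:3000;   # placeholder upstream
        proxy_set_header Host $host;
    }
}

server {
    listen 80;
    server_name app.example.com;
    location / {
        proxy_pass http://java-app:8080;   # placeholder upstream
        proxy_set_header Host $host;
    }
}

nginx picks the server block whose server_name matches the request's Host header, so both subdomains can share port 80.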

How to assign the host IP to a service that is running with docker compose

I have several services specified inside a docker-compose file that communicate with each other via links. Now I want one of these services to talk to the outside world and fetch some data from another server on the host network. But the docker service uses its internally assigned IP address, which leads to the firewall of the host network blocking its requests. How can I tell this docker service to use the IP address of the host instead?
EDIT:
I got a step further: what I'm looking for is the network_mode option with the value host. But the problem is that network_mode: "host" cannot be mixed with links. So I guess I have to change the configuration of all the docker services to not use links. I will try and see how this works out.
You should publish a port for that service, like so:
ports:
  - "8000:8000"
The 8000 on the left is the host port and the 8000 on the right is the container port.
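If you instead go the network_mode: host route from the question's edit, a sketch of what the reconfiguration can look like, with links replaced by published ports (service names and images are placeholders):

services:
  app:
    image: myapp
    network_mode: host    # app shares the host's network stack, so outbound traffic uses the host IP
  db:
    image: postgres
    ports:
      - "5432:5432"       # app now reaches db via localhost:5432 instead of a link

Note that 'ports' and 'links' are not available on a service that uses network_mode: host, so anything that service needs from its peers must be published to the host.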

Set specific IP or name for my docker machine

Is there any way to set either the IP, or ideally an ID and hostname for my hosts file, in my docker-compose.yml file? At the moment I'm SSH'ing into my docker DB via SequelPro, but if I start up more than one machine I get different IPs, which I then need to update in SequelPro every time.
Ideally I want to be able to docker-compose up -d and then visit myproject.domain.com straight off, without having to find the allocated IP each time and change my hosts file, or worry about the allocated IP being different.
Is this possible?
You have a few options; which one is best really depends on your particular needs. You say that you are connecting to your container via SSH, but this sounds like a workaround for something: presumably, your Docker container is offering some sort of useful service other than ssh, and that's what you actually need to access.
The easiest solution is simply to expose the network port for that service on your host using the ports directive in your docker-compose.yaml file. If you just need access locally, you can do something like:
ports:
- "127.0.0.1:8001:8001"
That would expose container port 8001 as port 8001 on the loopback interface of your host only. If you need external access to the service (that is, access from some place other than the docker host), you could publish the port on all host interfaces:
ports:
- "8001:8001"
In that case, you could access the service as <your_host_name_or_ip>:8001.
If this doesn't meet your needs, there are solutions out there that will register container names in DNS, but I haven't used one recently enough to make a suggestion.
