I have a RESTful application running on port 8000 that takes a callback URL as an argument. It does some work, calls the callback URL to confirm that it should continue, and calls it again if anything goes wrong. In my tests, I just spin up an HTTP server on localhost port 8009 to wait for those requests.
When I run the application directly, it works fine because the localhost is the same. However, when I run the application locally in a container, it obviously doesn't work because of the network isolation of the container. Here's part of my docker-compose.yml:
version: '3'
services:
  app:
    build: .
    ports:
      - "8000:8000"
I tried adding - "8009:8009" under ports, but that mapping only goes one direction: I can reach the API on port 8000, but I can't figure out how to let the container call my laptop's http://localhost:8009.
Any idea how to allow the container to call out to another port on my laptop, or if there's a better way to test the callback functionality? I could run the tests inside the container, but I like being able to run them separately.
Use network_mode: "host" in your docker-compose file (the equivalent of docker run --network host); then localhost in your docker container will point to your docker host.
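For the compose file in the question, a minimal sketch would look like this (host networking traditionally works this way only on Linux, and any ports: mappings are ignored in this mode):
version: '3'
services:
  app:
    build: .
    network_mode: "host"   # container shares the host's network stack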
If your laptop is running macOS or Windows, use the hostname host.docker.internal instead; it resolves to the host machine's IP address from inside the container.
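From inside the container, the callback target would then look like this, using the question's port 8009:
curl http://host.docker.internal:8009/
On Linux, Docker Engine 20.10+ lets you create the same alias yourself with the special host-gateway value:
extra_hosts:
  - "host.docker.internal:host-gateway"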
I am deploying an application in a Docker container. The application sends requests to another server with a callback URL. The callback URL contains the host and port where the app actually runs.
Configuring this callback URL in a "stable, non-dynamic" test environment is easy, because we know the IP and port where the app runs. But in Docker, the callback URL is the host machine's IP address plus the port configured in the docker-compose.yml file. Both parameters are dynamic and cannot be hardcoded into the Docker image.
I somehow need the Docker host's IP and the port published by the container available inside the container.
This is how my container gets the docker host machine IP:
version: '3'
services:
  my-server:
    image: ...
    container_name: my-server
    hostname: my-server
    ports:
      - "1234:9876"
    environment:
      - DOCKER_HOST_IP=${HOST_IP}
I set the host IP when I spin up the container:
HOST_IP=$(hostname -i) docker-compose up
Maybe this is not an elegant way, but it is the best I could do so far.
But I have no idea, how to get the exposed port info inside the container.
My idea was that once I know the host IP inside the container, I can run nmap $HOST_IP to get the list of open ports and grep for the proper line somehow. But this does not work, because I run many Docker containers on this host and I am not able to select the proper line with grep.
Here is the result of the nmap scan:
PORT STATE SERVICE
22/tcp open ssh
111/tcp open rpcbind
443/tcp open https
5001/tcp open commplex-link
5002/tcp open rfe
7201/tcp open dlip
1234/tcp open vcom-tunnel
1235/tcp open vcom-tunnel
1236/tcp open teradataordbms
60443/tcp open unknown
So when I execute nmap from the container, I can see all of the open ports on my host machine, but I have no idea how to select the line that belongs to the container I am in.
Can I somehow customize the service name before Docker spins up the containers?
What is the best way to get the port number that was opened on the host machine by the container?
You should pass the complete externally-visible callback URL to the application.
ports:
  - "1234:9876"
environment:
  - CALLBACK_URL=http://physical-host.example.com:1234/path
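If the externally-visible address differs per environment, compose variable substitution (the same pattern as the HOST_IP example above) keeps it out of the image; the URL here is just this answer's example value:
environment:
  - CALLBACK_URL=${CALLBACK_URL}
CALLBACK_URL=http://physical-host.example.com:1234/path docker-compose up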
You can imagine an interesting variety of scenarios where the host IP address isn't directly routable either. As a basic example, say you're running the container, on your laptop, at home. The laptop's IP address might be 192.168.1.2/24 but that's still a "private" address; you need your router's externally-visible IP address, and there's no easy way to discover that.
 xx.xx.xx.xx   /--------\  192.168.1.1        192.168.1.2  /----------------\
---------------| Router |----------------------------------|     Laptop     |
               \--------/                                  |     Docker     |
                                                           |   172.17.1.2   |
  Callback address must be                                 |     Server     |
  http://xx.xx.xx.xx/path                                  \----------------/
In a cloud environment, you can imagine a similar setup using load balancers. Your container might run on some cloud-hosted instance. The container might listen on port 11111, and you remap that to port 22222 on the instance. But then in front of this you have a load balancer that listens on the ordinary HTTPS port 443, does TLS termination, and then forwards to the instance, and you have a DNS name connected to that load balancer; the callback address would be https://service.example.com/path, but without explicitly telling the container this, there's no way it can figure this out.
I have two dockerized applications that are part of a Docker network and which both listen on port 8080. I need them both to be exposed on the host machine, which is why I publish them on ports 8080 and 8081 respectively.
app-1:
  ports:
    - "8080:8080"
app-2:
  ports:
    - "8081:8080"
I don't have control over these applications (I cannot change their ports); they are part of an end-to-end test suite that needs to run.
Problem: Depending on whether I execute tests in a Docker container (a third application in the same docker-compose file) or locally, I have to use different ports (8080 or 8081), because the requests go either within the Docker network or over the host machine. It is inconvenient.
Question: Is there a way to remap ports in the compose file so that the port is the same inside and outside the Docker network? For instance, it would be great if I could refer to app-2 using port 8081 inside the Docker network as well.
I would appreciate any tips.
I faced a similar problem and I resolved it using the following method. It was a NodeJS-express application.
1. Ran a container on the defined port and connected to the container's CLI. Found the environment file in which the port was defined.
2. Copied that file onto my local machine using docker cp.
3. Modified the file and updated the port.
4. Stopped the container.
5. Replaced the environment file inside the container with the updated file, again using docker cp.
6. Committed that container as an image using docker commit.
7. Ran the container on the updated port, using the newly committed image.
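A rough shell sketch of those steps (the container name my-app, the image tag, and the env-file path are hypothetical; adjust them to your app):
docker cp my-app:/usr/src/app/.env ./.env       # copy the env file out
sed -i 's/^PORT=8080$/PORT=9090/' ./.env        # update the port
docker stop my-app                              # stop the container
docker cp ./.env my-app:/usr/src/app/.env       # copy the updated file back
docker commit my-app my-app:new-port            # commit the container as a new image
docker run -d -p 9090:9090 my-app:new-port      # run on the updated port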
I have two docker-compose files set up - one for the frontend application, and one for the backend.
The frontend runs on port 3000 and is published on port 80: 0.0.0.0:80:3000
The backend runs on port 3001 and is published publicly on the same port: 0.0.0.0:3001:3001
From the host machine, I can easily make a request to the backend:
$ curl 127.0.0.1:3001
But I cannot do it from the frontend container: nothing is listening on that port there, because the two containers are in different networks.
I tried connecting both of them to one network; then I can use the backend container's IP, or its hostname, to make a valid request. But it's still not localhost. How can I solve this?
When using Docker, localhost points to the container itself, not to your computer. There are a few ways to do what you want. But none of them will work with localhost from a container.
The cleanest way to do it is by setting up hostnames for your services within the yml and configuring your applications to look for those hostnames instead of localhost.
A rough sketch follows; let me know if you need more complete examples and I will post them here.
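Here is a minimal sketch for the setup in the question, assuming both services live in a single compose file (API_URL and the build paths are made-up names; the point is that the frontend reads a service hostname instead of localhost):
version: '3'
services:
  frontend:
    build: ./frontend
    ports:
      - "80:3000"
    environment:
      # hypothetical variable; the frontend would use this instead of localhost
      - API_URL=http://backend:3001
  backend:
    build: ./backend
    ports:
      - "3001:3001"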
To start, I am more familiar with running Docker through Portainer than with doing it through the console.
What I'm Doing:
Currently, I'm running Mopidy through a container, which is being accessed by other machines through the default Mopidy port. In another container, I am running a Slack bot using the Limbo repo as a base. Both of them are running on Alpine Linux.
What I Need:
What I want is for my Slack bot to be able to call MPC commands, such as muting the volume. This is where I am stuck. What is the best way to make this work?
What I've tried:
I could ssh into the other container to send a command, but it doesn't make sense to do this since they're both running on the same server machine.
The best way to connect a bunch of containers is to define a service stack in a docker-compose.yml file and launch all of them using docker-compose up. This way all the containers are connected via a single user-defined bridge network, which makes all their ports accessible to each other without you explicitly publishing them. It also allows the containers to discover each other by service name via DNS resolution.
Example of docker-compose.yml:
version: "3"
services:
service1:
image: image1
ports:
# the following only necessary to access port from host machine
- "host_port:container_port"
service2:
image: image2
In the above example, any application in the service2 container can reach a port on service1 just by using the address service1:port.
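For example, assuming curl is available in image2:
docker-compose up -d
docker-compose exec service2 curl http://service1:container_port/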
I have exposed port 80 in my application container's Dockerfile, and I map "80:80" in my docker-compose.yml, but I only get "Connection refused" after I run docker-compose up and try an HTTP GET on port 80 at my docker-machine's IP address. My Docker-Hub-provided RethinkDB instance's admin panel gets mapped just fine through that same Dockerfile ("EXPOSE 8080") and docker-compose.yml (ports "8080:8080"), and when I start the application on my local development machine, port 80 is exposed as expected.
What could be going wrong here? I would be very grateful for a quick insight from anyone with more docker experience!
So in my case, my service containers were both binding to localhost (127.0.0.1), so the published ports were never actually reachable through my docker-compose port mapping. I configured my services to bind to 0.0.0.0 instead, and now they work flawlessly. Thank you @creack for pointing me in the right direction.
In my case I was using
docker-compose run app
Apparently, the docker-compose run command does not create any of the ports specified in the service configuration.
See https://docs.docker.com/compose/reference/run/
I started using
docker-compose create app
docker-compose start app
and problem solved.
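Alternatively, docker-compose run accepts a --service-ports flag that does create and map the service's ports:
docker-compose run --service-ports app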
In my case I found that the service I was trying to set up had all of its networks set to internal: true. It is strange that this didn't give me an issue when doing a docker stack deploy.
I have opened https://github.com/docker/compose/issues/6534 to ask for a proper error message, so it will be obvious to other people.
If you are using the same Dockerfile, make sure you also expose port 80 (EXPOSE 80); otherwise, your compose mapping 80:80 will not work.
Also make sure that your HTTP server listens on 0.0.0.0:80 and not on localhost or a different port.
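A minimal sketch of the pieces that have to line up (the service name app is a placeholder):
# Dockerfile
EXPOSE 80

# docker-compose.yml
services:
  app:
    build: .
    ports:
      - "80:80"

# and the HTTP server inside the container must bind 0.0.0.0:80, not 127.0.0.1:80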