Access host machine DNS from a Docker container

I'm using Docker beta on a Mac and have some services set up in service-a/docker-compose.yml:
version: '2'
services:
  service-a:
    # ...
    ports:
      - '4000:80'
I then set up the following in /etc/hosts:
::1 service-a.here
127.0.0.1 service-a.here
and I've got an nginx server running that proxies service-a.here to localhost:4000.
So on my Mac I can just run curl http://service-a.here. This all works nicely.
Now, I'm building another service, service-b/docker-compose.yml:
version: '2'
services:
  service-b:
    # ...
    ports:
      - '4001:80'
    environment:
      SERVICE_A_URL: service-a.here
service-b needs service-a for a couple of things:
1. It needs to redirect the user's browser to $SERVICE_A_URL
2. It needs to perform HTTP requests to service-a, also using $SERVICE_A_URL
With this setup, only the redirection (1.) works. HTTP requests (2.) do not work, because the service-b container has no notion of service-a.here in its DNS.
I tried adding service-a.here using the extra_hosts configuration option, but I'm not sure what to set it to; localhost will of course not work.
Note that I really want to keep the docker-compose files separate (joining them would not fix my problem by the way) because they both already have a lot of services running inside of them.
Is there a way to have access to the DNS resolving on localhost from inside a docker container, so that for instance curl service-a.here will work from inside a container?

You can use the links instruction in your docker-compose.yml file to automatically resolve the address of service-a from your service-b container.
service-b:
  image: blabla
  links:
    - service-a:service-a
service-a:
  image: blablabla
You will now have a line in the /etc/hosts of your service-b container saying:
172.17.0.X service-a
Note that service-a will be created before service-b when composing your app. I'm not sure how you can pin a specific IP after that, but Docker's documentation is pretty well done. Hope that's what you were looking for.
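As an aside on the original question (resolving a host-side name from inside a container): newer Docker Engine releases (20.10+) support the special host-gateway value in extra_hosts, which maps a name to the host itself. A minimal sketch, assuming the host's nginx is what actually serves service-a.here:

services:
  service-b:
    # ...
    extra_hosts:
      # resolve service-a.here to the host, where nginx proxies to port 4000
      - 'service-a.here:host-gateway'

With that in place, curl http://service-a.here from inside the service-b container should reach the host's nginx.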

Related

mediasoup v3 with Docker

I'm trying to run a two-server WebRTC example (using mediasoup) in Docker.
I want to run two servers as I am working on video calling across a set of instances!
My error:
createProducerTransport null Error: port bind failed due to address not available [transport:udp, ip:'172.17.0.1', port:50517, attempt:1/50000]
I think it's something to do with the Docker network settings?
docker-compose.yml
version: "3"
services:
db:
image: mysql
restart: always
app:
image: app
build: .
ports:
- "1440:443"
- "2000-2020"
- "80:8080"
depends_on:
- db
app2:
image: app
build: .
ports:
- "1441:443"
- "2000-2020"
- "81:8080"
depends_on:
- db
Dockerfile
FROM node:12
WORKDIR /app
COPY . .
CMD npm start
It says it couldn't bind the address, so either the IP or the port could be causing the problem.
The IP looks like the IP of the Docker instance. But since the Docker instances are on two different machines, it should be the IP of the server, not of the Docker instance (in the mediasoup settings).
There are also ports for the RTP/RTCP connections that have to be opened in the Docker instance. They are normally set in the mediasoup config file, usually a range of a few hundred ports.
You should set your RTC min and max ports to 2000 and 2020 for testing purposes. Also, you do not appear to be forwarding these ports; in docker-compose use 2000-2020:2000-2020. Make sure to set your listenIps properly as well.
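A sketch of that change against the compose file above, assuming the mediasoup RTC port range really is 2000-2020 (mediasoup transports use UDP by default; app2 would need a separate, non-overlapping range on the same host):

services:
  app:
    image: app
    build: .
    ports:
      - "1440:443"
      # forward the RTC range 1:1 so media packets reach the container
      - "2000-2020:2000-2020/udp"
      - "80:8080"
    depends_on:
      - db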
If you are running mediasoup in Docker, the container where mediasoup is installed should be run in host network mode.
This is explained here:
How to use host network for docker compose?
and official docs
https://docs.docker.com/network/host/
Also pay attention to the mediasoup configuration settings webRtcTransport.listenIps and plainRtpTransport.listenIp; they tell the client on which IP address your mediasoup server is listening.
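A minimal sketch of the host-network variant (Linux only; ports: mappings don't apply in this mode, so they are dropped here):

services:
  app:
    image: app
    build: .
    # share the host's network stack; mediasoup binds its ports directly on the host
    network_mode: host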

Name a service in docker compose to be used as a hostname

I want to spin up a database container (e.g., MongoDB) with docker-compose so that I can run some tests against the database.
This is my mongodb.yml docker-compose file.
version: '3.7'
services:
  mongo:
    image: mongo:latest
    restart: always
    ports:
      - 27017:27017
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=example
  mongo-express:
    image: mongo-express:latest
    restart: always
    ports:
      - 8081:8081
    environment:
      - ME_CONFIG_MONGODB_ADMINUSERNAME=root
      - ME_CONFIG_MONGODB_ADMINPASSWORD=example
    depends_on:
      - mongo
When I run it with docker-compose -f mongodb.yml up, I can successfully connect to MongoDB on localhost. In other words, the following connection string is valid: "mongodb://root:example@localhost:27017/admin?ssl=false"
I want to use the equivalent of an alias so that instead of localhost, MongoDB is accessible through the hostname potato.
In GitLab CI/CD, with a Docker runner, I can spin up a mongo container and provide an alias without any issue. Like this:
my_ci_cd_job:
  stage: my_stage
  services:
    - name: mongo:latest
      alias: potato
  variables:
    MONGO_INITDB_ROOT_USERNAME: root
    MONGO_INITDB_ROOT_PASSWORD: example
This allows me to connect within the GitLab CI/CD with "mongodb://root:example@potato:27017/admin?ssl=false"
I need the same locally, so that I can reuse my connection string.
I've added the following under image: mongo:latest:
container_name: potato
But I cannot connect to the server potato. I've tried a few combinations with network, alias, etc. No luck. I don't even understand what I am doing anymore. Isn't there a simple equivalent to give an alias to a container so that my C# app or MongoDB client can access it?
Even the documentation, at https://docs.docker.com/compose/compose-file/#external_links, is useless in my opinion: it mentions random samples not defined elsewhere.
Any help is much appreciated!
What I've tried
I've tried the following: How do I set hostname in docker-compose? Without success.
I have spent a few hours reading the Docker Compose documentation and it's extremely confusing. The fact that most of the questions are out of context, without specific examples, does not help either, because it requires some deeper knowledge.
SOLUTION
Thanks to the replies, it's clear this is a hacky thing that is not really recommended.
I went with the recommendations and I now have the connection string as:
mongodb://root:example@potato:27017/admin?ssl=false
by default. That way, I don't need to change anything for my GitLab CI/CD pipeline, which uses the alias potato for MongoDB.
When running the MongoDB container locally with Docker Compose, it runs on localhost, but I edited my hosts file (C:\Windows\System32\drivers\etc\hosts on Windows, /etc/hosts on Linux) to make any request to potato go to localhost / 127.0.0.1:
127.0.0.1 potato
And that way I can connect to MongoDB from my code or Robo3T as if it were running on a host called potato.
Feel free to comment any better alternative if there is. Ta.
If I understand you correctly, you want to bind a hostname (e.g., potato) to an IP address on your host machine.
Afaik this is not possible, but there are workarounds[1].
Every time you start your docker-compose stack, a network is created between those containers, and there is no way for you to be sure which IP addresses they will get. These could be in 172.17.0.0/24 or 172.14.0.0/24 or anything else really.
The only thing you know for sure is that your host will have a service running at port 27017. So you could say that the hostname potato points to localhost on your host machine by adding 127.0.0.1 potato to the /etc/hosts file on your host.
That way the connection string "mongodb://root:example@localhost:27017/admin?ssl=false" will point to the local port from the perspective of your host machine, while it will point to the Docker container from the perspective of the rest of your docker-compose services.
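For completeness: if you only needed other containers (not the host) to be able to resolve potato, Compose does support network aliases; a minimal sketch, assuming the default network:

services:
  mongo:
    image: mongo:latest
    networks:
      default:
        aliases:
          - potato   # other services on this network can reach mongo as "potato"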
I do have to say that I find this a hacky approach, and as @DavidMaze said, it's normal to need different connection strings depending on the context you use them in.
[1] https://github.com/dlemphers/docker-local-hosts

Is there any way to connect 2 Docker containers using a docker-compose file without docker swarm mode

I want to run a web app and a DB using Docker. Is there any way to connect two Docker containers (the web app container on one machine and the DB container on another machine) using docker-compose files, without swarm mode?
I mean two separate servers.
This is my MongoDB docker-compose file:
version: '2'
services:
  mongodb_container:
    image: mongo:latest
    restart: unless-stopped
    ports:
      - 27017:27017
    volumes:
      - mongodb_data_container:/data/db
volumes:
  mongodb_data_container:
Here is my demowebapp docker-compose file
version: '2'
services:
  demowebapp:
    image: demoapp:latest
    restart: unless-stopped
    volumes:
      - ./uploads:/app/uploads
    environment:
      - PORT=3000
      - ROOT_URL=http://localhost
      - MONGO_URL=mongodb://35.168.21.133/demodb
    ports:
      - 3000:3000
Can anyone suggest how to do this?
Using only one docker-compose.yml with compose version: 2, there is no way to deploy two services on two different machines. That's what version: 3, a stack.yml, and swarm mode are for.
You can, however, deploy to two different machines using two version 2 docker-compose.yml files, but you will have to connect them using different hostnames/IPs than the service names from the compose files.
You shouldn't need to change anything in the sample files you show: you have to connect to the other host's IP address (or DNS name) and the published ports:.
Once you're on a different machine (or in a different VM) none of the details around Docker are visible any more. From the point of view of the system running the Web application, the first system is running MongoDB on port 27017; it might be running on bare metal, or in a container, or port-forwarded from a VM, or using something like HAProxy to pass through from another system; there's literally no way to tell.
The configuration you have to connect to the first server's IP address will work. I'd set up a DNS system if you don't already have one (BIND, AWS Route 53, ...) to avoid needing to hard-code the IP address. You also might look at a service-discovery system (I have had good luck with Hashicorp's Consul in the past) which can send you to "the host system running MongoDB" without needing to know which one that is.
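As an illustration of the DNS suggestion, the web app's compose file could point MONGO_URL at a DNS name instead of the hard-coded IP; mongo.example.internal here is a hypothetical name you'd create in your own DNS system:

services:
  demowebapp:
    image: demoapp:latest
    environment:
      - PORT=3000
      - ROOT_URL=http://localhost
      # hypothetical DNS name for the machine publishing MongoDB on 27017
      - MONGO_URL=mongodb://mongo.example.internal:27017/demodb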

Networking in Docker Compose file

I am writing a Docker Compose file for my web app. If I use links to connect services with each other, do I also need to include ports? And is depends_on an alternative to links? What is best for connecting services with one another in a compose file?
The core setup for this is described in Networking in Compose. If you do absolutely nothing, then one service can call another using its name in the docker-compose.yml file as a host name, using the port the process inside the container is listening on.
Up to startup-order issues, here's a minimal docker-compose.yml that demonstrates this:
version: '3'
services:
  server:
    image: nginx
  client:
    image: busybox
    command: wget -O- http://server/
    # Hack to make the example actually work:
    # command: sh -c 'sleep 1; wget -O- http://server/'
You shouldn't use links: at all. It was an important part of first-generation Docker networking, but it's not useful on modern Docker. (Similarly, there's no reason to put expose: in a Docker Compose file.)
You always connect to the port the process inside the container is running on. ports: are optional; if you have ports:, cross-container calls always connect to the second port number and the remapping doesn't have any effect. In the example above, the client container always connects to the default HTTP port 80, even if you add ports: ['12345:80'] to the server container to make it externally accessible on a different port.
depends_on: affects two things. Try adding depends_on: [server] to the client container in the example (see the sketch below). If you look at the "Starting..." messages that Compose prints out when it starts, this will force server to start before client starts, but this is not a guarantee that server is up and running and ready to serve requests (this is a very common problem with database containers). If you start only part of the stack with docker-compose up client, this also causes server to start with it.
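For concreteness, a sketch of the earlier minimal example with that depends_on: added:

version: '3'
services:
  server:
    image: nginx
  client:
    image: busybox
    command: wget -O- http://server/
    depends_on:
      - server   # forces start order, but does not wait for nginx to be ready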
A more complete typical example might look like:
version: '3'
services:
  server:
    # The Dockerfile COPYs static content into the image
    build: ./server-based-on-nginx
    ports:
      - '12345:80'
  client:
    # The Dockerfile installs
    # https://github.com/vishnubob/wait-for-it
    build: ./client-based-on-busybox
    # ENTRYPOINT and CMD will usually be in the Dockerfile
    entrypoint: wait-for-it.sh server:80 --
    command: wget -O- http://server/
    depends_on:
      - server
SO questions in this space seem to have a number of other unnecessary options. container_name: explicitly sets the name of the container for non-Compose docker commands, rather than letting Compose choose it, and it provides an alternate name for networking purposes, but you don't really need it. hostname: affects the container's internal host name (what you might see in a shell prompt for example) but it has no effect on other containers. You can manually create networks:, but Compose provides a default network for you and there's no reason to not use it.

docker-compose scale with nginx and without environment variable

I use docker-compose to describe the deployment of one of my applications. The application is composed of:
a MongoDB database,
a Node.js application,
an nginx front end serving the static files of the Node.js app.
If I scale the Node.js application (say, to three instances), I would like nginx to adapt automatically to all of them.
Recently I used the following code snippet:
https://gist.github.com/cmoore4/4659db35ec9432a70bca
This is based on the fact that some environment variables are created on link, and change when new servers are present.
But now, with version 2 of the docker-compose file format and the new link system in Docker, those environment variables don't exist anymore.
How can my nginx now detect the scaling of my application?
version: '2'
services:
  nodejs:
    build:
      context: ./
      dockerfile: Dockerfile.nodejs
    image: docker.shadoware.org/passprotect-server:1.0.0
    expose:
      - 3000
    links:
      - mongodb
    environment:
      - MONGODB_HOST=mongodb://mongodb:27017/passprotect
      - NODE_ENV=production
      - DEBUG=App:*
  nginx:
    image: docker.shadoware.org/nginx:1.2
    links:
      - nodejs
    environment:
      - APPLICATION_HOST=nodejs
      - APPLICATION_PORT=3000
  mongodb:
    image: docker.shadoware.org/database/mongodb:3.2.7
Documentation states here that:
Containers for the linked service will be reachable at a hostname identical to the alias, or the service name if no alias was specified.
So I believe that you could just use your service names in the nginx conf file, like:
upstream myservice {
    server yourservice1:3000;
    server yourservice2:3000;
}
as they would be exported as host entries in /etc/hosts for each container.
But if you really want to have that host:port information as environment variables, you could write a script to parse the docker-compose.yml and define an .env file, or do it manually.
UPDATE:
You can get that port information from outside the container; this will return the ports:
docker inspect --format='{{range $p, $conf := .NetworkSettings.Ports}} {{$p}} -> {{(index $conf 0).HostPort}} {{end}}' your_container_id
But if you want to do it from inside a container, then what you want is a service discovery system like ZooKeeper.
There's a long feature request thread in Docker's repo about that.
One workaround solution caught my attention. You could try building your own nginx image based on that.
