Using docker service names from outside the network

My app has two dependencies, which I specify in my docker-compose file: a postgres service and a kafka service:
services:
  postgres:
    image: postgres:9.6-alpine
    ports:
      - "5432:5432"
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
I run my code and tests outside the docker network, and use these two containers as my dependencies.
As both publish ports, I can configure my app to reach them at localhost:5432 and localhost:9092. This works.
The problem I have is when I want to test the app image itself, I add this as a service to the docker-compose file:
app:
  image: myapp
  links:
    - postgres
    - kafka
The app is still configured to use localhost, so I allow the app container to access my network using --net=host.
While the app container can now reach localhost:5432 and localhost:9092 (confirmed by curling from inside the container), the host names fail to resolve when the code runs and the dependencies are unreachable - possibly because using localhost from inside the container confuses the client libraries? I'm really not sure.
It feels like the use of localhost in the app configuration isn't the right approach here. Is it possible to refer to the service names 'postgres' and 'kafka' from outside the docker network?

Why do you want to continue using localhost:xxx in your app?
The best approach for you is to change the connection strings in your application when it is launched from docker-compose. Just use postgres:5432 and kafka:9092 and everything will work, because inside the docker-compose network all services can reach each other by their service names.
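For example, a minimal sketch of that approach, added alongside the postgres and kafka services from your file (the environment variable names are hypothetical - use whatever configuration keys your app actually reads):
  app:
    image: myapp
    depends_on:
      - postgres
      - kafka
    environment:
      # hypothetical variable names - use whatever keys your app actually reads
      - POSTGRES_URL=postgres:5432
      - KAFKA_BROKERS=kafka:9092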
If for some reason you insist on using localhost as the connection target, you need to switch all services to host networking mode. But remember - in this case ports are not published, so you access each service on the port it actually listens on.
version: '3'
services:
  postgres:
    image: postgres:9.6-alpine
    network_mode: "host"
  kafka:
    image: wurstmeister/kafka
    network_mode: "host"
  app:
    image: myapp
    network_mode: "host"
And by the way, forget about links. They are deprecated.

Related

Why is it that I am able to access a container outside the bridge network?

I started mysqldb in a Docker container. I was surprised that I could connect to it via localhost using the command below:
mysql -uroot -proot -P3306 -h localhost
I thought that Docker containers started on the bridge network wouldn't be available outside that network. How is the mysql CLI able to connect to this instance?
Below is my docker-compose file that runs the mysqldb-docker instance:
version: '3.8'
services:
  mysqldb-docker:
    image: 'mysql:8.0.27'
    restart: 'unless-stopped'
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_PASSWORD=root
      - MYSQL_DATABASE=reco-tracker-dev
    volumes:
      - mysqldb:/var/lib/mysql
  reco-tracker-docker:
    image: 'reco-tracker-docker:v1'
    ports:
      - "8083:8083"
    environment:
      - SPRING_DATASOURCE_USERNAME=root
      - SPRING_DATASOURCE_PASSWORD=root
      - SPRING_DATASOURCE_URL="jdbc:mysql://mysqldb-docker:3306/reco-tracker-dev"
    depends_on: [mysqldb-docker]
    env_file:
      - ./.env
volumes:
  mysqldb:
You have published the port(s). That means you can reach them on the host system on the published port.
By default, when you create or run a container using docker create or docker run, it does not publish any of its ports to the outside world. To make a port available to services outside of Docker, or to Docker containers which are not connected to the container’s network, use the --publish or -p flag. This creates a firewall rule which maps a container port to a port on the Docker host to the outside world.
The critical section in your config is below. You have added a ports key to your service. This is Compose's way of publishing ports. The left-hand side is the host port it is published on; the right-hand side is the port the container actually listens on.
ports:
  - "3306:3306"
Also keep in mind that when you start Compose, a default network is created that joins all containers in the compose stack. That's why these containers can find each other, with the service name and/or container name as the hostname.
You don't need to publish the port(s) like you did in order for them to be able to communicate - I guess that's why you did it. You can, and probably should, remove any port mapping from internal services if possible. This adds extra security to your setup, because then it behaves the way you describe: only containers in the same network can find each other.
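A sketch of that suggestion, assuming only the Spring app needs to be reachable from the host (the other keys from your file stay as they were): drop the ports block from the database service and keep it on the app.
services:
  mysqldb-docker:
    image: 'mysql:8.0.27'
    # no ports: block - only reachable from containers on the compose network
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=reco-tracker-dev
  reco-tracker-docker:
    image: 'reco-tracker-docker:v1'
    ports:
      - "8083:8083"   # still published, so the app is reachable from the host
    environment:
      - SPRING_DATASOURCE_URL=jdbc:mysql://mysqldb-docker:3306/reco-tracker-dev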

How to connect my Docker container (frontend) to a containerized database running on a different VM

Unable to connect to containers running on separate docker hosts
I've got two Docker Tomcat containers running on two different Ubuntu VMs. System-A has a webservice running and System-B has a db. I haven't been able to figure out how to connect the application running on System-A to the db running on System-B. When I run the database on System-A, the application (which is also running on System-A) can connect to the database. I'm using docker-compose to set up the network (which works fine when both containers are running on the same VM). I've exec'd into the application container on System-A and looked at /etc/hosts, and I think what's missing is the IP address of System-B.
services:
  db:
    image: mydb
    hostname: mydbName
    ports:
      - "8012:8012"
    networks:
      data:
        aliases:
          - mydbName
  api:
    image: myApi
    hostname: myApiName
    ports:
      - "8810:8810"
    networks:
      data:
networks:
  data:
You would configure this exactly the same way you would if Docker weren't involved: configure the Tomcat instance with the DNS name or IP address of the other server. You would need to make sure the service is published outside of Docker space using a ports: directive.
On server-a.example.com you could run this docker-compose.yml file:
version: '3'
services:
  api:
    image: myApi
    ports:
      - "8810:8810"
    environment:
      DATABASE_URL: "http://server-b.example.com:8012"
And on server-b.example.com:
version: '3'
services:
  db:
    image: mydb
    ports:
      - "8012:8012"
In principle it would be possible to set up an overlay network connecting the two hosts, but this is a significantly more complicated setup.
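For completeness, a rough sketch of that overlay approach, under the assumption that the two hosts have been joined into a swarm (docker swarm init on one, docker swarm join on the other) and that an attachable overlay network has been created once with docker network create -d overlay --attachable tomcat-net (the network name is hypothetical). Each host's compose file would then reference it as external:
version: '3'
services:
  api:
    image: myApi
    networks:
      - tomcat-net
networks:
  tomcat-net:
    external: true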
(You definitely don't want to use docker exec to modify /etc/hosts in a container: you'll have to repeat this step every time you delete and recreate the container, and manually maintaining hosts files is tedious and error-prone, particularly if you're moving containers between hosts. Consul could work as a service-discovery system that provides a DNS service.)

Why would I use Docker links when I still need to hardcode the address?

Hello, I have not understood the following.
In the Docker world, from what I understand, we have:
A port that the application exposes
A port that the container exposes for the application
A port on the host that the container port is mapped to
So given these facts, in a configuration of two containers within docker-compose, suppose:
app  | Host Port | Container Port | App Port
app1 | 8300      | 8200           | 8200
app2 | 9300      | 9200           | 9200
If app2 needs to communicate with app1 directly through the Docker host, why would I use links, since I still have to somehow hardcode in the environment of app2 the hostname and port of app1 (the container_name of app1 and the port of app1's container)? (In our example: port=8200 and host=app1Inst.)
app1:
  image: app1img
  container_name: app1Inst
  ports:
    - 8300:8200   # application code exposes port 8200 - e.g. sends to socket on 8200
  networks:
    - ret-net
app2:
  image: app2img
  container_name: app2Inst
  ports:
    - 9300:9200
  depends_on:
    - app1
  networks:
    - ret-net
  links:
    - app1
  # i still need to say here:
  # environment:
  #   - host=app1Inst
  #   - port=8200   # what do I gain by using links?
networks:
  ret-net:
You do not need to use links on modern Docker. But you definitely should not hard-code host names or ports anywhere. (See for example every SO question that notes that you can interact with services as localhost when running directly on a developer system but need some other host name when running in Docker.) The docker-compose.yml file is deploy-time configuration, and it is a good place to set environment variables that point from one service to another.
As you note in your proposed docker-compose.yml file, Docker networks and the associated DNS service basically completely replace links. Links existed first but aren’t as useful any more.
Also note that Docker Compose will create a default network for you, and that the service block names in the docker-compose.yml file are valid as host names. You could reduce that file to:
version: '3'
services:
  app1:
    image: app1img
    ports:
      - '8300:8200'
  app2:
    image: app2img
    ports:
      - '9300:9200'
    environment:
      APP1_URL: 'http://app1:8200'
    depends_on:
      - app1
Short answer: no, you don't need links; they are now deprecated in Docker and not recommended.
https://docs.docker.com/network/links/
Having said that, since both your containers are on the same network ret-net, they will be able to discover and communicate freely with each other on all ports, even without the ports setting.
The ports setting comes into play for external access to the container, e.g. from the host machine.
The environment setting just sets environment variables within the container, so the app knows how to find app1Inst & the right port 8200.

Service accessing another service on 127.0.0.1?

I'd like my web Docker container to access Redis on 127.0.0.1:6379 from within the web container. I've set up my Docker Compose file as follows, but I get ECONNREFUSED:
version: "3"
services:
web:
build: .
ports:
- 8080:8080
command: ["test"]
links:
- redis:127.0.0.1
redis:
image: redis:alpine
ports:
- 6379
Any ideas?
The short answer to this is "don't". Docker containers each get their own loopback interface, 127.0.0.1, that is separate from the host loopback and from that of other containers. You can't redefine 127.0.0.1, and if you could, that would almost certainly break other things.
There is a technically possible way to do it, by either running all containers directly on the host network, with:
network_mode: "host"
However, that removes the docker network isolation that you'll want with containers.
You can also attach one container to the network of another container (so they have the same loopback interface) with:
docker run --net container:$container_id ...
but I'm not sure if there's a syntax to do this in docker-compose, and it's not available in swarm mode since containers may run on different nodes. The main use I've had for this syntax is to attach network debugging tools like nicolaka/netshoot.
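(For what it's worth, Compose does appear to support this pattern via network_mode with a service reference. A minimal sketch, assuming the redis service from the question:)
  web:
    build: .
    command: ["test"]
    network_mode: "service:redis"   # share redis's network namespace, so 127.0.0.1:6379 inside web reaches redis
  redis:
    image: redis:alpine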
What you should do instead is make the location of the redis database a configuration parameter to your webapp container. Pass the location in as an environment variable, config file, or command line parameter. If the web app can't support this directly, update the configuration with an entrypoint script that runs before you start your web app. This would change your compose yml file to look like:
version: "3"
services:
web:
# you should include an image name
image: your_webapp_image_name
build: .
ports:
- 8080:8080
command: ["test"]
environment:
- REDIS_URL=redis:6379
# no need to link, it's deprecated, use dns and the network docker creates
#links:
# - redis:127.0.0.1
redis:
image: redis:alpine
# no need to publish the port if you don't need external access
#ports:
# - 6379

How to share localhost between two different Docker containers?

I have two different Docker containers and each has a different image. Each app in the containers uses non-conflicting ports. See the docker-compose.yml:
version: "2"
services:
service_a:
container_name: service_a.dev
image: service_a.dev
ports:
- "6473:6473"
- "6474:6474"
- "1812:1812"
depends_on:
- postgres
volumes:
- ../configs/service_a/var/conf:/opt/services/service_a/var/conf
postgres:
container_name: postgres.dev
hostname: postgres.dev
image: postgres:9.6
ports:
- "5432:5432"
volumes:
- ../configs/postgres/scripts:/docker-entrypoint-initdb.d/
I can curl each container successfully from the host machine (Mac OS), e.g. curl -k https://localhost:6473/service_a/api/version works. What I'd like is to be able to refer to the postgres container from the service_a container via localhost, as if these two containers were one and shared the same localhost. I know it's possible if I use the hostname postgres.dev from inside the service_a container, but I'd like to be able to use localhost. Is this possible? Please note that I am not very well versed in networking or Docker.
Mac version: 10.12.4
Docker version: Docker version 17.03.0-ce, build 60ccb22
I have done quite some prior research, but couldn't find a solution.
Relevant: https://forums.docker.com/t/localhost-and-docker-compose-networking-issue/23100/2
The right way: don't use localhost. Instead, use Docker's built-in DNS networking and reference the containers by their service name. You shouldn't even be setting the container name, since that breaks scaling.
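A minimal sketch of the DNS approach, assuming service_a reads its database location from environment variables (PGHOST/PGPORT here are only an example - use whatever configuration mechanism service_a actually supports):
services:
  service_a:
    image: service_a.dev
    environment:
      - PGHOST=postgres   # the service name doubles as the hostname
      - PGPORT=5432
    depends_on:
      - postgres
  postgres:
    image: postgres:9.6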
The bad way: if you don't want to use the Docker networking feature, you can switch to host networking, but that turns off a key feature, and other Docker capabilities, such as the option to connect containers together in their own isolated networks, will no longer work. With that disclaimer, the result would look like:
version: "2"
services:
service_a:
container_name: service_a.dev
image: service_a.dev
network_mode: "host"
depends_on:
- postgres
volumes:
- ../configs/service_a/var/conf:/opt/services/service_a/var/conf
postgres:
container_name: postgres.dev
image: postgres:9.6
network_mode: "host"
volumes:
- ../configs/postgres/scripts:/docker-entrypoint-initdb.d/
Note that I removed port publishing from the container to the host, since you're no longer in a container network. And I removed the hostname setting since you shouldn't change the hostname of the host itself from a docker container.
The forum post you linked shows how, when Docker runs inside a VM, the host cannot communicate with the containers as localhost. This is an expected limitation, but the containers themselves will be able to talk to each other as localhost. If you use a VirtualBox-based install with docker-toolbox, you should be able to talk to the containers via the VirtualBox IP.
The really wrong way: abuse the container network mode. The mode is available for debugging container networking issues and specialized use cases and really shouldn't be used to avoid reconfiguring an application to use DNS. And when you stop the database, you'll break your other container since it will lose its network namespace.
For this, you'll likely need to run two separate docker-compose.yml files because docker-compose will check for the existence of the network before taking any action. Start with the postgres container:
version: "2"
services:
postgres:
container_name: postgres.dev
image: postgres:9.6
ports:
- "5432:5432"
volumes:
- ../configs/postgres/scripts:/docker-entrypoint-initdb.d/
Then you can make a second service in that same network namespace:
version: "2"
services:
service_a:
container_name: service_a.dev
image: service_a.dev
network_mode: "container:postgres.dev"
ports:
- "6473:6473"
- "6474:6474"
- "1812:1812"
volumes:
- ../configs/service_a/var/conf:/opt/services/service_a/var/conf
Specifically for Mac and during local testing, I managed to get multiple containers working using the docker.for.mac.localhost approach. I documented it at http://nileshgule.blogspot.sg/2017/12/docker-tip-workaround-for-accessing.html
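A minimal sketch of that workaround, assuming Docker for Mac and a service that needs to reach a port published on the Mac host (DB_HOST/DB_PORT are hypothetical variable names):
  service_a:
    image: service_a.dev
    environment:
      # docker.for.mac.localhost resolves to the Mac host from inside a container,
      # so this reaches postgres through its published host port
      - DB_HOST=docker.for.mac.localhost
      - DB_PORT=5432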
