I am trying to understand how containers can reach each other by container name — specifically, using a pgAdmin container to connect to a PostgreSQL container through DNS.
In docker-compose v3 I cannot link them, nor does networks: seem to be available.
The main reason I need this is that the containers don't get a static IP address when they spin up, so in pgAdmin I can't connect to the PostgreSQL DB using the same IP every time; a DNS name (i.e. the container name) would work better.
Can we do this with docker-compose, or at least set a static IP address for a specific container?
I have tried creating a user-defined network:
networks:
  backend:
and then using it in the service:
app:
  networks:
    - backend
This causes a docker-compose error about "networks" being an invalid option for app.
docker-compose.yml
version: "0.1"
services:
  devapi:
    container_name: devapi
    restart: always
    build: .
    ports:
      - "3000:3000"
  api-postgres-pgadmin:
    container_name: api-postgres-pgadmin
    image: dpage/pgadmin4:latest
    ports:
      - "5050:80"
    environment:
      - PGADMIN_DEFAULT_EMAIL=stuff@stuff.com
      - PGADMIN_DEFAULT_PASSWORD=12345
  api-postgres:
    container_name: api-postgres
    image: postgres:10
    volumes:
      - ./data:/data/db
    ports:
      - "15432:5432"
    environment:
      - POSTGRES_PASSWORD=12345
Actually, I spot one immediate problem:
version: "0.1"
Why are you doing this? The current version of the compose file format is 3.x. E.g:
version: "3"
See e.g. the Compose file version 3 reference.
The version determines which features are available. It's entirely possible that by setting version: "0.1" you are explicitly disabling support for the networks parameter. You'll note that the reference shows examples using the networks attribute.
As an aside, unless there is a particular reason you need it, I would drop container_name from your compose file, since it makes it impossible to run multiple instances of the same compose file on your host.
networks are available from docker-compose file format version 3, but you are using version: "0.1" in your docker-compose file.
Change version: "0.1" to version: "3" in docker-compose.yml.
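Putting both fixes together, a sketch of how the file from the question could look (some service options trimmed for brevity; the network name backend is my choice, not from the question):

```yaml
version: "3"
services:
  devapi:
    build: .
    ports:
      - "3000:3000"
    networks:
      - backend
  api-postgres-pgadmin:
    image: dpage/pgadmin4:latest
    ports:
      - "5050:80"
    networks:
      - backend
  api-postgres:
    image: postgres:10
    ports:
      - "15432:5432"
    networks:
      - backend
networks:
  backend:
```

With all three services on the user-defined backend network, pgAdmin can reach Postgres by the service name api-postgres on port 5432; no static IP is needed.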
Related
I have created a Docker swarm and am trying to use an overlay network to communicate between the two services deployed on that swarm.
The docker-compose file of the first service looks like:
version: '3'
services:
  web:
    container_name: "eureka"
    image: eureka
    environment:
      EUREKA_HOST: eureka
    ports:
      - 8070:8070
    networks:
      - net_swarm
networks:
  net_swarm:
    external:
      name: net_swarm
The second:
version: '3'
services:
  web:
    image: zuul-service
    environment:
      EUREKA_HOST: eureka_web
    ports:
      - 8069:8069
    networks:
      - net_swarm
networks:
  net_swarm:
    external:
      name: net_swarm
I ran docker deploy --compose-file docker-compose.yml eureka to create the first service, which spun up with the service name eureka_web. As seen above, that name is referenced in the compose file of the second service as EUREKA_HOST; however, since "eureka_web" contains an underscore, the host isn't getting resolved when running the second file (primarily because of the underscore).
Can I somehow override the underscore in the service name, or is there any other workaround?
Don't give a container name.
That way your service name will act as the hostname.
Also, a hostname with underscores should not cause any problem; try finding the actual root cause.
Edit:
Your service name and hostname are both web. I can't say anything about this line without looking at the Dockerfile:
environment:
  EUREKA_HOST: eureka
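If the underscore really does turn out to be the problem, one possible workaround (my assumption, not part of the answer above) is to attach an underscore-free alias to the server's service on the shared overlay network and point the client at that alias:

```yaml
version: '3'
services:
  web:
    image: eureka
    ports:
      - 8070:8070
    networks:
      net_swarm:
        aliases:
          - eureka   # underscore-free DNS name on the overlay network
networks:
  net_swarm:
    external:
      name: net_swarm
```

The client could then set EUREKA_HOST: eureka instead of eureka_web.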
I have the following docker-compose file content:
version: '3.4'
services:
  local-app:
    build: ./app/
    command: node app
    ports:
      - '7001:7001'
    links:
      - search-svc
networks:
  docker_app-network:
    external: true
external_links:
  -search-svc
Basically, what I'm trying to do is link the local-app container with another, already-running container, search-svc. When I run docker-compose I get the following error:
The Compose file './docker-compose.yaml' is invalid because:
Invalid top-level property "external_links". Valid top-level sections for this Compose file are: secrets, version, volumes, services, configs, networks, and extensions starting with "x-". You might be seeing this error because you're using the wrong Compose file version. Either specify a supported version (e.g "2.2" or "3.3") and place your service definitions under the services key, or omit the version key and place your service definitions at the root of the file to use version 1.
I have read the documentation but I can't find any solution to my problem.
Can anyone suggest anything that might help?
Thanks in advance
YAML files are space sensitive. You tried to define external_links at the top level of the file rather than as part of a service. This should be syntactically correct:
version: '3.4'
services:
  local-app:
    build: ./app/
    command: node app
    ports:
      - '7001:7001'
    links:
      - search-svc
    external_links:
      - search-svc
networks:
  docker_app-network:
    external: true
That said, linking is deprecated in Docker; it is preferred to use a common network (excluding the default bridge network named bridge) and then rely on the built-in DNS server for service discovery. It looks like you have defined your common network but didn't use it. This would place your service on that network and rely on DNS:
version: '3.4'
services:
  local-app:
    build: ./app/
    command: node app
    ports:
      - '7001:7001'
    networks:
      - docker_app-network
networks:
  docker_app-network:
    external: true
In my docker-compose file, several containers need to know the hostname of one specific container, including that container itself.
Links will not work, since a container cannot link to itself.
Basically, what I am looking for is a way to alias localhost in docker-compose.
I think the correct answer is from this comment:
Aliases can be defined as part of the network declaration for a service. See aliases in the Compose file reference for more details. – King Chung Huang Apr 24 '17 at 15:18
Here is the example from the docs:
version: '2'
services:
  web:
    build: ./web
    networks:
      - new
  worker:
    build: ./worker
    networks:
      - legacy
  db:
    image: mysql
    networks:
      new:
        aliases:
          - database
      legacy:
        aliases:
          - mysql
networks:
  new:
  legacy:
Within this compose file the db service can be reached by the alias database on the new network, and by the alias mysql on the legacy network.
You should avoid using links. Instead, services on the same Docker network can find each other by using service names as DNS names. Use that to reference the specific container you described, including when it references itself.
For example, in the following made-up Docker Compose file, if someservice were a web server serving on port 80, anotherservice would be able to connect to it at http://someservice/, because they're on a common network, the_net.
version: '3'
services:
  someservice:
    image: someserviceimage
    networks:
      - the_net
  anotherservice:
    image: anotherserviceimage
    networks:
      - the_net
networks:
  the_net:
someservice can also reach itself at http://someservice/.
extra_hosts did the trick for me.
extra_hosts:
  - "hostname:127.0.0.1"
From the docker-compose docs:
extra_hosts
Add hostname mappings. Use the same values as the docker client --add-host parameter.
extra_hosts:
  - "somehost:162.242.195.82"
  - "otherhost:50.31.209.229"
An entry with the IP address and hostname will be created in /etc/hosts inside containers for this service, e.g.:
162.242.195.82 somehost
50.31.209.229 otherhost
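For context, extra_hosts is a service-level key, so it sits under a service rather than at the top level of the file; a minimal sketch (the service and image names are placeholders):

```yaml
version: '3'
services:
  app:
    image: alpine:latest
    extra_hosts:
      - "somehost:162.242.195.82"
      - "otherhost:50.31.209.229"
```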
Here is one more useful parameter, external_links, that hasn't been mentioned in this thread:
external_links:
  - redis_1
  - project_db_1:mysql
  - project_db_2:postgresql
  - {CONTAINER}:{ALIAS}
It is similar to extra_hosts:
extra_hosts:
  - "somehost:127.0.0.1"
  - "otherhost:127.0.0.2"
  - {HOSTNAME}:{IP}
but allows dynamic container IPs.
In addition, aliases may be added at the network level. Here are two answers [ 1, 2 ] on how to add an alias to a network, but I want to add extra info:
If you didn't set a network in your compose file, you may use default to add an alias to the default network:
services:
  db:
    networks:
      default:
        aliases:
          - database
          - postgres
Thanks to this answer
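A sibling service on the same default network can then reach the db by either alias; a sketch (the web service, its image, and the DB_HOST variable are assumptions for illustration):

```yaml
services:
  web:
    image: myapp:latest
    environment:
      DB_HOST: database   # resolves via the alias on the default network
  db:
    image: postgres:10
    networks:
      default:
        aliases:
          - database
          - postgres
```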
I have two different Docker containers and each has a different image. Each app in the containers uses non-conflicting ports. See the docker-compose.yml:
version: "2"
services:
  service_a:
    container_name: service_a.dev
    image: service_a.dev
    ports:
      - "6473:6473"
      - "6474:6474"
      - "1812:1812"
    depends_on:
      - postgres
    volumes:
      - ../configs/service_a/var/conf:/opt/services/service_a/var/conf
  postgres:
    container_name: postgres.dev
    hostname: postgres.dev
    image: postgres:9.6
    ports:
      - "5432:5432"
    volumes:
      - ../configs/postgres/scripts:/docker-entrypoint-initdb.d/
I can cURL each container successfully from the host machine (macOS), e.g. curl -k https://localhost:6473/service_a/api/version works. What I'd like is to refer to the postgres container from the service_a container via localhost, as if the two containers were one and shared the same localhost. I know it's possible to use the hostname postgres.dev from inside the service_a container, but I'd like to be able to use localhost. Is this possible? Please note that I am not very well versed in networking or Docker.
Mac version: 10.12.4
Docker version: Docker version 17.03.0-ce, build 60ccb22
I have done quite some prior research, but couldn't find a solution.
Relevant: https://forums.docker.com/t/localhost-and-docker-compose-networking-issue/23100/2
The right way: don't use localhost. Instead use docker's built in DNS networking and reference the containers by their service name. You shouldn't even be setting the container name since that breaks scaling.
The bad way: if you don't want to use the docker networking feature, then you can switch to host networking, but that turns off a very key feature and other docker capabilities like the option to connect containers together in their own isolated networks will no longer work. With that disclaimer, the result would look like:
version: "2"
services:
  service_a:
    container_name: service_a.dev
    image: service_a.dev
    network_mode: "host"
    depends_on:
      - postgres
    volumes:
      - ../configs/service_a/var/conf:/opt/services/service_a/var/conf
  postgres:
    container_name: postgres.dev
    image: postgres:9.6
    network_mode: "host"
    volumes:
      - ../configs/postgres/scripts:/docker-entrypoint-initdb.d/
Note that I removed port publishing from the container to the host, since you're no longer in a container network. And I removed the hostname setting since you shouldn't change the hostname of the host itself from a docker container.
The linked forum posts you reference show how when this is a VM, the host cannot communicate with the containers as localhost. This is an expected limitation, but the containers themselves will be able to talk to each other as localhost. If you use a VirtualBox based install with docker-toolbox, you should be able to talk to the containers by the virtualbox IP.
The really wrong way: abuse the container network mode. The mode is available for debugging container networking issues and specialized use cases and really shouldn't be used to avoid reconfiguring an application to use DNS. And when you stop the database, you'll break your other container since it will lose its network namespace.
For this, you'll likely need to run two separate docker-compose.yml files because docker-compose will check for the existence of the network before taking any action. Start with the postgres container:
version: "2"
services:
  postgres:
    container_name: postgres.dev
    image: postgres:9.6
    ports:
      - "5432:5432"
    volumes:
      - ../configs/postgres/scripts:/docker-entrypoint-initdb.d/
Then you can make a second service in that same network namespace:
version: "2"
services:
  service_a:
    container_name: service_a.dev
    image: service_a.dev
    network_mode: "container:postgres.dev"
    ports:
      - "6473:6473"
      - "6474:6474"
      - "1812:1812"
    volumes:
      - ../configs/service_a/var/conf:/opt/services/service_a/var/conf
Specifically on Mac, during local testing, I managed to get multiple containers working using the docker.for.mac.localhost approach. I documented it at http://nileshgule.blogspot.sg/2017/12/docker-tip-workaround-for-accessing.html
I have a couple of app containers that I want to connect to a mongodb container. I tried external_links, but I cannot connect to mongodb.
I get:
MongoError: failed to connect to server [mongodb:27017] on first connect
Do I have to add the containers to the same network to get external_links working?
MongoDB:
version: '2'
services:
  mongodb:
    image: mongo:3.4
    restart: always
    ports:
      - "27017:27017"
    volumes:
      - data:/data/db
volumes:
  data:
App:
version: '2'
services:
  app-dev:
    restart: always
    build: repository/
    ports:
      - "3000:80"
    env_file:
      - ./environment.env
    external_links:
      - mongodb_mongodb_1:mongodb
Networks:
# sudo docker network ls
NETWORK ID      NAME              DRIVER    SCOPE
29f8bae3e136    bridge            bridge    local
67d5519cb2e6    dev_default       bridge    local
9e7097c844cf    host              host      local
481ee4301f7c    mongodb_default   bridge    local
4275508449f6    none              null      local
873a46298cd9    prod_default      bridge    local
Documentation at https://docs.docker.com/compose/compose-file/#/externallinks says
If you’re using the version 2 file format, the externally-created containers must be connected to at least one of the same networks as the service which is linking to them.
Ex:
Create a new docker network
docker network create -d bridge custom
docker-compose-1.yml
version: '2'
services:
  postgres:
    image: postgres:latest
    ports:
      - 5432:5432
    networks:
      - custom
networks:
  custom:
    external: true
docker-compose-2.yml
version: '2'
services:
  app:
    image: training/webapp
    networks:
      - custom
    external_links:
      - postgres:postgres
networks:
  custom:
    external: true
Yuva's answer above for version 2 holds good for version 3 as well.
The documentation for external_links isn't clear enough.
For more clarity, I've pasted the version 3 variation with annotations:
version: '3'
services:
  app:
    image: training/webapp
    networks:
      - <<network created by other compose file>>
    external_links:
      - postgres:postgres
networks:
  <<network created by other compose file>>:
    external: true
Recently I faced a name resolution failure while trying to link two containers handled by docker-compose v3, a gRPC server and client in my case; it failed even with external_links.
I'll probably duplicate some of the info posted here, but will try to summarize, as all of this helped me solve the issue.
From external_links docs (as mentioned in earlier answer):
If you’re using the version 2 or above file format, the externally-created containers must be connected to at least one of the same networks as the service that is linking to them.
The following configuration solved the issue.
project-grpc-server/docker-compose.yml
version: '3'
services:
  app:
    networks:
      - some-network
networks:
  some-network:
Server container configured as expected.
project-grpc-client/docker-compose.yml
services:
  app:
    external_links:
      # Assign an easy alias to the target container
      - project-grpc-server_app_1:server
    networks:
      # Join the target network (where the server resides)
      - project-grpc-server_some-network
networks:
  # Declare the target network
  project-grpc-server_some-network:
    # The declared network already exists (it should be used, not created)
    external: true
When using the defaults (no container_name configured), the trick in configuring the client container is the prefixes. In my case the network name got the prefix project-grpc-server_ when working with docker-compose, followed by the name itself, some-network (giving project-grpc-server_some-network). So fully qualified network names should be used when dealing with separate builds.
While the container name is obvious, since it appears on screen from time to time, the full network name is not an easy-to-guess candidate when first facing this part of Docker, unless you run docker network ls.
I'm not a Docker expert, so please don't judge too strictly if all this is obvious and essential in the Docker world.