I am using Docker for Windows and have a Docker Compose file that creates a couple of custom applications as well as RabbitMQ and Seq containers. These all talk to each other via instance names on the local network created by docker-compose, for example:
version: '3.4'
services:
legacydata.workerservice:
container_name: legacydata.workerservice
image: ${DOCKER_REGISTRY-}legacydataworkerservice
build:
context: .
dockerfile: LegacyData.Worker/Dockerfile
depends_on:
- rabbitmq
legacydata.consumer:
container_name: legacydata.consumer
image: ${DOCKER_REGISTRY-}legacydataconsumer
build:
context: .
dockerfile: LegacyData.Consumer/Dockerfile
depends_on:
- rabbitmq
rabbitmq:
image: rabbitmq:3-management-alpine
container_name: rabbitmq
environment:
- RABBITMQ_ERLANG_COOKIE='secretcookie'
- RABBITMQ_DEFAULT_USER=user
- RABBITMQ_DEFAULT_PASS=password
ports:
- 5672:5672
- 15672:15672
## Move Seq to Azure ACI
# seq:
# image: datalust/seq:latest
# container_name: seq
# ports:
# - 5341:80
# environment:
# ACCEPT_EULA: Y
I want to move the Seq instance into Azure ACI (I have this running and can access it as expected, for example at http://myseq.southuk.azurecontainer.io).
How do I configure Docker so that my local containers can access both each other and internet resources?
You can create a container group that contains all the containers; the containers inside a container group can communicate with each other through the ports they expose. You can follow the example here. By default, the containers can also access the Internet without any problem.
The only catch is that you need to build all the images yourself locally from the Dockerfiles; ACI does not build images for you.
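On the local side nothing special is needed: containers on the Compose network can reach the Internet by default, so you can simply point your services at the ACI address. A minimal sketch, assuming your worker reads its Seq address from an environment variable (SEQ_SERVER_URL is a hypothetical name here; use whatever your logging configuration actually reads, and whichever port you exposed on the container group):
legacydata.workerservice:
  environment:
    # hypothetical variable name for illustration
    - SEQ_SERVER_URL=http://myseq.southuk.azurecontainer.io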
Hello, I have multiple projects that each have their own Dockerfiles and docker-compose.yml files. I am not too familiar with how I would set up the networking between these projects so that they could share the same databases and talk to one another. Does anyone have suggestions?
Right now, in one of the projects, I am just pulling all the Dockerfiles into a single docker-compose.yml and setting up all the services I need from all the other projects in that one file. I do not think this is ideal, and there is a high level of coupling between the services.
version: "3"
services:
db:
image: mysql/mysql-server
ports:
- 3306:3306
mongo:
image: mongo
restart: always
rails_app:
build:
context: ${RAILS_APP_PATH}
dockerfile: Dockerfile
volumes:
- ${RAILS_APP_PATH}:/application
ports:
- 4000:4000
depends_on:
- db
- mongo
links:
- db
- mongo
frontend:
build:
context: ${FRONTEND_PATH}
ports:
- ${EXPOSED_PORT}:${EXPOSED_PORT}
depends_on:
- go_services
links:
- go_services
go_services:
build:
context: .
dockerfile: Dockerfile
ports:
- "8080:8080"
depends_on:
- db
- mongo
- rails_app
links:
- db
- mongo
- rails_app
The trick is to use an external Docker network.
Set up the network, and the containers can talk to each other by their service names.
Set up the network on the host:
docker network create my-net
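Once the stacks are running, you can confirm which containers have attached to the network with:
docker network inspect my-net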
First compose file
version: '3.9'
services:
mymongo:
image: mongo:latest
restart: unless-stopped
container_name: mongo
environment:
MONGO_INITDB_DATABASE: mymongo
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: password
volumes:
- ./database:/data/db
ports:
- "27017:27017"
networks:
default:
external: true
name: my-net
Second compose file
version: '3.9'
services:
ui:
build:
context: ./build
dockerfile: Dockerfile_ui
image: ui
restart: "no"
container_name: ui
ports:
- "8005:3000"
command: ["npm", "start"]
networks:
default:
external: true
name: my-net
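With both stacks attached to my-net, the ui container can reach the database under its service name mymongo (the container_name mongo resolves as well). For example, a connection string using the credentials above might look like:
mongodb://root:password@mymongo:27017/mymongo?authSource=admin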
You can do this without any special Compose setup, if:
- each project is self-contained (they do not share databases)
- the service locations are configurable via environment variables
- you don't mind communicating via the host
If you're thinking about scaling up this project at all, this approach can look attractive. It will work even if you're running each Compose file on a different host, and it translates well into clustered environments like Kubernetes.
Go ahead and break up your Compose file into several independent ones:
# rails/docker-compose.yml
version: '3.8'
services:
db:
image: mysql/mysql-server
app:
build: .
ports: ['4000:4000']
depends_on: [db]
# go/docker-compose.yml
services:
mongo:
image: mongo
service:
build: .
ports: ['8080:8080']
depends_on: [mongo]
environment:
- RAILS_APP_URL
The very last line here passes the RAILS_APP_URL environment variable from the host environment into the container.
You can start the Rails application independently:
docker-compose -f ./rails/docker-compose.yml up -d
You need to find some hostname through which the container can call back to the host. On macOS and Windows hosts, Docker provides the special hostname host.docker.internal for this. You can then connect the client container to the published port of its server:
export RAILS_APP_URL=http://host.docker.internal:4000
docker-compose -f ./go/docker-compose.yml up
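On a macOS or Windows host you can sanity-check that the published port is reachable from inside a container, for instance with a stock curl image (assuming the Rails stack is already up):
docker run --rm curlimages/curl http://host.docker.internal:4000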
If you're doing development, you can run the service you're working on locally, run its dependencies in containers, and point the environment variable at the container:
go build -o ./server ./cmd/server
export RAILS_APP_URL=http://localhost:4000
./server
If you want to run this setup on multiple hosts but without using a dedicated cluster manager like Docker Swarm or Kubernetes, set the environment variable to point at the DNS name of the host running the service. If you did want to translate this to Kubernetes, a Helm "chart" would be analogous, containing the Deployment, Service, etc. and dependencies for a single component, and you could configure the other service's URL through Helm values.
I'm a little bit confused about Docker and network communication. I tried many things, but nothing worked :-(
I have following docker compose:
version: '3'
services:
nginx:
container_name: nginx
image: nginx:stable-alpine
restart: unless-stopped
tty: true
ports:
- 80:80
volumes:
- ./nginx/conf.d:/etc/nginx/conf.d:ro
depends_on:
- app
networks:
- frontend
- backend
app:
restart: unless-stopped
tty: true
build:
context: .
dockerfile: Dockerfile
container_name: app
expose:
- "9090"
ports:
- 9090:9090
networks:
- backend
networks:
frontend:
backend:
And I would like to communicate:
- from nginx to app (this probably works)
- from app to PostgreSQL, which is installed on the server (not in a Docker container)
I cannot get this to work; I tried many things, but something is wrong :-(
You can choose either of these two options:
Make your PostgreSQL server listen on all your network interfaces (or only on the Docker bridge, for a more secure but more complex setup). To achieve that, make sure your config looks like this:
# grep listen /var/lib/pgsql/data/postgresql.conf
listen_addresses = '*'
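For this option you will usually also need a pg_hba.conf entry that allows connections from the Docker bridge subnet (172.17.0.0/16 is the common default for docker0; verify yours with docker network inspect bridge), followed by a reload of PostgreSQL:
# /var/lib/pgsql/data/pg_hba.conf
host    all    all    172.17.0.0/16    md5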
Use host network mode in your Docker Compose file, which runs the container in your host's network namespace instead of creating a new network:
network_mode: "host"
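In host mode the container shares the host's network namespace, so ports: mappings no longer apply and the app can reach PostgreSQL at localhost:5432 (assuming the default port). A minimal sketch for the app service above, on a Linux host:
app:
  build:
    context: .
    dockerfile: Dockerfile
  network_mode: "host"  # ports: and networks: are not used in this mode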
I have two Docker containers. One container is a database and the other is a web application.
The web application calls the database at http://localhost:7200, but the web application container cannot reach the database container.
I tried this docker-compose.yml, but it does not work:
version: '3'
services:
web:
# will build ./docker/web/Dockerfile
build:
context: .
dockerfile: ./docker/web/Dockerfile
links:
- graph-db
depends_on:
- graph-db
ports:
- "8080:8080"
environment:
- WAIT_HOSTS=graph-db:7200
networks:
- backend
graph-db:
# will build ./docker/graph-db/Dockerfile
build:
./docker/graph-db
hostname: graph-db
ports:
- "7200:7200"
networks:
backend:
driver: "bridge"
So I have two containers:
The web application at http://localhost:8080/reasoner calls a database at http://localhost:7200, which resides in a different container.
However, the database container is not reachable from the web container.
SOLUTION
version: '3'
services:
web:
# will build ./docker/web/Dockerfile
build:
context: .
dockerfile: ./docker/web/Dockerfile
depends_on:
- graph-db
ports:
- "8080:8080"
environment:
- WAIT_HOSTS=graph-db:7200
graph-db:
# will build ./docker/graph-db/Dockerfile
build:
./docker/graph-db
ports:
- "7200:7200"
and replace http://localhost:7200 in the web app code with http://graph-db:7200
Do not use localhost to communicate between containers. Networking is one of the namespaces in docker, so localhost inside of a container only connects to that container, not to your external host, and not to another container. In this case, use the service name, graph-db, instead of localhost, in your app to connect to the db.
Your DB host is graph-db, and that is the name you should use in the database configuration in your app, e.g. http://graph-db:7200.
From docker network documentation (bridge networks - the default network driver in Docker):
Imagine an application with a web front-end and a database back-end.
If you call your containers web and db, the web container can connect
to the db container at db, no matter which Docker host the application
stack is running on.
I have a Java application that connects to an external database through a custom Docker network, and I want to connect a Redis container.
docker-redis github topic
I tried the following in the application config:
1. localhost:6379
2. app_redis://app_redis:6379
3. redis://app_redis:6379
None of them works in my setup.
docker network setup:
docker network create -d bridge --subnet 192.168.0.0/24 --gateway 192.168.0.1 mynet
Connect to a Database Running on Your Docker Host
PS: this might be off-topic, but how can I add the network in docker-compose instead of making it external?
docker-compose:
services:
app-kotin:
build: ./app
container_name: app_server
restart: always
working_dir: /app
command: java -jar app-server.jar
ports:
- 3001:3001
links:
- app-redis
networks:
- front
app-redis:
image: redis:5.0.9-alpine
container_name: app-redis
expose:
- 6379
networks:
front:
external:
name: mynet
With the setup above, how can I connect to the Redis container?
Both containers need to be on the same Docker network to communicate with each other. The app-kotin container is on the front network, but the app-redis container doesn't have a networks: block and so goes onto an automatically-created default network.
The simplest fix from what you have is to also put the app-redis container on to the same network:
app-redis:
image: redis:5.0.9-alpine
networks:
- front
The Compose service name app-redis will then be usable as a host name, from other containers on the same network.
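For example, of the three URLs tried in the question, the third form works once the host name matches the service name (note the hyphen rather than the underscore used earlier):
redis://app-redis:6379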
You can simplify this setup considerably. You don't generally need to manually specify IP configuration for the Docker-private networks. Compose can create the network for you, and in fact it will create a network named default for you. (Networking in Compose discusses this further.) links: and expose: aren't used in modern Docker networking; Compose can provide a default container_name: for you; and you don't need to repeat the working_dir: or command: from the image. Removing all of that would leave you with:
version: '3'
services:
app-kotin:
build: ./app
restart: always
ports:
- '3001:3001'
app-redis:
image: redis:5.0.9-alpine
The server container will be able to use the other container's Compose service name app-redis as a host name, even with this minimal configuration.
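As for the PS in the question: if you really do need the fixed subnet, Compose can declare the network itself instead of treating it as external. A sketch (IPAM options beyond subnet, such as gateway, vary by Compose file version):
networks:
  default:
    ipam:
      config:
        - subnet: 192.168.0.0/24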
I have a Docker container that runs a simple web application. That container is linked to two other containers by Docker Compose with the following docker-compose.yml file:
version: '2'
services:
mongo_service:
image: mongo
command: mongod
ports:
- '27017:27017'
tomcat_service:
image: 'bitnami/tomcat:latest'
ports:
- '8080:8080'
web:
# gain access to linked containers
links:
- mongo_service
- tomcat_service
# explicitly declare service dependencies
depends_on:
- mongo_service
- tomcat_service
# set environment variables
environment:
PYTHONUNBUFFERED: 'true'
# use the image from the Dockerfile in the cwd
build: .
ports:
- '8000:8000'
Once the web container starts, I want to write some content to /bitnami/tomcat/data/ on the tomcat_service container. I tried just writing to that disk location from within the web container but am getting an exception:
No such file or directory: '/bitnami/tomcat/data/'
Does anyone know what I can do to be able to write to the tomcat_service container from the web container? I'd be very grateful for any advice others can offer on this question!
You have to use Docker volumes if you want one service to write data that another service can read. If web writes to someFolderName, the same files will exist in tomcat_service.
version: '2'
services:
tomcat_service:
image: 'bitnami/tomcat:latest'
volumes:
- my_shared_data:/bitnami/tomcat/data/
web:
volumes:
- my_shared_data:/someFolderName
volumes:
my_shared_data:
Data in volumes persists and will still be available the next time you re-create the Docker containers. You should always use Docker volumes when writing data inside Docker containers.
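To verify the shared volume, something like this should show the same file from both sides once the services are up:
docker-compose exec web sh -c 'echo hello > /someFolderName/test.txt'
docker-compose exec tomcat_service ls /bitnami/tomcat/data/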