Below is my docker-compose file:
version: "3.5"
services:
  docker-mysql:
    image: mysql
    container_name: docker-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=password
      - MYSQL_DATABASE=user_management
    networks:
      - technet
networks:
  technet:
    name: technet
Now I want to start the MySQL container in the technet network with the name docker-mysql. Then I will manually change the network to technet2 in docker-compose.yml (along with the username and password) but keep the same container and image name.
Currently, the old container in technet is stopped and recreated inside the technet2 network. Instead, I want two MySQL containers with the same container name docker-mysql, each in a different network, i.e. technet and technet2.
I have a Java application running in a Docker container and rabbitmq in another container.
How can I connect the containers to use rabbitmq in my Java application?
You have to set up a network and attach the running containers to it.
Then set the connection URL of your app to the name the rabbitmq container has on that Docker network.
The easiest way is to create a docker-compose file, because it creates the network and attaches the containers automatically.
Create a network and connect the running containers to it.
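A rough sketch of those commands (the network name mynet and the container names here are placeholders for your own):
docker network create mynet
docker network connect mynet your-java-app
docker network connect mynet rabbitmq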
Or use a docker-compose file. Example of docker-compose.yml:
version: '3.7'
services:
  yourapp:
    image: image_from_dockerhub_or_local
    # or use "build: ./myapp_folder_below_this_where_is_the_Dockerfile"
    # to build the image from a Dockerfile instead
    hostname: myapp
    ports:
      - 8080:8080
  rabbitmq:
    image: rabbitmq:3.8.3-management-alpine
    hostname: rabbitmq
    environment:
      RABBITMQ_DEFAULT_USER: user
      RABBITMQ_DEFAULT_PASS: pass
    ports:
      - 5672:5672
      - 15672:15672
You can run it with the docker-compose up command.
Then, in your connection URL, use host rabbitmq and port 5672.
Note that you don't have to create a port mapping if you don't want to reach rabbitmq from your host machine.
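For the Java side, a minimal sketch using the RabbitMQ Java client (assuming the amqp-client library is on the classpath; the class name is illustrative and the credentials mirror the compose file above):
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class RabbitConnect {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("rabbitmq"); // the Compose service/host name, not localhost
        factory.setPort(5672);
        factory.setUsername("user"); // RABBITMQ_DEFAULT_USER from the compose file
        factory.setPassword("pass"); // RABBITMQ_DEFAULT_PASS from the compose file
        try (Connection connection = factory.newConnection()) {
            System.out.println("Connected to RabbitMQ: " + connection.isOpen());
        }
    }
}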
I have a Java application that connects to an external database through a custom Docker network, and I want to connect a Redis container as well.
docker-redis github topic
I tried the following in the application config:
1. localhost:6379
2. app_redis://app_redis:6379
3. redis://app_redis:6379
Nothing works on my setup.
docker network setup:
docker network create -d bridge --subnet 192.168.0.0/24 --gateway 192.168.0.1 mynet
Connect to a Database Running on Your Docker Host
PS: this might be off-topic, but how can I define the network in docker-compose instead of using an external one?
docker-compose:
services:
  app-kotin:
    build: ./app
    container_name: app_server
    restart: always
    working_dir: /app
    command: java -jar app-server.jar
    ports:
      - 3001:3001
    links:
      - app-redis
    networks:
      - front
  app-redis:
    image: redis:5.0.9-alpine
    container_name: app-redis
    expose:
      - 6379
networks:
  front:
    external:
      name: mynet
With the setup above, how can I connect to the Redis container?
Both containers need to be on the same Docker network to communicate with each other. The app-kotin container is on the front network, but the app-redis container doesn't have a networks: block and so goes onto an automatically-created default network.
The simplest fix, given what you have, is to also put the app-redis container onto the same network:
  app-redis:
    image: redis:5.0.9-alpine
    networks:
      - front
The Compose service name app-redis will then be usable as a host name, from other containers on the same network.
You can simplify this setup considerably. You don't generally need to manually specify IP configuration for the Docker-private networks. Compose can create the network for you, and in fact it will create a network named default for you. (Networking in Compose discusses this further.) links: and expose: aren't used in modern Docker networking; Compose can provide a default container_name: for you; and you don't need to repeat the working_dir: or command: from the image. Removing all of that would leave you with:
version: '3'
services:
  app-kotin:
    build: ./app
    restart: always
    ports:
      - '3001:3001'
  app-redis:
    image: redis:5.0.9-alpine
The server container will be able to use the other container's Compose service name app-redis as a host name, even with this minimal configuration.
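On the application side, a minimal sketch assuming the Jedis client is on the classpath (the class name is illustrative; the host name is simply the Compose service name app-redis):
import redis.clients.jedis.Jedis;

public class RedisCheck {
    public static void main(String[] args) {
        // "app-redis" is the Compose service name; 6379 is Redis's default port
        try (Jedis jedis = new Jedis("app-redis", 6379)) {
            System.out.println("Redis says: " + jedis.ping()); // expect "PONG"
        }
    }
}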
Unable to connect to containers running on separate docker hosts
I've got 2 Docker Tomcat containers running on 2 different Ubuntu VMs. System-A has a webservice running and System-B has a db. I haven't been able to figure out how to connect the application running on System-A to the db running on System-B. When I run the database on System-A, the application (which is also running on System-A) can connect to the database. I'm using docker-compose to set up the network (which works fine when both containers are running on the same VM). I've exec'd into the application container on System-A and looked at its /etc/hosts file, and I think what's missing is the IP address of System-B.
services:
  db:
    image: mydb
    hostname: mydbName
    ports:
      - "8012:8012"
    networks:
      data:
        aliases:
          - mydbName
  api:
    image: myApi
    hostname: myApiName
    ports:
      - "8810:8810"
    networks:
      data:
networks:
  data:
You would configure this exactly the same way as if Docker weren't involved: configure the Tomcat instance with the DNS name or IP address of the other server. You need to make sure the service is published outside of Docker space using a ports: directive.
On server-a.example.com you could run this docker-compose.yml file:
version: '3'
services:
  api:
    image: myApi
    ports:
      - "8810:8810"
    environment:
      DATABASE_URL: "http://server-b.example.com:8012"
And on server-b.example.com:
version: '3'
services:
  db:
    image: mydb
    ports:
      - "8012:8012"
In principle it would be possible to set up an overlay network connecting the two hosts, but this is a significantly more complicated setup.
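Roughly, that overlay approach would mean joining the hosts into a Swarm and creating an attachable overlay network; a sketch, with the network name as a placeholder:
# on server-a.example.com
docker swarm init
# on server-b.example.com, using the join token printed by the command above
docker swarm join --token <worker-token> server-a.example.com:2377
# then, on the manager node, create a network both compose files can reference
docker network create -d overlay --attachable my-overlay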
(You definitely don't want to use docker exec to modify /etc/hosts in a container: you'll have to repeat this step every time you delete and recreate the container, and manually maintaining hosts files is tedious and error-prone, particularly if you're moving containers between hosts. Consul could work as a service-discovery system that provides a DNS service.)
I have a Redis - Elasticsearch - Logstash - Kibana stack in docker which I am orchestrating using docker compose.
Redis will receive the logs from a remote location and forward them to Logstash, followed by the usual Elasticsearch and Kibana.
In the docker-compose.yml, I am confused about the order of "links":
Elasticsearch links to no one, while logstash links to both redis and elasticsearch.
elasticsearch:
redis:
logstash:
  links:
    - elasticsearch
    - redis
kibana:
  links:
    - elasticsearch
Is this order correct? What is the rationale behind choosing the "link" direction?
Why don't we say that elasticsearch is linked to logstash?
Instead of using the legacy container linking method, you can use Docker user-defined networks. Basically, you define a network for your services and then indicate in the docker-compose file that you want the containers to run on that network. If your containers all run on the same network, they can access each other via their container name (DNS records are added automatically).
1) Create a user-defined network
docker network create pocnet
2) Update the docker-compose file
You want to add your containers to the network you just created. Your docker-compose file would look something like this:
version: '2'
services:
  elasticsearch:
    image: elasticsearch
    container_name: elasticsearch
    ports:
      - "{your:ports}"
    networks:
      - pocnet
  redis:
    image: redis
    container_name: redis
    ports:
      - "{your:ports}"
    networks:
      - pocnet
  logstash:
    image: logstash
    container_name: logstash
    ports:
      - "{your:ports}"
    networks:
      - pocnet
  kibana:
    image: kibana
    container_name: kibana
    ports:
      - "5601:5601"
    networks:
      - pocnet
networks:
  pocnet:
    external: true
3) Start the services
docker-compose up
Note: you might want to open a new shell window to run step 4.
4) Test
Go into the Kibana container and see if you can ping the elasticsearch container.
your_machine$ docker exec -it kibana bash
kibana@123456:/# ping elasticsearch
First of all, links in Docker are unidirectional.
More info on links:
There are legacy links, and links in user-defined networks.
The legacy link provided 4 major functionalities to the default bridge network:
name resolution
name alias for the linked container using --link=CONTAINER-NAME:ALIAS
secured container connectivity (in isolation via --icc=false)
environment variable injection
Comparing the above 4 functionalities with non-default user-defined networks, without any additional config, docker network provides:
automatic name resolution using DNS
automatic secured isolated environment for the containers in a network
ability to dynamically attach and detach to multiple networks
support for the --link option to provide a name alias for the linked container
In your case, automatic DNS on a user-defined network will help you. First create a new network:
docker network create ELK -d bridge
With this approach you don't need to link containers on the same user-defined network. You just have to put your ELK stack + Redis containers in the ELK network and remove the link directives from the compose file.
Your order looks fine to me. If you have any problem regarding the order, or with waiting for services to come up in dependent containers, you can use something like the following:
version: "2"
services:
web:
build: .
ports:
- "80:8000"
depends_on:
- "db"
entrypoint: ./wait-for-it.sh db:5432
db:
image: postgres
This will make the web container wait until it can connect to the db.
You can get the wait-for-it script from its GitHub repository.
I have a dockerized application with a few services running using docker-compose. I'd like to connect this application with ElasticSearch/Logstash/Kibana (ELK) using another docker-compose application, docker-elk. Both of them are running in the same docker machine in development. In production, that will probably not be the case.
How can I configure my application's docker-compose.yml to link to the ELK stack?
Update Jun 2016
The answer below is outdated as of Docker 1.10. See this other similar answer for the new solution.
https://stackoverflow.com/a/34476794/1556338
Old answer
Create a network:
$ docker network create --driver bridge my-net
Reference that network as an environment variable (${NETWORK}) in the docker-compose.yml files. E.g.:
pg:
  image: postgres:9.4.4
  container_name: pg
  net: ${NETWORK}
  ports:
    - "5432"
myapp:
  image: quay.io/myco/myapp
  container_name: myapp
  environment:
    DATABASE_URL: "http://pg:5432"
  net: ${NETWORK}
  ports:
    - "3000:3000"
Note that pg in http://pg:5432 will resolve to the IP address of the pg service (container). No need to hardcode IP addresses; an entry for pg is automatically added to /etc/hosts in the myapp container.
Call docker-compose, passing it the network you created:
$ NETWORK=my-net docker-compose -f docker-compose.yml -f other-compose.yml up -d
I've created a bridge network above which only works within one node (host). Good for dev. If you need to get two nodes to talk to each other, you need to create an overlay network. Same principle though. You pass the network name to the docker-compose up command.
You could also create a network with docker outside your docker-compose:
docker network create my-shared-network
And in your docker-compose.yml:
version: '2'
services:
  pg:
    image: postgres:9.4.4
    container_name: pg
    expose:
      - "5432"
networks:
  default:
    external:
      name: my-shared-network
And in your second docker-compose.yml:
version: '2'
services:
  myapp:
    image: quay.io/myco/myapp
    container_name: myapp
    environment:
      DATABASE_URL: "http://pg:5432"
    expose:
      - "3000"
networks:
  default:
    external:
      name: my-shared-network
Both compositions will then be able to see each other without opening ports on the host; you just need to expose the ports, and the containers will reach each other through the "my-shared-network" network.
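As a quick sanity check, assuming ping is available in the myapp image, you can resolve the pg service name from inside the myapp container:
docker exec -it myapp ping pg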
If you set a predictable project name for the first composition, you can use external_links to reference external containers by name from a different compose file, as sketched below.
In the next docker-compose release (1.6) you will be able to use user-defined networks, and have both compositions join the same network.
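A rough sketch of that external_links approach, assuming the first composition was started with a fixed project name (e.g. docker-compose -p myproj up -d), so its containers get predictable names such as myproj_pg_1; the second compose file can then reference them:
myapp:
  image: quay.io/myco/myapp
  external_links:
    - myproj_pg_1:pg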
Take a look at multi-host docker networking:
Networking is a feature of Docker Engine that allows you to create virtual networks and attach containers to them so you can create the network topology that is right for your application. The networked containers can even span multiple hosts, so you don't have to worry about what host your container lands on. They seamlessly communicate with each other wherever they are, thus enabling true distributed applications.
I didn't find any complete answer, so I decided to explain it in a complete and simple way.
To connect two docker-compose projects you need a network, and you have to put both of them on that network.
You can create the network with docker network create name-of-network,
or you can simply put the network declaration in the networks option of the docker-compose file; when you run docker-compose up, the network will be created automatically.
Put the lines below in both docker-compose files:
networks:
  net-for-alpine:
    name: test-db-net
Note: net-for-alpine is the internal name of the network; it is used inside each docker-compose file and can be different in each one.
test-db-net is the external name of the network and must be the same in both docker-compose files.
Assume we have docker-compose.db.yml and docker-compose.alpine.yml.
docker-compose.alpine.yml would be:
version: '3.8'
services:
  alpine:
    image: alpine:3.14
    container_name: alpine
    networks:
      - net-for-alpine
    # these two options keep the alpine container running
    stdin_open: true   # docker run -i
    tty: true          # docker run -t
networks:
  net-for-alpine:
    name: test-db-net
docker-compose.db.yml would be:
version: '3.8'
services:
  db:
    image: postgres:13.4-alpine
    container_name: psql
    networks:
      - net-for-db
networks:
  net-for-db:
    name: test-db-net
To test the network, go inside the alpine container:
docker exec -it alpine sh
Then you can check the network with the following commands:
# if it returns 0 or you see nothing as a result, the network is established
nc -z psql 5432    # psql is the db container name, 5432 the default Postgres port
or
ping psql