This question already has answers here:
How to get Docker containers to talk to each other while running on my local host?
(4 answers)
Closed 2 years ago.
I want my Docker containers to work on the same IP. Is that possible? I want them to share an IP address so that they can link to each other through it.
Have a look at https://docs.docker.com/compose/networking/ to learn how containers are made accessible with docker-compose.
The gist of it is that you access containers by the name you've given them in the compose file. So in this example
version: "3.9"
services:
web:
build: .
ports:
- "8000:8000"
db:
image: postgres
ports:
- "8001:5432"
you can address the hosts as web and db.
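For example, an application inside the web container would reach Postgres with a connection string like the following (user, password, and database name are placeholders):

postgres://user:password@db:5432/mydb

Note that it targets the container port 5432, not the published host port 8001; published ports only matter when connecting from outside the compose network.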
This question already has answers here:
From inside of a Docker container, how do I connect to the localhost of the machine?
(40 answers)
Closed 10 months ago.
I have a host that runs a native MySQL installation (not a container).
From inside a Docker container, I now want to connect a Java Spring Boot application to that port (3306 by default).
But it does not work:
docker-compose.yml:
version: '3.7'
services:
  customer-app:
    ports:
      - "3306:3306"
    environment:
      SPRING_DATASOURCE_URL: jdbc:mysql://localhost:3306/db
Result from docker-compose up:
Cannot start service customer-app: driver failed programming external connectivity on endpoint:
Error starting userland proxy: listen tcp4 0.0.0.0:3306: bind: address already in use
This is probably not a question specific to a Java application, but more general:
How can I access a port on the host system from inside a docker container?
I added the following to docker-compose.yml:
extra_hosts:
  - "host.docker.internal:host-gateway"
environment:
  SPRING_DATASOURCE_URL: jdbc:mysql://host.docker.internal:3306/db
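Putting it together, a minimal sketch of the corrected service (assuming the rest of the file is unchanged). The original ports: "3306:3306" mapping is dropped, since that is what tried to bind host port 3306, which the native MySQL already occupies, causing the "address already in use" error:

version: '3.7'
services:
  customer-app:
    extra_hosts:
      # maps host.docker.internal to the host's gateway IP (requires Docker 20.10+ on Linux)
      - "host.docker.internal:host-gateway"
    environment:
      SPRING_DATASOURCE_URL: jdbc:mysql://host.docker.internal:3306/db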
This question already has answers here:
From inside of a Docker container, how do I connect to the localhost of the machine?
(40 answers)
Communication between multiple docker-compose projects
(20 answers)
Closed 1 year ago.
I have two apps (microservices) in separate docker-compose files.
app1.yml
version: "3.4"
services:
app1:
image: flask-app1
environment:
- APP2_URL=http://localhost:8000
ports:
- 5000:8000
volumes:
- "../:/app/"
depends_on:
- db_backend1
restart: on-failure
db_backend1:
...
app2.yml
version: "3.4"
services:
app2:
image: flask-app2
ports:
- 8000:8000
volumes:
- "..:/app"
restart: on-failure
Of course they have other dependencies (database server, etc.).
I need to run both of them locally. Each runs fine on its own, but app1 needs to fetch data from app2 via an HTTP GET request, so I set the app2 URL (http://localhost:8000) as an environment variable (just for dev purposes). However, the requests always fail with an exception saying the connection was closed.
So, it would be great if anyone knows how to sort this out.
The container is a "device", so it has its own "localhost". When you set the URL as is, it is resolved inside the container, which is not what you want.
The solution is to create a network between the composes so you can refer to the specific container as "containerName:port".
You can refer to :
Communication between multiple docker-compose projects
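A minimal sketch of that approach (the network name shared-net is illustrative): create the network once on the host, then declare it as the default network in both compose files.

# run once on the host:
# docker network create shared-net

# add at the top level of BOTH app1.yml and app2.yml:
networks:
  default:
    external:
      name: shared-net

With that in place, app1 can use APP2_URL=http://app2:8000 (the service name and container port) instead of localhost.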
This question already has answers here:
What is the difference between ports and expose in docker-compose?
(5 answers)
Closed 3 years ago.
Below is example code:
services:
  db:
    image: "mysql:8"
    restart: always
    environment:
      MYSQL_DATABASE: 'test'
      MYSQL_USER: 'root'
      MYSQL_PASSWORD: 'test'
      MYSQL_ROOT_PASSWORD: 'test'
    ports:
      - "3309:3306"
    expose:
      - "3309"
By definition, in a docker-compose file, does expose: refer to the host port or the container port?
Does ports: follow the [host_port]:[container_port] convention or [container_port]:[host_port]?
And what exactly is the example code above doing with ports?
EXPOSE is simply used for documentation purposes, not to actually publish any ports. Think of it as metadata, allowing other developers or admins to have some sort of documentation on the image. Note that expose always refers to container ports, so the expose: "3309" in the example above documents a port the container doesn't actually listen on; MySQL listens on 3306.
When you publish ports, you map a port on your host machine to a port inside the container.
For example, if you run a redis container on your host machine and publish 6379, your localhost:6379 will be mapped to the container's port of 6379.
The convention goes: host-port:container-port
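A quick illustration of that convention with the redis example (assuming Docker is installed and redis-cli is available on the host):

# publish container port 6379 on host port 6380 (host-port:container-port)
docker run -d --name redis-demo -p 6380:6379 redis

# the server is now reachable from the host on the *host* port
redis-cli -h localhost -p 6380 ping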
This question already has answers here:
Will a docker container auto sync time with its host machine?
(7 answers)
Closed 5 years ago.
I've followed the installation docs at http://docs.drone.io/installation/.
Below is my docker-compose.yml file
version: '2'
services:
  drone-server:
    image: drone/drone:0.8
    ports:
      - 80:8000
      - 9000
    volumes:
      - /var/lib/drone:/var/lib/drone/
    restart: always
    environment:
      - DRONE_OPEN=true
      - DRONE_HOST= localhost
      - DRONE_GITLAB=true
      - DRONE_GITLAB_CLIENT=dfsdfsdf
      - DRONE_GITLAB_SECRET=dsfdsf
      - DRONE_GITLAB_URL=https://tecgit01.com
      - DRONE_SECRET=${DRONE_SECRET}
  drone-agent:
    image: drone/agent:0.8
    restart: always
    depends_on:
      - drone-server
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - DRONE_SERVER=drone-server:9000
      - DRONE_SECRET=${DRONE_SECRET}
I'm running this on OS X (10.13.1) with Docker version 17.09.0-ce, build afdb6d4.
The local time in the drone-agent container is very different from the host time. This causes AWS API calls to fail when building my app, with the error described here: https://forums.aws.amazon.com/thread.jspa?threadID=103764#. I logged the current time inside the app to verify the time difference.
Is there a config to sync the host time with the docker agent?
As you've pointed out, this isn't a Drone.io issue; rather, the clock of the underlying VM that runs Docker has drifted out of sync with the host.
This can be fixed by following the steps outlined in the question you linked to.
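One commonly cited workaround for Docker for Mac clock drift (a sketch; restarting Docker for Mac also resyncs the clock) is to force the VM to resync its system clock from its hardware clock via a privileged container:

# resync the Docker VM's system clock from its hardware clock
docker run --rm --privileged alpine hwclock -s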
This question already has answers here:
Communication between multiple docker-compose projects
(20 answers)
Closed 4 months ago.
I have a dockerized application with a few services running using docker-compose. I'd like to connect this application with ElasticSearch/Logstash/Kibana (ELK) running from another docker-compose application, docker-elk. Both of them run on the same Docker host in development. In production, that will probably not be the case.
How can I configure my application's docker-compose.yml to link to the ELK stack?
Update Jun 2016
The answer below is outdated starting with docker 1.10. See this other similar answer for the new solution.
https://stackoverflow.com/a/34476794/1556338
Old answer
Create a network:
$ docker network create --driver bridge my-net
Reference that network as an environment variable (${NETWORK}) in the docker-compose.yml files, e.g.:
pg:
  image: postgres:9.4.4
  container_name: pg
  net: ${NETWORK}
  ports:
    - "5432"
myapp:
  image: quay.io/myco/myapp
  container_name: myapp
  environment:
    DATABASE_URL: "http://pg:5432"
  net: ${NETWORK}
  ports:
    - "3000:3000"
Note that pg in http://pg:5432 resolves to the IP address of the pg service (container). No need to hardcode IP addresses; an entry for pg is automatically added to /etc/hosts in the myapp container.
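You can verify that entry from inside the running container (assuming the container names above):

$ docker exec myapp cat /etc/hosts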
Call docker-compose, passing it the network you created (note that the -f flags must come before the subcommand):
$ NETWORK=my-net docker-compose -f docker-compose.yml -f other-compose.yml up -d
The bridge network created above only works within one node (host), which is good for dev. If you need two nodes to talk to each other, you need to create an overlay network instead. Same principle though: you pass the network name to the docker-compose up command.
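For reference, creating an attachable overlay network looks like this (a sketch; it requires Swarm mode to be initialized first, and the network name is illustrative):

$ docker swarm init
$ docker network create --driver overlay --attachable my-overlay-net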
You could also create a network with docker outside your docker-compose:
docker network create my-shared-network
And in your docker-compose.yml:
version: '2'
services:
  pg:
    image: postgres:9.4.4
    container_name: pg
    expose:
      - "5432"
networks:
  default:
    external:
      name: my-shared-network
And in your second docker-compose.yml:
version: '2'
services:
  myapp:
    image: quay.io/myco/myapp
    container_name: myapp
    environment:
      DATABASE_URL: "http://pg:5432"
    expose:
      - "3000"
networks:
  default:
    external:
      name: my-shared-network
And both instances will be able to see each other without opening ports on the host; you only need to expose the ports, and the containers will see each other through the "my-shared-network" network.
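A sketch of the startup sequence for those two files (the file names here are assumed):

$ docker network create my-shared-network
$ docker-compose -f docker-compose.pg.yml up -d
$ docker-compose -f docker-compose.myapp.yml up -d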
If you set a predictable project name for the first composition, you can use external_links to reference external containers by name from a different compose file.
In the next docker-compose release (1.6), you will be able to use user-defined networks and have both compositions join the same network.
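For illustration, an external_links entry might look like this (assuming the first composition's project name is myproject, so its pg container is myproject_pg_1 under the default naming scheme):

myapp:
  image: quay.io/myco/myapp
  external_links:
    # alias the external container as "pg" inside this service
    - myproject_pg_1:pg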
Take a look at multi-host docker networking:
Networking is a feature of Docker Engine that allows you to create virtual networks and attach containers to them so you can create the network topology that is right for your application. The networked containers can even span multiple hosts, so you don't have to worry about what host your container lands on. They seamlessly communicate with each other wherever they are – thus enabling true distributed applications.
I didn't find any complete answer, so I decided to explain it in a complete and simple way.
To connect two docker-compose projects, you need a network that both of them join.
You can create the network with docker network create name-of-network,
or you can simply put a network declaration in the networks option of the docker-compose file; when you run docker-compose up, the network is created automatically.
Put the lines below in both docker-compose files:
networks:
  net-for-alpine:
    name: test-db-net
Note: net-for-alpine is the internal name of the network; it is used inside the docker-compose file and can differ between the two files.
test-db-net is the external name of the network and must be the same in both docker-compose files.
Assume we have docker-compose.db.yml and docker-compose.alpine.yml.
docker-compose.alpine.yml would be:
version: '3.8'
services:
  alpine:
    image: alpine:3.14
    container_name: alpine
    networks:
      - net-for-alpine
    # these two options keep the alpine container running
    stdin_open: true # docker run -i
    tty: true        # docker run -t
networks:
  net-for-alpine:
    name: test-db-net
docker-compose.db.yml would be:
version: '3.8'
services:
  db:
    image: postgres:13.4-alpine
    container_name: psql
    networks:
      - net-for-db
networks:
  net-for-db:
    name: test-db-net
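Then bring up both stacks (assuming the two file names above):

docker-compose -f docker-compose.db.yml up -d
docker-compose -f docker-compose.alpine.yml up -d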
To test the network, go inside the alpine container:
docker exec -it alpine sh
Then you can check the network with the following commands:
# if the exit status is 0, the network is established
nc -z psql 5432   # psql is the container name, 5432 the postgres port
or
ping psql