I have the below service in docker compose:
services:
  mysql:
    image: mysql:8.0
    networks:
      my_network:
        ipv4_address: 172.22.0.11
    ports:
      - 3307:3306

networks:
  my_network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.22.0.0/27
When I bring this up, I am able to access the db using localhost:3307. When I remove the ports section, I can access the db using 172.22.0.11:3306.
I thought that by having both configurations, the DB should be accessible using 172.22.0.11:3307. Is this not the case? Also, is it possible to achieve?
The db container exposes port 3306 on whatever network it is attached to. In this case, my_network.
The ports: directive tells Docker to publish a port on the host and how to map it to the container's port. In this case, port 3307 is exposed on your localhost and maps to port 3306 on my_network.
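As a rough illustration (assuming the compose file above and a mysql client; credentials are placeholders), the two working paths look like this:

    # From the host: the published port 3307 forwards to the container's 3306
    # (user/password are placeholders)
    mysql -h 127.0.0.1 -P 3307 -u root -p

    # From another container on my_network: use the container address (or the
    # service name "mysql") with the container port 3306
    mysql -h 172.22.0.11 -P 3306 -u root -p

There is no 172.22.0.11:3307: the 3307 side of the mapping only exists on the host's own interfaces, not inside the Docker network.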
I have started learning about Docker and containers, and have been given an assignment to "Host a docker container on the external network (the one that the host is connected to) with its own IP address that is valid for said network".
As far as my understanding goes, Docker containers allow you to forward ports onto the host without exposing the container to the outside network. Is there any way to expose the whole container, with all its ports, and give it its own IP on the external network?
Here is an excerpt from a test docker-compose.yaml file:
env20:
  build: ./env20
  image: env20
  container_name: env20
  hostname: env20
  ports:
    - "22:22/tcp"
    - "80:80/tcp"
  depends_on:
    - mysql
  networks:
    gnet:
      ipv4_address: 10.10.11.30
  restart: unless-stopped

#############################################################
# Network setup
#############################################################
networks:
  gnet:
    name: gnet
    driver: macvlan
    ipam:
      driver: default
      config:
        - subnet: 10.10.11.0/24
          gateway: 10.10.11.1
Any help would be appreciated!
I'm fairly new to Docker and docker-compose.
I have a simple scenario, based on three applications (app1, app2, app3), that I want to connect to my host's network. The purpose is to also have an internet connection inside the containers.
Here is my docker-compose file:
version: '3.9'

services:
  app1container:
    image: app1img
    build: ./app1
    networks:
      network_comp:
        ipv4_address: 192.168.1.1
    extra_hosts:
      anotherpc: 192.168.1.44
    ports:
      - 80:80
      - 8080:8080

  app2container:
    depends_on:
      - "app1container"
    image: app2img
    build: ./app2
    networks:
      network_comp:
        ipv4_address: 192.168.1.2
    ports:
      - 3100:3100

  app3container:
    depends_on:
      - "app1container"
    image: app3img
    build: ./app3
    networks:
      network_comp:
        ipv4_address: 192.168.1.3
    ports:
      - 9080:9080

networks:
  network_comp:
    driver: ""
    ipam:
      driver: ""
      config:
        - subnet: 192.168.0.0/24
          gateway: 192.168.1.254
I already read the docker-compose documentation, which says that there is no bridge driver for Windows. Is there any solution to this issue?
You shouldn't usually need to do any special setup for this to work. When your Compose service has ports:, that makes a port available on the host's IP address. The essential rules for this are:
The service inside the container must listen on the special 0.0.0.0 "all interfaces" address (not 127.0.0.1 "this container only"), on some (usually fixed) port.
The container must be started with Compose ports: (or docker run -p). You choose the first port number; the second port number must match the port inside the container.
The service can be reached via the host's IP address on the first port number (or, if you're using the older Docker Toolbox setup, on the docker-machine ip address).
http://host.example.com:12345   (from other hosts)
        |
        v
ports: ['12345:8080']           (in the `docker-compose.yml`)
        |
        v
./my_server -bind 0.0.0.0:8080  (the main container command)
You can remove all of the manual networks: configuration in this file. In particular, it's problematic if you try to specify the Docker network to have the same IP address range as the host network, since these are two separate networks. Compose automatically provides a network named default that should work for most practical applications.
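As a quick sanity check (service names are the ones from your file; this assumes app1 really listens on port 80 and that curl is installed in the images):

    # From the host, through the published port
    curl http://localhost:80/

    # Between containers: Compose's default network provides DNS, so services
    # reach each other by service name and container port, with no fixed IPs
    # (assumes curl exists in the app2 image)
    docker-compose exec app2container curl http://app1container:80/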
I have a simple docker network set up with two containers. The frontend has its port mapped 80:80 so that I can access it via localhost in my browser. My backend does not have its port (3000) mapped. I want my frontend to be accessible on my network but do not want my backend to be.
The problem I have run into is that I cannot make requests from my frontend to my backend from the browser. Both containers are on the proper network and can communicate internally.
ex: using curl inside the frontend container provides the expected output from the backend:
curl 10.5.0.4:3000/test
But calling axios.get('http://10.5.0.4:3000/test') from the webpage results in a 404 error.
Is the problem that the axios request is coming from the browser and therefore cannot connect to the Docker network? If so, is there a solution that does not involve making the backend accessible outside the Docker network?
Here is my docker-compose file for clarity:
services:
  backend:
    build: ./testbe
    image: testbe:dev
    networks:
      test_net:
        ipv4_address: 10.5.0.4

  frontend:
    build: ./testfe
    image: testfe:dev
    networks:
      test_net:
        ipv4_address: 10.5.0.5
    ports:
      - "80:80"

networks:
  test_net:
    driver: bridge
    driver_opts:
      com.docker.network.enable_ipv6: "false"
    ipam:
      driver: default
      config:
        - subnet: 10.5.0.0/5
          gateway: 10.5.0.1
You shouldn't use the container's private IP to call it; use the name it has on the network instead.
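A minimal sketch of that, using the service name from the compose file above (and assuming curl exists in the frontend image):

    # Inside the frontend container, the name "backend" resolves to that
    # service's address on test_net (assumes curl is in the image)
    docker-compose exec frontend curl http://backend:3000/test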
I have 3 containers: my bot, a server, and a db. After docker-compose up, the server and db are working, but the Telegram bot makes a GET request and gets this error:
Get "http://localhost:8080/user/": dial tcp 127.0.0.1:8080: connect: connection refused
docker-compose.yml
version: "3"
services:
db:
image: postgres
container_name: todo_postgres
restart: always
ports:
- "5432:5432"
environment:
# TODO: Change it to environment variables
POSTGRES_USER: user
POSTGRES_DB: somedb
POSTGRES_PASSWORD: pass
server:
depends_on:
- db
build: .
restart: always
ports:
- 8080:8080
environment:
DB_NAME: somedb
DB_USERNAME: user
DB_PASSWORD: pass
bot:
depends_on:
- server
build:
./src/telegram_bot
environment:
BOT_TOKEN: TOKEN
restart: always
links:
- server
When using Compose, try using the container's hostname. In this case your bot should try to connect to
server:8080
Compose will handle the name resolution to the IP you need.
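For example (assuming a shell and curl are available in the bot image), you can confirm the name resolves from inside the bot container:

    # "server" is the Compose service name; 8080 is the port the server
    # listens on inside its container (assumes curl exists in the bot image)
    docker-compose exec bot curl http://server:8080/user/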
What you are trying to do is access localhost within your container (service) bot.
Maybe this answer will help you solve the problem; it sounds similar to yours.
But I want to offer you another solution to your problem:
In case you don't need to access the containers from outside (from your host), one approach would be to make use of the expose functionality and a Docker network.
See docs.docker.com: network.
The expose functionality allows you to access your other containers within your network.
See docs.docker.com: expose
Expose ports without publishing them to the host machine - they’ll only be accessible to linked services. Only the internal port can be specified.
Example
What is this example doing?
A couple of steps that are not mandatory:
Set a static IP within your Docker container.
These steps are not needed and can be omitted. However, I like to do this, since it gives you better control over the network. You can still access the containers by their hostname (which is the container name or service name) as well.
The steps that are needed are the following:
This exposes port 8080, but does not publish it:
expose:
  - 8080
The network, which allows static IP configuration:
networks:
  vpcbr:
    driver: bridge
    ipam:
      config:
        - subnet: 10.5.0.0/16
A complete file could look similar to this:
version: "3.8"
services:
first-service:
image: <your-image>
networks:
vpcbr:
ipv4_address: 10.5.0.2
expose:
- 8080
second-service:
image: <your-image>
networks:
vpcbr:
ipv4_address: 10.5.0.3
depends_on:
- first-service
networks:
vpcbr:
driver: bridge
ipam:
config:
- subnet: 10.5.0.0/16
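With that file up, a rough way to see the difference between expose and ports (assuming first-service actually listens on 8080 and curl is present in the images):

    # Works: second-service reaches first-service on the exposed port over vpcbr
    # (assumes the image listens on 8080 and has curl)
    docker-compose exec second-service curl http://first-service:8080/

    # Fails: 8080 was only exposed, not published, so nothing listens on the host
    curl http://localhost:8080/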
Your bot container is up before your server & db containers are ready.
When you use depends_on, it does not actually wait for them to finish setting themselves up.
You need some mechanism to wait for the other container to finish its setup.
I remember that when I used an Nginx proxy, I used something called wait-for-it.sh.
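A sketch of how that is usually wired in (wait-for-it.sh is a small third-party script you copy into the image; the ./bot binary name here is just a placeholder for whatever your container normally runs):

    # In the bot image's entrypoint: block until server:8080 accepts TCP
    # connections (up to 30s), then start the bot (./bot is a placeholder)
    ./wait-for-it.sh server:8080 --timeout=30 -- ./bot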
Hi everyone, I have created a network of the macvlan type in Docker because I wanted my containers to be on the same LAN as the host. Now the strange thing I have noticed is that when I stop and then restart a container with the docker start command, the container starts, but the IP assigned to it is the one that was assigned before the container was shut down. Doesn't the IP change when containers are restarted? Furthermore, the container is now not reachable, because the IP it shows as its own has since been reassigned to another machine on the network. From what I have read, a container is assigned the same IP as before, but if it can't get that IP it fails to start; my container, however, is starting just fine. What am I missing here? Ubuntu 17.10, Docker 17.11.0-ce, API version 1.34 (both client and server).
You should not use static IPs in Docker unless you are working with something that allows routing from outside to the inside container, like macvlan in your case. DNS is already there for service discovery inside the container network and supports container scaling. Outside the container network, you should use ports published on the host.
That being said, you can achieve the above using docker-compose, like below:
services:
  mysql:
    container_name: backend-database
    image: mysql:latest
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=root
    ports:
      - "3306:3306"
    networks:
      mynetwork:
        ipv4_address: 10.5.0.5

  apache-tomcat:
    container_name: apache-tomcat
    build: tomcat/.
    ports:
      - "8080:8080"
      - "8009:8009"
    networks:
      mynetwork:
        ipv4_address: 10.5.0.6
    depends_on:
      - mysql

networks:
  mynetwork:
    driver: bridge
    ipam:
      config:
        - subnet: 10.5.0.0/16
          gateway: 10.5.0.1
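With that file, you would still address the other service by name rather than by its fixed address; a small sketch (assuming the tomcat image has getent available and a mysql client is installed on the host; the root password is the one set above):

    # Inside apache-tomcat, "mysql" resolves via Docker's embedded DNS
    docker-compose exec apache-tomcat getent hosts mysql

    # From the host, use the published port rather than the container IP
    mysql -h 127.0.0.1 -P 3306 -u root -proot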