I'm struggling to configure my docker-compose file to achieve the structure below. The web container needs to be accessible from virtual PCs and physical devices (local and external), but the Keycloak container needs to be accessible only by the web container. How can I achieve this?
Desired Network Structure
The web container starts a Flask app exposed on port 5000.
My docker-compose file is currently:
version: '2'
services:
  web:
    build: .
    ports:
      - '5000:5000'
    volumes:
      - .:/app
    depends_on:
      - keycloak
  keycloak:
    container_name: keycloak
    image: jboss/keycloak:13.0.1
    ports:
      - '8080:8080'
    environment:
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: admin
If a container doesn't have ports:, it (mostly*) isn't accessible from outside of Docker. If your goal is to have the container only be accessible from other containers, you can just delete ports:.
In comments you ask about the container being reachable from other containers. So long as both containers are on the same Docker network (or the same Compose-provided default network) they can communicate using the other container's Compose service name and the port the process inside the container is listening on. ports: aren't required, and they're ignored if they're present.
So in your setup, it should be enough to remove the ports: from the keycloak container.
version: '2.4'
services:
  web:
    build: .
    ports:
      - '5000:5000'
    depends_on:
      - keycloak
    # can call keycloak:8080
  keycloak:
    image: jboss/keycloak:13.0.1
    environment: { ... }
    # no ports:, container_name: is also unnecessary
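If the Flask app needs to know where Keycloak lives, one option is to pass the service-name URL in as configuration. A minimal sketch, assuming a hypothetical KEYCLOAK_URL variable that your application code reads itself:

services:
  web:
    build: .
    ports:
      - '5000:5000'
    environment:
      # hypothetical variable; the Flask app must read and use it
      KEYCLOAK_URL: http://keycloak:8080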
(*) On a native-Linux host, the container's Docker-internal IP address will be reachable from the same host, but not other hosts, if you have some way of finding it (including port-scanning 172.16.0.0/20). If someone can run docker commands then they can also easily attach other containers to the same network and gain access to the container, but if they can run docker commands then they can also pretty straightforwardly root the entire host.
How can we run docker commands inside a container with docker-compose?
Put simply, I want to get the IP of some other container on the network.
I am running three containers: va-server, db, and api-server. All the containers are on the same docker network.
I am providing my docker-compose file below:
version: "2.3"
services:
va-server:
container_name: va_server
image: nitinroxx/facesense:amd64_2022.11.28 #facesense:alpha
runtime: nvidia
restart: always
mem_limit: 4G
networks:
- perimeter-network
db:
container_name: mongodb
image: mongo:latest
ports:
- "27017:27017"
restart: always
volumes:
- ./facesense_db:/data/db
command: [--auth]
networks:
- perimeter-network
api-server:
container_name: api_server
image: nitinroxx/facesense:api_amd64_2022.11.28
ports:
- "80:80"
- "465:465"
restart: always
networks:
- perimeter-network
networks:
perimeter-network:
driver: bridge
ipam:
config:
- gateway: 10.16.239.1
subnet: 10.16.239.0/24
I have installed docker inside the container, which is giving me the permission error below:
docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', PermissionError(13, 'Permission denied'))
...inside [a] container [...] I want to get IP of some other network container....
Docker provides an internal DNS service that can resolve container names to their Docker-internal IP addresses. From one of the containers you show, you could look up a host name like db to get the container's IP address; but in practice this is a totally normal DNS name, and all but the lowest-level networking interfaces can use those names directly.
This does require that all of the containers involved be on the same Docker network. Normally Compose sets this up automatically for you; in the file you show I might delete the networks: blocks and container_name: overrides in the name of simplicity. Also see Networking in Compose in the Docker documentation.
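Deleting those blocks would leave something like this sketch (keeping the rest of each service's configuration as you already have it):

version: "2.3"
services:
  va-server:
    image: nitinroxx/facesense:amd64_2022.11.28
    runtime: nvidia
    restart: always
    mem_limit: 4G
  db:
    image: mongo:latest
    ports:
      - "27017:27017"
    restart: always
    volumes:
      - ./facesense_db:/data/db
    command: [--auth]
  api-server:
    image: nitinroxx/facesense:api_amd64_2022.11.28
    ports:
      - "80:80"
      - "465:465"
    restart: always

All three services land on the Compose-provided default network and can resolve each other as va-server, db, and api-server.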
In short:
You can probably use the Compose service names va-server, db, and api-server as host names without specifically knowing their IP addresses.
This probably means you never need to know the container IP addresses at all (they're usually unusable from outside Docker).
If you do need an IP address from inside a container, a DNS lookup can find it (see the sketch after this list).
You can't usually run docker commands from inside containers, and you can't do it safely without making it possible for the container to take over the whole host. There are usually better patterns that don't tie you to the Docker stack specifically.
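As a concrete illustration of that last-resort DNS lookup, you could attach a hypothetical one-off debug service to the same Compose file and resolve the db service name from inside it:

services:
  debug:
    image: busybox
    # one-off container: asks Docker's internal DNS for the IP of
    # the Compose service named "db"
    command: nslookup db

Running docker-compose run debug would print the resolved address; the service exists only for debugging and can be deleted afterwards.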
I'm making a sample ETL app for learning purposes. I want to have two containers: a MySQL container, and a Python container running with Redis to serve the data.
I want to open ports on these containers, but I don't want to open them to the internet for obvious security reasons.
If I open ports on these containers, will they remain only open to the Host machine, or does the Host machine also have to open these ports?
If you don't have any Compose ports: or docker run -p options, the containers will be able to communicate with each other but will not be reachable from the host or off-host. A sample Compose setup:
version: '3.8'
services:
  app:
    build: .
    ports: ['8080:8080'] # the application itself is visible
    environment:
      PGHOST: postgres # using normal Docker inter-container communication
      REDIS_HOST: redis
  postgres:
    image: postgres:13
    # no ports:
  redis:
    image: redis:6
    # no ports:
You can also set ports: with an optional bind address. If that address is 127.0.0.1, then the published port will be reachable from non-container processes on the host system. Other hosts will not be able to connect to it, and containers will not be able to use the host gateway address or host.docker.internal to connect to it.
services:
  postgres:
    # from non-container processes on the host,
    # PGHOST=localhost PGPORT=54321 reaches this container
    #
    # from other containers in this Compose file,
    # PGHOST=postgres PGPORT=5432 still works
    ports: ['127.0.0.1:54321:5432']
I have two docker containers. One container is a database and the other is a web application.
The web application calls the database through this link: http://localhost:7200. However, the web application docker container cannot reach the database container.
I tried this docker-compose.yml, but it does not work:
version: '3'
services:
  web:
    # will build ./docker/web/Dockerfile
    build:
      context: .
      dockerfile: ./docker/web/Dockerfile
    links:
      - graph-db
    depends_on:
      - graph-db
    ports:
      - "8080:8080"
    environment:
      - WAIT_HOSTS=graph-db:7200
    networks:
      - backend
  graph-db:
    # will build ./docker/graph-db/Dockerfile
    build: ./docker/graph-db
    hostname: graph-db
    ports:
      - "7200:7200"
networks:
  backend:
    driver: "bridge"
So I have two containers: a web application at http://localhost:8080/reasoner, which calls a database at http://localhost:7200 that resides in a different container.
However, the database container is not reachable by the web container.
SOLUTION
version: '3'
services:
  web:
    # will build ./docker/web/Dockerfile
    build:
      context: .
      dockerfile: ./docker/web/Dockerfile
    depends_on:
      - graph-db
    ports:
      - "8080:8080"
    environment:
      - WAIT_HOSTS=graph-db:7200
  graph-db:
    # will build ./docker/graph-db/Dockerfile
    build: ./docker/graph-db
    ports:
      - "7200:7200"
and replace http://localhost:7200 in the web app code with http://graph-db:7200.
Do not use localhost to communicate between containers. Networking is one of the namespaces in docker, so localhost inside of a container only connects to that container, not to your external host, and not to another container. In this case, use the service name, graph-db, instead of localhost, in your app to connect to the db.
Your db host is graph-db, and that is the name you should use in the database configuration in your app, e.g. http://graph-db:7200.
From docker network documentation (bridge networks - the default network driver in Docker):
Imagine an application with a web front-end and a database back-end. If you call your containers web and db, the web container can connect to the db container at db, no matter which Docker host the application stack is running on.
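If the database URL is configurable, one option is to inject it from the Compose file instead of hard-coding it in the app. A sketch, assuming a hypothetical GRAPH_DB_URL variable that the web application reads:

services:
  web:
    build:
      context: .
      dockerfile: ./docker/web/Dockerfile
    environment:
      # hypothetical variable; the app reads this instead of
      # hard-coding http://localhost:7200
      - GRAPH_DB_URL=http://graph-db:7200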
I have a Java application that connects to an external database through a custom docker network, and I want to connect a Redis container.
docker-redis github topic
I tried the following in the application config:
1. localhost:6379
2. app_redis://app_redis:6379
3. redis://app_redis:6379
Nothing works on my setup.
docker network setup:
docker network create -d bridge --subnet 192.168.0.0/24 --gateway 192.168.0.1 mynet
Connect to a Database Running on Your Docker Host
PS: this might be off-topic, but how can I add the network in docker-compose instead of making it external?
docker-compose:
services:
  app-kotin:
    build: ./app
    container_name: app_server
    restart: always
    working_dir: /app
    command: java -jar app-server.jar
    ports:
      - 3001:3001
    links:
      - app-redis
    networks:
      - front
  app-redis:
    image: redis:5.0.9-alpine
    container_name: app-redis
    expose:
      - 6379
networks:
  front:
    external:
      name: mynet
With the setup above, how can I connect to the Redis container?
Both containers need to be on the same Docker network to communicate with each other. The app-kotin container is on the front network, but the app-redis container doesn't have a networks: block and so goes onto an automatically-created default network.
The simplest fix from what you have is to also put the app-redis container on to the same network:
app-redis:
  image: redis:5.0.9-alpine
  networks:
    - front
The Compose service name app-redis will then be usable as a host name, from other containers on the same network.
You can simplify this setup considerably. You don't generally need to manually specify IP configuration for the Docker-private networks. Compose can create the network for you, and in fact it will create a network named default for you. (Networking in Compose discusses this further.) links: and expose: aren't used in modern Docker networking; Compose can provide a default container_name: for you; and you don't need to repeat the working_dir: or command: from the image. Removing all of that would leave you with:
version: '3'
services:
  app-kotin:
    build: ./app
    restart: always
    ports:
      - '3001:3001'
  app-redis:
    image: redis:5.0.9-alpine
The server container will be able to use the other container's Compose service name app-redis as a host name, even with this minimal configuration.
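Of the three connection strings you tried, the redis://app-redis:6379 form is the right shape once both containers share a network, since app-redis is the host name. If the Java application reads its Redis location from the environment, a sketch (the REDIS_URL variable name is hypothetical; the application code must read it itself):

services:
  app-kotin:
    build: ./app
    environment:
      # hypothetical variable name; the application must read and use it
      REDIS_URL: redis://app-redis:6379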
I am hosting 3 services using docker-compose.
version: '3.3'
services:
  service-a:
    container_name: service-a
    network_mode: default
    ports:
      - 8001:8001
      - 8080:8080
  service-b:
    container_name: service-b
    network_mode: default
    ports:
      - 8180:8080
    links:
      - service-a:srv_a
  service-api:
    container_name: service-api
    environment:
      - SERVER_URL=http://localhost:8180/myserver
      - 8001:8001
    links:
      - service-b: srv_b
However, the service-api, which is a Spring Boot application, can't access service-b despite the link. I can do that when using the browser.
What can I do to investigate the reasons for the lack of connectivity? Should the link be somehow used in the SERVER_URL variable?
Each Docker container has its own IP address. From the service-api container's perspective, localhost resolves to its own IP address.
Docker-compose provides your containers with the ability to resolve other containers' IP addresses from the docker-compose service names.
Try:
service-api:
  environment:
    - SERVER_URL=http://service-b:8080/myserver
Note that you need to connect to the container-internal port (8080), not the matching port published on the docker host (8180).
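To investigate connectivity questions like this one, one option is a hypothetical throwaway service on the same Compose network that tries to reach service-b directly on its internal port:

services:
  debug:
    image: busybox
    # one-off check: fetches the URL over the Compose network, using
    # the service name and the container-internal port
    command: wget -qO- http://service-b:8080/myserver

If docker-compose run debug prints a response, the network path is fine and the problem is in the application configuration.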