Unable to request a FastAPI app running in docker-compose - docker

I have a Dockerfile which exposes an API on port 8000:
# ..
EXPOSE 8000
ENV PYTHONPATH="src/."
CMD ["gunicorn", "-b", ":8000", "-k", "uvicorn.workers.UvicornWorker", "fingerprinter.api.server:app"]
It's just a simple FastAPI server with a simple endpoint:
@app.get("/health")
def health():
    return "OK"
This is the relevant part of the docker-compose.yaml:
version: "3.7"
services:
  fprint-api:
    container_name: fprint-api-v2
    image: "fprint-api:v0.0.1"
    depends_on:
      - fprint-db
      - fprint-svc
    network_mode: "host"
    extra_hosts:
      - "host.docker.internal:host-gateway"
    expose:
      - "8000"
    build:
      context: ../.
      dockerfile: docker/Dockerfile.fprint-api
However, I am not able to reach the endpoints.

EXPOSE in a Dockerfile does not actually publish the port (see Dockerfile - EXPOSE). It is closer to documentation for the reader of the Dockerfile: it signals that the port is intended to be published.
In docker-compose.yml you map a port from the container to the host system with the ports keyword (Docker compose - ports). The expose keyword, by contrast, makes the port available only to linked services; it does not publish the port to the host machine (Docker compose - expose).
So your docker-compose.yml file should look like this (note that network_mode: "host" is commented out, since host networking bypasses port mappings):
version: "3.7"
services:
  fprint-api:
    container_name: fprint-api-v2
    image: "fprint-api:v0.0.1"
    depends_on:
      - fprint-db
      - fprint-svc
    # network_mode: "host"
    extra_hosts:
      - "host.docker.internal:host-gateway"
    ports:
      - "8000:8000"
    build:
      context: ../.
      dockerfile: docker/Dockerfile.fprint-api

Related

Could not create server TCP listening socket *:6383 bind: Cannot assign requested address in redis clustering on docker (in windows)

I'm trying to set up Redis clustering on Windows with Docker.
It works fine only via redis-cli -h 127.0.0.1 -p 6383 inside the Docker container CLI: all nodes are up and the cluster reports no problems. This is the redis.conf file for one of the nodes:
port 6383
bind 0.0.0.0
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
The problem is that with the above configuration the cluster is not reachable from my application (the same app works fine against Redis in single-node mode).
When I change bind in the redis.conf file to my computer's IP, which is 192.168.3.205, I get the error from the title: Could not create server TCP listening socket *:6383: bind: Cannot assign requested address.
I have tried the following:
opened the above port in the firewall rules
telnet suggests nothing is listening on this port (telnet 192.168.3.205 6383 and telnet 127.0.0.1 6383)
netstat shows port 6383 is not in use by anything
And this is my .yml file:
and this is my .yml file
version: "3.8"
networks:
  default:
    name: amin-cluster
services:
  redis0:
    container_name: node-0
    image: mnadeem/redis
    network_mode: "host"
    volumes:
      - C:\Windows\System32\6379\redis.conf:/usr/local/etc/redis/redis.conf
    command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
    build:
      context: .
      dockerfile: Dockerfile
    hostname: node-0
    restart: always
  redis1:
    container_name: node-1
    image: mnadeem/redis
    network_mode: "host"
    volumes:
      - C:\Windows\System32\6380\redis.conf:/usr/local/etc/redis/redis.conf
    command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
    build:
      context: .
      dockerfile: Dockerfile
    hostname: node-1
    restart: always
  redis2:
    container_name: node-2
    image: mnadeem/redis
    network_mode: "host"
    volumes:
      - C:\Windows\System32\6381\redis.conf:/usr/local/etc/redis/redis.conf
    command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
    build:
      context: .
      dockerfile: Dockerfile
    hostname: node-2
    restart: always
  redis3:
    container_name: node-3
    image: mnadeem/redis
    network_mode: "host"
    volumes:
      - C:\Windows\System32\6382\redis.conf:/usr/local/etc/redis/redis.conf
    command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
    build:
      context: .
      dockerfile: Dockerfile
    hostname: node-3
    restart: always
  redis4:
    container_name: node-4
    image: mnadeem/redis
    network_mode: "host"
    volumes:
      - C:\Windows\System32\6383\redis.conf:/usr/local/etc/redis/redis.conf
    command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
    build:
      context: .
      dockerfile: Dockerfile
    hostname: node-4
    restart: always
  redis5:
    container_name: node-5
    image: mnadeem/redis
    network_mode: "host"
    volumes:
      - C:\Windows\System32\6384\redis.conf:/usr/local/etc/redis/redis.conf
    command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
    build:
      context: .
      dockerfile: Dockerfile
    hostname: node-5
    restart: always
In your docker-compose.yml you need to publish the ports, setting one up for each service you want accessible from the host.
redis0:
  ports:
    - "6383:6383"
  ...
redis1:
  ports:
    - "12345:6383"
The syntax is "hostport:containerport". Since you have six Redis instances, and assuming you want each one reachable, every host port will need to be different. You can of course omit ports for any instance you don't need to access from the host. Note also that published ports are ignored while network_mode: "host" is set, so you will need to drop that line from each service as well.
For more details on how to publish ports, read the docker compose yml docs https://docs.docker.com/compose/compose-file/compose-file-v3/#ports
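If the short "hostport:containerport" strings become hard to read across six services, the same docs also describe a long-form syntax (Compose file format v3.2+) that names each field explicitly. A sketch for redis0 might look like this:

```yaml
services:
  redis0:
    ports:
      - target: 6383      # port inside the container
        published: 6383   # port on the host machine
        protocol: tcp
```

The long form is handy when you also need to set the protocol or publishing mode, since each value is labeled instead of positional.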

How to automatically get an available port?

I'm working with Docker containers for some projects, and to save time I clone the docker-compose file from my other projects.
The problem I have is that the ports for my mysql_database and apache_service are fixed values.
Example:
version: "3.2"
services:
  apache_service:
    build:
      context: './docker/apache/'
    links:
      - mysql_service:mysql_service
    depends_on:
      - mysql_service
    ports:
      - "8080:80" # "random_port:80"
    volumes:
      - ./:/var/www/
  mysql_service:
    build:
      context: ./
      dockerfile: ./docker/mysql/Dockerfile
    command: [
      '--character-set-server=utf8mb4',
      '--collation-server=utf8mb4_unicode_ci',
      '--default-authentication-plugin=mysql_native_password',
    ]
    restart: always
    volumes:
      - ./docker/initdb:/docker-entrypoint-initdb.d
      - ./docker/mysql/logs:/var/log/mysql
    ports:
      - "4306:3306" # "random_port:3306"
    environment:
      MYSQL_ROOT_PASSWORD: ${DB_PASSWORD}
    container_name: mysql_service
When I copy the docker-compose file and run docker-compose up, I always have to change the ports first...
How could I automatically get an available port for these services?
Use 0 as the host port: that way the operating system gives you the first available (random) port. To find out which port was actually assigned, use the docker port <container> command.
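Port 0 works because it is the standard sockets-API way of asking the kernel for any free ephemeral port; Docker simply passes the request through. A minimal Python sketch of the same mechanism:

```python
import socket

def free_port() -> int:
    """Ask the OS for an unused TCP port by binding to port 0."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        # getsockname() reports the port the kernel actually assigned
        return s.getsockname()[1]

print(free_port())
```

Applied to the compose file above, that would mean mappings like "0:80" and "0:3306"; after docker-compose up, docker port mysql_service 3306 prints whichever host port was picked.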

Cannot configure nginx reverse proxy with php support in docker compose

I have been attempting to configure an nginx reverse proxy with PHP support in Docker Compose, running an app service on port 3838. I want the app to be served through nginx-proxy on port 80. I have combed through several tutorials online but none of them has helped me resolve the problem. I also tried to follow https://github.com/dmitrym0/simple-lets-encrypt-docker-compose-sample/blob/master/docker-compose.yml but it didn't work. Here is my current docker-compose file.
docker-compose.yml
version: '3'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "82:80"
      - "444:443"
    volumes:
      - "/etc/nginx/vhost.d"
      - "/usr/share/nginx/html"
      - "/var/run/docker.sock:/tmp/docker.sock:ro"
      - "/etc/nginx/certs"
  app:
    build:
      context: .
      dockerfile: ./app/Dockerfile
    image: rocker/shiny
    container_name: docker-app
    restart: always
    ports:
      - 3838:3838
Am I missing something? Sometimes I see VIRTUAL_HOST environment variables included in docker-compose files. Is that needed? Also, do I have to manually write nginx config files and attach them to the jwilder/nginx-proxy Dockerfile? I am a newbie at Docker and I really need some help.
Please refer to the Multiple Ports section of the nginx-proxy official docs. In your case, besides setting the mandatory VIRTUAL_HOST env variable (without it a container won't be reverse-proxied by the nginx-proxy service), you also have to set VIRTUAL_PORT, because nginx-proxy defaults to the service running on port 80 while your app service is bound to port 3838.
Try this docker-compose.yml file to see if it works:
version: "3"
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  app:
    build:
      context: .
      dockerfile: ./app/Dockerfile
    image: rocker/shiny
    container_name: docker-app
    restart: always
    expose:
      - 3838
    environment:
      - VIRTUAL_HOST=app.localhost
      - VIRTUAL_PORT=3838

Docker compose service communication

So I have a docker-compose file with 3 services: backend, react frontend and mongo.
backend Dockerfile:
FROM ubuntu:latest
WORKDIR /backend-server
COPY ./static/ ./static
COPY ./config.yml ./config.yml
COPY ./builds/backend-server_linux ./backend-server
EXPOSE 8080
CMD ["./backend-server"]
frontend Dockerfile:
FROM nginx:stable
WORKDIR /usr/share/nginx/html
COPY ./build .
COPY ./.env .env
EXPOSE 80
CMD ["sh", "-c", "nginx -g \"daemon off;\""]
So nothing unusual, I guess.
docker-compose.yml:
version: "3"
services:
  mongo-db:
    image: mongo:4.2.0-bionic
    container_name: mongo-db
    volumes:
      - mongo-data:/data
    network_mode: bridge
  backend:
    image: backend-linux:latest
    container_name: backend
    depends_on:
      - mongo-db
    environment:
      - DATABASE_URL=mongodb://mongo-db:27017
      # ..etc
    network_mode: bridge
    # networks:
    #   - mynetwork
    expose:
      - "8080"
    ports:
      - 8080:8080
    links:
      - mongo-db:mongo-db
    restart: always
  frontend:
    image: frontend-linux:latest
    container_name: frontend
    depends_on:
      - backend
    network_mode: bridge
    links:
      - backend:backend
    ports:
      - 80:80
    restart: always
volumes:
  mongo-data:
    driver: local
This is working. My problem is that adding ports: - 8080:8080 to the backend section makes that server available to the host machine. In theory the setup should work without those lines, as I read in the Docker docs and this question, but if I remove them the API calls stop working (though curl calls written in the docker-compose file under the frontend service still work).
Your React frontend makes its requests from the browser.
Hence the endpoint, in this case your API, needs to be accessible to the browser itself, not merely to the container that hands out the static js, css and html files.
P.S. If you specifically don't want to expose the API, you can have the web server proxy requests for /api/ on to the API container; that happens at the network level and means you only need to publish the one server.
I do this by serving my Angular apps out of Nginx and then proxying traffic for /app1/api/* to one container, /app2/api/* to another, and so on.
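A minimal sketch of that proxy approach, assuming the compose service is named backend and listens on port 8080 (adjust both names to your setup):

```nginx
server {
    listen 80;

    # static frontend files baked into the image
    location / {
        root /usr/share/nginx/html;
        try_files $uri /index.html;
    }

    # forward /api/ to the backend container over the compose network;
    # "backend" is resolved by Docker's embedded DNS
    location /api/ {
        proxy_pass http://backend:8080/;
        proxy_set_header Host $host;
    }
}
```

With this in place only the nginx service needs a ports mapping; the backend keeps expose only.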

How can I communicate between containers?

I'm using a container with an API gateway on port 80, and I need the API gateway to communicate with other containers (all of them using a Dockerfile and docker-compose). How can these other containers communicate internally with the API gateway without exposing their ports to localhost?
My docker-compose:
version: '3'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - ./:/usr/src/app
      - /usr/src/app/node_modules
    ports:
      - "3000:3000"
Solution:
Changed docker-compose file to:
version: '3.5'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - ./:/usr/src/app
      - /usr/src/app/node_modules
    expose:
      - "3000"
    image: api-name-service
    container_name: api-name-service
    networks:
      - api-network
networks:
  api-network:
    name: api-network-service
When services are on the same network, they can reach each other by service name, e.g. "http://api-name-service:3000".
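As an illustration (the gateway service and its image name here are hypothetical), any other container attached to the same network can then reach the API purely by service name, with no ports mapping anywhere:

```yaml
services:
  gateway:
    image: my-gateway:latest      # hypothetical gateway image
    networks:
      - api-network
    environment:
      # resolved by Docker's internal DNS on the shared network
      - API_URL=http://api-name-service:3000
networks:
  api-network:
    name: api-network-service
```

Only services that browsers or external clients must reach need a ports entry; everything else can stay internal to the network.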
You want to use expose instead of ports:
https://docs.docker.com/compose/compose-file/compose-file-v2/#expose
For example:
services:
  app1:
    expose:
      - "3000"
  app2:
    ...
Assuming some API on port 3000 in app1, app2 would then be able to access http://app1:3000/api.
Use docker network. Here is a very good tutorial on the docker website on how to use networking b/w containers: https://docs.docker.com/network/network-tutorial-standalone/
